public inbox for pve-user@lists.proxmox.com
* [PVE-User] proxmox-restore - performance issues
@ 2021-09-17  9:29 Gregor Burck
  2021-09-30 13:07 ` Gregor Burck
  0 siblings, 1 reply; 8+ messages in thread
From: Gregor Burck @ 2021-09-17  9:29 UTC (permalink / raw)
  To: pve-user

Hi,

I've set up PVE and PBS on the same machine:

HP DL380 Gen9

E5-2640 v3 @ 2.60GHz (2 x 8 core)
256 GB RAM
2x 1TB SAMSUNG NVME PM983
12x 8 TB HP SAS HDDs

From the HDDs and the NVMe drives I created a ZFS RAID 10.
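
For reference, a pool with this layout -- six striped HDD mirrors plus a mirrored NVMe special device, as the zpool status later in this thread shows -- can be created roughly like this (only a sketch, device names are examples):

zpool create ZFSPOOL \
  mirror sdc sdd mirror sde sdf mirror sdg sdh \
  mirror sdi sdj mirror sdk sdl mirror sdm sdn \
  special mirror nvme0n1 nvme1n1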

I only get restore rates of about 50 MB/s for a single restore job.
If I start multiple jobs in parallel, each job stays at roughly this
rate, but with iotop I see that the combined rate is much higher (up to
around 200 MB/s).

When I watch CPU utilisation with htop, it seems that a single job runs
on only one core, even when there are multiple tasks.

So I'm searching for the bottleneck; it really doesn't seem to be the HDDs.

Any ideas so far?

Thanks for any hint,

Bye

Gregor




* Re: [PVE-User] proxmox-restore - performance issues
  2021-09-17  9:29 [PVE-User] proxmox-restore - performance issues Gregor Burck
@ 2021-09-30 13:07 ` Gregor Burck
  2021-09-30 13:24   ` Dominik Csapak
  0 siblings, 1 reply; 8+ messages in thread
From: Gregor Burck @ 2021-09-30 13:07 UTC (permalink / raw)
  To: Proxmox VE user list

Hi,

I ran another test on the same machine but with a different processor.

I used an Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz, which has a higher
clock frequency.

The restore rate for a single job didn't change.

Any idea what it could be?

Bye

Gregor





* Re: [PVE-User] proxmox-restore - performance issues
  2021-09-30 13:07 ` Gregor Burck
@ 2021-09-30 13:24   ` Dominik Csapak
  0 siblings, 0 replies; 8+ messages in thread
From: Dominik Csapak @ 2021-09-30 13:24 UTC (permalink / raw)
  To: Proxmox VE user list, Gregor Burck

On 9/30/21 15:07, Gregor Burck wrote:
> Hi,
> 
> I ran another test on the same machine but with a different processor.
> 
> I used an Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz, which has a higher
> clock frequency.
> 
> The restore rate for a single job didn't change.
> 
> Any idea what it could be?
> 
> Bye
> 
> Gregor
> 
> 

hi,

can you tell us a bit more about the setup and test?
is the target storage able to handle more than 50MB/s?
how do you measure the 50MB/s?

with kind regards
Dominik





* Re: [PVE-User] proxmox-restore - performance issues
  2021-10-01  9:00 ` Gregor Burck
@ 2021-10-01  9:29   ` Dominik Csapak
  0 siblings, 0 replies; 8+ messages in thread
From: Dominik Csapak @ 2021-10-01  9:29 UTC (permalink / raw)
  To: Proxmox VE user list, Gregor Burck


On 10/1/21 11:00, Gregor Burck wrote:
> Some more information:
> 
> I ran an fio job; the settings are not my own, someone from the forum
> gave them to me for testing ZFS:

is that the source as well as the target storage?
if not please benchmark both

> 
> 
> root@ph-pbs:~# fio --name=typical-vm --size=8G --rw=readwrite 
> --rwmixread=69 --direct=1 --bs=4K --numjobs=4 --ioengine=libaio 
> --iodepth=12 --group_reporting --runtime=20m --time_based^C
> root@ph-pbs:~# cd /ZFSPOOL/
> BACKUPSTORE001/ VMSTORE001/
> root@ph-pbs:~# cd /ZFSPOOL/VMSTORE001/
> root@ph-pbs:/ZFSPOOL/VMSTORE001# fio --name=typical-vm --size=8G 
> --rw=readwrite --rwmixread=69 --direct=1 --bs=4K --numjobs=4 
> --ioengine=libaio --iodepth=12 --group_reporting --runtime=20m --time_based
> typical-vm: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
> 4096B-4096B, ioengine=libaio, iodepth=12
> ...
> fio-3.25
> Starting 4 processes
> typical-vm: Laying out IO file (1 file / 8192MiB)
> typical-vm: Laying out IO file (1 file / 8192MiB)
> typical-vm: Laying out IO file (1 file / 8192MiB)
> typical-vm: Laying out IO file (1 file / 8192MiB)
> Jobs: 4 (f=0): [f(4)][100.0%][r=1518MiB/s,w=682MiB/s][r=389k,w=175k 
> IOPS][eta 00m:00s]
> typical-vm: (groupid=0, jobs=4): err= 0: pid=3804786: Fri Oct  1 
> 10:56:30 2021
>    read: IOPS=356k, BW=1392MiB/s (1460MB/s)(1631GiB/1200001msec)

this looks too high for the storage array, so i guess something is
off with the benchmark (maybe caching, or a missing filename parameter)
and the size is too small (i'd use something that cannot fit
into the cache)

in any case, i'd do read and write benchmarks separately,
as well as setting iodepth and numjobs to 1, to get a baseline
single-thread performance

as i wrote in my previous message, check out examples at:
https://pve.proxmox.com/wiki/Benchmarking_Storage
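
for example, roughly like this (just a sketch based on your command, not tested here; filename, size and runtime are only examples -- the point is numjobs=1, iodepth=1, separate read and write runs, and a test file well above the cache size):

fio --name=baseline-read --filename=/ZFSPOOL/VMSTORE001/fio-test \
    --size=100G --rw=read --bs=4K --direct=1 --numjobs=1 \
    --ioengine=libaio --iodepth=1 --runtime=5m --time_based
fio --name=baseline-write --filename=/ZFSPOOL/VMSTORE001/fio-test \
    --size=100G --rw=write --bs=4K --direct=1 --numjobs=1 \
    --ioengine=libaio --iodepth=1 --runtime=5m --time_based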


>      slat (nsec): min=1854, max=176589k, avg=5156.08, stdev=39010.68
>      clat (usec): min=4, max=191637, avg=85.89, stdev=133.21
>       lat (usec): min=32, max=191640, avg=91.13, stdev=139.42
>      clat percentiles (usec):
>       |  1.00th=[   42],  5.00th=[   46], 10.00th=[   49], 20.00th=[   52],
>       | 30.00th=[   56], 40.00th=[   59], 50.00th=[   65], 60.00th=[   85],
>       | 70.00th=[   97], 80.00th=[  111], 90.00th=[  141], 95.00th=[  176],
>       | 99.00th=[  265], 99.50th=[  318], 99.90th=[  570], 99.95th=[  693],
>       | 99.99th=[ 1090]
>     bw (  MiB/s): min=  250, max= 2159, per=100.00%, avg=1392.77, 
> stdev=63.78, samples=9596
>     iops        : min=64218, max=552858, avg=356548.75, stdev=16328.20, 
> samples=9596
>    write: IOPS=160k, BW=626MiB/s (656MB/s)(733GiB/1200001msec); 0 zone 
> resets
>      slat (usec): min=3, max=191425, avg= 9.71, stdev=34.41
>      clat (usec): min=2, max=191641, avg=86.02, stdev=137.32
>       lat (usec): min=35, max=191650, avg=95.85, stdev=144.10
>      clat percentiles (usec):
>       |  1.00th=[   42],  5.00th=[   46], 10.00th=[   49], 20.00th=[   52],
>       | 30.00th=[   56], 40.00th=[   59], 50.00th=[   65], 60.00th=[   85],
>       | 70.00th=[   98], 80.00th=[  111], 90.00th=[  141], 95.00th=[  178],
>       | 99.00th=[  265], 99.50th=[  318], 99.90th=[  578], 99.95th=[  701],
>       | 99.99th=[ 1106]
>     bw (  KiB/s): min=114464, max=995856, per=100.00%, avg=640817.51, 
> stdev=29342.79, samples=9596
>     iops        : min=28616, max=248964, avg=160204.26, stdev=7335.70, 
> samples=9596
>    lat (usec)   : 4=0.01%, 10=0.01%, 50=13.69%, 100=58.80%, 250=26.29%
>    lat (usec)   : 500=1.08%, 750=0.10%, 1000=0.02%
>    lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
>    lat (msec)   : 100=0.01%, 250=0.01%
>    cpu          : usr=18.17%, sys=79.17%, ctx=982498, majf=10, minf=2977
>    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, 
>  >=64=0.0%
>       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>  >=64=0.0%
>       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, 
>  >=64=0.0%
>       issued rwts: total=427672030,192161509,0,0 short=0,0,0,0 
> dropped=0,0,0,0
>       latency   : target=0, window=0, percentile=100.00%, depth=12
> 
> Run status group 0 (all jobs):
>     READ: bw=1392MiB/s (1460MB/s), 1392MiB/s-1392MiB/s 
> (1460MB/s-1460MB/s), io=1631GiB (1752GB), run=1200001-1200001msec
>    WRITE: bw=626MiB/s (656MB/s), 626MiB/s-626MiB/s (656MB/s-656MB/s), 
> io=733GiB (787GB), run=1200001-1200001msec
> 
> 
> And this is while two of the restore jobs were still running.
> 
> 






* Re: [PVE-User] proxmox-restore - performance issues
  2021-10-01  7:18 Gregor Burck
@ 2021-10-01  9:00 ` Gregor Burck
  2021-10-01  9:29   ` Dominik Csapak
  0 siblings, 1 reply; 8+ messages in thread
From: Gregor Burck @ 2021-10-01  9:00 UTC (permalink / raw)
  To: Proxmox VE user list

Some more information:

I ran an fio job; the settings are not my own, someone from the forum
gave them to me for testing ZFS:


root@ph-pbs:~# fio --name=typical-vm --size=8G --rw=readwrite  
--rwmixread=69 --direct=1 --bs=4K --numjobs=4 --ioengine=libaio  
--iodepth=12 --group_reporting --runtime=20m --time_based^C
root@ph-pbs:~# cd /ZFSPOOL/
BACKUPSTORE001/ VMSTORE001/
root@ph-pbs:~# cd /ZFSPOOL/VMSTORE001/
root@ph-pbs:/ZFSPOOL/VMSTORE001# fio --name=typical-vm --size=8G  
--rw=readwrite --rwmixread=69 --direct=1 --bs=4K --numjobs=4  
--ioengine=libaio --iodepth=12 --group_reporting --runtime=20m  
--time_based
typical-vm: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)  
4096B-4096B, ioengine=libaio, iodepth=12
...
fio-3.25
Starting 4 processes
typical-vm: Laying out IO file (1 file / 8192MiB)
typical-vm: Laying out IO file (1 file / 8192MiB)
typical-vm: Laying out IO file (1 file / 8192MiB)
typical-vm: Laying out IO file (1 file / 8192MiB)
Jobs: 4 (f=0): [f(4)][100.0%][r=1518MiB/s,w=682MiB/s][r=389k,w=175k  
IOPS][eta 00m:00s]
typical-vm: (groupid=0, jobs=4): err= 0: pid=3804786: Fri Oct  1 10:56:30 2021
   read: IOPS=356k, BW=1392MiB/s (1460MB/s)(1631GiB/1200001msec)
     slat (nsec): min=1854, max=176589k, avg=5156.08, stdev=39010.68
     clat (usec): min=4, max=191637, avg=85.89, stdev=133.21
      lat (usec): min=32, max=191640, avg=91.13, stdev=139.42
     clat percentiles (usec):
      |  1.00th=[   42],  5.00th=[   46], 10.00th=[   49], 20.00th=[   52],
      | 30.00th=[   56], 40.00th=[   59], 50.00th=[   65], 60.00th=[   85],
      | 70.00th=[   97], 80.00th=[  111], 90.00th=[  141], 95.00th=[  176],
      | 99.00th=[  265], 99.50th=[  318], 99.90th=[  570], 99.95th=[  693],
      | 99.99th=[ 1090]
    bw (  MiB/s): min=  250, max= 2159, per=100.00%, avg=1392.77,  
stdev=63.78, samples=9596
    iops        : min=64218, max=552858, avg=356548.75,  
stdev=16328.20, samples=9596
   write: IOPS=160k, BW=626MiB/s (656MB/s)(733GiB/1200001msec); 0 zone resets
     slat (usec): min=3, max=191425, avg= 9.71, stdev=34.41
     clat (usec): min=2, max=191641, avg=86.02, stdev=137.32
      lat (usec): min=35, max=191650, avg=95.85, stdev=144.10
     clat percentiles (usec):
      |  1.00th=[   42],  5.00th=[   46], 10.00th=[   49], 20.00th=[   52],
      | 30.00th=[   56], 40.00th=[   59], 50.00th=[   65], 60.00th=[   85],
      | 70.00th=[   98], 80.00th=[  111], 90.00th=[  141], 95.00th=[  178],
      | 99.00th=[  265], 99.50th=[  318], 99.90th=[  578], 99.95th=[  701],
      | 99.99th=[ 1106]
    bw (  KiB/s): min=114464, max=995856, per=100.00%, avg=640817.51,  
stdev=29342.79, samples=9596
    iops        : min=28616, max=248964, avg=160204.26, stdev=7335.70,  
samples=9596
   lat (usec)   : 4=0.01%, 10=0.01%, 50=13.69%, 100=58.80%, 250=26.29%
   lat (usec)   : 500=1.08%, 750=0.10%, 1000=0.02%
   lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
   lat (msec)   : 100=0.01%, 250=0.01%
   cpu          : usr=18.17%, sys=79.17%, ctx=982498, majf=10, minf=2977
   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,  
 >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%,  
 >=64=0.0%
      issued rwts: total=427672030,192161509,0,0 short=0,0,0,0 dropped=0,0,0,0
      latency   : target=0, window=0, percentile=100.00%, depth=12

Run status group 0 (all jobs):
    READ: bw=1392MiB/s (1460MB/s), 1392MiB/s-1392MiB/s  
(1460MB/s-1460MB/s), io=1631GiB (1752GB), run=1200001-1200001msec
   WRITE: bw=626MiB/s (656MB/s), 626MiB/s-626MiB/s (656MB/s-656MB/s),  
io=733GiB (787GB), run=1200001-1200001msec


And this is while two of the restore jobs were still running.





* Re: [PVE-User] proxmox-restore - performance issues
@ 2021-10-01  7:18 Gregor Burck
  2021-10-01  9:00 ` Gregor Burck
  0 siblings, 1 reply; 8+ messages in thread
From: Gregor Burck @ 2021-10-01  7:18 UTC (permalink / raw)
  To: Proxmox VE user list

Hi,

it's PVE and PBS on the same machine, as I wrote in my first mail ;-)

Some additional information:

I've just run a new test after a reboot, to rule out effects from too
much RAM being used as ZFS cache.

With 6 jobs the total I/O I see in iotop alternates between 120 MB/s and
240 MB/s, with peaks of 300 MB/s.
When I start htop I see work on all cores, but no core is at 100%.

and the benchmark:


root@ph-pbs:~# proxmox-backup-client benchmark --repository
root@pam@localhost:BACKUPSTORE001
Password for "root@pam": ************
Uploaded 373 chunks in 5 seconds.
Time per request: 13456 microseconds.
TLS speed: 311.70 MB/s
SHA256 speed: 418.54 MB/s
Compression speed: 572.10 MB/s
Decompress speed: 818.43 MB/s
AES256/GCM speed: 1838.87 MB/s
Verify speed: 298.19 MB/s
┌───────────────────────────────────┬────────────────────┐
│ Name                              │ Value              │
╞═══════════════════════════════════╪════════════════════╡
│ TLS (maximal backup upload speed) │ 311.70 MB/s (25%)  │
├───────────────────────────────────┼────────────────────┤
│ SHA256 checksum computation speed │ 418.54 MB/s (21%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 compression speed    │ 572.10 MB/s (76%)  │
├───────────────────────────────────┼────────────────────┤
│ ZStd level 1 decompression speed  │ 818.43 MB/s (68%)  │
├───────────────────────────────────┼────────────────────┤
│ Chunk verification speed          │ 298.19 MB/s (39%)  │
├───────────────────────────────────┼────────────────────┤
│ AES256 GCM encryption speed       │ 1838.87 MB/s (50%) │
└───────────────────────────────────┴────────────────────┘






* Re: [PVE-User] proxmox-restore - performance issues
  2021-10-01  6:52 Gregor Burck
@ 2021-10-01  7:00 ` Dominik Csapak
  0 siblings, 0 replies; 8+ messages in thread
From: Dominik Csapak @ 2021-10-01  7:00 UTC (permalink / raw)
  To: Proxmox VE user list, Gregor Burck

On 10/1/21 08:52, Gregor Burck wrote:
> Hi,
hi,

> 
> thank you for the reply. I've done a lot of different tests and setups, but
> this is the setup I want to use:
> 
> Original setup:
> 
> HP DL380 Gen9 with
> 
> E5-2640 v3 @ 2.60GHz
> 256 GB RAM
> 
> 2x SSDs for host OS
> 
> For a ZFS RAID 10:
> 
> 2x 1TB SAMSUNG NVME PM983 for special devices
> 12x 8 TB HP SAS HDDs

i guess that's the server?
what about the restore client? encryption/sha/etc. will be done by the
client

> 
> root@ph-pbs:~# zpool status
>     pool: ZFSPOOL
>    state: ONLINE
> config:
> 
>           NAME         STATE     READ WRITE CKSUM
>           ZFSPOOL      ONLINE       0     0     0
>             mirror-0   ONLINE       0     0     0
>               sdc      ONLINE       0     0     0
>               sdd      ONLINE       0     0     0
>             mirror-1   ONLINE       0     0     0
>               sde      ONLINE       0     0     0
>               sdf      ONLINE       0     0     0
>             mirror-2   ONLINE       0     0     0
>               sdg      ONLINE       0     0     0
>               sdh      ONLINE       0     0     0
>             mirror-3   ONLINE       0     0     0
>               sdi      ONLINE       0     0     0
>               sdj      ONLINE       0     0     0
>             mirror-4   ONLINE       0     0     0
>               sdk      ONLINE       0     0     0
>               sdl      ONLINE       0     0     0
>             mirror-5   ONLINE       0     0     0
>               sdm      ONLINE       0     0     0
>               sdn      ONLINE       0     0     0
>           special
>             mirror-6   ONLINE       0     0     0
>               nvme0n1  ONLINE       0     0     0
>               nvme1n1  ONLINE       0     0     0
> 
> errors: No known data errors
> 
>     pool: rpool
>    state: ONLINE
>     scan: scrub repaired 0B in 00:02:40 with 0 errors on Sun Aug  8
> 00:26:43 2021
> config:
> 
>           NAME        STATE     READ WRITE CKSUM
>           rpool       ONLINE       0     0     0
>             mirror-0  ONLINE       0     0     0
>               sda3    ONLINE       0     0     0
>               sdb3    ONLINE       0     0     0
> 
> errors: No known data errors
> 
> The VMSTORE and the BACKUPSTORE are datasets on the ZFS pool:
> 
> root@ph-pbs:~# zfs list
> NAME                     USED  AVAIL     REFER  MOUNTPOINT
> ZFSPOOL                 10.1T  32.1T       96K  /ZFSPOOL
> ZFSPOOL/BACKUPSTORE001  5.63T  32.1T     5.63T  /ZFSPOOL/BACKUPSTORE001
> ZFSPOOL/VMSTORE001      4.52T  32.1T     4.52T  /ZFSPOOL/VMSTORE001
> rpool                   27.3G  80.2G       96K  /rpool
> rpool/ROOT              27.3G  80.2G       96K  /rpool/ROOT
> rpool/ROOT/pbs-1        27.3G  80.2G     27.3G  /
> 
> The VM I tested with is our Exchange server: raw image size 500 GB,
> net ~400 GB of content.
> 
> First Test with one restore job:
> 
> new volume ID is 'VMSTORE:vm-101-disk-0'
> restore proxmox backup image: /usr/bin/pbs-restore --repository root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP vm/121/2021-07-23T19:00:03Z drive-virtio0.img.fidx /dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0 --verbose --format raw --skip-zero
> connecting to repository 'root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP'
> open block backend for target '/dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0'
> starting to restore snapshot 'vm/121/2021-07-23T19:00:03Z'
> download and verify backup index
> progress 1% (read 5368709120 bytes, zeroes = 2% (125829120 bytes), duration 86 sec)
> progress 2% (read 10737418240 bytes, zeroes = 1% (159383552 bytes), duration 181 sec)
> progress 3% (read 16106127360 bytes, zeroes = 0% (159383552 bytes), duration 270 sec)
> .
> .
> progress 98% (read 526133493760 bytes, zeroes = 0% (3628072960 bytes), duration 9492 sec)
> progress 99% (read 531502202880 bytes, zeroes = 0% (3628072960 bytes), duration 9583 sec)
> progress 100% (read 536870912000 bytes, zeroes = 0% (3628072960 bytes), duration 9676 sec)
> restore image complete (bytes=536870912000, duration=9676.97s, speed=52.91MB/s)
> rescan volumes...
> TASK OK
> 
> When I look at iotop I see about the same rate.
> 
> But when I start multiple restore jobs in parallel, I see that each single
> job is still at 40-50 MB/s of I/O, while the total I/O is a multiple of that
> rate; in iotop I see rates of up to 200-250 MB/s.
> So I guess it isn't the storage. In a test with a setup where I used
> the NVMes as source and target I could reach a single restore rate of
> about 70 MB/s.

some disks/storages do not scale with single-threaded workloads
(and AFAIR, pbs-restore must restore a disk single-threaded because
of qemu limitations?), but will scale just fine with multiple threads

a 'fio' benchmark of the source as well as the target storage
would be good to get a baseline storage performance

see for example: https://pve.proxmox.com/wiki/Benchmarking_Storage
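
e.g. something along these lines (only a sketch; filename, size and runtime are examples -- a restore reads from the backup store and writes to the VM store, so benchmark that pattern on both datasets):

fio --name=src-read --filename=/ZFSPOOL/BACKUPSTORE001/fio-test \
    --size=64G --rw=read --bs=4M --numjobs=1 --iodepth=1 \
    --ioengine=libaio --runtime=2m --time_based
fio --name=tgt-write --filename=/ZFSPOOL/VMSTORE001/fio-test \
    --size=64G --rw=write --bs=4M --numjobs=1 --iodepth=1 \
    --ioengine=libaio --runtime=2m --time_based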

> 
> Now I'm testing another CPU in this machine, because on other test machines
> with different CPUs (AMD Ryzen and others) I get a higher rate.
> Unfortunately the rate on the current machine doesn't improve with the other
> CPU

can you run a proxmox-backup-client benchmark on all machines and report
their respective restore speeds (especially on the clients; also specify
a repository to see the tls speed)
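
e.g. with the same syntax as used elsewhere in this thread (host and datastore are placeholders):

proxmox-backup-client benchmark --repository root@pam@<pbs-host>:<datastore>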

kind regards





* Re: [PVE-User] proxmox-restore - performance issues
@ 2021-10-01  6:52 Gregor Burck
  2021-10-01  7:00 ` Dominik Csapak
  0 siblings, 1 reply; 8+ messages in thread
From: Gregor Burck @ 2021-10-01  6:52 UTC (permalink / raw)
  To: Proxmox VE user list

Hi,

thank you for the reply. I've done a lot of different tests and setups, but
this is the setup I want to use:

Original setup:

HP DL380 Gen9 with

E5-2640 v3 @ 2.60GHz
256 GB RAM

2x SSDs for host OS

For a ZFS RAID 10:

2x 1TB SAMSUNG NVME PM983 for special devices
12x 8 TB HP SAS HDDs

root@ph-pbs:~# zpool status
    pool: ZFSPOOL
   state: ONLINE
config:

          NAME         STATE     READ WRITE CKSUM
          ZFSPOOL      ONLINE       0     0     0
            mirror-0   ONLINE       0     0     0
              sdc      ONLINE       0     0     0
              sdd      ONLINE       0     0     0
            mirror-1   ONLINE       0     0     0
              sde      ONLINE       0     0     0
              sdf      ONLINE       0     0     0
            mirror-2   ONLINE       0     0     0
              sdg      ONLINE       0     0     0
              sdh      ONLINE       0     0     0
            mirror-3   ONLINE       0     0     0
              sdi      ONLINE       0     0     0
              sdj      ONLINE       0     0     0
            mirror-4   ONLINE       0     0     0
              sdk      ONLINE       0     0     0
              sdl      ONLINE       0     0     0
            mirror-5   ONLINE       0     0     0
              sdm      ONLINE       0     0     0
              sdn      ONLINE       0     0     0
          special
            mirror-6   ONLINE       0     0     0
              nvme0n1  ONLINE       0     0     0
              nvme1n1  ONLINE       0     0     0

errors: No known data errors

    pool: rpool
   state: ONLINE
    scan: scrub repaired 0B in 00:02:40 with 0 errors on Sun Aug  8
00:26:43 2021
config:

          NAME        STATE     READ WRITE CKSUM
          rpool       ONLINE       0     0     0
            mirror-0  ONLINE       0     0     0
              sda3    ONLINE       0     0     0
              sdb3    ONLINE       0     0     0

errors: No known data errors

The VMSTORE and the BACKUPSTORE are datasets on the ZFS pool:

root@ph-pbs:~# zfs list
NAME                     USED  AVAIL     REFER  MOUNTPOINT
ZFSPOOL                 10.1T  32.1T       96K  /ZFSPOOL
ZFSPOOL/BACKUPSTORE001  5.63T  32.1T     5.63T  /ZFSPOOL/BACKUPSTORE001
ZFSPOOL/VMSTORE001      4.52T  32.1T     4.52T  /ZFSPOOL/VMSTORE001
rpool                   27.3G  80.2G       96K  /rpool
rpool/ROOT              27.3G  80.2G       96K  /rpool/ROOT
rpool/ROOT/pbs-1        27.3G  80.2G     27.3G  /

The VM I tested with is our Exchange server: raw image size 500 GB,
net ~400 GB of content.

First Test with one restore job:

new volume ID is 'VMSTORE:vm-101-disk-0'
restore proxmox backup image: /usr/bin/pbs-restore --repository root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP vm/121/2021-07-23T19:00:03Z drive-virtio0.img.fidx /dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0 --verbose --format raw --skip-zero
connecting to repository 'root@pam@ph-pbs.peiker-holding.de:ZFSPOOLBACKUP'
open block backend for target '/dev/zvol/ZFSPOOLVMSTORE/vm-101-disk-0'
starting to restore snapshot 'vm/121/2021-07-23T19:00:03Z'
download and verify backup index
progress 1% (read 5368709120 bytes, zeroes = 2% (125829120 bytes), duration 86 sec)
progress 2% (read 10737418240 bytes, zeroes = 1% (159383552 bytes), duration 181 sec)
progress 3% (read 16106127360 bytes, zeroes = 0% (159383552 bytes), duration 270 sec)
.
.
progress 98% (read 526133493760 bytes, zeroes = 0% (3628072960 bytes), duration 9492 sec)
progress 99% (read 531502202880 bytes, zeroes = 0% (3628072960 bytes), duration 9583 sec)
progress 100% (read 536870912000 bytes, zeroes = 0% (3628072960 bytes), duration 9676 sec)
restore image complete (bytes=536870912000, duration=9676.97s, speed=52.91MB/s)
rescan volumes...
TASK OK

When I look at iotop I see about the same rate.

But when I start multiple restore jobs in parallel, I see that each single
job is still at 40-50 MB/s of I/O, while the total I/O is a multiple of that
rate; in iotop I see rates of up to 200-250 MB/s.
So I guess it isn't the storage. In a test with a setup where I used
the NVMes as source and target I could reach a single restore rate of
about 70 MB/s.

Now I'm testing another CPU in this machine, because on other test machines
with different CPUs (AMD Ryzen and others) I get a higher rate.
Unfortunately the rate on the current machine doesn't improve with the other CPU.

Now I wonder whether there is any chance to improve the restore rate.

Bye

Gregor






end of thread

Thread overview: 8+ messages
2021-09-17  9:29 [PVE-User] proxmox-restore - performance issues Gregor Burck
2021-09-30 13:07 ` Gregor Burck
2021-09-30 13:24   ` Dominik Csapak
2021-10-01  6:52 Gregor Burck
2021-10-01  7:00 ` Dominik Csapak
2021-10-01  7:18 Gregor Burck
2021-10-01  9:00 ` Gregor Burck
2021-10-01  9:29   ` Dominik Csapak
