From: Gregor Burck <gregor@aeppelbroe.de>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] proxmox-restore - performance issues
Date: Fri, 01 Oct 2021 11:00:37 +0200
Message-ID: <20211001110037.EGroupware.Ez22YX0Ys-I09F4bK5VPukF@heim.aeppelbroe.de>
In-Reply-To: <20211001091851.EGroupware.HVxe1rjAizNRB0o9XSoY2QH@heim.aeppelbroe.de>

And more information:

I ran an fio job. The settings are not my own; someone from the forum gave them to me for testing ZFS:


root@ph-pbs:~# fio --name=typical-vm --size=8G --rw=readwrite --rwmixread=69 --direct=1 --bs=4K --numjobs=4 --ioengine=libaio --iodepth=12 --group_reporting --runtime=20m --time_based^C
root@ph-pbs:~# cd /ZFSPOOL/
BACKUPSTORE001/ VMSTORE001/
root@ph-pbs:~# cd /ZFSPOOL/VMSTORE001/
root@ph-pbs:/ZFSPOOL/VMSTORE001# fio --name=typical-vm --size=8G --rw=readwrite --rwmixread=69 --direct=1 --bs=4K --numjobs=4 --ioengine=libaio --iodepth=12 --group_reporting --runtime=20m --time_based
typical-vm: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=12
...
fio-3.25
Starting 4 processes
typical-vm: Laying out IO file (1 file / 8192MiB)
typical-vm: Laying out IO file (1 file / 8192MiB)
typical-vm: Laying out IO file (1 file / 8192MiB)
typical-vm: Laying out IO file (1 file / 8192MiB)
Jobs: 4 (f=0): [f(4)][100.0%][r=1518MiB/s,w=682MiB/s][r=389k,w=175k IOPS][eta 00m:00s]
typical-vm: (groupid=0, jobs=4): err= 0: pid=3804786: Fri Oct  1 10:56:30 2021
   read: IOPS=356k, BW=1392MiB/s (1460MB/s)(1631GiB/1200001msec)
     slat (nsec): min=1854, max=176589k, avg=5156.08, stdev=39010.68
     clat (usec): min=4, max=191637, avg=85.89, stdev=133.21
      lat (usec): min=32, max=191640, avg=91.13, stdev=139.42
     clat percentiles (usec):
      |  1.00th=[   42],  5.00th=[   46], 10.00th=[   49], 20.00th=[   52],
      | 30.00th=[   56], 40.00th=[   59], 50.00th=[   65], 60.00th=[   85],
      | 70.00th=[   97], 80.00th=[  111], 90.00th=[  141], 95.00th=[  176],
      | 99.00th=[  265], 99.50th=[  318], 99.90th=[  570], 99.95th=[  693],
      | 99.99th=[ 1090]
    bw (  MiB/s): min=  250, max= 2159, per=100.00%, avg=1392.77, stdev=63.78, samples=9596
    iops        : min=64218, max=552858, avg=356548.75, stdev=16328.20, samples=9596
   write: IOPS=160k, BW=626MiB/s (656MB/s)(733GiB/1200001msec); 0 zone resets
     slat (usec): min=3, max=191425, avg= 9.71, stdev=34.41
     clat (usec): min=2, max=191641, avg=86.02, stdev=137.32
      lat (usec): min=35, max=191650, avg=95.85, stdev=144.10
     clat percentiles (usec):
      |  1.00th=[   42],  5.00th=[   46], 10.00th=[   49], 20.00th=[   52],
      | 30.00th=[   56], 40.00th=[   59], 50.00th=[   65], 60.00th=[   85],
      | 70.00th=[   98], 80.00th=[  111], 90.00th=[  141], 95.00th=[  178],
      | 99.00th=[  265], 99.50th=[  318], 99.90th=[  578], 99.95th=[  701],
      | 99.99th=[ 1106]
    bw (  KiB/s): min=114464, max=995856, per=100.00%, avg=640817.51, stdev=29342.79, samples=9596
    iops        : min=28616, max=248964, avg=160204.26, stdev=7335.70, samples=9596
   lat (usec)   : 4=0.01%, 10=0.01%, 50=13.69%, 100=58.80%, 250=26.29%
   lat (usec)   : 500=1.08%, 750=0.10%, 1000=0.02%
   lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
   lat (msec)   : 100=0.01%, 250=0.01%
   cpu          : usr=18.17%, sys=79.17%, ctx=982498, majf=10, minf=2977
   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
      issued rwts: total=427672030,192161509,0,0 short=0,0,0,0 dropped=0,0,0,0
      latency   : target=0, window=0, percentile=100.00%, depth=12

Run status group 0 (all jobs):
    READ: bw=1392MiB/s (1460MB/s), 1392MiB/s-1392MiB/s (1460MB/s-1460MB/s), io=1631GiB (1752GB), run=1200001-1200001msec
   WRITE: bw=626MiB/s (656MB/s), 626MiB/s-626MiB/s (656MB/s-656MB/s), io=733GiB (787GB), run=1200001-1200001msec
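
For reference, the same settings can also be written as an fio job file (just a sketch restating the command-line options above; the directory= line merely mirrors running fio from the VMSTORE001 dataset):

; typical-vm.fio - same parameters as the command line above
[typical-vm]
directory=/ZFSPOOL/VMSTORE001
size=8G
rw=readwrite
rwmixread=69
direct=1
bs=4K
numjobs=4
ioengine=libaio
iodepth=12
group_reporting
runtime=20m
time_based

It can then be started with: fio typical-vm.fio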


And the fio run above was done while two of the restore jobs were still running.
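
While the restores are running, the pool-level load could also be cross-checked with zpool iostat (assuming the pool is simply named ZFSPOOL, as the mount paths above suggest):

zpool iostat -v ZFSPOOL 5

That prints per-vdev read/write operations and bandwidth every 5 seconds, which should show how much of the pool the restore jobs themselves are keeping busy.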



