From: Niko Fellner <n.fellner@logics.de>
To: "pbs-devel@lists.proxmox.com" <pbs-devel@lists.proxmox.com>
Subject: Re: [pbs-devel] parallelize restore.rs fn restore_image: problems in
Date: Wed, 9 Dec 2020 02:07:11 +0000
Message-ID: <AM0PR09MB2754B517100FCD14B174CA26F2CC0@AM0PR09MB2754.eurprd09.prod.outlook.com>

I have now run some benchmarks on an Azure L32s_v2 (32 vCPUs, 256 GB memory, 4x 2 TB NVMe). For the ZFS tests I reduced zfs_arc_max to 8 GiB, because I didn't want to end up benchmarking only the memory.

Overall, the speedup of the parallel pbs-restore was at least 2.1x here, often around 2.6x, and in some tests even up to 6.8x; on the big VM it was 3.6x.

During the 750 GiB restore, the busiest pbs-restore process often showed about 300-450% CPU usage (not great for 32 threads, but we only parallelize the reads, so...); the original synchronous pbs-restore only showed about 98% CPU usage in htop.
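
For context, the idea is to overlap the expensive per-chunk work (network fetch, decryption, decompression) instead of doing it one chunk at a time. Below is only a minimal, thread-based sketch of that pattern, not the actual patch (which works on the async code in restore.rs); fetch_chunk and write_chunk are hypothetical placeholders for the real chunk reader and image writer:

use std::sync::mpsc;
use std::thread;

const CHUNK_SIZE: usize = 4 * 1024 * 1024; // PBS fixed-index chunks for VM images are 4 MiB
const WORKERS: usize = 8;

// Placeholder for the expensive part: fetch + decrypt + decompress one chunk.
fn fetch_chunk(index: usize) -> Vec<u8> {
    vec![index as u8; CHUNK_SIZE]
}

// Placeholder for writing a chunk into the target image at its fixed offset.
fn write_chunk(index: usize, data: &[u8]) {
    let _offset = (index as u64) * (CHUNK_SIZE as u64);
    let _ = data;
}

fn main() {
    let chunk_count = 64;
    let (tx, rx) = mpsc::channel::<(usize, Vec<u8>)>();

    thread::scope(|s| {
        // Each worker fetches a strided subset of the chunk indices.
        for w in 0..WORKERS {
            let tx = tx.clone();
            s.spawn(move || {
                for i in (w..chunk_count).step_by(WORKERS) {
                    tx.send((i, fetch_chunk(i))).unwrap();
                }
            });
        }
        drop(tx); // close the channel once every worker has finished

        // Writes need no ordering: every chunk lands at index * CHUNK_SIZE.
        for (index, data) in rx {
            write_chunk(index, &data);
        }
    });
}

Since each chunk has a fixed offset in the image, the writes can happen in whatever order the reads complete; only the reads need to run concurrently to hide the fetch/decrypt/decompress latency.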

First, some tests with my 32 GiB VM (64% zeroes). Note that the Start/End wall-clock span covers the whole restore task, while the PVE log figures cover only the restore-image step itself, which is why the two speedup columns differ:

> Results (relative to 32 GiB):
> Host;          Method;              source;                 target;                 Start;    End;      seconds (End-Start);   speed (End-Start);   speedup;  seconds (PVE log); speed (PVE log); speedup
> Azure L32s_v2; orig. pbs-restore;   nvme disk 1 ZFS pool;   nvme disk 2 ZFS dir;    21:10:38; 21:13:14;  156;                    210.05 MiB/s;      1.0;        58.69;            558.31 MiB/s;  1.0
> Azure L32s_v2; paral.pbs-restore;   nvme disk 1 ZFS pool;   nvme disk 2 ZFS dir;    21:34:25; 21:34:49;   24;                   1365.33 MiB/s;      6.5;        22.85;           1433.99 MiB/s;  2.6

> Azure L32s_v2; orig. pbs-restore;   nvme disk 1 ZFS pool;   nvme disk 2 ZFS dir;    21:19:51; 21:22:27;  156;                    210.05 MiB/s;      1.0;        54.46;            601.66 MiB/s;  1.0
> Azure L32s_v2; paral.pbs-restore;   nvme disk 1 ZFS pool;   nvme disk 2 ZFS dir;    21:41:53; 21:42:16;   23;                   1424.70 MiB/s;      6.8;        22.63;           1447.70 MiB/s;  2.4

> Azure L32s_v2; orig. pbs-restore;   nvme disk 1 ext4 dir;   nvme disk 2 ext4 dir;   02:10:04; 02:10:49;   45;                    728.18 MiB/s;      1.0;        43.97;            745.16 MiB/s;  1.0
> Azure L32s_v2; paral.pbs-restore;   nvme disk 1 ext4 dir;   nvme disk 2 ext4 dir;   02:12:35; 02:12:56;   21;                   1560.38 MiB/s;      2.1;        21.06;           1556.12 MiB/s;  2.1

> Azure L32s_v2; orig. pbs-restore;   2x nvme RAID0 ext4 dir; 2x nvme RAID0 ext4 dir; 01:56:35; 01:57:24;   49;                     668.73 MiB/s;     1.0;        48.94;            669.52 MiB/s;  1.0
> Azure L32s_v2; paral.pbs-restore;   2x nvme RAID0 ext4 dir; 2x nvme RAID0 ext4 dir; 22:24:45; 22:25:04;   19;                    1724.63 MiB/s;     2.6;        18.67;           1754.79 MiB/s;  2.6

Now some tests with my 750 GiB VM (5% zeroes):

> Results (relative to 750 GiB):
> Host;          Method;              source;                 target;                 Start;    End;      seconds (End-Start);   speed (End-Start);   speedup;  seconds (PVE log); speed (PVE log); speedup

> Azure L32s_v2; orig. pbs-restore;   2x nvme RAID0 ext4 dir; 2x nvme RAID0 ext4 dir; 00:45:19; 01:47:38; 3739;                     205.40 MiB/s;     1.0;      3736.73;          205.53 MiB/s;   1.0
> Azure L32s_v2; paral.pbs-restore;   2x nvme RAID0 ext4 dir; 2x nvme RAID0 ext4 dir; 00:23:44; 00:41:00; 1036;                     741.31 MiB/s;     3.6;      1034.96;          742.06 MiB/s;   3.6


PVE Logs:

> orig. pbs-restore: restore image complete (bytes=34359738368, duration=58.69s, speed=558.31MB/s)
> paral.pbs-restore: restore image complete (bytes=34359738368, duration=22.85s, speed=1433.99MB/s)

> orig. pbs-restore: restore image complete (bytes=34359738368, duration=54.46s, speed=601.66MB/s)
> paral.pbs-restore: restore image complete (bytes=34359738368, duration=22.63s, speed=1447.70MB/s)

> orig. pbs-restore: restore image complete (bytes=34359738368, duration=43.97s, speed=745.16MB/s)
> paral.pbs-restore: restore image complete (bytes=34359738368, duration=21.06s, speed=1556.12MB/s)

> orig. pbs-restore: restore image complete (bytes=34359738368, duration=48.94s, speed=669.52MB/s)
> paral.pbs-restore: restore image complete (bytes=34359738368, duration=18.67s, speed=1754.79MB/s)

> orig. pbs-restore: restore image complete (bytes=805306368000, duration=3736.73s, speed=205.53MB/s)
> paral.pbs-restore: restore image complete (bytes=805306368000, duration=1034.96s, speed=742.06MB/s)
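
As a side note, the "MB/s" printed in those log lines actually lines up with MiB/s, which is why the tables above say MiB/s: 34359738368 bytes = 32768 MiB, and 32768 MiB / 58.69 s ≈ 558.3. A quick sanity check over both image sizes:

fn main() {
    // Recompute the printed speeds from bytes and duration: they match
    // MiB/s (bytes / 2^20), not MB/s (bytes / 10^6).
    for (bytes, secs) in [(34359738368u64, 58.69f64), (805306368000, 3736.73)] {
        let mib_per_s = bytes as f64 / (1024.0 * 1024.0) / secs;
        let mb_per_s = bytes as f64 / 1e6 / secs;
        println!("{:.2} MiB/s vs {:.2} MB/s", mib_per_s, mb_per_s);
        // -> 558.32 MiB/s vs 585.45 MB/s
        // -> 205.53 MiB/s vs 215.51 MB/s
    }
}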

In case anyone is interested, here is the output of "proxmox-backup-client benchmark --repository chunks2tbnvme":

Host | Datastore (tls bench only) | aes256_gcm MB/s | compress MB/s | decompress MB/s | sha256 MB/s | tls MB/s | verify MB/s
-- | -- | --: | --: | --: | --: | --: | --:
Azure L32s_v2 | chunks2tbnvme | 2552.57 | 549.95 | 856.72 | 1479.59 | 588.78 | 545.02



