From: "DERUMIER, Alexandre via pve-devel" <pve-devel@lists.proxmox.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>,
	"adam.kalisz@notnullmakers.com" <adam.kalisz@notnullmakers.com>
Cc: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
Subject: Re: [pve-devel] Discussion of major PBS restore speedup in proxmox-backup-qemu
Date: Tue, 24 Jun 2025 09:09:49 +0000	[thread overview]
Message-ID: <mailman.567.1750756230.395.pve-devel@lists.proxmox.com> (raw)
In-Reply-To: <9995c68d9c0d6e699578f5a45edb2731b5724ef1.camel@notnullmakers.com>

Hi, 

Nice work!

Would it be possible to add an option to configure CONCURRENT_REQUESTS?

(to avoid putting too much load on slow spinning storage)
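
For illustration, a minimal sketch of what such a knob could look like on
the Rust side, assuming an environment variable is an acceptable
configuration channel (the variable name PBS_RESTORE_CONCURRENCY is made
up for this example, it is not an existing option):

    // Hypothetical sketch: replace the hard-coded constant with a tunable value.
    use std::env;

    const DEFAULT_CONCURRENT_REQUESTS: usize = 12;

    fn concurrent_requests() -> usize {
        env::var("PBS_RESTORE_CONCURRENCY")
            .ok()
            .and_then(|v| v.parse::<usize>().ok())
            .filter(|&n| (1..=64).contains(&n)) // clamp to a sane range
            .unwrap_or(DEFAULT_CONCURRENT_REQUESTS)
    }

The returned value would then be passed wherever CONCURRENT_REQUESTS is
used today, so restores onto slow spinning storage could use a lower
setting.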




-------- Original Message --------
From: Adam Kalisz <adam.kalisz@notnullmakers.com>
To: pve-devel@lists.proxmox.com
Subject: Discussion of major PBS restore speedup in proxmox-backup-qemu
Date: 23/06/2025 18:10:01

Hi list,

before I go through all the hoops to submit a patch, I wanted to discuss
the current form of the patch, which can be found here:

https://github.com/NOT-NULL-Makers/proxmox-backup-qemu/commit/e91f09cfd1654010d6205d8330d9cca71358e030

The speedup process was discussed here:

https://forum.proxmox.com/threads/abysmally-slow-restore-from-backup.133602/

The current numbers are:

Using the most recent snapshot of a VM with a 10 GiB system disk and
2x 100 GiB disks filled with random data:

Original as of 1.5.1:
10 GiB system:    duration=11.78s,  speed=869.34MB/s
100 GiB random 1: duration=412.85s, speed=248.03MB/s
100 GiB random 2: duration=422.42s, speed=242.41MB/s

With the 12-way concurrent fetching:

10 GiB system:    duration=2.05s,   speed=4991.99MB/s
100 GiB random 1: duration=100.54s, speed=1018.48MB/s
100 GiB random 2: duration=100.10s, speed=1022.97MB/s

The hardware on the PVE side:
2x Intel Xeon Gold 6244, 1 TB RAM, 2x 100 Gbps Mellanox, 14x Samsung
3.8 TB NVMe drives in RAID10 using mdadm/LVM-thin.

On the PBS side:
2x Intel Xeon Gold 6334, 1 TB RAM, 2x 100 Gbps Mellanox, 8x Samsung
NVMe drives in 4 striped ZFS mirrors with recordsize 1M and lz4
compression.

Similar or slightly better speeds were achieved on a Hetzner AX52 (AMD
Ryzen 7 7700, 64 GB RAM, 2x 1 TB NVMe in a ZFS stripe with recordsize
16k) running PVE, connected to another Hetzner AX52 over a 10 Gbps
link. The PBS side again uses a plain NVMe ZFS mirror with recordsize
1M.

On bigger servers, 16-way concurrency was even better; on smaller
servers with high-frequency CPUs, 8-way concurrency performed better.
The 12-way concurrency is a compromise. We seem to hit a bottleneck
somewhere in the realm of the TLS connection and shallow buffers. The
network on the 100 Gbps servers can sustain up to about 3 GBps (almost
20 Gbps) of traffic in a single TCP connection using mbuffer, and the
storage can keep up with that speed.
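
For context, the core of the change is simply keeping several chunk
fetches in flight at once. A minimal sketch of that approach using the
futures crate's buffer_unordered is below; fetch_chunk and write_chunk
are hypothetical stand-ins for the real PBS client and target image
writer, so this illustrates the idea rather than the actual patch:

    use futures::stream::{self, StreamExt, TryStreamExt};

    const CONCURRENT_REQUESTS: usize = 12;

    // Hypothetical stand-ins for the real PBS client and image writer.
    async fn fetch_chunk(_digest: &[u8; 32]) -> std::io::Result<Vec<u8>> {
        Ok(Vec::new())
    }
    async fn write_chunk(_offset: u64, _data: &[u8]) -> std::io::Result<()> {
        Ok(())
    }

    async fn restore_image(chunks: Vec<(u64, [u8; 32])>) -> std::io::Result<()> {
        stream::iter(chunks)
            .map(|(offset, digest)| async move {
                let data = fetch_chunk(&digest).await?; // download one chunk from PBS
                write_chunk(offset, &data).await        // write it at its image offset
            })
            .buffer_unordered(CONCURRENT_REQUESTS)      // keep up to 12 fetches in flight
            .try_collect::<Vec<_>>()                    // fail fast on the first error
            .await?;
        Ok(())
    }

The buffer_unordered(N) call is what bounds the number of in-flight
requests, which is where the 8/12/16-way comparison above comes from.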

Before I submit the patch, I would also like to do the most up-to-date
build, but I have trouble updating my build environment to reflect the
latest commits. What do I have to put in my /etc/apt/sources.list to be
able to install e.g. librust-cbindgen-0.27+default-dev,
librust-http-body-util-0.1+default-dev, librust-hyper-1+default-dev and
all the rest?
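
If it helps, the Rust build dependencies for the Proxmox projects are
normally pulled in from the public devel repository; an entry along
these lines in /etc/apt/sources.list should make those librust-*
packages installable (the suite name here assumes Debian 12/bookworm,
adjust for your release):

    deb http://download.proxmox.com/debian/devel/ bookworm main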

This work was sponsored by ČMIS s.r.o. and developed in consultation
with General Manager Václav Svátek (ČMIS), Daniel Škarda (NOT NULL
Makers s.r.o.) and Linux team leader Roman Müller (ČMIS).

Best regards
Adam Kalisz


Thread overview: 8+ messages
     [not found] <9995c68d9c0d6e699578f5a45edb2731b5724ef1.camel@notnullmakers.com>
2025-06-24  9:09 ` DERUMIER, Alexandre via pve-devel [this message]
     [not found] ` <32645c96c7a1e247202d9d34e6102f08a7f08c97.camel@groupe-cyllene.com>
2025-06-24 10:11   ` Adam Kalisz via pve-devel
2025-06-23 16:10 Adam Kalisz via pve-devel
2025-06-24  7:28 ` Fabian Grünbichler
     [not found]   ` <c11ce09eb42ad0ef66e1196a1c33462647ead381.camel@notnullmakers.com>
2025-06-24 10:43     ` Fabian Grünbichler
2025-07-03  8:29       ` Adam Kalisz via pve-devel
2025-07-03  8:57         ` Dominik Csapak
2025-07-03 14:27           ` Adam Kalisz via pve-devel
