public inbox for pve-devel@lists.proxmox.com
From: Adam Kalisz via pve-devel <pve-devel@lists.proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Cc: Adam Kalisz <adam.kalisz@notnullmakers.com>, pbs-devel@lists.proxmox.com
Subject: Re: [pve-devel] applied: [PATCH proxmox-backup-qemu 1/1] restore: make chunk loading more parallel
Date: Wed, 16 Jul 2025 13:51:21 +0200	[thread overview]
Message-ID: <mailman.11.1752666722.354.pve-devel@lists.proxmox.com> (raw)
In-Reply-To: <175260709111.2310514.11288961991494643201.b4-ty@proxmox.com>


Hi Thomas,

would you please consider applying it also to the bookworm branch?

Thank you
Adam

On Tue, 2025-07-15 at 21:18 +0200, Thomas Lamprecht wrote:
> On Mon, 14 Jul 2025 10:34:38 +0200, Dominik Csapak wrote:
> > by using async futures to load chunks and stream::buffer_unordered
> > to buffer up to 16 of them, depending on write/load speed. Use
> > tokio's task spawn to make sure they continue to run in the
> > background, since buffer_unordered starts them but does not poll
> > them to completion unless we're awaiting.
> > 
> > With this, we don't need to increase the number of threads in the
> > runtime to trigger parallel reads and network traffic to us. This
> > way it's only limited by CPU if decoding and/or decrypting is the
> > bottleneck.
> > 
> > [...]
> 
> Applied, thanks!
> 
> [1/1] restore: make chunk loading more parallel
>       commit: 429256c05f2526632e16f6ef261669b9d0e0ee6b
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Thread overview: 7+ messages
2025-07-14  8:34 [pve-devel] [PATCH docs/proxmox-backup-qemu 0/2] increase concurrency for pbs-restore Dominik Csapak
2025-07-14  8:34 ` [pve-devel] [PATCH proxmox-backup-qemu 1/1] restore: make chunk loading more parallel Dominik Csapak
2025-07-14 12:17   ` Adam Kalisz via pve-devel
2025-07-15 19:18   ` [pve-devel] applied: " Thomas Lamprecht
2025-07-16 11:51     ` Adam Kalisz via pve-devel [this message]
     [not found]     ` <0fb686395742d507041329058c1b59134701bde2.camel@notnullmakers.com>
2025-07-17 15:57       ` Thomas Lamprecht
2025-07-14  8:34 ` [pve-devel] [PATCH docs 1/1] qmrestore: document pbs restore environment variables Dominik Csapak
