From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox Backup Server development discussion
	<pbs-devel@lists.proxmox.com>,
	Dominik Csapak <d.csapak@proxmox.com>
Subject: [pbs-devel] applied: [PATCH proxmox-backup v3 1/1] tape: introduce a tape backup job worker thread option
Date: Wed, 2 Apr 2025 16:48:50 +0200
Message-ID: <8d6c35fe-d5d2-4e0e-92c6-66e494ae7f3e@proxmox.com>
In-Reply-To: <20250221150631.3791658-3-d.csapak@proxmox.com>

On 21.02.25 at 16:06, Dominik Csapak wrote:
> Using a single thread for reading is not optimal in some cases, e.g.
> when the underlying storage can handle reads from multiple threads in
> parallel.
> 
> We use the ParallelHandler to handle the actual reads. Make the
> sync_channel buffer size depend on the number of threads so we have
> space for two chunks per thread (but keep the minimum at 3, as
> before).
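
For illustration, here is a minimal sketch of the sizing rule described
above, using plain std threads and a bounded channel rather than the
actual ParallelHandler API; Chunk, read_chunk and spawn_readers are
hypothetical stand-ins:

    use std::sync::mpsc::{sync_channel, Receiver};
    use std::thread;

    // hypothetical stand-ins for the real datastore chunk type and read path
    type Chunk = Vec<u8>;

    fn read_chunk(_index: usize) -> Chunk {
        vec![0u8; 4 * 1024 * 1024] // placeholder for an actual chunk read
    }

    fn spawn_readers(read_threads: usize, chunk_count: usize) -> Receiver<Chunk> {
        // two chunks of headroom per reader thread, but never less than 3
        let buffer_size = (read_threads * 2).max(3);
        let (tx, rx) = sync_channel(buffer_size);

        for t in 0..read_threads {
            let tx = tx.clone();
            thread::spawn(move || {
                // each thread reads a disjoint subset of chunks; send()
                // blocks once the bounded buffer is full
                for i in (t..chunk_count).step_by(read_threads) {
                    if tx.send(read_chunk(i)).is_err() {
                        break; // the consumer (tape writer) has gone away
                    }
                }
            });
        }
        rx
    }

The bounded channel is what provides the backpressure: readers stall once
the buffer holds two chunks per thread, so memory use stays constant no
matter how fast the threads can read.
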
> 
> How this impacts the backup speed largely depends on the underlying
> storage and how the backup is laid out on it.
> 
> I benchmarked the following setups:
> 
> * Setup A: relatively spread out backup on a virtualized pbs on single HDDs
> * Setup B: mostly sequential chunks on a virtualized pbs on single HDDs
> * Setup C: backup on virtualized pbs on a fast NVME
> * Setup D: backup on bare metal pbs with ZFS in a RAID10 with 6 HDDs
>   and 2 fast special devices in a mirror
> 
> (values are reported in MB/s as seen in the task log, caches were
> cleared between runs, backups were bigger than the memory available)
> 
> setup  1 thread  2 threads  4 threads  8 threads
> A      55        70         80         95
> B      110       89         100        108
> C      294       294        294        294
> D      118       180        300        300
> 
> So there are cases where multiple read threads speed up the tape backup
> (dramatically). On the other hand there are situations where reading
> from a single thread is actually faster, probably because we can read
> from the HDD sequentially.
> 
> I kept the default value of '1' so as not to change the default behavior.
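
(For the sake of illustration, a sketch of what keeping that default might
look like in a plain serde struct; this is not the actual pbs-api-types
definition, and the field names are assumptions:)

    use serde::{Deserialize, Serialize};

    fn default_worker_threads() -> usize {
        1 // keep the previous single-threaded behavior by default
    }

    // hypothetical excerpt of a tape backup job configuration
    #[derive(Serialize, Deserialize)]
    struct TapeBackupJobConfig {
        store: String,
        pool: String,
        #[serde(default = "default_worker_threads")]
        worker_threads: usize,
    }
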
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> changes from v2:
> * move the ui config to the tape backup jobs and tape backup window
> * use pool writer to store the thread count, not datastore
> * keep minimum of channel size 3
> * keep default of one thread
> 
> I left the benchmark data intact, since the actual code that does
> the multithreading is the same as before, and I could not find a
> virtual setup that replicates the performance of an HDD very well
> (limiting virtual disk IOPS does not really do the trick due to
> disk caching, etc.).
> 
> If wanted, I can of course set up a physical testbed with multiple
> HDDs again.
> 
>  src/api2/config/tape_backup_job.rs          |  8 ++++
>  src/api2/tape/backup.rs                     |  4 ++
>  src/tape/pool_writer/mod.rs                 | 14 ++++++-
>  src/tape/pool_writer/new_chunks_iterator.rs | 44 +++++++++++++--------
>  www/tape/window/TapeBackup.js               | 12 ++++++
>  www/tape/window/TapeBackupJob.js            | 14 +++++++
>  6 files changed, 79 insertions(+), 17 deletions(-)
> 
>

applied, thanks! I bumped the pbs-api-types dependency upfront and adapted
the comment as discussed in this thread. I also dropped the commented-out
printf statement; commented-out statements always read rather oddly to me,
especially in compiled code, where one cannot uncomment them ad hoc.
Debug/trace logging might be better suited if that output can be useful.
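
(As a rough sketch of that suggestion, using the log crate; the function
name and message below are purely illustrative:)

    fn log_chunk_written(chunk_index: usize) {
        // instead of a commented-out println! that needs a recompile to
        // re-enable, a debug-level log line stays in the binary and is
        // toggled at runtime via the log level
        log::debug!("wrote chunk {} to tape", chunk_index);
    }
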





Thread overview: 12+ messages
2025-02-21 15:06 [pbs-devel] [PATCH proxmox/proxmox-backup v3] tape: implement multithreaded read Dominik Csapak
2025-02-21 15:06 ` [pbs-devel] [PATCH proxmox v3 1/1] pbs api types: tape backup job: add worker threads option Dominik Csapak
2025-04-02 12:48   ` [pbs-devel] applied: " Thomas Lamprecht
2025-02-21 15:06 ` [pbs-devel] [PATCH proxmox-backup v3 1/1] tape: introduce a tape backup job worker thread option Dominik Csapak
2025-03-20 16:30   ` Thomas Lamprecht
2025-03-21  8:31     ` Dominik Csapak
2025-03-21 15:14     ` Laurențiu Leahu-Vlăducu
2025-04-02 14:48   ` Thomas Lamprecht [this message]
2025-03-19 14:12 ` [pbs-devel] [PATCH proxmox/proxmox-backup v3] tape: implement multithreaded read Dominik Csapak
2025-03-20  6:39   ` Dietmar Maurer
2025-03-20  8:03     ` Dietmar Maurer
2025-04-01 13:54 ` Bastian Mäuser
