From: "Max Carrara" <m.carrara@proxmox.com>
To: "Proxmox Backup Server development discussion"
	<pbs-devel@lists.proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup v2 1/3] tape: introduce a tape backup read thread tuning option
Date: Wed, 08 May 2024 19:47:38 +0200
Message-ID: <D14G9UYFTU1T.32ZQL6RJ2ZHA7@proxmox.com>
In-Reply-To: <20240507072955.364206-2-d.csapak@proxmox.com>

Finally managed to test this; I'll summarize all my comments in this
reply to make them easier to keep track of.

There are only a few minor rewordings of the commit message that I
suggest (see my comments inline); otherwise, this patch LGTM. Good work!

Testing
-------

* Set up a fresh Debian Bookworm VM and configured a VTL as per our
  docs [0]
  * The docs *mostly* work for Bookworm btw; I just had to manually
    import a bunch of device definitions (text files from the Quadstor
    docs [1]) via the UI. (Thanks for the help, Sterzy!)
* Eventually managed to add the VTL to my virtualized PBS
* Configured everything on PBS (changers, drives, media pools, etc.)
  * Had to get a little more comfortable with it first, because I hadn't
    had the time to play around with tapes before.
* Ran a backup without the patch first as a comparison
* Applied the patches & rebuilt PBS
* Ran some backups again, didn't really notice any difference in terms
  of speed
  * That might just be because of my configuration or perhaps I missed a
    step (tape is more... elaborate than I thought); once I'm back from
    my holidays, we can have a look at that if you want
  * Then again, most backups ran just fine, so consider this a passed
    smoke test

Note: *Sometimes* the backup would sporadically fail with this message:

TASK ERROR: write filemark  failed - scsi command failed: transport error

I assume this is probably also related to my configuration; after
restarting a failed backup job, it just magically worked.

To conclude, I'm not sure whether the above testing provides any
additional insights, but at the very least the patch doesn't negatively
impact anything on my side; nothing that I've noticed, anyway.

I will probably test this some more once I'm back in the office, just to
really get a good grasp of everything regarding tapes.


Review
------

As mentioned in my initial response, the code's absolutely fine and the
changes are easy to follow. See the two comments regarding the commit
message inline; the suggested changes there could IMO just be made when
applying the patch.

Nice work!

Reviewed-by: Max Carrara <m.carrara@proxmox.com>
Tested-by: Max Carrara <m.carrara@proxmox.com>

[0]: https://pbs.proxmox.com/wiki/index.php/Installing_a_Virtual_Tape_Library
[1]: https://www.quadstor.com/vtlsupport/configuring-device-definitions.html

On Tue May 7, 2024 at 9:29 AM CEST, Dominik Csapak wrote:
> using a single thread for reading is not optimal in some cases, e.g.
> when the underlying storage can handle more reads in parallel than with
> a single thread.

Would prefer to reword this as:

Using a single thread for reading is not optimal in some cases, e.g.
when the underlying storage can handle reads from multiple threads in
parallel.

>
> We use the ParallelHandler to handle the actual reads. Make the
> sync_channel buffer size depending on the number of threads so we have
> space for two chunks per thread.

Would s/depending/depend/ for the above paragraph.
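
Side note, mostly for readers skimming the diff: the buffering described
above boils down to a bounded channel with room for two results per
reader thread. Below is a minimal, self-contained sketch of that pattern
using only std (the actual patch uses our ParallelHandler instead of
hand-rolled threads, so treat this purely as an illustration):

use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    let read_threads: usize = 4;
    // Space for two "chunks" per reader thread, as in the patch.
    let (tx, rx) = sync_channel::<u32>(read_threads * 2);

    let mut handles = Vec::new();
    for t in 0..read_threads {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            for i in 0..8u32 {
                // Stand-in for datastore.load_chunk(&digest).
                let chunk = (t as u32) * 100 + i;
                // send() blocks once the buffer is full, which is what
                // keeps memory usage bounded when the readers are
                // faster than the consumer.
                tx.send(chunk).unwrap();
            }
        }));
    }
    drop(tx); // close the channel once all senders are done

    for chunk in rx {
        println!("consumed chunk {chunk}");
    }
    for h in handles {
        h.join().unwrap();
    }
}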

>
> How this impacts the backup speed largely depends on the underlying
> storage and how the backup is laid out on it.
>
> I benchmarked the following setups:
>
> * Setup A: relatively spread out backup on a virtualized pbs on single HDDs
> * Setup B: mostly sequential chunks on a virtualized pbs on single HDDs
> * Setup C: backup on virtualized pbs on a fast NVME
> * Setup D: backup on bare metal pbs with ZFS in a RAID10 with 6 HDDs
>   and 2 fast special devices in a mirror
>
> (values are reported in MB/s as seen in the task log, caches were
> cleared between runs, backups were bigger than the memory available)
>
> setup  1 thread  2 threads  4 threads  8 threads
> A      55        70         80         95
> B      110       89         100        108
> C      294       294        294        294
> D      118       180        300        300
>
> So there are cases where multiple read threads speed up the tape backup
> (dramatically). On the other hand there are situations where reading
> from a single thread is actually faster, probably because we can read
> from the HDD sequentially.
>
> I used a new default value of '4' here since that gave good performance
> on all setups (not necessarily the best) and we also have a default
> value of '4' for verification threads.
>
> We use a struct here for the datastore since we want to introduce other
> thread tuning options too.
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  pbs-api-types/src/datastore.rs              |  8 ++++
>  pbs-datastore/src/datastore.rs              | 26 ++++++++++++
>  src/tape/pool_writer/new_chunks_iterator.rs | 45 +++++++++++++--------
>  www/Utils.js                                |  5 +++
>  www/datastore/OptionView.js                 |  8 ++++
>  5 files changed, 76 insertions(+), 16 deletions(-)
>
> diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
> index 31767417a..1dae3867f 100644
> --- a/pbs-api-types/src/datastore.rs
> +++ b/pbs-api-types/src/datastore.rs
> @@ -209,6 +209,11 @@ pub enum DatastoreFSyncLevel {
>              type: ChunkOrder,
>              optional: true,
>          },
> +        "tape-backup-read-threads": {
> +            type: usize,
> +            optional: true,
> +            minimum: 1,
> +        },
>      },
>  )]
>  #[derive(Serialize, Deserialize, Default)]
> @@ -220,6 +225,9 @@ pub struct DatastoreTuning {
>      pub chunk_order: Option<ChunkOrder>,
>      #[serde(skip_serializing_if = "Option::is_none")]
>      pub sync_level: Option<DatastoreFSyncLevel>,
> +    #[serde(skip_serializing_if = "Option::is_none")]
> +    /// Configures how many threads to use to read from the datastore while backing up to tape.
> +    pub tape_backup_read_threads: Option<usize>,
>  }
>  
>  pub const DATASTORE_TUNING_STRING_SCHEMA: Schema = StringSchema::new("Datastore tuning options")
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index f95da7615..73a1cfa39 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -49,6 +49,19 @@ pub fn check_backup_owner(owner: &Authid, auth_id: &Authid) -> Result<(), Error>
>      Ok(())
>  }
>  
> +/// Contains the configuration of how many threads to use in various situations
> +pub struct ThreadConfiguration {
> +    pub tape_backup_read_threads: usize,
> +}
> +
> +impl Default for ThreadConfiguration {
> +    fn default() -> Self {
> +        Self {
> +            tape_backup_read_threads: 4,
> +        }
> +    }
> +}
> +
>  /// Datastore Management
>  ///
>  /// A Datastore can store severals backups, and provides the
> @@ -61,6 +74,7 @@ pub struct DataStoreImpl {
>      chunk_order: ChunkOrder,
>      last_digest: Option<[u8; 32]>,
>      sync_level: DatastoreFSyncLevel,
> +    thread_config: ThreadConfiguration,
>  }
>  
>  impl DataStoreImpl {
> @@ -75,6 +89,7 @@ impl DataStoreImpl {
>              chunk_order: Default::default(),
>              last_digest: None,
>              sync_level: Default::default(),
> +            thread_config: Default::default(),
>          })
>      }
>  }
> @@ -305,6 +320,11 @@ impl DataStore {
>                  .parse_property_string(config.tuning.as_deref().unwrap_or(""))?,
>          )?;
>  
> +        let mut thread_config = ThreadConfiguration::default();
> +        if let Some(value) = tuning.tape_backup_read_threads {
> +            thread_config.tape_backup_read_threads = value;
> +        }
> +
>          Ok(DataStoreImpl {
>              chunk_store,
>              gc_mutex: Mutex::new(()),
> @@ -313,6 +333,7 @@ impl DataStore {
>              chunk_order: tuning.chunk_order.unwrap_or_default(),
>              last_digest,
>              sync_level: tuning.sync_level.unwrap_or_default(),
> +            thread_config,
>          })
>      }
>  
> @@ -1377,6 +1398,11 @@ impl DataStore {
>          Ok(())
>      }
>  
> +    /// returns the datatstore thread configuration
> +    pub fn get_thread_configuration(&self) -> &ThreadConfiguration {
> +        &self.inner.thread_config
> +    }
> +
>      /// Destroy a datastore. This requires that there are no active operations on the datastore.
>      ///
>      /// This is a synchronous operation and should be run in a worker-thread.
> diff --git a/src/tape/pool_writer/new_chunks_iterator.rs b/src/tape/pool_writer/new_chunks_iterator.rs
> index 1454b33d2..a2a8091f6 100644
> --- a/src/tape/pool_writer/new_chunks_iterator.rs
> +++ b/src/tape/pool_writer/new_chunks_iterator.rs
> @@ -6,8 +6,9 @@ use anyhow::{format_err, Error};
>  use pbs_datastore::{DataBlob, DataStore, SnapshotReader};
>  
>  use crate::tape::CatalogSet;
> +use crate::tools::parallel_handler::ParallelHandler;
>  
> -/// Chunk iterator which use a separate thread to read chunks
> +/// Chunk iterator which uses separate threads to read chunks
>  ///
>  /// The iterator skips duplicate chunks and chunks already in the
>  /// catalog.
> @@ -25,7 +26,11 @@ impl NewChunksIterator {
>          snapshot_reader: Arc<Mutex<SnapshotReader>>,
>          catalog_set: Arc<Mutex<CatalogSet>>,
>      ) -> Result<(std::thread::JoinHandle<()>, Self), Error> {
> -        let (tx, rx) = std::sync::mpsc::sync_channel(3);
> +        let read_threads = datastore
> +            .get_thread_configuration()
> +            .tape_backup_read_threads;
> +
> +        let (tx, rx) = std::sync::mpsc::sync_channel(read_threads * 2);
>  
>          let reader_thread = std::thread::spawn(move || {
>              let snapshot_reader = snapshot_reader.lock().unwrap();
> @@ -35,36 +40,44 @@ impl NewChunksIterator {
>              let datastore_name = snapshot_reader.datastore_name().to_string();
>  
>              let result: Result<(), Error> = proxmox_lang::try_block!({
> -                let mut chunk_iter = snapshot_reader.chunk_iterator(move |digest| {
> +                let chunk_iter = snapshot_reader.chunk_iterator(move |digest| {
>                      catalog_set
>                          .lock()
>                          .unwrap()
>                          .contains_chunk(&datastore_name, digest)
>                  })?;
>  
> -                loop {
> -                    let digest = match chunk_iter.next() {
> -                        None => {
> -                            let _ = tx.send(Ok(None)); // ignore send error
> -                            break;
> +                let reader_pool =
> +                    ParallelHandler::new("tape backup chunk reader pool", read_threads, {
> +                        let tx = tx.clone();
> +                        move |digest| {
> +                            let blob = datastore.load_chunk(&digest)?;
> +                            //println!("LOAD CHUNK {}", hex::encode(&digest));
> +
> +                            tx.send(Ok(Some((digest, blob)))).map_err(|err| {
> +                                format_err!("error sending result from reader thread: {err}")
> +                            })?;
> +
> +                            Ok(())
>                          }
> -                        Some(digest) => digest?,
> -                    };
> +                    });
> +
> +                for digest in chunk_iter {
> +                    let digest = digest?;
>  
>                      if chunk_index.contains(&digest) {
>                          continue;
>                      }
>  
> -                    let blob = datastore.load_chunk(&digest)?;
> -                    //println!("LOAD CHUNK {}", hex::encode(&digest));
> -                    if let Err(err) = tx.send(Ok(Some((digest, blob)))) {
> -                        eprintln!("could not send chunk to reader thread: {err}");
> -                        break;
> -                    }
> +                    reader_pool.send(digest)?;
>  
>                      chunk_index.insert(digest);
>                  }
>  
> +                reader_pool.complete()?;
> +
> +                let _ = tx.send(Ok(None)); // ignore send error
> +
>                  Ok(())
>              });
>              if let Err(err) = result {
> diff --git a/www/Utils.js b/www/Utils.js
> index 1d7351a32..4d224cd4a 100644
> --- a/www/Utils.js
> +++ b/www/Utils.js
> @@ -790,6 +790,11 @@ Ext.define('PBS.Utils', {
>  	sync = PBS.Utils.tuningOptions['sync-level'][sync ?? '__default__'];
>  	options.push(`${gettext('Sync Level')}: ${sync}`);
>  
> +	let tapeBackupRT = tuning['tape-backup-read-threads'];
> +	delete tuning['tape-backup-read-threads'];
> +	tapeBackupRT ??= Proxmox.Utils.defaultText + ` (4)`;
> +	options.push(`${gettext('Tape Backup Read Threads')}: ${tapeBackupRT}`);
> +
>  	for (const [k, v] of Object.entries(tuning)) {
>  	    options.push(`${k}: ${v}`);
>  	}
> diff --git a/www/datastore/OptionView.js b/www/datastore/OptionView.js
> index e1f38af6f..cfbb89151 100644
> --- a/www/datastore/OptionView.js
> +++ b/www/datastore/OptionView.js
> @@ -271,6 +271,14 @@ Ext.define('PBS.Datastore.Options', {
>  			    deleteEmpty: true,
>  			    value: '__default__',
>  			},
> +			{
> +			    xtype: 'proxmoxintegerfield',
> +			    name: 'tape-backup-read-threads',
> +			    fieldLabel: gettext('Tape Backup Read Threads'),
> +			    min: 1,
> +			    emptyText: '4',
> +			    deleteEmpty: true,
> +			},
>  		    ],
>  		},
>  	    },


