* [pbs-devel] [PATCH proxmox-backup 1/2] tape: improve throughput by not unnecessarily syncing/committing
@ 2024-05-07 13:45 Dominik Csapak
  2024-05-07 13:45 ` [pbs-devel] [PATCH proxmox-backup 2/2] examples: add tape write benchmark Dominik Csapak
  2024-05-08  7:06 ` [pbs-devel] applied: [PATCH proxmox-backup 1/2] tape: improve throughput by not unnecessarily syncing/committing Dietmar Maurer

From: Dominik Csapak @ 2024-05-07 13:45 UTC
To: pbs-devel

When writing data to tape, the idea was to sync/commit to tape and commit
the catalog to disk every 128 GiB of data. For that, the counter
'bytes_written' was introduced and checked after every chunk/snapshot
archive.

Sadly, we forgot to reset the counter after doing so, which meant that
once 128 GiB had been written to the tape, we synced/committed after
every archive for the remaining length of the tape.

Since syncing to tape and writing to disk take a bit of time, the drive
had to slow down every time, reducing the available throughput (in our
tests here from ~300 MB/s to ~255 MB/s).

By resetting the value to zero after syncing, we avoid that and increase
throughput when backups are bigger than 128 GiB on tape.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/tape/pool_writer/mod.rs | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/src/tape/pool_writer/mod.rs b/src/tape/pool_writer/mod.rs
index 214260804..1a47e837c 100644
--- a/src/tape/pool_writer/mod.rs
+++ b/src/tape/pool_writer/mod.rs
@@ -43,7 +43,7 @@ struct PoolWriterState {
     media_uuid: Uuid,
     // tell if we already moved to EOM
     at_eom: bool,
-    // bytes written after the last tape fush/sync
+    // bytes written after the last tape flush/sync and catalog commit
     bytes_written: usize,
 }
 
@@ -200,8 +200,9 @@ impl PoolWriter {
     /// This is done automatically during a backupsession, but needs to
     /// be called explicitly before dropping the PoolWriter
     pub fn commit(&mut self) -> Result<(), Error> {
-        if let Some(PoolWriterState { ref mut drive, .. }) = self.status {
-            drive.sync()?; // sync all data to the tape
+        if let Some(ref mut status) = self.status {
+            status.drive.sync()?; // sync all data to the tape
+            status.bytes_written = 0; // reset bytes written
         }
         self.catalog_set.lock().unwrap().commit()?; // then commit the catalog
         Ok(())
-- 
2.39.2

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
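The effect of the missing reset can be illustrated with a small standalone sketch of the threshold logic described above. Names like `WriterState` and `write_archive` are illustrative only, not the actual PoolWriter API; the real code syncs the drive and commits the catalog where the comment indicates.

```rust
// Illustrative sketch (not the actual PoolWriter): accumulate bytes written,
// commit once the threshold is crossed, and crucially reset the counter
// afterwards. Without the reset, every archive after the first 128 GiB
// triggers another commit.
const COMMIT_THRESHOLD: usize = 128 * 1024 * 1024 * 1024; // 128 GiB

struct WriterState {
    bytes_written: usize,
    commits: usize,
}

impl WriterState {
    fn write_archive(&mut self, len: usize) {
        self.bytes_written += len;
        if self.bytes_written >= COMMIT_THRESHOLD {
            self.commit();
        }
    }

    fn commit(&mut self) {
        // drive.sync() and the catalog commit would happen here
        self.commits += 1;
        self.bytes_written = 0; // the fix: forget this and the counter
                                // stays above the threshold forever
    }
}

fn main() {
    let mut state = WriterState { bytes_written: 0, commits: 0 };
    // write 512 GiB as 1 GiB archives
    for _ in 0..512 {
        state.write_archive(1024 * 1024 * 1024);
    }
    // with the reset: one commit per 128 GiB, i.e. 4 in total
    println!("commits: {}", state.commits);
}
```

Dropping the `self.bytes_written = 0;` line in the sketch makes `commits` jump from 4 to 385 for the same workload, which is the per-archive committing the patch fixes.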
* [pbs-devel] [PATCH proxmox-backup 2/2] examples: add tape write benchmark
From: Dominik Csapak @ 2024-05-07 13:45 UTC
To: pbs-devel

A small example that simply writes pseudo-random chunks to a drive. This
is useful to benchmark throughput on tape drives. The output and behavior
are similar to what the pool writer does, but without writing multiple
files, committing, or loading data from disk.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 examples/tape-write-benchmark.rs | 91 ++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)
 create mode 100644 examples/tape-write-benchmark.rs

diff --git a/examples/tape-write-benchmark.rs b/examples/tape-write-benchmark.rs
new file mode 100644
index 000000000..d5686e65a
--- /dev/null
+++ b/examples/tape-write-benchmark.rs
@@ -0,0 +1,91 @@
+use std::{
+    fs::File,
+    io::Read,
+    time::{Duration, SystemTime},
+};
+
+use anyhow::{format_err, Error};
+use pbs_tape::TapeWrite;
+use proxmox_backup::tape::drive::{LtoTapeHandle, TapeDriver};
+
+const URANDOM_PATH: &str = "/dev/urandom";
+const CHUNK_SIZE: usize = 4 * 1024 * 1024; // 4 MiB
+const LOG_LIMIT: usize = 4 * 1024 * 1024 * 1024; // 4 GiB
+
+fn write_chunks<'a>(
+    mut writer: Box<dyn 'a + TapeWrite>,
+    blob_size: usize,
+    max_size: usize,
+    max_time: Duration,
+) -> Result<(), Error> {
+    // prepare chunks in memory
+
+    let mut blob: Vec<u8> = vec![0u8; blob_size];
+
+    let mut file = File::open(URANDOM_PATH)?;
+    file.read_exact(&mut blob[..])?;
+
+    let start_time = SystemTime::now();
+    loop {
+        let iteration_time = SystemTime::now();
+        let mut count = 0;
+        let mut bytes_written = 0;
+
+        let mut idx = 0;
+        let mut incr_count = 0;
+        loop {
+            if writer.write_all(&blob)? {
+                eprintln!("LEOM reached");
+                break;
+            }
+
+            // modifying chunks a bit to mitigate compression/deduplication
+            blob[idx] = blob[idx].wrapping_add(1);
+            incr_count += 1;
+            if incr_count >= 256 {
+                incr_count = 0;
+                idx += 1;
+            }
+            count += 1;
+            bytes_written += blob_size;
+
+            if bytes_written > max_size {
+                break;
+            }
+        }
+
+        let elapsed = iteration_time.elapsed()?.as_secs_f64();
+        let elapsed_total = start_time.elapsed()?;
+        eprintln!(
+            "{:.2}s: wrote {} chunks ({:.2} MB at {:.2} MB/s, average: {:.2} MB/s)",
+            elapsed_total.as_secs_f64(),
+            count,
+            bytes_written as f64 / 1_000_000.0,
+            (bytes_written as f64) / (1_000_000.0 * elapsed),
+            (writer.bytes_written() as f64) / (1_000_000.0 * elapsed_total.as_secs_f64()),
+        );
+
+        if elapsed_total > max_time {
+            break;
+        }
+    }
+
+    Ok(())
+}
+fn main() -> Result<(), Error> {
+    let mut args = std::env::args_os();
+    args.next(); // binary name
+    let path = args.next().expect("no path to tape device given");
+    let file = File::open(path).map_err(|err| format_err!("could not open tape device: {err}"))?;
+    let mut drive = LtoTapeHandle::new(file)
+        .map_err(|err| format_err!("error creating drive handle: {err}"))?;
+    write_chunks(
+        drive
+            .write_file()
+            .map_err(|err| format_err!("error starting file write: {err}"))?,
+        CHUNK_SIZE,
+        LOG_LIMIT,
+        Duration::new(60 * 20, 0),
+    )
+    .map_err(|err| format_err!("error writing data to tape: {err}"))?;
+    Ok(())
+}
-- 
2.39.2
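The benchmark keeps drive-side compression and deduplication from inflating the numbers by mutating one byte of the otherwise fixed random chunk between writes. A minimal sketch of that mutation scheme, pulled out into a helper purely for illustration (the example above inlines it in the write loop):

```rust
// Sketch of the chunk-mutation scheme from the benchmark above: one byte is
// incremented on every write; after 256 increments it has wrapped back to
// its original value, so the scheme moves on to the next index. Consecutive
// chunks are therefore never identical, without regenerating random data.
fn mutate_chunk(blob: &mut [u8], idx: &mut usize, incr_count: &mut u32) {
    blob[*idx] = blob[*idx].wrapping_add(1);
    *incr_count += 1;
    if *incr_count >= 256 {
        *incr_count = 0;
        *idx += 1; // a 4 MiB chunk offers ~4M indices before running out
    }
}

fn main() {
    let mut blob = vec![0u8; 8];
    let (mut idx, mut incr) = (0usize, 0u32);
    for _ in 0..256 {
        mutate_chunk(&mut blob, &mut idx, &mut incr);
    }
    // after 256 mutations byte 0 has wrapped back to its original value
    // and the scheme has advanced to index 1
    println!("idx = {idx}, blob[0] = {}", blob[0]);
}
```

Note that the throughput figures printed by the benchmark use decimal MB (1,000,000 bytes), matching the `1_000_000.0` divisors in the `eprintln!` above.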
* [pbs-devel] applied: [PATCH proxmox-backup 1/2] tape: improve throughput by not unnecessarily syncing/committing
From: Dietmar Maurer @ 2024-05-08 7:06 UTC
To: Proxmox Backup Server development discussion, Dominik Csapak

applied both patches.