From: Dominik Csapak <d.csapak@proxmox.com>
To: pbs-devel@lists.proxmox.com
Date: Tue, 7 May 2024 15:45:52 +0200
Message-Id: <20240507134553.3233550-1-d.csapak@proxmox.com>
X-Mailer: git-send-email 2.39.2
Subject: [pbs-devel] [PATCH proxmox-backup 1/2] tape: improve throughput by not unnecessarily syncing/committing

When writing data to tape, the idea was to sync/commit to tape and
commit the catalog to disk every 128GiB of data. For that, the counter
'bytes_written' was introduced and checked after every chunk/snapshot
archive.

Sadly, we forgot to reset the counter after doing so, which meant that
once 128GiB had been written to the tape, we synced/committed after
every single archive for the remaining length of the tape.

Since syncing to tape and writing to disk take a bit of time, the drive
had to slow down each time, reducing the available throughput (in our
tests here, from ~300MB/s to ~255MB/s).

By resetting the value to zero after syncing, we avoid that and improve
throughput for backups larger than 128GiB on tape.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/tape/pool_writer/mod.rs | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
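
For context, a minimal, self-contained sketch of the counter-based
commit pattern this patch fixes (simplified, hypothetical names, not
the actual PoolWriter API):

// Hypothetical, simplified illustration -- not the real proxmox-backup code.
const COMMIT_THRESHOLD: usize = 128 * 1024 * 1024 * 1024; // 128 GiB

struct Writer {
    bytes_written: usize,
}

impl Writer {
    fn write_archive(&mut self, data: &[u8]) {
        // ... data would be written to tape here ...
        self.bytes_written += data.len();
        if self.bytes_written >= COMMIT_THRESHOLD {
            self.commit(); // sync to tape + commit catalog every 128GiB
        }
    }

    fn commit(&mut self) {
        // ... drive sync and catalog commit would happen here ...
        // The fix: without this reset, bytes_written stays above the
        // threshold forever, so every later archive triggers a commit.
        self.bytes_written = 0;
    }
}

fn main() {
    let mut writer = Writer { bytes_written: 0 };
    writer.write_archive(&[0u8; 1024]); // commits only once per 128GiB
}
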
diff --git a/src/tape/pool_writer/mod.rs b/src/tape/pool_writer/mod.rs
index 214260804..1a47e837c 100644
--- a/src/tape/pool_writer/mod.rs
+++ b/src/tape/pool_writer/mod.rs
@@ -43,7 +43,7 @@ struct PoolWriterState {
     media_uuid: Uuid,
     // tell if we already moved to EOM
     at_eom: bool,
-    // bytes written after the last tape fush/sync
+    // bytes written after the last tape flush/sync and catalog commit
     bytes_written: usize,
 }
 
@@ -200,8 +200,9 @@ impl PoolWriter {
     /// This is done automatically during a backupsession, but needs to
     /// be called explicitly before dropping the PoolWriter
    pub fn commit(&mut self) -> Result<(), Error> {
-        if let Some(PoolWriterState { ref mut drive, .. }) = self.status {
-            drive.sync()?; // sync all data to the tape
+        if let Some(ref mut status) = self.status {
+            status.drive.sync()?; // sync all data to the tape
+            status.bytes_written = 0; // reset bytes written
         }
         self.catalog_set.lock().unwrap().commit()?; // then commit the catalog
         Ok(())
--
2.39.2

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel