From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 8 Oct 2024 12:11:39 +0200
From: Gabriel Goller
To: Proxmox Backup Server development discussion <pbs-devel@lists.proxmox.com>
Message-ID: <20241008101139.mwvpou3a5bep6prl@luna.proxmox.com>
In-Reply-To: <20240925122233.398359-1-c.ebner@proxmox.com>
References: <20240925122233.398359-1-c.ebner@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup] part fix 5560: client: periodically show backup progress

>diff --git a/pbs-client/src/backup_writer.rs b/pbs-client/src/backup_writer.rs
>index d63c09b5a..9080ce4af 100644
>--- a/pbs-client/src/backup_writer.rs
>+++ b/pbs-client/src/backup_writer.rs
>@@ -65,7 +65,13 @@ struct UploadStats {
>     csum: [u8; 32],
> }
>
>-type UploadQueueSender = mpsc::Sender<(MergedChunkInfo, Option<h2::client::ResponseFuture>)>;
>+struct ChunkUploadResponse {
>+    future: h2::client::ResponseFuture,
>+    size: usize,
>+    size_encoded: usize,
>+}
>+
>+type UploadQueueSender = mpsc::Sender<(MergedChunkInfo, Option<ChunkUploadResponse>)>;
> type UploadResultReceiver = oneshot::Receiver<Result<UploadStats, Error>>;
>
> impl BackupWriter {
>@@ -332,6 +338,12 @@ impl BackupWriter {
>             .as_u64()
>             .unwrap();
>
>+        let archive = if log::log_enabled!(log::Level::Debug) {

IMO this is ok, I will convert it to tracing in a later patch. (We need to
rip out the 'log' crate from pbs-client and rename all log statements, but
that's not in the scope of this patch at all.)

>+            archive_name
>+        } else {
>+            pbs_tools::format::strip_server_file_extension(archive_name)
>+        };
>+
>         let upload_stats = Self::upload_chunk_info_stream(
>             self.h2.clone(),
>             wid,
>@@ -646,25 +665,60 @@ impl BackupWriter {
>
>         let stream_len = Arc::new(AtomicUsize::new(0));
>         let stream_len2 = stream_len.clone();
>+        let stream_len3 = stream_len.clone();
>         let compressed_stream_len = Arc::new(AtomicU64::new(0));
>         let compressed_stream_len2 = compressed_stream_len.clone();
>         let reused_len = Arc::new(AtomicUsize::new(0));
>         let reused_len2 = reused_len.clone();
>+        let reused_len3 = reused_len.clone();
>         let injected_len = Arc::new(AtomicUsize::new(0));
>         let injected_len2 = injected_len.clone();
>+        let uploaded_len = Arc::new(AtomicUsize::new(0));
>+        let uploaded_len_compressed = Arc::new(AtomicUsize::new(0));
>
>         let append_chunk_path = format!("{}_index", prefix);
>         let upload_chunk_path = format!("{}_chunk", prefix);
>         let is_fixed_chunk_size = prefix == "fixed";
>
>-        let (upload_queue, upload_result) =
>-            Self::append_chunk_queue(h2.clone(), wid, append_chunk_path);
>+        let (upload_queue, upload_result) = Self::append_chunk_queue(
>+            h2.clone(),
>+            wid,
>+            append_chunk_path,
>+            uploaded_len.clone(),
>+            uploaded_len_compressed.clone(),
>+        );
>
>         let start_time = std::time::Instant::now();
>
>         let index_csum = Arc::new(Mutex::new(Some(openssl::sha::Sha256::new())));
>         let index_csum_2 = index_csum.clone();
>
>+        let archive = archive.to_string();
>+        tokio::spawn(async move {
>+            loop {
>+                tokio::time::sleep(tokio::time::Duration::from_secs(60)).await;
>+
>+                let size = stream_len3.load(Ordering::SeqCst);
>+                let size_uploaded = uploaded_len.load(Ordering::SeqCst);
>+                let size_reused = reused_len3.load(Ordering::SeqCst);
>+                let size_compressed = uploaded_len_compressed.load(Ordering::SeqCst);
>+                let size_new = size - size_reused;
>+                let elapsed = start_time.elapsed();
>+                let speed = size_uploaded * 1_000_000 / elapsed.as_micros() as usize;
>+
>+                log::info!(
>+                    "  {archive}: elapsed {:.2} s, new: {}, reused: {}, total: {}, uploaded: {} (compressed {}, average: {}/s)",
>+                    elapsed.as_secs_f64(),
>+                    HumanByte::from(size_new),
>+                    HumanByte::from(size_reused),
>+                    HumanByte::from(size),
>+                    HumanByte::from(size_uploaded),
>+                    HumanByte::from(size_compressed),
>+                    HumanByte::from(speed),
>+                );

This is also ok, I will change it to tracing later.

>+            }
>+        });
>+
>         stream
>             .inject_reused_chunks(injections, stream_len.clone())
>             .and_then(move |chunk_info| match chunk_info {

One issue I encountered: when backing up multiple archives, the output gets
kind of messy, e.g.:

> root@pbs:~# PBS_REPOSITORY=test3 proxmox-backup-client backup root.pxar:. test.pxar:./something
> Starting backup: host/pbs/2024-10-08T09:46:48Z
> Client name: pbs
> Starting backup protocol: Tue Oct  8 11:46:48 2024
> No previous manifest available.
> Upload directory '.' to 'test3' as root.pxar.didx
>   root.pxar: elapsed 60.00 s, new: 13.077 GiB, reused: 0 B, total: 13.077 GiB, uploaded: 13.061 GiB (compressed 11.487 GiB, average: 222.902 MiB/s)
>   catalog.pcat1: elapsed 60.00 s, new: 2.073 MiB, reused: 0 B, total: 2.073 MiB, uploaded: 2.073 MiB (compressed 911.743 KiB, average: 35.384 KiB/s)
> root.pxar: had to backup 29.285 GiB of 29.294 GiB (compressed 27.707 GiB) in 116.49 s (average 257.426 MiB/s)
> root.pxar: backup was done incrementally, reused 8.493 MiB (0.0%)
> Upload directory './something' to 'test3' as test.pxar.didx
>   catalog.pcat1: elapsed 120.01 s, new: 2.073 MiB, reused: 0 B, total: 2.073 MiB, uploaded: 2.073 MiB (compressed 911.743 KiB, average: 17.692 KiB/s)
>   root.pxar: elapsed 120.01 s, new: 29.285 GiB, reused: 8.493 MiB, total: 29.294 GiB, uploaded: 29.285 GiB (compressed 27.707 GiB, average: 249.891 MiB/s)
>   test.pxar: elapsed 60.00 s, new: 18.208 GiB, reused: 0 B, total: 18.208 GiB, uploaded: 18.194 GiB (compressed 18.194 GiB, average: 310.5 MiB/s)
>   root.pxar: elapsed 180.01 s, new: 29.285 GiB, reused: 8.493 MiB, total: 29.294 GiB, uploaded: 29.285 GiB (compressed 27.707 GiB, average: 166.596 MiB/s)
>   catalog.pcat1: elapsed 180.01 s, new: 2.073 MiB, reused: 0 B, total: 2.073 MiB, uploaded: 2.073 MiB (compressed 911.743 KiB, average: 11.795 KiB/s)
> test.pxar: had to backup 22.304 GiB of 22.304 GiB (compressed 22.304 GiB) in 73.81 s (average 309.419 MiB/s)
> Uploaded backup catalog (2.287 MiB)
> Duration: 190.40s
> End Time: Tue Oct  8 11:49:58 2024

Note the last two "root.pxar: ..." progress lines: they get printed after
root.pxar has already been uploaded and simply repeat the numbers from the
lines before (because obviously nothing has changed, everything is already
uploaded). But IMO if this is hard to fix, we can also leave it like this.

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel