Subject: Re: [pve-devel] [RFC PATCH v2 proxmox-backup-qemu] restore: make chunk loading more parallel
From: Adam Kalisz via pve-devel
To: Dominik Csapak, Proxmox VE development discussion
Cc: Adam Kalisz
Date: Tue, 08 Jul 2025 12:04:56 +0200
Message-ID: <9dc2c099169ee1ed64c274d64cc0a1c19f9f6c92.camel@notnullmakers.com>
In-Reply-To: <20250708084900.1068146-1-d.csapak@proxmox.com>
References: <20250708084900.1068146-1-d.csapak@proxmox.com>

Hi Dominik,

this is a big improvement. I have done some performance measurements again:

Ryzen:

4 worker threads:
restore image complete (bytes=53687091200, duration=52.06s, speed=983.47MB/s)
8 worker threads:
restore image complete (bytes=53687091200, duration=50.12s, speed=1021.56MB/s)
4 worker threads, 4 max-blocking:
restore image complete (bytes=53687091200, duration=54.00s, speed=948.22MB/s)
8 worker threads, 4 max-blocking:
restore image complete (bytes=53687091200, duration=50.43s, speed=1015.25MB/s)
8 worker threads, 4 max-blocking, 32 buffered futures:
restore image complete (bytes=53687091200, duration=52.11s, speed=982.53MB/s)

Xeon:

4 worker threads:
restore image complete (bytes=10737418240, duration=3.06s, speed=3345.97MB/s)
restore image complete (bytes=107374182400, duration=139.80s, speed=732.47MB/s)
restore image complete (bytes=107374182400, duration=136.67s, speed=749.23MB/s)
8 worker threads:
restore image complete (bytes=10737418240, duration=2.50s, speed=4095.30MB/s)
restore image complete (bytes=107374182400, duration=127.14s, speed=805.42MB/s)
restore image complete (bytes=107374182400, duration=121.39s, speed=843.59MB/s)

Much better, but it would need to be 25% faster on this older system to
hit the numbers I have already seen with my solution.

For comparison, with my solution on the same Xeon system I was hitting:

With 8-way concurrency, 16 max-blocking threads:
restore image complete (bytes=10737418240, avg fetch time=16.7572ms, avg time per nonzero write=1.9310ms, storage nonzero total write time=1.580s, duration=2.25s, speed=4551.25MB/s)
restore image complete (bytes=107374182400, avg fetch time=29.1714ms, avg time per nonzero write=2.2216ms, storage nonzero total write time=55.739s, duration=106.17s, speed=964.52MB/s)
restore image complete (bytes=107374182400, avg fetch time=28.2543ms, avg time per nonzero write=2.1473ms, storage nonzero total write time=54.139s, duration=103.52s, speed=989.18MB/s)

With 16-way concurrency, 32 max-blocking threads:
restore image complete (bytes=10737418240, avg fetch time=25.3444ms, avg time per nonzero write=2.0709ms, storage nonzero total write time=1.694s, duration=2.02s, speed=5074.13MB/s)
restore image complete (bytes=107374182400, avg fetch time=53.3046ms, avg time per nonzero write=2.6692ms, storage nonzero total write time=66.969s, duration=106.65s, speed=960.13MB/s)
restore image complete (bytes=107374182400, avg fetch time=47.3909ms, avg time per nonzero write=2.6352ms, storage nonzero total write time=66.440s, duration=98.09s, speed=1043.95MB/s)
-> this seemed to be the best setting for this system

On the Ryzen system I was hitting:

With 8-way concurrency, 16 max-blocking threads:
restore image complete (bytes=53687091200, avg fetch time=24.7342ms, avg time per nonzero write=1.6474ms, storage nonzero total write time=19.996s, duration=45.83s, speed=1117.15MB/s)
-> this seemed to be the best setting for this system
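
For reference, the "worker threads" and "max-blocking" knobs varied in the
runs above correspond to how the tokio runtime driving the restore is sized.
A minimal sketch of how such a runtime is typically built; the function name
and wiring here are illustrative assumptions, not the actual
proxmox-backup-qemu code:

use tokio::runtime::{Builder, Runtime};

// Illustrative only: how "N worker threads" / "M max-blocking" style knobs
// are usually set when constructing a multi-threaded tokio runtime.
fn build_restore_runtime(workers: usize, max_blocking: usize) -> std::io::Result<Runtime> {
    Builder::new_multi_thread()
        .worker_threads(workers)            // e.g. 4 or 8 in the runs above
        .max_blocking_threads(max_blocking) // e.g. 4 in the "max-blocking" runs
        .enable_all()                       // IO + timer drivers for the HTTP/2 client
        .build()
}

fn main() -> std::io::Result<()> {
    let runtime = build_restore_runtime(8, 4)?;
    runtime.block_on(async {
        // the restore work would be driven here
    });
    Ok(())
}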

It seems the counting of zeroes works in some kind of steps (seen on the
Xeon system with mostly incompressible data):

download and verify backup index
progress 1% (read 1073741824 bytes, zeroes = 0% (0 bytes), duration 1 sec)
progress 2% (read 2147483648 bytes, zeroes = 0% (0 bytes), duration 2 sec)
progress 3% (read 3221225472 bytes, zeroes = 0% (0 bytes), duration 3 sec)
progress 4% (read 4294967296 bytes, zeroes = 0% (0 bytes), duration 5 sec)
progress 5% (read 5368709120 bytes, zeroes = 0% (0 bytes), duration 6 sec)
progress 6% (read 6442450944 bytes, zeroes = 0% (0 bytes), duration 7 sec)
progress 7% (read 7516192768 bytes, zeroes = 0% (0 bytes), duration 8 sec)
progress 8% (read 8589934592 bytes, zeroes = 0% (0 bytes), duration 10 sec)
progress 9% (read 9663676416 bytes, zeroes = 0% (0 bytes), duration 11 sec)
progress 10% (read 10737418240 bytes, zeroes = 0% (0 bytes), duration 12 sec)
progress 11% (read 11811160064 bytes, zeroes = 0% (0 bytes), duration 14 sec)
progress 12% (read 12884901888 bytes, zeroes = 0% (0 bytes), duration 15 sec)
progress 13% (read 13958643712 bytes, zeroes = 0% (0 bytes), duration 16 sec)
progress 14% (read 15032385536 bytes, zeroes = 0% (0 bytes), duration 18 sec)
progress 15% (read 16106127360 bytes, zeroes = 0% (0 bytes), duration 19 sec)
progress 16% (read 17179869184 bytes, zeroes = 0% (0 bytes), duration 20 sec)
progress 17% (read 18253611008 bytes, zeroes = 0% (0 bytes), duration 21 sec)
progress 18% (read 19327352832 bytes, zeroes = 0% (0 bytes), duration 23 sec)
progress 19% (read 20401094656 bytes, zeroes = 0% (0 bytes), duration 24 sec)
progress 20% (read 21474836480 bytes, zeroes = 0% (0 bytes), duration 25 sec)
progress 21% (read 22548578304 bytes, zeroes = 0% (0 bytes), duration 27 sec)
progress 22% (read 23622320128 bytes, zeroes = 0% (0 bytes), duration 28 sec)
progress 23% (read 24696061952 bytes, zeroes = 0% (0 bytes), duration 29 sec)
progress 24% (read 25769803776 bytes, zeroes = 0% (0 bytes), duration 31 sec)
progress 25% (read 26843545600 bytes, zeroes = 1% (515899392 bytes), duration 31 sec)
progress 26% (read 27917287424 bytes, zeroes = 1% (515899392 bytes), duration 33 sec)
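
For reference, in the patch quoted below the zero counter only ever advances
by whole chunks (zeroes += index.chunk_size), so with the usual 4 MiB fixed
chunk size for VM images (an assumption here) it moves in 4 MiB steps and
only becomes visible once a run of zero chunks is read. A minimal sketch of
that accounting, with the chunk layout invented to mirror the jump in the
log above:

// Sketch of the zero accounting from the quoted patch (names and the 4 MiB
// fixed chunk size are assumptions, not taken verbatim from proxmox code).
const CHUNK_SIZE: u64 = 4 * 1024 * 1024;

fn main() {
    let mut bytes: u64 = 0;
    let mut zeroes: u64 = 0;

    // pretend chunks 6144..6267 are all-zero and everything else holds data,
    // mirroring the jump from 0 bytes to 515899392 bytes (123 chunks) above
    for pos in 0..6400u64 {
        bytes += CHUNK_SIZE;
        if (6144..6267).contains(&pos) {
            zeroes += CHUNK_SIZE; // zeroes only ever grow by a whole chunk
        }
    }

    // presumably the same integer math as the progress line in the patch
    println!(
        "read {bytes} bytes, zeroes = {}% ({zeroes} bytes)",
        zeroes * 100 / bytes
    );
    // prints: read 26843545600 bytes, zeroes = 1% (515899392 bytes)
}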

Especially during a restore, the speed is quite important if you need to
hit Restore Time Objectives under SLAs. That's why we were targeting
1 GBps for incompressible data.

Thank you
Adam

On Tue, 2025-07-08 at 10:49 +0200, Dominik Csapak wrote:
> by using async futures to load chunks and stream::buffer_unordered to
> buffer up to 16 of them, depending on write/load speed, use tokio's
> task spawn to make sure they continue to run in the background, since
> buffer_unordered starts them, but does not poll them to completion
> unless we're awaiting.
>
> With this, we don't need to increase the number of threads in the
> runtime to trigger parallel reads and network traffic to us. This way
> it's only limited by CPU if decoding and/or decrypting is the
> bottleneck.
>
> I measured restoring a VM backup with a 60GiB disk (filled with ~42GiB
> data) and fast storage over a local network link (from PBS VM to the
> host). I did 3 runs, but the variance was not that big, so here's some
> representative log output with various MAX_BUFFERED_FUTURES values.
>
> benchmark    duration        speed    cpu percentage
> current       107.18s   573.25MB/s            < 100%
> 4:             44.74s  1373.34MB/s            ~ 180%
> 8:             32.30s  1902.42MB/s            ~ 290%
> 16:            25.75s  2386.44MB/s            ~ 360%
>
> I saw an increase in CPU usage proportional to the speed increase, so
> while the current version uses less than a single core in total,
> using 16 parallel futures resulted in 3-4 available threads of the
> tokio runtime being utilized.
>
> In general I'd like to limit the buffering somehow, but I don't think
> there is a good automatic metric we can use, and giving the admin a
> knob whose actual ramifications are hard to explain is also not good,
> so I settled for a value that showed improvement but does not seem
> too high.
>
> In any case, if the target and/or source storage is too slow, there
> will be back/forward pressure, and this change should only matter for
> storage systems where IO depth plays a role and that are fast enough.
>
> The way we count the finished chunks also changes a bit, since they
> can come unordered, so we can't rely on the index position to
> calculate the percentage.
>
> This patch is loosely based on the patch from Adam Kalisz[0], but
> removes the need to increase the blocking threads and uses the
> (actually always used) underlying async implementation for reading
> remote chunks.
>
> 0: https://lore.proxmox.com/pve-devel/mailman.719.1751052794.395.pve-devel@lists.proxmox.com/
>
> Signed-off-by: Dominik Csapak
> Based-on-patch-by: Adam Kalisz
> ---
> changes from RFC v1:
> * uses tokio task spawn to actually run the fetching in the background
> * redo the counting for the task output (pos was unordered so we got
>   weird ordering sometimes)
>
> When actually running the fetching in the background, the speed increase
> is much higher than just using buffer_unordered for the fetching
> futures, which is nice (although the cpu usage is much higher now).
>
> Since the benchmark was much faster with higher values, I used a
> different, bigger VM this time around so the timings are more consistent
> and it makes sure the disk does not fit in the PBS's memory.
>
> The question of what count we should use remains, though...
>
>  src/restore.rs | 63 +++++++++++++++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 47 insertions(+), 16 deletions(-)
>
> diff --git a/src/restore.rs b/src/restore.rs
> index 5a5a398..4e6c538 100644
> --- a/src/restore.rs
> +++ b/src/restore.rs
> @@ -2,6 +2,7 @@ use std::convert::TryInto;
>  use std::sync::{Arc, Mutex};
>  
>  use anyhow::{bail, format_err, Error};
> +use futures::StreamExt;
>  use once_cell::sync::OnceCell;
>  use tokio::runtime::Runtime;
>  
> @@ -13,7 +14,7 @@ use pbs_datastore::cached_chunk_reader::CachedChunkReader;
>  use pbs_datastore::data_blob::DataChunkBuilder;
>  use pbs_datastore::fixed_index::FixedIndexReader;
>  use pbs_datastore::index::IndexFile;
> -use pbs_datastore::read_chunk::ReadChunk;
> +use pbs_datastore::read_chunk::AsyncReadChunk;
>  use pbs_datastore::BackupManifest;
>  use pbs_key_config::load_and_decrypt_key;
>  use pbs_tools::crypt_config::CryptConfig;
> @@ -29,6 +30,9 @@ struct ImageAccessInfo {
>      archive_size: u64,
>  }
>  
> +// use this many buffered futures to make loading of chunks more concurrent
> +const MAX_BUFFERED_FUTURES: usize = 16;
> +
>  pub(crate) struct RestoreTask {
>      setup: BackupSetup,
>      runtime: Arc<Runtime>,
> @@ -165,26 +169,53 @@ impl RestoreTask {
>  
>          let start_time = std::time::Instant::now();
>  
> -        for pos in 0..index.index_count() {
> -            let digest = index.index_digest(pos).unwrap();
> +        let read_queue = (0..index.index_count()).map(|pos| {
> +            let digest = *index.index_digest(pos).unwrap();
>              let offset = (pos * index.chunk_size) as u64;
> -            if digest == &zero_chunk_digest {
> -                let res = write_zero_callback(offset, index.chunk_size as u64);
> -                if res < 0 {
> -                    bail!("write_zero_callback failed ({})", res);
> +            let chunk_reader = chunk_reader.clone();
> +            async move {
> +                let chunk = if digest == zero_chunk_digest {
> +                    None
> +                } else {
> +                    let raw_data = tokio::task::spawn(async move {
> +                        AsyncReadChunk::read_chunk(&chunk_reader, &digest).await
> +                    })
> +                    .await??;
> +                    Some(raw_data)
> +                };
> +
> +                Ok::<_, Error>((chunk, offset))
> +            }
> +        });
> +
> +        // this buffers futures and pre-fetches some chunks for us
> +        let mut stream = futures::stream::iter(read_queue).buffer_unordered(MAX_BUFFERED_FUTURES);
> +
> +        let mut count = 0;
> +        while let Some(res) = stream.next().await {
> +            let res = res?;
> +            match res {
> +                (None, offset) => {
> +                    let res = write_zero_callback(offset, index.chunk_size as u64);
> +                    if res < 0 {
> +                        bail!("write_zero_callback failed ({})", res);
> +                    }
> +                    bytes += index.chunk_size;
> +                    zeroes += index.chunk_size;
>                  }
> -                bytes += index.chunk_size;
> -                zeroes += index.chunk_size;
> -            } else {
> -                let raw_data = ReadChunk::read_chunk(&chunk_reader, digest)?;
> -                let res = write_data_callback(offset, &raw_data);
> -                if res < 0 {
> -                    bail!("write_data_callback failed ({})", res);
> +                (Some(raw_data), offset) => {
> +                    let res = write_data_callback(offset, &raw_data);
> +                    if res < 0 {
> +                        bail!("write_data_callback failed ({})", res);
> +                    }
> +                    bytes += raw_data.len();
>                  }
> -                bytes += raw_data.len();
>              }
> +
> +            count += 1;
> +
>              if verbose {
> -                let next_per = ((pos + 1) * 100) / index.index_count();
> +                let next_per = (count * 100) / index.index_count();
>                  if per != next_per {
>                      eprintln!(
>                          "progress {}% (read {} bytes, zeroes = {}% ({} bytes), duration {} sec)",
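
A note on the pattern in the quoted hunk: stream::buffer_unordered only
drives its futures while the stream itself is being polled, so the
tokio::task::spawn wrapper is what keeps fetches progressing while the
consumer is blocked in the write callbacks. A minimal, self-contained sketch
of that shape; the fetch function, sizes and counts are placeholders, not
the proxmox-backup-qemu API:

use anyhow::Error;
use futures::{stream, StreamExt};

const MAX_BUFFERED_FUTURES: usize = 16;

// Placeholder for the real chunk fetch (network + decode + decrypt).
async fn fetch_chunk(_pos: usize) -> Result<Vec<u8>, Error> {
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
    Ok(vec![0u8; 4 * 1024 * 1024])
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    let read_queue = (0..64usize).map(|pos| async move {
        // spawn the fetch so it keeps running in the background even while
        // the consumer below is busy writing and not polling the stream
        let data = tokio::task::spawn(fetch_chunk(pos)).await??;
        Ok::<_, Error>((pos, data))
    });

    // keep up to MAX_BUFFERED_FUTURES fetches in flight; completions arrive unordered
    let mut chunks = stream::iter(read_queue).buffer_unordered(MAX_BUFFERED_FUTURES);

    while let Some(res) = chunks.next().await {
        let (pos, data) = res?;
        // the real code would call its write callback with the chunk here
        println!("chunk {pos}: {} bytes", data.len());
    }
    Ok(())
}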
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel