Subject: Re: [pve-devel] [RFC PATCH proxmox-backup-qemu] restore: make chunk loading more parallel
From: Adam Kalisz via pve-devel
To: Dominik Csapak, pve-devel@lists.proxmox.com
Cc: Adam Kalisz
Date: Mon, 07 Jul 2025 21:49:09 +0200
In-Reply-To: <20250707081451.696685-1-d.csapak@proxmox.com>
References: <20250707081451.696685-1-d.csapak@proxmox.com>
List-Id: Proxmox VE development discussion

Hi Dominik,

seems like a more elegant solution. In my testing on the Ryzen system:

progress 100% (read 53624176640 bytes, zeroes = 5% (2776629248 bytes), duration 52 sec)
progress 96% (read 53628370944 bytes, zeroes = 5% (2776629248 bytes), duration 52 sec)
progress 97% (read 53645148160 bytes, zeroes = 5% (2776629248 bytes), duration 52 sec)
progress 96% (read 53657731072 bytes, zeroes = 5% (2776629248 bytes), duration 52 sec)
progress 97% (read 53666119680 bytes, zeroes = 5% (2776629248 bytes), duration 52 sec)
progress 96% (read 53678702592 bytes, zeroes = 5% (2776629248 bytes), duration 52 sec)
progress 97% (read 53682896896 bytes, zeroes = 5% (2776629248 bytes), duration 52 sec)
restore image complete (bytes=53687091200, duration=52.35s, speed=977.94MB/s)

As you can see, the speed is still much better than the original, but lower than with my solution, and the progress reporting shows the lack of proper ordering.
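
The out-of-order percentages are expected with this approach: buffer_unordered yields results in completion order rather than submission order, so the pos-based progress jumps around. A minimal standalone sketch of that behaviour (my own illustration, assuming only the tokio and futures crates, not code taken from the patch):

// Minimal illustration: buffer_unordered polls up to N futures at once and
// yields results in completion order, which is why the restore progress
// percentages above are not monotonic.
use std::time::Duration;

use futures::{stream, StreamExt};

#[tokio::main]
async fn main() {
    let tasks = (0..8u64).map(|pos| async move {
        // simulate chunk fetches that finish out of order
        tokio::time::sleep(Duration::from_millis((8 - pos) * 10)).await;
        pos
    });

    let mut results = stream::iter(tasks).buffer_unordered(4);
    while let Some(pos) = results.next().await {
        // prints positions in completion order (3, 2, 1, 0, ...), not 0, 1, 2, ...
        println!("chunk {pos} done");
    }
}
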
This is with 8 futures:

progress 97% (read 52076478464 bytes, zeroes = 2% (1199570944 bytes), duration 57 sec)
progress 96% (read 52080672768 bytes, zeroes = 2% (1199570944 bytes), duration 57 sec)
progress 97% (read 52084867072 bytes, zeroes = 2% (1203765248 bytes), duration 57 sec)
progress 98% (read 52583989248 bytes, zeroes = 3% (1702887424 bytes), duration 57 sec)
progress 99% (read 53120860160 bytes, zeroes = 4% (2239758336 bytes), duration 57 sec)
progress 100% (read 53657731072 bytes, zeroes = 5% (2776629248 bytes), duration 57 sec)
progress 97% (read 53661925376 bytes, zeroes = 5% (2776629248 bytes), duration 57 sec)
restore image complete (bytes=53687091200, duration=57.29s, speed=893.66MB/s)

With 32 futures I got 984 MB/s, so that seems to be the limit. These CPUs do have checksum acceleration via SHA-NI.

What I have noticed is that in my solution the zero chunks get written very early, while the restore process is still fetching other chunks, which greatly speeds up images that are mostly zeroes. This shows on the Xeon system where I restore 3 drives:

progress 80% (read 10716446720 bytes, zeroes = 69% (7415529472 bytes), duration 10 sec)
progress 79% (read 10720641024 bytes, zeroes = 69% (7415529472 bytes), duration 10 sec)
progress 80% (read 10737418240 bytes, zeroes = 69% (7415529472 bytes), duration 10 sec)
restore image complete (bytes=10737418240, duration=10.18s, speed=1006.31MB/s)
--> this can be 5x faster with my solution, as the image contains largely zero blocks

progress 99% (read 106292051968 bytes, zeroes = 1% (1606418432 bytes), duration 314 sec)
progress 98% (read 106296246272 bytes, zeroes = 1% (1606418432 bytes), duration 314 sec)
progress 99% (read 106304634880 bytes, zeroes = 1% (1606418432 bytes), duration 314 sec)
progress 100% (read 107374182400 bytes, zeroes = 1% (2139095040 bytes), duration 316 sec)
restore image complete (bytes=107374182400, duration=316.35s, speed=323.70MB/s)
--> here I would get ~1 GBps with my solution

The other image is similarly slow. This system does not have SHA-NI checksum acceleration; maybe that explains the large difference in speed between the solutions? Is the sha256 calculation a bottleneck when fetching chunks?

With the futures solution, mpstat shows only 1-2 cores are busy on these 2-socket systems. The speed is very similar to the single-threaded solution that was there before, but I confirmed multiple times that I am using the new version (and the progress output shows that as well, which is why I included it).

Best regards
Adam

On Mon, 2025-07-07 at 10:14 +0200, Dominik Csapak wrote:
> by using async futures to load chunks and stream::buffer_unordered to
> buffer up to 16 of them, depending on write/load speed.
> 
> With this, we don't need to increase the number of threads in the
> runtime to trigger parallel reads and network traffic to us. This way
> it's only limited by CPU if decoding and/or decrypting is the
> bottleneck.
> 
> I measured restoring a VM backup with about 30GiB data and fast storage
> over a local network link (from PBS VM to the host).
> I let it do multiple runs, but the variance was not that big, so here's
> some representative log output with various MAX_BUFFERED_FUTURES
> values.
> 
> no MAX_BUFFERED_FUTURES: duration=43.18s, speed=758.82MB/s
> 4:                       duration=38.61s, speed=848.77MB/s
> 8:                       duration=33.79s, speed=969.85MB/s
> 16:                      duration=31.45s, speed=1042.06MB/s
> 
> note that increasing the number has diminishing returns after 10-12 on
> my system, but I guess it depends on the exact configuration. (For more
> than 16 I did not see any improvement, but this is probably just my
> setup).
> 
> I saw an increase in CPU usage (from ~75% to ~100% of one core), which
> is very likely the additional chunks being decoded.
> 
> In general I'd like to limit the buffering somehow, but I don't think
> there is a good automatic metric we can use, and giving the admin a knob
> whose actual ramifications are hard to explain is also not good, so I
> settled for a value that showed improvement but does not seem too high.
> 
> In any case, if the target and/or source storage is too slow, there will
> be back/forward pressure, and this change should only matter for storage
> systems where IO depth plays a role.
> 
> This patch is loosely based on the patch from Adam Kalisz[0], but removes
> the need to increase the blocking threads and uses the (actually always
> used) underlying async implementation for reading remote chunks.
> 
> 0: https://lore.proxmox.com/pve-devel/mailman.719.1751052794.395.pve-devel@lists.proxmox.com/
> 
> Signed-off-by: Dominik Csapak
> Based-on-patch-by: Adam Kalisz
> ---
> @Adam could you please test this patch too to see if you still see the
> improvements you saw in your version?
> 
> Also I sent it as RFC to discuss how we decide how many chunks we want
> to buffer/threads we want to allocate. This is a non-trivial topic, and
> as I wrote we don't have a real metric to decide upfront, but giving the
> admin knobs that are complicated is also not the best solution.
> 
> My instinct would be to simply increase to 16 (as I have done here) and
> maybe expose this number in /etc/vzdump.conf or /etc/pve/datacenter.cfg
> 
> Also I tried to make the writes multi-threaded too, but my
> QEMU knowledge is not very deep for this kind of thing, and I wanted to
> get this version out there soon. (Increasing the write threads can still
> be done afterwards if this change is enough for now)
> 
> I developed this patch on top of the 'stable-bookworm' branch, but it
> should apply cleanly on master as well.
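
A condensed, standalone sketch of the pattern the diff below implements: chunk reads become async futures that are prefetched concurrently via buffer_unordered, while the writes stay sequential on the consuming task. The fetch and write sides here are dummy stand-ins (assumed names), and only the tokio, futures and anyhow crates are assumed, not the real proxmox-backup-qemu API:

// Sketch only: reads are buffered/prefetched, writes stay on one task.
use anyhow::Error;
use futures::{stream, StreamExt};

const MAX_BUFFERED_FUTURES: usize = 16;
const CHUNK_SIZE: usize = 4 * 1024 * 1024;

// stand-in for an async remote chunk read (AsyncReadChunk::read_chunk in the patch)
async fn fetch_chunk(pos: usize) -> Result<Vec<u8>, Error> {
    tokio::time::sleep(std::time::Duration::from_millis(5)).await;
    Ok(vec![pos as u8; CHUNK_SIZE])
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    let chunk_count: usize = 64;

    // one future per chunk; nothing runs until the stream polls it
    let read_queue = (0..chunk_count).map(|pos| async move {
        let offset = (pos * CHUNK_SIZE) as u64;
        let data = fetch_chunk(pos).await?;
        Ok::<_, Error>((offset, data))
    });

    // prefetch up to MAX_BUFFERED_FUTURES chunks concurrently,
    // consume them as they complete
    let mut chunks = stream::iter(read_queue).buffer_unordered(MAX_BUFFERED_FUTURES);

    let mut bytes = 0usize;
    while let Some(res) = chunks.next().await {
        let (_offset, data) = res?;
        // the write side stays sequential, like write_data_callback() in the patch
        bytes += data.len();
    }

    println!("restored {bytes} bytes");
    Ok(())
}
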
> 
>  src/restore.rs | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 42 insertions(+), 15 deletions(-)
> 
> diff --git a/src/restore.rs b/src/restore.rs
> index 5a5a398..741b3e1 100644
> --- a/src/restore.rs
> +++ b/src/restore.rs
> @@ -2,6 +2,7 @@ use std::convert::TryInto;
>  use std::sync::{Arc, Mutex};
>  
>  use anyhow::{bail, format_err, Error};
> +use futures::StreamExt;
>  use once_cell::sync::OnceCell;
>  use tokio::runtime::Runtime;
>  
> @@ -13,7 +14,7 @@ use pbs_datastore::cached_chunk_reader::CachedChunkReader;
>  use pbs_datastore::data_blob::DataChunkBuilder;
>  use pbs_datastore::fixed_index::FixedIndexReader;
>  use pbs_datastore::index::IndexFile;
> -use pbs_datastore::read_chunk::ReadChunk;
> +use pbs_datastore::read_chunk::AsyncReadChunk;
>  use pbs_datastore::BackupManifest;
>  use pbs_key_config::load_and_decrypt_key;
>  use pbs_tools::crypt_config::CryptConfig;
> @@ -29,6 +30,9 @@ struct ImageAccessInfo {
>      archive_size: u64,
>  }
>  
> +// use at max 16 buffered futures to make loading of chunks more concurrent
> +const MAX_BUFFERED_FUTURES: usize = 16;
> +
>  pub(crate) struct RestoreTask {
>      setup: BackupSetup,
>      runtime: Arc<Runtime>,
> @@ -165,24 +169,47 @@ impl RestoreTask {
>  
>          let start_time = std::time::Instant::now();
>  
> -        for pos in 0..index.index_count() {
> +        let read_queue = (0..index.index_count()).map(|pos| {
>              let digest = index.index_digest(pos).unwrap();
>              let offset = (pos * index.chunk_size) as u64;
> -            if digest == &zero_chunk_digest {
> -                let res = write_zero_callback(offset, index.chunk_size as u64);
> -                if res < 0 {
> -                    bail!("write_zero_callback failed ({})", res);
> +            let chunk_reader = chunk_reader.clone();
> +            async move {
> +                let chunk = if digest == &zero_chunk_digest {
> +                    None
> +                } else {
> +                    let raw_data = AsyncReadChunk::read_chunk(&chunk_reader, digest).await?;
> +                    Some(raw_data)
> +                };
> +
> +                Ok::<_, Error>((chunk, pos, offset))
> +            }
> +        });
> +
> +        // this buffers futures and pre-fetches some chunks for us
> +        let mut stream = futures::stream::iter(read_queue).buffer_unordered(MAX_BUFFERED_FUTURES);
> +
> +        while let Some(res) = stream.next().await {
> +            let res = res?;
> +            let pos = match res {
> +                (None, pos, offset) => {
> +                    let res = write_zero_callback(offset, index.chunk_size as u64);
> +                    if res < 0 {
> +                        bail!("write_zero_callback failed ({})", res);
> +                    }
> +                    bytes += index.chunk_size;
> +                    zeroes += index.chunk_size;
> +                    pos
>                  }
> -                bytes += index.chunk_size;
> -                zeroes += index.chunk_size;
> -            } else {
> -                let raw_data = ReadChunk::read_chunk(&chunk_reader, digest)?;
> -                let res = write_data_callback(offset, &raw_data);
> -                if res < 0 {
> -                    bail!("write_data_callback failed ({})", res);
> +                (Some(raw_data), pos, offset) => {
> +                    let res = write_data_callback(offset, &raw_data);
> +                    if res < 0 {
> +                        bail!("write_data_callback failed ({})", res);
> +                    }
> +                    bytes += raw_data.len();
> +                    pos
>                  }
> -                bytes += raw_data.len();
> -            }
> +            };
> +
>              if verbose {
>                  let next_per = ((pos + 1) * 100) / index.index_count();
>                  if per != next_per {

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel