Subject: Re: [pbs-devel] [PATCH proxmox-backup] fix #2983: improve tcp performance
From: Dominik Csapak
To: pbs-devel@lists.proxmox.com
Date: Wed, 9 Sep 2020 15:10:44 +0200
Message-ID: <4cb44239-8f42-65cc-e620-226dda2497c9@proxmox.com>
In-Reply-To: <1599655578.53xhjxn63t.astroid@nora.none>
References: <20200909115410.27881-1-d.csapak@proxmox.com> <1599655578.53xhjxn63t.astroid@nora.none>

On 9/9/20 2:51 PM, Fabian Grünbichler wrote:
> On September 9, 2020 1:54 pm, Dominik Csapak wrote:
>> by leaving the buffer sizes at their defaults, we get much better tcp
>> performance for high-latency links
>>
>> throughput is still impacted by latency, but much less so when
>> leaving the sizes at default.
>> the disadvantage is slightly higher memory usage of the server
>> (details below)
>>
>> my local benchmarks (proxmox-backup-client benchmark):
>>
>> pbs client:
>> PVE host
>> Epyc 7351P (16 cores / 32 threads)
>> 64GB memory
>>
>> pbs server:
>> VM on that host
>> 1 socket, 4 cores (host CPU type)
>> 4GB memory
>>
>> average of 3 runs, rounded to MB/s:
>>
>>                    | no delay | 1ms     | 5ms     | 10ms    | 25ms    |
>> without this patch | 230MB/s  | 55MB/s  | 13MB/s  | 7MB/s   | 3MB/s   |
>> with this patch    | 293MB/s  | 293MB/s | 249MB/s | 241MB/s | 104MB/s |
>>
>> memory usage (resident memory) of proxmox-backup-proxy:
>>
>>                    | peak during benchmarks | after benchmarks |
>> without this patch | 144MB                  | 100MB            |
>> with this patch    | 145MB                  | 130MB            |
>>
>> Signed-off-by: Dominik Csapak
>
> Tested-by: Fabian Grünbichler
>
> AFAICT, the same applies to the client side despite the comment there:

arg, .. i forgot to commit this, i wanted to include that as well

> diff --git a/src/client/http_client.rs b/src/client/http_client.rs
> index dd457c12..ae3704d6 100644
> --- a/src/client/http_client.rs
> +++ b/src/client/http_client.rs
> @@ -292,7 +292,6 @@ impl HttpClient {
>
>          let mut httpc = hyper::client::HttpConnector::new();
>          httpc.set_nodelay(true); // important for h2 download performance!
> -        httpc.set_recv_buffer_size(Some(1024*1024)); //important for h2 download performance!
>          httpc.enforce_http(false); // we want https...
>
>          let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());
>
> this leaves restore speed unchanged without artificial delay, but improves
> it to the speed without delay when adding 25ms (in this test, the
> throughput is not limited by the network since it's an actual restore):
>
> no delay, without patch: ~50MB/s
> no delay, with patch: ~50MB/s
> 25ms delay, without patch: ~11MB/s
> 25ms delay, with patch: ~50MB/s
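(Side note on why the pinned 1 MiB buffers hurt so much: on Linux,
explicitly setting SO_RCVBUF/SO_SNDBUF disables the kernel's per-socket
buffer autotuning, so the TCP window can never grow past the pinned
value and throughput is capped at roughly window/RTT; 1 MiB per 25 ms
works out to only ~40 MiB/s, before h2 flow control takes its share.
A minimal sketch of the difference, using the socket2 crate purely for
illustration, not something proxmox-backup itself uses here:

    use socket2::{Domain, Socket, Type};

    fn main() -> std::io::Result<()> {
        // pinned: the kernel stops autotuning this socket's receive
        // buffer, so the advertised window never grows past (roughly)
        // this value, regardless of the path's bandwidth-delay product
        let pinned = Socket::new(Domain::IPV4, Type::STREAM, None)?;
        pinned.set_recv_buffer_size(1024 * 1024)?;
        println!("pinned:  {} bytes", pinned.recv_buffer_size()?);

        // default: starts small, but the kernel grows it on demand up
        // to net.ipv4.tcp_rmem's maximum, which is what makes the
        // high-latency numbers above possible
        let autotuned = Socket::new(Domain::IPV4, Type::STREAM, None)?;
        println!("default: {} bytes", autotuned.recv_buffer_size()?);
        Ok(())
    }

On a stock kernel the second socket reports a much smaller initial
buffer than the pinned one, but only the second is allowed to grow.)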
> do you see the same effect on your system (proxmox-backup-client restore
> .. | pv -trab > /dev/null)? I haven't set up a proper test bed to
> minimize effects of caching (yet), but I did the following sequence:

i get different results, but the same trend
(large vm backup with possibly a big amount of zero chunks, so... fast)

avg was:

no delay, without client patch: ~1.5GiB/s
no delay, with client patch: ~1.5GiB/s
25ms delay, without patch: 30MiB/s
25ms delay, with patch: ~950MiB/s

> build, restart
> test restore without delay for 1 minute and watch throughput
> test restore with delay for 1 minute and watch throughput
> test restore without delay for 1 minute and watch throughput
> test restore with delay for 1 minute and watch throughput
> patch, rinse, repeat

i left the server the same (with the server-side patch applied) and only
changed the client, so any caching on the server is active for all runs
but the first (which i did not count)

>> ---
>>  src/bin/proxmox-backup-proxy.rs | 2 --
>>  1 file changed, 2 deletions(-)
>>
>> diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
>> index 75065e6f..5844e632 100644
>> --- a/src/bin/proxmox-backup-proxy.rs
>> +++ b/src/bin/proxmox-backup-proxy.rs
>> @@ -87,8 +87,6 @@ async fn run() -> Result<(), Error> {
>>              let acceptor = Arc::clone(&acceptor);
>>              async move {
>>                  sock.set_nodelay(true).unwrap();
>> -                sock.set_send_buffer_size(1024*1024).unwrap();
>> -                sock.set_recv_buffer_size(1024*1024).unwrap();
>>                  Ok(tokio_openssl::accept(&acceptor, sock)
>>                      .await
>>                      .ok() // handshake errors aren't be fatal, so return None to filter
>> --
>> 2.20.1
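For completeness: after both patches the accept path boils down to "set
TCP_NODELAY, leave the buffer sizes alone". A minimal standalone sketch
of that shape, written against the current tokio 1.x API as an
assumption (the tree above still uses the older tokio + tokio-openssl
stack; 8007 is the proxy's usual port):

    use tokio::net::TcpListener;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:8007").await?;
        loop {
            let (sock, peer) = listener.accept().await?;
            // nodelay still matters for h2 performance; the send and
            // receive buffer sizes are deliberately left on their
            // kernel defaults so autotuning can grow the TCP window
            sock.set_nodelay(true)?;
            println!("accepted connection from {}", peer);
            // ... TLS handshake and h2 serving would follow here ...
        }
    }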