From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox Backup Server development discussion
	<pbs-devel@lists.proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup] fix #2983: improve tcp performance
Date: Wed, 09 Sep 2020 14:51:10 +0200
Message-ID: <1599655578.53xhjxn63t.astroid@nora.none>
In-Reply-To: <20200909115410.27881-1-d.csapak@proxmox.com>

On September 9, 2020 1:54 pm, Dominik Csapak wrote:
> by leaving the buffer sizes at their defaults, we get much better tcp
> performance on high-latency links
> 
> throughput is still impacted by latency, but much less so when
> the sizes are left at their defaults.
> the disadvantage is slightly higher memory usage on the server
> (details below)
> 
> my local benchmarks (proxmox-backup-client benchmark):
> 
> pbs client:
> PVE Host
> Epyc 7351P (16core/32thread)
> 64GB Memory
> 
> pbs server:
> VM on Host
> 1 Socket, 4 Cores (Host CPU type)
> 4GB Memory
> 
> average of 3 runs, rounded to MB/s
>                     | no delay |     1ms |     5ms |     10ms |    25ms |
> without this patch  |  230MB/s |  55MB/s |  13MB/s |    7MB/s |   3MB/s |
> with this patch     |  293MB/s | 293MB/s | 249MB/s |  241MB/s | 104MB/s |
> 
> memory usage (resident memory) of proxmox-backup-proxy:
> 
>                     | peak during benchmarks | after benchmarks |
> without this patch  |                  144MB |            100MB |
> with this patch     |                  145MB |            130MB |
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

Tested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>

AFAICT, the same applies to the client side despite the comment there:


diff --git a/src/client/http_client.rs b/src/client/http_client.rs
index dd457c12..ae3704d6 100644
--- a/src/client/http_client.rs
+++ b/src/client/http_client.rs
@@ -292,7 +292,6 @@ impl HttpClient {
 
         let mut httpc = hyper::client::HttpConnector::new();
         httpc.set_nodelay(true); // important for h2 download performance!
-        httpc.set_recv_buffer_size(Some(1024*1024)); //important for h2 download performance!
         httpc.enforce_http(false); // we want https...
 
         let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());

leaves restore speed unchanged without artificial delay, but restores 
the no-delay speed when adding 25ms (in this test, throughput is not 
limited by the network since it's an actual restore):

no delay, without patch: ~50MB/s
no delay, with patch: ~50MB/s
25ms delay, without patch: ~11MB/s
25ms delay, with patch: ~50MB/s
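
the numbers are consistent with simple window math (my reasoning, not 
something I measured): explicitly setting SO_SNDBUF/SO_RCVBUF disables 
the kernel's TCP buffer autotuning, so the fixed 1 MiB buffer caps the 
window, and throughput is bounded by roughly window / RTT:

    1 MiB / 25 ms ≈ 42 MB/s

as a theoretical ceiling - in practice less, since Linux reserves part 
of the buffer for bookkeeping. with the defaults, autotuning can grow 
the buffers up to the net.ipv4.tcp_rmem / net.ipv4.tcp_wmem maxima, so 
the window keeps scaling with the RTT.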

do you see the same effect on your system (proxmox-backup-client restore 
.. | pv -trab > /dev/null)? I haven't set up a proper test bed to 
minimize caching effects (yet), but I did the following sequence 
(artificial delay injected as sketched below):

build, restart
test restore without delay for 1 minute and watch throughput
test restore with delay for 1 minute and watch throughput
test restore without delay for 1 minute and watch throughput
test restore with delay for 1 minute and watch throughput
patch, rinse, repeat
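
(for reference, artificial delay like the above can be injected with 
netem - the device name below is just an example, adapt it to your 
setup:

    # add 25ms of delay on the interface carrying the backup traffic
    tc qdisc add dev eth0 root netem delay 25ms
    # remove it again after the test
    tc qdisc del dev eth0 root

whether the buffers are actually autotuned can be double-checked with 
'ss -tmi' on an established connection: with the explicit 1 MiB buffers 
the rb/tb values in the skmem output stay pinned, with the defaults 
they grow as the transfer ramps up.)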

> ---
>  src/bin/proxmox-backup-proxy.rs | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
> index 75065e6f..5844e632 100644
> --- a/src/bin/proxmox-backup-proxy.rs
> +++ b/src/bin/proxmox-backup-proxy.rs
> @@ -87,8 +87,6 @@ async fn run() -> Result<(), Error> {
>                      let acceptor = Arc::clone(&acceptor);
>                      async move {
>                          sock.set_nodelay(true).unwrap();
> -                        sock.set_send_buffer_size(1024*1024).unwrap();
> -                        sock.set_recv_buffer_size(1024*1024).unwrap();
>                          Ok(tokio_openssl::accept(&acceptor, sock)
>                              .await
>                              .ok() // handshake errors aren't fatal, so return None to filter
> -- 
> 2.20.1