public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup] fix #2983: improve tcp performance
@ 2020-09-09 11:54 Dominik Csapak
  2020-09-09 12:51 ` Fabian Grünbichler
  2020-09-10  5:17 ` [pbs-devel] applied: " Dietmar Maurer
  0 siblings, 2 replies; 4+ messages in thread
From: Dominik Csapak @ 2020-09-09 11:54 UTC (permalink / raw)
  To: pbs-devel

by leaving the buffer sizes at default, we get much better tcp performance
for high-latency links

throughput is still impacted by latency, but much less so when
leaving the sizes at default.
the disadvantage is slightly higher memory usage of the server
(details below)
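the effect is consistent with the TCP bandwidth-delay product: with a fixed
window, throughput is bounded by roughly window / RTT, while the kernel's
autotuning can grow the window far beyond a pinned 1MiB. a rough
back-of-the-envelope sketch (illustration only, not part of the patch; real
throughput also depends on slow start, loss and protocol overhead):

```rust
// Upper bound on TCP throughput for a fixed window size:
// window / RTT (the bandwidth-delay product).
fn max_throughput_mb_s(window_bytes: f64, rtt_ms: f64) -> f64 {
    window_bytes / (rtt_ms / 1000.0) / 1_000_000.0
}

fn main() {
    let window = 1024.0 * 1024.0; // the fixed 1MiB buffer removed by this patch
    for &rtt in &[1.0, 5.0, 10.0, 25.0] {
        println!(
            "{:>4} ms RTT -> at most {:>7.1} MB/s with a 1MiB window",
            rtt,
            max_throughput_mb_s(window, rtt)
        );
    }
}
```

at 25ms this bound (~42MB/s) is already below the ~104MB/s measured with
autotuning; that the un-patched numbers fall well below even this bound
suggests the pinned buffers interfere with more than just the window size.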

my local benchmarks (proxmox-backup-client benchmark):

pbs client:
PVE Host
Epyc 7351P (16core/32thread)
64GB Memory

pbs server:
VM on Host
1 Socket, 4 Cores (Host CPU type)
4GB Memory

average of 3 runs, rounded to MB/s
                    | no delay |     1ms |     5ms |     10ms |    25ms |
without this patch  |  230MB/s |  55MB/s |  13MB/s |    7MB/s |   3MB/s |
with this patch     |  293MB/s | 293MB/s | 249MB/s |  241MB/s | 104MB/s |

memory usage (resident memory) of proxmox-backup-proxy:

                    | peak during benchmarks | after benchmarks |
without this patch  |                  144MB |            100MB |
with this patch     |                  145MB |            130MB |

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/bin/proxmox-backup-proxy.rs | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 75065e6f..5844e632 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -87,8 +87,6 @@ async fn run() -> Result<(), Error> {
                     let acceptor = Arc::clone(&acceptor);
                     async move {
                         sock.set_nodelay(true).unwrap();
-                        sock.set_send_buffer_size(1024*1024).unwrap();
-                        sock.set_recv_buffer_size(1024*1024).unwrap();
                         Ok(tokio_openssl::accept(&acceptor, sock)
                             .await
                             .ok() // handshake errors aren't be fatal, so return None to filter
-- 
2.20.1





^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup] fix #2983: improve tcp performance
  2020-09-09 11:54 [pbs-devel] [PATCH proxmox-backup] fix #2983: improve tcp performance Dominik Csapak
@ 2020-09-09 12:51 ` Fabian Grünbichler
  2020-09-09 13:10   ` Dominik Csapak
  2020-09-10  5:17 ` [pbs-devel] applied: " Dietmar Maurer
  1 sibling, 1 reply; 4+ messages in thread
From: Fabian Grünbichler @ 2020-09-09 12:51 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion

On September 9, 2020 1:54 pm, Dominik Csapak wrote:
> by leaving the buffer sizes at default, we get much better tcp performance
> for high-latency links
> 
> throughput is still impacted by latency, but much less so when
> leaving the sizes at default.
> the disadvantage is slightly higher memory usage of the server
> (details below)
> 
> my local benchmarks (proxmox-backup-client benchmark):
> 
> pbs client:
> PVE Host
> Epyc 7351P (16core/32thread)
> 64GB Memory
> 
> pbs server:
> VM on Host
> 1 Socket, 4 Cores (Host CPU type)
> 4GB Memory
> 
> average of 3 runs, rounded to MB/s
>                     | no delay |     1ms |     5ms |     10ms |    25ms |
> without this patch  |  230MB/s |  55MB/s |  13MB/s |    7MB/s |   3MB/s |
> with this patch     |  293MB/s | 293MB/s | 249MB/s |  241MB/s | 104MB/s |
> 
> memory usage (resident memory) of proxmox-backup-proxy:
> 
>                     | peak during benchmarks | after benchmarks |
> without this patch  |                  144MB |            100MB |
> with this patch     |                  145MB |            130MB |
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>

Tested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>

AFAICT, the same applies to the client side despite the comment there:


diff --git a/src/client/http_client.rs b/src/client/http_client.rs
index dd457c12..ae3704d6 100644
--- a/src/client/http_client.rs
+++ b/src/client/http_client.rs
@@ -292,7 +292,6 @@ impl HttpClient {
 
         let mut httpc = hyper::client::HttpConnector::new();
         httpc.set_nodelay(true); // important for h2 download performance!
-        httpc.set_recv_buffer_size(Some(1024*1024)); //important for h2 download performance!
         httpc.enforce_http(false); // we want https...
 
         let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());

leaves restore speed unchanged without artificial delay, but improves it
to the speed without delay when adding 25ms (in this test, the 
throughput is not limited by the network since it's an actual restore):

no delay, without patch: ~50MB/s
no delay, with patch: ~50MB/s
25ms delay, without patch: ~11MB/s
25ms delay, with patch: ~50MB/s

do you see the same effect on your system (proxmox-backup-client restore 
.. | pv -trab > /dev/null)? I haven't set up a proper test bed to
minimize effects of caching (yet), but I did the following sequence:

build, restart
test restore without delay for 1 minute and watch throughput
test restore with delay for 1 minute and watch throughput
test restore without delay for 1 minute and watch throughput
test restore with delay for 1 minute and watch throughput
patch, rinse, repeat
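one way to add such an artificial delay is a netem qdisc (eth0 here is a
placeholder for whichever interface carries the test traffic; needs root):

```shell
# add 25ms of delay on the link used for the restore test
tc qdisc add dev eth0 root netem delay 25ms

# run the restore benchmark, then remove the delay again
tc qdisc del dev eth0 root netem
```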

> ---
>  src/bin/proxmox-backup-proxy.rs | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
> index 75065e6f..5844e632 100644
> --- a/src/bin/proxmox-backup-proxy.rs
> +++ b/src/bin/proxmox-backup-proxy.rs
> @@ -87,8 +87,6 @@ async fn run() -> Result<(), Error> {
>                      let acceptor = Arc::clone(&acceptor);
>                      async move {
>                          sock.set_nodelay(true).unwrap();
> -                        sock.set_send_buffer_size(1024*1024).unwrap();
> -                        sock.set_recv_buffer_size(1024*1024).unwrap();
>                          Ok(tokio_openssl::accept(&acceptor, sock)
>                              .await
>                              .ok() // handshake errors aren't be fatal, so return None to filter
> -- 
> 2.20.1
> 
> 
> 
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
> 
> 
> 





* Re: [pbs-devel] [PATCH proxmox-backup] fix #2983: improve tcp performance
  2020-09-09 12:51 ` Fabian Grünbichler
@ 2020-09-09 13:10   ` Dominik Csapak
  0 siblings, 0 replies; 4+ messages in thread
From: Dominik Csapak @ 2020-09-09 13:10 UTC (permalink / raw)
  To: pbs-devel

On 9/9/20 2:51 PM, Fabian Grünbichler wrote:
> On September 9, 2020 1:54 pm, Dominik Csapak wrote:
>> by leaving the buffer sizes at default, we get much better tcp performance
>> for high-latency links
>>
>> throughput is still impacted by latency, but much less so when
>> leaving the sizes at default.
>> the disadvantage is slightly higher memory usage of the server
>> (details below)
>>
>> my local benchmarks (proxmox-backup-client benchmark):
>>
>> pbs client:
>> PVE Host
>> Epyc 7351P (16core/32thread)
>> 64GB Memory
>>
>> pbs server:
>> VM on Host
>> 1 Socket, 4 Cores (Host CPU type)
>> 4GB Memory
>>
>> average of 3 runs, rounded to MB/s
>>                     | no delay |     1ms |     5ms |     10ms |    25ms |
>> without this patch  |  230MB/s |  55MB/s |  13MB/s |    7MB/s |   3MB/s |
>> with this patch     |  293MB/s | 293MB/s | 249MB/s |  241MB/s | 104MB/s |
>>
>> memory usage (resident memory) of proxmox-backup-proxy:
>>
>>                     | peak during benchmarks | after benchmarks |
>> without this patch  |                  144MB |            100MB |
>> with this patch     |                  145MB |            130MB |
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> 
> Tested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> 
> AFAICT, the same applies to the client side despite the comment there:

arg, .. i forgot to commit this, i wanted to include that as well

> 
> 
> diff --git a/src/client/http_client.rs b/src/client/http_client.rs
> index dd457c12..ae3704d6 100644
> --- a/src/client/http_client.rs
> +++ b/src/client/http_client.rs
> @@ -292,7 +292,6 @@ impl HttpClient {
>   
>           let mut httpc = hyper::client::HttpConnector::new();
>           httpc.set_nodelay(true); // important for h2 download performance!
> -        httpc.set_recv_buffer_size(Some(1024*1024)); //important for h2 download performance!
>           httpc.enforce_http(false); // we want https...
>   
>           let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());
> 
> leaves restore speed unchanged without artificial delay, but improves it
> to the speed without delay when adding 25ms (in this test, the
> throughput is not limited by the network since it's an actual restore):
> 
> no delay, without patch: ~50MB/s
> no delay, with patch: ~50MB/s
> 25ms delay, without patch: ~11MB/s
> 25ms delay, with patch: ~50MB/s
> 
> do you see the same effect on your system (proxmox-backup-client restore
> .. | pv -trab > /dev/null)? I haven't set up a proper test bed to
> minimize effects of caching (yet), but I did the following sequence:

i get different results, but the same trend
(large vm backup with a possibly big amount of zero chunks, so... fast)

avg was:

no delay, without client patch: ~1.5GiB/s
no delay, with client patch: ~1.5GiB/s
25ms delay, without patch: 30MiB/s
25ms delay, with patch: ~950MiB/s

> 
> build, restart
> test restore without delay for 1 minute and watch throughput
> test restore with delay for 1 minute and watch throughput
> test restore without delay for 1 minute and watch throughput
> test restore with delay for 1 minute and watch throughput
> patch, rinse, repeat

i left the server the same (with the server side patch applied)
and only changed the client, so any caching on the server is
active for all runs but the first (which i did not count)

> 
>> ---
>>   src/bin/proxmox-backup-proxy.rs | 2 --
>>   1 file changed, 2 deletions(-)
>>
>> diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
>> index 75065e6f..5844e632 100644
>> --- a/src/bin/proxmox-backup-proxy.rs
>> +++ b/src/bin/proxmox-backup-proxy.rs
>> @@ -87,8 +87,6 @@ async fn run() -> Result<(), Error> {
>>                       let acceptor = Arc::clone(&acceptor);
>>                       async move {
>>                           sock.set_nodelay(true).unwrap();
>> -                        sock.set_send_buffer_size(1024*1024).unwrap();
>> -                        sock.set_recv_buffer_size(1024*1024).unwrap();
>>                           Ok(tokio_openssl::accept(&acceptor, sock)
>>                               .await
>>                               .ok() // handshake errors aren't be fatal, so return None to filter
>> -- 
>> 2.20.1
>>
>>
>>






* [pbs-devel] applied: [PATCH proxmox-backup] fix #2983: improve tcp performance
  2020-09-09 11:54 [pbs-devel] [PATCH proxmox-backup] fix #2983: improve tcp performance Dominik Csapak
  2020-09-09 12:51 ` Fabian Grünbichler
@ 2020-09-10  5:17 ` Dietmar Maurer
  1 sibling, 0 replies; 4+ messages in thread
From: Dietmar Maurer @ 2020-09-10  5:17 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

applied





end of thread, other threads:[~2020-09-10  5:18 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-09 11:54 [pbs-devel] [PATCH proxmox-backup] fix #2983: improve tcp performance Dominik Csapak
2020-09-09 12:51 ` Fabian Grünbichler
2020-09-09 13:10   ` Dominik Csapak
2020-09-10  5:17 ` [pbs-devel] applied: " Dietmar Maurer

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH