From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	Friedrich Weber <f.weber@proxmox.com>
Subject: Re: [pve-devel] [RFC http-server] fix #5391: proxy request: avoid HTTP 599 Too many redirections
Date: Thu, 20 Jun 2024 09:45:18 +0200
Message-ID: <8392cb10-bec5-402d-bab1-6a1351049370@proxmox.com>
In-Reply-To: <20240617160303.297352-1-f.weber@proxmox.com>

Am 17/06/2024 um 18:03 schrieb Friedrich Weber:
> The API server proxies HTTP requests in two cases:
> 
> - between cluster nodes (pveproxy->pveproxy)
> - between daemons on one node for protected API endpoints
>   (pveproxy->pvedaemon)
> 
> The API server uses AnyEvent::HTTP for proxying, with unfortunate
> settings for connection reuse (details below). With these settings,
> long-running synchronous API requests on the proxy destination's side
> can cause unrelated proxied requests to fail with a misleading HTTP
> 599 "Too many redirections" error response. In order to avoid these
> errors, improve the connection reuse settings.
> 
> In more detail:
> 
> Per default, AnyEvent::HTTP reuses previously-opened connections for
> requests with idempotent HTTP verbs, e.g. GET/PUT/DELETE [1]. However,
> when trying to reuse a previously-opened connection, it can happen
> that the destination unexpectedly closes the connection. In case of
> idempotent requests, AnyEvent::HTTP's http_request will retry by
> recursively calling itself. Since the API server disallows recursion
> by passing `recurse => 0` to http_request initially, the recursive
> call fails with "HTTP 599 Too many redirections".
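
For context, a minimal standalone sketch of that error path, assuming
AnyEvent::HTTP's documented behavior (the URL is a placeholder, this is not
the actual proxy code):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use AnyEvent;
  use AnyEvent::HTTP;

  # placeholder target; pvedaemon listens on 127.0.0.1:85
  my $url = 'http://127.0.0.1:85/api2/json/version';

  my $cv = AE::cv;
  http_request(
      GET => $url,
      persistent => 1,  # try to reuse a previously opened connection
      recurse    => 0,  # but forbid any internal retry/redirect
      sub {
          my ($body, $hdr) = @_;
          # If the cached connection was closed by the peer, AnyEvent::HTTP
          # wants to retry by calling http_request recursively; with
          # recurse => 0 that retry is refused and the callback sees the
          # pseudo status 599 "Too many redirections" instead.
          print "$hdr->{Status} $hdr->{Reason}\n";
          $cv->send;
      },
  );
  $cv->recv;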
> 
> This can happen both for pveproxy->pveproxy and pveproxy->pvedaemon,
> as connection reuse is enabled in both cases. Connection reuse being
> enabled in the pveproxy->pvedaemon case was likely not intended: a
> comment mentions that "keep alive for localhost is not worth it", but
> the code only sets `keepalive => 0` and not `persistent => 0`. That
> setting merely disables the HTTP/1.0-style keep-alive handshake while
> still allowing HTTP/1.1 persistent connections, so connections can
> still be reused.
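
To make the difference between the two knobs concrete, a small sketch
assuming AnyEvent::HTTP's documented semantics (the URL is a placeholder):

  use strict;
  use warnings;
  use AnyEvent;
  use AnyEvent::HTTP;

  my $url  = 'http://127.0.0.1:85/api2/json/version';   # placeholder
  my $cv   = AE::cv;
  my $done = sub { my ($body, $hdr) = @_; $cv->end };

  $cv->begin;
  # what the current code does: `keepalive => 0` only skips the
  # HTTP/1.0-style keep-alive handshake; HTTP/1.1 persistent connections,
  # and thus reuse for idempotent methods, stay enabled
  http_request(GET => $url, keepalive => 0, $done);

  $cv->begin;
  # what actually disables reuse: `persistent => 0` keeps the connection
  # from being cached and reused at all
  http_request(GET => $url, persistent => 0, $done);

  $cv->recv;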
> 
> The destination unexpectedly closing the connection can be due to
> unfortunate timing, but it becomes much more likely in case of
> long-running synchronous requests. An example sequence:
> 
> 1) A pveproxy worker P1 handles a protected request R1 and proxies it
>    to a pvedaemon worker D1, opening a pveproxy worker->pvedaemon
>    worker connection C1. The pvedaemon worker D1 is relatively fast
>    (<1s) in handling R1. P1 saves connection C1 for later reuse.
> 2) A different pveproxy worker P2 handles a protected request R2 and
>    proxies it to the same pvedaemon worker D1, opening a new pveproxy
>    worker->pvedaemon connection C2. Handling this request takes a long
>    time (>5s), for example because it queries a slow storage. While
>    the request is being handled, the pvedaemon worker D1 cannot do
>    anything else.
> 3) Since pvedaemon worker D1 sets a timeout of 5s when accepting
>    connections and it did not see anything on connection C1 for >5s
>    (because it was busy handling R2), it closes the connection C1.
> 4) pveproxy worker P1 handles a protected idempotent request R3. Since
>    the request is idempotent, it tries to reuse connection C1. But C1
>    was just closed by D1, so P1 fails request R3 with HTTP 599 as
>    described above.
> 
> In addition, AnyEvent::HTTP's default of reusing connections for all
> idempotent HTTP verbs is problematic in our case, as not all PUT
> requests of the PVE API are actually idempotent, e.g. /sendkey [2].
> 
> To fix the issues above, improve the connection reuse settings:
> 
> - Actually disable connection reuse for pveproxy->pvedaemon requests,
>   by passing `persistent => 0`.
> - For pveproxy->pveproxy requests, enable connection reuse for GET
>   requests only, as these should actually be idempotent.
> - If connection reuse is enabled, allow one retry by passing `recurse
>   => 1`, to avoid the HTTP 599 errors.
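
Condensed into a sketch (illustrative only, not the actual patch hunk; the
variable and function names are made up):

  use strict;
  use warnings;

  # $host_is_localhost and $method stand in for whatever the proxy code
  # derives from the incoming request
  sub proxy_reuse_settings {
      my ($host_is_localhost, $method) = @_;

      # reuse connections only for node-to-node (pveproxy->pveproxy) GET
      # requests, never for pveproxy->pvedaemon (localhost) requests
      my $persistent = (!$host_is_localhost && $method eq 'GET') ? 1 : 0;

      return (
          persistent => $persistent,
          # when reuse is allowed, permit one internal retry so that a
          # stale cached connection does not surface as HTTP 599
          recurse    => $persistent ? 1 : 0,
      );
  }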
> 
> [1] https://metacpan.org/pod/AnyEvent::HTTP#persistent-=%3E-$boolean
> [2] https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/qemu/{vmid}/sendkey
> 
> Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
> ---
>  src/PVE/APIServer/AnyEvent.pm | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
> 

Nice work and write-up!

Acked-by: Thomas Lamprecht <t.lamprecht@proxmox.com>

But yeah, seeing some before/after benchmarking for this patch would still be
great; that's also the main reason I'm not applying it right away.
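
Something along these lines could serve as a starting point for a crude
check (untested sketch; node name, endpoint and API token are placeholders):

  #!/usr/bin/perl
  # fire concurrent GETs through pveproxy and tally the status codes, so
  # any 599s show up directly
  use strict;
  use warnings;
  use AnyEvent;
  use AnyEvent::HTTP;

  $AnyEvent::HTTP::MAX_PER_HOST = 20;  # allow more parallel connections (default 4)

  my $url   = 'https://localhost:8006/api2/json/nodes/OTHERNODE/qemu';
  my $token = 'PVEAPIToken=root@pam!bench=00000000-0000-0000-0000-000000000000';
  my $n     = 500;

  my %status;
  my $cv = AE::cv;
  my $t0 = AE::time;

  for (1 .. $n) {
      $cv->begin;
      http_request(
          GET => $url,
          headers => { Authorization => $token },
          tls_ctx => 'low',    # don't verify the self-signed certificate
          sub {
              my ($body, $hdr) = @_;
              $status{ $hdr->{Status} }++;
              $cv->end;
          },
      );
  }

  $cv->recv;
  printf "%d requests in %.2fs\n", $n, AE::time - $t0;
  printf "  %s: %d\n", $_, $status{$_} for sort keys %status;

Running that with and without the patch, while a long-running request keeps
a pvedaemon or pveproxy worker busy, should make any 599s and the timing
difference visible.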


