From: Kefu Chai <k.chai@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH v3 http-server 0/1] fix pveproxy OOM in websocket and spice proxy handlers
Date: Fri, 24 Apr 2026 20:11:39 +0800
Message-ID: <20260424121140.3687865-1-k.chai@proxmox.com>

See the v2 cover letter [1] for the problem description and the approach.

Changes since v2:

* Extract handle_proxy_eof(): the four on_eof sites were copy-pasted
  versions of each other, differing only in $reader and the peer handle.

* Fix a busy loop in the on_eof drain loop: v2's unguarded
  `while length($hdl->{rbuf})` spins when the reader's
  `return if !$peer` short-circuits without consuming rbuf. This is
  reachable on a ws client close that sets block_disconnect on the
  backend handle, so a final reply from the backend pins the worker at
  100% CPU instead of completing teardown. The new loop bails out on
  peer-gone or zero progress (see the sketch after this list).

* Clear on_drain in apply_read_backpressure() after it fires instead of
  leaving the wrapper installed when prev_on_drain is undef. No
  functional impact (the re-set of on_read is idempotent), but it stops
  pinning a reader reference for the rest of the connection.
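
For reference, a rough sketch of the shape of the extracted helper with
the guarded drain loop. Names and calling conventions here are
illustrative, not the literal patch hunk: $hdl and $peer stand for the
two AnyEvent::Handle objects, $reader for the per-proxy read callback.

    use strict;
    use warnings;

    # sketch only -- the real handle_proxy_eof() in the patch may differ
    sub handle_proxy_eof {
        my ($hdl, $peer, $reader) = @_;

        # drain whatever is still buffered at EOF, but bail out on
        # peer-gone or zero progress so the loop can never spin
        while (defined $hdl->{rbuf} && length $hdl->{rbuf}) {
            last if !$peer;                     # peer gone, nothing to forward to
            my $before = length $hdl->{rbuf};
            $reader->($hdl);                    # may consume rbuf or return early
            last if length($hdl->{rbuf} // '') >= $before;  # no progress, stop
        }

        # normal teardown of both handles follows here
    }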

Both of the above are verified with the same synthetic AnyEvent setup
used for v1/v2: reverting just the busy-loop guard reproduces a spin
that trips a 2 s alarm, and reverting just the on_drain clear leaves
the wrapper installed after the drain.
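
The 2 s alarm is simply a timer racing the teardown; a minimal
stand-alone sketch of that guard follows (the synthetic proxy setup
itself is not reproduced here, the short timer below only stands in
for the teardown under test):

    use strict;
    use warnings;
    use AnyEvent;

    my $done = AnyEvent->condvar;

    # fail the run if teardown has not completed within 2 seconds
    my $guard = AnyEvent->timer(after => 2, cb => sub {
        $done->croak('teardown still running after 2s (busy loop?)');
    });

    # stand-in for the real teardown under test
    my $work; $work = AnyEvent->timer(after => 0.1, cb => sub {
        undef $work;
        $done->send('teardown completed');
    });

    print $done->recv, "\n";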

On the peer-gone branch the drain loop no-ops and rbuf is released on
handle teardown, matching the pre-v2 behavior (before this series added
on_eof draining, any rbuf left at on_eof was always discarded). I
audited the users:

* PDM migration's control tunnel (mtunnel) completes each command
  synchronously via write_tunnel, so its teardown carries no protocol
  data; disk data goes over separate NBD-over-ws tunnels set up by
  forward_unix_socket, and a connection drop there surfaces as a clean
  migration abort on the source side rather than silent corruption.
* NoVNC and SPICE display (plus termproxy shell output) lose at most a
  final frame or line, which is cosmetic.
* SPICE USB passthrough is the one case with potential real data loss,
  but that requires an abrupt ws client close mid-transfer, which is rare.

[1] https://lore.proxmox.com/pve-devel/20260413125650.2569621-1-k.chai@proxmox.com/

Kefu Chai (1):
  fix #7483: apiserver: add backpressure to proxy handlers

 src/PVE/APIServer/AnyEvent.pm | 178 +++++++++++++++++++++++++---------
 1 file changed, 133 insertions(+), 45 deletions(-)

-- 
2.47.3