public inbox for pve-devel@lists.proxmox.com
From: Dominik Csapak <d.csapak@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH http-server] make daemons compatible with installed AnyEvent::AIO
Date: Mon,  3 Jul 2023 08:39:55 +0200
Message-ID: <20230703063955.575427-1-d.csapak@proxmox.com>

when installing AnyEvent::AIO (via the package libanyevent-aio-perl), the
worker forks of our daemons that use AnyEvent would consume 100% CPU,
busy-looping in epoll_wait on an fd that nothing ever read from. It was
not immediately clear which part of the code had set up that fd.

Reading the documentation of the related Perl modules made it clear
that the issue lies with AnyEvent::IO. By default it uses AnyEvent::AIO
(if installed), which in turn uses IO::AIO, whose documentation
explicitly states that it uses pthreads and is not really
fork-compatible (and we rely heavily on forking).

It seems that IO::AIO sets up some fds with epoll in its END handler
(or earlier, but it sends data to them in the END handler), so that
using 'exit' instead of 'POSIX::_exit' (which we do in PVE::Daemon)
produces the observed behavior.
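To illustrate the mechanism (a minimal, self-contained sketch, not code
from our tree): 'exit' runs the END blocks of all loaded modules in the
forked child, while 'POSIX::_exit' terminates immediately and skips them:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX ();

    # any module loaded before the fork can register cleanup like this,
    # similar to what IO::AIO apparently does
    END { print "END block runs in pid $$\n"; }

    my $pid = fork() // die "fork failed: $!";
    if ($pid == 0) {
        exit(0);           # child: runs the inherited END blocks
        # POSIX::_exit(0); # would terminate immediately, END blocks skipped
    }
    waitpid($pid, 0);      # the parent runs the END block on its own exit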

Interestingly, we did not use any of AnyEvent::IO's functionality, so
we can safely remove it. Even if we had used it in the past, without
AnyEvent::AIO the IO would not have been async anyway (the pure-Perl
implementation does not do async IO). My best guess is that we wanted
to use it, noticed that we could not, and forgot to remove the use
statement. (This is indicated by a comment saying that aio_load is not
async unless IO::AIO is used.)
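For reference, using it would have looked roughly like this (a
hypothetical sketch, not something in our code); without AnyEvent::AIO
the pure-Perl backend simply performs a blocking read before invoking
the callback:

    use AnyEvent;
    use AnyEvent::IO;

    my $cv = AnyEvent->condvar;
    # aio_load reads a whole file and hands the content to the callback;
    # it is only actually asynchronous when AnyEvent::AIO/IO::AIO back it
    aio_load('/etc/hostname', sub {
        my ($data) = @_;
        $cv->send($data // die "reading file failed\n");
    });
    print $cv->recv;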

This only surfaces now because Bookworm is the first Debian release to
package the library.

If we ever wanted to use AnyEvent::AIO, there are probably two other
ways to fix this:
* replace our 'exit()' calls with 'POSIX::_exit()', which seems to fix
  it, but its other side effects are currently unknown
* use 'IO::AIO::reinit()' after forking (see the sketch below), which
  also seems to fix it, but perldoc says it 'is not an operation
  supported by any standards, but happens to work on GNU/LINUX and some
  newer BSD systems'

With this fix, one can safely install 'libanyevent-aio-perl' and
'libperl-languageserver-perl' (the only user of it AFAICS) on a PVE or
PMG system.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
maybe we should leave the use statement in and only comment it out,
with a note not to use this?

 src/PVE/APIServer/AnyEvent.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/PVE/APIServer/AnyEvent.pm b/src/PVE/APIServer/AnyEvent.pm
index 1fd7a74..8fb1a7a 100644
--- a/src/PVE/APIServer/AnyEvent.pm
+++ b/src/PVE/APIServer/AnyEvent.pm
@@ -12,7 +12,6 @@ use warnings;
 
 use AnyEvent::HTTP;
 use AnyEvent::Handle;
-use AnyEvent::IO;
 use AnyEvent::Socket;
 # use AnyEvent::Strict; # only use this for debugging
 use AnyEvent::TLS;
-- 
2.30.2

Thread overview:
2023-07-03  6:39 Dominik Csapak [this message]
2023-07-03  7:42 ` [pve-devel] applied: " Thomas Lamprecht
