public inbox for pmg-devel@lists.proxmox.com
* [pmg-devel] [RFC pmg-api] pmgtunnel: do not set SIGCHLD handler
@ 2025-08-18 18:07 Stoiko Ivanov
  2025-08-19 10:13 ` Stefan Hanreich
  0 siblings, 1 reply; 2+ messages in thread
From: Stoiko Ivanov @ 2025-08-18 18:07 UTC (permalink / raw)
  To: pmg-devel

Drop the SIGCHLD handling in pmgtunnel, as it's not necessary and
parts deeper in our code-base (e.g. PVE::Tools::run_command) rely on
signals not being handled by the callers.
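
A minimal sketch (illustrative only, not the actual pmgtunnel or
run_command code) of the interference this avoids: a handler that reaps
every child can steal the exit status from code that later wants to
waitpid() for its own child.

    use POSIX ":sys_wait_h";

    # finish_children-style handler: reaps *all* exited children
    $SIG{CHLD} = sub {
        while ((my $kid = waitpid(-1, WNOHANG)) > 0) {
            # tunnel cleanup, logging, ... would happen here
        }
    };

    # a run_command-style caller forks and expects to reap its child itself
    my $pid = fork() // die "fork failed: $!";
    if ($pid == 0) { exec('ip', 'link') or exit(1); }

    my $reaped = waitpid($pid, 0); # the handler may already have reaped the
                                   # child, so this can return -1 (ECHILD)
                                   # and the caller sees a spurious error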

In the case of pmgtunnel the signal handler (finish_children) simply
wait(2)'s for all children (whether it's an ssh-tunnel process or
e.g. `ip link` being called to get network information), clears the
ssh-forwarded postgres socket, schedules a (delayed) restart, and
logs an exit message for the ssh-tunnels.

All of those tasks can happen synchronously in the main loop of run()
as well (especially since the cleaning of the postgres socket is done
directly before running ssh anyway).
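
For illustration, a rough sketch (hypothetical and simplified, not the
actual patch) of how such cleanup can be done synchronously from the
loop, e.g. with a non-blocking waitpid() sweep per iteration:

    use POSIX ":sys_wait_h";

    for (;;) { # forever
        # reload cluster config, purge stale tunnels, (re)start missing ones ...

        # reap any children that exited since the last iteration
        while ((my $kid = waitpid(-1, WNOHANG)) > 0) {
            # remove the forwarded postgres socket, log the exit;
            # the next iteration restarts the tunnel if still configured
        }

        sleep(5); # placeholder for the service's actual update interval
    }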

From a quick look through our Perl codebase it seems that this is the
only service setting such a SIGCHLD handler (thus the change in
reading/parsing /etc/network/interfaces did not cause issues anywhere
else).

Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
---
Sending as RFC, as such changes to code that has not been touched in 7+
years might cause regressions which my testing would not catch.
I let this run in my cluster after blocking access between the nodes via
netfilter rules - the exit behaviour and logging were the same.

 src/PMG/Service/pmgtunnel.pm | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/src/PMG/Service/pmgtunnel.pm b/src/PMG/Service/pmgtunnel.pm
index 7b9fa28..062e764 100644
--- a/src/PMG/Service/pmgtunnel.pm
+++ b/src/PMG/Service/pmgtunnel.pm
@@ -173,17 +173,12 @@ sub hup {
 sub run {
     my ($self) = @_;
 
-    local $SIG{CHLD} = \&finish_children;
-
     for (;;) { # forever
 
         $next_update = time() + $updatetime;
 
         eval {
-            # reset SIGCHLD handler as ClusterConfig::new uses run_command (for reading ip link)
-            $SIG{CHLD} = 'DEFAULT';
             my $cinfo = PMG::ClusterConfig->new(); # reload
-            $SIG{CHLD} = \&finish_children;
             $self->purge_tunnels($cinfo);
             $self->start_tunnels($cinfo);
         };
-- 
2.39.5



_______________________________________________
pmg-devel mailing list
pmg-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pmg-devel



* Re: [pmg-devel] [RFC pmg-api] pmgtunnel: do not set SIGCHLD handler
  2025-08-18 18:07 [pmg-devel] [RFC pmg-api] pmgtunnel: do not set SIGCHLD handler Stoiko Ivanov
@ 2025-08-19 10:13 ` Stefan Hanreich
  0 siblings, 0 replies; 2+ messages in thread
From: Stefan Hanreich @ 2025-08-19 10:13 UTC (permalink / raw)
  To: Stoiko Ivanov, pmg-devel

Gave this a quick spin on a PMG cluster. Tried to reproduce the initial
issue by editing /etc/hosts and didn't run into any errors with this
patch. Downgraded to the unpatched version to cross-check and the error
reappeared.

Terminated the SSH process of pmgtunnel several times and checked that
the postgres socket in /run/pmgtunnel gets cleaned up.

Consider this:

Tested-by: Stefan Hanreich <s.hanreich@proxmox.com>






