public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager 1/3] pvescheduler: catch errors in forked children
@ 2021-11-18 13:28 Dominik Csapak
  2021-11-18 13:28 ` [pve-devel] [PATCH manager 2/3] pvescheduler: rework child pid tracking Dominik Csapak
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Dominik Csapak @ 2021-11-18 13:28 UTC (permalink / raw)
  To: pve-devel

if '$sub' dies, the error handler of PVE::Daemon triggers, which
initiates a shutdown of the child, resulting in confusing error logs
(e.g. 'got shutdown request, signal running jobs to stop')

instead, run it under 'eval' and log the error to the syslog

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/Service/pvescheduler.pm | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/PVE/Service/pvescheduler.pm b/PVE/Service/pvescheduler.pm
index 9f5c4515..d4f73702 100755
--- a/PVE/Service/pvescheduler.pm
+++ b/PVE/Service/pvescheduler.pm
@@ -47,7 +47,12 @@ sub run {
 	    die "fork failed: $!\n";
 	} elsif ($child == 0) {
 	    $self->after_fork_cleanup();
-	    $sub->();
+	    eval {
+		$sub->();
+	    };
+	    if (my $err = $@) {
+		syslog('err', "ERROR: $err");
+	    }
 	    POSIX::_exit(0);
 	}
 
-- 
2.30.2





^ permalink raw reply	[flat|nested] 5+ messages in thread
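The pattern in the hunk above can be illustrated with a small Python sketch: the child's work is wrapped so an exception is reported instead of escaping into the daemon's error handling, and the child always leaves via `_exit(0)`. All names here (`run_in_child`, the `report` callback standing in for Perl's `syslog('err', ...)`) are illustrative, not part of the patch.

```python
import os

def run_in_child(job, report=print):
    """Fork and run `job` in the child. An exception in `job` is
    reported instead of propagating, and the child always exits with
    status 0 via _exit -- mirroring eval { $sub->() } + POSIX::_exit(0)."""
    pid = os.fork()
    if pid == 0:  # child
        try:
            job()
        except Exception as err:
            report(f"ERROR: {err}")  # stand-in for syslog('err', ...)
        os._exit(0)  # skip normal teardown; never return into the parent's loop
    return pid  # parent keeps the pid for later reaping

# even a failing job leaves the child exiting cleanly with status 0;
# the child reports the error and exits
pid = run_in_child(lambda: 1 / 0)
_, status = os.waitpid(pid, 0)
```

Without the try/except, the uncaught error would take the child down the interpreter's error path — the analogue of PVE::Daemon's error handler initiating the confusing shutdown described above.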

* [pve-devel] [PATCH manager 2/3] pvescheduler: rework child pid tracking
  2021-11-18 13:28 [pve-devel] [PATCH manager 1/3] pvescheduler: catch errors in forked children Dominik Csapak
@ 2021-11-18 13:28 ` Dominik Csapak
  2021-11-22  8:46   ` Fabian Ebner
  2021-11-18 13:28 ` [pve-devel] [PATCH manager 3/3] pvescheduler: implement graceful reloading Dominik Csapak
  2021-11-22 19:35 ` [pve-devel] applied-series: [PATCH manager 1/3] pvescheduler: catch errors in forked children Thomas Lamprecht
  2 siblings, 1 reply; 5+ messages in thread
From: Dominik Csapak @ 2021-11-18 13:28 UTC (permalink / raw)
  To: pve-devel

previously, systemd timers were responsible for running replication jobs.
those timers would not restart while the previous invocation was still running.

trying again while a job is still running does no real harm, but it spams
the log with errors about not being able to acquire the correct lock

to fix this, we rework the handling of child processes such that we only
start one per loop iteration if there is currently none running. for that,
introduce the types of forks we do and allow one child process per type
(for now, we have 'jobs' and 'replication' as types)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/Service/pvescheduler.pm | 42 ++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 15 deletions(-)

diff --git a/PVE/Service/pvescheduler.pm b/PVE/Service/pvescheduler.pm
index d4f73702..466cc599 100755
--- a/PVE/Service/pvescheduler.pm
+++ b/PVE/Service/pvescheduler.pm
@@ -17,12 +17,16 @@ my $cmdline = [$0, @ARGV];
 my %daemon_options = (stop_wait_time => 180, max_workers => 0);
 my $daemon = __PACKAGE__->new('pvescheduler', $cmdline, %daemon_options);
 
+my @types = qw(replication jobs);
+
 my $finish_jobs = sub {
     my ($self) = @_;
-    foreach my $cpid (keys %{$self->{jobs}}) {
-	my $waitpid = waitpid($cpid, WNOHANG);
-	if (defined($waitpid) && ($waitpid == $cpid)) {
-	    delete ($self->{jobs}->{$cpid});
+    for my $type (@types) {
+	if (my $cpid = $self->{jobs}->{$type}) {
+	    my $waitpid = waitpid($cpid, WNOHANG);
+	    if (defined($waitpid) && ($waitpid == $cpid)) {
+		$self->{jobs}->{$type} = undef;
+	    }
 	}
     }
 };
@@ -41,7 +45,11 @@ sub run {
     };
 
     my $fork = sub {
-	my ($sub) = @_;
+	my ($type, $sub) = @_;
+
+	# don't fork again if the previous iteration still runs
+	return if defined($self->{jobs}->{$type});
+
 	my $child = fork();
 	if (!defined($child)) {
 	    die "fork failed: $!\n";
@@ -56,16 +64,16 @@ sub run {
 	    POSIX::_exit(0);
 	}
 
-	$jobs->{$child} = 1;
+	$jobs->{$type} = $child;
     };
 
     my $run_jobs = sub {
 
-	$fork->(sub {
+	$fork->('replication', sub {
 	    PVE::API2::Replication::run_jobs(undef, sub {}, 0, 1);
 	});
 
-	$fork->(sub {
+	$fork->('jobs', sub {
 	    PVE::Jobs::run_jobs();
 	});
     };
@@ -92,14 +100,16 @@ sub run {
 	}
     }
 
-    # jobs have a lock timeout of 60s, wait a bit more for graceful termination
+    # replication jobs have a lock timeout of 60s, wait a bit more for graceful termination
     my $timeout = 0;
-    while (keys %$jobs > 0 && $timeout < 75) {
-	kill 'TERM', keys %$jobs;
-	$timeout += sleep(5);
+    for my $type (@types) {
+	while (defined($jobs->{$type}) && $timeout < 75) {
+	    kill 'TERM', $jobs->{$type};
+	    $timeout += sleep(5);
+	}
+	# ensure the rest gets stopped
+	kill 'KILL', $jobs->{$type} if defined($jobs->{$type});
     }
-    # ensure the rest gets stopped
-    kill 'KILL', keys %$jobs if (keys %$jobs > 0);
 }
 
 sub shutdown {
@@ -107,7 +117,9 @@ sub shutdown {
 
     syslog('info', 'got shutdown request, signal running jobs to stop');
 
-    kill 'TERM', keys %{$self->{jobs}};
+    for my $type (@types) {
+	kill 'TERM', $self->{jobs}->{$type} if $self->{jobs}->{$type};
+    }
     $self->{shutdown_request} = 1;
 }
 
-- 
2.30.2
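The per-type bookkeeping in this patch can be sketched in Python: one slot per type, a non-blocking reap, and a fork helper that skips when the previous child of that type is still around. Names like `maybe_fork` and `reap_finished` are illustrative, not from the patch.

```python
import os

TYPES = ("replication", "jobs")

def reap_finished(jobs):
    """Non-blocking reap, like $finish_jobs with waitpid(..., WNOHANG):
    clear a type's slot once its child has actually exited."""
    for t in TYPES:
        cpid = jobs.get(t)
        if cpid is None:
            continue
        waited, _status = os.waitpid(cpid, os.WNOHANG)
        if waited == cpid:  # 0 means "still running" with WNOHANG
            jobs[t] = None  # slot is free again

def maybe_fork(jobs, type_, job):
    """Fork one child per type; don't fork again while the previous
    iteration's child of this type still runs."""
    if jobs.get(type_) is not None:
        return
    pid = os.fork()
    if pid == 0:  # child
        job()
        os._exit(0)
    jobs[type_] = pid  # parent records the pid under its type
```

A second `maybe_fork` for the same type while the slot is occupied is a no-op, which is exactly what silences the repeated lock-acquisition errors the commit message describes.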






* [pve-devel] [PATCH manager 3/3] pvescheduler: implement graceful reloading
  2021-11-18 13:28 [pve-devel] [PATCH manager 1/3] pvescheduler: catch errors in forked children Dominik Csapak
  2021-11-18 13:28 ` [pve-devel] [PATCH manager 2/3] pvescheduler: rework child pid tracking Dominik Csapak
@ 2021-11-18 13:28 ` Dominik Csapak
  2021-11-22 19:35 ` [pve-devel] applied-series: [PATCH manager 1/3] pvescheduler: catch errors in forked children Thomas Lamprecht
  2 siblings, 0 replies; 5+ messages in thread
From: Dominik Csapak @ 2021-11-18 13:28 UTC (permalink / raw)
  To: pve-devel

utilize PVE::Daemon's 'hup' functionality to reload gracefully.

This leaves the children running (if any) and hands them to the new
instance via environment variables. After loading, the new instance
checks whether they are still around.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
the only weird behaviour is that the re-exec can happen up to one minute
after the reload, since we only get into the loop once a minute

we can shorten the loop cycle if we want, though...

 PVE/Service/pvescheduler.pm   | 22 +++++++++++++++++++++-
 services/pvescheduler.service |  1 +
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/PVE/Service/pvescheduler.pm b/PVE/Service/pvescheduler.pm
index 466cc599..700c96ec 100755
--- a/PVE/Service/pvescheduler.pm
+++ b/PVE/Service/pvescheduler.pm
@@ -24,19 +24,35 @@ my $finish_jobs = sub {
     for my $type (@types) {
 	if (my $cpid = $self->{jobs}->{$type}) {
 	    my $waitpid = waitpid($cpid, WNOHANG);
-	    if (defined($waitpid) && ($waitpid == $cpid)) {
+	    if (defined($waitpid) && ($waitpid == $cpid) || $waitpid == -1) {
 		$self->{jobs}->{$type} = undef;
 	    }
 	}
     }
 };
 
+sub hup {
+    my ($self) = @_;
+
+    for my $type (@types) {
+	my $pid = $self->{jobs}->{$type};
+	next if !defined($pid);
+	$ENV{"PVE_DAEMON_${type}_PID"} = $pid;
+    }
+}
+
 sub run {
     my ($self) = @_;
 
     my $jobs = {};
     $self->{jobs} = $jobs;
 
+    for my $type (@types) {
+	$self->{jobs}->{$type} = delete $ENV{"PVE_DAEMON_${type}_PID"};
+	# check if children finished in the meantime
+	$finish_jobs->($self);
+    }
+
     my $old_sig_chld = $SIG{CHLD};
     local $SIG{CHLD} = sub {
 	local ($@, $!, $?); # do not overwrite error vars
@@ -82,6 +98,8 @@ sub run {
 
     for (my $count = 1000;;$count++) {
 	last if $self->{shutdown_request};
+	# we got a reload signal, return gracefully and leave the forks running
+	return if $self->{got_hup_signal};
 
 	$run_jobs->();
 
@@ -125,11 +143,13 @@ sub shutdown {
 
 $daemon->register_start_command();
 $daemon->register_stop_command();
+$daemon->register_restart_command(1);
 $daemon->register_status_command();
 
 our $cmddef = {
     start => [ __PACKAGE__, 'start', []],
     stop => [ __PACKAGE__, 'stop', []],
+    restart => [ __PACKAGE__, 'restart', []],
     status => [ __PACKAGE__, 'status', [], undef, sub { print shift . "\n";} ],
 };
 
diff --git a/services/pvescheduler.service b/services/pvescheduler.service
index 11769e80..e6f10832 100644
--- a/services/pvescheduler.service
+++ b/services/pvescheduler.service
@@ -8,6 +8,7 @@ After=pve-storage.target
 [Service]
 ExecStart=/usr/bin/pvescheduler start
 ExecStop=/usr/bin/pvescheduler stop
+ExecReload=/usr/bin/pvescheduler restart
 PIDFile=/var/run/pvescheduler.pid
 KillMode=process
 Type=forking
-- 
2.30.2
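The environment-variable handoff above can be sketched in Python: before re-exec the running PIDs are stashed into the environment, and the new instance picks them up and clears the variables again. The variable naming follows the patch's `PVE_DAEMON_<type>_PID` scheme; the helper names are illustrative.

```python
import os

TYPES = ("replication", "jobs")

def stash_pids(jobs):
    """Before re-exec (on HUP): hand running child PIDs to the next
    daemon instance via the environment, like the patch's hup()."""
    for t in TYPES:
        pid = jobs.get(t)
        if pid is not None:
            os.environ[f"PVE_DAEMON_{t}_PID"] = str(pid)

def recover_pids():
    """After re-exec: pick the PIDs back up and remove the variables,
    like the delete $ENV{...} at the top of the patched run()."""
    jobs = {}
    for t in TYPES:
        val = os.environ.pop(f"PVE_DAEMON_{t}_PID", None)
        jobs[t] = int(val) if val is not None else None
    return jobs
```

After recovery the daemon immediately runs its non-blocking reap, since the children may have exited during the re-exec — which is what the `$finish_jobs->($self)` call in the patched `run()` is for.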






* Re: [pve-devel] [PATCH manager 2/3] pvescheduler: rework child pid tracking
  2021-11-18 13:28 ` [pve-devel] [PATCH manager 2/3] pvescheduler: rework child pid tracking Dominik Csapak
@ 2021-11-22  8:46   ` Fabian Ebner
  0 siblings, 0 replies; 5+ messages in thread
From: Fabian Ebner @ 2021-11-22  8:46 UTC (permalink / raw)
  To: pve-devel, Dominik Csapak

On 18.11.21 14:28, Dominik Csapak wrote:
> previously, systemd timers were responsible for running replication jobs.
> those timers would not restart while the previous invocation was still running.
> 
> trying again while a job is still running does no real harm, but it spams
> the log with errors about not being able to acquire the correct lock
> 
> to fix this, we rework the handling of child processes such that we only
> start one per loop iteration if there is currently none running. for that,
> introduce the types of forks we do and allow one child process per type
> (for now, we have 'jobs' and 'replication' as types)
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>   PVE/Service/pvescheduler.pm | 42 ++++++++++++++++++++++++-------------
>   1 file changed, 27 insertions(+), 15 deletions(-)
> 
> diff --git a/PVE/Service/pvescheduler.pm b/PVE/Service/pvescheduler.pm
> index d4f73702..466cc599 100755
> --- a/PVE/Service/pvescheduler.pm
> +++ b/PVE/Service/pvescheduler.pm
> @@ -17,12 +17,16 @@ my $cmdline = [$0, @ARGV];
>   my %daemon_options = (stop_wait_time => 180, max_workers => 0);
>   my $daemon = __PACKAGE__->new('pvescheduler', $cmdline, %daemon_options);
>   
> +my @types = qw(replication jobs);
> +
>   my $finish_jobs = sub {
>       my ($self) = @_;
> -    foreach my $cpid (keys %{$self->{jobs}}) {
> -	my $waitpid = waitpid($cpid, WNOHANG);
> -	if (defined($waitpid) && ($waitpid == $cpid)) {
> -	    delete ($self->{jobs}->{$cpid});
> +    for my $type (@types) {
> +	if (my $cpid = $self->{jobs}->{$type}) {
> +	    my $waitpid = waitpid($cpid, WNOHANG);
> +	    if (defined($waitpid) && ($waitpid == $cpid)) {
> +		$self->{jobs}->{$type} = undef;
> +	    }
>   	}
>       }
>   };
> @@ -41,7 +45,11 @@ sub run {
>       };
>   
>       my $fork = sub {
> -	my ($sub) = @_;
> +	my ($type, $sub) = @_;
> +
> +	# don't fork again if the previous iteration still runs
> +	return if defined($self->{jobs}->{$type});
> +
>   	my $child = fork();
>   	if (!defined($child)) {
>   	    die "fork failed: $!\n";
> @@ -56,16 +64,16 @@ sub run {
>   	    POSIX::_exit(0);
>   	}
>   
> -	$jobs->{$child} = 1;
> +	$jobs->{$type} = $child;
>       };
>   
>       my $run_jobs = sub {
>   
> -	$fork->(sub {
> +	$fork->('replication', sub {
>   	    PVE::API2::Replication::run_jobs(undef, sub {}, 0, 1);
>   	});
>   
> -	$fork->(sub {
> +	$fork->('jobs', sub {
>   	    PVE::Jobs::run_jobs();
>   	});
>       };
> @@ -92,14 +100,16 @@ sub run {
>   	}
>       }
>   
> -    # jobs have a lock timeout of 60s, wait a bit more for graceful termination
> +    # replication jobs have a lock timeout of 60s, wait a bit more for graceful termination
>       my $timeout = 0;
> -    while (keys %$jobs > 0 && $timeout < 75) {
> -	kill 'TERM', keys %$jobs;
> -	$timeout += sleep(5);
> +    for my $type (@types) {
> +	while (defined($jobs->{$type}) && $timeout < 75) {
> +	    kill 'TERM', $jobs->{$type};
> +	    $timeout += sleep(5);
> +	}
> +	# ensure the rest gets stopped
> +	kill 'KILL', $jobs->{$type} if defined($jobs->{$type});
>       }
> -    # ensure the rest gets stopped
> -    kill 'KILL', keys %$jobs if (keys %$jobs > 0);
>   }

Nit: this changes the behavior a bit, as it can happen that the timeout 
is "used up" for one job type and all following ones only get the KILL 
signal. Because of the code below, each child still gets at least one 
TERM, so not a big deal.
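The nit can be made concrete with an abstract sketch of the patch's shutdown loop, with signal sending and the running check stubbed out as callbacks (everything here is illustrative, not code from the patch):

```python
def stop_children(types, send_term, send_kill, still_running,
                  budget=75, step=5):
    """One shared timeout budget across all types: if the first type
    uses it up, later types skip the TERM loop and go straight to KILL."""
    timeout = 0
    for t in types:
        while still_running(t) and timeout < budget:
            send_term(t)
            timeout += step  # the real loop sleeps; here we just count
        if still_running(t):
            send_kill(t)  # ensure the rest gets stopped

terms, kills = [], []
# both children never stop: the first type consumes the whole budget
stop_children(["replication", "jobs"], terms.append, kills.append,
              lambda t: True)
```

With the default budget, 'replication' receives 15 TERMs while 'jobs' receives none from this loop — though, as noted, each child has already received one TERM from shutdown() by this point.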

>   
>   sub shutdown {
> @@ -107,7 +117,9 @@ sub shutdown {
>   
>       syslog('info', 'got shutdown request, signal running jobs to stop');
>   
> -    kill 'TERM', keys %{$self->{jobs}};
> +    for my $type (@types) {
> +	kill 'TERM', $self->{jobs}->{$type} if $self->{jobs}->{$type};
> +    }
>       $self->{shutdown_request} = 1;
>   }
>   
> 





* [pve-devel] applied-series: [PATCH manager 1/3] pvescheduler: catch errors in forked children
  2021-11-18 13:28 [pve-devel] [PATCH manager 1/3] pvescheduler: catch errors in forked children Dominik Csapak
  2021-11-18 13:28 ` [pve-devel] [PATCH manager 2/3] pvescheduler: rework child pid tracking Dominik Csapak
  2021-11-18 13:28 ` [pve-devel] [PATCH manager 3/3] pvescheduler: implement graceful reloading Dominik Csapak
@ 2021-11-22 19:35 ` Thomas Lamprecht
  2 siblings, 0 replies; 5+ messages in thread
From: Thomas Lamprecht @ 2021-11-22 19:35 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

On 18.11.21 14:28, Dominik Csapak wrote:
> if '$sub' dies, the error handler of PVE::Daemon triggers, which
> initiates a shutdown of the child, resulting in confusing error logs
> (e.g. 'got shutdown request, signal running jobs to stop')
> 
> instead, run it under 'eval' and log the error to the syslog
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  PVE/Service/pvescheduler.pm | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
>

applied series, thanks!

I reworked the worker termination on stop though and also fixed an issue with a
possible artificial delay of the reload command (it was also aligned to the minute
boundary).





