public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager] pvestatd: fix container cpuset scheduling
@ 2020-12-03 15:01 Dominik Csapak
  2020-12-03 15:35 ` Aaron Lauterer
  2020-12-03 15:37 ` [pve-devel] applied: " Thomas Lamprecht
  0 siblings, 2 replies; 4+ messages in thread
From: Dominik Csapak @ 2020-12-03 15:01 UTC (permalink / raw)
  To: pve-devel

Since pve-container commit

c48a25452dccca37b3915e49b7618f6880aeafb1

the code to get the cpuset controller path lives in pve-common's PVE::CGroup.
Use that, and improve the logging in case an error occurs in the future.
Such an error will only be logged once per pvestatd run,
so it does not spam the log.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/Service/pvestatd.pm | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index 5e533ca3..7193388c 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -20,7 +20,7 @@ use PVE::Storage;
 use PVE::QemuServer;
 use PVE::QemuServer::Monitor;
 use PVE::LXC;
-use PVE::LXC::CGroup;
+use PVE::CGroup;
 use PVE::LXC::Config;
 use PVE::RPCEnvironment;
 use PVE::API2::Subscription;
@@ -257,7 +257,11 @@ my $NO_REBALANCE;
 sub rebalance_lxc_containers {
     # Make sure we can find the cpuset controller path:
     return if $NO_REBALANCE;
-    my $cpuset_base = eval { PVE::LXC::CGroup::cpuset_controller_path() };
+    my $cpuset_base = eval { PVE::CGroup::cpuset_controller_path() };
+    if (my $err = $@) {
+	syslog('info', "could not get cpuset controller path: $err");
+    }
+
     if (!defined($cpuset_base)) {
 	$NO_REBALANCE = 1;
 	return;
-- 
2.20.1
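
The pattern this patch relies on, as a minimal self-contained sketch: eval {}
catches the die() from the controller lookup, $@ carries the error text, and a
package-level one-shot flag ensures the message is logged at most once per
daemon lifetime. The cpuset_controller_path() below is a stand-in that
simulates the failure case, and Sys::Syslog is used directly here rather than
the daemon's own syslog wrapper.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Sys::Syslog qw(openlog syslog);

    # One-shot flag: once set, rebalancing (and the log message) is
    # skipped for the rest of the daemon's lifetime.
    my $NO_REBALANCE;

    # Stand-in for PVE::CGroup::cpuset_controller_path(), simulating
    # the failure case where no cpuset controller is found.
    sub cpuset_controller_path {
        die "no cpuset controller available\n";
    }

    sub rebalance_lxc_containers {
        return if $NO_REBALANCE;

        # eval {} turns the die() into an undef return; $@ holds the error.
        my $cpuset_base = eval { cpuset_controller_path() };
        if (my $err = $@) {
            syslog('info', "could not get cpuset controller path: $err");
        }
        if (!defined($cpuset_base)) {
            $NO_REBALANCE = 1;    # never cleared, so logged at most once
            return;
        }
        # ... actual rebalancing would continue here ...
    }

    openlog('sketch', 'pid', 'daemon');
    rebalance_lxc_containers() for 1 .. 3;    # only the first call logs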

* Re: [pve-devel] [PATCH manager] pvestatd: fix container cpuset scheduling
  2020-12-03 15:01 [pve-devel] [PATCH manager] pvestatd: fix container cpuset scheduling Dominik Csapak
@ 2020-12-03 15:35 ` Aaron Lauterer
  2020-12-03 15:38   ` Thomas Lamprecht
  2020-12-03 15:37 ` [pve-devel] applied: " Thomas Lamprecht
  1 sibling, 1 reply; 4+ messages in thread
From: Aaron Lauterer @ 2020-12-03 15:35 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>

On 12/3/20 4:01 PM, Dominik Csapak wrote:
> Since pve-container commit
> 
> c48a25452dccca37b3915e49b7618f6880aeafb1
> 
> the code to get the cpuset controller path lives in pve-common's PVE::CGroup.
> Use that, and improve the logging in case an error occurs in the future.
> Such an error will only be logged once per pvestatd run,
> so it does not spam the log.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>   PVE/Service/pvestatd.pm | 8 ++++++--
>   1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
> index 5e533ca3..7193388c 100755
> --- a/PVE/Service/pvestatd.pm
> +++ b/PVE/Service/pvestatd.pm
> @@ -20,7 +20,7 @@ use PVE::Storage;
>   use PVE::QemuServer;
>   use PVE::QemuServer::Monitor;
>   use PVE::LXC;
> -use PVE::LXC::CGroup;
> +use PVE::CGroup;
>   use PVE::LXC::Config;
>   use PVE::RPCEnvironment;
>   use PVE::API2::Subscription;
> @@ -257,7 +257,11 @@ my $NO_REBALANCE;
>   sub rebalance_lxc_containers {
>       # Make sure we can find the cpuset controller path:
>       return if $NO_REBALANCE;
> -    my $cpuset_base = eval { PVE::LXC::CGroup::cpuset_controller_path() };
> +    my $cpuset_base = eval { PVE::CGroup::cpuset_controller_path() };
> +    if (my $err = $@) {
> +	syslog('info', "could not get cpuset controller path: $err");
> +    }
> +
>       if (!defined($cpuset_base)) {
>   	$NO_REBALANCE = 1;
>   	return;
>

* [pve-devel] applied: [PATCH manager] pvestatd: fix container cpuset scheduling
  2020-12-03 15:01 [pve-devel] [PATCH manager] pvestatd: fix container cpuset scheduling Dominik Csapak
  2020-12-03 15:35 ` Aaron Lauterer
@ 2020-12-03 15:37 ` Thomas Lamprecht
  1 sibling, 0 replies; 4+ messages in thread
From: Thomas Lamprecht @ 2020-12-03 15:37 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

On 03.12.20 16:01, Dominik Csapak wrote:
> Since pve-container commit
> 
> c48a25452dccca37b3915e49b7618f6880aeafb1
> 
> the code to get the cpuset controller path lives in pve-common's PVE::CGroup.
> Use that, and improve the logging in case an error occurs in the future.
> Such an error will only be logged once per pvestatd run,
> so it does not spam the log.

That was worded confusingly for me: I thought you meant "once per pvestatd
update-loop run", but it actually only logs on the first loop (which I like more ^^)
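
Compressed into a sketch, the distinction reads like this (warn stands in for
syslog, and the flag mirrors $NO_REBALANCE from the patch):

    use strict;
    use warnings;

    # "once per update loop" would mean clearing the flag at the top of
    # every pvestatd update cycle; the patch never clears it, so only
    # the first failing cycle produces a message:
    my $NO_REBALANCE;
    for my $cycle (1 .. 3) {
        next if $NO_REBALANCE;    # cycles 2 and 3 skip silently
        warn "cycle $cycle: could not get cpuset controller path\n";
        $NO_REBALANCE = 1;        # one-shot: set once, never cleared
    }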

> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  PVE/Service/pvestatd.pm | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
>

applied, thanks!

* Re: [pve-devel] [PATCH manager] pvestatd: fix container cpuset scheduling
  2020-12-03 15:35 ` Aaron Lauterer
@ 2020-12-03 15:38   ` Thomas Lamprecht
  0 siblings, 0 replies; 4+ messages in thread
From: Thomas Lamprecht @ 2020-12-03 15:38 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer, Dominik Csapak

On 03.12.20 16:35, Aaron Lauterer wrote:
> Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>

saw this too late to add it to the commit message, still thanks for
the feedback though!
