public inbox for pve-devel@lists.proxmox.com
From: Alexandre Derumier <aderumier@odiso.com>
To: Dietmar Maurer <dietmar@proxmox.com>
Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-common 1/1] ProcFSTools: add read_pressure
Date: Tue, 13 Oct 2020 08:32:50 +0200	[thread overview]
Message-ID: <CAMGxVzBo940=kdAHLxWgY6RkuFYnHxwP-pEexs_rw8zSegogLA@mail.gmail.com> (raw)
In-Reply-To: <190420382.634.1602569100580@webmail.proxmox.com>

>> I have no idea how reliable this is, because we do not use cgroups v2.
>> But yes, I think this would be useful.

I have tested it on a host running a lot of small VMs (around 400 VMs on
48 cores). With that many VMs there were a lot of context switches and the
VMs were laggy.
CPU usage was ok (maybe 40%) and the load average was around 40, but CPU
pressure was around 20%. (So it seems more precise than the load average.)

The global /proc/pressure/cpu was almost exactly the sum of the per-VM cgroup values in
/sys/fs/cgroup/unified/qemu.slice/<vmid>.scope/cpu.pressure

so it seems reliable.

(I don't have LXC containers in production, but I think it should be the
same.)
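
For reference, this is roughly the comparison I did, as a minimal standalone
sketch (this is not the pve-common read_pressure code; the read_psi_avg10
helper and the choice of the avg10 field are just assumptions for
illustration):

#!/usr/bin/perl
# Minimal sketch (not the pve-common read_pressure implementation):
# parse "avg10" from the "some" line of a PSI file and compare the global
# value with the sum over the per-VM qemu cgroups.
use strict;
use warnings;

sub read_psi_avg10 {
    my ($path) = @_;
    open(my $fh, '<', $path) or return undef;
    while (my $line = <$fh>) {
        # e.g. "some avg10=1.23 avg60=0.98 avg300=0.76 total=123456"
        return $1 if $line =~ /^some\s+avg10=([\d.]+)/;
    }
    return undef;
}

my $global = read_psi_avg10('/proc/pressure/cpu') // 0;

my $sum = 0;
for my $scope (glob '/sys/fs/cgroup/unified/qemu.slice/*.scope') {
    $sum += read_psi_avg10("$scope/cpu.pressure") // 0;
}

printf "global avg10: %.2f%%  sum of VM cgroups: %.2f%%\n", $global, $sum;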

So, yes, I think we could add them to rrd for both host/vms.


BTW, I'm currently playing with reading the RRD files, and I have noticed
that the lowest precision is 1 minute.
Since pvestatd sends values roughly every 10s, is this 1-minute precision an
average of the 6x10s values sent by pvestatd?

I'm currently working on a PoC of VM balancing, and I would like to have
something like 15 minutes of history at 10s precision (90 samples of 10s).
So for now I'm fetching the stats every 10s manually
with PVE::API2Tools::extract_vm_stats, like the resources API does.
(This uses PVE::Cluster::rrd_dump, but I don't understand the ipcc_* code.
Does it only return the currently streamed values, and the rrdcached daemon
then writes the per-minute averages to the RRD files afterwards?)
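
To illustrate the kind of sampling I have in mind, here is a rough in-memory
sketch (the get_vm_cpu_pressure helper, the fixed 90-sample window and the
cgroup path handling are only assumptions for this example, not existing
pvestatd or API code):

#!/usr/bin/perl
# Rough sketch of a 10s sampling loop keeping 15 minutes of history
# (90 samples per VM) in memory; get_vm_cpu_pressure() stands in for
# whatever stat source is used (here the per-VM cpu.pressure files).
use strict;
use warnings;

my $max_samples = 90;   # 15 min at 10s resolution
my %history;            # vmid => [ oldest .. newest ]

sub get_vm_cpu_pressure {
    my ($vmid) = @_;
    my $path = "/sys/fs/cgroup/unified/qemu.slice/$vmid.scope/cpu.pressure";
    open(my $fh, '<', $path) or return undef;
    while (<$fh>) {
        return $1 if /^some\s+avg10=([\d.]+)/;
    }
    return undef;
}

while (1) {
    for my $scope (glob '/sys/fs/cgroup/unified/qemu.slice/*.scope') {
        my ($vmid) = $scope =~ m{/(\d+)\.scope$};
        next if !defined $vmid;
        my $val = get_vm_cpu_pressure($vmid);
        next if !defined $val;
        push @{$history{$vmid}}, $val;
        shift @{$history{$vmid}} while @{$history{$vmid}} > $max_samples;
    }
    sleep(10);
}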

I don't know if we could have RRD files with 15 minutes at 10s precision?
(I don't know what the write-load impact on the disks would be.)




On Tue, 13 Oct 2020 at 08:05, Dietmar Maurer <dietmar@proxmox.com> wrote:

> > I have noticed that it's possible to get pressure info for each VM/CT
> > through cgroups:
> >
> > /sys/fs/cgroup/unified/qemu.slice/<vmid>.scope/cpu.pressure
> > /sys/fs/cgroup/unified/lxc/<vmid>/cpu.pressure
> >
> >
> > Maybe it would be great to have some new RRD graphs for each VM/CT?
> > They are very useful counters to know when a specific VM/CT is overloaded.
>
> I have no idea how reliable this is, because we do not use cgroups v2. But yes,
> I think this would be useful.
>
>


Thread overview: 8+ messages
2020-10-06 11:58 [pve-devel] [PATCH pve-common 0/1] " Alexandre Derumier
2020-10-06 11:58 ` [pve-devel] [PATCH pve-common 1/1] " Alexandre Derumier
2020-10-11  8:23   ` Alexandre Derumier
2020-10-13  6:05     ` Dietmar Maurer
2020-10-13  6:32       ` Alexandre Derumier [this message]
2020-10-13  7:38         ` Dietmar Maurer
2020-10-13 12:05           ` Alexandre Derumier
2020-10-13  5:35   ` [pve-devel] applied: " Dietmar Maurer
