From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: "DERUMIER, Alexandre" <Alexandre.DERUMIER@groupe-cyllene.com>,
	"pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] applied: [RFC pve-qemu] disable jemalloc
Date: Sat, 11 Mar 2023 10:01:28 +0100
Message-ID: <ab594136-845f-bcf9-c1b6-8aaeea8ef486@proxmox.com>
In-Reply-To: <5d774074596e5430faaa26146c70fdd13b513598.camel@groupe-cyllene.com>

Hi,

On 10/03/2023 at 19:05, DERUMIER, Alexandre wrote:
> I'm currently benchmarking QEMU with librbd and different memory allocators again.
> 
> 
> It seems that there are still performance problems with the default glibc
> allocator: around 20-25% fewer IOPS and higher latency.

Are those numbers compared to jemalloc or tcmalloc?

Also, a key problem with allocator tuning is that it's heavily dependent on
the workload of each specific component (i.e., not only QEMU itself but also
the specific block backend library).

> 
> From my benchmarks, I'm around 60k IOPS vs. 80-90k IOPS with 4k randread.
> 
> Red Hat has also noticed it:
> https://bugzilla.redhat.com/show_bug.cgi?id=1717414
> https://sourceware.org/bugzilla/show_bug.cgi?id=28050
> 
> 
> I know that jemalloc was buggy with the Rust lib && the PBS block driver,
> but have you evaluated tcmalloc?

Yes, once for PBS - IIRC it was way worse in how it generally behaved than
either jemalloc or the default glibc allocator, but I don't think I checked
latency back then; at the time we tracked the freed memory that the allocator
did not give back to the system down to how those allocators internally try
to keep a pool of available memory around.

So for latency it might be a win, but IMO I'm not too sure whether the other
effects it has are worth that.

> 
> Note that it's possible to load it dynamically with LD_PRELOAD,
> so maybe we could add an option to the VM config to enable it?
> 

I'm not 100% sure whether QEMU copes well with preloading it via the dynamic
linker as is, or whether we'd then need to hard-disable malloc_trim support
for it. Currently, with the "system" allocator (glibc), malloc_trim is called
(semi-)periodically via the call_rcu_thread - and at least QEMU's meson build
config disables malloc_trim when building against tcmalloc or jemalloc.
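
For reference, the pattern I mean looks roughly like this - a simplified,
untested sketch of the build-time guard, not QEMU's literal code (the
function name is made up for illustration):

    #include <malloc.h>  /* glibc-specific malloc_trim() */

    static void periodic_allocator_trim(void)
    {
    #if defined(CONFIG_MALLOC_TRIM)
        /* Ask glibc to hand free heap pages back to the kernel, keeping a
         * few MiB of slack. Meson only enables CONFIG_MALLOC_TRIM for the
         * "system" (glibc) allocator, so tcmalloc/jemalloc builds skip it. */
        malloc_trim(4 * 1024 * 1024);
    #endif
    }

With LD_PRELOAD that build-time detection naturally can't know which
allocator actually gets used at runtime, hence my doubt above.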


Or did you already test this directly with QEMU, not just the rbd bench? In
that case I'd be open to adding some tuning config with an allocator
sub-property to our CFGs.
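
E.g. something along the lines of a hypothetical

    tuning: allocator=tcmalloc

line in the VM config (property and value names are just made up for
illustration, nothing decided), which we'd then translate into an LD_PRELOAD
entry in the environment of the started QEMU process.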




Thread overview: 7+ messages
2020-12-10 15:23 [pve-devel] " Stefan Reiter
2020-12-11 15:21 ` alexandre derumier
2020-12-15 13:43 ` [pve-devel] applied: " Thomas Lamprecht
2023-03-10 18:05   ` DERUMIER, Alexandre
2023-03-11  9:01     ` Thomas Lamprecht [this message]
     [not found] <1c4d80a05d8328a52b9d15e991fd4d348bce1327.camel@groupe-cyllene.com>
2023-03-11 13:14 ` DERUMIER, Alexandre
2023-03-13  7:17   ` DERUMIER, Alexandre
