From: "DERUMIER, Alexandre" <Alexandre.DERUMIER@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] applied: [RFC pve-qemu] disable jemalloc
Date: Sat, 11 Mar 2023 13:14:30 +0000 [thread overview]
Message-ID: <5d005774ec9f139f998eb018188c41d274042f2b.camel@groupe-cyllene.com> (raw)
In-Reply-To: <1c4d80a05d8328a52b9d15e991fd4d348bce1327.camel@groupe-cyllene.com>
On Saturday, 11 March 2023 at 10:01 +0100, Thomas Lamprecht wrote:
> Hi,
>
> On 10/03/2023 at 19:05, DERUMIER, Alexandre wrote:
> > I'm currently benchmarking QEMU with librbd and memory allocators again.
> >
> >
> > It seems there is still a performance problem with the default glibc
> > allocator: around 20-25% fewer iops and higher latency.
>
> Are those numbers compared to jemalloc or tcmalloc?
>
Oh sorry, compared to tcmalloc. (I'm getting almost the same result with
jemalloc, maybe a little worse/more unstable.)
> Also, a key problem with allocator tuning is that it's heavily dependent
> on the workload of each specific library (i.e., not only QEMU itself but
> also the specific block backend library).
>
> >
Yes, it should help librbd mainly; I don't think it helps other storage backends.
> > From my bench, I'm around 60k iops vs 80-90k iops with 4k randread.
> >
> > Red Hat has also noticed it.
> >
> >
> > I know that jemalloc was buggy with the Rust lib && the PBS block
> > driver, but have you evaluated tcmalloc?
>
> Yes, for PBS once - it was way worse in how it generally worked than
> either jemalloc or the default glibc, IIRC, but I don't think I checked
> for latency, as back then we were tracking down freed memory that the
> allocator did not give back to the system, which came down to how they
> internally try to keep a pool of available memory around.
>
I know that jemalloc can have strange effects on memory. (Ceph was using
jemalloc some years ago with this kind of side effect, and they migrated
to tcmalloc later.)
> So for latency it might be a win, but IMO I'm not too sure if the other
> effects it has are worth that.
>
> >
Yes, latency is my main objective, mainly for Ceph synchronous writes with
low iodepth; they are pretty slow, so a 20% improvement is really big.
> > Note that it's possible to load it dynamically with LD_PRELOAD, so
> > maybe we could add an option in the VM config to enable it?
> >
> I'm not 100% sure if QEMU copes well with preloading it via the dynamic
> linker as is, or if we need to hard-disable malloc_trim support for it
> then. Currently, with the "system" allocator (glibc), malloc_trim is
> called (semi-)periodically via call_rcu_thread - and at least QEMU's
> meson build system config disables malloc_trim for tcmalloc or jemalloc.
>
>
> Or did you already test this directly on QEMU, not just rbd bench? As
> then I'd be open to adding some tuning config with an allocator
> sub-property in there to our CFGs.
>
I have tried it directly in QEMU, with:
"
my $run_qemu = sub {
    PVE::Tools::run_fork sub {
        $ENV{LD_PRELOAD} = "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4";
        PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid",
            %systemd_properties);
        # ... rest of the sub unchanged
"
I really don't know much about malloc_trim; the initial discussion about it
is here:
https://patchwork.ozlabs.org/project/qemu-devel/patch/1510899814-19372-1-git-send-email-yang.zhong@intel.com/
And indeed, it's disabled when building with tcmalloc/jemalloc, but I don't
know how that interacts with dynamic loading. In any case, I don't get any
crash or segfault.
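For what it's worth, a quick way to check that the preload actually took
effect is to scan the running process's maps (a sketch; the QEMU pid is
assumed to be known):

    #!/usr/bin/perl
    # Sketch: verify the preloaded allocator is mapped into a running
    # QEMU process by scanning /proc/<pid>/maps.
    use strict;
    use warnings;

    my $pid = shift // die "usage: $0 <qemu-pid>\n";
    open(my $maps, '<', "/proc/$pid/maps")
        or die "cannot open /proc/$pid/maps: $!\n";
    my @hits = grep { /libtcmalloc/ } <$maps>;
    print @hits ? "tcmalloc is mapped\n" : "tcmalloc is NOT mapped\n";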