From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 13 Apr 2026 19:13:12 +0800
Subject: Re: [PATCH pve-qemu 0/2] Re-enable tcmalloc as the memory allocator
From: "Kefu Chai"
To: "Fiona Ebner" , "DERUMIER, Alexandre" , "pve-devel@lists.proxmox.com"
References: <20260410043027.3621673-1-k.chai@proxmox.com> <4628fcc1c283bc4ae80f19e6fe8ae922c0968af9.camel@groupe-cyllene.com> <9db86c5e-d382-4eed-a1fd-905e44a259e1@proxmox.com>
In-Reply-To: <9db86c5e-d382-4eed-a1fd-905e44a259e1@proxmox.com>
List-Id: Proxmox VE development discussion
On Mon Apr 13, 2026 at 4:14 PM CST, Fiona Ebner wrote:
> Am 10.04.26 um 12:44 PM schrieb DERUMIER, Alexandre:
>>
>>>> How does the performance change when doing IO within a QEMU guest?
>>>>
>>>> How does this affect the performance for other storage types, like
>>>> ZFS, qcow2 on top of directory-based storages, qcow2 on top of LVM,
>>>> LVM-thin, etc. and other workloads like saving VM state during
>>>> snapshot, transfer during migration, maybe memory hotplug/ballooning,
>>>> network performance for vNICs?

Hi Fiona,

Thanks for the questions.

I traced QEMU's source code. It turns out that guest RAM is allocated
via direct mmap() calls, which completely bypass QEMU's C library
allocator. The path looks like:

-m 4G on the command line:
  memory_region_init_ram_flags_nomigrate()
    qemu_ram_alloc()
      qemu_ram_alloc_internal()
        g_malloc0(sizeof(*new_block))  <-- only the RAMBlock metadata
                                           struct, about 512 bytes
        ram_block_add()
          qemu_anon_ram_alloc()
            qemu_ram_mmap(-1, size, ...)
              mmap_reserve(total)
                mmap(0, size, PROT_NONE, ...)  <-- reserve address space
              mmap_activate(ptr, size, ...)
                mmap(ptr, size, PROT_RW, ...)  <-- actual guest RAM

So the gigabytes of guest memory go straight to the kernel via mmap()
and never touch malloc or tcmalloc.
Only the small RAMBlock metadata structure (~512 bytes per region) goes
through g_malloc0(). In other words, tcmalloc's scope is limited to
QEMU's own working memory: block layer buffers, coroutine stacks,
internal data structures, and so on. It does not touch guest RAM at all.

The inflate/deflate path of the balloon code does involve glib malloc:
virtio_balloon_pbp_alloc() calls bitmap_new() via g_new0() to track
partially-ballooned pages. But these bitmaps track only metadata, their
memory footprint is small, and they are allocated only during infrequent
operations. Hopefully this addresses the balloon concern. If I've missed
something or misread the code, please help point it out.

When it comes to different storage types and workloads, I ran benchmarks
covering some of the scenarios you listed. All comparisons use the same
binary (pve-qemu-kvm 10.1.2-7) with and without
LD_PRELOAD=libtcmalloc.so.4, so the allocator is the only variable.

Storage backends (block layer, via qemu-img bench)

4K I/O, depth=32, io_uring, cache=none, 5M ops per run, best of 3.
Host NVMe (ext4) or LVM-thin backed by NVMe. Reads unless noted.

backend                  glibc ops/s   tcmalloc ops/s   delta
qcow2 on ext4              1,188,495        1,189,343   +0.1%
qcow2 on ext4 (write)      1,036,914        1,036,699    0.0%
raw on ext4                1,263,583        1,277,465   +1.1%
raw on LVM-thin              433,727          433,576    0.0%

raw/LVM-thin is slower because, I think, it actually hits the dm-thin
layer rather than the page cache. The allocator delta is noise in all
cases.

RBD, tested via a local vstart cluster (3 OSDs, bluestore on NVMe):

path                        glibc   tcmalloc   delta
qemu-img bench + librbd    19,111     19,156   +0.2%
rbd bench (librbd direct)  35,329     36,622   +3.7%

I don't have ZFS configured on this host, but QEMU's file I/O path to
ZFS goes through the same code as ext4, so the difference is in the
kernel. I'd expect the same null result, though I could be wrong and am
happy to set up a test pool if you'd like to see the numbers.
Guest I/O (Debian 13 guest, virtio-blk)

Ran dd inside a Debian 13 cloud image guest (4 vCPU, 2 GB RAM, qcow2 on
ext4, cache=none, aio=native) against a second virtio disk (/dev/vdb,
8 GB qcow2).

workload                                       glibc      tcmalloc   delta
dd if=/dev/vdb bs=4M count=1024              15.3 GB/s   15.5 GB/s   +1.3%
  iflag=direct (sequential)
8x parallel dd if=/dev/vdb bs=4k count=100k   787 MB/s    870 MB/s  +10.5%
  iflag=direct (each starting at a
  different offset)

Migration and savevm

Tested with a 4 GB guest where about 2 GB of RAM was dirtied (filled
/dev/shm with urandom, 8x256 MB) before triggering each operation.

scenario               glibc   tcmalloc   delta
migrate (exec: URI)   0.622 s    0.622 s    0.0%
savevm (qcow2 snap)   0.503 s    0.504 s   +0.2%

What I didn't measure

I didn't test ZFS/vNIC throughput: the host path lives in the kernel,
while QEMU only handles the control plane, so I'd expect very little
allocator impact there, though I could be wrong. I also didn't test
memory hotplug or ballooning because, as explained above, these are
rare one-shot operations, and as the source code trace shows, malloc is
not involved. But again, happy to look into it if it's still a concern.

>>
>> Hi Fiona,
>>
>> I'm stil running in production (I have keeped tcmalloc after the
>> removal some year ago from the pve build), and I didn't notice problem.
>> (but I still don't use pbs).
>>
>> But I never have done bench with/without it since 5/6 year.

And thanks Alexandre for sharing your production experience, that's
very valuable context.

>>
>> Maybe vm memory should be checked too, I'm thinking about RSS memory
>> with balloon free_page_reporting, to see if it's correcting freeing
>> memory.

For free_page_reporting, the VIRTIO_BALLOON_F_REPORTING path:

virtio_balloon_handle_report()
  ram_block_discard_range()
    madvise(host_startaddr, length, QEMU_MADV_DONTNEED)

This operates directly on the mmap'ed guest RAM region. Still no malloc
involvement anywhere in this path.
In short, the tests above reveal a consistent pattern: tcmalloc helps
where allocation pressure is high, and is neutral everywhere else.

Please let me know if you'd like more data on any specific workload, or
if there is anything I overlooked. I appreciate the careful review and
insights.

>
> Thanks! If it was running fine for you for this long, that is a very
> good data point :) Still, testing different scenarios/configurations
> would be nice to rule out that there is a (performance) regression
> somewhere else.

cheers,
Kefu