From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 0/2] move qemu_img_create to common helpers and enable preallocation on backed images
Date: Tue, 27 May 2025 10:49:00 +0200
Message-ID: <59aa0b2f-bae3-4ba5-9b01-e1c74eaf5b56@proxmox.com>
In-Reply-To: <mailman.561.1747922029.394.pve-devel@lists.proxmox.com>

On 22.05.25 at 15:53, Alexandre Derumier via pve-devel wrote:
> This is part of my work on qcow2 external snapshots, but it could also improve the current qcow2 linked clones.
> 
> This patch series moves qemu_img_create to the common helpers,
> and enables preallocation on backed images to increase performance.
> 
> This requires extended_l2=on on the backed image.
>
> I haven't enabled it for base images, as I think Fabian saw a performance regression some months ago,
> but I don't see a performance difference in my benchmarks. (Could you test again on your side?)
> 
> It could help to reduce the qcow2 overhead on disk,
> and allow keeping more metadata in memory for bigger images (QEMU's default l2_cache_size is 1MB):
> https://www.ibm.com/products/tutorials/how-to-tune-qemu-l2-cache-size-and-qcow2-cluster-size
> Maybe more tests with bigger images (>1TB) could be done too, to see if it helps.
> 
> I have done some tests with subcluster allocation and a base image without
> a backing file; indeed, I'm seeing a small performance degradation on a big
> 1TB image.
> 
> With a 30GB image, I'm at around 22000 iops for 4k randwrite/randread (with
> or without extended_l2=on).
> 
> With a 1TB image, the results are different:
> 
> 
> fio --filename=/dev/sdb --direct=1 --rw=randwrite --bs=4k --iodepth=32
> --ioengine=libaio --name=test
> 
> default l2-cache-size (32MB), extended_l2=off, cluster_size=64k: 2700 iops
> default l2-cache-size (32MB), extended_l2=on, cluster_size=128k: 1500 iops

It was not Fabian but me who reported the regression regarding read
performance and initial-allocation performance back then:
https://lore.proxmox.com/pve-devel/d5e11d01-f54e-4dd9-b1c0-a02077a0c65f@proxmox.com/
However, the space usage on the underlying storage is greatly improved.
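
For reference, the creation options under discussion would look roughly like
this (an untested sketch with example paths and sizes, not necessarily the
exact options the series ends up using):

  qemu-img create -f qcow2 -o extended_l2=on,cluster_size=128k base.qcow2 1T
  qemu-img create -f qcow2 -b base.qcow2 -F qcow2 \
      -o extended_l2=on,cluster_size=128k overlay.qcow2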

> I have also played with the QEMU l2-cache-size drive option (the default
> value is 32MB, which is not enough to keep all the metadata in memory for a
> 1TB image):
> https://github.com/qemu/qemu/commit/80668d0fb735f0839a46278a7d42116089b82816
> 
> 
> l2-cache-size=8MB, extended_l2=off, cluster_size=64k: 2900 iops
> l2-cache-size=64MB, extended_l2=off, cluster_size=64k: 5100 iops
> l2-cache-size=128MB, extended_l2=off, cluster_size=64k: 22000 iops
> 
> l2-cache-size=8MB, extended_l2=on, cluster_size=128k: 2000 iops
> l2-cache-size=64MB, extended_l2=on, cluster_size=128k: 4500 iops
> l2-cache-size=128MB, extended_l2=on, cluster_size=128k: 22000 iops
> 
> 
> So there is no difference in the required cache memory, with or without extended_l2.
> 
> But l2-cache-size tuning is really something we should add in another
> patch, I think, for general qcow2 performance.

If we want to enable extended_l2=on, cluster_size=128k by default for
all new qcow2 images, I think we should do it together with an increased
l2-cache-size then. But yes, should be its own patch. The above results
sound promising, but we'll need to test a bigger variety of workloads.
If we don't find settings that improve most workloads, we can still make
it configurable.
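
For reference, the break-even point matches the qcow2 L2 metadata math: a
standard L2 entry is 8 bytes per cluster and an extended one is 16 bytes, so
covering a fully mapped 1TB image needs the same cache either way:

  1 TiB / 64 KiB clusters  *  8 bytes = 128 MiB L2 cache
  1 TiB / 128 KiB clusters * 16 bytes = 128 MiB L2 cache

If we do tune it, it would be via the l2-cache-size runtime option, e.g.
(illustrative value only, not a proposed default):

  -drive file=vm-disk.qcow2,format=qcow2,l2-cache-size=128M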


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
