From: "DERUMIER, Alexandre via pve-devel" <pve-devel@lists.proxmox.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>,
"f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
Subject: Re: [pve-devel] [PATCH qemu-server] qcow2: increase cache-size to 1GB
Date: Thu, 14 Aug 2025 11:10:06 +0000 [thread overview]
Message-ID: <mailman.75.1755169848.385.pve-devel@lists.proxmox.com> (raw)
In-Reply-To: <1755094937.hut7e2rxef.astroid@yuna.none>
>
> This patch increases the cache to 1GB, enough to handle an 8TB image.
>
> fio benchmark, 4k randread/randwrite, default 32MB cache vs 1GB cache:
>
> 256GB image, 32MB cache: 40000 iops
> 1TB image,   32MB cache:  2500 iops
> 8TB image,   32MB cache:  2500 iops
> 1TB image,   1GB cache:  40000 iops
> 8TB image,   1GB cache:  40000 iops
>
>
>>have you benchmarked this?
Yes, the results are in the commit message (2500 -> 40000 iops with a 1TB
image and a 64k cluster size; same results with a 128k cluster size).
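
For context, these numbers follow directly from the qcow2 format: each 8-byte
L2 entry maps one cluster, so covering the whole virtual disk needs
virtual_size / cluster_size * 8 bytes of L2 cache. A small Python sketch with
the sizes from the commit message:

# Rough L2 cache sizing for qcow2: one 8-byte L2 entry per data cluster.
L2_ENTRY_SIZE = 8  # bytes per standard (non-extended) L2 entry

def l2_cache_needed(virtual_size, cluster_size):
    """Bytes of L2 cache needed to cover the whole virtual disk."""
    return (virtual_size // cluster_size) * L2_ENTRY_SIZE

KiB, GiB, TiB = 1024, 1024 ** 3, 1024 ** 4
for size in (256 * GiB, 1 * TiB, 8 * TiB):
    for cluster in (64 * KiB, 128 * KiB):
        mib = l2_cache_needed(size, cluster) // (1024 * 1024)
        print(f"{size // GiB:5d} GiB image, {cluster // KiB:3d}k cluster: {mib:5d} MiB L2 cache")

# 8 TiB with 64k clusters needs exactly 1024 MiB, hence the proposed 1GB cache;
# the default 32MB only covers 256 GiB at 64k clusters, matching the iops drop above.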
>> if so, did you compare it with using the
>>smaller cache-entry variant described in the file you linked:
I haven't tested it. (To be honest, I don't fully understand that part, but
the default l2-cache-entry-size is already smaller: it's 4k, while we use a
64k or 128k cluster size.)
>>we also know the image size here, so we could use a capped, derived
>>value?
>>
>>what if the disk is resized?
One problem is disk resize, because the cache size can't be increased without
a restart. That's why I think it's better to use a big cache size. (It's
really a maximum value.)
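
Just to make the quoted suggestion concrete: a rough sketch of what a capped,
derived value could look like (the helper, node name and file path are made up
for illustration; cache-size is the existing qcow2 runtime option that caps the
combined L2/refcount cache). As said above, any derived value would need
headroom for a later resize, since it cannot be raised without a restart:

KiB, MiB, GiB = 1024, 1024 ** 2, 1024 ** 3

def derived_cache_size(virtual_size, cluster_size=64 * KiB, cap=1 * GiB):
    """Hypothetical helper: derive a qcow2 cache-size from the image size.

    Rounds the exact L2 metadata size up to the next MiB and clamps it
    between the 32MB default and the proposed 1GB maximum.
    """
    exact = (virtual_size // cluster_size) * 8
    rounded = ((exact + MiB - 1) // MiB) * MiB
    return min(max(rounded, 32 * MiB), cap)

# Illustrative blockdev options (node name and path are examples only):
blockdev = {
    "driver": "qcow2",
    "node-name": "drive-scsi0",
    "file": {"driver": "file", "filename": "/var/lib/vz/images/100/vm-100-disk-0.qcow2"},
    "cache-size": derived_cache_size(1 * (1024 ** 4)),  # 1 TiB image -> 128 MiB
}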
>> what about image files with bigger clusters?
I have tried bigger cluster sizes (so less metadata and less memory), but
snapshot performance is not great. For example, with a 1MB cluster the
sub-cluster on a snapshot is 32KB (vs a 4k sub-cluster with a 128k cluster),
so a 4k write needs to rewrite 32KB.
Maybe extended_l2=on on the main image could reduce the needed cache memory,
but from my tests it doesn't seem to help; I still need to increase the cache.
(I'll try to retest it.)
There is some good info in the subcluster allocation article (including in the
video presentation):
https://blogs.igalia.com/berto/2020/12/03/subcluster-allocation-for-qcow2-images/
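
To put numbers on the cluster-size trade-off (using the extended-L2 layout
described in that article: 32 sub-clusters per cluster, 16-byte L2 entries
instead of 8), a small sketch:

KiB, MiB, TiB = 1024, 1024 ** 2, 1024 ** 4
SUBCLUSTERS_PER_CLUSTER = 32   # fixed by the qcow2 extended-L2 format

def cow_unit(cluster_size, extended_l2):
    """Smallest amount a 4k write on top of a snapshot has to copy."""
    return cluster_size // SUBCLUSTERS_PER_CLUSTER if extended_l2 else cluster_size

def l2_metadata(virtual_size, cluster_size, extended_l2):
    """Total L2 metadata (and therefore cache needed) to cover the image."""
    entry = 16 if extended_l2 else 8   # extended entries carry the sub-cluster bitmap
    return (virtual_size // cluster_size) * entry

for cluster in (128 * KiB, 1 * MiB):
    print(f"{cluster // KiB:5d}k cluster, extended_l2=on: "
          f"COW unit {cow_unit(cluster, True) // KiB}k, "
          f"L2 cache for 8 TiB: {l2_metadata(8 * TiB, cluster, True) // MiB} MiB")

# 128k clusters with extended_l2: 4k COW unit, but still 1 GiB of L2 metadata for
# 8 TiB (same as 64k plain clusters -- the doubled entry size cancels the doubled
# cluster, which matches the "doesn't seem to help" observation above);
# 1M clusters: only 128 MiB of L2 metadata, but a 4k write behind a snapshot copies 32k.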