From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [RFC storage] work-around #6543: do not use preallocation for qcow2 on top of LVM
Date: Wed, 23 Jul 2025 15:07:58 +0200
Message-ID: <59e8085b-b78e-47a1-83b6-619acf815cd7@proxmox.com>
In-Reply-To: <mailman.23.1753272956.367.pve-devel@lists.proxmox.com>
Hi Alexandre,
On 23.07.25 at 2:15 PM, DERUMIER, Alexandre via pve-devel wrote:
> Hi Fiona, I'm on holiday and can't verify, but before using qemu-img
> measure I had implemented a computation of the metadata size myself (it
> was not 100% perfect).
>
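(For context, such a hand-rolled estimate might look roughly like the sketch below. This is not Alexandre's actual implementation, just a back-of-the-envelope version based on the qcow2 on-disk format: 8-byte L2 entries (16 bytes with extended_l2), 16-bit refcounts, one header cluster, and no snapshots or bitmaps, so it will not match qemu-img measure exactly. All input values are made-up examples.)

```shell
# Rough qcow2 metadata estimate for a 1 GiB image with 64 KiB clusters,
# standard L2 entries (8 bytes; use 16 for extended_l2=on), 16-bit refcounts.
size=$((1 << 30)); cluster=$((64 * 1024)); l2_entry=8; refcount_bits=16
entries_per_l2=$((cluster / l2_entry))
# L2 tables (one cluster each) needed to map the whole virtual size, rounded up
l2_tables=$(( (size + entries_per_l2 * cluster - 1) / (entries_per_l2 * cluster) ))
# L1 table: 8 bytes per L2 table, rounded up to whole clusters
l1_clusters=$(( (l2_tables * 8 + cluster - 1) / cluster ))
# clusters to refcount: data + L2 + L1 + header (roughly)
total_clusters=$(( (size + cluster - 1) / cluster + l2_tables + l1_clusters + 1 ))
refblock_entries=$(( cluster * 8 / refcount_bits ))
refblocks=$(( (total_clusters + refblock_entries - 1) / refblock_entries ))
reftable_clusters=$(( (refblocks * 8 + cluster - 1) / cluster ))
meta=$(( (1 + l1_clusters + l2_tables + refblocks + reftable_clusters) * cluster ))
echo "$meta"
```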
> This is strange, because I thought that "qemu-img measure" was working
> correctly (we need to pass it the cluster size and the l2_extended
> option too). I had tried with a 1 TB qcow2 volume.
>
>
> Note that I'm fairly sure that l2_extended=on needs preallocation. (If
> I remember correctly, it fails without it, and the performance of
> backed volumes is pretty bad with l2_extended.)
Allocation seems to work fine:
Formatting '/dev/sharedlvm/vm-101-disk-0.qcow2', fmt=qcow2
cluster_size=131072 extended_l2=on compression_type=zlib size=1073741824
backing_file=snap_vm-101-disk-0_foobar.qcow2 backing_fmt=qcow2
lazy_refcounts=off refcount_bits=16
Do you mean for the performance benefits?
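(For anyone trying to follow along without a shared LVM setup: a similar overlay can be created against a plain file instead of an LV with something like the sketch below. The paths and the 1G size are made-up example values, the backing file stands in for the snapshot volume, and the commands are skipped gracefully if qemu-img is missing or too old for extended_l2.)

```shell
# Sketch: reproduce a comparable qcow2 overlay on a regular file, not an LV.
command -v qemu-img >/dev/null 2>&1 || { echo "qemu-img not available"; exit 0; }
dir=$(mktemp -d)
# backing image standing in for snap_vm-101-disk-0_foobar.qcow2
qemu-img create -f qcow2 -o cluster_size=131072,extended_l2=on \
    "$dir/base.qcow2" 1G \
    || { echo "extended_l2 unsupported"; rm -rf "$dir"; exit 0; }
# overlay inherits its virtual size from the backing file
qemu-img create -f qcow2 -F qcow2 -b "$dir/base.qcow2" \
    -o cluster_size=131072,extended_l2=on "$dir/overlay.qcow2"
n=$(qemu-img info "$dir/overlay.qcow2" | grep -c 'file format: qcow2')
echo "$n"
rm -rf "$dir"
```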
> It seems to be normal that the preallocation option doesn't make a
> difference here. qemu-img measure always computes the size of all the
> metadata (only cluster_size and l2_extended change the result).
>
>
> I don't understand why disabling preallocation could help here (unless
> it's buggy), because the same amount of metadata needs to be allocated
> later on the fly when preallocation is disabled.
Yes, intuitively, and judging from qemu-img measure, preallocation
shouldn't make a difference, but it seems to. The bug has not been fully
investigated yet, but I can clearly see qcow2 trying to write to an
offset beyond the LV size (which is even slightly larger than what
qemu-img measure reported, because it is rounded up to full extents).
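(To illustrate the rounding: LVM allocates whole physical extents, usually 4 MiB, so the LV ends up slightly larger than the measured requirement. The numbers below are made-up example values, not the ones from the bug report.)

```shell
required=1073872896            # hypothetical byte count from `qemu-img measure`
extent=$((4 * 1024 * 1024))    # default LVM physical extent size: 4 MiB
# LVM rounds the requested size up to a whole number of extents
rounded=$(( (required + extent - 1) / extent * extent ))
echo "$rounded bytes, $((rounded - required)) bytes of slack"
```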
In practice, something is wrong, see the bug report. Friedrich and I
were both able to reproduce the issue, but only with
preallocation=metadata so far. I used a 4 GiB disk on a ZFS storage that
was first filled with zeroes from within the VM (like in the bug
report). Then I moved it offline to qcow2-on-LVM and filled it with
zeroes from within the VM again. The error doesn't happen every time
though; I still need to investigate further.
Best Regards,
Fiona