From: Fiona Ebner <f.ebner@proxmox.com>
To: Maximiliano Sandoval <m.sandoval@proxmox.com>
Cc: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH docs] storage: note that qcow2 internal snapshots are inefficient
Date: Mon, 23 Mar 2026 11:27:02 +0100	[thread overview]
Message-ID: <d31fc3b2-1167-47ae-98e5-830817ffb606@proxmox.com> (raw)
In-Reply-To: <s8otsu6c1ep.fsf@toolbox>

On 23.03.26 at 10:58 AM, Maximiliano Sandoval wrote:
> Fiona Ebner <f.ebner@proxmox.com> writes:
>> On 28.11.25 at 4:56 PM, Fiona Ebner wrote:
>>> It's a commonly reported issue, most recently again in the enterprise
>>> support, that taking or removing snapshots of large qcow2 files on
>>> file-based network storages can take a very long time. Add a note
>>> about this limitation.
>>>
>>> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
>>> ---
>>>  pvesm.adoc | 5 +++++
>>>  1 file changed, 5 insertions(+)
>>>
>>> diff --git a/pvesm.adoc b/pvesm.adoc
>>> index d36baf8..165b446 100644
>>> --- a/pvesm.adoc
>>> +++ b/pvesm.adoc
>>> @@ -88,6 +88,11 @@ block device functionality.
>>>  
>>>  ^2^: On file based storages, snapshots are possible with the 'qcow2' format,
>>>  either using the internal snapshot function, or snapshots as volume chains^4^.
>>> +Creating and deleting internal 'qcow2' snapshots will block a running VM and
>>> +is not an efficient operation. The performance is particularly bad with network
>>> +storages like NFS. On some setups and for large disks (multiple hundred GiB or
>>> +TiB sized), these operations may take several minutes, or in extreme cases, even
>>> +hours.
>>>  
>>>  ^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
>>>  That way you get a `shared` LVM storage
>>
>> Ping
> 
> In my experience, users mostly run into issues when deleting qcow2
> snapshots on NFS, where a single deletion can take up to 10 hours. I
> would personally put more emphasis on this specific combination being
> problematic, and probably mention that in such cases one should delete
> snapshots while the VM is offline.

NFS is already explicitly mentioned as being particularly bad. We also
had reports where creating snapshots on NFS took hours.

Doing it offline doesn't change the fact that it takes that long, but I
guess doing it while offline is still better, because being paused for a
long time is not that nice from a guest perspective.

I'll add the following in v2:
"If your setup is affected, create and remove snapshots while the VM is
shut down, expecting a long task duration."
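
For reference, the internal snapshot operations being discussed can be
exercised directly with `qemu-img` against a stopped VM's disk image. This is
only a sketch: the image path below is a made-up example, and `qemu-img
snapshot` must never be run while a QEMU process still has the image open. On
Proxmox VE, `qm snapshot`/`qm delsnapshot` would normally be used instead.

```shell
#!/bin/sh
# Hypothetical image path on an NFS-backed directory storage; adjust to
# your setup. The VM must be shut down before running any of these.
IMG=/mnt/pve/nfs-storage/images/100/vm-100-disk-0.qcow2

# List the internal snapshots stored inside the qcow2 file:
qemu-img snapshot -l "$IMG"

# Create an internal snapshot named "before-upgrade":
qemu-img snapshot -c before-upgrade "$IMG"

# Delete it again; on large images on network storage this is the step
# that can take a very long time:
qemu-img snapshot -d before-upgrade "$IMG"
```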




Thread overview: 4 messages
2025-11-28 15:56 Fiona Ebner
2026-03-23  9:47 ` Fiona Ebner
2026-03-23  9:58   ` Maximiliano Sandoval
2026-03-23 10:27     ` Fiona Ebner [this message]
