public inbox for pve-devel@lists.proxmox.com
* [pve-devel] qcow2 internal snapshot: 4k write speed reduce to 100~200 iops on ssd ?
@ 2024-08-30 16:01 DERUMIER, Alexandre via pve-devel
  2024-09-11  9:11 ` Fiona Ebner
  0 siblings, 1 reply; 3+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2024-08-30 16:01 UTC (permalink / raw)
  To: pve-devel; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 14195 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: qcow2 internal snapshot: 4k write speed reduce to 100~200 iops on ssd ?
Date: Fri, 30 Aug 2024 16:01:44 +0000
Message-ID: <fbddd72caac9d3474d63bdfcdb34bc515059ad70.camel@groupe-cyllene.com>

Hi,

I was doing tests with gfs2 && ocfs2,

and I noticed 4k randwrite performance dropping from 20000 iops to
100~200 iops when a snapshot is present.

I thought it was related to gfs2 && ocfs2 allocation,
but I can also reproduce it with a simple qcow2 file on
a local ssd drive.

Is this expected?

(I haven't used qcow2 snapshots in 10 years, so I really don't
 remember the performance.)


Using an external snapshot file, I get around 10000 iops.

Test inside the VM:

fio --name=test --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --bs=4k --rw=randwrite --numjobs=64
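
For comparison, the snapshot overhead can also be observed from the host
through the qcow2 format layer itself, without a guest. A minimal sketch
(my own, not from this mail), assuming qemu-img is installed and /ssd is
a directory on the SSD under test:

```shell
# create a fresh qcow2 image (path and size are example values)
qemu-img create -f qcow2 /ssd/test.qcow2 10G

# 4k writes through the qcow2 layer, O_DIRECT, before any snapshot
qemu-img bench -w -f qcow2 -t none -s 4k -d 64 -c 100000 /ssd/test.qcow2

# create an internal snapshot: overwriting a cluster now requires
# copy-on-write plus the associated metadata updates
qemu-img snapshot -c snap1 /ssd/test.qcow2

# repeat the same benchmark with the snapshot present and compare
qemu-img bench -w -f qcow2 -t none -s 4k -d 64 -c 100000 /ssd/test.qcow2
```

Note that qemu-img bench issues sequential I/O, so it is only a rough
proxy for the randwrite test inside the VM, but it isolates the qcow2
copy-on-write cost from the guest and filesystem layers.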



[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


end of thread, other threads:[~2024-09-19 13:51 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-30 16:01 [pve-devel] qcow2 internal snapshot: 4k write speed reduce to 100~200 iops on ssd ? DERUMIER, Alexandre via pve-devel
2024-09-11  9:11 ` Fiona Ebner
2024-09-19 13:50   ` DERUMIER, Alexandre via pve-devel

Service provided by Proxmox Server Solutions GmbH