public inbox for pve-user@lists.proxmox.com
* [PVE-User] Again on ZFS strange performance...
@ 2022-07-21 12:10 Marco Gaiarin
From: Marco Gaiarin @ 2022-07-21 12:10 UTC (permalink / raw)
  To: pve-user


Situation: DELL PowerEdge T440, 64GB RAM, Intel(R) Xeon(R) Silver 4208 CPU @
2.10GHz, 16 cores.

On the server we have created a 'data' pool, backed by an SSD log and cache:

 root@pppve2:~# zpool status rpool-data
   pool: rpool-data
  state: ONLINE
 config:

	NAME                                        STATE     READ WRITE CKSUM
	rpool-data                                  ONLINE       0     0     0
	  raidz1-0                                  ONLINE       0     0     0
	    ata-HGST_HUS726T4TALA6L0_V1KNZMLL       ONLINE       0     0     0
	    ata-HGST_HUS726T4TALA6L0_V1KNY60L       ONLINE       0     0     0
	    ata-HGST_HUS726T4TALA6L0_V1KNY6AL       ONLINE       0     0     0
	logs	
	  ata-MZ7KH480HAHQ0D3_S5CNNA0RA29699-part1  ONLINE       0     0     0
	cache
	  ata-MZ7KH480HAHQ0D3_S5CNNA0RA29699-part2  ONLINE       0     0     0

 errors: No known data errors
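
For reference, a sketch of the 'zpool create' invocation that would produce
this layout; the by-id device names are taken from the output above, the rest
is a reconstruction, not necessarily the exact command we used:

  zpool create rpool-data \
    raidz1 \
      /dev/disk/by-id/ata-HGST_HUS726T4TALA6L0_V1KNZMLL \
      /dev/disk/by-id/ata-HGST_HUS726T4TALA6L0_V1KNY60L \
      /dev/disk/by-id/ata-HGST_HUS726T4TALA6L0_V1KNY6AL \
    log   /dev/disk/by-id/ata-MZ7KH480HAHQ0D3_S5CNNA0RA29699-part1 \
    cache /dev/disk/by-id/ata-MZ7KH480HAHQ0D3_S5CNNA0RA29699-part2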


On the same server we have set up an NFS server that uses the 'rpool-data'
pool space, and we are migrating disks (only the disks, not the VMs) from an
older PVE infrastructure to this NFS server (and thus onto this pool).
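
For completeness, a sketch of how the NFS side can be wired up; the dataset
name 'rpool-data/nfs' and the storage name 'nfs-pppve2' are placeholders,
not necessarily what we actually used:

  # on pppve2: a dataset on the pool, shared over NFS
  zfs create rpool-data/nfs
  zfs set sharenfs=on rpool-data/nfs     # or an entry in /etc/exports

  # on the old PVE cluster: add the export as an 'images' storage
  pvesm add nfs nfs-pppve2 --server pppve2 --export /rpool-data/nfs --content images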

Moving disks onto the NFS server runs at 'wirespeed' (gigabit), i.e. 50-60
MByte/s, with little or no load on the destination server: iodelay under 10%,
load around 4-6.
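
The figures above can be watched live during a transfer with the standard
tools, for example:

  zpool iostat -v rpool-data 5   # per-vdev throughput; shows log/cache activity
  uptime                         # load average
  # 'iodelay' is the IO delay graph in the PVE GUI, i.e. CPU iowait
  # (also visible as the 'wa' column in vmstat)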


After moving the disks, I manually moved the <ID>.conf machine configuration
to the new infrastructure, then started the VM.
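
Concretely, something along these lines (the <ID> placeholder stays as in the
text, 'oldnode' is a placeholder too, paths are the standard PVE ones):

  # copy the VM config from the old node, then start the VM on the new one
  scp oldnode:/etc/pve/qemu-server/<ID>.conf /etc/pve/qemu-server/<ID>.conf
  qm start <ID>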


After that, I moved the disks from the NFS server to the zpools; the (small)
system disk moved from NFS to 'local-zfs' (an all-SSD ZFS pool) in a few
seconds.

I've tried to move the data disk (2 TB) from NFS to 'rpool-data' (the CLI
equivalent is sketched right after the list below) and:

1) if I set NO limit on the disk move, the system gets thrashed almost
 instantly (load 40, iodelay 60-80%).

2) if I set a 25 MB/s limit, I can copy roughly 100 GB; after that, probably
 because the SSD cache gets full, I get load 20 and iodelay 50-60%: the
system is thrashed but still usable.

3) now I have set a 10 MB/s limit, and it seems to work, slowly but with no
 side effects. But it is slow...
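
As anticipated, this is roughly the CLI equivalent of the moves above; the
VM id and the disk name (scsi1) are placeholders, and --bwlimit is in KiB/s:

  qm move_disk <ID> scsi1 rpool-data --delete 1                  # 1) no limit
  qm move_disk <ID> scsi1 rpool-data --delete 1 --bwlimit 25600  # 2) ~25 MB/s
  qm move_disk <ID> scsi1 rpool-data --delete 1 --bwlimit 10240  # 3) ~10 MB/s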


But I am still a bit confused... why can I write to the pool via NFS at
60 MB/s, but have to throttle a local disk move down to 10 MB/s?!
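
My guess, still to be verified, is that the two paths use the SSD log and
cache differently (NFS writes being sync, the local move mostly async); the
relevant bits can be watched with something like:

  zfs get sync,recordsize,compression rpool-data  # sync policy of the pool
  zpool iostat -v rpool-data 5     # 'logs' and 'cache' rows during a move
  arcstat 5                        # ARC hits/misses while the move runs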

-- 
  ``... Memory truly counts only if it holds together the imprint of the
  present and the plan for the future, if it lets us do without forgetting
  what we wanted to do, become without ceasing to be,
  be without ceasing to become...''		(Italo Calvino)


