From: Marco Gaiarin <gaio@lilliput.linux.it>
To: pve-user@lists.proxmox.com
Subject: [PVE-User] Again on ZFS strange performance...
Date: Thu, 21 Jul 2022 14:10:09 +0200
Message-ID: <v9snqi-hel.ln1@hermione.lilliput.linux.it>


Situation: DELL PowerEdge T440, 64GB RAM, Intel(R) Xeon(R) Silver 4208 CPU @
2.10GHz, 16 cores.

On this server we have created a 'data' pool, backed by an SSD that provides
both a log (SLOG) and a cache (L2ARC) device:

 root@pppve2:~# zpool status rpool-data
   pool: rpool-data
  state: ONLINE
 config:

	NAME                                        STATE     READ WRITE CKSUM
	rpool-data                                  ONLINE       0     0     0
	  raidz1-0                                  ONLINE       0     0     0
	    ata-HGST_HUS726T4TALA6L0_V1KNZMLL       ONLINE       0     0     0
	    ata-HGST_HUS726T4TALA6L0_V1KNY60L       ONLINE       0     0     0
	    ata-HGST_HUS726T4TALA6L0_V1KNY6AL       ONLINE       0     0     0
	logs	
	  ata-MZ7KH480HAHQ0D3_S5CNNA0RA29699-part1  ONLINE       0     0     0
	cache
	  ata-MZ7KH480HAHQ0D3_S5CNNA0RA29699-part2  ONLINE       0     0     0

 errors: No known data errors
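
Roughly, a pool with that layout can be created along these lines; take the
exact command as a sketch, the device paths are simply the by-id names from
the status output above:

  # raidz1 over the three HGST disks, plus SLOG and L2ARC partitions
  # on the Samsung SSD (paths under /dev/disk/by-id/)
  zpool create rpool-data \
    raidz1 \
      /dev/disk/by-id/ata-HGST_HUS726T4TALA6L0_V1KNZMLL \
      /dev/disk/by-id/ata-HGST_HUS726T4TALA6L0_V1KNY60L \
      /dev/disk/by-id/ata-HGST_HUS726T4TALA6L0_V1KNY6AL \
    log   /dev/disk/by-id/ata-MZ7KH480HAHQ0D3_S5CNNA0RA29699-part1 \
    cache /dev/disk/by-id/ata-MZ7KH480HAHQ0D3_S5CNNA0RA29699-part2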


On the server we have set up an NFS server that uses the 'rpool-data' pool
space, and we are migrating disks (just the disks, not the VMs) from an older
PVE infrastructure into this NFS share (and thus onto this pool).
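
Roughly, the setup amounts to something like this (dataset name, storage id,
IP and subnet below are only placeholders):

  # share a dataset of the pool over NFS from the new server
  zfs create rpool-data/migration
  zfs set sharenfs='rw=@192.168.1.0/24' rpool-data/migration

  # on the old PVE infrastructure, attach that export as a storage
  pvesm add nfs nfs-migration --server 192.168.1.10 \
    --export /rpool-data/migration --content images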

Moving disks into the NFS server proceeds at 'wirespeed' (gigabit), i.e. 50-60
MByte/s, with little or no load on the destination server: iodelay under 10%,
load around 4-6.


After moving the disks, I manually moved the <ID>.conf machine configuration
to the new infrastructure, then started the VM.
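
In practice that is just (with <ID> the VM number, and assuming the old
config was copied over by hand, e.g. with scp):

  # put the copied config in place on the new node and start the VM;
  # storage names referenced inside the .conf must match the new setup
  cp /root/<ID>.conf /etc/pve/qemu-server/<ID>.conf
  qm start <ID>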


After that, I moved the disks from the NFS server to the zpools; the (small)
system disk went from NFS to 'local-zfs' (an all-SSD ZFS pool) in a few
seconds.
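
From the CLI, that kind of storage move is roughly the following (scsi0 is
just an example of the disk name, it depends on how the VM is configured):

  # move the small system disk from the NFS storage to the SSD pool,
  # deleting the source image afterwards
  qm move-disk <ID> scsi0 local-zfs --delete 1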

I've tried to move the data disk (2TB) from NFS to 'rpool-data' and:

1) if I set NO limit on the disk move, the system gets thrashed almost
 instantly (load 40, iodelay 60-80%).

2) if I set a 25MB/s limit, I can copy roughly 100G; after that, probably
 because the SSD cache gets full, load climbs to 20 and iodelay to 50-60%:
 the system is thrashed but still usable.

3) now I've set a 10MB/s limit (how the limit is applied is shown just
 below) and it seems to work, slowly but with no side effects. But it is
 slow...
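
The limit was set in the move-disk dialog; from the CLI the same thing would
look roughly like this (--bwlimit is in KiB/s, so ~10 MB/s is about 10240;
scsi1 is just an example of the disk name):

  # move the 2TB data disk to rpool-data, capped at ~10 MB/s
  qm move-disk <ID> scsi1 rpool-data --delete 1 --bwlimit 10240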


But still I'm a bit confused... why can I write to the pool via NFS at
60MB/s, but have to move a disk onto it at only 10MB/s?!
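
If it helps, while a move is running I can watch per-device traffic on the
pool, including the log and cache SSD partitions, with something like:

  # per-vdev read/write statistics, refreshed every 5 seconds
  zpool iostat -v rpool-data 5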

-- 
  ``... Memory truly counts only if it holds together the imprint of the
  present and the plan for the future, if it allows one to act without
  forgetting what one wanted to do, to become without ceasing to be,
  to be without ceasing to become...''		(Italo Calvino)


