Message-Id: <c9aa3de9-01e8-4728-8344-6452c1bb0f3e@app.fastmail.com>
Date: Tue, 27 Dec 2022 18:54:16 +0100
From: Óscar de Arriba <oscar@dearriba.es>
To: pve-user@lists.proxmox.com
Subject: [PVE-User] Thin LVM showing more used space than expected

Hello all,

Since about a week ago, the data LVM on one of my Proxmox nodes has been doing strange things.

For storage, I'm using a consumer-grade Crucial MX500 SATA SSD connected directly to the motherboard controller (no PCIe HBA for the system+data disk). The drive is brand new, S.M.A.R.T. checks are passing, and it reports only 4% wearout. The node is part of a Proxmox cluster, uses LVM-thin for VM disks, and backs up to an external NFS location.
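For reference, the wearout figure comes from the drive's S.M.A.R.T. data; a check along these lines shows it (the device name /dev/sda is just an example, adjust it to the actual SSD):

`# print SMART data and filter for the health verdict and wear/lifetime attributes
smartctl -a /dev/sda | grep -i -E 'overall-health|percent|wear'`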

Last week I tried to migrate a stopped VM of ~64 GiB from one server to another, and found that *the SSD started to underperform (~5 MB/s) after roughly 55 GiB had been copied* (this pattern repeated several times).
It was so bad that *even after cancelling the migration, the SSD stayed busy writing at that speed and I had to reboot the node, as it was completely unusable* (it is in my homelab, not running mission-critical workloads, so that was acceptable). After the reboot, I could remove the half-copied VM disk.
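For what it's worth, this is roughly how the drive staying busy after the cancel can be observed (iostat comes from the sysstat package; watch the wkB/s and %util columns for the SSD):

`# extended per-device I/O statistics, refreshed every second
iostat -x 1`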

After that (and several retries, including making a backup to external storage and restoring it, in case the bottleneck was in the migration process itself), I ended up creating the VM from scratch and migrating data from one VM to the other, so the new VM was created clean and no bottleneck was hit.

The problem is that *the pve/data thin pool now reports 96.13% of its 377.55 GiB used (~363 GiB), while the total size of the stored VM disks, even if they were 100% provisioned, is only 168 GiB*. I checked, and neither VM has any snapshots.
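The kind of listing that shows this is something like the following (the Origin column is empty when a volume has no snapshots, and Data% shows how much of each thin LV is actually mapped in the pool):

`lvs -a -o lv_name,lv_size,origin,data_percent,pool_lv pve`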

I don't know whether rebooting while the disk was being written to (always after cancelling the migration first) damaged the LV in some way. On reflection, it does not even make sense for an SSD of this type to end up writing at 5 MB/s, even with its write cache full; it should write far faster than that even without the cache.

Some information about the storage:

`root@venom:~# lvs -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz-- 377.55g             96.13  1.54                           
  [data_tdata]    pve Twi-ao---- 377.55g                                                   
  [data_tmeta]    pve ewi-ao----  <3.86g                                                   
  [lvol0_pmspare] pve ewi-------  <3.86g                                                   
  root            pve -wi-ao----  60.00g                                                   
  swap            pve -wi-ao----   4.00g                                                   
  vm-150-disk-0   pve Vwi-a-tz--   4.00m data        14.06                                 
  vm-150-disk-1   pve Vwi-a-tz-- 128.00g data        100.00                                 
  vm-201-disk-0   pve Vwi-aotz--   4.00m data        14.06                                 
  vm-201-disk-1   pve Vwi-aotz--  40.00g data        71.51`

More details can also be found in the forum post I made a couple of days ago: https://forum.proxmox.com/threads/thin-lvm-showing-more-used-space-than-expected.120051/
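To spell the mismatch out with the numbers from the lvs output above (my own back-of-the-envelope figures):

`# pool:  377.55 GiB * 96.13%              ≈ 363 GiB reported used in pve/data
# LVs:   128 GiB * 100% + 40 GiB * 71.51% ≈ 156.6 GiB actually mapped by the thin volumes
# max:   128 GiB + 40 GiB                  = 168 GiB even if both disks were fully provisioned`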

Any ideas aside from doing a backup and reinstall from scratch?

Thanks in advance!