From: Lukas Sichert <l.sichert@proxmox.com>
To: pve-devel@lists.proxmox.com
Cc: Lukas Sichert <l.sichert@proxmox.com>
Subject: [PATCH manager/storage v2 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs
Date: Thu, 16 Apr 2026 16:37:22 +0200 [thread overview]
Message-ID: <20260416143729.54192-1-l.sichert@proxmox.com> (raw)
Logical volumes (LV) in an LVM (thick) volume group (VG) are
thick-provisioned, but the underlying backing storage can be
thin-provisioned. In particular, this can be the case if the VG resides
on a LUN provided by a SAN via iSCSI/FC/SAS [1], where the LUN may be
thin-provisioned on the SAN side.
In such setups, one usually wants deleting an LV (e.g. a VM disk) to
free up space on the SAN side, especially when using
snapshots-as-volume-chains: snapshot LVs are thick-provisioned LVs from
the LVM point of view, so users may want to over-provision the LUN on
the SAN side.
One option to free up space when deleting an LV is to set
`issue_discards = 1` in the LVM config. With this setting, `lvremove`
will send discards for the regions previously used by the LV, which will
(if the SAN supports it) inform the SAN that the space is not in use
anymore and can be freed up. Since `lvremove` modifies LVM metadata, it
has to be issued while holding a cluster-wide lock on the storage.
Unfortunately, depending on the setup, `issue_discards = 1` can make
`lvremove` take very long for big disks (due to the large number of
discards being issued), so that it eventually hits the 60s timeout of
the cluster lock. The 60s are a hard-coded limit and cannot be easily
changed [2].
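For reference, a minimal sketch of the setting mentioned above as it
would appear in /etc/lvm/lvm.conf (the option lives in the `devices`
section; the comment wording is ours, not from the stock config):

```shell
# /etc/lvm/lvm.conf (excerpt)
devices {
    # Issue discards to an LV's underlying physical volume(s) when the
    # LV no longer uses the space, e.g. on lvremove. On large LVs backed
    # by a thin-provisioned LUN this can make lvremove very slow.
    issue_discards = 1
}
```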
A better option is to use 'blkdiscard'. This issues discards for all
blocks of the device and therefore frees the storage on the SAN [3]. As
this does not require changing any LVM metadata, it can be executed
without holding the cluster-wide storage lock.
There is already a setting for 'saferemove', which zeroes out
to-be-deleted LVs using 'blkdiscard --zeroout' in a worker, but it does
not discard the blocks afterwards. Like plain 'blkdiscard', this does
not require a cluster-wide lock and therefore does not run into the 60s
timeout.
This series extends the 'saferemove' worker so that it can execute
'blkdiscard', 'blkdiscard --zeroout', or both together, and adds an
option to select this in the GUI.
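The three modes the extended worker supports correspond roughly to the
following blkdiscard(8) invocations (the LV device path is a made-up
example, not taken from the patches):

```shell
# Mode 1: discard all blocks of the LV so the SAN can reclaim the space.
blkdiscard /dev/vg0/vm-100-disk-0

# Mode 2: zero-fill instead of discarding (what 'saferemove' does today).
blkdiscard --zeroout /dev/vg0/vm-100-disk-0

# Mode 3: both together -- zero out first, then discard.
blkdiscard --zeroout /dev/vg0/vm-100-disk-0 \
    && blkdiscard /dev/vg0/vm-100-disk-0
```

Since none of these touch LVM metadata, they can run in a long-lived
worker after the short, locked lvremove has completed.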
Changes from v1 to v2 (thanks @Michael, Maximiliano, Fabian):
- add more explicit descriptions in front- and backend, specifically
  mentioning discard (TRIM)
- add a verbose description in the backend explaining the mechanism and
  why it should be used for thin-provisioned storage
- add a forked fallback worker execution to allow other plugins to
  issue workers without these config options
- rename variable issue-blkdiscard -> 'issue-blkdiscard' to conform to
  newer style
[1] https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Storage_boxes_(SAN/NAS)
[2] https://forum.proxmox.com/threads/175849/post-820043
storage:
Lukas Sichert (1):
fix #7339: lvmthick: add worker to free space of to be deleted VMs
src/PVE/Storage.pm | 16 ++++++++++++----
src/PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++--------
2 files changed, 38 insertions(+), 12 deletions(-)
manager:
Lukas Sichert (1):
fix #7339: lvmthick: ui: add UI fields for option to free storage
www/manager6/Utils.js | 3 +++
www/manager6/storage/LVMEdit.js | 15 +++++++++++++++
2 files changed, 18 insertions(+)
Summary over all repositories:
4 files changed, 56 insertions(+), 12 deletions(-)
--
Generated by murpp 0.11.0