From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from firstgate.proxmox.com (firstgate.proxmox.com [IPv6:2a01:7e0:0:424::9]) by lore.proxmox.com (Postfix) with ESMTPS id 417921FF13C for ; Thu, 16 Apr 2026 16:37:43 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1]) by firstgate.proxmox.com (Proxmox) with ESMTP id 25C6E8D36; Thu, 16 Apr 2026 16:37:43 +0200 (CEST)
From: Lukas Sichert
To: pve-devel@lists.proxmox.com
Subject: [PATCH manager/storage v2 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs
Date: Thu, 16 Apr 2026 16:37:22 +0200
Message-ID: <20260416143729.54192-1-l.sichert@proxmox.com>
X-Mailer: git-send-email 2.47.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Bm-Milter-Handled: 55990f41-d878-4baa-be0a-ee34c49e34d2
X-Bm-Transport-Timestamp: 1776350179704
X-SPAM-LEVEL: Spam detection results: 0
	AWL 1.182 Adjusted score from AWL reputation of From: address
	BAYES_00 -1.9 Bayes spam probability is 0 to 1%
	DMARC_MISSING 0.1 Missing DMARC policy
	KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
	SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record
	SPF_PASS -0.001 SPF: sender matches SPF record
Message-ID-Hash: YRBVBSKZTGNYVZVXNGU5TRHTRDOFGAEW
X-Message-ID-Hash: YRBVBSKZTGNYVZVXNGU5TRHTRDOFGAEW
X-MailFrom: l.sichert@proxmox.com
X-Mailman-Rule-Misses: dmarc-mitigation; no-senders; approved; loop; banned-address; emergency; member-moderation; nonmember-moderation; administrivia; implicit-dest; max-recipients; max-size; news-moderation; no-subject; digests; suspicious-header
CC: Lukas Sichert
X-Mailman-Version: 3.3.10
Precedence: list
List-Id: Proxmox VE development discussion
List-Help:
List-Owner:
List-Post:
List-Subscribe:
List-Unsubscribe:

Logical volumes (LV) in an LVM (thick) volume group (VG) are thick-provisioned, but the underlying backing storage can be thin-provisioned.
In particular, this can be the case if the VG resides on a LUN provided by a SAN via iSCSI/FC/SAS [1], where the LUN may be thin-provisioned on the SAN side. In such setups, one usually wants deleting an LV (e.g. a VM disk) to free up space on the SAN side, especially when using snapshots-as-volume-chains: snapshot LVs are thick-provisioned LVs from the LVM point of view, so users may want to over-provision the LUN on the SAN side.

One option to free up space when deleting an LV is to set `issue_discards = 1` in the LVM config. With this setting, `lvremove` will send discards for the regions previously used by the LV, which (if the SAN supports it) informs the SAN that the space is no longer in use and can be freed up. Since `lvremove` modifies LVM metadata, it has to be issued while holding a cluster-wide lock on the storage. Unfortunately, depending on the setup, `issue_discards = 1` can make `lvremove` take very long for big disks (due to the large number of discards being issued), so that it eventually hits the 60s timeout of the cluster lock. The 60s are a hard-coded limit and cannot be easily changed [2].

A better option is to use 'blkdiscard'. This issues discards for all blocks of the device and therefore frees the storage on the SAN [3]. As this approach does not require changing any LVM metadata, it can be executed without holding the storage lock.

There is already a setting for 'saferemove', which zeroes out to-be-deleted LVs using 'blkdiscard --zeroout' in a worker, but it does not discard the blocks afterwards. This, similarly to plain 'blkdiscard', does not require a cluster-wide lock and therefore can be executed without running into the 60s timeout.

This series extends the 'saferemove' worker so that it can execute 'blkdiscard', 'blkdiscard --zeroout', or both together, and adds an option to select this in the GUI.
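For illustration, the two approaches above can be sketched as follows (a hedged sketch, not taken from the patches; the VG/LV name 'vg0/vm-100-disk-0' is hypothetical):

```shell
# Approach 1: let lvremove itself send discards.
# Set in the devices section of /etc/lvm/lvm.conf; for large LVs this
# can make lvremove exceed the 60s cluster-lock timeout:
#
#   devices {
#       issue_discards = 1
#   }

# Approach 2: discard the LV's blocks up front with blkdiscard, which
# touches no LVM metadata and so needs no cluster-wide lock; the final
# lvremove then only changes metadata and returns quickly.
blkdiscard --zeroout /dev/vg0/vm-100-disk-0  # optional: zero first ('saferemove')
blkdiscard /dev/vg0/vm-100-disk-0            # tell the backing storage the blocks are free
lvremove -y vg0/vm-100-disk-0                # metadata-only, done under the storage lock
```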
Changes from v1 to v2 (thanks @Michael, Maximiliano, Fabian):
- add more explicit descriptions in front- and backend, specifically mentioning discard (TRIM)
- add a verbose description in the backend explaining the mechanism and why it should be used for thin-provisioned storage
- add a forked fallback worker execution to allow other plugins to issue workers without these config options
- rename variable issue-blkdiscard -> 'issue-blkdiscard' to conform to newer style

[1] https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Storage_boxes_(SAN/NAS)
[2] https://forum.proxmox.com/threads/175849/post-820043

storage:

Lukas Sichert (1):
  fix #7339: lvmthick: add worker to free space of to be deleted VMs

 src/PVE/Storage.pm           | 16 ++++++++++++----
 src/PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++--------
 2 files changed, 38 insertions(+), 12 deletions(-)

manager:

Lukas Sichert (1):
  fix #7339: lvmthick: ui: add UI fields for option to free storage

 www/manager6/Utils.js           |  3 +++
 www/manager6/storage/LVMEdit.js | 15 +++++++++++++++
 2 files changed, 18 insertions(+)

Summary over all repositories:
  4 files changed, 56 insertions(+), 12 deletions(-)

--