From: Lukas Sichert <l.sichert@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH manager/storage 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs
Date: Mon, 23 Mar 2026 11:14:53 +0100
Message-ID: <20260323101506.56098-1-l.sichert@proxmox.com>
List-Id: Proxmox VE development discussion

Logical volumes (LVs) in an LVM (thick) volume group (VG) are thick-provisioned, but the underlying backing storage can be thin-provisioned. In particular, this can be the case if the VG resides on a LUN provided by a SAN via iSCSI/FC/SAS [1], where the LUN may be thin-provisioned on the SAN side. In such setups, one usually wants deleting an LV (e.g. a VM disk) to free up space on the SAN side. This matters especially when using snapshots-as-volume-chains, because snapshot LVs are thick-provisioned LVs from the LVM point of view, so users may want to over-provision the LUN on the SAN side.

One option to free up space when deleting an LV is to set `issue_discards = 1` in the LVM config. With this setting, `lvremove` sends discards for the regions previously used by the LV, which (if the SAN supports it) informs the SAN that the space is no longer in use and can be freed up. Since `lvremove` modifies LVM metadata, it has to be issued while holding a cluster-wide lock on the storage. Unfortunately, depending on the setup, `issue_discards = 1` can make `lvremove` take very long for big disks (due to the large number of discards being issued), so that it eventually hits the 60s timeout of the cluster lock. The 60s are a hard-coded limit and cannot easily be changed [2].
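For reference, a sketch of the setting discussed above (it lives in the `devices` section of /etc/lvm/lvm.conf; see lvm.conf(5) for details):

```text
# /etc/lvm/lvm.conf -- excerpt
devices {
    # Issue discards to the underlying block device when an LV is
    # removed. This makes `lvremove` send discards for all of the
    # LV's extents, which can take very long for large LVs while
    # the cluster-wide storage lock is held.
    issue_discards = 1
}
```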
A better option is to use 'blkdiscard'. It issues discards for all blocks of the device and therefore frees the storage on the SAN [3]. As this option does not require changing any LVM metadata, it can be executed without holding the cluster-wide storage lock. There is already a setting for 'saferemove', which zeroes out to-be-deleted LVs using 'blkdiscard --zeroout' in a worker, but it does not discard the blocks afterwards. This, similarly to plain 'blkdiscard', does not require a cluster-wide lock and therefore does not run into the 60s timeout.

This series extends the 'saferemove' worker so that it can execute 'blkdiscard', 'blkdiscard --zeroout', or both of them together, and adds an option to select this in the GUI.

[1] https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Storage_boxes_(SAN/NAS)
[2] https://forum.proxmox.com/threads/175849/post-820043

storage:

Lukas Sichert (1):
  fix #7339: lvmthick: add worker to free space of to be deleted VMs

 src/PVE/Storage.pm           | 16 ++++++++++++----
 src/PVE/Storage/LVMPlugin.pm | 31 ++++++++++++++++++++++---------
 2 files changed, 34 insertions(+), 13 deletions(-)

manager:

Lukas Sichert (1):
  fix #7339: lvmthick: ui: add UI fields for option to free storage

 www/manager6/Utils.js           |  3 +++
 www/manager6/storage/LVMEdit.js | 13 +++++++++++++
 2 files changed, 16 insertions(+)

Summary over all repositories:
  4 files changed, 50 insertions(+), 13 deletions(-)

--
Generated by murpp 0.11.0
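As a rough sketch of the approach (the VG/LV path below is hypothetical and guarded so the snippet is a no-op unless such a device exists), freeing the space before the metadata-touching removal looks like:

```shell
#!/bin/sh
# Hypothetical LV path -- adjust VG/LV names for your setup.
LV=/dev/vg_san/vm-100-disk-0

if [ -b "$LV" ]; then
    # Discard all blocks of the LV; on a thin-provisioned LUN this
    # frees the space on the SAN side (if discard is supported).
    # No LVM metadata is touched, so no cluster-wide lock is needed.
    blkdiscard "$LV"

    # ...or zero the blocks out instead (what 'saferemove' does today):
    blkdiscard --zeroout "$LV"

    # Only the final lvremove modifies LVM metadata and needs the
    # cluster-wide lock; without pending discards it returns quickly.
    lvremove -f "$LV"
fi
```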