From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <903f48a7-0bfc-4602-aa82-cff7e01e8f08@proxmox.com>
Date: Mon, 20 Apr 2026 16:50:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: superseded: [PATCH manager/storage 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs
To: pve-devel@lists.proxmox.com
References: <20260323101506.56098-1-l.sichert@proxmox.com>
Content-Language: en-US
From: Lukas Sichert <l.sichert@proxmox.com>
In-Reply-To: <20260323101506.56098-1-l.sichert@proxmox.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
List-Id: Proxmox VE development discussion

superseded-by: https://lore.proxmox.com/all/20260416143729.54192-1-l.sichert@proxmox.com/

On 2026-03-23 11:14, Lukas Sichert wrote:
> Logical volumes (LVs) in an LVM (thick) volume group (VG) are
> thick-provisioned, but the underlying backing storage can be
> thin-provisioned. In particular, this can be the case if the VG
> resides on a LUN provided by a SAN via iSCSI/FC/SAS [1], where the
> LUN may be thin-provisioned on the SAN side.
>
> In such setups, one usually wants deleting an LV (e.g. a VM disk) to
> free up space on the SAN side, especially when using
> snapshots-as-volume-chains: since snapshot LVs are thick-provisioned
> LVs from the LVM point of view, users may want to over-provision the
> LUN on the SAN side.
>
> One option to free up space when deleting an LV is to set
> `issue_discards = 1` in the LVM config. With this setting, `lvremove`
> will send discards for the regions previously used by the LV, which
> (if the SAN supports it) informs the SAN that the space is no longer
> in use and can be freed up. Since `lvremove` modifies LVM metadata,
> it has to be issued while holding a cluster-wide lock on the storage.
> Unfortunately, depending on the setup, `issue_discards = 1` can make
> `lvremove` take very long for big disks (due to the large number of
> discards being issued), so that it eventually hits the 60s timeout of
> the cluster lock. This 60s limit is hard-coded and cannot easily be
> changed [2].
>
> A better option would be to use 'blkdiscard'. This issues discards
> for all the blocks of the device and therefore frees the storage on
> the SAN [3].
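
For illustration only (not part of the series; the VG/LV names below
are made up): the first approach above is an lvm.conf setting, the
second a plain CLI call on the LV's device node.

    # /etc/lvm/lvm.conf: make lvremove discard the freed extents;
    # can be very slow for big LVs and runs under the cluster-wide lock
    devices {
        issue_discards = 1
    }

    # discard all blocks of the LV directly; this changes no LVM
    # metadata, so no cluster-wide lock is needed
    blkdiscard /dev/vg0/vm-100-disk-0
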
> As this option does not require changing any LVM metadata, it can be
> executed without holding the cluster-wide storage lock.
>
> There is already a 'saferemove' setting, which zeroes out
> to-be-deleted LVs using 'blkdiscard --zeroout' in a worker, but it
> does not discard the blocks afterwards. This, just like plain
> 'blkdiscard', does not require a cluster-wide lock and therefore does
> not run into the 60s timeout.
>
> This series extends the 'saferemove' worker so that it can execute
> 'blkdiscard', 'blkdiscard --zeroout', or both of them together, and
> adds an option to select this in the GUI.
>
>
> [1] https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Storage_boxes_(SAN/NAS)
> [2] https://forum.proxmox.com/threads/175849/post-820043
>
>
> storage:
>
> Lukas Sichert (1):
>   fix #7339: lvmthick: add worker to free space of to be deleted VMs
>
>  src/PVE/Storage.pm           | 16 ++++++++++++----
>  src/PVE/Storage/LVMPlugin.pm | 31 ++++++++++++++++++++++---------
>  2 files changed, 34 insertions(+), 13 deletions(-)
>
>
> manager:
>
> Lukas Sichert (1):
>   fix #7339: lvmthick: ui: add UI fields for option to free storage
>
>  www/manager6/Utils.js           |  3 +++
>  www/manager6/storage/LVMEdit.js | 13 +++++++++++++
>  2 files changed, 16 insertions(+)
>
>
> Summary over all repositories:
>  4 files changed, 50 insertions(+), 13 deletions(-)
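
As a rough sketch of the extended worker behavior described in the
cover letter above (this is not the code from the series; the mode
values and the helper name are hypothetical):

    # hypothetical sketch in Perl, not the actual patch
    use strict;
    use warnings;
    use PVE::Tools;

    # free_lv_space: hypothetical helper; $mode would come from the
    # storage config option added by the series
    sub free_lv_space {
        my ($devpath, $mode) = @_;  # $mode: 'zeroout', 'discard' or 'both'

        if ($mode eq 'zeroout' || $mode eq 'both') {
            # existing 'saferemove' behavior: overwrite the LV with zeroes
            PVE::Tools::run_command(['blkdiscard', '--zeroout', $devpath]);
        }
        if ($mode eq 'discard' || $mode eq 'both') {
            # discard all blocks so a thin-provisioned LUN can reclaim
            # the space on the SAN side; needs no cluster-wide lock
            PVE::Tools::run_command(['blkdiscard', $devpath]);
        }
    }

Either way, the work happens in the long-running worker task outside
the cluster-wide lock, so large disks cannot hit the 60s timeout.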