From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <l.sichert@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
	by lore.proxmox.com (Postfix) with ESMTPS id C6B0F1FF13C;
	Thu, 16 Apr 2026 16:38:14 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
	by firstgate.proxmox.com (Proxmox) with ESMTP id A6B108FF6;
	Thu, 16 Apr 2026 16:38:14 +0200 (CEST)
From: Lukas Sichert <l.sichert@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH storage v2 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
Date: Thu, 16 Apr 2026 16:37:23 +0200
Message-ID: <20260416143729.54192-2-l.sichert@proxmox.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260416143729.54192-1-l.sichert@proxmox.com>
References: <20260416143729.54192-1-l.sichert@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
CC: Lukas Sichert <l.sichert@proxmox.com>
Precedence: list
List-Id: Proxmox VE development discussion

Currently when deleting a VM whose disk is stored on a
thinly-provisioned LUN, there is no way to also free the storage space
used by the VM. This is because the current implementation only calls
'lvremove'. That command deletes the LVM metadata for the disk, but it
does not send discards to the SAN. 'lvremove' could also be run with
'issue_discards' enabled, but since LVM metadata is changed, this needs
to happen under a cluster-wide lock, which can lead to timeouts.

There is already an option to enable 'saferemove', which executes
'blkdiscard --zeroout' to overwrite the whole storage space allocated
to the disk with zeros. However, this does not free the storage
space.[1]

To add the functionality that frees the storage space, adjust the
worker that already exists for zeroing out. In the worker, parse the
storage config and, if 'issue-blkdiscard' is enabled, execute
'blkdiscard'. This can also be combined with 'blkdiscard --zeroout' to
first zero out the disk and then free the storage space.[1]

To make the option settable in the frontend, add a description so that
the variable is included in the JSON schema.
[1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html

Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
 src/PVE/Storage.pm           | 16 ++++++++++++----
 src/PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++--------
 2 files changed, 38 insertions(+), 12 deletions(-)

diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 6e87bac..ef1596f 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -1192,7 +1192,7 @@ sub vdisk_free {

     activate_storage($cfg, $storeid);

-    my $cleanup_worker;
+    my $discard_worker;

     # lock shared storage
     $plugin->cluster_lock_storage(
@@ -1206,16 +1206,24 @@ sub vdisk_free {
             my (undef, undef, undef, undef, undef, $isBase, $format) =
                 $plugin->parse_volname($volname);

-            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
+            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
         },
     );

-    return if !$cleanup_worker;
+    return if !$discard_worker;

     my $rpcenv = PVE::RPCEnvironment::get();
     my $authuser = $rpcenv->get_user();

-    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+    if ($scfg->{saferemove} && $scfg->{'issue-blkdiscard'}) {
+        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
+    } elsif ($scfg->{saferemove}) {
+        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker);
+    } elsif ($scfg->{'issue-blkdiscard'}) {
+        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker);
+    } else {
+        $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
+    }
 }

 sub vdisk_list {
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 32a8339..5a63ae5 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -352,11 +352,11 @@ my sub free_lvm_volumes {
     # we need to zero out LVM data for security reasons
     # and to allow thin provisioning
-    my $zero_out_worker = sub {
+    my $blkdiscard_worker = sub {
+        my ($saferemove, $issue_blkdiscard) = @_;
+
         for my $name (@$volnames) {
             my $lvmpath =
                "/dev/$vg/del-$name";
-            print "zero-out data on image $name ($lvmpath)\n";
-
             my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
             run_command(
                 $cmd_activate,
@@ -367,8 +367,15 @@ my sub free_lvm_volumes {
                 $cmd_activate,
                 errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
             );
-
-            $secure_delete_cmd->($lvmpath);
+            if ($saferemove) {
+                print "zero-out data on image $name ($lvmpath)\n";
+                $secure_delete_cmd->($lvmpath);
+            }
+            if ($issue_blkdiscard) {
+                print "discard image $name ($lvmpath)\n";
+                eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
+                warn $@ if $@;
+            }

             $class->cluster_lock_storage(
                 $storeid,
@@ -379,17 +386,18 @@ my sub free_lvm_volumes {
                     run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
                 },
             );
-
             print "successfully removed volume $name ($vg/del-$name)\n";
         }
     };
-    if ($scfg->{saferemove}) {
+    if ($scfg->{saferemove} || $scfg->{'issue-blkdiscard'}) {
         for my $name (@$volnames) {
             # avoid long running task, so we only rename here
             my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
             run_command($cmd, errmsg => "lvrename '$vg/$name' error");
         }
-        return $zero_out_worker;
+        return sub {
+            $blkdiscard_worker->($scfg->{saferemove}, $scfg->{'issue-blkdiscard'});
+        };
     } else {
         for my $name (@$volnames) {
             my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
@@ -428,6 +436,15 @@ sub properties {
             description => "Zero-out data when removing LVs.",
             type => 'boolean',
         },
+        'issue-blkdiscard' => {
+            description => "Issue discard (TRIM) requests for LVs before removing them.",
+            type => 'boolean',
+            verbose_description =>
+                "If enabled, blkdiscard is issued for the LV before removing it."
+                . " This sends discard requests for the LV's block range, allowing"
+                . " thin-provisioned storage to reclaim previously allocated physical"
+                . " space, provided the storage supports discard.",
+        },
         'saferemove-stepsize' => {
             description => "Wipe step size in MiB." .
                " It will be capped to the maximum supported by the storage.",
@@ -453,6 +470,7 @@ sub options {
         shared => { optional => 1 },
         disable => { optional => 1 },
         saferemove => { optional => 1 },
+        'issue-blkdiscard' => { optional => 1 },
         'saferemove-stepsize' => { optional => 1 },
         saferemove_throughput => { optional => 1 },
         content => { optional => 1 },
-- 
2.47.3
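
[Editor's note: the worker-type dispatch that this patch adds to vdisk_free() reduces to a
small decision table over the two storage options. As a minimal sketch (the worker-type
strings mirror the patch; the function itself is hypothetical and not part of the codebase):]

```python
def pick_worker_type(saferemove: bool, issue_blkdiscard: bool) -> str:
    """Mirror of the fork_worker() dispatch added to vdisk_free():
    zeroing and discarding can be combined, and plain removal keeps
    the existing 'imgdel' task type."""
    if saferemove and issue_blkdiscard:
        return 'diskzerodiscard'  # zero out first, then discard
    if saferemove:
        return 'diskzero'         # only 'blkdiscard --zeroout'
    if issue_blkdiscard:
        return 'diskdiscard'      # only plain 'blkdiscard'
    return 'imgdel'               # neither option set: plain lvremove

print(pick_worker_type(True, True))    # diskzerodiscard
print(pick_worker_type(False, False))  # imgdel
```

Note that the combined case must keep this ordering: discarding before zeroing would be
pointless, since the zero-out pass would reallocate the space the discard just freed.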