From mboxrd@z Thu Jan 1 00:00:00 1970
From: Lukas Sichert <l.sichert@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to-be-deleted VMs
Date: Mon, 23 Mar 2026 11:14:54 +0100
Message-ID: <20260323101506.56098-2-l.sichert@proxmox.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260323101506.56098-1-l.sichert@proxmox.com>
References: <20260323101506.56098-1-l.sichert@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: Proxmox VE development discussion

Currently, when deleting a VM whose disk is stored on a thinly provisioned LUN,
there is no way to also free the storage space used by the VM. This is because
the current implementation only calls 'lvremove'. That command deletes the LVM
metadata for the disk, but it does not send discards to the SAN. 'lvremove' can
also be used with 'issue_discards', but since LVM metadata is changed, it must
run under a cluster-wide lock, which can lead to timeouts.

There is already an option to enable 'saferemove', which executes
'blkdiscard --zeroout' to overwrite the whole storage space allocated to the
disk with zeros. However, that does not free the storage space.[1]

To add the functionality that frees the storage space, extend the worker that
already exists for zeroing out: in the worker, parse the storage config and,
if 'issue_blkdiscard' is enabled, execute 'blkdiscard'. This can be combined
with 'blkdiscard --zeroout' to first zero out the disk and then free the
storage space.[1]

To make 'issue_blkdiscard' settable in the frontend, add a description so that
the option is included in the JSON schema.
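For illustration, the per-volume sequence the worker ends up running can be
sketched in shell. This is a dry-run sketch, not part of the patch: the VG and
LV names are made up, and the `run` wrapper only echoes the commands (the real
worker invokes them via Perl's run_command):

```shell
#!/bin/sh
# Dry-run sketch of the cleanup sequence for one renamed volume.
# VG/LV names are hypothetical examples.
VG=pve
LV=del-vm-100-disk-0
SAFEREMOVE=1
ISSUE_BLKDISCARD=1

# Echo instead of executing; swap 'echo' for real execution on a test VG.
run() { echo "would run: $*"; }

run /sbin/lvchange -aly "/dev/$VG/$LV"             # activate the LV locally
if [ "$SAFEREMOVE" = 1 ]; then
    run /sbin/blkdiscard --zeroout "/dev/$VG/$LV"  # overwrite data with zeros
fi
if [ "$ISSUE_BLKDISCARD" = 1 ]; then
    run /sbin/blkdiscard "/dev/$VG/$LV"            # discard, freeing thin-LUN space
fi
run /sbin/lvremove -f "$VG/$LV"                    # drop the LVM metadata
```

Both conditions can hold at once, matching the combined 'diskzerodiscard'
worker: zeroing first, then discarding.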
[1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html

Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
 src/PVE/Storage.pm           | 16 ++++++++++++----
 src/PVE/Storage/LVMPlugin.pm | 31 ++++++++++++++++++++++---------
 2 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 6e87bac..b5d97fd 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -1192,7 +1192,7 @@ sub vdisk_free {

     activate_storage($cfg, $storeid);

-    my $cleanup_worker;
+    my $discard_worker;

     # lock shared storage
     $plugin->cluster_lock_storage(
@@ -1206,16 +1206,24 @@ sub vdisk_free {
            my (undef, undef, undef, undef, undef, $isBase, $format) =
                $plugin->parse_volname($volname);

-            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
+            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
        },
    );

-    return if !$cleanup_worker;
+    return if !$discard_worker;

    my $rpcenv = PVE::RPCEnvironment::get();
    my $authuser = $rpcenv->get_user();

-    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+    if ($scfg->{saferemove} && $scfg->{issue_blkdiscard}) {
+        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
+    } elsif ($scfg->{saferemove}) {
+        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker);
+    } elsif ($scfg->{issue_blkdiscard}) {
+        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker);
+    } else {
+        die "config must have changed during execution. Disk can't be deleted safely";
+    }
 }

 sub vdisk_list {
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 32a8339..b0b94a6 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -352,12 +352,12 @@ my sub free_lvm_volumes {

     # we need to zero out LVM data for security reasons
     # and to allow thin provisioning
-    my $zero_out_worker = sub {
+    my $blkdiscard_worker = sub {
+        my ($saferemove, $issue_blkdiscard) = @_;
+
         for my $name (@$volnames) {
             my $lvmpath = "/dev/$vg/del-$name";
-            print "zero-out data on image $name ($lvmpath)\n";
-
-            my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
+            my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
             run_command(
                 $cmd_activate,
                 errmsg => "can't activate LV '$lvmpath' to zero-out its data",
@@ -367,8 +367,15 @@ my sub free_lvm_volumes {
                 $cmd_activate,
                 errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
             );
-
-            $secure_delete_cmd->($lvmpath);
+            if ($saferemove) {
+                print "zero-out data on image $name ($lvmpath)\n";
+                $secure_delete_cmd->($lvmpath);
+            }
+            if ($issue_blkdiscard) {
+                print "free storage of image $name ($lvmpath)\n";
+                eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
+                warn $@ if $@;
+            }

             $class->cluster_lock_storage(
                 $storeid,
@@ -379,17 +386,18 @@ my sub free_lvm_volumes {
                     run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
                 },
             );
-
             print "successfully removed volume $name ($vg/del-$name)\n";
         }
     };

-    if ($scfg->{saferemove}) {
+    if ($scfg->{saferemove} || $scfg->{issue_blkdiscard}) {
         for my $name (@$volnames) {
             # avoid long running task, so we only rename here
             my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
             run_command($cmd, errmsg => "lvrename '$vg/$name' error");
         }
-        return $zero_out_worker;
+        return sub {
+            $blkdiscard_worker->($scfg->{saferemove}, $scfg->{issue_blkdiscard});
+        };
     } else {
         for my $name (@$volnames) {
             my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
@@ -428,6 +436,10 @@ sub properties {
             description => "Zero-out data when removing LVs.",
             type => 'boolean',
         },
+        issue_blkdiscard => {
+            description => "Free storage space when removing LVs.",
+            type => 'boolean',
+        },
         'saferemove-stepsize' => {
             description => "Wipe step size in MiB."
                 . " It will be capped to the maximum supported by the storage.",
@@ -453,6 +465,7 @@ sub options {
         shared => { optional => 1 },
         disable => { optional => 1 },
         saferemove => { optional => 1 },
+        issue_blkdiscard => { optional => 1 },
         'saferemove-stepsize' => { optional => 1 },
         saferemove_throughput => { optional => 1 },
         content => { optional => 1 },
-- 
2.47.3