* [PATCH storage v2 1/2] fix #7339: lvmthick: add worker to free space of to-be-deleted VMs
2026-04-16 14:37 [PATCH manager/storage v2 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs Lukas Sichert
@ 2026-04-16 14:37 ` Lukas Sichert
2026-04-16 14:37 ` [PATCH manager v2 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage Lukas Sichert
1 sibling, 0 replies; 3+ messages in thread
From: Lukas Sichert @ 2026-04-16 14:37 UTC (permalink / raw)
To: pve-devel; +Cc: Lukas Sichert
Currently, when deleting a VM whose disk is stored on a
thinly-provisioned LUN, there is no way to also free the storage space
used by the VM. This is because the current implementation only calls
'lvremove'. This command deletes the LVM metadata for the disk, but it
does not send discards to the SAN. 'lvremove' can also be used with
'issue_discards' enabled, but since LVM metadata is changed, it needs
to be done under a cluster-wide lock, which can lead to timeouts. There
is already an option to enable 'saferemove', which executes 'blkdiscard
--zeroout' to overwrite the whole storage space allocated to the disk
with zeros. However, it does not free the storage space.[1]
To add the functionality that frees the storage space, extend the
worker that already exists for zeroing out. In the worker, parse the
storage config and, if 'issue-blkdiscard' is enabled, execute
'blkdiscard'. This can also be combined with 'blkdiscard --zeroout' to
first zero out the disk and then free the storage space.[1]
To make the option settable from the frontend, add a description so
that the property is included in the JSON schema.
[1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html
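For illustration, a /etc/pve/storage.cfg entry with the new option
enabled could look like the following sketch (the storage name and
volume group are made up; 'saferemove' and 'issue-blkdiscard' are the
options touched by this series):

```
lvm: san-thin
	vgname vg_san
	shared 1
	saferemove 1
	issue-blkdiscard 1
```

With both options set, the worker first zeroes out the renamed LV and
then issues the discard, matching the 'diskzerodiscard' task type below.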
Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
src/PVE/Storage.pm | 16 ++++++++++++----
src/PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++--------
2 files changed, 38 insertions(+), 12 deletions(-)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 6e87bac..ef1596f 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -1192,7 +1192,7 @@ sub vdisk_free {
activate_storage($cfg, $storeid);
- my $cleanup_worker;
+ my $discard_worker;
# lock shared storage
$plugin->cluster_lock_storage(
@@ -1206,16 +1206,24 @@ sub vdisk_free {
my (undef, undef, undef, undef, undef, $isBase, $format) =
$plugin->parse_volname($volname);
- $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
+ $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
},
);
- return if !$cleanup_worker;
+ return if !$discard_worker;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
- $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+ if ($scfg->{saferemove} && $scfg->{'issue-blkdiscard'}) {
+ $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
+ } elsif ($scfg->{saferemove}) {
+ $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker);
+ } elsif ($scfg->{'issue-blkdiscard'}) {
+ $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker);
+ } else {
+ $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
+ }
}
sub vdisk_list {
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 32a8339..5a63ae5 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -352,11 +352,11 @@ my sub free_lvm_volumes {
# we need to zero out LVM data for security reasons
# and to allow thin provisioning
- my $zero_out_worker = sub {
+ my $blkdiscard_worker = sub {
+ my ($saferemove, $issue_blkdiscard) = @_;
+
for my $name (@$volnames) {
my $lvmpath = "/dev/$vg/del-$name";
- print "zero-out data on image $name ($lvmpath)\n";
-
my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
run_command(
$cmd_activate,
@@ -367,8 +367,15 @@ my sub free_lvm_volumes {
$cmd_activate,
errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
);
-
- $secure_delete_cmd->($lvmpath);
+ if ($saferemove) {
+ print "zero-out data on image $name ($lvmpath)\n";
+ $secure_delete_cmd->($lvmpath);
+ }
+ if ($issue_blkdiscard) {
+ print "discard image $name ($lvmpath)\n";
+ eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
+ warn $@ if $@;
+ }
$class->cluster_lock_storage(
$storeid,
@@ -379,17 +386,18 @@ my sub free_lvm_volumes {
run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
},
);
- print "successfully removed volume $name ($vg/del-$name)\n";
}
};
- if ($scfg->{saferemove}) {
+ if ($scfg->{saferemove} || $scfg->{'issue-blkdiscard'}) {
for my $name (@$volnames) {
# avoid long running task, so we only rename here
my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
run_command($cmd, errmsg => "lvrename '$vg/$name' error");
}
- return $zero_out_worker;
+ return sub {
+ $blkdiscard_worker->($scfg->{saferemove}, $scfg->{'issue-blkdiscard'});
+ };
} else {
for my $name (@$volnames) {
my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
@@ -428,6 +436,15 @@ sub properties {
description => "Zero-out data when removing LVs.",
type => 'boolean',
},
+ 'issue-blkdiscard' => {
+ description => "Issue discard (TRIM) requests for LVs before removing them.",
+ type => 'boolean',
+ verbose_description =>
+ "If enabled, blkdiscard is issued for the LV before removing it."
+ . " This sends discard requests for the LV's block range, allowing"
+ . " thin-provisioned storage to reclaim previously allocated physical"
+ . " space, provided the storage supports discard.",
+ },
'saferemove-stepsize' => {
description => "Wipe step size in MiB."
. " It will be capped to the maximum supported by the storage.",
@@ -453,6 +470,7 @@ sub options {
shared => { optional => 1 },
disable => { optional => 1 },
saferemove => { optional => 1 },
+ 'issue-blkdiscard' => { optional => 1 },
'saferemove-stepsize' => { optional => 1 },
saferemove_throughput => { optional => 1 },
content => { optional => 1 },
--
2.47.3
* [PATCH manager v2 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage
2026-04-16 14:37 [PATCH manager/storage v2 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs Lukas Sichert
2026-04-16 14:37 ` [PATCH storage v2 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
@ 2026-04-16 14:37 ` Lukas Sichert
1 sibling, 0 replies; 3+ messages in thread
From: Lukas Sichert @ 2026-04-16 14:37 UTC (permalink / raw)
To: pve-devel; +Cc: Lukas Sichert
In the previous commit the backend recieved the functionality to discard
allocated space of a VM disk on a SAN, when a VM is deleted. The backend
checks whether to use this option by parsing storage.cfg to see if
'issue_blkdiscard' is set to 1. This variable will automatically be
stored into the config file if it is present in the 'PUT' API request.
To be able to append this to the API call, the variable needs
to be defined in the json-Schema, but this has also been added in the
previous commit.
To enable an option to free storage in the GUI, create a checkbox
named 'issue-blkdiscard'. In the checkbox, use cbind to evaluate
'isCreate' from the component, so the option is only enabled by default
when creating new LVM storages. The checkbox also adds the
'issue-blkdiscard' variable and its value to the submitted values.
Also add mappings for the descriptions of the new worker tasks, so the
GUI shows which options were used for the current deletion.
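As a rough sketch of how these task-description entries are consumed:
the GUI looks up the task type in a map and builds a display string.
This is a simplified stand-in (the real lookup lives in
PVE.Utils.format_task_description and handles more cases; gettext is
omitted here), using the entries from the diff below:

```javascript
// Task-type map mirroring the entries added/present in Utils.js:
// [prefix, text] pairs, with an empty prefix for standalone actions.
const taskTypes = {
    diskdiscard: ['', 'Discard disk'],
    diskzero: ['', 'Zero out disk'],
    diskzerodiscard: ['', 'Zero out and discard disk'],
    hamigrate: ['HA', 'Migrate'],
};

// Simplified stand-in for the description lookup the GUI performs.
function formatTaskDescription(type, id) {
    const entry = taskTypes[type];
    if (!entry) {
        // Unknown task types fall back to the raw type string.
        return type;
    }
    const [prefix, text] = entry;
    return prefix ? `${prefix} ${id} - ${text}` : `${text}${id ? ' ' + id : ''}`;
}

console.log(formatTaskDescription('diskzerodiscard', 'vm-101-disk-0'));
```

A 'diskzerodiscard' task for a given volume would thus render as
"Zero out and discard disk <volume>" in the task log.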
Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
www/manager6/Utils.js | 3 +++
www/manager6/storage/LVMEdit.js | 15 +++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index b36e46fd..03558942 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -2133,6 +2133,9 @@ Ext.define('PVE.Utils', {
clusterjoin: ['', gettext('Join Cluster')],
dircreate: [gettext('Directory Storage'), gettext('Create')],
dirremove: [gettext('Directory'), gettext('Remove')],
+ diskdiscard: ['', gettext('Discard disk')],
+ diskzero: ['', gettext('Zero out disk')],
+ diskzerodiscard: ['', gettext('Zero out and discard disk')],
download: [gettext('File'), gettext('Download')],
hamigrate: ['HA', gettext('Migrate')],
hashutdown: ['HA', gettext('Shutdown')],
diff --git a/www/manager6/storage/LVMEdit.js b/www/manager6/storage/LVMEdit.js
index 148f0601..56463bce 100644
--- a/www/manager6/storage/LVMEdit.js
+++ b/www/manager6/storage/LVMEdit.js
@@ -241,5 +241,20 @@ Ext.define('PVE.storage.LVMInputPanel', {
uncheckedValue: 0,
fieldLabel: gettext('Wipe Removed Volumes'),
},
+ {
+ xtype: 'proxmoxcheckbox',
+ name: 'issue-blkdiscard',
+ uncheckedValue: 0,
+ cbind: {
+ checked: '{isCreate}',
+ },
+ fieldLabel: gettext('Discard Removed Volumes'),
+ autoEl: {
+ tag: 'div',
+ 'data-qtip': gettext(
+ 'Enable to issue discard (TRIM) requests for logical volumes before removing them.',
+ ),
+ },
+ },
],
});
--
2.47.3