* [PATCH manager/storage v4 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs
@ 2026-05-13 12:02 Lukas Sichert
2026-05-13 12:02 ` [PATCH storage v4 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
2026-05-13 12:02 ` [PATCH manager v4 2/2] fix #7339: lvmthick: ui: add UI option to free storage Lukas Sichert
0 siblings, 2 replies; 5+ messages in thread
From: Lukas Sichert @ 2026-05-13 12:02 UTC (permalink / raw)
To: pve-devel; +Cc: Lukas Sichert
Logical volumes (LV) in an LVM (thick) volume group (VG) are
thick-provisioned, but the underlying backing storage can be
thin-provisioned. In particular, this can be the case if the VG resides
on a LUN provided by a SAN via iSCSI/FC/SAS [1], where the LUN may be
thin-provisioned on the SAN side.
In such setups, one usually wants deleting an LV (e.g. a VM disk) to
free up space on the SAN side, especially when using
snapshots-as-volume-chains: snapshot LVs are thick-provisioned LVs from
the LVM point of view, so users may want to over-provision the LUN on
the SAN side.
One option to free up space when deleting an LV is to set
`issue_discards = 1` in the LVM config. With this setting, `lvremove`
will send discards for the regions previously used by the LV, which will
(if the SAN supports it) inform the SAN that the space is not in use
anymore and can be freed up. Since `lvremove` modifies LVM metadata, it
has to be issued while holding a cluster-wide lock on the storage.
Unfortunately, depending on the setup, `issue_discards = 1` can make
`lvremove` take very long for big disks (due to the large number of
discards being issued), so that it eventually hits the 60s timeout of
the cluster lock. The 60s are a hard-coded limit and cannot be easily
changed [2].
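For reference, the setting in question lives in the LVM config file; a rough, illustrative excerpt (not part of this series):

```
# /etc/lvm/lvm.conf -- illustrative excerpt
devices {
    # Send discards to the underlying storage when an LV is removed.
    # Caveat: for large LVs, lvremove may then run long enough to hit
    # the 60s cluster-wide lock timeout described above.
    issue_discards = 1
}
```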
A better option would be to use 'blkdiscard'. This issues discards for
all the blocks of the device and therefore frees the storage on the SAN
[3]. As this option does not require changing any LVM metadata, it can
be executed without holding the cluster-wide storage lock.
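The two blkdiscard modes involved can be sketched as follows. The device paths are hypothetical, and since blkdiscard needs a real block device, the runnable part below uses hole-punching on a regular file as an analogy (assuming util-linux `fallocate` and a hole-supporting filesystem) for how discards release allocated space:

```shell
# The two modes (paths are illustrative; both need a real block device):
#   blkdiscard --zeroout /dev/vg/del-vm-100-disk-0   # overwrite with zeros
#   blkdiscard /dev/vg/del-vm-100-disk-0             # discard, frees SAN space

# Regular-file analogy: punching holes releases allocated blocks,
# much like a discard lets a thin-provisioned LUN reclaim space.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 status=none
du -k "$f"    # ~8192 KiB allocated
fallocate --punch-hole --offset 0 --length 8M "$f"
du -k "$f"    # allocation drops on filesystems that support holes
rm -f "$f"
```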
There is already a 'saferemove' setting, which zeroes out to-be-deleted
LVs in a worker using 'blkdiscard --zeroout', but it does not discard
the blocks afterwards. Like plain 'blkdiscard', this does not require a
cluster-wide lock and therefore can be executed without running into
the 60s timeout.
This series extends the 'saferemove' worker so that it can execute
'blkdiscard', 'blkdiscard --zeroout', or both together, and adds an
option to select this in the GUI.
Changes from v3 to v4 (thanks Friedrich and Thomas):
- rework the worker-starting logic to avoid breaking abstraction layers
- rewrite the 'imgdel' task description in the UI to better match the
worker's behavior
- extend the code comment to also describe discard handling
- add additional syslog logging
Changes from v2 to v3 (thanks @Michael):
- correct issue_blkdiscard -> 'issue-blkdiscard' in the commit message
for pve-manager
- replace 'previous commit' with a more obvious reference to the first
commit of this series in the commit message for pve-manager
Changes from v1 to v2 (thanks @Michael, Maximiliano, Fabian):
- add more explicit descriptions in front- and backend, specifically
mentioning discard (TRIM)
- add a verbose description in the backend explaining the mechanism and
why it should be used for thin-provisioned storage
- add a forked fallback worker execution to allow other plugins to
issue workers without these config options
- rename variable issue_blkdiscard -> 'issue-blkdiscard' to conform to
newer style
[1] https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Storage_boxes_(SAN/NAS)
[2] https://forum.proxmox.com/threads/175849/post-820043
storage:
Lukas Sichert (1):
fix #7339: lvmthick: add worker to free space of to be deleted VMs
src/PVE/Storage.pm | 8 +++----
src/PVE/Storage/LVMPlugin.pm | 42 ++++++++++++++++++++++++++++++------
2 files changed, 40 insertions(+), 10 deletions(-)
manager:
Lukas Sichert (1):
fix #7339: lvmthick: ui: add UI option to free storage
www/manager6/Utils.js | 2 +-
www/manager6/storage/LVMEdit.js | 15 +++++++++++++++
2 files changed, 16 insertions(+), 1 deletion(-)
Summary over all repositories:
4 files changed, 56 insertions(+), 11 deletions(-)
--
Generated by murpp 0.10.0
* [PATCH storage v4 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
2026-05-13 12:02 [PATCH manager/storage v4 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs Lukas Sichert
@ 2026-05-13 12:02 ` Lukas Sichert
2026-05-15 14:02 ` Friedrich Weber
2026-05-13 12:02 ` [PATCH manager v4 2/2] fix #7339: lvmthick: ui: add UI option to free storage Lukas Sichert
1 sibling, 1 reply; 5+ messages in thread
From: Lukas Sichert @ 2026-05-13 12:02 UTC (permalink / raw)
To: pve-devel; +Cc: Lukas Sichert
Currently, when deleting a VM whose disk is stored on a
thinly-provisioned LUN, there is no way to also free the storage space
used by the VM. This is because the current implementation only calls
'lvremove'. This command deletes the LVM metadata for the disk, but it
does not send discards to the SAN. 'lvremove' can also be used with
'issue_discards', but since LVM metadata is changed, it needs to be
done under a cluster-wide lock, which can lead to timeouts. There is
already an option to enable 'saferemove', which executes 'blkdiscard
--zeroout' to overwrite the whole storage space allocated to the disk
with zeros. However, it does not free the storage space [1].
To add the functionality that frees the storage space, extend the
existing zero-out worker. In the worker, parse the storage config and,
if 'issue-blkdiscard' is enabled, execute 'blkdiscard'. This can also
be combined with 'blkdiscard --zeroout' to first zero out the disk and
then free the storage space [1].
To make 'issue-blkdiscard' settable from the frontend, add a
description, so that the property is included in the JSON schema.
[1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html
Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
src/PVE/Storage.pm | 8 +++----
src/PVE/Storage/LVMPlugin.pm | 42 ++++++++++++++++++++++++++++++------
2 files changed, 40 insertions(+), 10 deletions(-)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 020fa03..fa0c0bc 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -1192,7 +1192,7 @@ sub vdisk_free {
activate_storage($cfg, $storeid);
- my $cleanup_worker;
+ my $discard_worker;
# lock shared storage
$plugin->cluster_lock_storage(
@@ -1206,16 +1206,16 @@ sub vdisk_free {
my (undef, undef, undef, undef, undef, $isBase, $format) =
$plugin->parse_volname($volname);
- $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
+ $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
},
);
- return if !$cleanup_worker;
+ return if !$discard_worker;
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
- $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+ $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
}
sub vdisk_list {
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 3a35e38..03455c6 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -13,6 +13,7 @@ use PVE::Tools qw(run_command file_read_firstline trim);
use PVE::Storage::Common;
use PVE::Storage::Plugin;
+use PVE::SafeSyslog;
use base qw(PVE::Storage::Plugin);
@@ -351,11 +352,20 @@ my sub free_lvm_volumes_locked {
};
# we need to zero out LVM data for security reasons
- # and to allow thin provisioning
- my $zero_out_worker = sub {
+ # and discard images to free storage space to allow
+ # thin provisioning
+ my $discard_worker = sub {
+
for my $name (@$volnames) {
my $lvmpath = "/dev/$vg/del-$name";
- print "zero-out data on image $name ($lvmpath)\n";
+
+ my $discard_action =
+ $scfg->{saferemove}
+ && $scfg->{'issue-blkdiscard'} ? 'zero-out data and discard (TRIM)'
+ : $scfg->{saferemove} ? 'zero-out data on'
+ : $scfg->{'issue-blkdiscard'} ? 'discard (TRIM)'
+ : undef;
+ print "$discard_action image $name ($lvmpath)\n";
my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
run_command(
@@ -367,8 +377,18 @@ my sub free_lvm_volumes_locked {
$cmd_activate,
errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
);
+ syslog('info', "starting to $discard_action $name ($lvmpath)")
+ if defined($discard_action);
- $secure_delete_cmd->($lvmpath);
+ if ($scfg->{saferemove}) {
+ print "zero-out data on image $name ($lvmpath)\n";
+ $secure_delete_cmd->($lvmpath);
+ }
+ if ($scfg->{'issue-blkdiscard'}) {
+ print "discard image $name ($lvmpath)\n";
+ eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
+ warn $@ if $@;
+ }
$class->cluster_lock_storage(
$storeid,
@@ -383,13 +403,13 @@ my sub free_lvm_volumes_locked {
}
};
- if ($scfg->{saferemove}) {
+ if ($scfg->{saferemove} || $scfg->{'issue-blkdiscard'}) {
for my $name (@$volnames) {
# avoid long running task, so we only rename here
my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
run_command($cmd, errmsg => "lvrename '$vg/$name' error");
}
- return $zero_out_worker;
+ return $discard_worker;
} else {
for my $name (@$volnames) {
my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
@@ -428,6 +448,15 @@ sub properties {
description => "Zero-out data when removing LVs.",
type => 'boolean',
},
+ 'issue-blkdiscard' => {
+ description => "Issue discard (TRIM) requests for LVs before removing them.",
+ type => 'boolean',
+ verbose_description =>
+ "If enabled, blkdiscard is issued for the LV before removing it."
+ . " This sends discard requests for the LV's block range, allowing"
+ . " thin-provisioned storage to reclaim previously allocated physical"
+ . " space, provided the storage supports discard.",
+ },
'saferemove-stepsize' => {
description => "Wipe step size in MiB."
. " It will be capped to the maximum supported by the storage.",
@@ -453,6 +482,7 @@ sub options {
shared => { optional => 1 },
disable => { optional => 1 },
saferemove => { optional => 1 },
+ 'issue-blkdiscard' => { optional => 1 },
'saferemove-stepsize' => { optional => 1 },
saferemove_throughput => { optional => 1 },
content => { optional => 1 },
--
2.47.3
* [PATCH manager v4 2/2] fix #7339: lvmthick: ui: add UI option to free storage
2026-05-13 12:02 [PATCH manager/storage v4 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs Lukas Sichert
2026-05-13 12:02 ` [PATCH storage v4 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
@ 2026-05-13 12:02 ` Lukas Sichert
1 sibling, 0 replies; 5+ messages in thread
From: Lukas Sichert @ 2026-05-13 12:02 UTC (permalink / raw)
To: pve-devel; +Cc: Lukas Sichert
In the commit 'fix #7339: lvmthick: add worker to free space of to be
deleted VMs' for the 'pve-storage' repository, the backend gained the
functionality to discard the allocated space of a VM disk on a SAN
when the VM is deleted. The backend checks whether to use this option
by parsing storage.cfg to see if 'issue-blkdiscard' is set to 1. This
variable is automatically stored in the config file if it is present in
the 'PUT' API request. To be able to append it to the API call, the
variable needs to be defined in the JSON schema, which was also added
in the previous commit.
To enable an option to free storage in the GUI, create a checkbox with
the name 'issue-blkdiscard'. In the checkbox, use cbind to evaluate
'isCreate' from the component, so that it only auto-enables on newly
created LVM storages. This checkbox also adds the 'issue-blkdiscard'
variable and its value to the submitted values.
Also rename the 'imgdel' task description from 'Erase data' to
'Destroy image', since the task can now also discard storage blocks in
addition to zeroing data.
Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
www/manager6/Utils.js | 2 +-
www/manager6/storage/LVMEdit.js | 15 +++++++++++++++
2 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 01e80682..417db00b 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -2151,7 +2151,7 @@ Ext.define('PVE.Utils', {
hastart: ['HA', gettext('Start')],
hastop: ['HA', gettext('Stop')],
imgcopy: ['', gettext('Copy data')],
- imgdel: ['', gettext('Erase data')],
+ imgdel: ['', gettext('Destroy image')],
lvmcreate: [gettext('LVM Storage'), gettext('Create')],
lvmremove: ['Volume Group', gettext('Remove')],
lvmthincreate: [gettext('LVM-Thin Storage'), gettext('Create')],
diff --git a/www/manager6/storage/LVMEdit.js b/www/manager6/storage/LVMEdit.js
index 148f0601..56463bce 100644
--- a/www/manager6/storage/LVMEdit.js
+++ b/www/manager6/storage/LVMEdit.js
@@ -241,5 +241,20 @@ Ext.define('PVE.storage.LVMInputPanel', {
uncheckedValue: 0,
fieldLabel: gettext('Wipe Removed Volumes'),
},
+ {
+ xtype: 'proxmoxcheckbox',
+ name: 'issue-blkdiscard',
+ uncheckedValue: 0,
+ cbind: {
+ checked: '{isCreate}',
+ },
+ fieldLabel: gettext('Discard Removed Volumes'),
+ autoEl: {
+ tag: 'div',
+ 'data-qtip': gettext(
+ 'Enable to issue discard (TRIM) requests for logical volumes before removing them.',
+ ),
+ },
+ },
],
});
--
2.47.3
* Re: [PATCH storage v4 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
2026-05-13 12:02 ` [PATCH storage v4 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
@ 2026-05-15 14:02 ` Friedrich Weber
2026-05-15 14:28 ` Fabian Grünbichler
0 siblings, 1 reply; 5+ messages in thread
From: Friedrich Weber @ 2026-05-15 14:02 UTC (permalink / raw)
To: Lukas Sichert, pve-devel
Hi, thanks for the v4! Some comments inline. CC'ing Fabian because of
naming questions (see below).
On 13/05/2026 14:03, Lukas Sichert wrote:
> Currently when deleting a VM whose disk is stored on a
> thinly-provisioned LUN there is no way to also free the storage space
> used by the VM. This is because the current implementation only calls
> 'lvremove'. This command deletes the LVM meta-data for the disk, but it
> does not send discards to the SAN. 'lvmremove' can also be used with
> 'issue_discards', but since LVM meta-data is changed, it needs to be
> done under a cluster-wide lock, which can lead to timeouts. There is
> already an option to enable 'saferemove', which executes 'blkdiscard
> --zeroout' to override the whole storage space allocated to the disk
> with zeros. However it does not free the storage space.[1]
>
> To add the functionality that frees the storage space, adjust the worker
> in the code that is already there for zeroing out. In the worker parse
> the storage config and if 'issue-blkdiscard' is enabled execute
> 'blkdiscard'. This can also be executed in combination with 'blkdiscard
> --zeroout' to first zero out the disk and then free the storage
> space.[1]
>
> To add an option to set 'issue-blkdiscard' in the frontend, add a
> description, so that the variable will be included in the json-Schema.
>
> [1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html
>
> Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
> ---
> src/PVE/Storage.pm | 8 +++----
> src/PVE/Storage/LVMPlugin.pm | 42 ++++++++++++++++++++++++++++++------
> 2 files changed, 40 insertions(+), 10 deletions(-)
>
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 020fa03..fa0c0bc 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -1192,7 +1192,7 @@ sub vdisk_free {
>
> activate_storage($cfg, $storeid);
>
> - my $cleanup_worker;
> + my $discard_worker;
>
> # lock shared storage
> $plugin->cluster_lock_storage(
> @@ -1206,16 +1206,16 @@ sub vdisk_free {
>
> my (undef, undef, undef, undef, undef, $isBase, $format) =
> $plugin->parse_volname($volname);
> - $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> + $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> },
> );
>
> - return if !$cleanup_worker;
> + return if !$discard_worker;
>
> my $rpcenv = PVE::RPCEnvironment::get();
> my $authuser = $rpcenv->get_user();
>
> - $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> + $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
> }
>
Do we have to rename the variable here? If we don't have to, I don't
think we should -- IMO $cleanup_worker is a better name than
$discard_worker ('cleanup' is more general than 'discard').
> sub vdisk_list {
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 3a35e38..03455c6 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -13,6 +13,7 @@ use PVE::Tools qw(run_command file_read_firstline trim);
>
> use PVE::Storage::Common;
> use PVE::Storage::Plugin;
> +use PVE::SafeSyslog;
>
> use base qw(PVE::Storage::Plugin);
>
> @@ -351,11 +352,20 @@ my sub free_lvm_volumes_locked {
> };
>
> # we need to zero out LVM data for security reasons
> - # and to allow thin provisioning
> - my $zero_out_worker = sub {
> + # and discard images to free storage space to allow
> + # thin provisioning
> + my $discard_worker = sub {
> +
> for my $name (@$volnames) {
> my $lvmpath = "/dev/$vg/del-$name";
> - print "zero-out data on image $name ($lvmpath)\n";
> +
> + my $discard_action =
> + $scfg->{saferemove}
> + && $scfg->{'issue-blkdiscard'} ? 'zero-out data and discard (TRIM)'
> + : $scfg->{saferemove} ? 'zero-out data on'
> + : $scfg->{'issue-blkdiscard'} ? 'discard (TRIM)'
> + : undef;
> + print "$discard_action image $name ($lvmpath)\n";
Could be just me, but I find nested ?: expressions quite hard to read.
I'd slightly favor a big if/else here (though that is admittedly verbose).
>
> my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
> run_command(
> @@ -367,8 +377,18 @@ my sub free_lvm_volumes_locked {
> $cmd_activate,
> errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
> );
> + syslog('info', "starting to $discard_action $name ($lvmpath)")
> + if defined($discard_action);
>
> - $secure_delete_cmd->($lvmpath);
> + if ($scfg->{saferemove}) {
> + print "zero-out data on image $name ($lvmpath)\n";
> + $secure_delete_cmd->($lvmpath);
> + }
> + if ($scfg->{'issue-blkdiscard'}) {
> + print "discard image $name ($lvmpath)\n";
> + eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
> + warn $@ if $@;
> + }
>
> $class->cluster_lock_storage(
> $storeid,
> @@ -383,13 +403,13 @@ my sub free_lvm_volumes_locked {
> }
> };
>
> - if ($scfg->{saferemove}) {
> + if ($scfg->{saferemove} || $scfg->{'issue-blkdiscard'}) {
> for my $name (@$volnames) {
> # avoid long running task, so we only rename here
> my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
> run_command($cmd, errmsg => "lvrename '$vg/$name' error");
> }
> - return $zero_out_worker;
> + return $discard_worker;
> } else {
> for my $name (@$volnames) {
> my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
> @@ -428,6 +448,15 @@ sub properties {
> description => "Zero-out data when removing LVs.",
> type => 'boolean',
> },
> + 'issue-blkdiscard' => {
I'm still not so sure about the name -- in my v3 comment [1] I was in
favor of 'discard' over 'blkdiscard' (and I still am), but in addition,
'issue-discard' (or 'issue-blkdiscard' for that matter) may be a bit too
general, because this is only relevant for removal, right? What about
'discard-on-remove'? @Fabian what do you think?
> + description => "Issue discard (TRIM) requests for LVs before removing them.",
> + type => 'boolean',
> + verbose_description =>
> + "If enabled, blkdiscard is issued for the LV before removing it."
> + . " This sends discard requests for the LV's block range, allowing"
I'd mention TRIM here too, see my v3 comment [1].
> + . " thin-provisioned storage to reclaim previously allocated physical"
> + . " space, provided the storage supports discard.",
> + },
> 'saferemove-stepsize' => {
> description => "Wipe step size in MiB."
> . " It will be capped to the maximum supported by the storage.",
> @@ -453,6 +482,7 @@ sub options {
> shared => { optional => 1 },
> disable => { optional => 1 },
> saferemove => { optional => 1 },
> + 'issue-blkdiscard' => { optional => 1 },
> 'saferemove-stepsize' => { optional => 1 },
> saferemove_throughput => { optional => 1 },
> content => { optional => 1 },
[1]
https://lore.proxmox.com/all/24d217bf-5b9f-48d2-8754-9614bbbc5484@proxmox.com/
* Re: [PATCH storage v4 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
2026-05-15 14:02 ` Friedrich Weber
@ 2026-05-15 14:28 ` Fabian Grünbichler
0 siblings, 0 replies; 5+ messages in thread
From: Fabian Grünbichler @ 2026-05-15 14:28 UTC (permalink / raw)
To: Friedrich Weber, Lukas Sichert, pve-devel
Quoting Friedrich Weber (2026-05-15 16:02:52)
> Hi, thanks for the v4! Some comments inline. CC'ing Fabian because of
> naming questions (see below).
>
> On 13/05/2026 14:03, Lukas Sichert wrote:
> > Currently when deleting a VM whose disk is stored on a
> > thinly-provisioned LUN there is no way to also free the storage space
> > used by the VM. This is because the current implementation only calls
> > 'lvremove'. This command deletes the LVM meta-data for the disk, but it
> > does not send discards to the SAN. 'lvmremove' can also be used with
> > 'issue_discards', but since LVM meta-data is changed, it needs to be
> > done under a cluster-wide lock, which can lead to timeouts. There is
> > already an option to enable 'saferemove', which executes 'blkdiscard
> > --zeroout' to override the whole storage space allocated to the disk
> > with zeros. However it does not free the storage space.[1]
> >
> > To add the functionality that frees the storage space, adjust the worker
> > in the code that is already there for zeroing out. In the worker parse
> > the storage config and if 'issue-blkdiscard' is enabled execute
> > 'blkdiscard'. This can also be executed in combination with 'blkdiscard
> > --zeroout' to first zero out the disk and then free the storage
> > space.[1]
> >
> > To add an option to set 'issue-blkdiscard' in the frontend, add a
> > description, so that the variable will be included in the json-Schema.
> >
> > [1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html
> >
> > Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
> > ---
> > src/PVE/Storage.pm | 8 +++----
> > src/PVE/Storage/LVMPlugin.pm | 42 ++++++++++++++++++++++++++++++------
> > 2 files changed, 40 insertions(+), 10 deletions(-)
> >
> > diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> > index 020fa03..fa0c0bc 100755
> > --- a/src/PVE/Storage.pm
> > +++ b/src/PVE/Storage.pm
> > @@ -1192,7 +1192,7 @@ sub vdisk_free {
> >
> > activate_storage($cfg, $storeid);
> >
> > - my $cleanup_worker;
> > + my $discard_worker;
> >
> > # lock shared storage
> > $plugin->cluster_lock_storage(
> > @@ -1206,16 +1206,16 @@ sub vdisk_free {
> >
> > my (undef, undef, undef, undef, undef, $isBase, $format) =
> > $plugin->parse_volname($volname);
> > - $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> > + $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> > },
> > );
> >
> > - return if !$cleanup_worker;
> > + return if !$discard_worker;
> >
> > my $rpcenv = PVE::RPCEnvironment::get();
> > my $authuser = $rpcenv->get_user();
> >
> > - $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> > + $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
> > }
> >
>
> Do we have to rename the variable here? If we don't have to, I don't
> think we should -- IMO $cleanup_worker is a better name than
> $discard_worker ('cleanup' is more general than 'discard').
agreed.
>
> > sub vdisk_list {
> > diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> > index 3a35e38..03455c6 100644
> > --- a/src/PVE/Storage/LVMPlugin.pm
> > +++ b/src/PVE/Storage/LVMPlugin.pm
> > @@ -13,6 +13,7 @@ use PVE::Tools qw(run_command file_read_firstline trim);
> >
> > use PVE::Storage::Common;
> > use PVE::Storage::Plugin;
> > +use PVE::SafeSyslog;
> >
> > use base qw(PVE::Storage::Plugin);
> >
> > @@ -351,11 +352,20 @@ my sub free_lvm_volumes_locked {
> > };
> >
> > # we need to zero out LVM data for security reasons
> > - # and to allow thin provisioning
> > - my $zero_out_worker = sub {
> > + # and discard images to free storage space to allow
> > + # thin provisioning
> > + my $discard_worker = sub {
> > +
> > for my $name (@$volnames) {
> > my $lvmpath = "/dev/$vg/del-$name";
> > - print "zero-out data on image $name ($lvmpath)\n";
> > +
> > + my $discard_action =
> > + $scfg->{saferemove}
> > + && $scfg->{'issue-blkdiscard'} ? 'zero-out data and discard (TRIM)'
> > + : $scfg->{saferemove} ? 'zero-out data on'
> > + : $scfg->{'issue-blkdiscard'} ? 'discard (TRIM)'
> > + : undef;
> > + print "$discard_action image $name ($lvmpath)\n";
>
> Could be just me, but I find nested ?: expressions quite hard to read.
> I'd slightly favor a big if/else here (though that is admittedly verbose).
would say so as well. also, the undef branch is dead code - should that
be enforced/handled?
>
> >
> > my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
> > run_command(
> > @@ -367,8 +377,18 @@ my sub free_lvm_volumes_locked {
> > $cmd_activate,
> > errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
> > );
> > + syslog('info', "starting to $discard_action $name ($lvmpath)")
> > + if defined($discard_action);
> >
> > - $secure_delete_cmd->($lvmpath);
> > + if ($scfg->{saferemove}) {
> > + print "zero-out data on image $name ($lvmpath)\n";
> > + $secure_delete_cmd->($lvmpath);
> > + }
> > + if ($scfg->{'issue-blkdiscard'}) {
> > + print "discard image $name ($lvmpath)\n";
> > + eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
> > + warn $@ if $@;
> > + }
> >
> > $class->cluster_lock_storage(
> > $storeid,
> > @@ -383,13 +403,13 @@ my sub free_lvm_volumes_locked {
> > }
> > };
> >
> > - if ($scfg->{saferemove}) {
> > + if ($scfg->{saferemove} || $scfg->{'issue-blkdiscard'}) {
> > for my $name (@$volnames) {
> > # avoid long running task, so we only rename here
> > my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
> > run_command($cmd, errmsg => "lvrename '$vg/$name' error");
> > }
> > - return $zero_out_worker;
> > + return $discard_worker;
> > } else {
> > for my $name (@$volnames) {
> > my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
> > @@ -428,6 +448,15 @@ sub properties {
> > description => "Zero-out data when removing LVs.",
> > type => 'boolean',
> > },
> > + 'issue-blkdiscard' => {
>
> I'm still not so sure about the name -- in my v3 comment [1] I was in
> favor of 'discard' over 'blkdiscard' (and I still am), but in addition,
> 'issue-discard' (or 'issue-blkdiscard' for that matter) may be a bit too
> general, because this is only relevant for removal, right? What about
> 'discard-on-remove'? @Fabian what do you think?
discard-on-remove sounds okay to me as well, but what about setting this up for
a unification with PVE 10 by doing:
on-volume-remove = discard=1
and then migrating the saferemove parameters to this new property string?
on-volume-remove = discard=bool,zero=bool,zero_stepsize=..,zero_throughput=..
?
>
> > + description => "Issue discard (TRIM) requests for LVs before removing them.",
> > + type => 'boolean',
> > + verbose_description =>
> > + "If enabled, blkdiscard is issued for the LV before removing it."
> > + . " This sends discard requests for the LV's block range, allowing"
>
> I'd mention TRIM here too, see my v3 comment [1].
>
> > + . " thin-provisioned storage to reclaim previously allocated physical"
> > + . " space, provided the storage supports discard.",
> > + },
> > 'saferemove-stepsize' => {
> > description => "Wipe step size in MiB."
> > . " It will be capped to the maximum supported by the storage.",
> > @@ -453,6 +482,7 @@ sub options {
> > shared => { optional => 1 },
> > disable => { optional => 1 },
> > saferemove => { optional => 1 },
> > + 'issue-blkdiscard' => { optional => 1 },
> > 'saferemove-stepsize' => { optional => 1 },
> > saferemove_throughput => { optional => 1 },
> > content => { optional => 1 },
>
> [1]
> https://lore.proxmox.com/all/24d217bf-5b9f-48d2-8754-9614bbbc5484@proxmox.com/