* [PATCH manager/storage v3 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs
@ 2026-04-23 14:47 Lukas Sichert
  2026-04-23 14:47 ` [PATCH storage v3 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
  2026-04-23 14:47 ` [PATCH manager v3 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage Lukas Sichert
  0 siblings, 2 replies; 4+ messages in thread
From: Lukas Sichert @ 2026-04-23 14:47 UTC (permalink / raw)
To: pve-devel; +Cc: Lukas Sichert

Logical volumes (LVs) in an LVM (thick) volume group (VG) are thick-provisioned, but
the underlying backing storage can be thin-provisioned. In particular, this can be
the case if the VG resides on a LUN provided by a SAN via iSCSI/FC/SAS [1], where
the LUN may be thin-provisioned on the SAN side. In such setups, one usually wants
deleting an LV (e.g. a VM disk) to free up space on the SAN side, especially when
using snapshots-as-volume-chains: snapshot LVs are thick-provisioned LVs from the
LVM point of view, so users may want to over-provision the LUN on the SAN side.

One option to free up space when deleting an LV is to set `issue_discards = 1` in
the LVM config. With this setting, `lvremove` sends discards for the regions
previously used by the LV, which (if the SAN supports it) informs the SAN that the
space is no longer in use and can be freed up. Since `lvremove` modifies LVM
metadata, it has to be issued while holding a cluster-wide lock on the storage.
Unfortunately, depending on the setup, `issue_discards = 1` can make `lvremove`
take very long for big disks (due to the large number of discards being issued),
so that it eventually hits the 60s timeout of the cluster lock. The 60s are a
hard-coded limit and cannot easily be changed [2].

A better option is to use 'blkdiscard'. This issues discards for all blocks of the
device and therefore frees the storage on the SAN [3]. As this option does not
require changing any LVM metadata, it can be executed without holding the
cluster-wide storage lock.

There is already a setting for 'saferemove', which zeroes out to-be-deleted LVs
using 'blkdiscard --zeroout' in a worker, but it does not discard the blocks
afterwards. This, similarly to plain 'blkdiscard', does not require a cluster-wide
lock and therefore can be executed without running into the 60s timeout.

This series extends the 'saferemove' worker so that it can execute 'blkdiscard',
'blkdiscard --zeroout', or both together, and adds an option to select this in
the GUI.

Changes from v1 to v2 (thanks @Michael, Maximiliano, Fabian):
- add more explicit descriptions in front- and backend, specifically mentioning
  discard (TRIM)
- add a verbose description in the backend explaining the mechanism and why it
  should be used for thin-provisioned storage
- add a forked fallback worker execution to allow other plugins to issue workers
  without these config options
- rename variable issue_blkdiscard -> 'issue-blkdiscard' to conform to newer style

Changes from v2 to v3 (thanks @Michael):
- correct issue_blkdiscard -> 'issue-blkdiscard' in the commit message for
  pve-manager
- replace 'previous commit' with a more obvious reference to the first commit of
  this series in the commit message for pve-manager

[1] https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Storage_boxes_(SAN/NAS)
[2] https://forum.proxmox.com/threads/175849/post-820043

storage:

Lukas Sichert (1):
  fix #7339: lvmthick: add worker to free space of to be deleted VMs

 .gitignore                   |  1 +
 src/PVE/Storage.pm           | 16 ++++++++++++----
 src/PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++--------
 3 files changed, 39 insertions(+), 12 deletions(-)

manager:

Lukas Sichert (1):
  fix #7339: lvmthick: ui: add UI fields for option to free storage

 www/manager6/Utils.js           |  3 +++
 www/manager6/storage/LVMEdit.js | 15 +++++++++++++++
 2 files changed, 18 insertions(+)

Summary over all repositories:
  5 files changed, 57 insertions(+), 12 deletions(-)

-- 
Generated by murpp 0.10.0
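[Editorial note: the cover letter's mapping of the two storage.cfg flags to the four worker types (zero-out, discard, both, or plain delete) is a small decision table. Below is a minimal re-sketch of that logic in Python purely for illustration; the actual implementation is Perl in PVE::Storage::vdisk_free, and the function name `worker_type` is invented here.]

```python
# Map the two storage.cfg flags to the worker type that vdisk_free forks.
# Mirrors the if/elsif chain added to src/PVE/Storage.pm in patch 1/2.
def worker_type(saferemove: bool, issue_blkdiscard: bool) -> str:
    if saferemove and issue_blkdiscard:
        return "diskzerodiscard"  # zero out, then discard
    if saferemove:
        return "diskzero"         # zero out only
    if issue_blkdiscard:
        return "diskdiscard"      # discard only
    return "imgdel"               # plain removal, no long-running work

print(worker_type(False, False))  # -> imgdel
```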
* [PATCH storage v3 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
  2026-04-23 14:47 [PATCH manager/storage v3 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs Lukas Sichert
@ 2026-04-23 14:47 ` Lukas Sichert
  2026-05-05 15:59   ` Friedrich Weber
  2026-04-23 14:47 ` [PATCH manager v3 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage Lukas Sichert
  1 sibling, 1 reply; 4+ messages in thread
From: Lukas Sichert @ 2026-04-23 14:47 UTC (permalink / raw)
To: pve-devel; +Cc: Lukas Sichert

Currently, when deleting a VM whose disk is stored on a thinly-provisioned LUN,
there is no way to also free the storage space used by the VM. This is because
the current implementation only calls 'lvremove'. This command deletes the LVM
metadata for the disk, but it does not send discards to the SAN. 'lvremove' can
also be used with 'issue_discards', but since LVM metadata is changed, it needs
to be done under a cluster-wide lock, which can lead to timeouts. There is
already an option to enable 'saferemove', which executes 'blkdiscard --zeroout'
to overwrite the whole storage space allocated to the disk with zeros. However,
it does not free the storage space [1].

To add the functionality that frees the storage space, adjust the worker code
that is already there for zeroing out. In the worker, parse the storage config
and, if 'discard' is enabled, execute 'blkdiscard'. This can also be executed in
combination with 'blkdiscard --zeroout' to first zero out the disk and then free
the storage space [1].

To add an option to set 'discard' in the frontend, add a description so that the
variable is included in the JSON schema.

[1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html

Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
 .gitignore                   |  1 +
 src/PVE/Storage.pm           | 16 ++++++++++++----
 src/PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++--------
 3 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/.gitignore b/.gitignore
index 0a409d5..cd44ab1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,3 +4,4 @@
 /*.buildinfo
 /*.changes
 /libpve-storage-perl-[0-9]*/
+/libpve-storage_perl-[0-9]*.tar.xz
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 6e87bac..ef1596f 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -1192,7 +1192,7 @@ sub vdisk_free {
 
     activate_storage($cfg, $storeid);
 
-    my $cleanup_worker;
+    my $discard_worker;
 
     # lock shared storage
     $plugin->cluster_lock_storage(
@@ -1206,16 +1206,24 @@ sub vdisk_free {
 
             my (undef, undef, undef, undef, undef, $isBase, $format) =
                 $plugin->parse_volname($volname);
-            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
+            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
         },
     );
 
-    return if !$cleanup_worker;
+    return if !$discard_worker;
 
     my $rpcenv = PVE::RPCEnvironment::get();
     my $authuser = $rpcenv->get_user();
 
-    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+    if ($scfg->{saferemove} && $scfg->{'issue-blkdiscard'}) {
+        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
+    } elsif ($scfg->{saferemove}) {
+        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker);
+    } elsif ($scfg->{'issue-blkdiscard'}) {
+        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker);
+    } else {
+        $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
+    }
 }
 
 sub vdisk_list {
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 32a8339..5a63ae5 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -352,11 +352,11 @@ my sub free_lvm_volumes {
 
     # we need to zero out LVM data for security reasons
     # and to allow thin provisioning
-    my $zero_out_worker = sub {
+    my $blkdiscard_worker = sub {
+        my ($saferemove, $issue_blkdiscard) = @_;
+
         for my $name (@$volnames) {
             my $lvmpath = "/dev/$vg/del-$name";
-            print "zero-out data on image $name ($lvmpath)\n";
-
             my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
             run_command(
                 $cmd_activate,
@@ -367,8 +367,15 @@ my sub free_lvm_volumes {
                 $cmd_activate,
                 errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
             );
-
-            $secure_delete_cmd->($lvmpath);
+            if ($saferemove) {
+                print "zero-out data on image $name ($lvmpath)\n";
+                $secure_delete_cmd->($lvmpath);
+            }
+            if ($issue_blkdiscard) {
+                print "discard image $name ($lvmpath)\n";
+                eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
+                warn $@ if $@;
+            }
 
             $class->cluster_lock_storage(
                 $storeid,
@@ -379,17 +386,18 @@ my sub free_lvm_volumes {
                     run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
                 },
             );
-            print "successfully removed volume $name ($vg/del-$name)\n";
         }
     };
 
-    if ($scfg->{saferemove}) {
+    if ($scfg->{saferemove} || $scfg->{'issue-blkdiscard'}) {
         for my $name (@$volnames) {
             # avoid long running task, so we only rename here
             my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
             run_command($cmd, errmsg => "lvrename '$vg/$name' error");
         }
-        return $zero_out_worker;
+        return sub {
+            $blkdiscard_worker->($scfg->{saferemove}, $scfg->{'issue-blkdiscard'});
+        };
     } else {
         for my $name (@$volnames) {
             my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
@@ -428,6 +436,15 @@ sub properties {
             description => "Zero-out data when removing LVs.",
             type => 'boolean',
         },
+        'issue-blkdiscard' => {
+            description => "Issue discard (TRIM) requests for LVs before removing them.",
+            type => 'boolean',
+            verbose_description =>
+                "If enabled, blkdiscard is issued for the LV before removing it."
+                . " This sends discard requests for the LV's block range, allowing"
+                . " thin-provisioned storage to reclaim previously allocated physical"
+                . " space, provided the storage supports discard.",
+        },
         'saferemove-stepsize' => {
             description => "Wipe step size in MiB."
                 . " It will be capped to the maximum supported by the storage.",
@@ -453,6 +470,7 @@ sub options {
         shared => { optional => 1 },
         disable => { optional => 1 },
         saferemove => { optional => 1 },
+        'issue-blkdiscard' => { optional => 1 },
         'saferemove-stepsize' => { optional => 1 },
         saferemove_throughput => { optional => 1 },
         content => { optional => 1 },
-- 
2.47.3
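[Editorial note: the patch above follows a two-phase pattern worth making explicit: under the cluster-wide lock it only does the cheap metadata step (lvrename to `del-<name>`), and it returns a closure that the caller forks as a worker so the slow zero-out/discard runs outside the lock. The Python sketch below is illustrative only; the names `rename_volume`, `slow_cleanup`, and `remove_volume` are invented stand-ins for the Perl helpers.]

```python
# Two-phase deletion: do only fast metadata work while holding the lock,
# hand back a callable for the slow part to run unlocked afterwards.
def free_volumes(volnames, want_slow_cleanup, rename_volume, slow_cleanup, remove_volume):
    if want_slow_cleanup:
        for name in volnames:
            rename_volume(name, f"del-{name}")  # fast, safe under the 60s lock
        # returned closure runs later, in a forked worker, without the lock
        return lambda: [slow_cleanup(f"del-{name}") for name in volnames]
    for name in volnames:
        remove_volume(name)  # plain lvremove path, also fast
    return None
```

The key property is that nothing time-proportional to the disk size happens between taking and releasing the lock.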
* Re: [PATCH storage v3 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
  2026-04-23 14:47 ` [PATCH storage v3 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
@ 2026-05-05 15:59   ` Friedrich Weber
  0 siblings, 0 replies; 4+ messages in thread
From: Friedrich Weber @ 2026-05-05 15:59 UTC (permalink / raw)
To: Lukas Sichert, pve-devel

Thanks for tackling this! Comments inline:

On 23/04/2026 16:46, Lukas Sichert wrote:
> Currently, when deleting a VM whose disk is stored on a
> thinly-provisioned LUN, there is no way to also free the storage space
> used by the VM. This is because the current implementation only calls
> 'lvremove'. This command deletes the LVM metadata for the disk, but it
> does not send discards to the SAN. 'lvremove' can also be used with
> 'issue_discards', but since LVM metadata is changed, it needs to be
> done under a cluster-wide lock, which can lead to timeouts. There is
> already an option to enable 'saferemove', which executes 'blkdiscard
> --zeroout' to overwrite the whole storage space allocated to the disk
> with zeros. However, it does not free the storage space [1].
> 
> To add the functionality that frees the storage space, adjust the worker
> code that is already there for zeroing out. In the worker, parse
> the storage config and, if 'discard' is enabled, execute 'blkdiscard'.
> This can also be executed in combination with 'blkdiscard --zeroout' to
> first zero out the disk and then free the storage space [1].
> 
> To add an option to set 'discard' in the frontend, add a description so
> that the variable is included in the JSON schema.
> 
> [1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html
> 
> Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
> ---
>  .gitignore                   |  1 +
>  src/PVE/Storage.pm           | 16 ++++++++++++----
>  src/PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++--------
>  3 files changed, 39 insertions(+), 12 deletions(-)
> 
> diff --git a/.gitignore b/.gitignore
> index 0a409d5..cd44ab1 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -4,3 +4,4 @@
>  /*.buildinfo
>  /*.changes
>  /libpve-storage-perl-[0-9]*/
> +/libpve-storage_perl-[0-9]*.tar.xz
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 6e87bac..ef1596f 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -1192,7 +1192,7 @@ sub vdisk_free {
>  
>      activate_storage($cfg, $storeid);
>  
> -    my $cleanup_worker;
> +    my $discard_worker;
>  
>      # lock shared storage
>      $plugin->cluster_lock_storage(
> @@ -1206,16 +1206,24 @@ sub vdisk_free {
>  
>              my (undef, undef, undef, undef, undef, $isBase, $format) =
>                  $plugin->parse_volname($volname);
> -            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> +            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
>          },
>      );
>  
> -    return if !$cleanup_worker;
> +    return if !$discard_worker;
>  
>      my $rpcenv = PVE::RPCEnvironment::get();
>      my $authuser = $rpcenv->get_user();
>  
> -    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> +    if ($scfg->{saferemove} && $scfg->{'issue-blkdiscard'}) {
> +        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{saferemove}) {
> +        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{'issue-blkdiscard'}) {
> +        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker);
> +    } else {
> +        $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
> +    }
>  }

Is this change necessary? To me it seems like the fairly general
PVE::Storage::vdisk_free shouldn't check for plugin-specific config
options (like saferemove/issue-blkdiscard) -- in a way, it breaks the
abstraction provided by PVE::Storage. Wouldn't it be nicer to keep
PVE::Storage::vdisk_free as-is, have it spawn the dedicated 'imgdel'
task, and have the 'imgdel' task returned by the LVM plugin then
zero-out+discard / only zero-out / only discard depending on the LVM
storage settings?

>  
>  sub vdisk_list {
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 32a8339..5a63ae5 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -352,11 +352,11 @@ my sub free_lvm_volumes {
>  
>      # we need to zero out LVM data for security reasons
>      # and to allow thin provisioning

Would be nice to extend the comment here and mention that the worker
will also discard.

> -    my $zero_out_worker = sub {
> +    my $blkdiscard_worker = sub {

I see that $zero_out_worker is not a fitting name anymore because it
doesn't only zero-out, but $blkdiscard_worker isn't fitting either,
because it doesn't only discard, right? What about something general
like $cleanup_worker?

> +        my ($saferemove, $issue_blkdiscard) = @_;

Is it necessary to pass $saferemove and $issue_blkdiscard explicitly
here? Doesn't the $zero_out_worker "closure" have access to
$scfg->{saferemove} and $scfg->{'issue-blkdiscard'}? But maybe I'm
missing some Perl intricacy here.

> +
>          for my $name (@$volnames) {
>              my $lvmpath = "/dev/$vg/del-$name";
> -            print "zero-out data on image $name ($lvmpath)\n";
> -
>              my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
>              run_command(
>                  $cmd_activate,
> @@ -367,8 +367,15 @@ my sub free_lvm_volumes {
>                  $cmd_activate,
>                  errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
>              );
> -
> -            $secure_delete_cmd->($lvmpath);
> +            if ($saferemove) {
> +                print "zero-out data on image $name ($lvmpath)\n";
> +                $secure_delete_cmd->($lvmpath);
> +            }
> +            if ($issue_blkdiscard) {
> +                print "discard image $name ($lvmpath)\n";
> +                eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
> +                warn $@ if $@;
> +            }
>  
>              $class->cluster_lock_storage(
>                  $storeid,
> @@ -379,17 +386,18 @@ my sub free_lvm_volumes {
>                      run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
>                  },
>              );
> -            print "successfully removed volume $name ($vg/del-$name)\n";
>          }
>      };
>  
> -    if ($scfg->{saferemove}) {
> +    if ($scfg->{saferemove} || $scfg->{'issue-blkdiscard'}) {
>          for my $name (@$volnames) {
>              # avoid long running task, so we only rename here
>              my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
>              run_command($cmd, errmsg => "lvrename '$vg/$name' error");
>          }
> -        return $zero_out_worker;
> +        return sub {
> +            $blkdiscard_worker->($scfg->{saferemove}, $scfg->{'issue-blkdiscard'});
> +        };
>      } else {
>          for my $name (@$volnames) {
>              my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
> @@ -428,6 +436,15 @@ sub properties {
>              description => "Zero-out data when removing LVs.",
>              type => 'boolean',
>          },
> +        'issue-blkdiscard' => {

I'd prefer 'discard' over 'blkdiscard'. 'blkdiscard' is the name of the
tool we call, but that's more of an implementation detail -- the
important thing is that it discards the previously used blocks.

> +            description => "Issue discard (TRIM) requests for LVs before removing them.",
> +            type => 'boolean',
> +            verbose_description =>
> +                "If enabled, blkdiscard is issued for the LV before removing it."
> +                . " This sends discard requests for the LV's block range, allowing"

I'd mention TRIM here too: "This sends discard (TRIM) requests [...]"

> +                . " thin-provisioned storage to reclaim previously allocated physical"
> +                . " space, provided the storage supports discard.",
> +        },
>          'saferemove-stepsize' => {
>              description => "Wipe step size in MiB."
>                  . " It will be capped to the maximum supported by the storage.",
> @@ -453,6 +470,7 @@ sub options {
>          shared => { optional => 1 },
>          disable => { optional => 1 },
>          saferemove => { optional => 1 },
> +        'issue-blkdiscard' => { optional => 1 },
>          'saferemove-stepsize' => { optional => 1 },
>          saferemove_throughput => { optional => 1 },
>          content => { optional => 1 },
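[Editorial note: Friedrich's closure question above generalizes beyond Perl -- a sub defined inside a function captures the surrounding lexicals, so passing the $scfg flags as explicit arguments is redundant unless the values must be snapshotted at creation time. A minimal Python illustration of the same point; the dict-based `scfg` and `make_worker` are invented for this sketch, not the Perl code under review:]

```python
def make_worker(scfg):
    # The inner function closes over `scfg`: no need to pass the flags
    # explicitly when the worker is created in the same lexical scope.
    def worker():
        return (bool(scfg.get("saferemove")), bool(scfg.get("issue-blkdiscard")))
    return worker

w = make_worker({"saferemove": 1})
print(w())  # -> (True, False)
```

The same holds for Perl anonymous subs, which is why the v3 patch's extra indirection (`return sub { $blkdiscard_worker->(...) }`) works but is arguably unnecessary.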
* [PATCH manager v3 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage
  2026-04-23 14:47 [PATCH manager/storage v3 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs Lukas Sichert
  2026-04-23 14:47 ` [PATCH storage v3 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
@ 2026-04-23 14:47 ` Lukas Sichert
  1 sibling, 0 replies; 4+ messages in thread
From: Lukas Sichert @ 2026-04-23 14:47 UTC (permalink / raw)
To: pve-devel; +Cc: Lukas Sichert

In the commit 'fix #7339: lvmthick: add worker to free space of to be
deleted VMs' for the 'pve-storage' repo, the backend received the
functionality to discard the allocated space of a VM disk on a SAN when
a VM is deleted. The backend checks whether to use this option by
parsing storage.cfg to see if 'issue-blkdiscard' is set to 1. This
variable is automatically stored in the config file if it is present in
the 'PUT' API request. To be able to append this to the API call, the
variable needs to be defined in the JSON schema, but this has also been
added in the first commit of this series.

To enable an option to free storage in the GUI, create a checkbox with
the name 'issue-blkdiscard'. In the checkbox, use cbind to evaluate
'isCreate' from the component, to only auto-enable on new creations of
LVM storages. This checkbox also adds the 'issue-blkdiscard' variable
and its value to the return call.

Also add mappings for the descriptions of the new worker tasks, to show
in the GUI which options are used for the current deletion.

Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
 www/manager6/Utils.js           |  3 +++
 www/manager6/storage/LVMEdit.js | 15 +++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index b36e46fd..03558942 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -2133,6 +2133,9 @@ Ext.define('PVE.Utils', {
         clusterjoin: ['', gettext('Join Cluster')],
         dircreate: [gettext('Directory Storage'), gettext('Create')],
         dirremove: [gettext('Directory'), gettext('Remove')],
+        diskdiscard: ['', gettext('Discard disk')],
+        diskzero: ['', gettext('Zero out disk')],
+        diskzerodiscard: ['', gettext('Zero out and discard disk')],
         download: [gettext('File'), gettext('Download')],
         hamigrate: ['HA', gettext('Migrate')],
         hashutdown: ['HA', gettext('Shutdown')],
diff --git a/www/manager6/storage/LVMEdit.js b/www/manager6/storage/LVMEdit.js
index 148f0601..56463bce 100644
--- a/www/manager6/storage/LVMEdit.js
+++ b/www/manager6/storage/LVMEdit.js
@@ -241,5 +241,20 @@ Ext.define('PVE.storage.LVMInputPanel', {
             uncheckedValue: 0,
             fieldLabel: gettext('Wipe Removed Volumes'),
         },
+        {
+            xtype: 'proxmoxcheckbox',
+            name: 'issue-blkdiscard',
+            uncheckedValue: 0,
+            cbind: {
+                checked: '{isCreate}',
+            },
+            fieldLabel: gettext('Discard Removed Volumes'),
+            autoEl: {
+                tag: 'div',
+                'data-qtip': gettext(
+                    'Enable to issue discard (TRIM) requests for logical volumes before removing them.',
+                ),
+            },
+        },
     ],
 });
-- 
2.47.3