public inbox for pve-devel@lists.proxmox.com
* [PATCH manager/storage 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs
@ 2026-03-23 10:14 Lukas Sichert
  2026-03-23 10:14 ` [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
  2026-03-23 10:14 ` [PATCH manager 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage Lukas Sichert
  0 siblings, 2 replies; 9+ messages in thread
From: Lukas Sichert @ 2026-03-23 10:14 UTC (permalink / raw)
  To: pve-devel; +Cc: Lukas Sichert

Logical volumes (LV) in an LVM (thick) volume group (VG) are
thick-provisioned, but the underlying backing storage can be
thin-provisioned. In particular, this can be the case if the VG resides
on a LUN provided by a SAN via iSCSI/FC/SAS [1], where the LUN may be
thin-provisioned on the SAN side.

In such setups, one usually wants deleting an LV (e.g. a VM disk) to
free up space on the SAN side, especially when using
snapshots-as-volume-chains, because snapshot LVs are thick-provisioned
LVs from the LVM point of view, so users may want to over-provision the
LUN on the SAN side.

One option to free up space when deleting an LV is to set
`issue_discards = 1` in the LVM config. With this setting, `lvremove`
will send discards for the regions previously used by the LV, which will
(if the SAN supports it) inform the SAN that the space is not in use
anymore and can be freed up. Since `lvremove` modifies LVM metadata, it
has to be issued while holding a cluster-wide lock on the storage.
Unfortunately, depending on the setup, `issue_discards = 1` can make
`lvremove` take very long for big disks (due to the large number of
discards being issued), so that it eventually hits the 60s timeout of
the cluster lock. The 60s are a hard-coded limit and cannot be easily
changed [2].

A better option is to use 'blkdiscard'. This issues discards for all
blocks of the device and therefore frees the storage on the SAN side
[3]. As this does not change any LVM metadata, it can be executed
without holding the cluster-wide storage lock.
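What "freeing up" means here can be illustrated without a SAN: punching holes in a sparse file with `fallocate` releases the allocated blocks in the same way a discard releases blocks on a thin-provisioned LUN. This is only an analogy, not the actual code path, and it assumes a filesystem with hole-punching support:

```shell
# Analogy only: fallocate --punch-hole on a file plays the role that
# blkdiscard plays on a thin-provisioned block device.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 conv=notrunc status=none
before=$(du -k "$f" | cut -f1)   # blocks are allocated now
fallocate --punch-hole --offset 0 --length $((8 * 1024 * 1024)) "$f"
after=$(du -k "$f" | cut -f1)    # allocation shrinks; file size stays the same
echo "allocated before: ${before} KiB, after: ${after} KiB"
rm -f "$f"
```

On the SAN side, the effect is the same: the logical size of the LUN is unchanged, but the backing blocks are returned to the pool.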

There is already a setting 'saferemove', which zeroes out to-be-deleted
LVs using 'blkdiscard --zeroout' in a worker, but it does not discard
the blocks afterwards. Like plain 'blkdiscard', this does not require a
cluster-wide lock and therefore does not run into the 60s timeout.

This series extends the 'saferemove' worker so that it can execute
'blkdiscard', 'blkdiscard --zeroout', or both together, and adds an
option to select this in the GUI.
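The resulting removal flow can be sketched as a dry run on the command line. The VG/LV names are made up, and `run` only echoes the commands so the sketch is safe to execute as-is:

```shell
# Dry-run sketch of the extended removal flow (VG/LV names are made up).
# `run` only echoes the command; actual execution would need root and a real VG.
vg=vg0
lv=vm-100-disk-0
run() { echo "+ $*"; }
run lvrename "$vg" "$lv" "del-$lv"           # cheap rename, done under the cluster lock
run lvchange -aly "/dev/$vg/del-$lv"         # activate the renamed LV locally
run blkdiscard --zeroout "/dev/$vg/del-$lv"  # 'saferemove': overwrite with zeros
run blkdiscard "/dev/$vg/del-$lv"            # discard: tell the SAN the blocks are free
run lvremove -f "$vg/del-$lv"                # short metadata update, again under the lock
```

Only the rename and the final 'lvremove' touch LVM metadata and need the cluster lock; the long-running zeroing/discarding happens in between without it.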


[1] https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Storage_boxes_(SAN/NAS)
[2] https://forum.proxmox.com/threads/175849/post-820043
[3] https://man7.org/linux/man-pages/man8/blkdiscard.8.html


storage:

Lukas Sichert (1):
  fix #7339: lvmthick: add worker to free space of to be deleted VMs

 src/PVE/Storage.pm           | 16 ++++++++++++----
 src/PVE/Storage/LVMPlugin.pm | 31 ++++++++++++++++++++++---------
 2 files changed, 34 insertions(+), 13 deletions(-)


manager:

Lukas Sichert (1):
  fix #7339: lvmthick: ui: add UI fields for option to free storage

 www/manager6/Utils.js           |  3 +++
 www/manager6/storage/LVMEdit.js | 13 +++++++++++++
 2 files changed, 16 insertions(+)


Summary over all repositories:
  4 files changed, 50 insertions(+), 13 deletions(-)

-- 
Generated by murpp 0.11.0




^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
  2026-03-23 10:14 [PATCH manager/storage 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs Lukas Sichert
@ 2026-03-23 10:14 ` Lukas Sichert
  2026-03-23 10:31   ` Michael Köppl
                     ` (2 more replies)
  2026-03-23 10:14 ` [PATCH manager 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage Lukas Sichert
  1 sibling, 3 replies; 9+ messages in thread
From: Lukas Sichert @ 2026-03-23 10:14 UTC (permalink / raw)
  To: pve-devel; +Cc: Lukas Sichert

Currently, when deleting a VM whose disk is stored on a
thinly-provisioned LUN, there is no way to also free the storage space
used by the VM. This is because the current implementation only calls
'lvremove'. This command deletes the LVM metadata for the disk, but it
does not send discards to the SAN. 'lvremove' can also be used with
'issue_discards', but since LVM metadata is changed, it needs to be
done under a cluster-wide lock, which can lead to timeouts. There is
already an option to enable 'saferemove', which executes 'blkdiscard
--zeroout' to overwrite the whole storage space allocated to the disk
with zeros. However, it does not free the storage space [1].

To add the functionality that frees the storage space, adjust the worker
code that is already there for zeroing out. In the worker, parse the
storage config and, if 'issue_blkdiscard' is enabled, execute
'blkdiscard'. This can also be combined with 'blkdiscard --zeroout' to
first zero out the disk and then free the storage space [1].

To make the option settable from the frontend, add a description so that
the variable is included in the JSON schema.

[1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html

Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
 src/PVE/Storage.pm           | 16 ++++++++++++----
 src/PVE/Storage/LVMPlugin.pm | 31 ++++++++++++++++++++++---------
 2 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 6e87bac..b5d97fd 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -1192,7 +1192,7 @@ sub vdisk_free {
 
     activate_storage($cfg, $storeid);
 
-    my $cleanup_worker;
+    my $discard_worker;
 
     # lock shared storage
     $plugin->cluster_lock_storage(
@@ -1206,16 +1206,24 @@ sub vdisk_free {
 
             my (undef, undef, undef, undef, undef, $isBase, $format) =
                 $plugin->parse_volname($volname);
-            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
+            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
         },
     );
 
-    return if !$cleanup_worker;
+    return if !$discard_worker;
 
     my $rpcenv = PVE::RPCEnvironment::get();
     my $authuser = $rpcenv->get_user();
 
-    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+    if ($scfg->{saferemove} && $scfg->{issue_blkdiscard}) {
+        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
+    } elsif ($scfg->{saferemove}) {
+        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker); 
+    } elsif ($scfg->{issue_blkdiscard}) {
+        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker); 
+    } else {
+      die "config must have changed during execution. Disk can't be deleted safely";
+    }
 }
 
 sub vdisk_list {
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 32a8339..b0b94a6 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -352,12 +352,12 @@ my sub free_lvm_volumes {
 
     # we need to zero out LVM data for security reasons
     # and to allow thin provisioning
-    my $zero_out_worker = sub {
+    my $blkdiscard_worker = sub {
+        my ($saferemove, $issue_blkdiscard) = @_;
+
         for my $name (@$volnames) {
             my $lvmpath = "/dev/$vg/del-$name";
-            print "zero-out data on image $name ($lvmpath)\n";
-
-            my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
+                        my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
             run_command(
                 $cmd_activate,
                 errmsg => "can't activate LV '$lvmpath' to zero-out its data",
@@ -367,8 +367,15 @@ my sub free_lvm_volumes {
                 $cmd_activate,
                 errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
             );
-
-            $secure_delete_cmd->($lvmpath);
+            if ($saferemove) {
+                print "zero-out data on image $name ($lvmpath)\n";
+                $secure_delete_cmd->($lvmpath);
+            }
+            if ($issue_blkdiscard) {
+              print "free storage of image $name ($lvmpath)\n";
+              eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
+              warn $@ if $@;
+            }
 
             $class->cluster_lock_storage(
                 $storeid,
@@ -379,17 +386,18 @@ my sub free_lvm_volumes {
                     run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
                 },
             );
-            print "successfully removed volume $name ($vg/del-$name)\n";
         }
     };
 
-    if ($scfg->{saferemove}) {
+    if ($scfg->{saferemove} || $scfg->{issue_blkdiscard}) {
         for my $name (@$volnames) {
             # avoid long running task, so we only rename here
             my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
             run_command($cmd, errmsg => "lvrename '$vg/$name' error");
         }
-        return $zero_out_worker;
+        return sub {
+            $blkdiscard_worker->($scfg->{saferemove}, $scfg->{issue_blkdiscard});
+        };
     } else {
         for my $name (@$volnames) {
             my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
@@ -428,6 +436,10 @@ sub properties {
             description => "Zero-out data when removing LVs.",
             type => 'boolean',
         },
+        issue_blkdiscard => {
+            description => "Free Storage space when removing LVs.",
+            type => 'boolean',
+        },
         'saferemove-stepsize' => {
             description => "Wipe step size in MiB."
                 . " It will be capped to the maximum supported by the storage.",
@@ -453,6 +465,7 @@ sub options {
         shared => { optional => 1 },
         disable => { optional => 1 },
         saferemove => { optional => 1 },
+        issue_blkdiscard => { optional => 1 },
         'saferemove-stepsize' => { optional => 1 },
         saferemove_throughput => { optional => 1 },
         content => { optional => 1 },
-- 
2.47.3






* [PATCH manager 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage
  2026-03-23 10:14 [PATCH manager/storage 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs Lukas Sichert
  2026-03-23 10:14 ` [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
@ 2026-03-23 10:14 ` Lukas Sichert
  2026-03-23 10:52   ` Maximiliano Sandoval
  2026-03-23 12:39   ` Michael Köppl
  1 sibling, 2 replies; 9+ messages in thread
From: Lukas Sichert @ 2026-03-23 10:14 UTC (permalink / raw)
  To: pve-devel; +Cc: Lukas Sichert

In the previous commit, the backend received the functionality to
discard the allocated space of a VM disk on a SAN when a VM is deleted.
The backend checks whether to use this option by parsing storage.cfg to
see if 'issue_blkdiscard' is set to 1. This variable is automatically
stored in the config file if it is present in the 'PUT' API request. To
be able to append it to the API call, the variable needs to be defined
in the JSON schema, which was also added in the previous commit.

To enable an option to free storage in the GUI, create a checkbox with
the name 'issue_blkdiscard'. In the checkbox, use cbind to evaluate
'isCreate' from the component, so that it is only auto-enabled when
newly creating LVM storages. The checkbox also adds the
'issue_blkdiscard' variable and its value to the returned values.

Also add mappings for the descriptions of the new worker tasks, to show
in the GUI which options are used for the current deletion.

Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
 www/manager6/Utils.js           |  3 +++
 www/manager6/storage/LVMEdit.js | 13 +++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 77dae42e..1c70b159 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -2133,6 +2133,9 @@ Ext.define('PVE.Utils', {
             clusterjoin: ['', gettext('Join Cluster')],
             dircreate: [gettext('Directory Storage'), gettext('Create')],
             dirremove: [gettext('Directory'), gettext('Remove')],
+            diskdiscard: ['', gettext('Discard disk')],
+            diskzero: ['', gettext('Zero out disk')],
+            diskzerodiscard: ['', gettext('Zero out and discard disk')],
             download: [gettext('File'), gettext('Download')],
             hamigrate: ['HA', gettext('Migrate')],
             hashutdown: ['HA', gettext('Shutdown')],
diff --git a/www/manager6/storage/LVMEdit.js b/www/manager6/storage/LVMEdit.js
index 148f0601..35d2b329 100644
--- a/www/manager6/storage/LVMEdit.js
+++ b/www/manager6/storage/LVMEdit.js
@@ -241,5 +241,18 @@ Ext.define('PVE.storage.LVMInputPanel', {
             uncheckedValue: 0,
             fieldLabel: gettext('Wipe Removed Volumes'),
         },
+        {
+            xtype: 'proxmoxcheckbox',
+            name: 'issue_blkdiscard',
+            uncheckedValue: 0,
+            cbind: {
+                checked: '{isCreate}',
+            },
+            fieldLabel: gettext('Discard Removed Volumes'),
+            autoEl: {
+                tag: 'div',
+                'data-qtip': gettext('Enable if storage space should be freed when VM is deleted.'),
+            },
+        },
     ],
 });
-- 
2.47.3






* Re: [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
  2026-03-23 10:14 ` [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
@ 2026-03-23 10:31   ` Michael Köppl
  2026-03-23 10:57   ` Maximiliano Sandoval
  2026-03-23 12:50   ` Michael Köppl
  2 siblings, 0 replies; 9+ messages in thread
From: Michael Köppl @ 2026-03-23 10:31 UTC (permalink / raw)
  To: Lukas Sichert, pve-devel

needs a `make tidy`. also left 2 comments inline

On Mon Mar 23, 2026 at 11:14 AM CET, Lukas Sichert wrote:

[snip]

> -    if ($scfg->{saferemove}) {
> +    if ($scfg->{saferemove} || $scfg->{issue_blkdiscard}) {
>          for my $name (@$volnames) {
>              # avoid long running task, so we only rename here
>              my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
>              run_command($cmd, errmsg => "lvrename '$vg/$name' error");
>          }
> -        return $zero_out_worker;
> +        return sub {
> +            $blkdiscard_worker->($scfg->{saferemove}, $scfg->{issue_blkdiscard});
> +        };
>      } else {
>          for my $name (@$volnames) {
>              my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
> @@ -428,6 +436,10 @@ sub properties {
>              description => "Zero-out data when removing LVs.",
>              type => 'boolean',
>          },
> +        issue_blkdiscard => {
> +            description => "Free Storage space when removing LVs.",

nit: s/Storage/storage to have the same capitalization as in other
description fields

> +            type => 'boolean',
> +        },
>          'saferemove-stepsize' => {
>              description => "Wipe step size in MiB."
>                  . " It will be capped to the maximum supported by the storage.",
> @@ -453,6 +465,7 @@ sub options {
>          shared => { optional => 1 },
>          disable => { optional => 1 },
>          saferemove => { optional => 1 },
> +        issue_blkdiscard => { optional => 1 },

I think for new fields we use - instead of _?

>          'saferemove-stepsize' => { optional => 1 },
>          saferemove_throughput => { optional => 1 },
>          content => { optional => 1 },






* Re: [PATCH manager 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage
  2026-03-23 10:14 ` [PATCH manager 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage Lukas Sichert
@ 2026-03-23 10:52   ` Maximiliano Sandoval
  2026-03-23 14:01     ` Fabian Grünbichler
  2026-03-23 12:39   ` Michael Köppl
  1 sibling, 1 reply; 9+ messages in thread
From: Maximiliano Sandoval @ 2026-03-23 10:52 UTC (permalink / raw)
  To: Lukas Sichert; +Cc: pve-devel

Lukas Sichert <l.sichert@proxmox.com> writes:

> In the previous commit the backend recieved the functionality to discard
> allocated space of a VM disk on a SAN, when a VM is deleted. The backend
> checks whether to use this option by parsing storage.cfg to see if
> 'issue_blkdiscard' is set to 1. This variable will automatically be
> stored into the config file if it is present in the 'PUT' API request.
> To be able to append this to the API call, the variable needs
> to be defined in the json-Schema, but this has also been added in the
> previous commit.
>
> To enable a option to free storage in the Gui, create a checkbox with
> the name 'issue_blkdiscard'. In the checkbox use cbind to evaluate
> 'iscreate' from the component, to only autoenable on new creations of
> LVM Storages. This checkbox also adds the 'issue_blkdiscard'
> variable and its value to the return call.
>
> Also add remappings for the description of the new worker tasks, to show
> in the Gui, which options are used for the current deletion.
>
> Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
> ---
>  www/manager6/Utils.js           |  3 +++
>  www/manager6/storage/LVMEdit.js | 13 +++++++++++++
>  2 files changed, 16 insertions(+)
>
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index 77dae42e..1c70b159 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -2133,6 +2133,9 @@ Ext.define('PVE.Utils', {
>              clusterjoin: ['', gettext('Join Cluster')],
>              dircreate: [gettext('Directory Storage'), gettext('Create')],
>              dirremove: [gettext('Directory'), gettext('Remove')],
> +            diskdiscard: ['', gettext('Discard disk')],
> +            diskzero: ['', gettext('Zero out disk')],
> +            diskzerodiscard: ['', gettext('Zero out and discard disk')],
>              download: [gettext('File'), gettext('Download')],
>              hamigrate: ['HA', gettext('Migrate')],
>              hashutdown: ['HA', gettext('Shutdown')],
> diff --git a/www/manager6/storage/LVMEdit.js b/www/manager6/storage/LVMEdit.js
> index 148f0601..35d2b329 100644
> --- a/www/manager6/storage/LVMEdit.js
> +++ b/www/manager6/storage/LVMEdit.js
> @@ -241,5 +241,18 @@ Ext.define('PVE.storage.LVMInputPanel', {
>              uncheckedValue: 0,
>              fieldLabel: gettext('Wipe Removed Volumes'),
>          },
> +        {
> +            xtype: 'proxmoxcheckbox',
> +            name: 'issue_blkdiscard',
> +            uncheckedValue: 0,
> +            cbind: {
> +                checked: '{isCreate}',
> +            },
> +            fieldLabel: gettext('Discard Removed Volumes'),
> +            autoEl: {
> +                tag: 'div',
> +                'data-qtip': gettext('Enable if storage space should be freed when VM is deleted.'),

I would recommend:

'Whether to free up storage space when deleting the VM.'

instead.

But this begs the question: how is this different from "Wipe Removed
Volumes" above? Imho it should be possible to tell them apart just with
the information available in the web UI.

> +            },
> +        },
>      ],
>  });

-- 
Maximiliano





* Re: [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
  2026-03-23 10:14 ` [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
  2026-03-23 10:31   ` Michael Köppl
@ 2026-03-23 10:57   ` Maximiliano Sandoval
  2026-03-23 12:50   ` Michael Köppl
  2 siblings, 0 replies; 9+ messages in thread
From: Maximiliano Sandoval @ 2026-03-23 10:57 UTC (permalink / raw)
  To: Lukas Sichert; +Cc: pve-devel

Lukas Sichert <l.sichert@proxmox.com> writes:

> Currently when deleting a VM whose disk is stored on a
> thinly-provisioned LUN there is no way to also free the storage space
> used by the VM. This is because the current implementation only calls
> 'lvremove'. This command deletes the LVM meta-data for the disk, but it
> does not send discards to the SAN. 'lvmremove' can also be used with
> 'issue_discards', but since LVM meta-data is changed, it needs to be
> done under a cluster-wide lock, which can lead to timeouts. There is
> already an option to enable 'saferemove', which executes 'blkdiscard
> --zeroout' to override the whole storage space allocated to the disk
> with zeros. However it does not free the storage space.[1]
>
> To add the functionality that frees the storage space, adjust the worker
> in the code that is already there for zeroing out. In the worker parse
> the storage config and if 'discard' is enabled execute 'blkdiscard'.
> This can also be executed in combination with 'blkdiscard --zeroout' to
> first zero out the disk and then free the storage space.[1]
>
> To add an option to set 'discard' in the frontend, add a description, so
> that the variable will be included in the json-Schema.
>
> [1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html
>
> Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
> ---
>  src/PVE/Storage.pm           | 16 ++++++++++++----
>  src/PVE/Storage/LVMPlugin.pm | 31 ++++++++++++++++++++++---------
>  2 files changed, 34 insertions(+), 13 deletions(-)
>
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 6e87bac..b5d97fd 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -1192,7 +1192,7 @@ sub vdisk_free {
>  
>      activate_storage($cfg, $storeid);
>  
> -    my $cleanup_worker;
> +    my $discard_worker;
>  
>      # lock shared storage
>      $plugin->cluster_lock_storage(
> @@ -1206,16 +1206,24 @@ sub vdisk_free {
>  
>              my (undef, undef, undef, undef, undef, $isBase, $format) =
>                  $plugin->parse_volname($volname);
> -            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> +            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
>          },
>      );
>  
> -    return if !$cleanup_worker;
> +    return if !$discard_worker;
>  
>      my $rpcenv = PVE::RPCEnvironment::get();
>      my $authuser = $rpcenv->get_user();
>  
> -    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> +    if ($scfg->{saferemove} && $scfg->{issue_blkdiscard}) {
> +        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{saferemove}) {
> +        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker); 
> +    } elsif ($scfg->{issue_blkdiscard}) {
> +        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker); 
> +    } else {
> +      die "config must have changed during execution. Disk can't be deleted safely";
> +    }
>  }
>  
>  sub vdisk_list {
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 32a8339..b0b94a6 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -352,12 +352,12 @@ my sub free_lvm_volumes {
>  
>      # we need to zero out LVM data for security reasons
>      # and to allow thin provisioning
> -    my $zero_out_worker = sub {
> +    my $blkdiscard_worker = sub {
> +        my ($saferemove, $issue_blkdiscard) = @_;
> +
>          for my $name (@$volnames) {
>              my $lvmpath = "/dev/$vg/del-$name";
> -            print "zero-out data on image $name ($lvmpath)\n";
> -
> -            my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
> +                        my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
>              run_command(
>                  $cmd_activate,
>                  errmsg => "can't activate LV '$lvmpath' to zero-out its data",
> @@ -367,8 +367,15 @@ my sub free_lvm_volumes {
>                  $cmd_activate,
>                  errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
>              );
> -
> -            $secure_delete_cmd->($lvmpath);
> +            if ($saferemove) {
> +                print "zero-out data on image $name ($lvmpath)\n";
> +                $secure_delete_cmd->($lvmpath);
> +            }
> +            if ($issue_blkdiscard) {
> +              print "free storage of image $name ($lvmpath)\n";
> +              eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
> +              warn $@ if $@;
> +            }
>  
>              $class->cluster_lock_storage(
>                  $storeid,
> @@ -379,17 +386,18 @@ my sub free_lvm_volumes {
>                      run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
>                  },
>              );
> -            print "successfully removed volume $name ($vg/del-$name)\n";
>          }
>      };
>  
> -    if ($scfg->{saferemove}) {
> +    if ($scfg->{saferemove} || $scfg->{issue_blkdiscard}) {
>          for my $name (@$volnames) {
>              # avoid long running task, so we only rename here
>              my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
>              run_command($cmd, errmsg => "lvrename '$vg/$name' error");
>          }
> -        return $zero_out_worker;
> +        return sub {
> +            $blkdiscard_worker->($scfg->{saferemove}, $scfg->{issue_blkdiscard});
> +        };
>      } else {
>          for my $name (@$volnames) {
>              my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
> @@ -428,6 +436,10 @@ sub properties {
>              description => "Zero-out data when removing LVs.",
>              type => 'boolean',
>          },
> +        issue_blkdiscard => {
> +            description => "Free Storage space when removing LVs.",

I think the verb "Free up" is more appropriate than "free" in this
context.

Additionally, please add a verbose_description explaining the mechanism
and when this can be useful.

> +            type => 'boolean',
> +        },
>          'saferemove-stepsize' => {
>              description => "Wipe step size in MiB."
>                  . " It will be capped to the maximum supported by the storage.",
> @@ -453,6 +465,7 @@ sub options {
>          shared => { optional => 1 },
>          disable => { optional => 1 },
>          saferemove => { optional => 1 },
> +        issue_blkdiscard => { optional => 1 },
>          'saferemove-stepsize' => { optional => 1 },
>          saferemove_throughput => { optional => 1 },
>          content => { optional => 1 },

-- 
Maximiliano





* Re: [PATCH manager 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage
  2026-03-23 10:14 ` [PATCH manager 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage Lukas Sichert
  2026-03-23 10:52   ` Maximiliano Sandoval
@ 2026-03-23 12:39   ` Michael Köppl
  1 sibling, 0 replies; 9+ messages in thread
From: Michael Köppl @ 2026-03-23 12:39 UTC (permalink / raw)
  To: Lukas Sichert, pve-devel

On Mon Mar 23, 2026 at 11:14 AM CET, Lukas Sichert wrote:
> In the previous commit the backend recieved the functionality to discard

this is a bit weird to put in the commit message like that since in the
commit history of pve-manager, there is no related previous commit.

> allocated space of a VM disk on a SAN, when a VM is deleted. The backend
> checks whether to use this option by parsing storage.cfg to see if
> 'issue_blkdiscard' is set to 1. This variable will automatically be
> stored into the config file if it is present in the 'PUT' API request.
> To be able to append this to the API call, the variable needs
> to be defined in the json-Schema, but this has also been added in the
> previous commit.
>
> To enable a option to free storage in the Gui, create a checkbox with
> the name 'issue_blkdiscard'. In the checkbox use cbind to evaluate
> 'iscreate' from the component, to only autoenable on new creations of
> LVM Storages. This checkbox also adds the 'issue_blkdiscard'
> variable and its value to the return call.
>

[snip]





* Re: [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
  2026-03-23 10:14 ` [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
  2026-03-23 10:31   ` Michael Köppl
  2026-03-23 10:57   ` Maximiliano Sandoval
@ 2026-03-23 12:50   ` Michael Köppl
  2 siblings, 0 replies; 9+ messages in thread
From: Michael Köppl @ 2026-03-23 12:50 UTC (permalink / raw)
  To: Lukas Sichert, pve-devel

noticed one more thing here.

On Mon Mar 23, 2026 at 11:14 AM CET, Lukas Sichert wrote:
>      # lock shared storage
>      $plugin->cluster_lock_storage(
> @@ -1206,16 +1206,24 @@ sub vdisk_free {
>  
>              my (undef, undef, undef, undef, undef, $isBase, $format) =
>                  $plugin->parse_volname($volname);
> -            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> +            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
>          },
>      );
>  
> -    return if !$cleanup_worker;
> +    return if !$discard_worker;
>  
>      my $rpcenv = PVE::RPCEnvironment::get();
>      my $authuser = $rpcenv->get_user();
>  
> -    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> +    if ($scfg->{saferemove} && $scfg->{issue_blkdiscard}) {
> +        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{saferemove}) {
> +        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker); 
> +    } elsif ($scfg->{issue_blkdiscard}) {
> +        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker); 
> +    } else {
> +      die "config must have changed during execution. Disk can't be deleted safely";

This changes the current behavior in a way that affects other storage
plugins. vdisk_free is used for various storage types and this part kind
of assumes specifics of the LVM plugin (as in, for the LVM plugin it is
true that if the discard_worker is set, at least one of the fields
would have to be set) and will simply not fork a worker for any storage
plugin where this is not the case. From what I can see right now, this
discard worker is not returned for any of the other plugins (?), but
that might not always be the case. I think it'd make sense to still run
the 'imgdel' worker from before in the else case since the
discard_worker was obviously defined.

I'm also not sure about the error message. The storage config is only
read once at the beginning of the function. Even if it did change in the
meantime, you would not detect it here.

> +    }
>  }
>  
>  sub vdisk_list {
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 32a8339..b0b94a6 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -352,12 +352,12 @@ my sub free_lvm_volumes {
>  
>      # we need to zero out LVM data for security reasons
>      # and to allow thin provisioning






* Re: [PATCH manager 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage
  2026-03-23 10:52   ` Maximiliano Sandoval
@ 2026-03-23 14:01     ` Fabian Grünbichler
  0 siblings, 0 replies; 9+ messages in thread
From: Fabian Grünbichler @ 2026-03-23 14:01 UTC (permalink / raw)
  To: Lukas Sichert, Maximiliano Sandoval; +Cc: pve-devel

On March 23, 2026 11:52 am, Maximiliano Sandoval wrote:
> Lukas Sichert <l.sichert@proxmox.com> writes:
> 
>> In the previous commit, the backend received the functionality to
>> discard the allocated space of a VM disk on a SAN when a VM is
>> deleted. The backend checks whether to use this option by parsing
>> storage.cfg to see if 'issue_blkdiscard' is set to 1. This variable
>> is automatically stored in the config file if it is present in the
>> 'PUT' API request. To be able to append it to the API call, the
>> variable needs to be defined in the JSON schema, which was also done
>> in the previous commit.
>>
>> To enable an option to free storage in the GUI, create a checkbox
>> named 'issue_blkdiscard'. In the checkbox, use cbind to evaluate
>> 'isCreate' from the component, so it is only auto-enabled when
>> creating new LVM storages. This checkbox also adds the
>> 'issue_blkdiscard' variable and its value to the submitted values.
>>
>> Also add mappings for the descriptions of the new worker tasks, to
>> show in the GUI which options are used for the current deletion.
>>
>> Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
>> ---
>>  www/manager6/Utils.js           |  3 +++
>>  www/manager6/storage/LVMEdit.js | 13 +++++++++++++
>>  2 files changed, 16 insertions(+)
>>
>> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
>> index 77dae42e..1c70b159 100644
>> --- a/www/manager6/Utils.js
>> +++ b/www/manager6/Utils.js
>> @@ -2133,6 +2133,9 @@ Ext.define('PVE.Utils', {
>>              clusterjoin: ['', gettext('Join Cluster')],
>>              dircreate: [gettext('Directory Storage'), gettext('Create')],
>>              dirremove: [gettext('Directory'), gettext('Remove')],
>> +            diskdiscard: ['', gettext('Discard disk')],
>> +            diskzero: ['', gettext('Zero out disk')],
>> +            diskzerodiscard: ['', gettext('Zero out and discard disk')],
>>              download: [gettext('File'), gettext('Download')],
>>              hamigrate: ['HA', gettext('Migrate')],
>>              hashutdown: ['HA', gettext('Shutdown')],
>> diff --git a/www/manager6/storage/LVMEdit.js b/www/manager6/storage/LVMEdit.js
>> index 148f0601..35d2b329 100644
>> --- a/www/manager6/storage/LVMEdit.js
>> +++ b/www/manager6/storage/LVMEdit.js
>> @@ -241,5 +241,18 @@ Ext.define('PVE.storage.LVMInputPanel', {
>>              uncheckedValue: 0,
>>              fieldLabel: gettext('Wipe Removed Volumes'),
>>          },
>> +        {
>> +            xtype: 'proxmoxcheckbox',
>> +            name: 'issue_blkdiscard',
>> +            uncheckedValue: 0,
>> +            cbind: {
>> +                checked: '{isCreate}',
>> +            },
>> +            fieldLabel: gettext('Discard Removed Volumes'),
>> +            autoEl: {
>> +                tag: 'div',
>> +                'data-qtip': gettext('Enable if storage space should be freed when VM is deleted.'),
> 
> I would recommend:
> 
> 'Whether to free up storage space when deleting the VM.'
> 
> instead.

I think both this and the backend description should contain the words
"discard" and "trim". That the end result is freeing capacity on the
backing device is a (desired) implementation detail.

If you don't know what discard means, you should not enable this option
anyway.

"when VM is deleted" is wrong in any case, the option affects volumes
which are destroyed (which can happen as part of destroying a guest, of
course, but is not limited to those tasks).

What we want to convey:

- destroying a volume always logically removes it
- saferemove wipes the volume before removing it (prevents data leakage)
- issue_blkdiscard discards the volume's data before removing it
  (increases lifespan of flash media, allows better thin provisioning,
  requires support across the whole stack to work)
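As a side note, whether discard is actually supported end-to-end can be
checked with util-linux before enabling the option (device names in the
output are of course setup-specific):

```shell
# Show discard capabilities for every layer of the block stack.
# A zero in DISC-GRAN/DISC-MAX means that layer does not pass
# discards on, so blkdiscard would have no effect there.
lsblk --discard
```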

"free up space" is too generic, since it's not clear whether this refers
to logical space usage or physical space usage, IMHO.

> 
> But this raises the question: How is this different from "Wipe
> Removed Volumes" above? Imho it should be possible to tell them apart
> just with the information available in the web UI.
> 
>> +            },
>> +        },
>>      ],
>>  });
> 
> -- 
> Maximiliano

Thread overview: 9+ messages
2026-03-23 10:14 [PATCH manager/storage 0/2] fix #7339: lvmthick: add option to free storage for deleted VMs Lukas Sichert
2026-03-23 10:14 ` [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be " Lukas Sichert
2026-03-23 10:31   ` Michael Köppl
2026-03-23 10:57   ` Maximiliano Sandoval
2026-03-23 12:50   ` Michael Köppl
2026-03-23 10:14 ` [PATCH manager 2/2] fix #7339: lvmthick: ui: add UI fields for option to free storage Lukas Sichert
2026-03-23 10:52   ` Maximiliano Sandoval
2026-03-23 14:01     ` Fabian Grünbichler
2026-03-23 12:39   ` Michael Köppl