From: Lukas Sichert <l.sichert@proxmox.com>
To: pve-devel@lists.proxmox.com
Cc: Lukas Sichert <l.sichert@proxmox.com>
Subject: [PATCH storage v4 1/2] fix #7339: lvmthick: add worker to free space of to-be-deleted VMs
Date: Wed, 13 May 2026 14:02:37 +0200
Message-ID: <20260513120240.81893-2-l.sichert@proxmox.com>
In-Reply-To: <20260513120240.81893-1-l.sichert@proxmox.com>

Currently, when deleting a VM whose disk is stored on a
thinly-provisioned LUN, there is no way to also free the storage space
used by the VM. This is because the current implementation only calls
'lvremove'. This command deletes the LVM metadata for the disk, but it
does not send discards to the SAN. 'lvremove' can also be used with
'issue_discards', but since LVM metadata is changed, this needs to
happen under a cluster-wide lock, which can lead to timeouts. There is
already an option to enable 'saferemove', which executes 'blkdiscard
--zeroout' to overwrite the whole storage space allocated to the disk
with zeros. However, this does not free the storage space.[1]
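
For illustration, a rough sketch of the three commands discussed
above, shown independently (not as a sequence), invoked the way the
plugin calls external tools via PVE::Tools::run_command; the LV path
is a hypothetical example:

    use PVE::Tools qw(run_command);

    my $lvmpath = '/dev/vg0/vm-100-disk-0'; # hypothetical LV path

    # 'lvremove' only drops the LVM metadata, no discards reach the SAN
    run_command(['/sbin/lvremove', '-f', $lvmpath]);

    # 'blkdiscard --zeroout' overwrites the LV with zeros, but the
    # physical space stays allocated on the thin-provisioned storage
    run_command(['/sbin/blkdiscard', '--zeroout', $lvmpath]);

    # a plain 'blkdiscard' sends discard (TRIM) requests, so the
    # storage can reclaim the space
    run_command(['/sbin/blkdiscard', $lvmpath]);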

To add the functionality that frees the storage space, extend the
worker that already exists for zeroing out. In the worker, parse the
storage config and, if 'issue-blkdiscard' is enabled, execute
'blkdiscard'. This can also be combined with 'blkdiscard --zeroout' to
first zero out the disk and then free the storage space.[1]
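
As a simplified sketch, the combined case wipes first and then
discards (the actual worker in the diff below additionally renames
and activates the LV, and takes the cluster-wide lock only for the
final 'lvremove'):

    # zero out first if 'saferemove' is set ...
    $secure_delete_cmd->($lvmpath) if $scfg->{saferemove};
    # ... then send discards so the thin-provisioned storage can
    # reclaim the space
    run_command(['/sbin/blkdiscard', $lvmpath]) if $scfg->{'issue-blkdiscard'};

Once applied, the option can then presumably be enabled next to
'saferemove' with something like 'pvesm set <storeid>
--issue-blkdiscard 1'.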

To make the 'issue-blkdiscard' option settable in the frontend, add a
description so that the property is included in the JSON schema.

[1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html

Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
---
 src/PVE/Storage.pm           |  8 +++----
 src/PVE/Storage/LVMPlugin.pm | 42 ++++++++++++++++++++++++++++++------
 2 files changed, 40 insertions(+), 10 deletions(-)

diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 020fa03..fa0c0bc 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -1192,7 +1192,7 @@ sub vdisk_free {
 
     activate_storage($cfg, $storeid);
 
-    my $cleanup_worker;
+    my $discard_worker;
 
     # lock shared storage
     $plugin->cluster_lock_storage(
@@ -1206,16 +1206,16 @@ sub vdisk_free {
 
             my (undef, undef, undef, undef, undef, $isBase, $format) =
                 $plugin->parse_volname($volname);
-            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
+            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
         },
     );
 
-    return if !$cleanup_worker;
+    return if !$discard_worker;
 
     my $rpcenv = PVE::RPCEnvironment::get();
     my $authuser = $rpcenv->get_user();
 
-    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+    $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
 }
 
 sub vdisk_list {
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 3a35e38..03455c6 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -13,6 +13,7 @@ use PVE::Tools qw(run_command file_read_firstline trim);
 
 use PVE::Storage::Common;
 use PVE::Storage::Plugin;
+use PVE::SafeSyslog;
 
 use base qw(PVE::Storage::Plugin);
 
@@ -351,11 +352,20 @@ my sub free_lvm_volumes_locked {
     };
 
     # we need to zero out LVM data for security reasons
-    # and to allow thin provisioning
-    my $zero_out_worker = sub {
+    # and discard images to free storage space to allow
+    # thin provisioning
+    my $discard_worker = sub {
+
         for my $name (@$volnames) {
             my $lvmpath = "/dev/$vg/del-$name";
-            print "zero-out data on image $name ($lvmpath)\n";
+
+            my $discard_action =
+                $scfg->{saferemove}
+                && $scfg->{'issue-blkdiscard'} ? 'zero-out data and discard (TRIM)'
+                : $scfg->{saferemove} ? 'zero-out data on'
+                : $scfg->{'issue-blkdiscard'} ? 'discard (TRIM)'
+                : undef;
+            print "$discard_action image $name ($lvmpath)\n";
 
             my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
             run_command(
@@ -367,8 +377,18 @@ my sub free_lvm_volumes_locked {
                 $cmd_activate,
                 errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
             );
+            syslog('info', "starting to $discard_action $name ($lvmpath)")
+                if defined($discard_action);
 
-            $secure_delete_cmd->($lvmpath);
+            if ($scfg->{saferemove}) {
+                print "zero-out data on image $name ($lvmpath)\n";
+                $secure_delete_cmd->($lvmpath);
+            }
+            if ($scfg->{'issue-blkdiscard'}) {
+                print "discard image $name ($lvmpath)\n";
+                eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
+                warn $@ if $@;
+            }
 
             $class->cluster_lock_storage(
                 $storeid,
@@ -383,13 +403,13 @@ my sub free_lvm_volumes_locked {
         }
     };
 
-    if ($scfg->{saferemove}) {
+    if ($scfg->{saferemove} || $scfg->{'issue-blkdiscard'}) {
         for my $name (@$volnames) {
             # avoid long running task, so we only rename here
             my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
             run_command($cmd, errmsg => "lvrename '$vg/$name' error");
         }
-        return $zero_out_worker;
+        return $discard_worker;
     } else {
         for my $name (@$volnames) {
             my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
@@ -428,6 +448,15 @@ sub properties {
             description => "Zero-out data when removing LVs.",
             type => 'boolean',
         },
+        'issue-blkdiscard' => {
+            description => "Issue discard (TRIM) requests for LVs before removing them.",
+            type => 'boolean',
+            verbose_description =>
+                "If enabled, blkdiscard is issued for the LV before removing it."
+                . " This sends discard requests for the LV's block range, allowing"
+                . " thin-provisioned storage to reclaim previously allocated physical"
+                . " space, provided the storage supports discard.",
+        },
         'saferemove-stepsize' => {
             description => "Wipe step size in MiB."
                 . " It will be capped to the maximum supported by the storage.",
@@ -453,6 +482,7 @@ sub options {
         shared => { optional => 1 },
         disable => { optional => 1 },
         saferemove => { optional => 1 },
+        'issue-blkdiscard' => { optional => 1 },
         'saferemove-stepsize' => { optional => 1 },
         saferemove_throughput => { optional => 1 },
         content => { optional => 1 },
-- 
2.47.3