public inbox for pve-devel@lists.proxmox.com
From: Maximiliano Sandoval <m.sandoval@proxmox.com>
To: Lukas Sichert <l.sichert@proxmox.com>
Cc: pve-devel@lists.proxmox.com
Subject: Re: [PATCH storage 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
Date: Mon, 23 Mar 2026 11:57:08 +0100	[thread overview]
Message-ID: <s8ofr5qbypn.fsf@toolbox> (raw)
In-Reply-To: <20260323101506.56098-2-l.sichert@proxmox.com> (Lukas Sichert's message of "Mon, 23 Mar 2026 11:14:54 +0100")

Lukas Sichert <l.sichert@proxmox.com> writes:

> Currently, when deleting a VM whose disk is stored on a
> thinly-provisioned LUN, there is no way to also free the storage space
> used by the VM. This is because the current implementation only calls
> 'lvremove'. This command deletes the LVM metadata for the disk, but it
> does not send discards to the SAN. 'lvremove' can also be used with
> 'issue_discards', but since LVM metadata is changed, it needs to be
> done under a cluster-wide lock, which can lead to timeouts. There is
> already an option to enable 'saferemove', which executes 'blkdiscard
> --zeroout' to overwrite the whole storage space allocated to the disk
> with zeros. However, it does not free the storage space.[1]
>
> To add the functionality that frees the storage space, adjust the
> existing worker that is used for zeroing out. In the worker, parse the
> storage config and, if 'issue_blkdiscard' is enabled, execute
> 'blkdiscard'. This can also be combined with 'blkdiscard --zeroout' to
> first zero out the disk and then free the storage space.[1]
>
> To add an option to set 'issue_blkdiscard' in the frontend, add a
> description, so that the variable will be included in the JSON schema.
>
> [1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html
>
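As an aside for readers less familiar with thin provisioning: the space-freeing effect described above can be reproduced on any Linux box by punching holes in a regular file, which releases backing blocks much like blkdiscard releases blocks on a thin LUN. A minimal sketch, assuming Linux with util-linux's fallocate and a filesystem that supports hole punching (ext4, xfs, tmpfs); none of this is part of the patch:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write 8 MiB of data, then punch a hole over the whole range. The
# allocated block count drops while the file size stays the same -
# analogous to how blkdiscard frees SAN-side space without changing
# the size of the LV.
my ($fh, $path) = tempfile(UNLINK => 1);
print {$fh} "\xff" x (8 * 1024 * 1024);
close $fh or die "close: $!";

my $blocks_before = (stat $path)[12]; # allocated 512-byte blocks

system('fallocate', '--punch-hole', '--offset', '0', '--length', '8MiB', $path) == 0
    or die "fallocate failed: $?";

my $blocks_after = (stat $path)[12];
print "blocks before=$blocks_before after=$blocks_after\n";
```

The file size stays at 8 MiB throughout; only the allocated block count drops - the same distinction as between the LV size and the space actually backed by the SAN.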
> Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
> ---
>  src/PVE/Storage.pm           | 16 ++++++++++++----
>  src/PVE/Storage/LVMPlugin.pm | 31 ++++++++++++++++++++++---------
>  2 files changed, 34 insertions(+), 13 deletions(-)
>
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 6e87bac..b5d97fd 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -1192,7 +1192,7 @@ sub vdisk_free {
>  
>      activate_storage($cfg, $storeid);
>  
> -    my $cleanup_worker;
> +    my $discard_worker;
>  
>      # lock shared storage
>      $plugin->cluster_lock_storage(
> @@ -1206,16 +1206,24 @@ sub vdisk_free {
>  
>              my (undef, undef, undef, undef, undef, $isBase, $format) =
>                  $plugin->parse_volname($volname);
> -            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> +            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
>          },
>      );
>  
> -    return if !$cleanup_worker;
> +    return if !$discard_worker;
>  
>      my $rpcenv = PVE::RPCEnvironment::get();
>      my $authuser = $rpcenv->get_user();
>  
> -    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> +    if ($scfg->{saferemove} && $scfg->{issue_blkdiscard}) {
> +        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{saferemove}) {
> +        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{issue_blkdiscard}) {
> +        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker);
> +    } else {
> +        die "config must have changed during execution - disk can't be deleted safely\n";
> +    }
>  }
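Not blocking, but the branching above reduces to a pure function of the two flags, which might be easier to unit-test in isolation. A standalone sketch - the function name is made up here and not part of the actual API; the real code passes the selected type to $rpcenv->fork_worker():

```perl
use strict;
use warnings;

# Map the two storage-config flags to a worker type, mirroring the
# if/elsif chain in vdisk_free above.
sub select_worker_type {
    my ($saferemove, $issue_blkdiscard) = @_;
    return 'diskzerodiscard' if $saferemove && $issue_blkdiscard;
    return 'diskzero'        if $saferemove;
    return 'diskdiscard'     if $issue_blkdiscard;
    die "config must have changed during execution\n";
}

print select_worker_type(1, 0), "\n"; # prints "diskzero"
```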
>  
>  sub vdisk_list {
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 32a8339..b0b94a6 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -352,12 +352,12 @@ my sub free_lvm_volumes {
>  
>      # we need to zero out LVM data for security reasons
>      # and to allow thin provisioning
> -    my $zero_out_worker = sub {
> +    my $blkdiscard_worker = sub {
> +        my ($saferemove, $issue_blkdiscard) = @_;
> +
>          for my $name (@$volnames) {
>              my $lvmpath = "/dev/$vg/del-$name";
> -            print "zero-out data on image $name ($lvmpath)\n";
> -
> -            my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
> +            my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
>              run_command(
>                  $cmd_activate,
>                  errmsg => "can't activate LV '$lvmpath' to zero-out its data",
> @@ -367,8 +367,15 @@ my sub free_lvm_volumes {
>                  $cmd_activate,
>                  errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
>              );
> -
> -            $secure_delete_cmd->($lvmpath);
> +            if ($saferemove) {
> +                print "zero-out data on image $name ($lvmpath)\n";
> +                $secure_delete_cmd->($lvmpath);
> +            }
> +            if ($issue_blkdiscard) {
> +                print "free storage of image $name ($lvmpath)\n";
> +                eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
> +                warn $@ if $@;
> +            }
>  
>              $class->cluster_lock_storage(
>                  $storeid,
> @@ -379,17 +386,18 @@ my sub free_lvm_volumes {
>                      run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
>                  },
>              );
> -            print "successfully removed volume $name ($vg/del-$name)\n";
>          }
>      };
>  
> -    if ($scfg->{saferemove}) {
> +    if ($scfg->{saferemove} || $scfg->{issue_blkdiscard}) {
>          for my $name (@$volnames) {
>              # avoid long running task, so we only rename here
>              my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
>              run_command($cmd, errmsg => "lvrename '$vg/$name' error");
>          }
> -        return $zero_out_worker;
> +        return sub {
> +            $blkdiscard_worker->($scfg->{saferemove}, $scfg->{issue_blkdiscard});
> +        };
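One detail worth double-checking here: the returned sub dereferences $scfg only when the worker actually runs, so a config change between the lvrename and the worker execution changes which steps are performed (presumably related to the "config must have changed" die in Storage.pm). Capturing the flag values when the closure is built would pin them instead. A small self-contained illustration of the difference, with made-up names:

```perl
use strict;
use warnings;

my $scfg = { saferemove => 1 };

# Reads the hash at call time: sees config changes made after creation.
my $late = sub { return $scfg->{saferemove} };

# Copies the value at creation time: pinned from then on.
my $flag   = $scfg->{saferemove};
my $pinned = sub { return $flag };

$scfg->{saferemove} = 0; # config changes after the workers were created

print "late=", $late->(), " pinned=", $pinned->(), "\n"; # prints "late=0 pinned=1"
```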
>      } else {
>          for my $name (@$volnames) {
>              my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
> @@ -428,6 +436,10 @@ sub properties {
>              description => "Zero-out data when removing LVs.",
>              type => 'boolean',
>          },
> +        issue_blkdiscard => {
> +            description => "Free Storage space when removing LVs.",

I think the verb "Free up" is more appropriate than "free" in this
context.

Additionally, please add a verbose_description explaining the mechanism
and when this can be useful.

> +            type => 'boolean',
> +        },
>          'saferemove-stepsize' => {
>              description => "Wipe step size in MiB."
>                  . " It will be capped to the maximum supported by the storage.",
> @@ -453,6 +465,7 @@ sub options {
>          shared => { optional => 1 },
>          disable => { optional => 1 },
>          saferemove => { optional => 1 },
> +        issue_blkdiscard => { optional => 1 },
>          'saferemove-stepsize' => { optional => 1 },
>          saferemove_throughput => { optional => 1 },
>          content => { optional => 1 },

-- 
Maximiliano



