From: Friedrich Weber <f.weber@proxmox.com>
To: Lukas Sichert <l.sichert@proxmox.com>, pve-devel@lists.proxmox.com
Subject: Re: [PATCH storage v3 1/2] fix #7339: lvmthick: add worker to free space of to be deleted VMs
Date: Tue, 5 May 2026 17:59:29 +0200
Message-ID: <24d217bf-5b9f-48d2-8754-9614bbbc5484@proxmox.com>
In-Reply-To: <20260423144721.54451-2-l.sichert@proxmox.com>

Thanks for tackling this! Comments inline:

On 23/04/2026 16:46, Lukas Sichert wrote:
> Currently, when deleting a VM whose disk is stored on a
> thinly-provisioned LUN, there is no way to also free the storage space
> used by the VM. This is because the current implementation only calls
> 'lvremove'. This command deletes the LVM metadata for the disk, but it
> does not send discards to the SAN. 'lvremove' can also be used with
> 'issue_discards', but since LVM metadata is changed, it needs to be
> done under a cluster-wide lock, which can lead to timeouts. There is
> already an option to enable 'saferemove', which executes 'blkdiscard
> --zeroout' to overwrite the whole storage space allocated to the disk
> with zeros. However, it does not free the storage space. [1]
> 
> To add the functionality that frees the storage space, adjust the
> existing worker that is already used for zeroing out. In the worker,
> parse the storage config and, if 'discard' is enabled, execute
> 'blkdiscard'. This can also be combined with 'blkdiscard --zeroout' to
> first zero out the disk and then free the storage space. [1]
> 
> To add an option to set 'discard' in the frontend, add a description
> so that the variable is included in the JSON schema.
> 
> [1] https://man7.org/linux/man-pages/man8/blkdiscard.8.html
> 
> Signed-off-by: Lukas Sichert <l.sichert@proxmox.com>
> ---
>  .gitignore                   |  1 +
>  src/PVE/Storage.pm           | 16 ++++++++++++----
>  src/PVE/Storage/LVMPlugin.pm | 34 ++++++++++++++++++++++++++--------
>  3 files changed, 39 insertions(+), 12 deletions(-)
> 
> diff --git a/.gitignore b/.gitignore
> index 0a409d5..cd44ab1 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -4,3 +4,4 @@
>  /*.buildinfo
>  /*.changes
>  /libpve-storage-perl-[0-9]*/
> +/libpve-storage-perl_[0-9]*.tar.xz
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 6e87bac..ef1596f 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -1192,7 +1192,7 @@ sub vdisk_free {
>  
>      activate_storage($cfg, $storeid);
>  
> -    my $cleanup_worker;
> +    my $discard_worker;
>  
>      # lock shared storage
>      $plugin->cluster_lock_storage(
> @@ -1206,16 +1206,24 @@ sub vdisk_free {
>  
>              my (undef, undef, undef, undef, undef, $isBase, $format) =
>                  $plugin->parse_volname($volname);
> -            $cleanup_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
> +            $discard_worker = $plugin->free_image($storeid, $scfg, $volname, $isBase, $format);
>          },
>      );
>  
> -    return if !$cleanup_worker;
> +    return if !$discard_worker;
>  
>      my $rpcenv = PVE::RPCEnvironment::get();
>      my $authuser = $rpcenv->get_user();
>  
> -    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> +    if ($scfg->{saferemove} && $scfg->{'issue-blkdiscard'}) {
> +        $rpcenv->fork_worker('diskzerodiscard', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{saferemove}) {
> +        $rpcenv->fork_worker('diskzero', undef, $authuser, $discard_worker);
> +    } elsif ($scfg->{'issue-blkdiscard'}) {
> +        $rpcenv->fork_worker('diskdiscard', undef, $authuser, $discard_worker);
> +    } else {
> +        $rpcenv->fork_worker('imgdel', undef, $authuser, $discard_worker);
> +    }
>  }

Is this change necessary? To me it seems like the fairly general
PVE::Storage::vdisk_free shouldn't check for plugin-specific config
options (like saferemove/issue-blkdiscard) -- in a way, it breaks the
abstraction provided by PVE::Storage. Wouldn't it be nicer to keep
PVE::Storage::vdisk_free as-is, have it spawn the dedicated 'imgdel'
task, and let the worker returned by the LVM plugin zero out and/or
discard depending on the LVM storage settings?

>  
>  sub vdisk_list {
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 32a8339..5a63ae5 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -352,11 +352,11 @@ my sub free_lvm_volumes {
>  
>      # we need to zero out LVM data for security reasons
>      # and to allow thin provisioning

Would be nice to extend the comment here and mention that the worker
will also discard.
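
Maybe something like:

    # we need to zero out and/or discard LVM data for security reasons
    # and to allow thin-provisioned storage to reclaim the space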

> -    my $zero_out_worker = sub {
> +    my $blkdiscard_worker = sub {

I see that $zero_out_worker is not a fitting name anymore because it
doesn't only zero out, but $blkdiscard_worker isn't fitting either,
because it doesn't only discard, right? What about something general
like $cleanup_worker?

> +        my ($saferemove, $issue_blkdiscard) = @_;

Is it necessary to pass $saferemove and $issue_blkdiscard explicitly
here? Doesn't the $blkdiscard_worker closure have access to
$scfg->{saferemove} and $scfg->{'issue-blkdiscard'}? But maybe I'm
missing some Perl intricacy here.

> +
>          for my $name (@$volnames) {
>              my $lvmpath = "/dev/$vg/del-$name";
> -            print "zero-out data on image $name ($lvmpath)\n";
> -
>              my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
>              run_command(
>                  $cmd_activate,
> @@ -367,8 +367,15 @@ my sub free_lvm_volumes {
>                  $cmd_activate,
>                  errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
>              );
> -
> -            $secure_delete_cmd->($lvmpath);
> +            if ($saferemove) {
> +                print "zero-out data on image $name ($lvmpath)\n";
> +                $secure_delete_cmd->($lvmpath);
> +            }
> +            if ($issue_blkdiscard) {
> +                print "discard image $name ($lvmpath)\n";
> +                eval { run_command(['/sbin/blkdiscard', $lvmpath]); };
> +                warn $@ if $@;
> +            }
>  
>              $class->cluster_lock_storage(
>                  $storeid,
> @@ -379,17 +386,18 @@ my sub free_lvm_volumes {
>                      run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
>                  },
>              );
> -            print "successfully removed volume $name ($vg/del-$name)\n";
>          }
>      };
>  
> -    if ($scfg->{saferemove}) {
> +    if ($scfg->{saferemove} || $scfg->{'issue-blkdiscard'}) {
>          for my $name (@$volnames) {
>              # avoid long running task, so we only rename here
>              my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
>              run_command($cmd, errmsg => "lvrename '$vg/$name' error");
>          }
> -        return $zero_out_worker;
> +        return sub {
> +            $blkdiscard_worker->($scfg->{saferemove}, $scfg->{'issue-blkdiscard'});
> +        };
>      } else {
>          for my $name (@$volnames) {
>              my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
> @@ -428,6 +436,15 @@ sub properties {
>              description => "Zero-out data when removing LVs.",
>              type => 'boolean',
>          },
> +        'issue-blkdiscard' => {

I'd prefer 'discard' over 'blkdiscard'. 'blkdiscard' is the name of the
tool we call, but that's more of an implementation detail -- the
important thing is that it discards the previously used blocks.

> +            description => "Issue discard (TRIM) requests for LVs before removing them.",
> +            type => 'boolean',
> +            verbose_description =>
> +                "If enabled, blkdiscard is issued for the LV before removing it."
> +                . " This sends discard requests for the LV's block range, allowing"

I'd mention TRIM here too: "This sends discard (TRIM) requests [...]"

> +                . " thin-provisioned storage to reclaim previously allocated physical"
> +                . " space, provided the storage supports discard.",
> +        },
>          'saferemove-stepsize' => {
>              description => "Wipe step size in MiB."
>                  . " It will be capped to the maximum supported by the storage.",
> @@ -453,6 +470,7 @@ sub options {
>          shared => { optional => 1 },
>          disable => { optional => 1 },
>          saferemove => { optional => 1 },
> +        'issue-blkdiscard' => { optional => 1 },
>          'saferemove-stepsize' => { optional => 1 },
>          saferemove_throughput => { optional => 1 },
>          content => { optional => 1 },




