From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH v5 pve-storage 1/2] lvmplugin: use blkdiscard when supported instead of cstream to saferemove drive
Date: Thu, 23 Oct 2025 14:23:30 +0200
Message-ID: <20251023122331.477027-2-alexandre.derumier@groupe-cyllene.com>

The current cstream implementation is quite slow, even without throttling.

Use blkdiscard --zeroout instead when the storage supports it, which is
several orders of magnitude faster.

Another benefit is that blkdiscard skips already-zeroed blocks, so it is
very fast for empty temporary images such as snapshots.

blkdiscard does not support throttling like cstream does, but we can tune
the step size of the zeroing batches pushed to the storage.
A 32 MiB step size is used by default, like oVirt, where it seems to strike
the best balance between speed and load:
https://github.com/oVirt/vdsm/commit/79f1d79058aad863ca4b6672d4a5ce2be8e48986

It can be reduced with the "saferemove-stepsize" option.
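
For illustration, a storage.cfg entry using it could look like this
(storage name and volume group are placeholders, not from this patch):

lvm: mylvm
        vgname test
        content images
        saferemove 1
        saferemove-stepsize 16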

The step size is also automatically reduced to the sysfs write_zeroes_max_bytes
value, which is the maximum zeroing batch size the storage supports.
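
For reference, this limit can be inspected directly in sysfs; the device
name below is just an example, and a value of 0 means the device does not
support write-zeroes offload:

cat /sys/block/dm-5/queue/write_zeroes_max_bytes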

Test with an empty 100 GiB volume:

time /usr/bin/cstream -i /dev/zero -o /dev/test/vm-100-disk-0.qcow2 -T 10 -v 1 -b 1048576

13561233408 B 12.6 GB 10.00 s 1356062979 B/s 1.26 GB/s
26021462016 B 24.2 GB 20.00 s 1301029969 B/s 1.21 GB/s
38585499648 B 35.9 GB 30.00 s 1286135343 B/s 1.20 GB/s
50998542336 B 47.5 GB 40.00 s 1274925312 B/s 1.19 GB/s
63702765568 B 59.3 GB 50.00 s 1274009877 B/s 1.19 GB/s
76721885184 B 71.5 GB 60.00 s 1278640698 B/s 1.19 GB/s
89126539264 B 83.0 GB 70.00 s 1273178488 B/s 1.19 GB/s
101666459648 B 94.7 GB 80.00 s 1270779024 B/s 1.18 GB/s
107390959616 B 100.0 GB 84.39 s 1272531142 B/s 1.19 GB/s
write: No space left on device

real    1m24.394s
user    0m0.171s
sys     1m24.052s

time blkdiscard --zeroout /dev/test/vm-100-disk-0.qcow2 -v
/dev/test/vm-100-disk-0.qcow2: Zero-filled 107390959616 bytes from the offset 0

real    0m3.641s
user    0m0.001s
sys     0m3.433s

Test with a 100 GiB volume filled with random data:

time blkdiscard --zeroout /dev/test/vm-112-disk-1 -v

/dev/test/vm-112-disk-1: Zero-filled 4764729344 bytes from the offset 0
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 4764729344
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 9428795392
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 14260633600
/dev/test/vm-112-disk-1: Zero-filled 4831838208 bytes from the offset 19092471808
/dev/test/vm-112-disk-1: Zero-filled 4865392640 bytes from the offset 23924310016
/dev/test/vm-112-disk-1: Zero-filled 4596957184 bytes from the offset 28789702656
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 33386659840
/dev/test/vm-112-disk-1: Zero-filled 4294967296 bytes from the offset 38117834752
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 42412802048
/dev/test/vm-112-disk-1: Zero-filled 4697620480 bytes from the offset 47076868096
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 51774488576
/dev/test/vm-112-disk-1: Zero-filled 4261412864 bytes from the offset 56438554624
/dev/test/vm-112-disk-1: Zero-filled 4362076160 bytes from the offset 60699967488
/dev/test/vm-112-disk-1: Zero-filled 4127195136 bytes from the offset 65062043648
/dev/test/vm-112-disk-1: Zero-filled 4328521728 bytes from the offset 69189238784
/dev/test/vm-112-disk-1: Zero-filled 4731174912 bytes from the offset 73517760512
/dev/test/vm-112-disk-1: Zero-filled 4026531840 bytes from the offset 78248935424
/dev/test/vm-112-disk-1: Zero-filled 4194304000 bytes from the offset 82275467264
/dev/test/vm-112-disk-1: Zero-filled 4664066048 bytes from the offset 86469771264
/dev/test/vm-112-disk-1: Zero-filled 4395630592 bytes from the offset 91133837312
/dev/test/vm-112-disk-1: Zero-filled 3623878656 bytes from the offset 95529467904
/dev/test/vm-112-disk-1: Zero-filled 4462739456 bytes from the offset 99153346560
/dev/test/vm-112-disk-1: Zero-filled 3758096384 bytes from the offset 103616086016

real    0m23.969s
user    0m0.030s
sys     0m0.144s
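
Summary: wiping the empty 100 GiB volume takes ~84 s with cstream versus
~3.6 s with blkdiscard --zeroout, and even fully random data is wiped in
~24 s.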

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
 src/PVE/Storage/LVMPlugin.pm | 76 ++++++++++++++++++++++++++++++------
 1 file changed, 65 insertions(+), 11 deletions(-)

diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 0416c9e..1eeeec0 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -3,10 +3,11 @@ package PVE::Storage::LVMPlugin;
 use strict;
 use warnings;
 
+use Cwd 'abs_path';
 use File::Basename;
 use IO::File;
 
-use PVE::Tools qw(run_command trim);
+use PVE::Tools qw(run_command file_read_firstline trim);
 use PVE::Storage::Plugin;
 use PVE::JSONSchema qw(get_standard_option);
 
@@ -284,23 +285,40 @@ my sub free_lvm_volumes {
 
     my $vg = $scfg->{vgname};
 
-    # we need to zero out LVM data for security reasons
-    # and to allow thin provisioning
-    my $zero_out_worker = sub {
-        # wipe throughput up to 10MB/s by default; may be overwritten with saferemove_throughput
-        my $throughput = '-10485760';
-        if ($scfg->{saferemove_throughput}) {
-            $throughput = $scfg->{saferemove_throughput};
+    my $secure_delete_cmd = sub {
+        my ($lvmpath) = @_;
+
+        my $stepsize = $scfg->{'saferemove-stepsize'} // 32;
+        $stepsize = $stepsize * 1024 * 1024;
+
+        my $bdev = abs_path($lvmpath);
+
+        my $sysdir = undef;
+        if ($bdev && $bdev =~ m|^/dev/(dm-\d+)|) {
+            $sysdir = "/sys/block/$1";
+        } else {
+            warn "skip secure delete. lvm volume don't seem to be activated\n";
+            return;
         }
-        for my $name (@$volnames) {
-            print "zero-out data on image $name (/dev/$vg/del-$name)\n";
+
+        my $write_zeroes_max_bytes =
+            file_read_firstline("$sysdir/queue/write_zeroes_max_bytes") // 0;
+        ($write_zeroes_max_bytes) = $write_zeroes_max_bytes =~ m/^(\d+)$/; #untaint
+
+        if ($write_zeroes_max_bytes == 0) {
+            # if the storage does not support write zeroes, fall back to cstream
+            # wipe throughput up to 10MB/s by default; may be overwritten with saferemove_throughput
+            my $throughput = '-10485760';
+            if ($scfg->{saferemove_throughput}) {
+                $throughput = $scfg->{saferemove_throughput};
+            }
 
             my $cmd = [
                 '/usr/bin/cstream',
                 '-i',
                 '/dev/zero',
                 '-o',
-                "/dev/$vg/del-$name",
+                $lvmpath,
                 '-T',
                 '10',
                 '-v',
@@ -318,6 +336,34 @@ my sub free_lvm_volumes {
             };
             warn $@ if $@;
 
+        } else {
+
+            # if the storage supports write_zeroes but the step size is too big,
+            # reduce it to the maximum possible
+            if ($write_zeroes_max_bytes > 0 && $stepsize > $write_zeroes_max_bytes) {
+                warn "reduce stepsize to the maximum supported by the storage:"
+                    . "$write_zeroes_max_bytes bytes\n";
+
+                $stepsize = $write_zeroes_max_bytes;
+            }
+
+            my $cmd =
+                ['/usr/sbin/blkdiscard', $lvmpath, '-v', '--zeroout', '--step', "${stepsize}"];
+
+            eval { run_command($cmd); };
+            warn $@ if $@;
+        }
+    };
+
+    # we need to zero out LVM data for security reasons
+    # and to allow thin provisioning
+    my $zero_out_worker = sub {
+        for my $name (@$volnames) {
+            my $lvmpath = "/dev/$vg/del-$name";
+            print "zero-out data on image $name ($lvmpath)\n";
+
+            $secure_delete_cmd->($lvmpath);
+
             $class->cluster_lock_storage(
                 $storeid,
                 $scfg->{shared},
@@ -376,6 +422,13 @@ sub properties {
             description => "Zero-out data when removing LVs.",
             type => 'boolean',
         },
+        'saferemove-stepsize' => {
+            description => "Wipe step size in MiB."
+                . "It will be capped to the maximum storage support.",
+            default => 32,
+            enum => [qw(1 2 4 8 16 32)],
+            type => 'integer',
+        },
         saferemove_throughput => {
             description => "Wipe throughput (cstream -t parameter value).",
             type => 'string',
@@ -394,6 +447,7 @@ sub options {
         shared => { optional => 1 },
         disable => { optional => 1 },
         saferemove => { optional => 1 },
+        'saferemove-stepsize' => { optional => 1 },
         saferemove_throughput => { optional => 1 },
         content => { optional => 1 },
         base => { fixed => 1, optional => 1 },
-- 
2.47.3

From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH v5 pve-storage 2/2] fix #6941: lvmplugin: fix volume activation of raw disk before secure delete
Date: Thu, 23 Oct 2025 14:23:31 +0200
Message-ID: <20251023122331.477027-3-alexandre.derumier@groupe-cyllene.com>

The volume activation before the secure delete was lost in the qcow2 snapshot
implementation in commit eda88c94ed150e61bc60a89037d37b320a31a9d4.

This re-adds the activation just before the delete, to make sure we do not
write zeros to a non-existent /dev/... path (which would end up in memory
instead of on the device).
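
The re-added activation (see the diff below) boils down to these two
commands, with <vg> and <name> standing in for the actual volume group
and LV name:

lvchange -aly /dev/<vg>/del-<name>
lvchange --refresh /dev/<vg>/del-<name>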

Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
 src/PVE/Storage/LVMPlugin.pm | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 1eeeec0..428d28c 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -362,6 +362,17 @@ my sub free_lvm_volumes {
             my $lvmpath = "/dev/$vg/del-$name";
             print "zero-out data on image $name ($lvmpath)\n";
 
+            my $cmd_activate = ['/sbin/lvchange', '-aly', $lvmpath];
+            run_command(
+                $cmd_activate,
+                errmsg => "can't activate LV '$lvmpath' to zero-out its data",
+            );
+            $cmd_activate = ['/sbin/lvchange', '--refresh', $lvmpath];
+            run_command(
+                $cmd_activate,
+                errmsg => "can't refresh LV '$lvmpath' to zero-out its data",
+            );
+
             $secure_delete_cmd->($lvmpath);
 
             $class->cluster_lock_storage(
@@ -737,13 +748,6 @@ my sub alloc_snap_image {
 my sub free_snap_image {
     my ($class, $storeid, $scfg, $volname, $snap) = @_;
 
-    #activate only the snapshot volume
-    my $path = $class->path($scfg, $volname, $storeid, $snap);
-    my $cmd = ['/sbin/lvchange', '-aly', $path];
-    run_command($cmd, errmsg => "can't activate LV '$path' to zero-out its data");
-    $cmd = ['/sbin/lvchange', '--refresh', $path];
-    run_command($cmd, errmsg => "can't refresh LV '$path' to zero-out its data");
-
     my $snap_volname = get_snap_name($class, $volname, $snap);
     return free_lvm_volumes($class, $scfg, $storeid, [$snap_volname]);
 }
@@ -756,14 +760,8 @@ sub free_image {
     my $volnames = [$volname];
 
     if ($format eq 'qcow2') {
-        #activate volumes && snapshot volumes
-        my $path = $class->path($scfg, $volname, $storeid);
-        $path = "\@pve-$name" if $format && $format eq 'qcow2';
-        my $cmd = ['/sbin/lvchange', '-aly', $path];
-        run_command($cmd, errmsg => "can't activate LV '$path' to zero-out its data");
-        $cmd = ['/sbin/lvchange', '--refresh', $path];
-        run_command($cmd, errmsg => "can't refresh LV '$path' to zero-out its data");
-
+        # activate the volume to read the snapshot chain
+        $class->activate_volume($storeid, $scfg, $volname);
         my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
         for my $snapid (
             sort { $snapshots->{$a}->{order} <=> $snapshots->{$b}->{order} }
-- 
2.47.3