public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
@ 2025-07-09 16:21 Alexandre Derumier via pve-devel
  2025-07-10 15:09 ` [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent Fabian Grünbichler
  2025-07-10 15:13 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Fabian Grünbichler
  0 siblings, 2 replies; 28+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:21 UTC (permalink / raw)
  To: pve-devel; +Cc: Alexandre Derumier


From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Wed,  9 Jul 2025 18:21:45 +0200
Message-ID: <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>

This patch series implements qcow2 external snapshot support for file-based and LVM volumes.

The current internal qcow2 snapshots have poor write performance because metadata cannot be preallocated.

This is particularly visible on shared filesystems like OCFS2 or GFS2.

Users have also reported freezes/locks for years when deleting snapshots on NFS
(disk access appears to be frozen for the whole duration of the delete).

This also opens the door to remote snapshot export/import for storage replication.
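As context, an external qcow2 snapshot is a new overlay image whose backing file is the previous image. A minimal sketch of the flow for directory storages, with hypothetical volume names (`mv` stands in for the series' rename_snapshot helper):

```shell
# External snapshot sketch (hypothetical names; 'mv' stands in for rename_snapshot).
# 1. the current image is renamed to the snapshot name
# 2. a fresh qcow2 overlay with the original name is created on top of it
current=vm-100-disk-0.qcow2
snap=snap_vm-100-disk-0_test.qcow2
rename_cmd="mv $current $snap"
create_cmd="qemu-img create -f qcow2 -F qcow2 -b $snap $current"
echo "$rename_cmd"
echo "$create_cmd"
```

The VM then writes only to the new overlay while the renamed image stays read-only, which is what allows deleting or exporting individual snapshots without touching the active image.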

Changelog v8:
    storage:
     - address Fabian's review comments
     - add rename_snapshot
     - add qemu_resize
     - plugin: restrict volnames
     - plugin: use 'external-snapshots' instead of 'snapext'
    qemu-server:
     - fix EFI tests that used the wrong volname vm-disk-100-0.raw
     - use rename_snapshot
MAIN TODO:
    - add snapshot tests in both pve-storage and qemu-server
    - better handle snapshot failures with multiple disks


pve-storage:

Alexandre Derumier (13):
  plugin: add qemu_img_create
  plugin: add qemu_img_create_qcow2_backed
  plugin: add qemu_img_info
  plugin: add qemu_img_measure
  plugin: add qemu_img_resize
  rbd && zfs : create_base : remove $running param from volume_snapshot
  storage: volume_snapshot: add $running param
  storage: add rename_snapshot method
  storage: add volume_support_qemu_snapshot
  plugin: fix volname parsing
  qcow2: add external snapshot support
  lvmplugin: add qcow2 snapshot
  tests: add lvmplugin test

 ApiChangeLog                         |  15 +
 src/PVE/Storage.pm                   |  45 ++-
 src/PVE/Storage/BTRFSPlugin.pm       |   6 +
 src/PVE/Storage/CIFSPlugin.pm        |   1 +
 src/PVE/Storage/Common.pm            |  33 ++
 src/PVE/Storage/DirPlugin.pm         |  11 +
 src/PVE/Storage/ESXiPlugin.pm        |   8 +-
 src/PVE/Storage/ISCSIDirectPlugin.pm |   2 +-
 src/PVE/Storage/LVMPlugin.pm         | 571 +++++++++++++++++++++-----
 src/PVE/Storage/LvmThinPlugin.pm     |   8 +-
 src/PVE/Storage/NFSPlugin.pm         |   1 +
 src/PVE/Storage/PBSPlugin.pm         |   2 +-
 src/PVE/Storage/Plugin.pm            | 518 +++++++++++++++++++++---
 src/PVE/Storage/RBDPlugin.pm         |  18 +-
 src/PVE/Storage/ZFSPlugin.pm         |   4 +-
 src/PVE/Storage/ZFSPoolPlugin.pm     |   8 +-
 src/test/Makefile                    |   5 +-
 src/test/run_test_lvmplugin.pl       | 577 +++++++++++++++++++++++++++
 18 files changed, 1662 insertions(+), 171 deletions(-)
 create mode 100755 src/test/run_test_lvmplugin.pl

qemu-server:

Alexandre Derumier (4):
  qemu_img convert : add external snapshot support
  blockdev: add backing_chain support
  qcow2: add external snapshot support
  tests: fix efi vm-disk-100-0.raw -> vm-100-disk-0.raw

 src/PVE/QemuConfig.pm                         |   4 +-
 src/PVE/QemuServer.pm                         | 132 +++++--
 src/PVE/QemuServer/Blockdev.pm                | 345 +++++++++++++++++-
 src/PVE/QemuServer/QemuImage.pm               |   6 +-
 src/test/cfg2cmd/efi-raw-old.conf             |   2 +-
 src/test/cfg2cmd/efi-raw-old.conf.cmd         |   2 +-
 src/test/cfg2cmd/efi-raw-template.conf        |   2 +-
 src/test/cfg2cmd/efi-raw-template.conf.cmd    |   2 +-
 src/test/cfg2cmd/efi-raw.conf                 |   2 +-
 src/test/cfg2cmd/efi-raw.conf.cmd             |   2 +-
 src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf |   2 +-
 .../cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd  |   2 +-
 src/test/cfg2cmd/efi-secboot-and-tpm.conf     |   2 +-
 src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd |   2 +-
 src/test/cfg2cmd/sev-es.conf                  |   2 +-
 src/test/cfg2cmd/sev-es.conf.cmd              |   2 +-
 src/test/cfg2cmd/sev-std.conf                 |   2 +-
 src/test/cfg2cmd/sev-std.conf.cmd             |   2 +-
 src/test/cfg2cmd/simple-backingchain.conf     |  25 ++
 src/test/cfg2cmd/simple-backingchain.conf.cmd |  33 ++
 src/test/run_config2command_tests.pl          |  47 +++
 src/test/run_qemu_img_convert_tests.pl        |  59 +++
 src/test/snapshot-test.pm                     |   4 +-
 23 files changed, 634 insertions(+), 49 deletions(-)
 create mode 100644 src/test/cfg2cmd/simple-backingchain.conf
 create mode 100644 src/test/cfg2cmd/simple-backingchain.conf.cmd


-- 
2.39.5




_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent
  2025-07-09 16:21 [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Alexandre Derumier via pve-devel
@ 2025-07-10 15:09 ` Fabian Grünbichler
  2025-07-10 15:09   ` [pve-devel] [PATCH FOLLOW-UP storage 2/3] helpers: move qemu_img* to Common module Fabian Grünbichler
                     ` (3 more replies)
  2025-07-10 15:13 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Fabian Grünbichler
  1 sibling, 4 replies; 28+ messages in thread
From: Fabian Grünbichler @ 2025-07-10 15:09 UTC (permalink / raw)
  To: pve-devel

by moving the preallocation handling to the call sites, and by preparing
the helpers to take further options, such as cluster size, in the future.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/PVE/Storage/LVMPlugin.pm | 10 ++++--
 src/PVE/Storage/Plugin.pm    | 66 ++++++++++++++++++++++--------------
 2 files changed, 48 insertions(+), 28 deletions(-)

diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index a91801f..f90df18 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -579,11 +579,14 @@ my sub lvm_qcow2_format {
     $class->activate_volume($storeid, $scfg, $name);
     my $path = $class->path($scfg, $name, $storeid);
 
+    my $options = {
+        preallocation => PVE::Storage::Plugin::preallocation_cmd_opt($scfg, $fmt),
+    };
     if ($backing_snap) {
         my $backing_path = $class->path($scfg, $name, $storeid, $backing_snap);
-        PVE::Storage::Plugin::qemu_img_create_qcow2_backed($scfg, $path, $backing_path, $fmt);
+        PVE::Storage::Plugin::qemu_img_create_qcow2_backed($path, $backing_path, $fmt, $options);
     } else {
-        PVE::Storage::Plugin::qemu_img_create($scfg, $fmt, $size, $path);
+        PVE::Storage::Plugin::qemu_img_create($fmt, $size, $path, $options);
     }
 }
 
@@ -868,7 +871,8 @@ sub volume_resize {
     );
 
     if (!$running && $format eq 'qcow2') {
-        PVE::Storage::Plugin::qemu_img_resize($scfg, $path, $format, $size, 10);
+        my $preallocation = PVE::Storage::Plugin::preallocation_cmd_opt($scfg, $fmt);
+        PVE::Storage::Plugin::qemu_img_resize($path, $format, $size, $preallocation, 10);
     }
 
     return 1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 0b7989b..6eb91b7 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -615,7 +615,7 @@ sub parse_config {
     return $cfg;
 }
 
-sub preallocation_cmd_option {
+sub preallocation_cmd_opt {
     my ($scfg, $fmt) = @_;
 
     my $prealloc = $scfg->{preallocation};
@@ -626,7 +626,7 @@ sub preallocation_cmd_option {
         die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
             if !$QCOW2_PREALLOCATION->{$prealloc};
 
-        return "preallocation=$prealloc";
+        return $prealloc;
     } elsif ($fmt eq 'raw') {
         $prealloc = $prealloc // 'off';
         $prealloc = 'off' if $prealloc eq 'metadata';
@@ -634,7 +634,7 @@ sub preallocation_cmd_option {
         die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
             if !$RAW_PREALLOCATION->{$prealloc};
 
-        return "preallocation=$prealloc";
+        return $prealloc;
     }
 
     return;
@@ -644,19 +644,21 @@ sub preallocation_cmd_option {
 
 =head3 qemu_img_create
 
-    qemu_img_create($scfg, $fmt, $size, $path)
+    qemu_img_create($fmt, $size, $path, $options)
 
 Create a new qemu image with a specific format C<$format> and size C<$size> for a target C<$path>.
 
+C<$options> currently allows setting the C<preallocation> value
+
 =cut
 
 sub qemu_img_create {
-    my ($scfg, $fmt, $size, $path) = @_;
+    my ($fmt, $size, $path, $options) = @_;
 
     my $cmd = ['/usr/bin/qemu-img', 'create'];
 
-    my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
-    push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
+    push @$cmd, '-o', "preallocation=$options->{preallocation}"
+        if defined($options->{preallocation});
 
     push @$cmd, '-f', $fmt, $path, "${size}K";
 
@@ -667,14 +669,16 @@ sub qemu_img_create {
 
 =head3 qemu_img_create_qcow2_backed
 
-    qemu_img_create_qcow2_backed($scfg, $path, $backing_path, $backing_format)
+    qemu_img_create_qcow2_backed($path, $backing_path, $backing_format, $options)
 
 Create a new qemu qcow2 image C<$path> using an existing backing image C<$backing_path> with backing_format C<$backing_format>.
 
+C<$options> currently allows setting the C<preallocation> value.
+
 =cut
 
 sub qemu_img_create_qcow2_backed {
-    my ($scfg, $path, $backing_path, $backing_format) = @_;
+    my ($path, $backing_path, $backing_format, $options) = @_;
 
     my $cmd = [
         '/usr/bin/qemu-img',
@@ -688,10 +692,10 @@ sub qemu_img_create_qcow2_backed {
         $path,
     ];
 
-    my $options = $QCOW2_CLUSTERS->{backed};
+    my $opts = $QCOW2_CLUSTERS->{backed};
 
-    push @$options, preallocation_cmd_option($scfg, 'qcow2');
-    push @$cmd, '-o', join(',', @$options) if @$options > 0;
+    push @$opts, $options->{preallocation} if defined($options->{preallocation});
+    push @$cmd, '-o', join(',', @$opts) if @$opts > 0;
 
     run_command($cmd, errmsg => "unable to create image");
 }
@@ -722,20 +726,21 @@ sub qemu_img_info {
 
 =head3 qemu_img_measure
 
-    qemu_img_measure($size, $fmt, $timeout, $is_backed)
+    qemu_img_measure($size, $fmt, $timeout, $options)
 
 Returns a json with the maximum size including all metadatas overhead for an image with format C<$fmt> and original size C<$size>Kb.
-If the image is backed C<$is_backed>, we use different cluster size informations.
+
+C<$options> allows specifying qemu-img options that might affect the sizing calculation, such as cluster size.
+
 =cut
 
 sub qemu_img_measure {
-    my ($size, $fmt, $timeout, $is_backed) = @_;
+    my ($size, $fmt, $timeout, $options) = @_;
 
     die "format is missing" if !$fmt;
 
     my $cmd = ['/usr/bin/qemu-img', 'measure', '--output=json', '--size', "${size}K", '-O', $fmt];
-    if ($is_backed) {
-        my $options = $QCOW2_CLUSTERS->{backed};
+    if ($options) {
         push $cmd->@*, '-o', join(',', @$options) if @$options > 0;
     }
     return PVE::Storage::Common::run_qemu_img_json($cmd, $timeout);
@@ -745,20 +750,21 @@ sub qemu_img_measure {
 
 =head3 qemu_img_resize
 
-    qemu_img_resize($scfg, $path, $format, $size, $timeout)
+    qemu_img_resize($scfg, $path, $format, $size, $preallocation, $timeout)
 
 Resize a qemu image C<$path> with format C<$format> to a target Kb size C<$size>.
 Default timeout C<$timeout> is 10s if not specified.
+C<$preallocation> allows to specify the preallocation option for the resize operation.
+
 =cut
 
 sub qemu_img_resize {
-    my ($scfg, $path, $format, $size, $timeout) = @_;
+    my ($scfg, $path, $format, $size, $preallocation, $timeout) = @_;
 
     die "format is missing" if !$format;
 
-    my $prealloc_opt = preallocation_cmd_option($scfg, $format);
     my $cmd = ['/usr/bin/qemu-img', 'resize'];
-    push $cmd->@*, "--$prealloc_opt" if $prealloc_opt;
+    push $cmd->@*, "--preallocation=$preallocation" if $preallocation;
     push $cmd->@*, '-f', $format, $path, $size;
 
     $timeout = 10 if !$timeout;
@@ -1067,7 +1073,10 @@ sub clone_image {
     # Note: we use relative paths, so we need to call chdir before qemu-img
     eval {
         local $CWD = $imagedir;
-        qemu_img_create_qcow2_backed($scfg, $path, "../$basevmid/$basename", $format);
+        my $options = {
+            preallocation => preallocation_cmd_opt($scfg, $format),
+        };
+        qemu_img_create_qcow2_backed($path, "../$basevmid/$basename", $format, $options);
     };
     my $err = $@;
 
@@ -1105,7 +1114,10 @@ sub alloc_image {
         umask $old_umask;
         die $err if $err;
     } else {
-        eval { qemu_img_create($scfg, $fmt, $size, $path) };
+        my $preallocation = preallocation_cmd_opt($scfg, $fmt);
+        my $options = {};
+        $options->{preallocation} = $preallocation if $preallocation;
+        eval { qemu_img_create($fmt, $size, $path, $options) };
         if ($@) {
             unlink $path;
             rmdir $imagedir;
@@ -1122,9 +1134,12 @@ my sub alloc_backed_image {
     my $path = $class->path($scfg, $volname, $storeid);
     my ($vmid, $backing_format) = ($class->parse_volname($volname))[2, 6];
 
+    my $preallocation = preallocation_cmd_opt($scfg, $backing_format);
+    my $options = {};
+    $options->{preallocation} = $preallocation if $preallocation;
     my $backing_volname = get_snap_name($class, $volname, $backing_snap);
     #qemu_img use relative path from base image for the backing_volname by default
-    eval { qemu_img_create_qcow2_backed($scfg, $path, $backing_volname, $backing_format) };
+    eval { qemu_img_create_qcow2_backed($path, $backing_volname, $backing_format, $options) };
     if ($@) {
         unlink $path;
         die "$@";
@@ -1371,7 +1386,8 @@ sub volume_resize {
 
     my $format = ($class->parse_volname($volname))[6];
 
-    qemu_img_resize($scfg, $path, $format, $size, 10);
+    my $preallocation = preallocation_cmd_opt($scfg, $format);
+    qemu_img_resize($path, $format, $size, $preallocation, 10);
 
     return undef;
 }
-- 
2.39.5
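To illustrate the refactor above with a sketch (assumed example values, not the actual Perl code): the caller now resolves the preallocation mode from the storage config via preallocation_cmd_opt() and passes only the bare value, while the helpers format the qemu-img arguments themselves:

```shell
# Assumed value: what preallocation_cmd_opt() might return for a qcow2 volume.
prealloc=metadata
create_opts=""
[ -n "$prealloc" ] && create_opts="-o preallocation=$prealloc "
# qemu_img_create formats '-o preallocation=...'; qemu_img_resize uses the
# long '--preallocation=...' flag, matching the two helpers in the diff.
create_cmd="qemu-img create ${create_opts}-f qcow2 vm-100-disk-1.qcow2 10485760K"
resize_cmd="qemu-img resize --preallocation=$prealloc -f qcow2 vm-100-disk-1.qcow2 20971520K"
echo "$create_cmd"
echo "$resize_cmd"
```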




* [pve-devel] [PATCH FOLLOW-UP storage 2/3] helpers: move qemu_img* to Common module
  2025-07-10 15:09 ` [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent Fabian Grünbichler
@ 2025-07-10 15:09   ` Fabian Grünbichler
  2025-07-10 15:09   ` [pve-devel] [PATCH FOLLOW-UP storage 3/3] rename_snapshot: fix parameter checks Fabian Grünbichler
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 28+ messages in thread
From: Fabian Grünbichler @ 2025-07-10 15:09 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/PVE/Storage/Common.pm    | 147 ++++++++++++++++++++++++++++++---
 src/PVE/Storage/LVMPlugin.pm |  10 +--
 src/PVE/Storage/Plugin.pm    | 153 +++--------------------------------
 3 files changed, 152 insertions(+), 158 deletions(-)

diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index aa89e68..71d123a 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -111,18 +111,7 @@ sub deallocate : prototype($$$) {
     }
 }
 
-=pod
-
-=head3 run_qemu_img_json
-
-    run_qemu_img_json($cmd, $timeout)
-
-Execute qemu_img command C<$cmd> with a timeout C<$timeout>.
-Parse the output result and return a json.
-
-=cut
-
-sub run_qemu_img_json {
+my sub run_qemu_img_json {
     my ($cmd, $timeout) = @_;
     my $json = '';
     my $err_output = '';
@@ -143,4 +132,138 @@ sub run_qemu_img_json {
     }
     return $json;
 }
+
+=pod
+
+=head3 qemu_img_create
+
+    qemu_img_create($fmt, $size, $path, $options)
+
+Create a new qemu image with a specific format C<$format> and size C<$size> for a target C<$path>.
+
+C<$options> currently allows setting the C<preallocation> value
+
+=cut
+
+sub qemu_img_create {
+    my ($fmt, $size, $path, $options) = @_;
+
+    my $cmd = ['/usr/bin/qemu-img', 'create'];
+
+    push @$cmd, '-o', "preallocation=$options->{preallocation}"
+        if defined($options->{preallocation});
+
+    push @$cmd, '-f', $fmt, $path, "${size}K";
+
+    run_command($cmd, errmsg => "unable to create image");
+}
+
+=pod
+
+=head3 qemu_img_create_qcow2_backed
+
+    qemu_img_create_qcow2_backed($path, $backing_path, $backing_format, $options)
+
+Create a new qemu qcow2 image C<$path> using an existing backing image C<$backing_path> with backing_format C<$backing_format>.
+
+C<$options> currently allows setting the C<preallocation> value.
+
+=cut
+
+sub qemu_img_create_qcow2_backed {
+    my ($path, $backing_path, $backing_format, $options) = @_;
+
+    my $cmd = [
+        '/usr/bin/qemu-img',
+        'create',
+        '-F',
+        $backing_format,
+        '-b',
+        $backing_path,
+        '-f',
+        'qcow2',
+        $path,
+    ];
+
+    # TODO make this configurable for all volumes/types and pass in via $options
+    my $opts = ['extended_l2=on', 'cluster_size=128k'];
+
+    push @$opts, "preallocation=$options->{preallocation}"
+        if defined($options->{preallocation});
+    push @$cmd, '-o', join(',', @$opts) if @$opts > 0;
+
+    run_command($cmd, errmsg => "unable to create image");
+}
+
+=pod
+
+=head3 qemu_img_info
+
+    qemu_img_info($filename, $file_format, $timeout, $follow_backing_files)
+
+Returns a json with qemu image C<$filename> informations with format <$file_format>.
+If C<$follow_backing_files> option is defined, return a json with the whole chain
+of backing files images.
+
+=cut
+
+sub qemu_img_info {
+    my ($filename, $file_format, $timeout, $follow_backing_files) = @_;
+
+    my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
+    push $cmd->@*, '-f', $file_format if $file_format;
+    push $cmd->@*, '--backing-chain' if $follow_backing_files;
+
+    return run_qemu_img_json($cmd, $timeout);
+}
+
+=pod
+
+=head3 qemu_img_measure
+
+    qemu_img_measure($size, $fmt, $timeout, $options)
+
+Returns a json with the maximum size including all metadatas overhead for an image with format C<$fmt> and original size C<$size>Kb.
+
+C<$options> allows specifying qemu-img options that might affect the sizing calculation, such as cluster size.
+
+=cut
+
+sub qemu_img_measure {
+    my ($size, $fmt, $timeout, $options) = @_;
+
+    die "format is missing" if !$fmt;
+
+    my $cmd = ['/usr/bin/qemu-img', 'measure', '--output=json', '--size', "${size}K", '-O', $fmt];
+    if ($options) {
+        push $cmd->@*, '-o', join(',', @$options) if @$options > 0;
+    }
+    return run_qemu_img_json($cmd, $timeout);
+}
+
+=pod
+
+=head3 qemu_img_resize
+
+    qemu_img_resize($scfg, $path, $format, $size, $preallocation, $timeout)
+
+Resize a qemu image C<$path> with format C<$format> to a target Kb size C<$size>.
+Default timeout C<$timeout> is 10s if not specified.
+C<$preallocation> allows to specify the preallocation option for the resize operation.
+
+=cut
+
+sub qemu_img_resize {
+    my ($scfg, $path, $format, $size, $preallocation, $timeout) = @_;
+
+    die "format is missing" if !$format;
+
+    my $cmd = ['/usr/bin/qemu-img', 'resize'];
+    push $cmd->@*, "--preallocation=$preallocation" if $preallocation;
+    push $cmd->@*, '-f', $format, $path, $size;
+
+    $timeout = 10 if !$timeout;
+    run_command($cmd, timeout => $timeout);
+}
+
 1;
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index f90df18..4b60e32 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -584,9 +584,9 @@ my sub lvm_qcow2_format {
     };
     if ($backing_snap) {
         my $backing_path = $class->path($scfg, $name, $storeid, $backing_snap);
-        PVE::Storage::Plugin::qemu_img_create_qcow2_backed($path, $backing_path, $fmt, $options);
+        PVE::Storage::Common::qemu_img_create_qcow2_backed($path, $backing_path, $fmt, $options);
     } else {
-        PVE::Storage::Plugin::qemu_img_create($fmt, $size, $path, $options);
+        PVE::Storage::Common::qemu_img_create($fmt, $size, $path, $options);
     }
 }
 
@@ -598,7 +598,7 @@ my sub calculate_lvm_size {
 
     my $options = $backing_snap ? ['extended_l2=on', 'cluster_size=128k'] : [];
 
-    my $json = PVE::Storage::Plugin::qemu_img_measure($size, $fmt, 5, $options);
+    my $json = PVE::Storage::Common::qemu_img_measure($size, $fmt, 5, $options);
     die "failed to query file information with qemu-img measure\n" if !$json;
     my $info = eval { decode_json($json) };
     if ($@) {
@@ -871,8 +871,8 @@ sub volume_resize {
     );
 
     if (!$running && $format eq 'qcow2') {
-        my $preallocation = PVE::Storage::Plugin::preallocation_cmd_opt($scfg, $fmt);
-        PVE::Storage::Plugin::qemu_img_resize($path, $format, $size, $preallocation, 10);
+        my $preallocation = PVE::Storage::Plugin::preallocation_cmd_opt($scfg, $format);
+        PVE::Storage::Common::qemu_img_resize($path, $format, $size, $preallocation, 10);
     }
 
     return 1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 6eb91b7..aee145f 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -51,10 +51,6 @@ our $RAW_PREALLOCATION = {
     full => 1,
 };
 
-my $QCOW2_CLUSTERS = {
-    backed => ['extended_l2=on', 'cluster_size=128k'],
-};
-
 our $MAX_VOLUMES_PER_GUEST = 1024;
 
 cfs_register_file(
@@ -640,137 +636,6 @@ sub preallocation_cmd_opt {
     return;
 }
 
-=pod
-
-=head3 qemu_img_create
-
-    qemu_img_create($fmt, $size, $path, $options)
-
-Create a new qemu image with a specific format C<$format> and size C<$size> for a target C<$path>.
-
-C<$options> currently allows setting the C<preallocation> value
-
-=cut
-
-sub qemu_img_create {
-    my ($fmt, $size, $path, $options) = @_;
-
-    my $cmd = ['/usr/bin/qemu-img', 'create'];
-
-    push @$cmd, '-o', "preallocation=$options->{preallocation}"
-        if defined($options->{preallocation});
-
-    push @$cmd, '-f', $fmt, $path, "${size}K";
-
-    run_command($cmd, errmsg => "unable to create image");
-}
-
-=pod
-
-=head3 qemu_img_create_qcow2_backed
-
-    qemu_img_create_qcow2_backed($path, $backing_path, $backing_format, $options)
-
-Create a new qemu qcow2 image C<$path> using an existing backing image C<$backing_path> with backing_format C<$backing_format>.
-
-C<$options> currently allows setting the C<preallocation> value.
-
-=cut
-
-sub qemu_img_create_qcow2_backed {
-    my ($path, $backing_path, $backing_format, $options) = @_;
-
-    my $cmd = [
-        '/usr/bin/qemu-img',
-        'create',
-        '-F',
-        $backing_format,
-        '-b',
-        $backing_path,
-        '-f',
-        'qcow2',
-        $path,
-    ];
-
-    my $opts = $QCOW2_CLUSTERS->{backed};
-
-    push @$opts, $options->{preallocation} if defined($options->{preallocation});
-    push @$cmd, '-o', join(',', @$opts) if @$opts > 0;
-
-    run_command($cmd, errmsg => "unable to create image");
-}
-
-=pod
-
-=head3 qemu_img_info
-
-    qemu_img_info($filename, $file_format, $timeout, $follow_backing_files)
-
-Returns a json with qemu image C<$filename> informations with format <$file_format>.
-If C<$follow_backing_files> option is defined, return a json with the whole chain
-of backing files images.
-
-=cut
-
-sub qemu_img_info {
-    my ($filename, $file_format, $timeout, $follow_backing_files) = @_;
-
-    my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
-    push $cmd->@*, '-f', $file_format if $file_format;
-    push $cmd->@*, '--backing-chain' if $follow_backing_files;
-
-    return PVE::Storage::Common::run_qemu_img_json($cmd, $timeout);
-}
-
-=pod
-
-=head3 qemu_img_measure
-
-    qemu_img_measure($size, $fmt, $timeout, $options)
-
-Returns a json with the maximum size including all metadatas overhead for an image with format C<$fmt> and original size C<$size>Kb.
-
-C<$options> allows specifying qemu-img options that might affect the sizing calculation, such as cluster size.
-
-=cut
-
-sub qemu_img_measure {
-    my ($size, $fmt, $timeout, $options) = @_;
-
-    die "format is missing" if !$fmt;
-
-    my $cmd = ['/usr/bin/qemu-img', 'measure', '--output=json', '--size', "${size}K", '-O', $fmt];
-    if ($options) {
-        push $cmd->@*, '-o', join(',', @$options) if @$options > 0;
-    }
-    return PVE::Storage::Common::run_qemu_img_json($cmd, $timeout);
-}
-
-=pod
-
-=head3 qemu_img_resize
-
-    qemu_img_resize($scfg, $path, $format, $size, $preallocation, $timeout)
-
-Resize a qemu image C<$path> with format C<$format> to a target Kb size C<$size>.
-Default timeout C<$timeout> is 10s if not specified.
-C<$preallocation> allows to specify the preallocation option for the resize operation.
-
-=cut
-
-sub qemu_img_resize {
-    my ($scfg, $path, $format, $size, $preallocation, $timeout) = @_;
-
-    die "format is missing" if !$format;
-
-    my $cmd = ['/usr/bin/qemu-img', 'resize'];
-    push $cmd->@*, "--preallocation=$preallocation" if $preallocation;
-    push $cmd->@*, '-f', $format, $path, $size;
-
-    $timeout = 10 if !$timeout;
-    run_command($cmd, timeout => $timeout);
-}
-
 # Storage implementation
 
 # called during addition of storage (before the new storage config got written)
@@ -1076,7 +941,9 @@ sub clone_image {
         my $options = {
             preallocation => preallocation_cmd_opt($scfg, $format),
         };
-        qemu_img_create_qcow2_backed($path, "../$basevmid/$basename", $format, $options);
+        PVE::Storage::Common::qemu_img_create_qcow2_backed(
+            $path, "../$basevmid/$basename", $format, $options,
+        );
     };
     my $err = $@;
 
@@ -1117,7 +984,7 @@ sub alloc_image {
         my $preallocation = preallocation_cmd_opt($scfg, $fmt);
         my $options = {};
         $options->{preallocation} = $preallocation if $preallocation;
-        eval { qemu_img_create($fmt, $size, $path, $options) };
+        eval { PVE::Storage::Common::qemu_img_create($fmt, $size, $path, $options) };
         if ($@) {
             unlink $path;
             rmdir $imagedir;
@@ -1139,7 +1006,11 @@ my sub alloc_backed_image {
     $options->{preallocation} = $preallocation if $preallocation;
     my $backing_volname = get_snap_name($class, $volname, $backing_snap);
     #qemu_img use relative path from base image for the backing_volname by default
-    eval { qemu_img_create_qcow2_backed($path, $backing_volname, $backing_format, $options) };
+    eval {
+        PVE::Storage::Common::qemu_img_create_qcow2_backed(
+            $path, $backing_volname, $backing_format, $options,
+        );
+    };
     if ($@) {
         unlink $path;
         die "$@";
@@ -1271,7 +1142,7 @@ sub file_size_info {
             "file_size_info: '$filename': falling back to 'raw' from unknown format '$file_format'\n";
         $file_format = 'raw';
     }
-    my $json = qemu_img_info($filename, $file_format, $timeout);
+    my $json = PVE::Storage::Common::qemu_img_info($filename, $file_format, $timeout);
     if (!$json) {
         die "failed to query file information with qemu-img\n" if $untrusted;
         # skip decoding if there was no output, e.g. if there was a timeout.
@@ -1387,7 +1258,7 @@ sub volume_resize {
     my $format = ($class->parse_volname($volname))[6];
 
     my $preallocation = preallocation_cmd_opt($scfg, $format);
-    qemu_img_resize($path, $format, $size, $preallocation, 10);
+    PVE::Storage::Common::qemu_img_resize($path, $format, $size, $preallocation, 10);
 
     return undef;
 }
@@ -1879,7 +1750,7 @@ sub volume_snapshot_info {
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
         $class->parse_volname($volname);
 
-    my $json = qemu_img_info($path, undef, 10, 1);
+    my $json = PVE::Storage::Common::qemu_img_info($path, undef, 10, 1);
     die "failed to query file information with qemu-img\n" if !$json;
     my $json_decode = eval { decode_json($json) };
     if ($@) {
-- 
2.39.5
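The moved qemu_img_measure helper joins its -o option list with commas; for backed images the series passes extended_l2=on,cluster_size=128k (see calculate_lvm_size in the diff). A sketch of the resulting command line:

```shell
# Join the -o option list with commas, as the helper does before invoking qemu-img.
set -- extended_l2=on cluster_size=128k
joined=$(IFS=,; echo "$*")
measure_cmd="qemu-img measure --output=json --size 10485760K -O qcow2 -o $joined"
echo "$measure_cmd"
```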




* [pve-devel] [PATCH FOLLOW-UP storage 3/3] rename_snapshot: fix parameter checks
  2025-07-10 15:09 ` [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent Fabian Grünbichler
  2025-07-10 15:09   ` [pve-devel] [PATCH FOLLOW-UP storage 2/3] helpers: move qemu_img* to Common module Fabian Grünbichler
@ 2025-07-10 15:09   ` Fabian Grünbichler
  2025-07-14  6:34   ` [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent DERUMIER, Alexandre via pve-devel
       [not found]   ` <6b7eaeb6af6af22ebdd034236e9e88bc1b5e1e3f.camel@groupe-cyllene.com>
  3 siblings, 0 replies; 28+ messages in thread
From: Fabian Grünbichler @ 2025-07-10 15:09 UTC (permalink / raw)
  To: pve-devel

both source and target snapshot need to be provided when renaming.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/PVE/Storage.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 53965ee..b3ca094 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -2348,7 +2348,8 @@ sub rename_snapshot {
     my ($cfg, $volid, $source_snap, $target_snap) = @_;
 
     die "no volid provided\n" if !$volid;
-    die "no source or target snap provided\n" if !$source_snap && !$target_snap;
+    die "no source snapshot provided\n" if !$source_snap;
+    die "no target snapshot provided\n" if !$target_snap;
 
     my ($storeid, $volname) = parse_volume_id($volid);
 
-- 
2.39.5
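The behavior change can be sketched outside Perl as well: the old combined check only failed when both snapshot names were missing, while renaming needs each of them. A minimal shell analogue (hypothetical wrapper, not the actual storage API):

```shell
# Shell analogue of the fixed checks: each parameter is validated separately.
rename_snapshot() {
    volid=$1 source_snap=$2 target_snap=$3
    [ -n "$volid" ] || { echo "no volid provided" >&2; return 1; }
    [ -n "$source_snap" ] || { echo "no source snapshot provided" >&2; return 1; }
    [ -n "$target_snap" ] || { echo "no target snapshot provided" >&2; return 1; }
    echo "rename $volid: $source_snap -> $target_snap"
}

# the old '!$source_snap && !$target_snap' check would have accepted this call:
rename_snapshot "local:123/vm-123-disk-0.qcow2" "" "new" 2>/dev/null || echo "rejected"
```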




* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-09 16:21 [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Alexandre Derumier via pve-devel
  2025-07-10 15:09 ` [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent Fabian Grünbichler
@ 2025-07-10 15:13 ` Fabian Grünbichler
  2025-07-10 15:46   ` DERUMIER, Alexandre via pve-devel
                     ` (7 more replies)
  1 sibling, 8 replies; 28+ messages in thread
From: Fabian Grünbichler @ 2025-07-10 15:13 UTC (permalink / raw)
  To: Proxmox VE development discussion; +Cc: Wolfgang Bumiller, Thomas Lamprecht


> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 09.07.2025 18:21 CEST geschrieben:
> This patch series implements qcow2 external snapshot support for files && LVM volumes.
> 
> The current internal qcow2 snapshots have poor write performance because no metadata can be preallocated.
> 
> This is particularly visible on shared filesystems like ocfs2 or gfs2.
> 
> There are also other bugs: users have reported freezes/locks for years when deleting snapshots on NFS
> (disk access seems to be frozen for the whole duration of the delete).
> 
> This also opens the door to remote snapshot export/import for storage replication.
> 
> Changelog v8:
>     storage :
>      - fix Fabian comments
>      - add rename_snapshot   
>      - add qemu_resize
>      - plugin: restrict volnames
>      - plugin: use 'external-snapshots' instead of 'snapext'
>     qemu-server:
>      - fix efi test with wrong volnames vm-disk-100-0.raw
>      - use rename_snapshot
> MAIN TODO:
>     - add snapshots tests in both pve-storage && qemu-server
>     - better handle snapshot failure with multiple disks

I sent a few follow-ups, as I didn't manage to fully test things. Depending on the outcome of such tests, it might be okay to apply the series with those follow-ups and fix up the rest later, or not..

some open issues that I discovered that still need fixing:

1. missing activation when snapshotting an LVM volume if the VM is not running

snapshotting 'drive-scsi0' (test:123/vm-123-disk-0.qcow2)
snapshotting 'drive-scsi1' (lvm:vm-123-disk-0.qcow2)
  Renamed "vm-123-disk-0.qcow2" to "snap_vm-123-disk-0_test.qcow2" in volume group "lvm"
failed to stat '/dev/lvm/snap_vm-123-disk-0_test.qcow2'   <============
Use of uninitialized value $size in division (/) at /usr/share/perl5/PVE/Storage/LVMPlugin.pm line 671.
  Rounding up size to full physical extent 4.00 MiB
  Logical volume "vm-123-disk-0.qcow2" created.
  Logical volume "vm-123-disk-0.qcow2" successfully removed.
  Renamed "snap_vm-123-disk-0_test.qcow2" to "vm-123-disk-0.qcow2" in volume group "lvm"
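
A plausible shape for the fix to issue 1 is to activate the volume inside the LVM plugin's snapshot path before the rename, so the following stat succeeds even when the VM (and thus the LV) is not active. This is only a sketch; the method body and the activate_volume cache argument are assumptions, not the final patch:

```perl
# Sketch of PVE::Storage::LVMPlugin::volume_snapshot with an explicit
# activation step (assumption: the rest of the snapshot logic follows
# the v8 series, i.e. rename + qcow2 overlay creation).
sub volume_snapshot {
    my ($class, $scfg, $storeid, $volname, $snap) = @_;

    # ensure the LV is active, otherwise the later stat of
    # /dev/<vg>/snap_<volname>_<snap> fails for stopped VMs
    $class->activate_volume($storeid, $scfg, $volname, undef, {});

    # ... rename the current LV to the snapshot name and create the
    # new top image backed by it, as in the series ...
}
```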

2. storage migration with external snapshots needs to be implemented or disabled (LVM is raw-only at the moment)
3. moving a 'raw' lvm volume to the same storage with format 'qcow2' fails with "you can't move to the same storage with same format (500)" (UI and CLI, other way round works)
4. all snapshot volumes on extsnap dir storages will print warnings like

`this volume filename is not supported anymore`

when hitting `parse_namedir` - those can likely be avoided by skipping the warning if the name matches the snapshot format and external-snapshots are enabled..
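
The guard suggested for issue 4 could look roughly like this; the snapshot-name pattern and the availability of $scfg at this point in parse_namedir are assumptions for illustration only:

```perl
# Sketch: suppress the legacy-name warning for external snapshot
# volumes on storages that have the feature enabled.
my $is_ext_snap =
    $scfg->{'external-snapshots'}
    && $name =~ m/^snap_vm-\d+-\S+_\S+\.qcow2$/;

warn "this volume filename is not supported anymore\n"
    if !$is_ext_snap;
```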

5. the backing file paths are not relative for LVM
6. it's fairly easy to accidentally create qcow2-formatted LVM volumes (as opposed to dir storages, where a non-UI storage option has to be enabled at storage creation time), so we might want to add some warning to the UI at least? or we could also guard usage of the format with a config option..
7. it would be nice to avoid having the snapshot name related helpers be public - one way would be to inline them and duplicate volume_snapshot_info in the LVM plugin, but if a better option is found that would be great
8. renaming a volume needs to be forbidden if snapshots exist, or the whole chain needs to be renamed (this is currently caught higher up in the stack, not sure if we need/want to also check in the storage layer?)

the parse_namedir change also needs a close look to see if some other plugins get broken.. (@Wolfgang - since you are working on related changes!)

I am sure there are more rough edges to be found, so don't consider the list above complete!

> 
> 
> pve-storage:
> 
> Alexandre Derumier (13):
>   plugin: add qemu_img_create
>   plugin: add qemu_img_create_qcow2_backed
>   plugin: add qemu_img_info
>   plugin: add qemu_img_measure
>   plugin: add qemu_img_resize
>   rbd && zfs : create_base : remove $running param from volume_snapshot
>   storage: volume_snapshot: add $running param
>   storage: add rename_snapshot method
>   storage: add volume_support_qemu_snapshot
>   plugin: fix volname parsing
>   qcow2: add external snapshot support
>   lvmplugin: add qcow2 snapshot
>   tests: add lvmplugin test
> 
>  ApiChangeLog                         |  15 +
>  src/PVE/Storage.pm                   |  45 ++-
>  src/PVE/Storage/BTRFSPlugin.pm       |   6 +
>  src/PVE/Storage/CIFSPlugin.pm        |   1 +
>  src/PVE/Storage/Common.pm            |  33 ++
>  src/PVE/Storage/DirPlugin.pm         |  11 +
>  src/PVE/Storage/ESXiPlugin.pm        |   8 +-
>  src/PVE/Storage/ISCSIDirectPlugin.pm |   2 +-
>  src/PVE/Storage/LVMPlugin.pm         | 571 +++++++++++++++++++++-----
>  src/PVE/Storage/LvmThinPlugin.pm     |   8 +-
>  src/PVE/Storage/NFSPlugin.pm         |   1 +
>  src/PVE/Storage/PBSPlugin.pm         |   2 +-
>  src/PVE/Storage/Plugin.pm            | 518 +++++++++++++++++++++---
>  src/PVE/Storage/RBDPlugin.pm         |  18 +-
>  src/PVE/Storage/ZFSPlugin.pm         |   4 +-
>  src/PVE/Storage/ZFSPoolPlugin.pm     |   8 +-
>  src/test/Makefile                    |   5 +-
>  src/test/run_test_lvmplugin.pl       | 577 +++++++++++++++++++++++++++
>  18 files changed, 1662 insertions(+), 171 deletions(-)
>  create mode 100755 src/test/run_test_lvmplugin.pl
> 
> qemu-server :
> 
> Alexandre Derumier (4):
>   qemu_img convert : add external snapshot support
>   blockdev: add backing_chain support
>   qcow2: add external snapshot support
>   tests: fix efi vm-disk-100-0.raw -> vm-100-disk-0.raw
> 
>  src/PVE/QemuConfig.pm                         |   4 +-
>  src/PVE/QemuServer.pm                         | 132 +++++--
>  src/PVE/QemuServer/Blockdev.pm                | 345 +++++++++++++++++-
>  src/PVE/QemuServer/QemuImage.pm               |   6 +-
>  src/test/cfg2cmd/efi-raw-old.conf             |   2 +-
>  src/test/cfg2cmd/efi-raw-old.conf.cmd         |   2 +-
>  src/test/cfg2cmd/efi-raw-template.conf        |   2 +-
>  src/test/cfg2cmd/efi-raw-template.conf.cmd    |   2 +-
>  src/test/cfg2cmd/efi-raw.conf                 |   2 +-
>  src/test/cfg2cmd/efi-raw.conf.cmd             |   2 +-
>  src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf |   2 +-
>  .../cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd  |   2 +-
>  src/test/cfg2cmd/efi-secboot-and-tpm.conf     |   2 +-
>  src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd |   2 +-
>  src/test/cfg2cmd/sev-es.conf                  |   2 +-
>  src/test/cfg2cmd/sev-es.conf.cmd              |   2 +-
>  src/test/cfg2cmd/sev-std.conf                 |   2 +-
>  src/test/cfg2cmd/sev-std.conf.cmd             |   2 +-
>  src/test/cfg2cmd/simple-backingchain.conf     |  25 ++
>  src/test/cfg2cmd/simple-backingchain.conf.cmd |  33 ++
>  src/test/run_config2command_tests.pl          |  47 +++
>  src/test/run_qemu_img_convert_tests.pl        |  59 +++
>  src/test/snapshot-test.pm                     |   4 +-
>  23 files changed, 634 insertions(+), 49 deletions(-)
>  create mode 100644 src/test/cfg2cmd/simple-backingchain.conf
>  create mode 100644 src/test/cfg2cmd/simple-backingchain.conf.cmd
> 
> 
> -- 
> 2.39.5



* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-10 15:13 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Fabian Grünbichler
@ 2025-07-10 15:46   ` DERUMIER, Alexandre via pve-devel
       [not found]   ` <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-10 15:46 UTC (permalink / raw)
  To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht

[-- Attachment #1: Type: message/rfc822, Size: 23664 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 10 Jul 2025 15:46:35 +0000
Message-ID: <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>

Hi Fabian,

I'll try to address all your comments next week.

I'm going on holiday at the end of next week, from the 18th of July to
around the 10th of August, so I think it'll be the last time I can work
on it before next month. But feel free to improve my patches during
that time.





* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
       [not found]   ` <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>
@ 2025-07-10 16:29     ` Thomas Lamprecht
  2025-07-11 12:04       ` DERUMIER, Alexandre via pve-devel
  0 siblings, 1 reply; 28+ messages in thread
From: Thomas Lamprecht @ 2025-07-10 16:29 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel, f.gruenbichler; +Cc: w.bumiller

Hi Alexandre,

On 10.07.25 at 17:46, DERUMIER, Alexandre wrote:
> I'll try to fix all your comments for next week.
> 
> I'm going on holiday end of the next week, the 18th july to around 10
> August, so I think It'll be the last time I can work on it before next
> month. But feel free to improve my patches during this time.

I'm pondering a bit if taking this + Fabian's follow-ups in already and
then doing the rest of the improvements as further follow-ups would make
them a bit easier to review and maybe also develop. What do you think?
I'm naturally also fine with waiting until another revision from you
next week though, just as an idea.



* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-10 16:29     ` Thomas Lamprecht
@ 2025-07-11 12:04       ` DERUMIER, Alexandre via pve-devel
  0 siblings, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-11 12:04 UTC (permalink / raw)
  To: pve-devel, t.lamprecht, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller

[-- Attachment #1: Type: message/rfc822, Size: 14160 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Fri, 11 Jul 2025 12:04:44 +0000
Message-ID: <86f74ead797f07da2244caec44ed5da39c07274b.camel@groupe-cyllene.com>


Hi Thomas

On 10.07.25 at 17:46, DERUMIER, Alexandre wrote:
> I'll try to fix all your comments for next week.
> 
> I'm going on holiday end of the next week, the 18th july to around 10
> August, so I think It'll be the last time I can work on it before
> next
> month. But feel free to improve my patches during this time.

>>I'm pondering a bit if taking this + Fabian's follow-up in already and
>>then doing the rest of the improvements as further follow-ups would
>>make them a bit easier to review and maybe also develop, what do you
>>think? I'm naturally also fine with waiting until another revision
>>from you next week though, just as an idea.

Yes, sure, we can work with follow-ups based on this series, no problem.

(Not sure I'll have time to fully work on it before vacation anyway;
next week will be really full.)





* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-10 15:13 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Fabian Grünbichler
  2025-07-10 15:46   ` DERUMIER, Alexandre via pve-devel
       [not found]   ` <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>
@ 2025-07-14  6:25   ` DERUMIER, Alexandre via pve-devel
  2025-07-14  8:18   ` DERUMIER, Alexandre via pve-devel
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-14  6:25 UTC (permalink / raw)
  To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht

[-- Attachment #1: Type: message/rfc822, Size: 13738 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Mon, 14 Jul 2025 06:25:58 +0000
Message-ID: <e4e612ebca9f43afa3a01699caa81a3798be1fbb.camel@groupe-cyllene.com>

>>1. missing activation when snapshotting an LVM volume if the VM is
>>not running

Ah yes, I didn't see it: on volume create, the volume is
auto-activated, but if you start/stop the VM, it's inactive.

I also see the new patch to disable auto-activation:
https://git.proxmox.com/?p=pve-storage.git;a=commit;h=2796d6b6395c2c4d0da55df51ca41b0e045af3a0

Do we also need to deactivate it after each action (snapshot,
resize, ...) if the VM is not running?
Not sure, but maybe it needs to be done in qemu-server, as it needs to
know if the VM is running and we don't have the $running option
everywhere.


I'll fix the activation for snapshot creation.








* Re: [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent
  2025-07-10 15:09 ` [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent Fabian Grünbichler
  2025-07-10 15:09   ` [pve-devel] [PATCH FOLLOW-UP storage 2/3] helpers: move qemu_img* to Common module Fabian Grünbichler
  2025-07-10 15:09   ` [pve-devel] [PATCH FOLLOW-UP storage 3/3] rename_snapshot: fix parameter checks Fabian Grünbichler
@ 2025-07-14  6:34   ` DERUMIER, Alexandre via pve-devel
       [not found]   ` <6b7eaeb6af6af22ebdd034236e9e88bc1b5e1e3f.camel@groupe-cyllene.com>
  3 siblings, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-14  6:34 UTC (permalink / raw)
  To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 13376 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent
Date: Mon, 14 Jul 2025 06:34:38 +0000
Message-ID: <6b7eaeb6af6af22ebdd034236e9e88bc1b5e1e3f.camel@groupe-cyllene.com>

>> sub qemu_img_resize {
>>-    my ($scfg, $path, $format, $size, $timeout) = @_;
>>+    my ($scfg, $path, $format, $size, $preallocation, $timeout) = @_;

you forgot to remove the $scfg param, so
it's breaking resize for both Plugin && LVMPlugin:

Plugin.pm:    PVE::Storage::Common::qemu_img_resize($path, $format, $size, $preallocation, 10);

LVMPlugin.pm:        PVE::Storage::Common::qemu_img_resize($path, $format, $size, $preallocation, 10);
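
So the follow-up needs a fix-up along these lines, dropping $scfg from the helper so it matches those call sites (a sketch of the intended signature, not the applied patch):

```perl
# PVE::Storage::Common (sketch): $scfg is gone, the helper only needs
# the path, format, new size, preallocation mode and a timeout.
sub qemu_img_resize {
    my ($path, $format, $size, $preallocation, $timeout) = @_;
    # ... build and run the qemu-img resize command as before ...
}

# call sites then line up with the signature, e.g.:
# PVE::Storage::Common::qemu_img_resize($path, $format, $size, $preallocation, 10);
```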


* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-10 15:13 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Fabian Grünbichler
                     ` (2 preceding siblings ...)
  2025-07-14  6:25   ` DERUMIER, Alexandre via pve-devel
@ 2025-07-14  8:18   ` DERUMIER, Alexandre via pve-devel
  2025-07-14  8:42   ` DERUMIER, Alexandre via pve-devel
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-14  8:18 UTC (permalink / raw)
  To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht

[-- Attachment #1: Type: message/rfc822, Size: 12892 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Mon, 14 Jul 2025 08:18:53 +0000
Message-ID: <26badbf66613a89e63eaad8b24dd05567900250b.camel@groupe-cyllene.com>

>>4. all snapshot volumes on extsnap dir storages will print warnings
>>like
>>
>>`this volume filename is not supported anymore`
>>
>>when hitting `parse_namedir` - those can likely be avoided by
>>skipping the warning if the name matches the snapshot format and
>>external-snapshots are enabled..

Have you seen a case where it's displayed?

Because I never call parse_volname() with a $snapvolname, only with the
main $volname (we don't want to display snapshot volumes in the volume
list, for example).


* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-10 15:13 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Fabian Grünbichler
                     ` (3 preceding siblings ...)
  2025-07-14  8:18   ` DERUMIER, Alexandre via pve-devel
@ 2025-07-14  8:42   ` DERUMIER, Alexandre via pve-devel
       [not found]   ` <26badbf66613a89e63eaad8b24dd05567900250b.camel@groupe-cyllene.com>
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-14  8:42 UTC (permalink / raw)
  To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht

[-- Attachment #1: Type: message/rfc822, Size: 12929 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Mon, 14 Jul 2025 08:42:17 +0000
Message-ID: <719c71b1148846e0cdd7e5c149d8b20146b4d043.camel@groupe-cyllene.com>

>>6. it's fairly easy to accidentally create qcow2-formatted LVM
>>volumes, as opposed to the requirement to enable a non-UI storage
>>option at storage creation time for dir storages, we might want to
>>add some warning to the UI at least? or we could also guard usage of
>>the format with a config option..

Personally, this doesn't bother me that much.

The external-snapshots option is there to distinguish between internal
and external snapshots.

You can also "accidentally" create a qcow2 file instead of a raw file
on a dir storage.

So I think a warning in the UI is good enough if the qcow2 format is
selected.


* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
       [not found]   ` <26badbf66613a89e63eaad8b24dd05567900250b.camel@groupe-cyllene.com>
@ 2025-07-14 11:02     ` Fabian Grünbichler
  2025-07-15  4:15       ` DERUMIER, Alexandre via pve-devel
  0 siblings, 1 reply; 28+ messages in thread
From: Fabian Grünbichler @ 2025-07-14 11:02 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller, t.lamprecht


> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> wrote on 14.07.2025 10:18 CEST:
> 
>  
> >>4. all snapshot volumes on extsnap dir storages will print warnings
> >>like
> >>
> >>`this volume filename is not supported anymore`
> >>
> >>when hitting `parse_namedir` - those can likely be avoided by
> >>skipping the warning if the name matches the snapshot format and
> >>external-snapshots are enabled..
> 
> Do you have seen a case where it's displayed ? 
> 
> because I never call parse_volname() with a $snapvolname, only with the
> main $volname. (because we don't want to display snapshots volumes in
> volumes list for example)

IIRC a simple `pvesm list <storage>` does..



* Re: [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent
       [not found]   ` <6b7eaeb6af6af22ebdd034236e9e88bc1b5e1e3f.camel@groupe-cyllene.com>
@ 2025-07-14 11:02     ` Fabian Grünbichler
  0 siblings, 0 replies; 28+ messages in thread
From: Fabian Grünbichler @ 2025-07-14 11:02 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel


> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> wrote on 14.07.2025 08:34 CEST:
> 
>  
> >> sub qemu_img_resize {
> >>-    my ($scfg, $path, $format, $size, $timeout) = @_;
> >>+    my ($scfg, $path, $format, $size, $preallocation, $timeout) = @_;
> 
> you forgot to remove the $scfg param, so
> it's breaking resize for both Plugin && LVMPlugin:
> 
> Plugin.pm:    PVE::Storage::Common::qemu_img_resize($path, $format,
> $size, $preallocation, 10);
> 
> LVMPlugin.pm:        PVE::Storage::Common::qemu_img_resize($path,
> $format, $size, $preallocation, 10);

thanks - yes, this needs to be fixed up!



* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
       [not found]   ` <719c71b1148846e0cdd7e5c149d8b20146b4d043.camel@groupe-cyllene.com>
@ 2025-07-14 11:04     ` Fabian Grünbichler
  2025-07-14 11:11       ` Thomas Lamprecht
  0 siblings, 1 reply; 28+ messages in thread
From: Fabian Grünbichler @ 2025-07-14 11:04 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller, t.lamprecht


> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> wrote on 14.07.2025 10:42 CEST:
> 
>  
> >>6. it's fairly easy to accidentally create qcow2-formatted LVM
> >>volumes, as opposed to the requirement to enable a non-UI storage
> >>option at storage creation time for dir storages, we might want to
> >>add some warning to the UI at least? or we could also guard usage of
> >>the format with a config option..
> 
> Personally, this doesn't bother me that much.
> 
> the external-snapshots option is there to distinguish between
> internal and external snapshots.
> 
> you can also "accidentally" create a qcow2 file instead of a raw file
> on a dir storage.

yes, but that one has been tested for a while, the LVM one is new and
still experimental, but that is not obvious from the UI.

> so I think that a warning in the UI is fine enough if qcow2 format is
> selected.

would probably need to go into a few places then (allocating a disk,
moving it, restore of backup - at least?)


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-14 11:04     ` Fabian Grünbichler
@ 2025-07-14 11:11       ` Thomas Lamprecht
  2025-07-14 11:15         ` Fabian Grünbichler
  0 siblings, 1 reply; 28+ messages in thread
From: Thomas Lamprecht @ 2025-07-14 11:11 UTC (permalink / raw)
  To: Fabian Grünbichler, DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller

On 14.07.25 at 13:04, Fabian Grünbichler wrote:
>> so I think that a warning in the UI is fine enough if qcow2 format is
>> selected.
> would probably need to go into a few places then (allocating a disk,
> moving it, restore of backup - at least?)


We might be able to add it relatively centrally to the storage selector,
or a new wrapper component, and then enable an option (or switch to the
new wrapper component) - Dominik probably would have a good idea where
this could fit nicely.

Albeit, taking a step back, I'm not so sure if that really helps the user?
I.e., what's something actionable they can do when seeing such a warning?
Is it just to signal "this is a tech preview, be wary"? If, then I'd rather
just signal that in the storage edit/add window.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-14 11:11       ` Thomas Lamprecht
@ 2025-07-14 11:15         ` Fabian Grünbichler
  2025-07-14 11:27           ` Thomas Lamprecht
  0 siblings, 1 reply; 28+ messages in thread
From: Fabian Grünbichler @ 2025-07-14 11:15 UTC (permalink / raw)
  To: Thomas Lamprecht, DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller


> Thomas Lamprecht <t.lamprecht@proxmox.com> wrote on 14.07.2025 13:11 CEST:
> 
>  
> On 14.07.25 at 13:04, Fabian Grünbichler wrote:
> >> so I think that a warning in the UI is fine enough if qcow2 format is
> >> selected.
> > would probably need to go into a few places then (allocating a disk,
> > moving it, restore of backup - at least?)
> 
> 
> We might be able to add it relatively centrally to the storage selector,
> or a new wrapper component, and then enable an option (or switch to the
> new wrapper component) - Dominik probably would have a good idea where
> this could fit nicely.
> 
> Albeit, taking a step back, I'm not so sure if that really helps the user?
> I.e., what's something actionable they can do when seeing such a warning?
> Is it just to signal "this is a tech preview, be wary"? If, then I'd rather
> just signal that in the storage edit/add window.

for LVM, there is no new plugin or storage option, the plugin just gains
a new supported format and that has the experimental status. we could
guard that (like in the DirPlugin) with the external-snapshots flag, and
only allow qcow2 formatted volumes if that is set - that way, users would
need to "opt-in" to the experimental behaviour by creating a new storage
with that flag set. if we don't do that, then the only thing that we can
warn about is qcow2 *being used*, and there is no central place for that
as all existing LVM storages would get that support "over night" (once
they upgrade).


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-14 11:15         ` Fabian Grünbichler
@ 2025-07-14 11:27           ` Thomas Lamprecht
  2025-07-14 11:46             ` Fabian Grünbichler
  0 siblings, 1 reply; 28+ messages in thread
From: Thomas Lamprecht @ 2025-07-14 11:27 UTC (permalink / raw)
  To: Fabian Grünbichler, DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller

On 14.07.25 at 13:15, Fabian Grünbichler wrote:
>> Thomas Lamprecht <t.lamprecht@proxmox.com> wrote on 14.07.2025 13:11 CEST:
>> Albeit, taking a step back, I'm not so sure if that really helps the user?
>> I.e., what's something actionable they can do when seeing such a warning?
>> Is it just to signal "this is a tech preview, be wary"? If, then I'd rather
>> just signal that in the storage edit/add window.
> 
> for LVM, there is no new plugin or storage option, the plugin just gains
> a new supported format and that has the experimental status. we could
> guard that (like in the DirPlugin) with the external-snapshots flag, and
> only allow qcow2 formatted volumes if that is set - that way, users would
> need to "opt-in" to the experimental behaviour by creating a new storage
> with that flag set. if we don't do that, then the only thing that we can
> warn about is qcow2 *being used*, and there is no central place for that
> as all existing LVM storages would get that support "over night" (once
> they upgrade).

Ah okay, I actually thought it already worked that way when I saw the
new flag getting added.

Might be indeed safer and more telling if we couple allowing it with that
flag being set. We could then change the default of that flag for a future
release, once we got more exposure and maybe improved UX polishing a bit
and if we think it helps users. But it might not be bad to require an
explicit decision for this.

If we really just enforce that check on volume creation, we could even
allow changing the flag for an existing storage, as that would be a bit
more convenient and, more importantly, better than nudging the admin towards
adding the same storage twice, which we do not support.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-14 11:27           ` Thomas Lamprecht
@ 2025-07-14 11:46             ` Fabian Grünbichler
  0 siblings, 0 replies; 28+ messages in thread
From: Fabian Grünbichler @ 2025-07-14 11:46 UTC (permalink / raw)
  To: Thomas Lamprecht, DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller


> Thomas Lamprecht <t.lamprecht@proxmox.com> wrote on 14.07.2025 13:27 CEST:
> 
>  
> On 14.07.25 at 13:15, Fabian Grünbichler wrote:
> >> Thomas Lamprecht <t.lamprecht@proxmox.com> wrote on 14.07.2025 13:11 CEST:
> >> Albeit, taking a step back, I'm not so sure if that really helps the user?
> >> I.e., what's something actionable they can do when seeing such a warning?
> >> Is it just to signal "this is a tech preview, be wary"? If, then I'd rather
> >> just signal that in the storage edit/add window.
> > 
> > for LVM, there is no new plugin or storage option, the plugin just gains
> > a new supported format and that has the experimental status. we could
> > guard that (like in the DirPlugin) with the external-snapshots flag, and
> > only allow qcow2 formatted volumes if that is set - that way, users would
> > need to "opt-in" to the experimental behaviour by creating a new storage
> > with that flag set. if we don't do that, then the only thing that we can
> > warn about is qcow2 *being used*, and there is no central place for that
> > as all existing LVM storages would get that support "over night" (once
> > they upgrade).
> 
> Ah okay, I actually thought it already worked that way when I saw the
> new flag getting added.
> 
> Might be indeed safer and more telling if we couple allowing it with that
> flag being set. We could then change the default of that flag for a future
> release, once we got more exposure and maybe improved UX polishing a bit
> and if we think it helps users. But it might not be bad to require an
> explicit decision for this.
> 
> If we really just enforce that check on volume creation, we could even
> allow changing the flag for an existing storage, as that would be a bit
> more convenient, and, more importantly, better than nudge the admin towards
> adding the same storage twice, which we do not support.

yes, for LVM the parameter would not need to be fixed, like for the DirPlugin.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-10 15:13 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Fabian Grünbichler
                     ` (6 preceding siblings ...)
       [not found]   ` <719c71b1148846e0cdd7e5c149d8b20146b4d043.camel@groupe-cyllene.com>
@ 2025-07-14 15:12   ` Tiago Sousa via pve-devel
  7 siblings, 0 replies; 28+ messages in thread
From: Tiago Sousa via pve-devel @ 2025-07-14 15:12 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler
  Cc: Tiago Sousa, Wolfgang Bumiller, Thomas Lamprecht

[-- Attachment #1: Type: message/rfc822, Size: 5327 bytes --]

From: Tiago Sousa <joao.sousa@eurotux.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Cc: Wolfgang Bumiller <w.bumiller@proxmox.com>, Thomas Lamprecht <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Mon, 14 Jul 2025 16:12:04 +0100
Message-ID: <cadccf7c-71d0-4d22-8d1f-266257a88064@eurotux.com>

Just adding a few things I noticed during testing, on top of what Fabian 
already mentioned:

1. VM data not wiped after deletion:
When I delete a VM and then create a new one with the same settings, the 
old OS and data are still there. It looks like the disk wasn't properly 
cleaned up.

2. Can’t roll back to older snapshots:
Rolling back only works for the latest snapshot. If I try to go back to 
an earlier one, I get this error:
TASK ERROR: can't rollback, 'test01' is not most recent snapshot on 
'vm-104-disk-0.qcow2'

3. Rollback doesn't work for LVM-thin volumes:
When trying to roll back a snapshot on an LVM-thin volume, it fails with:
can't rollback snapshot for 'raw' volume



[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-14 11:02     ` Fabian Grünbichler
@ 2025-07-15  4:15       ` DERUMIER, Alexandre via pve-devel
  0 siblings, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-15  4:15 UTC (permalink / raw)
  To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht

[-- Attachment #1: Type: message/rfc822, Size: 14069 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Tue, 15 Jul 2025 04:15:48 +0000
Message-ID: <555c5acfdf30a1914ce4030b51d1cb685e23b097.camel@groupe-cyllene.com>

> > > 4. all snapshot volumes on extsnap dir storages will print
> > > warnings
> > > like
> > > 
> > > `this volume filename is not supported anymore`
> > > 
> > > when hitting `parse_namedir` - those can likely be avoided by
> > > skipping the warning if the name matches the snapshot format and
> > > external-snapshots are enabled..
> 
> Have you seen a case where it's displayed?
> 
> Because I never call parse_volname() with a $snapvolname, only with the
> main $volname (because we don't want to display snapshot volumes in
> volume lists, for example)

>>IIRC a simple `pvesm list <storage>` does..


sub list_images doesn't use parse_volname/parse_namedir, but:


  foreach my $fn (<$imagedir/[0-9][0-9]*/*>) {
        next if $fn !~ m!^(/.+/(\d+)/([^/]+\.($fmts)))$!;
        $fn = $1; # untaint


so the snapshot files are listed.


The warning `this volume filename is not supported anymore` comes later,
in the API, where PVE::Storage::check_volume_access is called on each
listed volume (and check_volume_access calls parse_volname).


I'm not sure if the list_images filtering should be improved now that we
are more strict?
lvm, for example, only has a basic:

next if $volname !~ m/^vm-(\d+)-/;



For now, I'll fix parse_namedir.
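The stricter-filtering idea above could look roughly like this (a hypothetical Python sketch mirroring the Perl glob/regex quoted earlier; the volume and snapshot name patterns here are assumptions for illustration, not the actual pve-storage naming scheme):

```python
# Sketch: skip snapshot files during image listing so they neither show
# up in volume lists nor trigger parse warnings later in the API.
import re

# Assumed, illustrative naming patterns - not the real pve-storage ones.
VOLUME_RE = re.compile(r'^vm-(\d+)-disk-(\d+)\.(raw|qcow2|vmdk)$')
SNAPSHOT_RE = re.compile(r'^snap-.+-vm-(\d+)-disk-(\d+)\.qcow2$')

def list_images(filenames):
    """Return only primary volumes, silently skipping snapshot files."""
    images = []
    for fn in filenames:
        if SNAPSHOT_RE.match(fn):
            continue  # snapshot file: owned by a volume, not listed itself
        if VOLUME_RE.match(fn):
            images.append(fn)
    return images
```

With such a filter, `pvesm list`-style enumeration would never hand snapshot filenames to parse_volname in the first place.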


[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
       [not found]         ` <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>
  2025-07-17 15:53           ` DERUMIER, Alexandre via pve-devel
@ 2025-07-17 16:05           ` DERUMIER, Alexandre via pve-devel
  1 sibling, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-17 16:05 UTC (permalink / raw)
  To: joao.sousa, pve-devel; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 13199 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "joao.sousa@eurotux.com" <joao.sousa@eurotux.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 16:05:05 +0000
Message-ID: <bb552989f78d0f9c532b1ab09ced5a05299c459f.camel@groupe-cyllene.com>

Also, thinking about that:

if a local daemon is down/dead and doesn't dequeue, maybe some kind of
TTL for the objects in the queue should be used.


If we go with a separate queue for each server, that's not a problem,
but they need to compete on the resize lock (so we can't be sure of the
order of the resizes).

And there is still the case of live migration: here the vmid needs to be
transferred between the different queues after live migration, to be
resized by the target node.


I really don't know what the best way is.






[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
       [not found]         ` <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>
@ 2025-07-17 15:53           ` DERUMIER, Alexandre via pve-devel
  2025-07-17 16:05           ` DERUMIER, Alexandre via pve-devel
  1 sibling, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-17 15:53 UTC (permalink / raw)
  To: joao.sousa, pve-devel; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 15680 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "joao.sousa@eurotux.com" <joao.sousa@eurotux.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 15:53:50 +0000
Message-ID: <7d63c3908c3ea00861a4582b217195305ea4ece0.camel@groupe-cyllene.com>

>>Ok, are you thinking of having a local queue for each node as well? 
>>since if there was a single queue for all nodes how would you manage
>>the 
>>concurrency of writes of each node? 

I really don't know, to be honest.
What could happen is a live migration, where an event is emitted on
node1, but the VM is transferred to node2 before the resize occurs.

So maybe a central queue would be good here (I think concurrency isn't
too much of a problem if we take a write lock when we need to
queue/dequeue), with each local resize daemon checking the queue for
vmids currently running locally.

Something like that.

>>(is there a similar function to >>cfs_lock_file but for C code?)

pmxcfs is written in C, so I'm pretty sure it's possible to do it.
^_^

(I'm really bad at C and Rust, so maybe some Proxmox developers could
help on this part)

> > > 2. If we use a local daemon for each node how is it decided which
> > > node
> > > will preform the extend operation?
> > > Another option is to use a centralized daemon (maybe on the
> > > quorum
> > > master) that performs every extend.
> The local daemon where the vm is running should resize the lvm,
> because
> if the resize is done on another node, the lvm need to be
> rescan/refresh to see the new size. AFAIK, It's not done
> automatically. 

>>Ok, at first I was thinking of doing a FIFO like cluster wide queue 
>>where the extends would be done in the order of arrival. But if I'm 
>>understanding correctly, by doing a local queue and extend daemon,
>>the 
>>extends would be done in order but not in a global cluster sense
>>right? 
>>Where each extend daemon would be competing to acquire the storage
>>lock 
>>to proceed with the next extend. Please let me know if I'm
>>understanding 
>>your idea correctly.


The LVM resize itself needs to be locked globally (because if you resize
different LVM volumes in parallel on different servers, you could end up
with the same disk sectors allocated to different LVM volumes).

So, indeed, each local daemon competes for the resize lock.
FIFO should be used, because we want to resize as fast as possible after
an event is emitted (before the VM runs out of space).

So a global queue is better here as well: local daemons poll the queue
in FIFO order, and if the VM is local to a daemon, that daemon does the
resize.

Something like that.

(but all ideas are welcome ^_^)
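A minimal sketch of what such a global FIFO queue with per-node daemons could look like (Python for illustration only; the entry format, the TTL idea from earlier in the thread, and the `is_local` check are assumptions, and a real implementation would need pmxcfs-backed locking and persistence rather than in-memory state):

```python
# Sketch: cluster-wide FIFO of (vmid, diskid) extend requests.
# Every node's daemon polls it and only services entries for VMs
# running locally; stale entries expire via a TTL so a dead daemon
# cannot wedge the queue forever.
import time
from collections import deque

class ExtendQueue:
    def __init__(self):
        self._q = deque()

    def push(self, vmid: int, diskid: str):
        # qmeventd-side: enqueue a request with its arrival timestamp
        self._q.append((vmid, diskid, time.time()))

    def pop_local(self, is_local, ttl: float = 60.0):
        """Return the oldest non-expired entry whose VM runs on this
        node (per is_local), keeping all other entries in FIFO order."""
        now = time.time()
        kept = deque()
        found = None
        while self._q:
            vmid, diskid, ts = self._q.popleft()
            if now - ts > ttl:
                continue  # expired: drop, nobody dequeued it in time
            if found is None and is_local(vmid):
                found = (vmid, diskid)
            else:
                kept.append((vmid, diskid, ts))
        self._q = kept
        return found
```

Each daemon would call `pop_local` under the global resize lock, do the LVM resize for the returned entry, and release the lock; after a live migration the entry simply gets picked up by the new node, since locality is checked at dequeue time.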










[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
       [not found]       ` <47b76678f969ba97926c85af4bf8e50c9dda161d.camel@groupe-cyllene.com>
@ 2025-07-17 15:42         ` Tiago Sousa via pve-devel
       [not found]         ` <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>
  1 sibling, 0 replies; 28+ messages in thread
From: Tiago Sousa via pve-devel @ 2025-07-17 15:42 UTC (permalink / raw)
  To: DERUMIER, Alexandre, pve-devel; +Cc: Tiago Sousa

[-- Attachment #1: Type: message/rfc822, Size: 7789 bytes --]

From: Tiago Sousa <joao.sousa@eurotux.com>
To: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 16:42:06 +0100
Message-ID: <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>



On 7/17/25 16:08, DERUMIER, Alexandre wrote:
>>>
>>> However I have some questions:
>>>
>>> 1. When qmeventd receives the BLOCK_WRITE_THRESHOLD event, should the
>>> extend request (writing the nodename to the extend queue) be handled
>>> directly in C, or would it be preferable to do it via an API call
>>> such
>>> as PUT /nodes/{node}/qemu/{vmid}/extend_request, passing the nodename
>>> as
>>> a parameter?
> 
> I think that qemevent should be as fast as possible. For my first
> version, I was doing an exec of "qm extend ....", but I think it should
> be even better if qmevent is simply write vmid/diskid in a text file
> somewhere in /etc/pve.  (then the other daemon can manage the queue)

Ok, are you thinking of having a local queue for each node as well?
If there was a single queue for all nodes, how would you manage the
concurrency of writes from each node? (is there a function similar to
cfs_lock_file but for C code?)
>>> 2. If we use a local daemon for each node how is it decided which
>>> node
>>> will preform the extend operation?
>>> Another option is to use a centralized daemon (maybe on the quorum
>>> master) that performs every extend.
> The local daemon where the vm is running should resize the lvm, because
> if the resize is done on another node, the lvm need to be
> rescan/refresh to see the new size. AFAIK, It's not done automatically. 
Ok, at first I was thinking of doing a FIFO-like cluster-wide queue
where the extends would be done in order of arrival. But if I'm
understanding correctly, by doing a local queue and extend daemon, the
extends would be done in order, but not in a global cluster sense, right?
Where each extend daemon would compete to acquire the storage lock
to proceed with the next extend. Please let me know if I'm understanding
your idea correctly.


>>> 3. Is there any specific reason for the event only be triggered at
>>> 50%
>>> of last chunk, in your implementation? I was thinking of implementing
>>> it
>>> with 10% of the current provisioned space to be safe. Any options on
>>> this?
> 
> I have use same implementation than redhat, but it could be some
> tunable value. It's really depend of how much fast is your storage.
> as far I remember, it's was 50% of the size of the chunk (1GB).
> so when you have 500MB free, it should add another 1GB.
> 
> of course, if you have fast nvme, and you write 2GB/s,  it'll be too
> short.
> 
> 
> if you go with 10% of provisionned, if you have 2TB qcow2, it'll
> increase when 200GB is free.  (and how much do we want to increase ?
> another 10% ? )
> 
> but if you want 2GB qcow2, it'll increase when 200MB is free.
> so, we a fast nvme, it could not work with small disk, but ok with big
> disk.
> I think that's why redhat use percent of fixed chunksize (that you can
> configure depending of your storage speed)
You're right that way makes the most sense.







[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
       [not found]     ` <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>
@ 2025-07-17 15:08       ` DERUMIER, Alexandre via pve-devel
       [not found]       ` <47b76678f969ba97926c85af4bf8e50c9dda161d.camel@groupe-cyllene.com>
  1 sibling, 0 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-17 15:08 UTC (permalink / raw)
  To: joao.sousa, pve-devel; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 16433 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "joao.sousa@eurotux.com" <joao.sousa@eurotux.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 15:08:45 +0000
Message-ID: <47b76678f969ba97926c85af4bf8e50c9dda161d.camel@groupe-cyllene.com>

>>
>>However I have some questions:
>>
>>1. When qmeventd receives the BLOCK_WRITE_THRESHOLD event, should the
>>extend request (writing the nodename to the extend queue) be handled 
>>directly in C, or would it be preferable to do it via an API call
>>such 
>>as PUT /nodes/{node}/qemu/{vmid}/extend_request, passing the nodename
>>as 
>>a parameter?

I think that qmeventd should be as fast as possible. For my first
version, I was doing an exec of "qm extend ....", but I think it would
be even better if qmeventd simply writes vmid/diskid to a text file
somewhere in /etc/pve (then the other daemon can manage the queue).


>>2. If we use a local daemon for each node how is it decided which
>>node 
>>will preform the extend operation?
>>Another option is to use a centralized daemon (maybe on the quorum 
>>master) that performs every extend.
The local daemon where the VM is running should resize the LVM volume,
because if the resize is done on another node, the LVM metadata needs to
be rescanned/refreshed to see the new size. AFAIK, that's not done
automatically.



>>3. Is there any specific reason for the event only be triggered at
>>50% 
>>of last chunk, in your implementation? I was thinking of implementing
>>it 
>>with 10% of the current provisioned space to be safe. Any options on
>>this?

I have used the same implementation as Red Hat, but it could be a
tunable value. It really depends on how fast your storage is.
As far as I remember, it was 50% of the size of the chunk (1GB):
so when you have 500MB free, it adds another 1GB.

Of course, if you have fast NVMe and write 2GB/s, that margin will be
too short.


If you go with 10% of the provisioned size: with a 2TB qcow2, it will
extend when 200GB is free (and by how much do we want to extend?
another 10%?),

but with a 2GB qcow2, it will extend when only 200MB is free.
So with a fast NVMe it could fail for small disks while being fine for
big disks. I think that's why Red Hat uses a percentage of a fixed chunk
size (which you can configure depending on your storage speed).
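The two threshold policies being compared can be sketched as follows (a hypothetical illustration in Python; the function names and the 1GB chunk size are assumptions based on the figures above, not any Proxmox or QEMU API):

```python
# Sketch: where to place the write-threshold that triggers an extend.
CHUNK = 1 * 1024**3  # 1 GiB extension chunk, as in the Red Hat approach

def threshold_fixed_chunk(allocated: int, chunk: int = CHUNK,
                          percent: int = 50) -> int:
    """Trigger when free space in the last chunk drops below `percent`
    of it: with a 1 GiB chunk and 50%, extend at less than 512 MiB free.
    The margin is constant regardless of disk size."""
    return allocated - chunk * percent // 100

def threshold_of_provisioned(provisioned: int, percent: int = 10) -> int:
    """Trigger when free space drops below `percent` of the provisioned
    size: a 2 TiB image triggers at 200 GiB free, a 2 GiB image at only
    200 MiB free - too late for a fast NVMe writing 2 GB/s."""
    return provisioned - provisioned * percent // 100
```

This is why a fixed, per-storage tunable chunk size scales better across disk sizes: the trigger margin stays the same whether the image is 2GB or 2TB.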






>>In terms of locking I'm planning to use the cfs_lock_file to write to
>>the extend queue and cfs_lock_storage to perform the extend on the 
>>target disk.

yes, that's what I had in mind too.


One thing to also handle is the server losing quorum for a few seconds
(making /etc/pve read-only). I think we need to keep the info in qmeventd
memory and try to write the file to the queue once quorum is good again.


[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-17  8:01   ` DERUMIER, Alexandre via pve-devel
@ 2025-07-17 14:49     ` Tiago Sousa via pve-devel
       [not found]     ` <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>
  1 sibling, 0 replies; 28+ messages in thread
From: Tiago Sousa via pve-devel @ 2025-07-17 14:49 UTC (permalink / raw)
  To: Proxmox VE development discussion; +Cc: Tiago Sousa

[-- Attachment #1: Type: message/rfc822, Size: 5936 bytes --]

From: Tiago Sousa <joao.sousa@eurotux.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 15:49:05 +0100
Message-ID: <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>

Hi,

I'm starting to develop the thin provisioning feature to integrate with 
the new snapshot feature for LVM. The architecture I'm following is very 
similar to the one Alexandre mentioned here

https://lore.proxmox.com/pve-devel/mailman.380.1750053104.395.pve-devel@lists.proxmox.com/

However I have some questions:

1. When qmeventd receives the BLOCK_WRITE_THRESHOLD event, should the 
extend request (writing the nodename to the extend queue) be handled 
directly in C, or would it be preferable to do it via an API call such 
as PUT /nodes/{node}/qemu/{vmid}/extend_request, passing the nodename as 
a parameter?

2. If we use a local daemon for each node how is it decided which node 
will preform the extend operation?
Another option is to use a centralized daemon (maybe on the quorum 
master) that performs every extend.

3. Is there any specific reason for the event only be triggered at 50% 
of last chunk, in your implementation? I was thinking of implementing it 
with 10% of the current provisioned space to be safe. Any options on this?

In terms of locking I'm planning to use the cfs_lock_file to write to 
the extend queue and cfs_lock_storage to perform the extend on the 
target disk.


[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
  2025-07-16 15:15 ` Thomas Lamprecht
@ 2025-07-17  8:01   ` DERUMIER, Alexandre via pve-devel
  2025-07-17 14:49     ` Tiago Sousa via pve-devel
       [not found]     ` <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>
  0 siblings, 2 replies; 28+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-17  8:01 UTC (permalink / raw)
  To: pve-devel, t.lamprecht; +Cc: DERUMIER, Alexandre

[-- Attachment #1: Type: message/rfc822, Size: 13563 bytes --]

From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 08:01:01 +0000
Message-ID: <ed658e13ad9c6e02454ba29f462ca0515d8234da.camel@groupe-cyllene.com>

>>
>>
>>Wolfgang now applied all patches and some follow-ups, we test this a
>>bit
>>more internal, but if nothing grave comes up this should be included
>>in
>>upcoming PVE 9 - great work!

Fantastic !  I would like to thank you guys for your support ! So
thanks Fiona, Fabian, Wolfgang, Thomas for the help, review, ideas, and
for your time on this ! 


>>btw. and mostly out of interest: What was the status for using qcow2
>>for
>>thin-provisioning idea again? I know there was some discussion, but
>>could
>>not quickly find what the last status was.

For lvm+qcow2 provisioning, it's a really early POC/RFC (I sent some
patches a year ago).
https://lore.proxmox.com/all/mailman.400.1724670042.302.pve-devel@lists.proxmox.com/

I'll try to work on it after this summer for pve9.1.
The main thing to work out is how to handle the cluster storage lock for
LVM resize.

QEMU sends an event to qmeventd when the used size reaches a threshold;
pvestatd can also send an event on disk full, as a safety net.

Then some kind of daemon needs to resize the LVM volume under a global
lock. I think we can use a local daemon on each node, and maybe lock
through pmxcfs, with some kind of queue file with error/retry handling.

 




[-- Attachment #2: Type: text/plain, Size: 160 bytes --]

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
       [not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
@ 2025-07-16 15:15 ` Thomas Lamprecht
  2025-07-17  8:01   ` DERUMIER, Alexandre via pve-devel
  0 siblings, 1 reply; 28+ messages in thread
From: Thomas Lamprecht @ 2025-07-16 15:15 UTC (permalink / raw)
  To: Alexandre Derumier, pve-devel

Am 09.07.25 um 18:21 schrieb Alexandre Derumier:
> This patch series implements qcow2 external snapshot support for file-based && LVM volumes.
> 
> The current internal qcow2 snapshots have bad write performance because no metadata can be preallocated.
> 
> This is particularly visible on a shared filesystem like ocfs2 or gfs2.
> 
> There are also other bugs: users have reported freezes/locks for years when deleting snapshots on NFS
> (disk access seems to be frozen for the whole duration of the delete).
> 
> This also opens the door to remote snapshot export/import for storage replication.
> 

Wolfgang has now applied all patches and some follow-ups; we'll test
this a bit more internally, but if nothing grave comes up this should be
included in the upcoming PVE 9 - great work!

Btw., and mostly out of interest: what was the status of the idea of
using qcow2 for thin-provisioning again? I know there was some
discussion, but I could not quickly find what the latest status was.




^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2025-07-17 16:04 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-09 16:21 [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Alexandre Derumier via pve-devel
2025-07-10 15:09 ` [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent Fabian Grünbichler
2025-07-10 15:09   ` [pve-devel] [PATCH FOLLOW-UP storage 2/3] helpers: move qemu_img* to Common module Fabian Grünbichler
2025-07-10 15:09   ` [pve-devel] [PATCH FOLLOW-UP storage 3/3] rename_snapshot: fix parameter checks Fabian Grünbichler
2025-07-14  6:34   ` [pve-devel] [PATCH FOLLOW-UP storage 1/3] helpers: make qemu_img* storage config independent DERUMIER, Alexandre via pve-devel
     [not found]   ` <6b7eaeb6af6af22ebdd034236e9e88bc1b5e1e3f.camel@groupe-cyllene.com>
2025-07-14 11:02     ` Fabian Grünbichler
2025-07-10 15:13 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Fabian Grünbichler
2025-07-10 15:46   ` DERUMIER, Alexandre via pve-devel
     [not found]   ` <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>
2025-07-10 16:29     ` Thomas Lamprecht
2025-07-11 12:04       ` DERUMIER, Alexandre via pve-devel
2025-07-14  6:25   ` DERUMIER, Alexandre via pve-devel
2025-07-14  8:18   ` DERUMIER, Alexandre via pve-devel
2025-07-14  8:42   ` DERUMIER, Alexandre via pve-devel
     [not found]   ` <26badbf66613a89e63eaad8b24dd05567900250b.camel@groupe-cyllene.com>
2025-07-14 11:02     ` Fabian Grünbichler
2025-07-15  4:15       ` DERUMIER, Alexandre via pve-devel
     [not found]   ` <719c71b1148846e0cdd7e5c149d8b20146b4d043.camel@groupe-cyllene.com>
2025-07-14 11:04     ` Fabian Grünbichler
2025-07-14 11:11       ` Thomas Lamprecht
2025-07-14 11:15         ` Fabian Grünbichler
2025-07-14 11:27           ` Thomas Lamprecht
2025-07-14 11:46             ` Fabian Grünbichler
2025-07-14 15:12   ` Tiago Sousa via pve-devel
     [not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
2025-07-16 15:15 ` Thomas Lamprecht
2025-07-17  8:01   ` DERUMIER, Alexandre via pve-devel
2025-07-17 14:49     ` Tiago Sousa via pve-devel
     [not found]     ` <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>
2025-07-17 15:08       ` DERUMIER, Alexandre via pve-devel
     [not found]       ` <47b76678f969ba97926c85af4bf8e50c9dda161d.camel@groupe-cyllene.com>
2025-07-17 15:42         ` Tiago Sousa via pve-devel
     [not found]         ` <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>
2025-07-17 15:53           ` DERUMIER, Alexandre via pve-devel
2025-07-17 16:05           ` DERUMIER, Alexandre via pve-devel

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH