* [pve-devel] [PATCH qemu-server 1/3] qemu_img convert : add external snapshot support
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:44 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 6675 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH qemu-server 1/3] qemu_img convert : add external snapshot support
Date: Fri, 4 Jul 2025 08:44:55 +0200
Message-ID: <20250704064507.511884-2-alexandre.derumier@groupe-cyllene.com>
For external snapshots, simply use the snapshot volname as the source, and
don't pass the internal snapshot option on the command line.
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/QemuServer/QemuImage.pm | 6 ++-
src/test/run_qemu_img_convert_tests.pl | 59 ++++++++++++++++++++++++++
2 files changed, 64 insertions(+), 1 deletion(-)
diff --git a/src/PVE/QemuServer/QemuImage.pm b/src/PVE/QemuServer/QemuImage.pm
index 38f7d52b..2502a32d 100644
--- a/src/PVE/QemuServer/QemuImage.pm
+++ b/src/PVE/QemuServer/QemuImage.pm
@@ -71,11 +71,15 @@ sub convert {
my $dst_format = checked_volume_format($storecfg, $dst_volid);
my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
my $dst_is_iscsi = ($dst_path =~ m|^iscsi://|);
+ my $support_qemu_snapshots = PVE::Storage::volume_support_qemu_snapshot($storecfg, $src_volid);
my $cmd = [];
push @$cmd, '/usr/bin/qemu-img', 'convert', '-p', '-n';
push @$cmd, '-l', "snapshot.name=$snapname"
- if $snapname && $src_format && $src_format eq "qcow2";
+ if $snapname
+ && $src_format eq 'qcow2'
+ && $support_qemu_snapshots
+ && $support_qemu_snapshots eq 'internal';
push @$cmd, '-t', 'none' if $dst_scfg->{type} eq 'zfspool';
push @$cmd, '-T', $cachemode if defined($cachemode);
push @$cmd, '-r', "${bwlimit}K" if defined($bwlimit);
diff --git a/src/test/run_qemu_img_convert_tests.pl b/src/test/run_qemu_img_convert_tests.pl
index b5a457c3..da660930 100755
--- a/src/test/run_qemu_img_convert_tests.pl
+++ b/src/test/run_qemu_img_convert_tests.pl
@@ -21,6 +21,15 @@ my $storage_config = {
type => "dir",
shared => 0,
},
+ localsnapext => {
+ content => {
+ images => 1,
+ },
+ path => "/var/lib/vzsnapext",
+ type => "dir",
+ snapext => 1,
+ shared => 0,
+ },
btrfs => {
content => {
images => 1,
@@ -61,6 +70,13 @@ my $storage_config = {
images => 1,
},
},
+ "lvm-store" => {
+ vgname => "pve",
+ type => "lvm",
+ content => {
+ images => 1,
+ },
+ },
"zfs-over-iscsi" => {
type => "zfs",
iscsiprovider => "LIO",
@@ -469,6 +485,49 @@ my $tests = [
"/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
],
},
+ {
+ name => "qcow2_external_snapshot",
+ parameters => [
+ "localsnapext:$vmid/vm-$vmid-disk-0.qcow2",
+ "local:$vmid/vm-$vmid-disk-0.raw",
+ 1024 * 10,
+ { snapname => 'foo' },
+ ],
+ expected => [
+ "/usr/bin/qemu-img",
+ "convert",
+ "-p",
+ "-n",
+ "-f",
+ "qcow2",
+ "-O",
+ "raw",
+ "/var/lib/vzsnapext/images/$vmid/snap-foo-vm-$vmid-disk-0.qcow2",
+ "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
+ ],
+ },
+ {
+ name => "lvmqcow2_external_snapshot",
+ parameters => [
+ "lvm-store:vm-$vmid-disk-0.qcow2",
+ "local:$vmid/vm-$vmid-disk-0.raw",
+ 1024 * 10,
+ { snapname => 'foo' },
+ ],
+ expected => [
+ "/usr/bin/qemu-img",
+ "convert",
+ "-p",
+ "-n",
+ "-f",
+ "qcow2",
+ "-O",
+ "raw",
+ "/dev/pve/snap-foo-vm-$vmid-disk-0.qcow2",
+ "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
+ ],
+ },
+
];
my $command;
--
2.39.5
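The logic of the hunk above can be sketched outside Perl. Below is a hypothetical Python model (the function name and the `snapshot_mode` parameter are illustrative, not part of qemu-server) of how the convert command is assembled depending on whether the storage reports 'internal' or 'external' qemu snapshot support:

```python
def build_convert_cmd(src_path, dst_path, src_format, dst_format,
                      snapname=None, snapshot_mode=None):
    """snapshot_mode is 'internal', 'external' or None (no qemu snapshots)."""
    cmd = ["/usr/bin/qemu-img", "convert", "-p", "-n"]
    # Only internal qcow2 snapshots are addressed via '-l snapshot.name=...';
    # for external snapshots the snapshot file itself is passed as src_path.
    if snapname and src_format == "qcow2" and snapshot_mode == "internal":
        cmd += ["-l", f"snapshot.name={snapname}"]
    cmd += ["-f", src_format, "-O", dst_format, src_path, dst_path]
    return cmd

# Internal snapshot: base image as source, snapshot selected by name.
internal = build_convert_cmd(
    "/var/lib/vz/images/100/vm-100-disk-0.qcow2",
    "/var/lib/vz/images/100/vm-100-disk-0.raw",
    "qcow2", "raw", snapname="foo", snapshot_mode="internal")

# External snapshot: the snapshot volume itself is the source, no '-l'.
external = build_convert_cmd(
    "/var/lib/vzsnapext/images/100/snap-foo-vm-100-disk-0.qcow2",
    "/var/lib/vz/images/100/vm-100-disk-0.raw",
    "qcow2", "raw", snapname="foo", snapshot_mode="external")
```

This mirrors the two new test cases: only the internal path adds `-l snapshot.name=foo`, while the external path swaps the source file.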
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH pve-storage 01/10] tests: add lvmplugin test
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:44 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 16680 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 01/10] tests: add lvmplugin test
Date: Fri, 4 Jul 2025 08:44:56 +0200
Message-ID: <20250704064507.511884-3-alexandre.derumier@groupe-cyllene.com>
use the same template as the zfspoolplugin tests
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/test/Makefile | 5 +-
src/test/run_test_lvmplugin.pl | 577 +++++++++++++++++++++++++++++++++
2 files changed, 581 insertions(+), 1 deletion(-)
create mode 100755 src/test/run_test_lvmplugin.pl
diff --git a/src/test/Makefile b/src/test/Makefile
index 12991da..9186303 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -1,10 +1,13 @@
all: test
-test: test_zfspoolplugin test_disklist test_bwlimit test_plugin test_ovf
+test: test_zfspoolplugin test_lvmplugin test_disklist test_bwlimit test_plugin test_ovf
test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
+test_lvmplugin: run_test_lvmplugin.pl
+ ./run_test_lvmplugin.pl
+
test_disklist: run_disk_tests.pl
./run_disk_tests.pl
diff --git a/src/test/run_test_lvmplugin.pl b/src/test/run_test_lvmplugin.pl
new file mode 100755
index 0000000..e87a3de
--- /dev/null
+++ b/src/test/run_test_lvmplugin.pl
@@ -0,0 +1,577 @@
+#!/usr/bin/perl
+
+use lib '..';
+
+use strict;
+use warnings;
+
+use Data::Dumper qw(Dumper);
+use PVE::Storage;
+use PVE::Cluster;
+use PVE::Tools qw(run_command);
+use Cwd;
+$Data::Dumper::Sortkeys = 1;
+
+my $verbose = undef;
+
+my $storagename = "lvmregression";
+my $vgname = 'regressiontest';
+
+#volsize in GB
+my $volsize = 1;
+my $vmdisk = "vm-102-disk-1";
+
+my $tests = {};
+
+my $cfg = undef;
+my $count = 0;
+my $testnum = 12;
+my $end_test = $testnum;
+my $start_test = 1;
+
+if (@ARGV == 2) {
+ $end_test = $ARGV[1];
+ $start_test = $ARGV[0];
+} elsif (@ARGV == 1) {
+ $start_test = $ARGV[0];
+ $end_test = $ARGV[0];
+}
+
+my $test12 = sub {
+
+ print "\nrun test12 \"path\"\n";
+
+ my @res;
+ my $fail = 0;
+ eval {
+ @res = PVE::Storage::path($cfg, "$storagename:$vmdisk");
+ if ($res[0] ne "\/dev\/regressiontest\/$vmdisk") {
+ $count++;
+ $fail = 1;
+ warn
+ "Test 12 a: path is not correct: expected \'\/dev\/regressiontest\/$vmdisk\' got \'$res[0]\'";
+ }
+ if ($res[1] ne "102") {
+ if (!$fail) {
+ $count++;
+ $fail = 1;
+ }
+ warn "Test 12 a: owner is not correct: expected \'102\' got \'$res[1]\'";
+ }
+ if ($res[2] ne "images") {
+ if (!$fail) {
+ $count++;
+ $fail = 1;
+ }
+ warn "Test 12 a: vtype is not correct: expected \'images\' got \'$res[2]\'";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test 12 a: $@";
+ }
+
+};
+$tests->{12} = $test12;
+
+my $test11 = sub {
+
+ print "\nrun test11 \"deactivate_storage\"\n";
+
+ eval {
+ PVE::Storage::activate_storage($cfg, $storagename);
+ PVE::Storage::deactivate_storage($cfg, $storagename);
+ };
+ if ($@) {
+ $count++;
+ warn "Test 11 a: $@";
+ }
+};
+$tests->{11} = $test11;
+
+my $test10 = sub {
+
+ print "\nrun test10 \"activate_storage\"\n";
+
+ eval { PVE::Storage::activate_storage($cfg, $storagename); };
+ if ($@) {
+ $count++;
+ warn "Test 10 a: $@";
+ }
+};
+$tests->{10} = $test10;
+
+my $test9 = sub {
+
+ print "\nrun test9 \"template_list and vdisk_list\"\n";
+
+ my $hash = Dumper {};
+
+ my $res = Dumper PVE::Storage::template_list($cfg, $storagename, "vztmpl");
+ if ($hash ne $res) {
+ $count++;
+ warn "Test 9 a failed\n";
+ }
+ $res = undef;
+
+ $res = Dumper PVE::Storage::template_list($cfg, $storagename, "iso");
+ if ($hash ne $res) {
+ $count++;
+ warn "Test 9 b failed\n";
+ }
+ $res = undef;
+
+ $res = Dumper PVE::Storage::template_list($cfg, $storagename, "backup");
+ if ($hash ne $res) {
+ $count++;
+ warn "Test 9 c failed\n";
+ }
+
+};
+$tests->{9} = $test9;
+
+my $test8 = sub {
+
+ print "\nrun test8 \"vdisk_free\"\n";
+
+ eval {
+ PVE::Storage::vdisk_free($cfg, "$storagename:$vmdisk");
+
+ eval {
+ run_command("lvs $vgname/$vmdisk", outfunc => sub { }, errfunc => sub { });
+ };
+ if (!$@) {
+ $count++;
+ warn "Test8 a: vdisk still exists\n";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test8 a: $@";
+ }
+
+};
+$tests->{8} = $test8;
+
+my $test7 = sub {
+
+ print "\nrun test7 \"vdisk_alloc\"\n";
+
+ eval {
+ my $tmp_volid =
+ PVE::Storage::vdisk_alloc($cfg, $storagename, "112", "raw", undef, 1024 * 1024);
+
+ if ($tmp_volid ne "$storagename:vm-112-disk-0") {
+ die "volname:$tmp_volid doesn't match\n";
+ }
+ eval {
+ run_command(
+ "lvs --noheadings -o lv_size $vgname/vm-112-disk-0",
+ outfunc => sub {
+ my $tmp = shift;
+ if ($tmp !~ m/1\.00g/) {
+ die "size doesn't match\n";
+ }
+ },
+ );
+ };
+ if ($@) {
+ $count++;
+ warn "Test7 a: $@";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test7 a: $@";
+ }
+
+ eval {
+ my $tmp_volid =
+ PVE::Storage::vdisk_alloc($cfg, $storagename, "112", "raw", undef, 2048 * 1024);
+
+ if ($tmp_volid ne "$storagename:vm-112-disk-1") {
+ die "volname:$tmp_volid doesn't match\n";
+ }
+ eval {
+ run_command(
+ "lvs --noheadings -o lv_size $vgname/vm-112-disk-1",
+ outfunc => sub {
+ my $tmp = shift;
+ if ($tmp !~ m/2\.00g/) {
+ die "size doesn't match\n";
+ }
+ },
+ );
+ };
+ if ($@) {
+ $count++;
+ warn "Test7 b: $@";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test7 b: $@";
+ }
+
+};
+$tests->{7} = $test7;
+
+my $test6 = sub {
+
+ print "\nrun test6 \"parse_volume_id\"\n";
+
+ eval {
+ my ($store, $disk) = PVE::Storage::parse_volume_id("$storagename:$vmdisk");
+
+ if ($store ne $storagename || $disk ne $vmdisk) {
+ $count++;
+ warn "Test6 a: parsing wrong";
+ }
+
+ };
+ if ($@) {
+ $count++;
+ warn "Test6 a: $@";
+ }
+
+};
+$tests->{6} = $test6;
+
+my $test5 = sub {
+
+ print "\nrun test5 \"parse_volname\"\n";
+
+ eval {
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ PVE::Storage::parse_volname($cfg, "$storagename:$vmdisk");
+
+ if (
+ $vtype ne 'images'
+ || $vmid ne '102'
+ || $name ne $vmdisk
+ || defined($basename)
+ || defined($basevmid)
+ || $isBase
+ || $format ne 'raw'
+ ) {
+ $count++;
+ warn "Test5 a: parsing wrong";
+ }
+
+ };
+ if ($@) {
+ $count++;
+ warn "Test5 a: $@";
+ }
+
+};
+$tests->{5} = $test5;
+
+my $test4 = sub {
+
+ print "\nrun test4 \"volume_rollback_is_possible\"\n";
+
+ eval {
+ my $blockers = [];
+ my $res = undef;
+ eval {
+ $res = PVE::Storage::volume_rollback_is_possible(
+ $cfg, "$storagename:$vmdisk", 'snap1', $blockers,
+ );
+ };
+ if (!$@) {
+ $count++;
+ warn "Test4 a: Rollback shouldn't be possible";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test4 a: $@";
+ }
+
+};
+$tests->{4} = $test4;
+
+my $test3 = sub {
+
+ print "\nrun test3 \"volume_has_feature\"\n";
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'snapshot', "$storagename:$vmdisk", undef, 0,
+ )) {
+ $count++;
+ warn "Test3 a failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 a: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature($cfg, 'clone', "$storagename:$vmdisk", undef, 0)) {
+ $count++;
+ warn "Test3 g failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 g: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'template', "$storagename:$vmdisk", undef, 0,
+ )) {
+ $count++;
+ warn "Test3 l failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 l: $@";
+ }
+
+ eval {
+ if (!PVE::Storage::volume_has_feature($cfg, 'copy', "$storagename:$vmdisk", undef, 0)) {
+ $count++;
+ warn "Test3 r failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 r: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'sparseinit', "$storagename:$vmdisk", undef, 0,
+ )) {
+ $count++;
+ warn "Test3 x failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 x: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'snapshot', "$storagename:$vmdisk", 'test', 0,
+ )) {
+ $count++;
+ warn "Test3 a1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 a1: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature($cfg, 'clone', "$storagename:$vmdisk", 'test', 0)) {
+ $count++;
+ warn "Test3 g1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 g1: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'template', "$storagename:$vmdisk", 'test', 0,
+ )) {
+ $count++;
+ warn "Test3 l1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 l1: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature($cfg, 'copy', "$storagename:$vmdisk", 'test', 0)) {
+ $count++;
+ warn "Test3 r1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 r1: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'sparseinit', "$storagename:$vmdisk", 'test', 0,
+ )) {
+ $count++;
+ warn "Test3 x1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 x1: $@";
+ }
+
+};
+$tests->{3} = $test3;
+
+my $test2 = sub {
+
+ print "\nrun test2 \"volume_resize\"\n";
+ my $newsize = ($volsize + 1) * 1024 * 1024 * 1024;
+
+ eval {
+ eval { PVE::Storage::volume_resize($cfg, "$storagename:$vmdisk", $newsize, 0); };
+ if ($@) {
+ $count++;
+ warn "Test2 a failed";
+ }
+ if ($newsize != PVE::Storage::volume_size_info($cfg, "$storagename:$vmdisk")) {
+ $count++;
+ warn "Test2 a failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test2 a: $@";
+ }
+
+};
+$tests->{2} = $test2;
+
+my $test1 = sub {
+
+ print "\nrun test1 \"volume_size_info\"\n";
+ my $size = ($volsize * 1024 * 1024 * 1024);
+
+ eval {
+ if ($size != PVE::Storage::volume_size_info($cfg, "$storagename:$vmdisk")) {
+ $count++;
+ warn "Test1 a failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test1 a : $@";
+ }
+
+};
+$tests->{1} = $test1;
+
+sub setup_lvm_volumes {
+ eval { run_command("vgcreate $vgname /dev/loop1"); };
+
+ print "create lvm volume $vmdisk\n" if $verbose;
+ run_command("lvcreate -L${volsize}G -n $vmdisk $vgname");
+
+ my $vollist = [
+ "$storagename:$vmdisk",
+ ];
+
+ PVE::Storage::activate_volumes($cfg, $vollist);
+}
+
+sub cleanup_lvm_volumes {
+
+ print "destroy $vgname\n" if $verbose;
+ eval { run_command("vgremove $vgname -y"); };
+ if ($@) {
+ print "cleanup failed: $@\nretrying once\n" if $verbose;
+ eval { run_command("vgremove $vgname -y"); };
+ if ($@) {
+ clean_up_lvm();
+ setup_lvm();
+ }
+ }
+}
+
+sub setup_lvm {
+
+ unlink 'lvm.img';
+ eval { run_command("dd if=/dev/zero of=lvm.img bs=1M count=8000"); };
+ if ($@) {
+ clean_up_lvm();
+ }
+ my $pwd = cwd();
+ eval { run_command("losetup /dev/loop1 $pwd\/lvm.img"); };
+ if ($@) {
+ clean_up_lvm();
+ }
+ eval { run_command("pvcreate /dev/loop1"); };
+ if ($@) {
+ clean_up_lvm();
+ }
+}
+
+sub clean_up_lvm {
+
+ eval { run_command("pvremove /dev/loop1 -ff -y"); };
+ if ($@) {
+ warn $@;
+ }
+ eval { run_command("losetup -d /dev/loop1"); };
+ if ($@) {
+ warn $@;
+ }
+
+ unlink 'lvm.img';
+}
+
+sub volume_is_base {
+ my ($cfg, $volid) = @_;
+
+ my (undef, undef, undef, undef, undef, $isBase, undef) =
+ PVE::Storage::parse_volname($cfg, $volid);
+
+ return $isBase;
+}
+
+if ($> != 0) { #EUID
+ warn "not root, skipping lvm tests\n";
+ exit 0;
+}
+
+my $time = time;
+print "Start tests for LVMPlugin\n";
+
+$cfg = {
+ 'ids' => {
+ $storagename => {
+ 'content' => {
+ 'images' => 1,
+ 'rootdir' => 1,
+ },
+ 'vgname' => $vgname,
+ 'type' => 'lvm',
+ },
+ },
+ 'order' => { 'lvmregression' => 1 },
+};
+
+setup_lvm();
+for (my $i = $start_test; $i <= $end_test; $i++) {
+ setup_lvm_volumes();
+
+ eval { $tests->{$i}(); };
+ if (my $err = $@) {
+ warn $err;
+ $count++;
+ }
+ cleanup_lvm_volumes();
+
+}
+clean_up_lvm();
+
+$time = time - $time;
+
+print "Stop tests for LVMPlugin\n";
+print "$count tests failed\n";
+print "Time: ${time}s\n";
+
+exit -1 if $count > 0;
--
2.39.5
* [pve-devel] [PATCH qemu-server 2/3] blockdev: add backing_chain support
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:44 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 14809 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH qemu-server 2/3] blockdev: add backing_chain support
Date: Fri, 4 Jul 2025 08:44:57 +0200
Message-ID: <20250704064507.511884-4-alexandre.derumier@groupe-cyllene.com>
We need to define node-names for all backing chain images,
to be able to live-rename them with blockdev-reopen.
For linked clones, we don't need to define the base image(s) chain;
they are auto-added with a #block node-name.
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/QemuServer/Blockdev.pm | 49 +++++++++++++++++++
src/test/cfg2cmd/simple-backingchain.conf | 25 ++++++++++
src/test/cfg2cmd/simple-backingchain.conf.cmd | 33 +++++++++++++
src/test/run_config2command_tests.pl | 47 ++++++++++++++++++
4 files changed, 154 insertions(+)
create mode 100644 src/test/cfg2cmd/simple-backingchain.conf
create mode 100644 src/test/cfg2cmd/simple-backingchain.conf.cmd
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 5f1fdae3..2a0513fb 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -360,6 +360,46 @@ my sub generate_format_blockdev {
return $blockdev;
}
+my sub generate_backing_blockdev;
+
+sub generate_backing_blockdev {
+ my ($storecfg, $snapshots, $deviceid, $drive, $machine_version, $options) = @_;
+
+ my $snap_id = $options->{'snapshot-name'};
+ my $snapshot = $snapshots->{$snap_id};
+ my $parentid = $snapshot->{parent};
+
+ my $volid = $drive->{file};
+
+ my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive, $machine_version, $options);
+ $snap_file_blockdev->{filename} = $snapshot->{file};
+
+ my $snap_fmt_blockdev =
+ generate_format_blockdev($storecfg, $drive, $snap_file_blockdev, $options);
+
+ if ($parentid) {
+ my $options = { 'snapshot-name' => $parentid };
+ $snap_fmt_blockdev->{backing} = generate_backing_blockdev(
+ $storecfg, $snapshots, $deviceid, $drive, $machine_version, $options,
+ );
+ }
+ return $snap_fmt_blockdev;
+}
+
+my sub generate_backing_chain_blockdev {
+ my ($storecfg, $deviceid, $drive, $machine_version) = @_;
+
+ my $volid = $drive->{file};
+
+ my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
+ my $parentid = $snapshots->{'current'}->{parent};
+ return undef if !$parentid;
+ my $options = { 'snapshot-name' => $parentid };
+ return generate_backing_blockdev(
+ $storecfg, $snapshots, $deviceid, $drive, $machine_version, $options,
+ );
+}
+
sub generate_drive_blockdev {
my ($storecfg, $drive, $machine_version, $options) = @_;
@@ -371,6 +411,15 @@ sub generate_drive_blockdev {
my $child = generate_file_blockdev($storecfg, $drive, $machine_version, $options);
if (!is_nbd($drive)) {
$child = generate_format_blockdev($storecfg, $drive, $child, $options);
+
+ my $support_qemu_snapshots =
+ PVE::Storage::volume_support_qemu_snapshot($storecfg, $drive->{file});
+ if ($support_qemu_snapshots && $support_qemu_snapshots eq 'external') {
+ my $backing_chain = generate_backing_chain_blockdev(
+ $storecfg, "drive-$drive_id", $drive, $machine_version,
+ );
+ $child->{backing} = $backing_chain if $backing_chain;
+ }
}
if ($options->{'zero-initialized'}) {
diff --git a/src/test/cfg2cmd/simple-backingchain.conf b/src/test/cfg2cmd/simple-backingchain.conf
new file mode 100644
index 00000000..2c0b0f2c
--- /dev/null
+++ b/src/test/cfg2cmd/simple-backingchain.conf
@@ -0,0 +1,25 @@
+# TEST: Simple test for external snapshot backing chain
+name: simple
+parent: snap3
+scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
+scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
+
+[snap1]
+name: simple
+scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
+scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
+snaptime: 1748933042
+
+[snap2]
+parent: snap1
+name: simple
+scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
+scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
+snaptime: 1748933043
+
+[snap3]
+parent: snap2
+name: simple
+scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
+scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
+snaptime: 1748933044
diff --git a/src/test/cfg2cmd/simple-backingchain.conf.cmd b/src/test/cfg2cmd/simple-backingchain.conf.cmd
new file mode 100644
index 00000000..40c957f5
--- /dev/null
+++ b/src/test/cfg2cmd/simple-backingchain.conf.cmd
@@ -0,0 +1,33 @@
+/usr/bin/kvm \
+ -id 8006 \
+ -name 'simple,debug-threads=on' \
+ -no-shutdown \
+ -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server=on,wait=off' \
+ -mon 'chardev=qmp,mode=control' \
+ -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect-ms=5000' \
+ -mon 'chardev=qmp-event,mode=control' \
+ -pidfile /var/run/qemu-server/8006.pid \
+ -daemonize \
+ -smp '1,sockets=1,cores=1,maxcpus=1' \
+ -nodefaults \
+ -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
+ -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
+ -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+ -m 512 \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+ -global 'PIIX4_PM.disable_s3=1' \
+ -global 'PIIX4_PM.disable_s4=1' \
+ -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+ -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+ -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+ -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+ -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
+ -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
+ -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
+ -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
+ -blockdev '{"driver":"throttle","file":{"backing":{"backing":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vzsnapext/images/8006/snap1-vm-8006-disk-0.qcow2","node-name":"ea91a385a49a008a4735c0aec5c6749","read-only":false},"node-name":"fa91a385a49a008a4735c0aec5c6749","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vzsnapext/images/8006/snap2-vm-8006-disk-0.qcow2","node-name":"ec0289317073959d450248d8cd7a480","read-only":false},"node-name":"fc0289317073959d450248d8cd7a480","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vzsnapext/images/8006/vm-8006-disk-0.qcow2","node-name":"e74f4959037afb46eddc7313c43dfdd","read-only":false},"node-name":"f74f4959037afb46eddc7313c43dfdd","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
+ -device 'scsi-hd,bus=scsihw0.0,scsi-id=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
+ -blockdev '{"driver":"throttle","file":{"backing":{"backing":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/veegee/snap1-vm-8006-disk-0.qcow2","node-name":"e25f58d3e6e11f2065ad41253988915","read-only":false},"node-name":"f25f58d3e6e11f2065ad41253988915","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/veegee/snap2-vm-8006-disk-0.qcow2","node-name":"e9415bb5e484c1e25d25063b01686fe","read-only":false},"node-name":"f9415bb5e484c1e25d25063b01686fe","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0.qcow2","node-name":"e87358a470ca311f94d5cc61d1eb428","read-only":false},"node-name":"f87358a470ca311f94d5cc61d1eb428","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
+ -device 'scsi-hd,bus=scsihw0.0,scsi-id=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
+ -machine 'type=pc+pve0'
diff --git a/src/test/run_config2command_tests.pl b/src/test/run_config2command_tests.pl
index 1262a0df..61302f6b 100755
--- a/src/test/run_config2command_tests.pl
+++ b/src/test/run_config2command_tests.pl
@@ -21,6 +21,7 @@ use PVE::QemuServer::Helpers;
use PVE::QemuServer::Monitor;
use PVE::QemuServer::QMPHelpers;
use PVE::QemuServer::CPUConfig;
+use PVE::Storage;
my $base_env = {
storage_config => {
@@ -34,6 +35,15 @@ my $base_env = {
type => 'dir',
shared => 0,
},
+ localsnapext => {
+ content => {
+ images => 1,
+ },
+ path => '/var/lib/vzsnapext',
+ type => 'dir',
+ shared => 0,
+ snapext => 1,
+ },
noimages => {
content => {
iso => 1,
@@ -264,6 +274,43 @@ $storage_module->mock(
deactivate_volumes => sub {
return;
},
+ volume_snapshot_info => sub {
+ my ($cfg, $volid) = @_;
+
+ my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+
+ my $snapshots = {};
+ if ($storeid eq 'localsnapext') {
+ $snapshots = {
+ current => {
+ file => 'var/lib/vzsnapext/images/8006/vm-8006-disk-0.qcow2',
+ parent => 'snap2',
+ },
+ snap2 => {
+ file => '/var/lib/vzsnapext/images/8006/snap2-vm-8006-disk-0.qcow2',
+ parent => 'snap1',
+ },
+ snap1 => {
+ file => '/var/lib/vzsnapext/images/8006/snap1-vm-8006-disk-0.qcow2',
+ },
+ };
+ } elsif ($storeid eq 'lvm-store') {
+ $snapshots = {
+ current => {
+ file => '/dev/veegee/vm-8006-disk-0.qcow2',
+ parent => 'snap2',
+ },
+ snap2 => {
+ file => '/dev/veegee/snap2-vm-8006-disk-0.qcow2',
+ parent => 'snap1',
+ },
+ snap1 => {
+ file => '/dev/veegee/snap1-vm-8006-disk-0.qcow2',
+ },
+ };
+ }
+ return $snapshots;
+ },
);
my $file_stat_module = Test::MockModule->new("File::stat");
--
2.39.5
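The recursion in generate_backing_blockdev / generate_backing_chain_blockdev can be sketched as follows. This is a hypothetical Python model (function names, and the reduction of the blockdev options to just `driver`/`file`, are illustrative), driven by the same `volume_snapshot_info`-style hash as the mocked test data above:

```python
def backing_blockdev(snapshots, snap_id):
    """Build a nested qcow2 blockdev node for snap_id, recursing on parents."""
    snap = snapshots[snap_id]
    node = {
        "driver": "qcow2",
        "file": {"driver": "file", "filename": snap["file"]},
    }
    parent = snap.get("parent")
    if parent:
        # Each older snapshot becomes the 'backing' of the newer one.
        node["backing"] = backing_blockdev(snapshots, parent)
    return node

def backing_chain(snapshots):
    """Return the backing chain for the 'current' image, or None if flat."""
    parent = snapshots["current"].get("parent")
    return backing_blockdev(snapshots, parent) if parent else None

snapshots = {
    "current": {"file": "/var/lib/vzsnapext/images/8006/vm-8006-disk-0.qcow2",
                "parent": "snap2"},
    "snap2": {"file": "/var/lib/vzsnapext/images/8006/snap2-vm-8006-disk-0.qcow2",
              "parent": "snap1"},
    "snap1": {"file": "/var/lib/vzsnapext/images/8006/snap1-vm-8006-disk-0.qcow2"},
}
chain = backing_chain(snapshots)
```

The resulting nesting (current -> snap2 -> snap1) matches the `"backing":{"backing":...}` structure visible in the expected `-blockdev` JSON of the new cfg2cmd test.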
* [pve-devel] [PATCH pve-storage 02/10] common: add qemu_img_create and preallocation_cmd_option
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:44 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 6961 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 02/10] common: add qemu_img_create and preallocation_cmd_option
Date: Fri, 4 Jul 2025 08:44:58 +0200
Message-ID: <20250704064507.511884-5-alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Common.pm | 52 +++++++++++++++++++++++++++++++++++++++
src/PVE/Storage/Plugin.pm | 47 +----------------------------------
2 files changed, 53 insertions(+), 46 deletions(-)
diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index 89a70f4..29f2e52 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -5,12 +5,26 @@ use warnings;
use PVE::JSONSchema;
use PVE::Syscall;
+use PVE::Tools qw(run_command);
use constant {
FALLOC_FL_KEEP_SIZE => 0x01, # see linux/falloc.h
FALLOC_FL_PUNCH_HOLE => 0x02, # see linux/falloc.h
};
+our $QCOW2_PREALLOCATION = {
+ off => 1,
+ metadata => 1,
+ falloc => 1,
+ full => 1,
+};
+
+our $RAW_PREALLOCATION = {
+ off => 1,
+ falloc => 1,
+ full => 1,
+};
+
=pod
=head1 NAME
@@ -110,4 +124,42 @@ sub deallocate : prototype($$$) {
}
}
+sub preallocation_cmd_option {
+ my ($scfg, $fmt) = @_;
+
+ my $prealloc = $scfg->{preallocation};
+
+ if ($fmt eq 'qcow2') {
+ $prealloc = $prealloc // 'metadata';
+
+ die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
+ if !$QCOW2_PREALLOCATION->{$prealloc};
+
+ return "preallocation=$prealloc";
+ } elsif ($fmt eq 'raw') {
+ $prealloc = $prealloc // 'off';
+ $prealloc = 'off' if $prealloc eq 'metadata';
+
+ die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
+ if !$RAW_PREALLOCATION->{$prealloc};
+
+ return "preallocation=$prealloc";
+ }
+
+ return;
+}
+
+sub qemu_img_create {
+ my ($scfg, $fmt, $size, $path) = @_;
+
+ my $cmd = ['/usr/bin/qemu-img', 'create'];
+
+ my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
+ push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
+
+ push @$cmd, '-f', $fmt, $path, "${size}K";
+
+ run_command($cmd, errmsg => "unable to create image");
+}
+
1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index c2f376b..80bb077 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -38,19 +38,6 @@ our @SHARED_STORAGE = (
'iscsi', 'nfs', 'cifs', 'rbd', 'cephfs', 'iscsidirect', 'zfs', 'drbd', 'pbs',
);
-our $QCOW2_PREALLOCATION = {
- off => 1,
- metadata => 1,
- falloc => 1,
- full => 1,
-};
-
-our $RAW_PREALLOCATION = {
- off => 1,
- falloc => 1,
- full => 1,
-};
-
our $MAX_VOLUMES_PER_GUEST = 1024;
cfs_register_file(
@@ -606,31 +593,6 @@ sub parse_config {
return $cfg;
}
-sub preallocation_cmd_option {
- my ($scfg, $fmt) = @_;
-
- my $prealloc = $scfg->{preallocation};
-
- if ($fmt eq 'qcow2') {
- $prealloc = $prealloc // 'metadata';
-
- die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
- if !$QCOW2_PREALLOCATION->{$prealloc};
-
- return "preallocation=$prealloc";
- } elsif ($fmt eq 'raw') {
- $prealloc = $prealloc // 'off';
- $prealloc = 'off' if $prealloc eq 'metadata';
-
- die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
- if !$RAW_PREALLOCATION->{$prealloc};
-
- return "preallocation=$prealloc";
- }
-
- return;
-}
-
# Storage implementation
# called during addition of storage (before the new storage config got written)
@@ -969,14 +931,7 @@ sub alloc_image {
umask $old_umask;
die $err if $err;
} else {
- my $cmd = ['/usr/bin/qemu-img', 'create'];
-
- my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
- push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
-
- push @$cmd, '-f', $fmt, $path, "${size}K";
-
- eval { run_command($cmd, errmsg => "unable to create image"); };
+ eval { PVE::Storage::Common::qemu_img_create($scfg, $fmt, $size, $path) };
if ($@) {
unlink $path;
rmdir $imagedir;
--
2.39.5
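The preallocation mapping the moved helper implements can be sketched in shell (the function name here is made up; the defaults and the qcow2-only 'metadata' mode follow the Perl helper above):

```shell
# Sketch of preallocation_cmd_option's mapping (illustrative, not the real
# helper): qcow2 defaults to 'metadata', raw defaults to 'off' and downgrades
# 'metadata' to 'off' since a raw image has no metadata to preallocate.
prealloc_opt() {
    fmt="$1"; prealloc="$2"
    case "$fmt" in
        qcow2) prealloc="${prealloc:-metadata}" ;;
        raw)
            prealloc="${prealloc:-off}"
            if [ "$prealloc" = metadata ]; then prealloc=off; fi
            ;;
        *) return 0 ;;   # other formats: no -o preallocation option at all
    esac
    echo "preallocation=$prealloc"
}

prealloc_opt qcow2 ''      # prints preallocation=metadata
prealloc_opt raw metadata  # prints preallocation=off
```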
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 46+ messages in thread
* [pve-devel] [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
` (3 preceding siblings ...)
2025-07-04 6:44 ` [pve-devel] [PATCH pve-storage 02/10] common: add qemu_img_create an preallocation_cmd_option Alexandre Derumier via pve-devel
@ 2025-07-04 6:44 ` Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 6:45 ` [pve-devel] [PATCH qemu-server 3/3] qcow2: add external snapshot support Alexandre Derumier via pve-devel
` (7 subsequent siblings)
12 siblings, 1 reply; 46+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:44 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 6464 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support
Date: Fri, 4 Jul 2025 08:44:59 +0200
Message-ID: <20250704064507.511884-6-alexandre.derumier@groupe-cyllene.com>
and use it for the plugin's linked clone
This also enables extended_l2=on, as it's mandatory for backing-file
preallocation.
Preallocation was missing previously, so this should increase performance
for linked clones (around 5x in randwrite 4k).
cluster_size is set to 128k, as it reduces qcow2 overhead (less disk
space, but also less memory needed to cache metadata).
l2_extended is not enabled yet on the base image, but it could also help
reduce overhead without impacting performance.
bench on 100G qcow2 file:
fio --filename=/dev/sdb --direct=1 --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --name=test
fio --filename=/dev/sdb --direct=1 --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --name=test
base image:
randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 20215
randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 22219
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20217
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 21742
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21599
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 22037
clone image with backing file:
randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 3912
randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 21476
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20563
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 22265
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 18016
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21611
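For reference, a linked clone created through the updated helper corresponds to a qemu-img invocation along these lines (paths and VMIDs are made up):

```shell
# Illustrative command line as assembled by qemu_img_create for a linked
# clone: backing file + extended_l2/cluster_size options, no explicit size.
backing="../9000/base-9000-disk-0.qcow2"
opts="extended_l2=on,cluster_size=128k,preallocation=metadata"
cmd="qemu-img create -b $backing -F qcow2 -o $opts -f qcow2 vm-100-disk-0.qcow2"
echo "$cmd"
```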
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Common.pm | 17 +++++++++++++----
src/PVE/Storage/Plugin.pm | 17 +++--------------
2 files changed, 16 insertions(+), 18 deletions(-)
diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index 29f2e52..78e5320 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -150,14 +150,23 @@ sub preallocation_cmd_option {
}
sub qemu_img_create {
- my ($scfg, $fmt, $size, $path) = @_;
+ my ($scfg, $fmt, $size, $path, $backing_path) = @_;
+
+ die "size can't be specified if backing file is used\n" if $size && $backing_path;
my $cmd = ['/usr/bin/qemu-img', 'create'];
- my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
- push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
+ my $options = [];
+
+ if ($backing_path) {
+ push @$cmd, '-b', $backing_path, '-F', 'qcow2';
+ push @$options, 'extended_l2=on', 'cluster_size=128k';
+ }
- push @$cmd, '-f', $fmt, $path, "${size}K";
+ push @$options, preallocation_cmd_option($scfg, $fmt);
+ push @$cmd, '-o', join(',', @$options) if @$options > 0;
+ push @$cmd, '-f', $fmt, $path;
+ push @$cmd, "${size}K" if $size;
run_command($cmd, errmsg => "unable to create image");
}
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 80bb077..c35d5e5 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -880,20 +880,9 @@ sub clone_image {
# Note: we use relative paths, so we need to call chdir before qemu-img
eval {
local $CWD = $imagedir;
-
- my $cmd = [
- '/usr/bin/qemu-img',
- 'create',
- '-b',
- "../$basevmid/$basename",
- '-F',
- $format,
- '-f',
- 'qcow2',
- $path,
- ];
-
- run_command($cmd);
+ PVE::Storage::Common::qemu_img_create(
+ $scfg, $format, undef, $path, "../$basevmid/$basename",
+ );
};
my $err = $@;
--
2.39.5
* [pve-devel] [PATCH qemu-server 3/3] qcow2: add external snapshot support
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
` (4 preceding siblings ...)
2025-07-04 6:44 ` [pve-devel] [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support Alexandre Derumier via pve-devel
@ 2025-07-04 6:45 ` Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 04/10] rename_volume: add source && target snap Alexandre Derumier via pve-devel
` (6 subsequent siblings)
12 siblings, 1 reply; 46+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:45 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 22067 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH qemu-server 3/3] qcow2: add external snapshot support
Date: Fri, 4 Jul 2025 08:45:00 +0200
Message-ID: <20250704064507.511884-7-alexandre.derumier@groupe-cyllene.com>
fixme:
- add tests for internal (was missing) && external qemu snapshots
- is it possible to use blockjob transactions for commit && stream
for atomic disk commit?
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/QemuConfig.pm | 4 +-
src/PVE/QemuServer.pm | 132 ++++++++++++---
src/PVE/QemuServer/Blockdev.pm | 296 ++++++++++++++++++++++++++++++++-
src/test/snapshot-test.pm | 4 +-
4 files changed, 402 insertions(+), 34 deletions(-)
diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
index 82295641..e0853d65 100644
--- a/src/PVE/QemuConfig.pm
+++ b/src/PVE/QemuConfig.pm
@@ -398,7 +398,7 @@ sub __snapshot_create_vol_snapshot {
print "snapshotting '$device' ($drive->{file})\n";
- PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $volid, $snapname);
+ PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $drive, $snapname);
}
sub __snapshot_delete_remove_drive {
@@ -435,7 +435,7 @@ sub __snapshot_delete_vol_snapshot {
my $storecfg = PVE::Storage::config();
my $volid = $drive->{file};
- PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $volid, $snapname);
+ PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $drive, $snapname);
push @$unused, $volid;
}
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 92c8fad6..158c91b1 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -4340,20 +4340,64 @@ sub qemu_cpu_hotplug {
}
sub qemu_volume_snapshot {
- my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
+ my ($vmid, $deviceid, $storecfg, $drive, $snap) = @_;
+ my $volid = $drive->{file};
my $running = check_running($vmid);
- if ($running && do_snapshots_with_qemu($storecfg, $volid, $deviceid)) {
+ my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $deviceid, $running);
+
+ if ($do_snapshots_type eq 'internal') {
+ print "internal qemu snapshot\n";
mon_cmd($vmid, 'blockdev-snapshot-internal-sync', device => $deviceid, name => $snap);
- } else {
+ } elsif ($do_snapshots_type eq 'external') {
+ my $storeid = (PVE::Storage::parse_volume_id($volid))[0];
+ my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+ print "external qemu snapshot\n";
+ my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
+ my $parent_snap = $snapshots->{'current'}->{parent};
+ my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+
+ PVE::QemuServer::Blockdev::blockdev_rename(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $deviceid,
+ $drive,
+ 'current',
+ $snap,
+ $parent_snap,
+ );
+ eval {
+ PVE::QemuServer::Blockdev::blockdev_external_snapshot(
+ $storecfg, $vmid, $machine_version, $deviceid, $drive, $snap,
+ );
+ };
+ if ($@) {
+ warn $@ if $@;
+ print "Error creating snapshot. Revert rename\n";
+ eval {
+ PVE::QemuServer::Blockdev::blockdev_rename(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $deviceid,
+ $drive,
+ $snap,
+ 'current',
+ $parent_snap,
+ );
+ };
+ }
+ } elsif ($do_snapshots_type eq 'storage') {
PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
}
}
sub qemu_volume_snapshot_delete {
- my ($vmid, $storecfg, $volid, $snap) = @_;
+ my ($vmid, $storecfg, $drive, $snap) = @_;
+ my $volid = $drive->{file};
my $running = check_running($vmid);
my $attached_deviceid;
@@ -4368,14 +4412,62 @@ sub qemu_volume_snapshot_delete {
);
}
- if ($attached_deviceid && do_snapshots_with_qemu($storecfg, $volid, $attached_deviceid)) {
+ my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $attached_deviceid, $running);
+
+ if ($do_snapshots_type eq 'internal') {
mon_cmd(
$vmid,
'blockdev-snapshot-delete-internal-sync',
device => $attached_deviceid,
name => $snap,
);
- } else {
+ } elsif ($do_snapshots_type eq 'external') {
+ print "delete qemu external snapshot\n";
+
+ my $path = PVE::Storage::path($storecfg, $volid);
+ my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
+ my $parentsnap = $snapshots->{$snap}->{parent};
+ my $childsnap = $snapshots->{$snap}->{child};
+ my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+
+ # if we delete the first snapshot, we commit, because the first snapshot is the original base image and should be big.
+ # improve-me: if firstsnap > child: commit; if firstsnap < child: do a stream.
+ if (!$parentsnap) {
+ print "delete first snapshot $snap\n";
+ PVE::QemuServer::Blockdev::blockdev_commit(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $attached_deviceid,
+ $drive,
+ $childsnap,
+ $snap,
+ );
+ PVE::QemuServer::Blockdev::blockdev_rename(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $attached_deviceid,
+ $drive,
+ $snap,
+ $childsnap,
+ $snapshots->{$childsnap}->{child},
+ );
+ } else {
+ # intermediate snapshot: we always stream the snapshot to its child
+ print "stream intermediate snapshot $snap to $childsnap\n";
+ PVE::QemuServer::Blockdev::blockdev_stream(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $attached_deviceid,
+ $drive,
+ $snap,
+ $parentsnap,
+ $childsnap,
+ );
+ }
+ } elsif ($do_snapshots_type eq 'storage') {
PVE::Storage::volume_snapshot_delete(
$storecfg,
$volid,
@@ -7563,28 +7655,20 @@ sub restore_tar_archive {
warn $@ if $@;
}
-my $qemu_snap_storage = {
- rbd => 1,
-};
-
-sub do_snapshots_with_qemu {
- my ($storecfg, $volid, $deviceid) = @_;
-
- return if $deviceid =~ m/tpmstate0/;
+sub do_snapshots_type {
+ my ($storecfg, $volid, $deviceid, $running) = @_;
- my $storage_name = PVE::Storage::parse_volume_id($volid);
- my $scfg = $storecfg->{ids}->{$storage_name};
- die "could not find storage '$storage_name'\n" if !defined($scfg);
+ # we skip snapshots for tpmstate
+ return if $deviceid && $deviceid =~ m/tpmstate0/;
- if ($qemu_snap_storage->{ $scfg->{type} } && !$scfg->{krbd}) {
- return 1;
- }
+ # we use storage snapshots if the VM is not running or if the disk is unused
+ return 'storage' if !$running || !$deviceid;
- if ($volid =~ m/\.(qcow2|qed)$/) {
- return 1;
- }
+ my $qemu_snapshot_type = PVE::Storage::volume_support_qemu_snapshot($storecfg, $volid);
+ # if running, but qemu snapshots are not supported, fall back to storage snapshots
+ return 'storage' if !$qemu_snapshot_type;
- return;
+ return $qemu_snapshot_type;
}
=head3 template_create($vmid, $conf [, $disk])
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 2a0513fb..07141777 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -11,6 +11,7 @@ use JSON;
use PVE::JSONSchema qw(json_bool);
use PVE::Storage;
+use PVE::QemuServer::BlockJob;
use PVE::QemuServer::Drive qw(drive_is_cdrom);
use PVE::QemuServer::Helpers;
use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -243,6 +244,9 @@ my sub generate_file_blockdev {
my $blockdev = {};
my $scfg = undef;
+ delete $options->{'snapshot-name'}
+ if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
+
die "generate_file_blockdev called without volid/path\n" if !$drive->{file};
die "generate_file_blockdev called with 'none'\n" if $drive->{file} eq 'none';
# FIXME use overlay and new config option to define storage for temp write device
@@ -322,6 +326,9 @@ my sub generate_format_blockdev {
die "generate_format_blockdev called with 'none'\n" if $drive->{file} eq 'none';
die "generate_format_blockdev called with NBD path\n" if is_nbd($drive);
+ delete($options->{'snapshot-name'})
+ if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
+
my $scfg;
my $format;
my $volid = $drive->{file};
@@ -400,6 +407,17 @@ my sub generate_backing_chain_blockdev {
);
}
+sub generate_throttle_blockdev {
+ my ($drive_id, $child) = @_;
+
+ return {
+ driver => "throttle",
+ 'node-name' => top_node_name($drive_id),
+ 'throttle-group' => throttle_group_id($drive_id),
+ file => $child,
+ };
+}
+
sub generate_drive_blockdev {
my ($storecfg, $drive, $machine_version, $options) = @_;
@@ -442,12 +460,7 @@ sub generate_drive_blockdev {
return $child if $options->{fleecing} || $options->{'tpm-backup'} || $options->{'no-throttle'};
# this is the top filter entry point, use $drive-drive_id as nodename
- return {
- driver => "throttle",
- 'node-name' => top_node_name($drive_id),
- 'throttle-group' => throttle_group_id($drive_id),
- file => $child,
- };
+ return generate_throttle_blockdev($drive_id, $child);
}
sub generate_pbs_blockdev {
@@ -785,4 +798,275 @@ sub set_io_throttle {
}
}
+sub blockdev_external_snapshot {
+ my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $size) = @_;
+
+ print "Creating a new current volume with $snap as backing snap\n";
+
+ my $volid = $drive->{file};
+
+ # preallocation adds a new current file with a reference to the backing file
+ PVE::Storage::volume_snapshot($storecfg, $volid, $snap, 1);
+
+ # make sure the drive is added in write mode
+ delete($drive->{ro});
+
+ my $new_file_blockdev = generate_file_blockdev($storecfg, $drive);
+ my $new_fmt_blockdev = generate_format_blockdev($storecfg, $drive, $new_file_blockdev);
+
+ my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive, $snap);
+ my $snap_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $snap_file_blockdev,
+ { 'snapshot-name' => $snap },
+ );
+
+ # backing needs to be forced to undef in the blockdev, to avoid reopening the backing file on blockdev-add
+ $new_fmt_blockdev->{backing} = undef;
+
+ mon_cmd($vmid, 'blockdev-add', %$new_fmt_blockdev);
+
+ mon_cmd(
+ $vmid, 'blockdev-snapshot',
+ node => $snap_fmt_blockdev->{'node-name'},
+ overlay => $new_fmt_blockdev->{'node-name'},
+ );
+}
+
+sub blockdev_delete {
+ my ($storecfg, $vmid, $drive, $file_blockdev, $fmt_blockdev, $snap) = @_;
+
+ # wrap in eval: reopen only auto-removes the old nodename if it was created at vm start via command line
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $file_blockdev->{'node-name'}) };
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $fmt_blockdev->{'node-name'}) };
+
+ #delete the file (don't use vdisk_free as we don't want to delete the whole snapshot chain)
+ print "delete old $file_blockdev->{filename}\n";
+
+ my $storage_name = PVE::Storage::parse_volume_id($drive->{file});
+
+ my $volid = $drive->{file};
+ PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snap, 1);
+}
+
+sub blockdev_rename {
+ my (
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $deviceid,
+ $drive,
+ $src_snap,
+ $target_snap,
+ $parent_snap,
+ ) = @_;
+
+ print "rename $src_snap to $target_snap\n";
+
+ my $volid = $drive->{file};
+
+ my $src_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $src_snap },
+ );
+ my $src_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $src_file_blockdev,
+ { 'snapshot-name' => $src_snap },
+ );
+
+ #rename volume image
+ PVE::Storage::rename_volume($storecfg, $volid, $vmid, undef, $src_snap, $target_snap);
+
+ my $target_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $target_snap },
+ );
+ my $target_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $target_file_blockdev,
+ { 'snapshot-name' => $target_snap },
+ );
+
+ if ($target_snap eq 'current' || $src_snap eq 'current') {
+ #rename from|to current
+ my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
+
+ #add backing to target
+ if ($parent_snap) {
+ my $parent_fmt_nodename =
+ get_node_name('fmt', $drive_id, $volid, { 'snapshot-name' => $parent_snap });
+ $target_fmt_blockdev->{backing} = $parent_fmt_nodename;
+ }
+ mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
+
+ #reopen the current throttlefilter nodename with the target fmt nodename
+ my $throttle_blockdev =
+ generate_throttle_blockdev($drive_id, $target_fmt_blockdev->{'node-name'});
+ mon_cmd($vmid, 'blockdev-reopen', options => [$throttle_blockdev]);
+ } else {
+ rename($src_file_blockdev->{filename}, $target_file_blockdev->{filename});
+
+ #intermediate snapshot
+ mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
+
+ #reopen the parent node with the new target fmt backing node
+ my $parent_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $parent_snap },
+ );
+ my $parent_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $parent_file_blockdev,
+ { 'snapshot-name' => $parent_snap },
+ );
+ $parent_fmt_blockdev->{backing} = $target_fmt_blockdev->{'node-name'};
+ mon_cmd($vmid, 'blockdev-reopen', options => [$parent_fmt_blockdev]);
+
+ # change the backing-file in the qcow2 metadata
+ mon_cmd(
+ $vmid, 'change-backing-file',
+ device => $deviceid,
+ 'image-node-name' => $parent_fmt_blockdev->{'node-name'},
+ 'backing-file' => $target_file_blockdev->{filename},
+ );
+ }
+
+ # delete old file|fmt nodes
+ # wrap in eval: reopen only auto-removes the old nodename if it was created at vm start via command line
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_file_blockdev->{'node-name'}) };
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_fmt_blockdev->{'node-name'}) };
+}
+
+sub blockdev_commit {
+ my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $src_snap, $target_snap) = @_;
+
+ my $volid = $drive->{file};
+
+ print "block-commit $src_snap to base:$target_snap\n";
+
+ my $target_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $target_snap },
+ );
+ my $target_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $target_file_blockdev,
+ { 'snapshot-name' => $target_snap },
+ );
+
+ my $src_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $src_snap },
+ );
+ my $src_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $src_file_blockdev,
+ { 'snapshot-name' => $src_snap },
+ );
+
+ my $job_id = "commit-$deviceid";
+ my $jobs = {};
+ my $opts = { 'job-id' => $job_id, device => $deviceid };
+
+ $opts->{'base-node'} = $target_fmt_blockdev->{'node-name'};
+ $opts->{'top-node'} = $src_fmt_blockdev->{'node-name'};
+
+ mon_cmd($vmid, "block-commit", %$opts);
+ $jobs->{$job_id} = {};
+
+ # if we commit the current image, the blockjob needs to be in 'complete' mode
+ my $complete = $src_snap && $src_snap ne 'current' ? 'auto' : 'complete';
+
+ eval {
+ PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+ $vmid, undef, $jobs, $complete, 0, 'commit',
+ );
+ };
+ if ($@) {
+ die "Failed to complete block commit: $@\n";
+ }
+
+ blockdev_delete($storecfg, $vmid, $drive, $src_file_blockdev, $src_fmt_blockdev, $src_snap);
+}
+
+sub blockdev_stream {
+ my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap, $target_snap) =
+ @_;
+
+ my $volid = $drive->{file};
+ $target_snap = undef if $target_snap eq 'current';
+
+ my $parent_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $parent_snap },
+ );
+ my $parent_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $parent_file_blockdev,
+ { 'snapshot-name' => $parent_snap },
+ );
+
+ my $target_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $target_snap },
+ );
+ my $target_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $target_file_blockdev,
+ { 'snapshot-name' => $target_snap },
+ );
+
+ my $snap_file_blockdev =
+ generate_file_blockdev($storecfg, $drive, $machine_version, { 'snapshot-name' => $snap });
+ my $snap_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $snap_file_blockdev,
+ { 'snapshot-name' => $snap },
+ );
+
+ my $job_id = "stream-$deviceid";
+ my $jobs = {};
+ my $options = { 'job-id' => $job_id, device => $target_fmt_blockdev->{'node-name'} };
+ $options->{'base-node'} = $parent_fmt_blockdev->{'node-name'};
+ $options->{'backing-file'} = $parent_file_blockdev->{filename};
+
+ mon_cmd($vmid, 'block-stream', %$options);
+ $jobs->{$job_id} = {};
+
+ eval {
+ PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+ $vmid, undef, $jobs, 'auto', 0, 'stream',
+ );
+ };
+ if ($@) {
+ die "Failed to complete block stream: $@\n";
+ }
+
+ blockdev_delete($storecfg, $vmid, $drive, $snap_file_blockdev, $snap_fmt_blockdev, $snap);
+}
+
1;
diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
index 4fce87f1..f61cd64b 100644
--- a/src/test/snapshot-test.pm
+++ b/src/test/snapshot-test.pm
@@ -399,8 +399,8 @@ sub set_migration_caps { } # ignored
# BEGIN redefine PVE::QemuServer methods
-sub do_snapshots_with_qemu {
- return 0;
+sub do_snapshots_type {
+ return 'storage';
}
sub vm_start {
--
2.39.5
* [pve-devel] [PATCH pve-storage 04/10] rename_volume: add source && target snap
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
` (5 preceding siblings ...)
2025-07-04 6:45 ` [pve-devel] [PATCH qemu-server 3/3] qcow2: add external snapshot support Alexandre Derumier via pve-devel
@ 2025-07-04 6:45 ` Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 05/10] common: add qemu_img_info helper Alexandre Derumier via pve-devel
` (5 subsequent siblings)
12 siblings, 1 reply; 46+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:45 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 6489 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 04/10] rename_volume: add source && target snap
Date: Fri, 4 Jul 2025 08:45:01 +0200
Message-ID: <20250704064507.511884-8-alexandre.derumier@groupe-cyllene.com>
allow renaming from|to an external snapshot volname
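A hypothetical illustration of what this enables: moving an image between its "current" name and a per-snapshot name. The real naming comes from get_snap_volname / get_snap_name, so the `snap-<name>-` prefix below is an assumption, not taken from the plugin code:

```shell
# Renaming the 'current' image to a per-snapshot volname
# (naming scheme assumed for illustration only)
vmid=100
disk="vm-${vmid}-disk-0.qcow2"
snap="mysnap"
snap_volname="snap-${snap}-${disk}"
echo "rename ${disk} -> ${snap_volname}"
```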
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage.pm | 10 ++++++++--
src/PVE/Storage/LVMPlugin.pm | 17 +++++++++++++++--
src/PVE/Storage/Plugin.pm | 14 +++++++++++++-
3 files changed, 36 insertions(+), 5 deletions(-)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 7861bf6..fe6eaf7 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -2319,7 +2319,7 @@ sub complete_volume {
}
sub rename_volume {
- my ($cfg, $source_volid, $target_vmid, $target_volname) = @_;
+ my ($cfg, $source_volid, $target_vmid, $target_volname, $source_snap, $target_snap) = @_;
die "no source volid provided\n" if !$source_volid;
die "no target VMID or target volname provided\n" if !$target_vmid && !$target_volname;
@@ -2339,7 +2339,13 @@ sub rename_volume {
undef,
sub {
return $plugin->rename_volume(
- $scfg, $storeid, $source_volname, $target_vmid, $target_volname,
+ $scfg,
+ $storeid,
+ $source_volname,
+ $target_vmid,
+ $target_volname,
+ $source_snap,
+ $target_snap,
);
},
);
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 1a992e8..0b506c7 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -838,11 +838,24 @@ sub volume_import_write {
}
sub rename_volume {
- my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
+ my (
+ $class,
+ $scfg,
+ $storeid,
+ $source_volname,
+ $target_vmid,
+ $target_volname,
+ $source_snap,
+ $target_snap,
+ ) = @_;
my (
undef, $source_image, $source_vmid, $base_name, $base_vmid, undef, $format,
) = $class->parse_volname($source_volname);
+
+ $source_image = $class->get_snap_volname($source_volname, $source_snap) if $source_snap;
+ $target_volname = $class->get_snap_volname($source_volname, $target_snap) if $target_snap;
+
$target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format)
if !$target_volname;
@@ -851,7 +864,7 @@ sub rename_volume {
die "target volume '${target_volname}' already exists\n"
if ($lvs->{$vg}->{$target_volname});
- lvrename($vg, $source_volname, $target_volname);
+ lvrename($vg, $source_image, $target_volname);
return "${storeid}:${target_volname}";
}
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index c35d5e5..5afe29b 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -1877,7 +1877,16 @@ sub volume_import_formats {
}
sub rename_volume {
- my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
+ my (
+ $class,
+ $scfg,
+ $storeid,
+ $source_volname,
+ $target_vmid,
+ $target_volname,
+ $source_snap,
+ $target_snap,
+ ) = @_;
die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 10;
die "no path found\n" if !$scfg->{path};
@@ -1885,6 +1894,9 @@ sub rename_volume {
undef, $source_image, $source_vmid, $base_name, $base_vmid, undef, $format,
) = $class->parse_volname($source_volname);
+ $source_image = $class->get_snap_name($source_volname, $source_snap) if $source_snap;
+ $target_volname = $class->get_snap_name($source_volname, $target_snap) if $target_snap;
+
$target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1)
if !$target_volname;
--
2.39.5
* [pve-devel] [PATCH pve-storage 05/10] common: add qemu_img_info helper
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
` (6 preceding siblings ...)
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 04/10] rename_volume: add source && target snap Alexandre Derumier via pve-devel
@ 2025-07-04 6:45 ` Alexandre Derumier via pve-devel
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 06/10] common: add qemu-img measure Alexandre Derumier via pve-devel
` (4 subsequent siblings)
12 siblings, 0 replies; 46+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:45 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 5251 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 05/10] common: add qemu_img_info helper
Date: Fri, 4 Jul 2025 08:45:02 +0200
Message-ID: <20250704064507.511884-9-alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Common.pm | 26 ++++++++++++++++++++++++++
src/PVE/Storage/Plugin.pm | 20 +-------------------
2 files changed, 27 insertions(+), 19 deletions(-)
diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index 78e5320..c15cc88 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -171,4 +171,30 @@ sub qemu_img_create {
run_command($cmd, errmsg => "unable to create image");
}
+sub qemu_img_info {
+ my ($filename, $file_format, $timeout) = @_;
+
+ my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
+ push $cmd->@*, '-f', $file_format if $file_format;
+
+ my $json = '';
+ my $err_output = '';
+ eval {
+ run_command(
+ $cmd,
+ timeout => $timeout,
+ outfunc => sub { $json .= shift },
+ errfunc => sub { $err_output .= shift . "\n" },
+ );
+ };
+ warn $@ if $@;
+ if ($err_output) {
+ # if qemu did not output anything to stdout we die with stderr as an error
+ die $err_output if !$json;
+ # otherwise we warn about it and try to parse the json
+ warn $err_output;
+ }
+ return $json;
+}
+
1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 5afe29b..81443aa 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -1031,26 +1031,8 @@ sub file_size_info {
"file_size_info: '$filename': falling back to 'raw' from unknown format '$file_format'\n";
$file_format = 'raw';
}
- my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
- push $cmd->@*, '-f', $file_format if $file_format;
- my $json = '';
- my $err_output = '';
- eval {
- run_command(
- $cmd,
- timeout => $timeout,
- outfunc => sub { $json .= shift },
- errfunc => sub { $err_output .= shift . "\n" },
- );
- };
- warn $@ if $@;
- if ($err_output) {
- # if qemu did not output anything to stdout we die with stderr as an error
- die $err_output if !$json;
- # otherwise we warn about it and try to parse the json
- warn $err_output;
- }
+ my $json = PVE::Storage::Common::qemu_img_info($filename, $file_format, $timeout);
if (!$json) {
die "failed to query file information with qemu-img\n" if $untrusted;
# skip decoding if there was no output, e.g. if there was a timeout.
--
2.39.5
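The new helper returns the raw JSON that `qemu-img info --output=json` prints; a minimal sketch of consuming it (the sample document below is hand-written, with field names as qemu-img emits them):

```python
import json

# Hand-written sample of `qemu-img info --output=json` output; in real use
# this string would come from PVE::Storage::Common::qemu_img_info().
sample = """{
  "virtual-size": 10737418240,
  "actual-size": 200704,
  "format": "qcow2",
  "filename": "vm-100-disk-0.qcow2",
  "cluster-size": 131072
}"""

info = json.loads(sample)
size_gib = info["virtual-size"] / 1024**3
print(info["format"], size_gib)  # qcow2 10.0
```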
* [pve-devel] [PATCH pve-storage 06/10] common: add qemu-img measure
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
` (7 preceding siblings ...)
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 05/10] common: add qemu_img_info helper Alexandre Derumier via pve-devel
@ 2025-07-04 6:45 ` Alexandre Derumier via pve-devel
2025-07-04 11:51 ` Fabian Grünbichler
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 07/10] storage: volume_snapshot: add $running param Alexandre Derumier via pve-devel
` (3 subsequent siblings)
12 siblings, 1 reply; 46+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:45 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 3946 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 06/10] common: add qemu-img measure
Date: Fri, 4 Jul 2025 08:45:03 +0200
Message-ID: <20250704064507.511884-10-alexandre.derumier@groupe-cyllene.com>
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Common.pm | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index c15cc88..e73eeab 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -197,4 +197,32 @@ sub qemu_img_info {
return $json;
}
+sub qemu_img_measure {
+ my ($size, $fmt, $timeout, $options) = @_;
+
+ die "format is missing" if !$fmt;
+
+ my $cmd = ['/usr/bin/qemu-img', 'measure', '--output=json', '--size', "${size}K", '-O', $fmt];
+ push $cmd->@*, '-o', join(',', @$options) if @$options > 0;
+
+ my $json = '';
+ my $err_output = '';
+ eval {
+ run_command(
+ $cmd,
+ timeout => $timeout,
+ outfunc => sub { $json .= shift },
+ errfunc => sub { $err_output .= shift . "\n" },
+ );
+ };
+ warn $@ if $@;
+ if ($err_output) {
+ # if qemu did not output anything to stdout we die with stderr as an error
+ die $err_output if !$json;
+ # otherwise we warn about it and try to parse the json
+ warn $err_output;
+ }
+ return $json;
+}
+
1;
--
2.39.5
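For reference, the measure invocation built above and the conversion of its result back to KiB can be modeled as follows. A rough Python sketch with a hand-written sample of `qemu-img measure --output=json` output; the helper names are illustrative:

```python
import json

def build_measure_cmd(size_kib, fmt, options=()):
    # mirrors qemu_img_measure: size is passed in KiB, options become -o key=val,...
    cmd = ['/usr/bin/qemu-img', 'measure', '--output=json',
           '--size', f'{size_kib}K', '-O', fmt]
    if options:
        cmd += ['-o', ','.join(options)]
    return cmd

def fully_allocated_kib(measure_json):
    # qemu-img measure reports bytes; callers sizing LVs want KiB
    info = json.loads(measure_json)
    return info['fully-allocated'] // 1024
```

The 'fully-allocated' figure is the worst-case on-disk size including qcow2 metadata, which is why later patches use it to size the backing device rather than the virtual size alone.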
* [pve-devel] [PATCH pve-storage 07/10] storage: volume_snapshot: add $running param
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
` (8 preceding siblings ...)
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 06/10] common: add qemu-img measure Alexandre Derumier via pve-devel
@ 2025-07-04 6:45 ` Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support Alexandre Derumier via pve-devel
` (2 subsequent siblings)
12 siblings, 1 reply; 46+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:45 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 7914 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 07/10] storage: volume_snapshot: add $running param
Date: Fri, 4 Jul 2025 08:45:04 +0200
Message-ID: <20250704064507.511884-11-alexandre.derumier@groupe-cyllene.com>
This adds a $running param to volume_snapshot;
it can be used if some extra actions need to be done at the storage
layer when the snapshot has already been taken at the qemu level.
Note: the zfs && rbd plugins already used this param in create_base,
but it was not implemented in volume_snapshot.
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage.pm | 4 ++--
src/PVE/Storage/ESXiPlugin.pm | 2 +-
src/PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
src/PVE/Storage/LVMPlugin.pm | 2 +-
src/PVE/Storage/LvmThinPlugin.pm | 2 +-
src/PVE/Storage/PBSPlugin.pm | 2 +-
src/PVE/Storage/Plugin.pm | 2 +-
src/PVE/Storage/RBDPlugin.pm | 2 +-
src/PVE/Storage/ZFSPoolPlugin.pm | 2 +-
9 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index fe6eaf7..0396160 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -449,13 +449,13 @@ sub volume_rollback_is_possible {
}
sub volume_snapshot {
- my ($cfg, $volid, $snap) = @_;
+ my ($cfg, $volid, $snap, $running) = @_;
my ($storeid, $volname) = parse_volume_id($volid, 1);
if ($storeid) {
my $scfg = storage_config($cfg, $storeid);
my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
- return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap);
+ return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap, $running);
} elsif ($volid =~ m|^(/.+)$| && -e $volid) {
die "snapshot file/device '$volid' is not possible\n";
} else {
diff --git a/src/PVE/Storage/ESXiPlugin.pm b/src/PVE/Storage/ESXiPlugin.pm
index ab5242d..e655d7b 100644
--- a/src/PVE/Storage/ESXiPlugin.pm
+++ b/src/PVE/Storage/ESXiPlugin.pm
@@ -555,7 +555,7 @@ sub volume_size_info {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "creating snapshots is not supported for $class\n";
}
diff --git a/src/PVE/Storage/ISCSIDirectPlugin.pm b/src/PVE/Storage/ISCSIDirectPlugin.pm
index 62e9026..93cfd3c 100644
--- a/src/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/src/PVE/Storage/ISCSIDirectPlugin.pm
@@ -232,7 +232,7 @@ sub volume_resize {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "volume snapshot is not possible on iscsi device\n";
}
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 0b506c7..3d07260 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -691,7 +691,7 @@ sub volume_size_info {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "lvm snapshot is not implemented";
}
diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index c244c91..e5df0b4 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -339,7 +339,7 @@ sub create_base {
# sub volume_resize {} reuse code from parent class
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
my $vg = $scfg->{vgname};
my $snapvol = "snap_${volname}_$snap";
diff --git a/src/PVE/Storage/PBSPlugin.pm b/src/PVE/Storage/PBSPlugin.pm
index 00170f5..45edc46 100644
--- a/src/PVE/Storage/PBSPlugin.pm
+++ b/src/PVE/Storage/PBSPlugin.pm
@@ -966,7 +966,7 @@ sub volume_resize {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "volume snapshot is not possible on pbs device";
}
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 81443aa..88c30c2 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -1155,7 +1155,7 @@ sub volume_resize {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
index ce7db50..883b0e4 100644
--- a/src/PVE/Storage/RBDPlugin.pm
+++ b/src/PVE/Storage/RBDPlugin.pm
@@ -868,7 +868,7 @@ sub volume_resize {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
my ($vtype, $name, $vmid) = $class->parse_volname($volname);
diff --git a/src/PVE/Storage/ZFSPoolPlugin.pm b/src/PVE/Storage/ZFSPoolPlugin.pm
index 979cf2c..9cdfa68 100644
--- a/src/PVE/Storage/ZFSPoolPlugin.pm
+++ b/src/PVE/Storage/ZFSPoolPlugin.pm
@@ -480,7 +480,7 @@ sub volume_size_info {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
my $vname = ($class->parse_volname($volname))[1];
--
2.39.5
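The signature change above stays backward compatible because Perl callers that omit $running simply pass undef through to the plugin. The same pattern, an optional trailing parameter forwarded unchanged, can be sketched in Python; all names here are hypothetical:

```python
def volume_snapshot(plugin, cfg, volid, snap, running=None):
    # forward the optional flag unchanged; plugins that ignore the extra
    # argument keep working, plugins that need it can inspect it
    return plugin.volume_snapshot(cfg, volid, snap, running)

class RecordingPlugin:
    # stand-in for a storage plugin; just echoes what it was called with
    def volume_snapshot(self, cfg, volid, snap, running):
        return (volid, snap, running)
```

Existing call sites see no behavior change, while qemu-server can start passing the flag where the snapshot was already taken at the qemu level.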
* [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
` (9 preceding siblings ...)
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 07/10] storage: volume_snapshot: add $running param Alexandre Derumier via pve-devel
@ 2025-07-04 6:45 ` Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot Alexandre Derumier via pve-devel
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 10/10] storage : add volume_support_qemu_snapshot Alexandre Derumier via pve-devel
12 siblings, 1 reply; 46+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:45 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 17283 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Fri, 4 Jul 2025 08:45:05 +0200
Message-ID: <20250704064507.511884-12-alexandre.derumier@groupe-cyllene.com>
add a snapext option to enable the feature
When a snapshot is taken, the current volume is renamed to the snap
volname and a new current image is created with the snap volume as its backing file
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage.pm | 1 -
src/PVE/Storage/Common.pm | 3 +-
src/PVE/Storage/DirPlugin.pm | 1 +
src/PVE/Storage/Plugin.pm | 263 +++++++++++++++++++++++++++++++++--
4 files changed, 252 insertions(+), 16 deletions(-)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 0396160..d83770c 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -479,7 +479,6 @@ sub volume_snapshot_rollback {
}
}
-# FIXME PVE 8.x remove $running parameter (needs APIAGE reset)
sub volume_snapshot_delete {
my ($cfg, $volid, $snap, $running) = @_;
diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index e73eeab..43f3f15 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -172,10 +172,11 @@ sub qemu_img_create {
}
sub qemu_img_info {
- my ($filename, $file_format, $timeout) = @_;
+ my ($filename, $file_format, $timeout, $follow_backing_files) = @_;
my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
push $cmd->@*, '-f', $file_format if $file_format;
+ push $cmd->@*, '--backing-chain' if $follow_backing_files;
my $json = '';
my $err_output = '';
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 10e4f70..ae5d083 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -95,6 +95,7 @@ sub options {
is_mountpoint => { optional => 1 },
bwlimit => { optional => 1 },
preallocation => { optional => 1 },
+ snapext => { optional => 1 },
};
}
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 88c30c2..68d17ff 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -215,6 +215,11 @@ my $defaultData = {
maximum => 65535,
optional => 1,
},
+ 'snapext' => {
+ type => 'boolean',
+ description => 'Enable external snapshots.',
+ optional => 1,
+ },
},
};
@@ -727,6 +732,7 @@ sub filesystem_path {
my ($class, $scfg, $volname, $snapname) = @_;
my ($vtype, $name, $vmid, undef, undef, $isBase, $format) = $class->parse_volname($volname);
+ $name = $class->get_snap_name($volname, $snapname) if $scfg->{snapext} && $snapname;
# Note: qcow2/qed has internal snapshot, so path is always
# the same (with or without snapshot => same file).
@@ -931,6 +937,26 @@ sub alloc_image {
return "$vmid/$name";
}
+my sub alloc_backed_image {
+ my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
+
+ my $path = $class->path($scfg, $volname, $storeid);
+ my $backing_path = $class->path($scfg, $volname, $storeid, $backing_snap);
+
+ eval { PVE::Storage::Common::qemu_img_create($scfg, 'qcow2', undef, $path, $backing_path) };
+ if ($@) {
+ unlink $path;
+ die "$@";
+ }
+}
+
+my sub free_snap_image {
+ my ($class, $storeid, $scfg, $volname, $snap) = @_;
+
+ my $path = $class->path($scfg, $volname, $storeid, $snap);
+ unlink($path) || die "unlink '$path' failed - $!\n";
+}
+
sub free_image {
my ($class, $storeid, $scfg, $volname, $isBase, $format) = @_;
@@ -953,6 +979,20 @@ sub free_image {
return undef;
}
+ #delete external snapshots
+ if ($scfg->{snapext}) {
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ for my $snapid (
+ sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} }
+ keys %$snapshots
+ ) {
+ my $snap = $snapshots->{$snapid};
+ next if $snapid eq 'current';
+ next if !$snap->{ext};
+ free_snap_image($class, $storeid, $scfg, $volname, $snapid);
+ }
+ }
+
unlink($path) || die "unlink '$path' failed - $!\n";
}
@@ -1159,11 +1199,39 @@ sub volume_snapshot {
die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
- my $path = $class->filesystem_path($scfg, $volname);
+ if ($scfg->{snapext}) {
+
+ my $vmid = ($class->parse_volname($volname))[2];
+
+ #if running, the old current has been renamed with blockdev-reopen by qemu
+ if (!$running) {
+ #rename current volume to snap volume
+ $class->rename_volume($scfg, $storeid, $volname, $vmid, undef, 'current', $snap);
+ }
+
+ eval { alloc_backed_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ warn "$@ \n";
+ #if running, the revert is done by qemu with blockdev-reopen
+ if (!$running) {
+ eval {
+ $class->rename_volume(
+ $scfg, $storeid, $volname, $vmid, undef, $snap, 'current',
+ );
+ };
+ warn $@ if $@;
+ }
+ die "can't allocate new volume $volname with $snap backing image\n";
+ }
+
+ } else {
+
+ my $path = $class->filesystem_path($scfg, $volname);
- my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-c', $snap, $path];
+ my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-c', $snap, $path];
- run_command($cmd);
+ run_command($cmd);
+ }
return undef;
}
@@ -1174,6 +1242,21 @@ sub volume_snapshot {
sub volume_rollback_is_possible {
my ($class, $scfg, $storeid, $volname, $snap, $blockers) = @_;
+ if ($scfg->{snapext}) {
+ #technically, we could manage multibranch, but it would need a lot more work for snapshot delete
+ #we would need to implement block-stream from the deleted snapshot to all other child branches
+ #when online, we would need a transaction over multiple disks when deleting the last snapshot
+ #and would need to merge into the current running file
+
+ my $snappath = $class->path($scfg, $volname, $storeid, $snap);
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ my $parentsnap = $snapshots->{current}->{parent};
+
+ return 1 if $parentsnap eq $snap;
+
+ die "can't rollback, '$snap' is not the most recent snapshot on '$volname'\n";
+ }
+
return 1;
}
@@ -1182,11 +1265,22 @@ sub volume_snapshot_rollback {
die "can't rollback snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
- my $path = $class->filesystem_path($scfg, $volname);
-
- my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-a', $snap, $path];
+ if ($scfg->{snapext}) {
+ #simply delete the current snapshot and recreate it
+ eval { free_snap_image($class, $storeid, $scfg, $volname, 'current') };
+ if ($@) {
+ die "can't delete old volume $volname: $@\n";
+ }
- run_command($cmd);
+ eval { alloc_backed_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ die "can't allocate new volume $volname: $@\n";
+ }
+ } else {
+ my $path = $class->filesystem_path($scfg, $volname);
+ my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-a', $snap, $path];
+ run_command($cmd);
+ }
return undef;
}
@@ -1196,15 +1290,83 @@ sub volume_snapshot_delete {
die "can't delete snapshot for this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
- return 1 if $running;
+ my $cmd = "";
- my $path = $class->filesystem_path($scfg, $volname);
+ if ($scfg->{snapext}) {
- $class->deactivate_volume($storeid, $scfg, $volname, $snap, {});
+ #qemu has already live-committed or streamed the snapshot, therefore we only have to drop the image itself
+ if ($running) {
+ eval { free_snap_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ die "can't delete snapshot $snap of volume $volname: $@\n";
+ }
+ return;
+ }
- my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-d', $snap, $path];
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ my $snappath = $snapshots->{$snap}->{file};
+ my $snap_volname = $snapshots->{$snap}->{volname};
+ die "volume $snappath is missing" if !-e $snappath;
+
+ my $parentsnap = $snapshots->{$snap}->{parent};
+ my $childsnap = $snapshots->{$snap}->{child};
+ my $childpath = $snapshots->{$childsnap}->{file};
+
+ #if this is the first snapshot, as it should be the biggest, we commit the child into it and rename the snapshot to the child
+ if (!$parentsnap) {
+ print "$volname: deleting snapshot '$snap' by committing snapshot '$childsnap'\n";
+ print "running 'qemu-img commit $childpath'\n";
+ $cmd = ['/usr/bin/qemu-img', 'commit', $childpath];
+ eval { run_command($cmd) };
+ if ($@) {
+ warn
+ "The state of $snap is now invalid. Don't try to clone or roll it back. You can only try to delete it again later\n";
+ die "error committing $childsnap to $snap; $@\n";
+ }
+
+ print "rename $snappath to $childpath\n";
+ rename($snappath, $childpath)
+ || die "rename '$snappath' to '$childpath' failed - $!\n";
- run_command($cmd);
+ } else {
+ #we rebase the child image on the parent as new backing image
+ my $parentpath = $snapshots->{$parentsnap}->{file};
+ print
+ "$volname: deleting snapshot '$snap' by rebasing '$childsnap' on top of '$parentsnap'\n";
+ print "running 'qemu-img rebase -b $parentpath -F qcow2 -f qcow2 $childpath'\n";
+ $cmd = [
+ '/usr/bin/qemu-img',
+ 'rebase',
+ '-b',
+ $parentpath,
+ '-F',
+ 'qcow2',
+ '-f',
+ 'qcow2',
+ $childpath,
+ ];
+ eval { run_command($cmd) };
+ if ($@) {
+ #in case of abort, the state of the snap is still clean, just a little bit bigger
+ die "error rebasing $childsnap onto $parentsnap; $@\n";
+ }
+ #delete the old snapshot file (not part of the backing chain anymore)
+ eval { free_snap_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ die "error deleting old snapshot volume $snap_volname: $@\n";
+ }
+ }
+
+ } else {
+
+ return 1 if $running;
+
+ my $path = $class->filesystem_path($scfg, $volname);
+ $class->deactivate_volume($storeid, $scfg, $volname, $snap, {});
+
+ $cmd = ['/usr/bin/qemu-img', 'snapshot', '-d', $snap, $path];
+ run_command($cmd);
+ }
return undef;
}
@@ -1484,7 +1646,53 @@ sub status {
sub volume_snapshot_info {
my ($class, $scfg, $storeid, $volname) = @_;
- die "volume_snapshot_info is not implemented for $class";
+ my $path = $class->filesystem_path($scfg, $volname);
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ my $json = PVE::Storage::Common::qemu_img_info($path, undef, 10, 1);
+ die "failed to query file information with qemu-img\n" if !$json;
+ my $json_decode = eval { decode_json($json) };
+ if ($@) {
+ die "Can't decode qemu snapshot list. Invalid JSON: $@\n";
+ }
+ my $info = {};
+ my $order = 0;
+ if (ref($json_decode) eq 'HASH') {
+ #with internal snapshots, the decoded output is a hashref
+ my $snapshots = $json_decode->{snapshots};
+ for my $snap (@$snapshots) {
+ my $snapname = $snap->{name};
+ $info->{$snapname}->{order} = $snap->{id};
+ $info->{$snapname}->{timestamp} = $snap->{'date-sec'};
+
+ }
+ } elsif (ref($json_decode) eq 'ARRAY') {
+ #with no snapshots or with external snapshots, the decoded output is an arrayref
+ my $snapshots = $json_decode;
+ for my $snap (@$snapshots) {
+ my $snapfile = $snap->{filename};
+ my $snapname = parse_snapname($snapfile);
+ $snapname = 'current' if !$snapname;
+ my $snapvolname = $class->get_snap_volname($volname, $snapname);
+
+ $info->{$snapname}->{order} = $order;
+ $info->{$snapname}->{file} = $snapfile;
+ $info->{$snapname}->{volname} = "$snapvolname";
+ $info->{$snapname}->{volid} = "$storeid:$snapvolname";
+ $info->{$snapname}->{ext} = 1;
+
+ my $parentfile = $snap->{'backing-filename'};
+ if ($parentfile) {
+ my $parentname = parse_snapname($parentfile);
+ $info->{$snapname}->{parent} = $parentname;
+ $info->{$parentname}->{child} = $snapname;
+ }
+ $order++;
+ }
+ }
+
+ return $info;
}
sub activate_storage {
@@ -2004,7 +2212,7 @@ sub qemu_blockdev_options {
# the snapshot alone.
my $format = ($class->parse_volname($volname))[6];
die "cannot attach only the snapshot of a '$format' image\n"
- if $options->{'snapshot-name'} && ($format eq 'qcow2' || $format eq 'qed');
+ if $options->{'snapshot-name'} && ($format eq 'qcow2' && !$scfg->{snapext} || $format eq 'qed');
# The 'file' driver only works for regular files. The check below is taken from
# block/file-posix.c:hdev_probe_device() in QEMU. Do not bother with detecting 'host_cdrom'
@@ -2108,4 +2316,31 @@ sub config_aware_base_mkdir {
}
}
+sub get_snap_name {
+ my ($class, $volname, $snapname) = @_;
+
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+ $name = !$snapname || $snapname eq 'current' ? $name : "snap-$snapname-$name";
+ return $name;
+}
+
+sub get_snap_volname {
+ my ($class, $volname, $snapname) = @_;
+
+ my $vmid = ($class->parse_volname($volname))[2];
+ my $name = $class->get_snap_name($volname, $snapname);
+ return "$vmid/$name";
+}
+
+sub parse_snapname {
+ my ($name) = @_;
+
+ my $basename = basename($name);
+ if ($basename =~ m/^snap-(.*)-vm(.*)$/) {
+ return $1;
+ }
+ return undef;
+}
+
1;
--
2.39.5
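The two offline delete paths in this patch (commit the child into the base and rename when deleting the first snapshot, rebase the child onto the parent otherwise) can be modeled as operations on an oldest-first backing chain. A rough Python model of the bookkeeping, not the actual storage code:

```python
def delete_snapshot(chain, name):
    """chain: oldest-first list of dicts {'name': ..., 'backing': ...};
    the writable 'current' image is always last, so a child exists."""
    i = next(k for k, img in enumerate(chain) if img['name'] == name)
    child = chain[i + 1]
    if i == 0:
        # base image: qemu-img commit merges the child's data down,
        # then the merged file is renamed to the child's path
        child['backing'] = None
        chain.pop(0)
    else:
        # middle image: qemu-img rebase copies the needed clusters and
        # points the child at the grandparent; the old file is unlinked
        child['backing'] = chain[i - 1]['name']
        chain.pop(i)
    return chain
```

Either way the invariant holds: after deletion, every remaining image's backing pointer still names its predecessor in the chain.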
* [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
` (10 preceding siblings ...)
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support Alexandre Derumier via pve-devel
@ 2025-07-04 6:45 ` Alexandre Derumier via pve-devel
2025-07-04 11:51 ` Fabian Grünbichler
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 10/10] storage : add volume_support_qemu_snapshot Alexandre Derumier via pve-devel
12 siblings, 1 reply; 46+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:45 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 25977 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot
Date: Fri, 4 Jul 2025 08:45:06 +0200
Message-ID: <20250704064507.511884-13-alexandre.derumier@groupe-cyllene.com>
we format the lvm logical volume with qcow2 to handle the snapshot chain.
as with a qcow2 file, when a snapshot is taken, the current lvm volume
is renamed to the snap volname, and a new current lvm volume is created
with the snap volname as backing file
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/LVMPlugin.pm | 493 +++++++++++++++++++++++++++++------
1 file changed, 412 insertions(+), 81 deletions(-)
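Formatting an activated LV with qcow2 boils down to a qemu-img create invocation whose shape depends on whether a backing snapshot is given (with a backing file, the size is inherited from it). A sketch of the argv assembly; the device paths and helper name are illustrative, not taken from the patch:

```python
def qcow2_create_argv(path, size_kib=None, backing=None, options=()):
    # with a backing file the new image inherits its virtual size,
    # so no explicit size argument is appended in that case
    cmd = ['/usr/bin/qemu-img', 'create', '-f', 'qcow2']
    if backing:
        cmd += ['-b', backing, '-F', 'qcow2']
    if options:
        cmd += ['-o', ','.join(options)]
    cmd.append(path)
    if size_kib is not None and not backing:
        cmd.append(f'{size_kib}K')
    return cmd
```

Note that `-F qcow2` pins the backing format explicitly, avoiding format probing on the raw block device.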
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 3d07260..ef010b8 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -11,6 +11,8 @@ use PVE::JSONSchema qw(get_standard_option);
use PVE::Storage::Common;
+use JSON;
+
use base qw(PVE::Storage::Plugin);
# lvm helper functions
@@ -267,6 +269,74 @@ sub lvm_list_volumes {
return $lvs;
}
+sub free_lvm_volumes {
+ my ($class, $scfg, $storeid, $volnames) = @_;
+
+ my $vg = $scfg->{vgname};
+
+ # we need to zero out LVM data for security reasons
+ # and to allow thin provisioning
+ my $zero_out_worker = sub {
+ # wipe throughput up to 10MB/s by default; may be overwritten with saferemove_throughput
+ my $throughput = '-10485760';
+ if ($scfg->{saferemove_throughput}) {
+ $throughput = $scfg->{saferemove_throughput};
+ }
+ for my $name (@$volnames) {
+ print "zero-out data on image $name (/dev/$vg/del-$name)\n";
+
+ my $cmd = [
+ '/usr/bin/cstream',
+ '-i',
+ '/dev/zero',
+ '-o',
+ "/dev/$vg/del-$name",
+ '-T',
+ '10',
+ '-v',
+ '1',
+ '-b',
+ '1048576',
+ '-t',
+ "$throughput",
+ ];
+ eval {
+ run_command(
+ $cmd,
+ errmsg => "zero out finished (note: 'No space left on device' is ok here)",
+ );
+ };
+ warn $@ if $@;
+
+ $class->cluster_lock_storage(
+ $storeid,
+ $scfg->{shared},
+ undef,
+ sub {
+ my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$name"];
+ run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
+ },
+ );
+ print "successfully removed volume $name ($vg/del-$name)\n";
+ }
+ };
+
+ if ($scfg->{saferemove}) {
+ for my $name (@$volnames) {
+ # avoid long running task, so we only rename here
+ my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
+ run_command($cmd, errmsg => "lvrename '$vg/$name' error");
+ }
+ return $zero_out_worker;
+ } else {
+ for my $name (@$volnames) {
+ my $tmpvg = $scfg->{vgname};
+ my $cmd = ['/sbin/lvremove', '-f', "$tmpvg/$name"];
+ run_command($cmd, errmsg => "lvremove '$tmpvg/$name' error");
+ }
+ }
+}
+
# Configuration
sub type {
@@ -276,6 +346,7 @@ sub type {
sub plugindata {
return {
content => [{ images => 1, rootdir => 1 }, { images => 1 }],
+ format => [{ raw => 1, qcow2 => 1 }, 'raw'],
'sensitive-properties' => {},
};
}
@@ -354,7 +425,10 @@ sub parse_volname {
PVE::Storage::Plugin::parse_lvm_name($volname);
if ($volname =~ m/^(vm-(\d+)-\S+)$/) {
- return ('images', $1, $2, undef, undef, undef, 'raw');
+ my $name = $1;
+ my $vmid = $2;
+ my $format = $volname =~ m/\.qcow2$/ ? 'qcow2' : 'raw';
+ return ('images', $name, $vmid, undef, undef, undef, $format);
}
die "unable to parse lvm volume name '$volname'\n";
@@ -363,17 +437,29 @@ sub parse_volname {
sub filesystem_path {
my ($class, $scfg, $volname, $snapname) = @_;
- die "lvm snapshot is not implemented" if defined($snapname);
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
- my ($vtype, $name, $vmid) = $class->parse_volname($volname);
+ die "snapshots only work with the qcow2 format" if defined($snapname) && $format ne 'qcow2';
my $vg = $scfg->{vgname};
+ $name = $class->get_snap_name($volname, $snapname) if $snapname;
my $path = "/dev/$vg/$name";
return wantarray ? ($path, $vmid, $vtype) : $path;
}
+sub qemu_blockdev_options {
+ my ($class, $scfg, $storeid, $volname, $machine_version, $options) = @_;
+
+ my ($path) = $class->path($scfg, $volname, $storeid, $options->{'snapshot-name'});
+
+ my $blockdev = { driver => 'host_device', filename => $path };
+
+ return $blockdev;
+}
+
sub create_base {
my ($class, $storeid, $scfg, $volname) = @_;
@@ -395,7 +481,11 @@ sub find_free_diskname {
my $disk_list = [keys %{ $lvs->{$vg} }];
- return PVE::Storage::Plugin::get_next_vm_diskname($disk_list, $storeid, $vmid, undef, $scfg);
+ $add_fmt_suffix = $fmt eq 'qcow2' ? 1 : undef;
+
+ return PVE::Storage::Plugin::get_next_vm_diskname(
+ $disk_list, $storeid, $vmid, $fmt, $scfg, $add_fmt_suffix,
+ );
}
sub lvcreate {
@@ -423,13 +513,46 @@ sub lvrename {
);
}
-sub alloc_image {
- my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
+my sub lvm_qcow2_format {
+ my ($class, $storeid, $scfg, $name, $fmt, $backing_snap, $size) = @_;
+
+ die "Can't format the volume, the format is not qcow2" if $fmt ne 'qcow2';
+
+ $class->activate_volume($storeid, $scfg, $name);
+ my $path = $class->path($scfg, $name, $storeid);
+ my $backing_path = $class->path($scfg, $name, $storeid, $backing_snap) if $backing_snap;
+ $size = undef if $backing_snap;
+ PVE::Storage::Common::qemu_img_create($scfg, 'qcow2', $size, $path, $backing_path);
+
+}
+
+my sub calculate_lvm_size {
+ my ($size, $fmt, $backing_snap) = @_;
+ #input size = qcow2 image size in kb
+
+ return $size if $fmt ne 'qcow2';
- die "unsupported format '$fmt'" if $fmt ne 'raw';
+ my $options = $backing_snap ? ['extended_l2=on', 'cluster_size=128k'] : [];
+
+ my $json = PVE::Storage::Common::qemu_img_measure($size, $fmt, 5, $options);
+ die "failed to query file information with qemu-img measure\n" if !$json;
+ my $info = eval { decode_json($json) };
+ if ($@) {
+ die "Invalid JSON: $@\n";
+ }
+
+ die "Missing fully-allocated value from json" if !$info->{'fully-allocated'};
+
+ return $info->{'fully-allocated'} / 1024;
+}
+
+my sub alloc_lvm_image {
+ my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size, $backing_snap) = @_;
+
+ die "unsupported format '$fmt'" if $fmt !~ m/(raw|qcow2)/;
die "illegal name '$name' - should be 'vm-$vmid-*'\n"
- if $name && $name !~ m/^vm-$vmid-/;
+ if $name !~ m/^vm-$vmid-/;
my $vgs = lvm_vgs();
@@ -438,86 +561,98 @@ sub alloc_image {
die "no such volume group '$vg'\n" if !defined($vgs->{$vg});
my $free = int($vgs->{$vg}->{free});
+ my $lvmsize = calculate_lvm_size($size, $fmt, $backing_snap);
die "not enough free space ($free < $size)\n" if $free < $size;
- $name = $class->find_free_diskname($storeid, $scfg, $vmid)
+ my $tags = ["pve-vm-$vmid"];
+ #tag all snapshot volumes with the main volume tag for easier activation of the whole group
+ push @$tags, "\@pve-$name" if $fmt eq 'qcow2';
+ lvcreate($vg, $name, $lvmsize, $tags);
+
+ return if $fmt ne 'qcow2';
+
+ #format the lvm volume with qcow2 format
+ eval { lvm_qcow2_format($class, $storeid, $scfg, $name, $fmt, $backing_snap, $size) };
+ if ($@) {
+ my $err = $@;
+ #no need for safe cleanup as the volume is still empty
+ eval {
+ my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
+ run_command($cmd, errmsg => "lvremove '$vg/$name' error");
+ };
+ die $err;
+ }
+
+}
+
+sub alloc_image {
+ my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
+
+ $name = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt)
if !$name;
- lvcreate($vg, $name, $size, ["pve-vm-$vmid"]);
+ alloc_lvm_image($class, $storeid, $scfg, $vmid, $fmt, $name, $size);
return $name;
}
-sub free_image {
- my ($class, $storeid, $scfg, $volname, $isBase) = @_;
+sub alloc_snap_image {
+ my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
- my $vg = $scfg->{vgname};
+ my ($vmid, $format) = ($class->parse_volname($volname))[2, 6];
+ my $path = $class->path($scfg, $volname, $storeid, $backing_snap);
- # we need to zero out LVM data for security reasons
- # and to allow thin provisioning
+ #we need to use the same size as the backing image's qcow2 virtual-size
+ my $size = PVE::Storage::Plugin::file_size_info($path, 5, $format);
+ $size = $size / 1024; #we use kb in lvcreate
- my $zero_out_worker = sub {
- print "zero-out data on image $volname (/dev/$vg/del-$volname)\n";
+ alloc_lvm_image($class, $storeid, $scfg, $vmid, $format, $volname, $size, $backing_snap);
+}
- # wipe throughput up to 10MB/s by default; may be overwritten with saferemove_throughput
- my $throughput = '-10485760';
- if ($scfg->{saferemove_throughput}) {
- $throughput = $scfg->{saferemove_throughput};
- }
+sub free_snap_image {
+ my ($class, $storeid, $scfg, $volname, $snap) = @_;
- my $cmd = [
- '/usr/bin/cstream',
- '-i',
- '/dev/zero',
- '-o',
- "/dev/$vg/del-$volname",
- '-T',
- '10',
- '-v',
- '1',
- '-b',
- '1048576',
- '-t',
- "$throughput",
- ];
- eval {
- run_command(
- $cmd,
- errmsg => "zero out finished (note: 'No space left on device' is ok here)",
- );
- };
- warn $@ if $@;
+ #activate only the snapshot volume
+ my $path = $class->path($scfg, $volname, $storeid, $snap);
+ my $cmd = ['/sbin/lvchange', '-aly', $path];
+ run_command($cmd, errmsg => "can't activate LV '$path' to zero-out its data");
+ $cmd = ['/sbin/lvchange', '--refresh', $path];
+ run_command($cmd, errmsg => "can't refresh LV '$path' to zero-out its data");
- $class->cluster_lock_storage(
- $storeid,
- $scfg->{shared},
- undef,
- sub {
- my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$volname"];
- run_command($cmd, errmsg => "lvremove '$vg/del-$volname' error");
- },
- );
- print "successfully removed volume $volname ($vg/del-$volname)\n";
- };
+ my $snap_volname = $class->get_snap_volname($volname, $snap);
+ return $class->free_lvm_volumes($scfg, $storeid, [$snap_volname]);
+}
- my $cmd = ['/sbin/lvchange', '-aly', "$vg/$volname"];
- run_command($cmd, errmsg => "can't activate LV '$vg/$volname' to zero-out its data");
- $cmd = ['/sbin/lvchange', '--refresh', "$vg/$volname"];
- run_command($cmd, errmsg => "can't refresh LV '$vg/$volname' to zero-out its data");
+sub free_image {
+ my ($class, $storeid, $scfg, $volname, $isBase, $format) = @_;
- if ($scfg->{saferemove}) {
- # avoid long running task, so we only rename here
- $cmd = ['/sbin/lvrename', $vg, $volname, "del-$volname"];
- run_command($cmd, errmsg => "lvrename '$vg/$volname' error");
- return $zero_out_worker;
- } else {
- my $tmpvg = $scfg->{vgname};
- $cmd = ['/sbin/lvremove', '-f', "$tmpvg/$volname"];
- run_command($cmd, errmsg => "lvremove '$tmpvg/$volname' error");
+ my $name = ($class->parse_volname($volname))[1];
+
+ #activate volumes && snapshot volumes
+ my $path = $class->path($scfg, $volname, $storeid);
+ $path = "\@pve-$name" if $format && $format eq 'qcow2';
+ my $cmd = ['/sbin/lvchange', '-aly', $path];
+ run_command($cmd, errmsg => "can't activate LV '$path' to zero-out its data");
+ $cmd = ['/sbin/lvchange', '--refresh', $path];
+ run_command($cmd, errmsg => "can't refresh LV '$path' to zero-out its data");
+
+ my $volnames = [];
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ for my $snapid (
+ sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} }
+ keys %$snapshots
+ ) {
+ my $snap = $snapshots->{$snapid};
+ next if $snapid eq 'current';
+ next if !$snap->{volid};
+ next if !$snap->{ext};
+ my ($snap_storeid, $snap_volname) = PVE::Storage::parse_volume_id($snap->{volid});
+ push @$volnames, $snap_volname;
}
+ push @$volnames, $volname;
- return undef;
+ return $class->free_lvm_volumes($scfg, $storeid, $volnames);
}
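The deletion order implemented in `free_image` above (external snapshot volumes newest-first, main volume last) can be sketched language-neutrally. The Python helper below is purely illustrative — the `volname` key stands in for the volume name parsed out of `volid` in the Perl code:

```python
def volumes_to_free(volname, snapshots):
    """Order external snapshot volumes newest-first (mirroring the
    descending 'order' sort in free_image), with the main volume last."""
    names = []
    for snapid in sorted(
        (k for k in snapshots if k != "current"),
        key=lambda k: snapshots[k]["order"],
        reverse=True,
    ):
        snap = snapshots[snapid]
        # skip entries without a backing volume or that are not external
        if snap.get("volid") and snap.get("ext"):
            names.append(snap["volname"])
    names.append(volname)
    return names
```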
my $check_tags = sub {
@@ -624,6 +759,12 @@ sub activate_volume {
my $lvm_activate_mode = 'ey';
+ #activate volume && all snapshots volumes by tag
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ $path = "\@pve-$name" if $format eq 'qcow2';
+
my $cmd = ['/sbin/lvchange', "-a$lvm_activate_mode", $path];
run_command($cmd, errmsg => "can't activate LV '$path'");
$cmd = ['/sbin/lvchange', '--refresh', $path];
@@ -636,6 +777,10 @@ sub deactivate_volume {
my $path = $class->path($scfg, $volname, $storeid, $snapname);
return if !-b $path;
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+ $path = "\@pve-$name" if $format eq 'qcow2';
+
my $cmd = ['/sbin/lvchange', '-aln', $path];
run_command($cmd, errmsg => "can't deactivate LV '$path'");
}
@@ -643,10 +788,14 @@ sub deactivate_volume {
sub volume_resize {
my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
- $size = ($size / 1024 / 1024) . "M";
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ my $lvmsize = calculate_lvm_size($size / 1024, $format);
+ $lvmsize = "${lvmsize}k";
my $path = $class->path($scfg, $volname);
- my $cmd = ['/sbin/lvextend', '-L', $size, $path];
+ my $cmd = ['/sbin/lvextend', '-L', $lvmsize, $path];
$class->cluster_lock_storage(
$storeid,
@@ -657,12 +806,18 @@ sub volume_resize {
},
);
+ if (!$running && $format eq 'qcow2') {
+ my $prealloc_opt = PVE::Storage::Plugin::preallocation_cmd_option($scfg, $format);
+ my $cmd = ['/usr/bin/qemu-img', 'resize', "--$prealloc_opt", '-f', $format, $path, $size];
+ run_command($cmd, timeout => 10);
+ }
+
return 1;
}
sub volume_size_info {
- my ($class, $scfg, $storeid, $volname, $timeout) = @_;
- my $path = $class->filesystem_path($scfg, $volname);
+ my ($class, $scfg, $storeid, $volname, $timeout, $snap) = @_;
+ my $path = $class->filesystem_path($scfg, $volname, $snap);
my $cmd = [
'/sbin/lvs',
@@ -693,30 +848,191 @@ sub volume_size_info {
sub volume_snapshot {
my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
- die "lvm snapshot is not implemented";
+ my ($vmid, $format) = ($class->parse_volname($volname))[2, 6];
+
+ die "can't snapshot this image format\n" if $format ne 'qcow2';
+
+ if ($running) {
+ #rename with blockdev-reopen is done at qemu level when running
+ $class->alloc_snap_image($storeid, $scfg, $volname, $snap);
+ if ($@) {
+ die "can't allocate new volume $volname: $@\n";
+ }
+ return;
+ }
+
+ #rename current volume to snap volume
+ eval { $class->rename_volume($scfg, $storeid, $volname, $vmid, undef, 'current', $snap) };
+ die "error renaming $volname to $snap\n" if $@;
+
+ eval { $class->alloc_snap_image($storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ my $err = $@;
+ eval { $class->rename_volume($scfg, $storeid, $volname, $vmid, undef, $snap, 'current') };
+ die $err;
+ }
+}
+
+sub volume_rollback_is_possible {
+ my ($class, $scfg, $storeid, $volname, $snap, $blockers) = @_;
+
+ my $snap_path = $class->path($scfg, $volname, $storeid, $snap);
+
+ $class->activate_volume($storeid, $scfg, $volname, undef, {});
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ my $parent_snap = $snapshots->{current}->{parent};
+
+ return 1 if $parent_snap eq $snap;
+ die "can't rollback, '$snap' is not most recent snapshot on '$volname'\n";
+
+ return 1;
}
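The rollback constraint enforced above — only the snapshot directly backing the current image can be rolled back — can be stated compactly. A hypothetical Python sketch of that check (not part of the patch):

```python
def rollback_is_possible(snapshots, snap):
    """Rollback is only allowed onto the snapshot that directly backs
    'current', i.e. the most recent snapshot in the qcow2 chain."""
    parent = snapshots["current"]["parent"]
    if parent != snap:
        raise ValueError(
            f"can't rollback, '{snap}' is not the most recent snapshot"
        )
    return True
```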
sub volume_snapshot_rollback {
my ($class, $scfg, $storeid, $volname, $snap) = @_;
- die "lvm snapshot rollback is not implemented";
+ my $format = ($class->parse_volname($volname))[6];
+
+ die "can't rollback snapshot for this image format\n" if $format ne 'qcow2';
+
+ $class->activate_volume($storeid, $scfg, $volname, undef, {});
+
+ # we can simply reformat the current lvm volume to avoid
+ # a long safe remove (not needed here, as the allocated space
+ # still belongs to the same owner)
+ eval { lvm_qcow2_format($class, $storeid, $scfg, $volname, $format, $snap) };
+ if ($@) {
+ die "can't rollback. Error reformatting current $volname\n";
+ }
+ return undef;
}
sub volume_snapshot_delete {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
+
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ die "can't delete snapshot for this image format\n" if $format ne 'qcow2';
+
+ if ($running) {
+ my $cleanup_worker = eval { $class->free_snap_image($storeid, $scfg, $volname, $snap); };
+ die "error deleting snapshot $snap $@\n" if $@;
+
+ if ($cleanup_worker) {
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+ $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+ }
+ return;
+ }
+
+ my $cmd = "";
+ my $path = $class->filesystem_path($scfg, $volname);
+
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ my $snappath = $snapshots->{$snap}->{file};
+ my $snapvolname = $snapshots->{$snap}->{volname};
+ die "volume $snappath is missing\n" if !-e $snappath;
+
+ my $parentsnap = $snapshots->{$snap}->{parent};
+
+ my $childsnap = $snapshots->{$snap}->{child};
+ my $childpath = $snapshots->{$childsnap}->{file};
+ my $childvolname = $snapshots->{$childsnap}->{volname};
+
+ my $cleanup_worker = undef;
+ my $err = undef;
+ # if it's the first snapshot (usually the biggest one), we commit the child
+ # into it and rename the snapshot volume to the child's name
+ if (!$parentsnap) {
+ print "$volname: deleting snapshot '$snap' by committing snapshot '$childsnap'\n";
+ print "running 'qemu-img commit $childpath'\n";
+ #can't use -d here, as it's an lvm volume
+ $cmd = ['/usr/bin/qemu-img', 'commit', $childpath];
+ eval { run_command($cmd) };
+ if ($@) {
+ warn
+ "The state of $snap is now invalid. Don't try to clone or roll it back. You can only try to delete it again later.\n";
+ die "error committing $childsnap to $snap; $@\n";
+ }
+ print "delete $childvolname\n";
+ $cleanup_worker = eval { $class->free_snap_image($storeid, $scfg, $volname, $childsnap) };
+ if ($@) {
+ die "error deleting old snapshot volume $childvolname: $@\n";
+ }
+
+ print "rename $snapvolname to $childvolname\n";
+ my $vg = $scfg->{vgname};
+ eval { lvrename($vg, $snapvolname, $childvolname) };
+ if ($@) {
+ warn $@;
+ $err = "error renaming snapshot: $@\n";
+ }
- die "lvm snapshot delete is not implemented";
+ } else {
+ #we rebase the child image onto the parent as its new backing image
+ my $parentpath = $snapshots->{$parentsnap}->{file};
+ print
+ "$volname: deleting snapshot '$snap' by rebasing '$childsnap' on top of '$parentsnap'\n";
+ print "running 'qemu-img rebase -b $parentpath -F qcow2 -f qcow2 $childpath'\n";
+ $cmd = [
+ '/usr/bin/qemu-img',
+ 'rebase',
+ '-b',
+ $parentpath,
+ '-F',
+ 'qcow2',
+ '-f',
+ 'qcow2',
+ $childpath,
+ ];
+ eval { run_command($cmd) };
+ if ($@) {
+ #in case of abort, the state of the snap is still clean, just a little bit bigger
+ die "error rebasing $childsnap onto $parentsnap; $@\n";
+ }
+ #delete the snapshot
+ eval { $cleanup_worker = $class->free_snap_image($storeid, $scfg, $volname, $snap); };
+ if ($@) {
+ die "error deleting old snapshot volume $snapvolname: $@\n";
+ }
+ }
+ if ($cleanup_worker) {
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+ $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+ }
+
+ die $err if $err;
}
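The two deletion paths above — `qemu-img commit` for the first snapshot in the chain, `qemu-img rebase` for any later one — boil down to a single decision on the snapshot's position. A hypothetical Python sketch of that planning step (return values are illustrative tuples, not part of the patch):

```python
def snapshot_delete_plan(snapshots, snap):
    """Pick the qemu-img operation used to drop `snap` from a qcow2
    backing chain: the first snapshot (no parent) absorbs its child via
    'commit' and is then renamed; a middle snapshot is removed by
    'rebase'-ing its child onto the snapshot's own parent."""
    info = snapshots[snap]
    if info.get("parent") is None:
        # commit child into snap, then rename snap to the child's name
        return ("commit", info["child"], snap)
    # rebase child onto snap's parent, then free snap's volume
    return ("rebase", info["child"], info["parent"])
```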
sub volume_has_feature {
my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
my $features = {
- copy => { base => 1, current => 1 },
- rename => { current => 1 },
+ copy => {
+ base => { qcow2 => 1, raw => 1 },
+ current => { qcow2 => 1, raw => 1 },
+ snap => { qcow2 => 1 },
+ },
+ 'rename' => {
+ current => { qcow2 => 1, raw => 1 },
+ },
+ snapshot => {
+ current => { qcow2 => 1 },
+ snap => { qcow2 => 1 },
+ },
+ # fixme: add later ? (we need to handle basepath, volume activation,...)
+ # template => {
+ # current => { raw => 1, qcow2 => 1},
+ # },
+ # clone => {
+ # base => { qcow2 => 1 },
+ # },
};
- my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
my $key = undef;
if ($snapname) {
@@ -724,7 +1040,7 @@ sub volume_has_feature {
} else {
$key = $isBase ? 'base' : 'current';
}
- return 1 if $features->{$feature}->{$key};
+ return 1 if defined($features->{$feature}->{$key}->{$format});
return undef;
}
@@ -868,4 +1184,19 @@ sub rename_volume {
return "${storeid}:${target_volname}";
}
+sub get_snap_name {
+ my ($class, $volname, $snapname) = @_;
+
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+ $name = !$snapname || $snapname eq 'current' ? $name : "snap-$snapname-$name";
+ return $name;
+}
+
+sub get_snap_volname {
+ my ($class, $volname, $snapname) = @_;
+
+ return $class->get_snap_name($volname, $snapname);
+}
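The naming scheme used by `get_snap_name` above keeps the live image under its plain name and prefixes snapshot volumes with `snap-<snapname>-`. A Python mirror of that rule, for illustration only:

```python
def get_snap_name(name, snapname):
    """Mirror of the Perl helper: 'current' (or no snapshot) maps to the
    plain volume name, snapshots get the 'snap-<snapname>-' prefix."""
    if not snapname or snapname == "current":
        return name
    return f"snap-{snapname}-{name}"
```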
+
1;
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 46+ messages in thread
* [pve-devel] [PATCH pve-storage 10/10] storage : add volume_support_qemu_snapshot
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
` (11 preceding siblings ...)
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot Alexandre Derumier via pve-devel
@ 2025-07-04 6:45 ` Alexandre Derumier via pve-devel
2025-07-04 11:51 ` Fabian Grünbichler
12 siblings, 1 reply; 46+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-04 6:45 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 6532 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 10/10] storage : add volume_support_qemu_snapshot
Date: Fri, 4 Jul 2025 08:45:07 +0200
Message-ID: <20250704064507.511884-14-alexandre.derumier@groupe-cyllene.com>
Returns whether the volume supports qemu snapshots:
'internal' : do the snapshot with qemu internal snapshot
'external' : do the snapshot with qemu external snapshot
undef : does not support qemu snapshot
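The per-plugin behaviour this patch introduces can be summarized as a small decision table. The Python sketch below is a hedged illustration of that table (plugin types and the `snapext`/`krbd` options come from the patches; the function itself is not part of the series):

```python
def volume_support_qemu_snapshot(plugin_type, fmt, snapext=False, krbd=False):
    """Illustrative decision table: which qemu snapshot method a volume
    supports ('internal', 'external', or None for unsupported)."""
    if plugin_type == "dir":
        if fmt != "qcow2":
            return None
        # directory storages choose external snapshots via the snapext option
        return "external" if snapext else "internal"
    if plugin_type == "lvm":
        # qcow2-formatted LVs only support external snapshot chains
        return "external" if fmt == "qcow2" else None
    if plugin_type == "rbd":
        # RBD supports internal snapshots unless accessed through krbd
        return None if krbd else "internal"
    # generic Plugin.pm fallback
    return "internal" if fmt == "qcow2" else None
```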
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage.pm | 19 +++++++++++++++++++
src/PVE/Storage/DirPlugin.pm | 10 ++++++++++
src/PVE/Storage/LVMPlugin.pm | 7 +++++++
src/PVE/Storage/Plugin.pm | 13 +++++++++++++
src/PVE/Storage/RBDPlugin.pm | 6 ++++++
5 files changed, 55 insertions(+)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index d83770c..e794f7b 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -2350,6 +2350,25 @@ sub rename_volume {
);
}
+# Returns the method type to take a snapshot with qemu:
+# 'internal' : supports snapshot with qemu internal snapshot
+# 'external' : supports snapshot with qemu external snapshot
+# undef : doesn't support qemu snapshot
+sub volume_support_qemu_snapshot {
+ my ($cfg, $volid) = @_;
+
+ my ($storeid, $volname) = parse_volume_id($volid, 1);
+
+ if ($storeid) {
+ my $scfg = storage_config($cfg, $storeid);
+
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+ return $plugin->volume_support_qemu_snapshot($storeid, $scfg, $volname);
+ }
+ return undef;
+}
+
# Various io-heavy operations require io/bandwidth limits which can be
# configured on multiple levels: The global defaults in datacenter.cfg, and
# per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index ae5d083..0f8776b 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -315,4 +315,14 @@ sub get_import_metadata {
};
}
+sub volume_support_qemu_snapshot {
+ my ($class, $storeid, $scfg, $volname) = @_;
+
+ my $format = ($class->parse_volname($volname))[6];
+ return if $format ne 'qcow2';
+
+ my $type = $scfg->{snapext} ? 'external' : 'internal';
+ return $type;
+}
+
1;
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index ef010b8..3e3e48c 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -1199,4 +1199,11 @@ sub get_snap_volname {
return $class->get_snap_name($volname, $snapname);
}
+sub volume_support_qemu_snapshot {
+ my ($class, $storeid, $scfg, $volname) = @_;
+
+ my $format = ($class->parse_volname($volname))[6];
+ return 'external' if $format eq 'qcow2';
+}
+
1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 68d17ff..bf190d2 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -2292,6 +2292,19 @@ sub qemu_blockdev_options {
return $blockdev;
}
+
+# Returns the method type to take a snapshot with qemu:
+# 'internal' : supports snapshot with qemu internal snapshot
+# 'external' : supports snapshot with qemu external snapshot
+# undef : doesn't support qemu snapshot
+
+sub volume_support_qemu_snapshot {
+ my ($class, $storeid, $scfg, $volname) = @_;
+
+ my $format = ($class->parse_volname($volname))[6];
+ return 'internal' if $format eq 'qcow2';
+}
+
# Used by storage plugins for external backup providers. See PVE::BackupProvider::Plugin for the API
# the provider needs to implement.
#
diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
index 883b0e4..8065cfc 100644
--- a/src/PVE/Storage/RBDPlugin.pm
+++ b/src/PVE/Storage/RBDPlugin.pm
@@ -1057,4 +1057,10 @@ sub rename_volume {
return "${storeid}:${base_name}${target_volname}";
}
+sub volume_support_qemu_snapshot {
+ my ($class, $storeid, $scfg, $volname) = @_;
+
+ return 'internal' if !$scfg->{krbd};
+}
+
1;
--
2.39.5
* Re: [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot Alexandre Derumier via pve-devel
@ 2025-07-04 11:51 ` Fabian Grünbichler
2025-07-09 7:24 ` DERUMIER, Alexandre via pve-devel
2025-07-09 8:06 ` DERUMIER, Alexandre via pve-devel
0 siblings, 2 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-04 11:51 UTC (permalink / raw)
To: Proxmox VE development discussion
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:45 CEST geschrieben:
> we format the lvm logical volume with qcow2 to handle the snapshot chain.
>
> like for a qcow2 file, when a snapshot is taken, the current lvm volume
> is renamed to the snap volname, and a new current lvm volume is created
> with the snap volume as backing file
list_images always returns format 'raw', this needs to be fixed..
the size probably should be queried as well for qcow2-formatted volumes
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/Storage/LVMPlugin.pm | 493 +++++++++++++++++++++++++++++------
> 1 file changed, 412 insertions(+), 81 deletions(-)
>
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 3d07260..ef010b8 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -11,6 +11,8 @@ use PVE::JSONSchema qw(get_standard_option);
>
> use PVE::Storage::Common;
>
> +use JSON;
> +
> use base qw(PVE::Storage::Plugin);
>
> # lvm helper functions
> @@ -267,6 +269,74 @@ sub lvm_list_volumes {
> return $lvs;
> }
>
> +sub free_lvm_volumes {
shouldn't be public, and should be further below..
> + my ($class, $scfg, $storeid, $volnames) = @_;
> +
> + my $vg = $scfg->{vgname};
this variable here
> +
> + # we need to zero out LVM data for security reasons
> + # and to allow thin provisioning
> + my $zero_out_worker = sub {
> + # wipe throughput up to 10MB/s by default; may be overwritten with saferemove_throughput
> + my $throughput = '-10485760';
> + if ($scfg->{saferemove_throughput}) {
> + $throughput = $scfg->{saferemove_throughput};
> + }
> + for my $name (@$volnames) {
> + print "zero-out data on image $name (/dev/$vg/del-$name)\n";
> +
> + my $cmd = [
> + '/usr/bin/cstream',
> + '-i',
> + '/dev/zero',
> + '-o',
> + "/dev/$vg/del-$name",
> + '-T',
> + '10',
> + '-v',
> + '1',
> + '-b',
> + '1048576',
> + '-t',
> + "$throughput",
> + ];
> + eval {
> + run_command(
> + $cmd,
> + errmsg => "zero out finished (note: 'No space left on device' is ok here)",
> + );
> + };
> + warn $@ if $@;
> +
> + $class->cluster_lock_storage(
> + $storeid,
> + $scfg->{shared},
> + undef,
> + sub {
> + my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$name"];
> + run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
> + },
> + );
> + print "successfully removed volume $name ($vg/del-$name)\n";
> + }
> + };
> +
> + if ($scfg->{saferemove}) {
> + for my $name (@$volnames) {
> + # avoid long running task, so we only rename here
> + my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
> + run_command($cmd, errmsg => "lvrename '$vg/$name' error");
> + }
> + return $zero_out_worker;
> + } else {
> + for my $name (@$volnames) {
> + my $tmpvg = $scfg->{vgname};
could also be used here..
> + my $cmd = ['/sbin/lvremove', '-f', "$tmpvg/$name"];
> + run_command($cmd, errmsg => "lvremove '$tmpvg/$name' error");
> + }
> + }
> +}
> +
> # Configuration
>
> sub type {
> @@ -276,6 +346,7 @@ sub type {
> sub plugindata {
> return {
> content => [{ images => 1, rootdir => 1 }, { images => 1 }],
> + format => [{ raw => 1, qcow2 => 1 }, 'raw'],
> 'sensitive-properties' => {},
> };
> }
> @@ -354,7 +425,10 @@ sub parse_volname {
> PVE::Storage::Plugin::parse_lvm_name($volname);
>
> if ($volname =~ m/^(vm-(\d+)-\S+)$/) {
here we don't have the same issue as in Plugin.pm, thankfully!
and it's only possible with raw storage access to create an LV
ending in .qcow2 that would confuse us down the line, so I think
this is okay..
> - return ('images', $1, $2, undef, undef, undef, 'raw');
> + my $name = $1;
> + my $vmid = $2;
> + my $format = $volname =~ m/\.qcow2$/ ? 'qcow2' : 'raw';
> + return ('images', $name, $vmid, undef, undef, undef, $format);
> }
>
> die "unable to parse lvm volume name '$volname'\n";
> @@ -363,17 +437,29 @@ sub parse_volname {
> sub filesystem_path {
> my ($class, $scfg, $volname, $snapname) = @_;
>
> - die "lvm snapshot is not implemented" if defined($snapname);
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
>
> - my ($vtype, $name, $vmid) = $class->parse_volname($volname);
> + die "snapshot is working with qcow2 format only" if defined($snapname) && $format ne 'qcow2';
>
> my $vg = $scfg->{vgname};
> + $name = $class->get_snap_name($volname, $snapname) if $snapname;
>
> my $path = "/dev/$vg/$name";
>
> return wantarray ? ($path, $vmid, $vtype) : $path;
> }
>
> +sub qemu_blockdev_options {
> + my ($class, $scfg, $storeid, $volname, $machine_version, $options) = @_;
> +
> + my ($path) = $class->path($scfg, $volname, $storeid, $options->{'snapshot-name'});
> +
> + my $blockdev = { driver => 'host_device', filename => $path };
> +
> + return $blockdev;
> +}
> +
> sub create_base {
> my ($class, $storeid, $scfg, $volname) = @_;
>
> @@ -395,7 +481,11 @@ sub find_free_diskname {
>
> my $disk_list = [keys %{ $lvs->{$vg} }];
>
> - return PVE::Storage::Plugin::get_next_vm_diskname($disk_list, $storeid, $vmid, undef, $scfg);
> + $add_fmt_suffix = $fmt eq 'qcow2' ? 1 : undef;
> +
> + return PVE::Storage::Plugin::get_next_vm_diskname(
> + $disk_list, $storeid, $vmid, $fmt, $scfg, $add_fmt_suffix,
> + );
> }
>
> sub lvcreate {
> @@ -423,13 +513,46 @@ sub lvrename {
> );
> }
>
> -sub alloc_image {
> - my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
> +my sub lvm_qcow2_format {
> + my ($class, $storeid, $scfg, $name, $fmt, $backing_snap, $size) = @_;
> +
> + die "Can't format the volume, the format is not qcow2" if $fmt ne 'qcow2';
either drop this (it's already checked at all call sites)
> +
> + $class->activate_volume($storeid, $scfg, $name);
> + my $path = $class->path($scfg, $name, $storeid);
> + my $backing_path = $class->path($scfg, $name, $storeid, $backing_snap) if $backing_snap;
not allowed (post if + declaration)
also, this should probably encode a relative path so that renaming the VG and
adapting the storage.cfg entry works without breaking the back reference?
> + $size = undef if $backing_snap;
could be split into
if ($backing_snap) {
my $backing_path = $class->path($scfg, $name, $storeid, $backing_snap);
PVE::Storage::Common::qemu_img_create($scfg, 'qcow2', undef, $path, $backing_path);
} else {
PVE::Storage::Common::qemu_img_create($scfg, 'qcow2', $size, $path);
}
> + PVE::Storage::Common::qemu_img_create($scfg, 'qcow2', $size, $path, $backing_path);
or if you keep the check above, use $fmt for this/these ;)
> +
> +}
> +
> +my sub calculate_lvm_size {
> + my ($size, $fmt, $backing_snap) = @_;
> + #input size = qcow2 image size in kb
> +
> + return $size if $fmt ne 'qcow2';
>
> - die "unsupported format '$fmt'" if $fmt ne 'raw';
> + my $options = $backing_snap ? ['extended_l2=on', 'cluster_size=128k'] : [];
> +
> + my $json = PVE::Storage::Common::qemu_img_measure($size, $fmt, 5, $options);
> + die "failed to query file information with qemu-img measure\n" if !$json;
> + my $info = eval { decode_json($json) };
> + if ($@) {
> + die "Invalid JSON: $@\n";
> + }
> +
> + die "Missing fully-allocated value from json" if !$info->{'fully-allocated'};
> +
> + return $info->{'fully-allocated'} / 1024;
> +}
> +
> +my sub alloc_lvm_image {
> + my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size, $backing_snap) = @_;
> +
> + die "unsupported format '$fmt'" if $fmt !~ m/(raw|qcow2)/;
missing anchors, but this could just be `$fmt ne 'raw' && $fmt ne 'qcow2'`..
but how should we end up here? the whole storage plugin only allows those
two formats anyway..
>
> die "illegal name '$name' - should be 'vm-$vmid-*'\n"
> - if $name && $name !~ m/^vm-$vmid-/;
> + if $name !~ m/^vm-$vmid-/;
parse_volname should be enough - VMs can reference volumes owned by other VMs..
then we can also verify that the name and the format match, because right now
it's possible to do
$ pvesm alloc lvm 126 vm-126-disk-1 1G --format qcow2
Rounding up size to full physical extent 1.00 GiB
Logical volume "vm-126-disk-1" created.
Formatting '/dev/lvm/vm-126-disk-1', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=1073741824 lazy_refcounts=off refcount_bits=16
successfully created 'lvm:vm-126-disk-1'
$ pvesm list lvm
Volid Format Type Size VMID
lvm:vm-126-disk-1 raw images 1077936128 126
(the other way round is not possible, because the API catches it)
>
> my $vgs = lvm_vgs();
>
> @@ -438,86 +561,98 @@ sub alloc_image {
> die "no such volume group '$vg'\n" if !defined($vgs->{$vg});
>
> my $free = int($vgs->{$vg}->{free});
> + my $lvmsize = calculate_lvm_size($size, $fmt, $backing_snap);
>
> die "not enough free space ($free < $size)\n" if $free < $size;
>
> - $name = $class->find_free_diskname($storeid, $scfg, $vmid)
> + my $tags = ["pve-vm-$vmid"];
> + #tags all snapshots volumes with the main volume tag for easier activation of the whole group
> + push @$tags, "\@pve-$name" if $fmt eq 'qcow2';
> + lvcreate($vg, $name, $lvmsize, $tags);
> +
> + return if $fmt ne 'qcow2';
> +
> + #format the lvm volume with qcow2 format
> + eval { lvm_qcow2_format($class, $storeid, $scfg, $name, $fmt, $backing_snap, $size) };
> + if ($@) {
> + my $err = $@;
> + #no need to safe cleanup as the volume is still empty
> + eval {
> + my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
> + run_command($cmd, errmsg => "lvremove '$vg/$name' error");
> + };
> + die $err;
> + }
> +
> +}
> +
> +sub alloc_image {
> + my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
> +
> + $name = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt)
> if !$name;
>
> - lvcreate($vg, $name, $size, ["pve-vm-$vmid"]);
> + alloc_lvm_image($class, $storeid, $scfg, $vmid, $fmt, $name, $size);
>
> return $name;
> }
>
> -sub free_image {
> - my ($class, $storeid, $scfg, $volname, $isBase) = @_;
> +sub alloc_snap_image {
does this need to be public?
> + my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
>
> - my $vg = $scfg->{vgname};
> + my ($vmid, $format) = ($class->parse_volname($volname))[2, 6];
> + my $path = $class->path($scfg, $volname, $storeid, $backing_snap);
>
> - # we need to zero out LVM data for security reasons
> - # and to allow thin provisioning
> + #we need to use the same size as the backing image's qcow2 virtual-size
> + my $size = PVE::Storage::Plugin::file_size_info($path, 5, $format);
> + $size = $size / 1024; #we use kb in lvcreate
>
> - my $zero_out_worker = sub {
> - print "zero-out data on image $volname (/dev/$vg/del-$volname)\n";
> + alloc_lvm_image($class, $storeid, $scfg, $vmid, $format, $volname, $size, $backing_snap);
> +}
>
> - # wipe throughput up to 10MB/s by default; may be overwritten with saferemove_throughput
> - my $throughput = '-10485760';
> - if ($scfg->{saferemove_throughput}) {
> - $throughput = $scfg->{saferemove_throughput};
> - }
> +sub free_snap_image {
same question here?
> + my ($class, $storeid, $scfg, $volname, $snap) = @_;
>
> - my $cmd = [
> - '/usr/bin/cstream',
> - '-i',
> - '/dev/zero',
> - '-o',
> - "/dev/$vg/del-$volname",
> - '-T',
> - '10',
> - '-v',
> - '1',
> - '-b',
> - '1048576',
> - '-t',
> - "$throughput",
> - ];
> - eval {
> - run_command(
> - $cmd,
> - errmsg => "zero out finished (note: 'No space left on device' is ok here)",
> - );
> - };
> - warn $@ if $@;
> + #activate only the snapshot volume
> + my $path = $class->path($scfg, $volname, $storeid, $snap);
> + my $cmd = ['/sbin/lvchange', '-aly', $path];
> + run_command($cmd, errmsg => "can't activate LV '$path' to zero-out its data");
> + $cmd = ['/sbin/lvchange', '--refresh', $path];
> + run_command($cmd, errmsg => "can't refresh LV '$path' to zero-out its data");
>
> - $class->cluster_lock_storage(
> - $storeid,
> - $scfg->{shared},
> - undef,
> - sub {
> - my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$volname"];
> - run_command($cmd, errmsg => "lvremove '$vg/del-$volname' error");
> - },
> - );
> - print "successfully removed volume $volname ($vg/del-$volname)\n";
> - };
> + my $snap_volname = $class->get_snap_volname($volname, $snap);
> + return $class->free_lvm_volumes($scfg, $storeid, [$snap_volname]);
> +}
>
> - my $cmd = ['/sbin/lvchange', '-aly', "$vg/$volname"];
> - run_command($cmd, errmsg => "can't activate LV '$vg/$volname' to zero-out its data");
> - $cmd = ['/sbin/lvchange', '--refresh', "$vg/$volname"];
> - run_command($cmd, errmsg => "can't refresh LV '$vg/$volname' to zero-out its data");
> +sub free_image {
> + my ($class, $storeid, $scfg, $volname, $isBase, $format) = @_;
>
> - if ($scfg->{saferemove}) {
> - # avoid long running task, so we only rename here
> - $cmd = ['/sbin/lvrename', $vg, $volname, "del-$volname"];
> - run_command($cmd, errmsg => "lvrename '$vg/$volname' error");
> - return $zero_out_worker;
> - } else {
> - my $tmpvg = $scfg->{vgname};
> - $cmd = ['/sbin/lvremove', '-f', "$tmpvg/$volname"];
> - run_command($cmd, errmsg => "lvremove '$tmpvg/$volname' error");
> + my $name = ($class->parse_volname($volname))[1];
> +
> + #activate volumes && snapshot volumes
this is only needed for qcow2-formatted volumes or if zeroing is enabled, right?
so we should guard it accordingly..
> + my $path = $class->path($scfg, $volname, $storeid);
> + $path = "\@pve-$name" if $format && $format eq 'qcow2';
> + my $cmd = ['/sbin/lvchange', '-aly', $path];
> + run_command($cmd, errmsg => "can't activate LV '$path' to zero-out its data");
> + $cmd = ['/sbin/lvchange', '--refresh', $path];
> + run_command($cmd, errmsg => "can't refresh LV '$path' to zero-out its data");
> +
> + my $volnames = [];
> + my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
> + for my $snapid (
> + sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} }
> + keys %$snapshots
> + ) {
> + my $snap = $snapshots->{$snapid};
> + next if $snapid eq 'current';
> + next if !$snap->{volid};
> + next if !$snap->{ext};
> + my ($snap_storeid, $snap_volname) = PVE::Storage::parse_volume_id($snap->{volid});
> + push @$volnames, $snap_volname;
> }
> + push @$volnames, $volname;
>
> - return undef;
> + return $class->free_lvm_volumes($scfg, $storeid, $volnames);
> }
>
> my $check_tags = sub {
> @@ -624,6 +759,12 @@ sub activate_volume {
>
> my $lvm_activate_mode = 'ey';
>
> + #activate volume && all snapshots volumes by tag
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> +
> + $path = "\@pve-$name" if $format eq 'qcow2';
> +
> my $cmd = ['/sbin/lvchange', "-a$lvm_activate_mode", $path];
> run_command($cmd, errmsg => "can't activate LV '$path'");
> $cmd = ['/sbin/lvchange', '--refresh', $path];
> @@ -636,6 +777,10 @@ sub deactivate_volume {
> my $path = $class->path($scfg, $volname, $storeid, $snapname);
> return if !-b $path;
>
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> + $path = "\@pve-$name" if $format eq 'qcow2';
> +
> my $cmd = ['/sbin/lvchange', '-aln', $path];
> run_command($cmd, errmsg => "can't deactivate LV '$path'");
> }
> @@ -643,10 +788,14 @@ sub deactivate_volume {
> sub volume_resize {
> my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
>
> - $size = ($size / 1024 / 1024) . "M";
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> +
> + my $lvmsize = calculate_lvm_size($size / 1024, $format);
> + $lvmsize = "${lvmsize}k";
>
> my $path = $class->path($scfg, $volname);
> - my $cmd = ['/sbin/lvextend', '-L', $size, $path];
> + my $cmd = ['/sbin/lvextend', '-L', $lvmsize, $path];
>
> $class->cluster_lock_storage(
> $storeid,
> @@ -657,12 +806,18 @@ sub volume_resize {
> },
> );
>
> + if (!$running && $format eq 'qcow2') {
> + my $prealloc_opt = PVE::Storage::Plugin::preallocation_cmd_option($scfg, $format);
this got moved to Common in your series (but I'd like to keep it where it was, see comments there)
> + my $cmd = ['/usr/bin/qemu-img', 'resize', "--$prealloc_opt", '-f', $format, $path, $size];
> + run_command($cmd, timeout => 10);
> + }
> +
> return 1;
> }
>
> sub volume_size_info {
> - my ($class, $scfg, $storeid, $volname, $timeout) = @_;
> - my $path = $class->filesystem_path($scfg, $volname);
> + my ($class, $scfg, $storeid, $volname, $timeout, $snap) = @_;
this is never called with $snap set?
> + my $path = $class->filesystem_path($scfg, $volname, $snap);
>
> my $cmd = [
> '/sbin/lvs',
> @@ -693,30 +848,191 @@ sub volume_size_info {
> sub volume_snapshot {
> my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
>
> - die "lvm snapshot is not implemented";
> + my ($vmid, $format) = ($class->parse_volname($volname))[2, 6];
> +
> + die "can't snapshot this image format\n" if $format ne 'qcow2';
die "cannot snapshot '$format' volume\n" if $format ne 'qcow2';
> +
> + if ($running) {
> + #rename with blockdev-reopen is done at qemu level when running
> + $class->alloc_snap_image($storeid, $scfg, $volname, $snap);
> + if ($@) {
> + die "can't allocate new volume $volname: $@\n";
> + }
missing eval?
> + return;
> + }
> +
> + #rename current volume to snap volume
> + eval { $class->rename_volume($scfg, $storeid, $volname, $vmid, undef, 'current', $snap) };
same comment as with the Plugin/DirPlugin patch with regard to rename_volume vs rename_snapshot
> + die "error rename $volname to $snap\n" if $@;
> +
> + eval { $class->alloc_snap_image($storeid, $scfg, $volname, $snap) };
> + if ($@) {
> + my $err = $@;
> + eval { $class->rename_volume($scfg, $storeid, $volname, $vmid, undef, $snap, 'current') };
> + die $err;
> + }
> +}
> +
> +sub volume_rollback_is_possible {
> + my ($class, $scfg, $storeid, $volname, $snap, $blockers) = @_;
> +
> + my $snap_path = $class->path($scfg, $volname, $storeid, $snap);
> +
> + $class->activate_volume($storeid, $scfg, $volname, undef, {});
> + my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
> + my $parent_snap = $snapshots->{current}->{parent};
> +
> + return 1 if $parent_snap eq $snap;
> + die "can't rollback, '$snap' is not most recent snapshot on '$volname'\n";
should populate $blockers since we have the info already..
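e.g., an untested sketch of what I have in mind, assuming $blockers stays an optional array reference like in the other plugins, and that the 'order' values from volume_snapshot_info increase towards newer snapshots:

```perl
if ($blockers && defined($snapshots->{$snap}->{order})) {
    # every snapshot newer than $snap blocks the rollback
    for my $other (keys %$snapshots) {
        next if $other eq 'current' || $other eq $snap;
        push $blockers->@*, $other
            if $snapshots->{$other}->{order} > $snapshots->{$snap}->{order};
    }
}
die "can't rollback, '$snap' is not most recent snapshot on '$volname'\n";
```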
> +
> + return 1;
> }
>
> sub volume_snapshot_rollback {
> my ($class, $scfg, $storeid, $volname, $snap) = @_;
>
> - die "lvm snapshot rollback is not implemented";
> + my $format = ($class->parse_volname($volname))[6];
> +
> + die "can't rollback snapshot for this image format\n" if $format ne 'qcow2';
please include the format in the message!
> +
> + $class->activate_volume($storeid, $scfg, $volname, undef, {});
last two parameters can be dropped
> +
> + # we can simply reformat the current lvm volume to avoid
> + # a long safe remove.(not needed here, as the allocated space
> + # is still the same owner)
> + eval { lvm_qcow2_format($class, $storeid, $scfg, $volname, $format, $snap) };
what if the volume got resized along the way?
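e.g. by re-reading the virtual size from the snapshot before reformatting - untested sketch, and the extra $size parameter for lvm_qcow2_format is hypothetical, it would need to be added there as well:

```perl
my $size = eval { $class->volume_size_info($scfg, $storeid, $volname, 10, $snap) };
die "could not determine size of snapshot '$snap': $@\n" if !$size;
# reformat the current LV using the snapshot's virtual size as new size
lvm_qcow2_format($class, $storeid, $scfg, $volname, $format, $snap, $size);
```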
> + if ($@) {
> + die "can't rollback. Error reformating current $volname\n";
> + }
> + return undef;
> }
>
> sub volume_snapshot_delete {
> - my ($class, $scfg, $storeid, $volname, $snap) = @_;
> + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
> +
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> +
> + die "can't delete snapshot for this image format\n" if $format ne 'qcow2';
please include the format in the message!
> +
> + if ($running) {
> + my $cleanup_worker = eval { $class->free_snap_image($storeid, $scfg, $volname, $snap); };
> + die "error deleting snapshot $snap $@\n" if $@;
> +
> + if ($cleanup_worker) {
> + my $rpcenv = PVE::RPCEnvironment::get();
> + my $authuser = $rpcenv->get_user();
> + $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> + }
> + return;
> + }
> +
> + my $cmd = "";
> + my $path = $class->filesystem_path($scfg, $volname);
> +
> + my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
> + my $snappath = $snapshots->{$snap}->{file};
> + my $snapvolname = $snapshots->{$snap}->{volname};
> + die "volume $snappath is missing" if !-e $snappath;
> +
> + my $parentsnap = $snapshots->{$snap}->{parent};
> +
> + my $childsnap = $snapshots->{$snap}->{child};
> + my $childpath = $snapshots->{$childsnap}->{file};
> + my $childvolname = $snapshots->{$childsnap}->{volname};
> +
> + my $cleanup_worker = undef;
> + my $err = undef;
> + #if first snapshot,as it should be bigger, we merge child, and rename the snapshot to child
> + if (!$parentsnap) {
> + print "$volname: deleting snapshot '$snap' by commiting snapshot '$childsnap'\n";
> + print "running 'qemu-img commit $childpath'\n";
> + #can't use -d here, as it's an lvm volume
> + $cmd = ['/usr/bin/qemu-img', 'commit', $childpath];
> + eval { run_command($cmd) };
> + if ($@) {
> + warn
> + "The state of $snap is now invalid. Don't try to clone or rollback it. You can only try to delete it again later\n";
> + die "error commiting $childsnap to $snap; $@\n";
> + }
> + print "delete $childvolname\n";
> + $cleanup_worker = eval { $class->free_snap_image($storeid, $scfg, $volname, $childsnap) };
> + if ($@) {
> + die "error delete old snapshot volume $childvolname: $@\n";
> + }
> +
> + print "rename $snapvolname to $childvolname\n";
> + my $vg = $scfg->{vgname};
> + eval { lvrename($vg, $snapvolname, $childvolname) };
> + if ($@) {
> + warn $@;
> + $err = "error renaming snapshot: $@\n";
> + }
>
> - die "lvm snapshot delete is not implemented";
> + } else {
> + #we rebase the child image on the parent as new backing image
> + my $parentpath = $snapshots->{$parentsnap}->{file};
> + print
> + "$volname: deleting snapshot '$snap' by rebasing '$childsnap' on top of '$parentsnap'\n";
> + print "running 'qemu-img rebase -b $parentpath -F qcow -f qcow2 $childpath'\n";
> + $cmd = [
> + '/usr/bin/qemu-img',
> + 'rebase',
> + '-b',
> + $parentpath,
> + '-F',
> + 'qcow2',
> + '-f',
> + 'qcow2',
> + $childpath,
> + ];
> + eval { run_command($cmd) };
> + if ($@) {
> + #in case of abort, the state of the snap is still clean, just a little bit bigger
> + die "error rebase $childsnap from $parentsnap; $@\n";
> + }
> + #delete the snapshot
> + eval { $cleanup_worker = $class->free_snap_image($storeid, $scfg, $volname, $snap); };
> + if ($@) {
> + die "error deleting old snapshot volume $snapvolname\n";
> + }
> + }
> + if ($cleanup_worker) {
only doing this here is a tiny bit dangerous, since if somebody adds a die before it
the cleanup never happens.. should we maybe add these 5 lines as a $fork_cleanup helper
in this sub, and then just call it above?
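i.e., something along these lines (untested sketch, the helper name is made up), which both the $running branch and the offline branches could then call right after free_snap_image:

```perl
my $fork_cleanup = sub {
    my ($cleanup_worker) = @_;
    return if !$cleanup_worker;
    my $rpcenv = PVE::RPCEnvironment::get();
    my $authuser = $rpcenv->get_user();
    $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
};
```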
> + my $rpcenv = PVE::RPCEnvironment::get();
> + my $authuser = $rpcenv->get_user();
> + $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
> + }
> +
> + die $err if $err;
> }
>
> sub volume_has_feature {
> my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
>
> my $features = {
> - copy => { base => 1, current => 1 },
> - rename => { current => 1 },
> + copy => {
> + base => { qcow2 => 1, raw => 1 },
> + current => { qcow2 => 1, raw => 1 },
> + snap => { qcow2 => 1 },
> + },
> + 'rename' => {
> + current => { qcow2 => 1, raw => 1 },
> + },
> + snapshot => {
> + current => { qcow2 => 1 },
> + snap => { qcow2 => 1 },
> + },
> + # fixme: add later ? (we need to handle basepath, volume activation,...)
> + # template => {
> + # current => { raw => 1, qcow2 => 1},
> + # },
> + # clone => {
> + # base => { qcow2 => 1 },
> + # },
> };
>
> - my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
>
> my $key = undef;
> if ($snapname) {
> @@ -724,7 +1040,7 @@ sub volume_has_feature {
> } else {
> $key = $isBase ? 'base' : 'current';
> }
> - return 1 if $features->{$feature}->{$key};
> + return 1 if defined($features->{$feature}->{$key}->{$format});
>
> return undef;
> }
> @@ -868,4 +1184,19 @@ sub rename_volume {
> return "${storeid}:${target_volname}";
> }
>
> +sub get_snap_name {
> + my ($class, $volname, $snapname) = @_;
> +
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> + $name = !$snapname || $snapname eq 'current' ? $name : "snap-$snapname-$name";
> + return $name;
> +}
> +
> +sub get_snap_volname {
> + my ($class, $volname, $snapname) = @_;
> +
> + return $class->get_snap_name($volname, $snapname);
> +}
> +
> 1;
> --
> 2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [pve-devel] [PATCH pve-storage 10/10] storage : add volume_support_qemu_snapshot
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 10/10] storage : add volume_support_qemu_snapshot Alexandre Derumier via pve-devel
@ 2025-07-04 11:51 ` Fabian Grünbichler
0 siblings, 0 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-04 11:51 UTC (permalink / raw)
To: Proxmox VE development discussion
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:45 CEST geschrieben:
> Returns if the volume is supporting qemu snapshot:
> 'internal' : do the snapshot with qemu internal snapshot
> 'external' : do the snapshot with qemu external snapshot
> undef : does not support qemu snapshot
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/Storage.pm | 19 +++++++++++++++++++
> src/PVE/Storage/DirPlugin.pm | 10 ++++++++++
> src/PVE/Storage/LVMPlugin.pm | 7 +++++++
> src/PVE/Storage/Plugin.pm | 13 +++++++++++++
> src/PVE/Storage/RBDPlugin.pm | 6 ++++++
> 5 files changed, 55 insertions(+)
>
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index d83770c..e794f7b 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -2350,6 +2350,25 @@ sub rename_volume {
> );
> }
>
> +# Returns the method type to take a snapshot with qemu:
> +# 'internal' : support snapshot with qemu internal snapshot
> +# 'external' : support do the snapshot with qemu external snapshot
> +# undef : don't support qemu snapshot
> +sub volume_support_qemu_snapshot {
> + my ($cfg, $volid) = @_;
> +
> + my ($storeid, $volname) = parse_volume_id($volid, 1);
> +
> + if ($storeid) {
> + my $scfg = storage_config($cfg, $storeid);
> +
> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> + return $plugin->volume_support_qemu_snapshot($storeid, $scfg, $volname);
> + }
> + return undef;
> +}
> +
> # Various io-heavy operations require io/bandwidth limits which can be
> # configured on multiple levels: The global defaults in datacenter.cfg, and
> # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index ae5d083..0f8776b 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -315,4 +315,14 @@ sub get_import_metadata {
> };
> }
>
> +sub volume_support_qemu_snapshot {
> + my ($class, $storeid, $scfg, $volname) = @_;
> +
> + my $format = ($class->parse_volname($volname))[6];
> + return if $format ne 'qcow2';
this previously also worked for 'qed'?
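if we want to keep qed working there, the check could be relaxed along these lines (untested, and external snapshots presumably only make sense for qcow2):

```perl
sub volume_support_qemu_snapshot {
    my ($class, $storeid, $scfg, $volname) = @_;

    my $format = ($class->parse_volname($volname))[6];
    return 'external' if $scfg->{snapext} && $format eq 'qcow2';
    return 'internal' if $format eq 'qcow2' || $format eq 'qed';
    return;
}
```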
> +
> + my $type = $scfg->{snapext} ? 'external' : 'internal';
> + return $type;
> +}
> +
> 1;
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index ef010b8..3e3e48c 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -1199,4 +1199,11 @@ sub get_snap_volname {
> return $class->get_snap_name($volname, $snapname);
> }
>
> +sub volume_support_qemu_snapshot {
> + my ($class, $storeid, $scfg, $volname) = @_;
> +
> + my $format = ($class->parse_volname($volname))[6];
> + return 'external' if $format eq 'qcow2';
> +}
> +
> 1;
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 68d17ff..bf190d2 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -2292,6 +2292,19 @@ sub qemu_blockdev_options {
> return $blockdev;
> }
>
> +
> +# Returns the method type to take a snapshot with qemu:
> +# 'internal' : support snapshot with qemu internal snapshot
> +# 'external' : support do the snapshot with qemu external snapshot
> +# undef : don't support qemu snapshot
> +
> +sub volume_support_qemu_snapshot {
> + my ($class, $storeid, $scfg, $volname) = @_;
> +
> + my $format = ($class->parse_volname($volname))[6];
> + return 'internal' if $format eq 'qcow2';
this previously also worked for 'qed'
> +}
> +
> # Used by storage plugins for external backup providers. See PVE::BackupProvider::Plugin for the API
> # the provider needs to implement.
> #
> diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
> index 883b0e4..8065cfc 100644
> --- a/src/PVE/Storage/RBDPlugin.pm
> +++ b/src/PVE/Storage/RBDPlugin.pm
> @@ -1057,4 +1057,10 @@ sub rename_volume {
> return "${storeid}:${base_name}${target_volname}";
> }
>
> +sub volume_support_qemu_snapshot {
> + my ($class, $storeid, $scfg, $volname) = @_;
> +
> + return 'internal' if !$scfg->{krbd};
> +}
> +
> 1;
> --
> 2.39.5
* Re: [pve-devel] [PATCH pve-storage 06/10] common: add qemu-img measure
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 06/10] common: add qemu-img measure Alexandre Derumier via pve-devel
@ 2025-07-04 11:51 ` Fabian Grünbichler
0 siblings, 0 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-04 11:51 UTC (permalink / raw)
To: Proxmox VE development discussion
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:45 CEST geschrieben:
> ---
> src/PVE/Storage/Common.pm | 28 ++++++++++++++++++++++++++++
> 1 file changed, 28 insertions(+)
>
> diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
> index c15cc88..e73eeab 100644
> --- a/src/PVE/Storage/Common.pm
> +++ b/src/PVE/Storage/Common.pm
> @@ -197,4 +197,32 @@ sub qemu_img_info {
> return $json;
> }
>
> +sub qemu_img_measure {
> + my ($size, $fmt, $timeout, $options) = @_;
> +
> + die "format is missing" if !$fmt;
> +
> + my $cmd = ['/usr/bin/qemu-img', 'measure', '--output=json', '--size', "${size}K", '-O', $fmt];
> + push $cmd->@*, '-o', join(',', @$options) if @$options > 0;
> +
> + my $json = '';
> + my $err_output = '';
> + eval {
> + run_command(
> + $cmd,
> + timeout => $timeout,
> + outfunc => sub { $json .= shift },
> + errfunc => sub { $err_output .= shift . "\n" },
> + );
> + };
> + warn $@ if $@;
> + if ($err_output) {
> + # if qemu did not output anything to stdout we die with stderr as an error
> + die $err_output if !$json;
> + # otherwise we warn about it and try to parse the json
> + warn $err_output;
> + }
> + return $json;
this is identical to qemu_img_info modulo the generated command, so I'd add a
follow-up to extract the output handling into a helper:
commit 9053bf5593b097d484ce52c3ffe831138c2bb208
Author: Fabian Grünbichler <f.gruenbichler@proxmox.com>
AuthorDate: Fri Jul 4 11:01:13 2025 +0200
Commit: Fabian Grünbichler <f.gruenbichler@proxmox.com>
CommitDate: Fri Jul 4 11:01:13 2025 +0200
helpers: add `qemu-img .. --json` run helper
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index 43f3f15..ffcdab4 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -171,12 +171,8 @@ sub qemu_img_create {
run_command($cmd, errmsg => "unable to create image");
}
-sub qemu_img_info {
- my ($filename, $file_format, $timeout, $follow_backing_files) = @_;
-
- my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
- push $cmd->@*, '-f', $file_format if $file_format;
- push $cmd->@*, '--backing-chain' if $follow_backing_files;
+my sub run_qemu_img_json {
+ my ($cmd, $timeout) = @_;
my $json = '';
my $err_output = '';
@@ -198,6 +194,16 @@ sub qemu_img_info {
return $json;
}
+sub qemu_img_info {
+ my ($filename, $file_format, $timeout, $follow_backing_files) = @_;
+
+ my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
+ push $cmd->@*, '-f', $file_format if $file_format;
+ push $cmd->@*, '--backing-chain' if $follow_backing_files;
+
+ return run_qemu_img_json($cmd, $timeout);
+}
+
sub qemu_img_measure {
my ($size, $fmt, $timeout, $options) = @_;
@@ -206,24 +212,7 @@ sub qemu_img_measure {
my $cmd = ['/usr/bin/qemu-img', 'measure', '--output=json', '--size', "${size}K", '-O', $fmt];
push $cmd->@*, '-o', join(',', @$options) if @$options > 0;
- my $json = '';
- my $err_output = '';
- eval {
- run_command(
- $cmd,
- timeout => $timeout,
- outfunc => sub { $json .= shift },
- errfunc => sub { $err_output .= shift . "\n" },
- );
- };
- warn $@ if $@;
- if ($err_output) {
- # if qemu did not output anything to stdout we die with stderr as an error
- die $err_output if !$json;
- # otherwise we warn about it and try to parse the json
- warn $err_output;
- }
- return $json;
+ return run_qemu_img_json($cmd, $timeout);
}
1;
> +}
> +
> 1;
> --
> 2.39.5
* Re: [pve-devel] [PATCH qemu-server 3/3] qcow2: add external snapshot support
2025-07-04 6:45 ` [pve-devel] [PATCH qemu-server 3/3] qcow2: add external snapshot support Alexandre Derumier via pve-devel
@ 2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 12:46 ` DERUMIER, Alexandre via pve-devel
0 siblings, 1 reply; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-04 11:52 UTC (permalink / raw)
To: Proxmox VE development discussion
haven't fully managed to get through the qemu-server part, but one small thing below..
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:45 CEST geschrieben:
> fixme:
> - add test for internal (was missing) && external qemu snapshots
> - is it possible to use blockjob transactions for commit && steam
> for atomatic disk commit ?
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/QemuConfig.pm | 4 +-
> src/PVE/QemuServer.pm | 132 ++++++++++++---
> src/PVE/QemuServer/Blockdev.pm | 296 ++++++++++++++++++++++++++++++++-
> src/test/snapshot-test.pm | 4 +-
> 4 files changed, 402 insertions(+), 34 deletions(-)
>
> diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
> index 82295641..e0853d65 100644
> --- a/src/PVE/QemuConfig.pm
> +++ b/src/PVE/QemuConfig.pm
> @@ -398,7 +398,7 @@ sub __snapshot_create_vol_snapshot {
>
> print "snapshotting '$device' ($drive->{file})\n";
>
> - PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $volid, $snapname);
> + PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $drive, $snapname);
> }
>
> sub __snapshot_delete_remove_drive {
> @@ -435,7 +435,7 @@ sub __snapshot_delete_vol_snapshot {
> my $storecfg = PVE::Storage::config();
> my $volid = $drive->{file};
>
> - PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $volid, $snapname);
> + PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $drive, $snapname);
>
> push @$unused, $volid;
> }
> diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
> index 92c8fad6..158c91b1 100644
> --- a/src/PVE/QemuServer.pm
> +++ b/src/PVE/QemuServer.pm
> @@ -4340,20 +4340,64 @@ sub qemu_cpu_hotplug {
> }
>
> sub qemu_volume_snapshot {
> - my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
> + my ($vmid, $deviceid, $storecfg, $drive, $snap) = @_;
>
> + my $volid = $drive->{file};
> my $running = check_running($vmid);
>
> - if ($running && do_snapshots_with_qemu($storecfg, $volid, $deviceid)) {
> + my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $deviceid, $running);
> +
> + if ($do_snapshots_type eq 'internal') {
> + print "internal qemu snapshot\n";
> mon_cmd($vmid, 'blockdev-snapshot-internal-sync', device => $deviceid, name => $snap);
> - } else {
> + } elsif ($do_snapshots_type eq 'external') {
> + my $storeid = (PVE::Storage::parse_volume_id($volid))[0];
> + my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> + print "external qemu snapshot\n";
> + my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
> + my $parent_snap = $snapshots->{'current'}->{parent};
> + my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
> +
> + PVE::QemuServer::Blockdev::blockdev_rename(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $deviceid,
> + $drive,
> + 'current',
> + $snap,
> + $parent_snap,
> + );
> + eval {
> + PVE::QemuServer::Blockdev::blockdev_external_snapshot(
> + $storecfg, $vmid, $machine_version, $deviceid, $drive, $snap,
> + );
> + };
> + if ($@) {
> + warn $@ if $@;
> + print "Error creating snapshot. Revert rename\n";
> + eval {
> + PVE::QemuServer::Blockdev::blockdev_rename(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $deviceid,
> + $drive,
> + $snap,
> + 'current',
> + $parent_snap,
> + );
> + };
> + }
> + } elsif ($do_snapshots_type eq 'storage') {
> PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
> }
> }
>
> sub qemu_volume_snapshot_delete {
> - my ($vmid, $storecfg, $volid, $snap) = @_;
> + my ($vmid, $storecfg, $drive, $snap) = @_;
>
> + my $volid = $drive->{file};
> my $running = check_running($vmid);
> my $attached_deviceid;
>
> @@ -4368,14 +4412,62 @@ sub qemu_volume_snapshot_delete {
> );
> }
>
> - if ($attached_deviceid && do_snapshots_with_qemu($storecfg, $volid, $attached_deviceid)) {
> + my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $attached_deviceid, $running);
> +
> + if ($do_snapshots_type eq 'internal') {
> mon_cmd(
> $vmid,
> 'blockdev-snapshot-delete-internal-sync',
> device => $attached_deviceid,
> name => $snap,
> );
> - } else {
> + } elsif ($do_snapshots_type eq 'external') {
> + print "delete qemu external snapshot\n";
> +
> + my $path = PVE::Storage::path($storecfg, $volid);
> + my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
> + my $parentsnap = $snapshots->{$snap}->{parent};
> + my $childsnap = $snapshots->{$snap}->{child};
> + my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
> +
> + # if we delete the first snasphot, we commit because the first snapshot original base image, it should be big.
> + # improve-me: if firstsnap > child : commit, if firstsnap < child do a stream.
> + if (!$parentsnap) {
> + print "delete first snapshot $snap\n";
> + PVE::QemuServer::Blockdev::blockdev_commit(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $attached_deviceid,
> + $drive,
> + $childsnap,
> + $snap,
> + );
> + PVE::QemuServer::Blockdev::blockdev_rename(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $attached_deviceid,
> + $drive,
> + $snap,
> + $childsnap,
> + $snapshots->{$childsnap}->{child},
> + );
> + } else {
> + #intermediate snapshot, we always stream the snapshot to child snapshot
> + print "stream intermediate snapshot $snap to $childsnap\n";
> + PVE::QemuServer::Blockdev::blockdev_stream(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $attached_deviceid,
> + $drive,
> + $snap,
> + $parentsnap,
> + $childsnap,
> + );
> + }
> + } elsif ($do_snapshots_type eq 'storage') {
> PVE::Storage::volume_snapshot_delete(
> $storecfg,
> $volid,
> @@ -7563,28 +7655,20 @@ sub restore_tar_archive {
> warn $@ if $@;
> }
>
> -my $qemu_snap_storage = {
> - rbd => 1,
> -};
> -
> -sub do_snapshots_with_qemu {
> - my ($storecfg, $volid, $deviceid) = @_;
> -
> - return if $deviceid =~ m/tpmstate0/;
> +sub do_snapshots_type {
> + my ($storecfg, $volid, $deviceid, $running) = @_;
>
> - my $storage_name = PVE::Storage::parse_volume_id($volid);
> - my $scfg = $storecfg->{ids}->{$storage_name};
> - die "could not find storage '$storage_name'\n" if !defined($scfg);
> + #we skip snapshot for tpmstate
> + return if $deviceid && $deviceid =~ m/tpmstate0/;
I think this is wrong.. this should return 'storage' as well?
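i.e. (untested):

```perl
sub do_snapshots_type {
    my ($storecfg, $volid, $deviceid, $running) = @_;

    # tpm state is never snapshotted via qemu, always go via the storage layer
    return 'storage' if $deviceid && $deviceid =~ m/tpmstate0/;

    # not running, or disk not attached -> storage layer snapshot
    return 'storage' if !$running || !$deviceid;

    # fall back to the storage layer if qemu can't snapshot this volume
    return PVE::Storage::volume_support_qemu_snapshot($storecfg, $volid) // 'storage';
}
```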
>
> - if ($qemu_snap_storage->{ $scfg->{type} } && !$scfg->{krbd}) {
> - return 1;
> - }
> + #we use storage snapshot if vm is not running or if disk is unused;
> + return 'storage' if !$running || !$deviceid;
>
> - if ($volid =~ m/\.(qcow2|qed)$/) {
> - return 1;
> - }
> + my $qemu_snapshot_type = PVE::Storage::volume_support_qemu_snapshot($storecfg, $volid);
> + # if running, but don't support qemu snapshot, we use storage snapshot
> + return 'storage' if !$qemu_snapshot_type;
>
> - return;
> + return $qemu_snapshot_type;
> }
>
> =head3 template_create($vmid, $conf [, $disk])
> diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
> index 2a0513fb..07141777 100644
> --- a/src/PVE/QemuServer/Blockdev.pm
> +++ b/src/PVE/QemuServer/Blockdev.pm
> @@ -11,6 +11,7 @@ use JSON;
> use PVE::JSONSchema qw(json_bool);
> use PVE::Storage;
>
> +use PVE::QemuServer::BlockJob;
> use PVE::QemuServer::Drive qw(drive_is_cdrom);
> use PVE::QemuServer::Helpers;
> use PVE::QemuServer::Monitor qw(mon_cmd);
> @@ -243,6 +244,9 @@ my sub generate_file_blockdev {
> my $blockdev = {};
> my $scfg = undef;
>
> + delete $options->{'snapshot-name'}
> + if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
> +
> die "generate_file_blockdev called without volid/path\n" if !$drive->{file};
> die "generate_file_blockdev called with 'none'\n" if $drive->{file} eq 'none';
> # FIXME use overlay and new config option to define storage for temp write device
> @@ -322,6 +326,9 @@ my sub generate_format_blockdev {
> die "generate_format_blockdev called with 'none'\n" if $drive->{file} eq 'none';
> die "generate_format_blockdev called with NBD path\n" if is_nbd($drive);
>
> + delete($options->{'snapshot-name'})
> + if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
> +
> my $scfg;
> my $format;
> my $volid = $drive->{file};
> @@ -400,6 +407,17 @@ my sub generate_backing_chain_blockdev {
> );
> }
>
> +sub generate_throttle_blockdev {
> + my ($drive_id, $child) = @_;
> +
> + return {
> + driver => "throttle",
> + 'node-name' => top_node_name($drive_id),
> + 'throttle-group' => throttle_group_id($drive_id),
> + file => $child,
> + };
> +}
> +
> sub generate_drive_blockdev {
> my ($storecfg, $drive, $machine_version, $options) = @_;
>
> @@ -442,12 +460,7 @@ sub generate_drive_blockdev {
> return $child if $options->{fleecing} || $options->{'tpm-backup'} || $options->{'no-throttle'};
>
> # this is the top filter entry point, use $drive-drive_id as nodename
> - return {
> - driver => "throttle",
> - 'node-name' => top_node_name($drive_id),
> - 'throttle-group' => throttle_group_id($drive_id),
> - file => $child,
> - };
> + return generate_throttle_blockdev($drive_id, $child);
> }
>
> sub generate_pbs_blockdev {
> @@ -785,4 +798,275 @@ sub set_io_throttle {
> }
> }
>
> +sub blockdev_external_snapshot {
> + my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $size) = @_;
> +
> + print "Creating a new current volume with $snap as backing snap\n";
> +
> + my $volid = $drive->{file};
> +
> + #preallocate add a new current file with reference to backing-file
> + PVE::Storage::volume_snapshot($storecfg, $volid, $snap, 1);
> +
> + #be sure to add drive in write mode
> + delete($drive->{ro});
> +
> + my $new_file_blockdev = generate_file_blockdev($storecfg, $drive);
> + my $new_fmt_blockdev = generate_format_blockdev($storecfg, $drive, $new_file_blockdev);
> +
> + my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive, $snap);
> + my $snap_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $snap_file_blockdev,
> + { 'snapshot-name' => $snap },
> + );
> +
> + #backing need to be forced to undef in blockdev, to avoid reopen of backing-file on blockdev-add
> + $new_fmt_blockdev->{backing} = undef;
> +
> + mon_cmd($vmid, 'blockdev-add', %$new_fmt_blockdev);
> +
> + mon_cmd(
> + $vmid, 'blockdev-snapshot',
> + node => $snap_fmt_blockdev->{'node-name'},
> + overlay => $new_fmt_blockdev->{'node-name'},
> + );
> +}
> +
> +sub blockdev_delete {
> + my ($storecfg, $vmid, $drive, $file_blockdev, $fmt_blockdev, $snap) = @_;
> +
> + #add eval as reopen is auto removing the old nodename automatically only if it was created at vm start in command line argument
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $file_blockdev->{'node-name'}) };
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $fmt_blockdev->{'node-name'}) };
> +
> + #delete the file (don't use vdisk_free as we don't want to delete all snapshot chain)
> + print "delete old $file_blockdev->{filename}\n";
> +
> + my $storage_name = PVE::Storage::parse_volume_id($drive->{file});
> +
> + my $volid = $drive->{file};
> + PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snap, 1);
> +}
> +
> +sub blockdev_rename {
> + my (
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $deviceid,
> + $drive,
> + $src_snap,
> + $target_snap,
> + $parent_snap,
> + ) = @_;
> +
> + print "rename $src_snap to $target_snap\n";
> +
> + my $volid = $drive->{file};
> +
> + my $src_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $src_snap },
> + );
> + my $src_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $src_file_blockdev,
> + { 'snapshot-name' => $src_snap },
> + );
> +
> + #rename volume image
> + PVE::Storage::rename_volume($storecfg, $volid, $vmid, undef, $src_snap, $target_snap);
> +
> + my $target_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $target_snap },
> + );
> + my $target_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $target_file_blockdev,
> + { 'snapshot-name' => $target_snap },
> + );
> +
> + if ($target_snap eq 'current' || $src_snap eq 'current') {
> + #rename from|to current
> + my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
> +
> + #add backing to target
> + if ($parent_snap) {
> + my $parent_fmt_nodename =
> + get_node_name('fmt', $drive_id, $volid, { 'snapshot-name' => $parent_snap });
> + $target_fmt_blockdev->{backing} = $parent_fmt_nodename;
> + }
> + mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
> +
> + #reopen the current throttlefilter nodename with the target fmt nodename
> + my $throttle_blockdev =
> + generate_throttle_blockdev($drive_id, $target_fmt_blockdev->{'node-name'});
> + mon_cmd($vmid, 'blockdev-reopen', options => [$throttle_blockdev]);
> + } else {
> + rename($src_file_blockdev->{filename}, $target_file_blockdev->{filename});
> +
> + #intermediate snapshot
> + mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
> +
> + #reopen the parent node with the new target fmt backing node
> + my $parent_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $parent_snap },
> + );
> + my $parent_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $parent_file_blockdev,
> + { 'snapshot-name' => $parent_snap },
> + );
> + $parent_fmt_blockdev->{backing} = $target_fmt_blockdev->{'node-name'};
> + mon_cmd($vmid, 'blockdev-reopen', options => [$parent_fmt_blockdev]);
> +
> + #change backing-file in qcow2 metadata
> + mon_cmd(
> + $vmid, 'change-backing-file',
> + device => $deviceid,
> + 'image-node-name' => $parent_fmt_blockdev->{'node-name'},
> + 'backing-file' => $target_file_blockdev->{filename},
> + );
> + }
> +
> + # delete old file|fmt nodes
> + # add eval, as reopen only auto-removes the old nodename if it was created at vm start as a command line argument
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_file_blockdev->{'node-name'}) };
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_fmt_blockdev->{'node-name'}) };
> +}
> +
> +sub blockdev_commit {
> + my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $src_snap, $target_snap) = @_;
> +
> + my $volid = $drive->{file};
> +
> + print "block-commit $src_snap to base:$target_snap\n";
> +
> + my $target_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $target_snap },
> + );
> + my $target_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $target_file_blockdev,
> + { 'snapshot-name' => $target_snap },
> + );
> +
> + my $src_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $src_snap },
> + );
> + my $src_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $src_file_blockdev,
> + { 'snapshot-name' => $src_snap },
> + );
> +
> + my $job_id = "commit-$deviceid";
> + my $jobs = {};
> + my $opts = { 'job-id' => $job_id, device => $deviceid };
> +
> + $opts->{'base-node'} = $target_fmt_blockdev->{'node-name'};
> + $opts->{'top-node'} = $src_fmt_blockdev->{'node-name'};
> +
> + mon_cmd($vmid, "block-commit", %$opts);
> + $jobs->{$job_id} = {};
> +
> + # if we commit the current, the blockjob needs to be in 'complete' mode
> + my $complete = $src_snap && $src_snap ne 'current' ? 'auto' : 'complete';
> +
> + eval {
> + PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
> + $vmid, undef, $jobs, $complete, 0, 'commit',
> + );
> + };
> + if ($@) {
> + die "Failed to complete block commit: $@\n";
> + }
> +
> + blockdev_delete($storecfg, $vmid, $drive, $src_file_blockdev, $src_fmt_blockdev, $src_snap);
> +}
> +
> +sub blockdev_stream {
> + my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap, $target_snap) =
> + @_;
> +
> + my $volid = $drive->{file};
> + $target_snap = undef if $target_snap eq 'current';
> +
> + my $parent_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $parent_snap },
> + );
> + my $parent_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $parent_file_blockdev,
> + { 'snapshot-name' => $parent_snap },
> + );
> +
> + my $target_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $target_snap },
> + );
> + my $target_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $target_file_blockdev,
> + { 'snapshot-name' => $target_snap },
> + );
> +
> + my $snap_file_blockdev =
> + generate_file_blockdev($storecfg, $drive, $machine_version, { 'snapshot-name' => $snap });
> + my $snap_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $snap_file_blockdev,
> + { 'snapshot-name' => $snap },
> + );
> +
> + my $job_id = "stream-$deviceid";
> + my $jobs = {};
> + my $options = { 'job-id' => $job_id, device => $target_fmt_blockdev->{'node-name'} };
> + $options->{'base-node'} = $parent_fmt_blockdev->{'node-name'};
> + $options->{'backing-file'} = $parent_file_blockdev->{filename};
> +
> + mon_cmd($vmid, 'block-stream', %$options);
> + $jobs->{$job_id} = {};
> +
> + eval {
> + PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
> + $vmid, undef, $jobs, 'auto', 0, 'stream',
> + );
> + };
> + if ($@) {
> + die "Failed to complete block stream: $@\n";
> + }
> +
> + blockdev_delete($storecfg, $vmid, $drive, $snap_file_blockdev, $snap_fmt_blockdev, $snap);
> +}
> +
> 1;
> diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
> index 4fce87f1..f61cd64b 100644
> --- a/src/test/snapshot-test.pm
> +++ b/src/test/snapshot-test.pm
> @@ -399,8 +399,8 @@ sub set_migration_caps { } # ignored
>
> # BEGIN redefine PVE::QemuServer methods
>
> -sub do_snapshots_with_qemu {
> - return 0;
> +sub do_snapshots_type {
> + return 'storage';
> }
>
> sub vm_start {
> --
> 2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [pve-devel] [PATCH pve-storage 04/10] rename_volume: add source && target snap
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 04/10] rename_volume: add source && target snap Alexandre Derumier via pve-devel
@ 2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 12:04 ` Thomas Lamprecht
2025-07-07 10:34 ` DERUMIER, Alexandre via pve-devel
0 siblings, 2 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-04 11:52 UTC (permalink / raw)
To: Proxmox VE development discussion; +Cc: Thomas Lamprecht
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:45 CEST geschrieben:
> allow renaming from|to an external snapshot volname
we could consider adding a new API method `rename_snapshot` instead:
my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
for the two plugins here it could easily share most of the implementation
with rename_volume, without blowing up the interface for a fairly limited
use case?
rename_volume right now is used for re-assigning a volume from one
owner/vmid to another only, AFAICT, with $target_volname never being
actually provided by callers. the new call would then never provide
$target_vmid and never provide $target_volname, while existing ones
never provide the snapshot parameters. OTOH, just like the existing
rename_volume, such a rename_snapshot method would only have a
single use case/call site, unless we plan to also add generic
snapshot renaming as a feature down the line..
another missing piece here is that for external snapshots, we need
to actually rename all the volumes when reassigning, else only the
current volume is renamed, and the rest still have the old owner/name..
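The two non-overlapping parameter sets can be sketched as follows (a hypothetical Python model for illustration only; the real interface is the Perl plugin API, and only the `snap-<name>-<volname>` naming scheme is taken from the patch series):

```python
# Model of the proposed split: rename_volume keeps the re-assignment use
# case, rename_snapshot only renames between snapshot names of one volume
# and never takes a target vmid or target volname.

def snap_volname(volname, snap):
    # "current" maps to the plain volname, anything else gets the
    # snap-<name>- prefix used by the patch series
    return volname if snap == "current" else f"snap-{snap}-{volname}"

def rename_snapshot(volname, source_snap, target_snap):
    # returns the (source, target) names the storage layer would rename
    return snap_volname(volname, source_snap), snap_volname(volname, target_snap)
```
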
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/Storage.pm | 10 ++++++++--
> src/PVE/Storage/LVMPlugin.pm | 17 +++++++++++++++--
> src/PVE/Storage/Plugin.pm | 14 +++++++++++++-
> 3 files changed, 36 insertions(+), 5 deletions(-)
>
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 7861bf6..fe6eaf7 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -2319,7 +2319,7 @@ sub complete_volume {
> }
>
> sub rename_volume {
> - my ($cfg, $source_volid, $target_vmid, $target_volname) = @_;
> + my ($cfg, $source_volid, $target_vmid, $target_volname, $source_snap, $target_snap) = @_;
>
> die "no source volid provided\n" if !$source_volid;
> die "no target VMID or target volname provided\n" if !$target_vmid && !$target_volname;
> @@ -2339,7 +2339,13 @@ sub rename_volume {
> undef,
> sub {
> return $plugin->rename_volume(
> - $scfg, $storeid, $source_volname, $target_vmid, $target_volname,
> + $scfg,
> + $storeid,
> + $source_volname,
> + $target_vmid,
> + $target_volname,
> + $source_snap,
> + $target_snap,
> );
> },
> );
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 1a992e8..0b506c7 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -838,11 +838,24 @@ sub volume_import_write {
> }
>
> sub rename_volume {
> - my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
> + my (
> + $class,
> + $scfg,
> + $storeid,
> + $source_volname,
> + $target_vmid,
> + $target_volname,
> + $source_snap,
> + $target_snap,
> + ) = @_;
>
> my (
> undef, $source_image, $source_vmid, $base_name, $base_vmid, undef, $format,
> ) = $class->parse_volname($source_volname);
> +
> + $source_image = $class->get_snap_volname($source_volname, $source_snap) if $source_snap;
if you flip these two around, this could just overwrite $source_volname, because otherwise
you rely on $source_image and $source_volname being identical always which might not be
the case in the future? or is this more correct in general, in case we ever add base image
support to non-thin LVM?
> + $target_volname = $class->get_snap_volname($source_volname, $target_snap) if $target_snap;
if we keep this, this should assert that $target_volname wasn't provided by the caller..
> +
> $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format)
> if !$target_volname;
>
> @@ -851,7 +864,7 @@ sub rename_volume {
> die "target volume '${target_volname}' already exists\n"
> if ($lvs->{$vg}->{$target_volname});
>
> - lvrename($vg, $source_volname, $target_volname);
> + lvrename($vg, $source_image, $target_volname);
this could then be dropped..
> return "${storeid}:${target_volname}";
> }
>
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index c35d5e5..5afe29b 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -1877,7 +1877,16 @@ sub volume_import_formats {
> }
>
> sub rename_volume {
> - my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
> + my (
> + $class,
> + $scfg,
> + $storeid,
> + $source_volname,
> + $target_vmid,
> + $target_volname,
> + $source_snap,
> + $target_snap,
> + ) = @_;
> die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 10;
> die "no path found\n" if !$scfg->{path};
>
> @@ -1885,6 +1894,9 @@ sub rename_volume {
> undef, $source_image, $source_vmid, $base_name, $base_vmid, undef, $format,
> ) = $class->parse_volname($source_volname);
>
> + $source_image = $class->get_snap_name($source_volname, $source_snap) if $source_snap;
> + $target_volname = $class->get_snap_name($source_volname, $target_snap) if $target_snap;
same here - this should assert that $target_volname wasn't provided, if we don't split
it out into a new sub..
> +
> $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1)
> if !$target_volname;
>
> --
> 2.39.5
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support Alexandre Derumier via pve-devel
@ 2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 13:22 ` DERUMIER, Alexandre via pve-devel
` (6 more replies)
0 siblings, 7 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-04 11:52 UTC (permalink / raw)
To: Proxmox VE development discussion
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:45 CEST geschrieben:
> add a snapext option to enable the feature
>
> When a snapshot is taken, the current volume is renamed to the snap volname
> and a new current image is created with the snap volume as its backing file
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/Storage.pm | 1 -
> src/PVE/Storage/Common.pm | 3 +-
> src/PVE/Storage/DirPlugin.pm | 1 +
> src/PVE/Storage/Plugin.pm | 263 +++++++++++++++++++++++++++++++++--
> 4 files changed, 252 insertions(+), 16 deletions(-)
>
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index 0396160..d83770c 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -479,7 +479,6 @@ sub volume_snapshot_rollback {
> }
> }
>
> -# FIXME PVE 8.x remove $running parameter (needs APIAGE reset)
> sub volume_snapshot_delete {
> my ($cfg, $volid, $snap, $running) = @_;
>
> diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
> index e73eeab..43f3f15 100644
> --- a/src/PVE/Storage/Common.pm
> +++ b/src/PVE/Storage/Common.pm
> @@ -172,10 +172,11 @@ sub qemu_img_create {
> }
>
> sub qemu_img_info {
> - my ($filename, $file_format, $timeout) = @_;
> + my ($filename, $file_format, $timeout, $follow_backing_files) = @_;
>
> my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
> push $cmd->@*, '-f', $file_format if $file_format;
> + push $cmd->@*, '--backing-chain' if $follow_backing_files;
>
> my $json = '';
> my $err_output = '';
> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 10e4f70..ae5d083 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -95,6 +95,7 @@ sub options {
> is_mountpoint => { optional => 1 },
> bwlimit => { optional => 1 },
> preallocation => { optional => 1 },
> + snapext => { optional => 1 },
needs to be "fixed", as the code doesn't handle mixing internal
and external snapshots on a single storage..
> };
> }
>
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 88c30c2..68d17ff 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -215,6 +215,11 @@ my $defaultData = {
> maximum => 65535,
> optional => 1,
> },
> + 'snapext' => {
> + type => 'boolean',
> + description => 'enable external snapshot.',
> + optional => 1,
> + },
> },
> };
>
> @@ -727,6 +732,7 @@ sub filesystem_path {
> my ($class, $scfg, $volname, $snapname) = @_;
>
> my ($vtype, $name, $vmid, undef, undef, $isBase, $format) = $class->parse_volname($volname);
> + $name = $class->get_snap_name($volname, $snapname) if $scfg->{snapext} && $snapname;
>
> # Note: qcow2/qed has internal snapshot, so path is always
> # the same (with or without snapshot => same file).
> @@ -931,6 +937,26 @@ sub alloc_image {
> return "$vmid/$name";
> }
>
> +my sub alloc_backed_image {
> + my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
> +
> + my $path = $class->path($scfg, $volname, $storeid);
> + my $backing_path = $class->path($scfg, $volname, $storeid, $backing_snap);
should we use a relative path here like we do when doing a linked clone? else
it basically means that it is no longer possible to move the storage mountpoint,
unless I am mistaken?
> +
> + eval { PVE::Storage::Common::qemu_img_create($scfg, 'qcow2', undef, $path, $backing_path) };
> + if ($@) {
> + unlink $path;
> + die "$@";
> + }
> +}
> +
> +my sub free_snap_image {
> + my ($class, $storeid, $scfg, $volname, $snap) = @_;
> +
> + my $path = $class->path($scfg, $volname, $storeid, $snap);
> + unlink($path) || die "unlink '$path' failed - $!\n";
> +}
> +
> sub free_image {
> my ($class, $storeid, $scfg, $volname, $isBase, $format) = @_;
>
> @@ -953,6 +979,20 @@ sub free_image {
> return undef;
> }
>
> + #delete external snapshots
> + if ($scfg->{snapext}) {
> + my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
> + for my $snapid (
> + sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} }
> + keys %$snapshots
> + ) {
> + my $snap = $snapshots->{$snapid};
> + next if $snapid eq 'current';
> + next if !$snap->{ext};
> + free_snap_image($class, $storeid, $scfg, $volname, $snapid);
> + }
> + }
> +
this is a bit tricky.. once we've deleted the first snapshot, we've basically invalidated
the whole image.. should we try to continue freeing as much as possible? and maybe even
start with the "current" image, so that a partial removal doesn't look like a valid image
anymore?
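The ordering suggested here can be sketched in a few lines (a Python model of the ordering only, not the actual Perl code; error handling and the actual unlink calls are up to the caller):

```python
def deletion_order(snapshots):
    """Order in which to delete external snapshot files: the 'current'
    overlay first, then the remaining snapshots newest to oldest, so a
    partial failure never leaves something that still looks like a
    valid image."""
    snaps = [s for s in snapshots if s != "current"]
    snaps.sort(key=lambda s: snapshots[s]["order"], reverse=True)
    return (["current"] if "current" in snapshots else []) + snaps
```
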
> unlink($path) || die "unlink '$path' failed - $!\n";
> }
>
> @@ -1159,11 +1199,39 @@ sub volume_snapshot {
>
> die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
and snapext is only allowed for qcow2!
>
> - my $path = $class->filesystem_path($scfg, $volname);
> + if ($scfg->{snapext}) {
> +
> + my $vmid = ($class->parse_volname($volname))[2];
> +
> + #if running, the old current has been renamed with blockdev-reopen by qemu
> + if (!$running) {
> + #rename current volume to snap volume
the two comments here could be a single one ;)
# rename volume unless qemu has already done it for us
> + $class->rename_volume($scfg, $storeid, $volname, $vmid, undef, 'current', $snap);
> + }
> +
> + eval { alloc_backed_image($class, $storeid, $scfg, $volname, $snap) };
> + if ($@) {
> + warn "$@ \n";
> + #if running, the revert is done by qemu with blockdev-reopen
> + if (!$running) {
> + eval {
> + $class->rename_volume(
> + $scfg, $storeid, $volname, $vmid, undef, $snap, 'current',
> + );
> + };
> + warn $@ if $@;
> + }
> + die "can't allocate new volume $volname with $snap backing image\n";
> + }
> +
> + } else {
> +
> + my $path = $class->filesystem_path($scfg, $volname);
>
> - my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-c', $snap, $path];
> + my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-c', $snap, $path];
>
> - run_command($cmd);
> + run_command($cmd);
> + }
>
> return undef;
> }
> @@ -1174,6 +1242,21 @@ sub volume_snapshot {
> sub volume_rollback_is_possible {
> my ($class, $scfg, $storeid, $volname, $snap, $blockers) = @_;
>
> + if ($scfg->{snapext}) {
> + #technically, we could manage multibranch, but it needs a lot more work for snapshot delete
> + #we would need to implement block-stream from the deleted snapshot to all other child branches
> + #when online, we need a transaction over multiple disks when deleting the last snapshot
> + #and need to merge into the current running file
> +
> + my $snappath = $class->path($scfg, $volname, $storeid, $snap);
> + my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
> + my $parentsnap = $snapshots->{current}->{parent};
> +
> + return 1 if $parentsnap eq $snap;
> +
while only used for replication atm AFAIR, we could fill $blockers here since
we have the information readily available already..
> + die "can't rollback, '$snap' is not most recent snapshot on '$volname'\n";
> + }
> +
nit: could be inverted:
# internal snapshots have no restrictions
return 1 if !$scfg->{snapext};
then the big part of the code doesn't need another level of indentation..
> return 1;
> }
>
> @@ -1182,11 +1265,22 @@ sub volume_snapshot_rollback {
>
> die "can't rollback snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
>
> - my $path = $class->filesystem_path($scfg, $volname);
> -
> - my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-a', $snap, $path];
> + if ($scfg->{snapext}) {
> + #simply delete the current snapshot and recreate it
> + eval { free_snap_image($class, $storeid, $scfg, $volname, 'current') };
> + if ($@) {
> + die "can't delete old volume $volname: $@\n";
> + }
>
> - run_command($cmd);
> + eval { alloc_backed_image($class, $storeid, $scfg, $volname, $snap) };
> + if ($@) {
> + die "can't allocate new volume $volname: $@\n";
> + }
> + } else {
> + my $path = $class->filesystem_path($scfg, $volname);
> + my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-a', $snap, $path];
> + run_command($cmd);
> + }
>
> return undef;
> }
> @@ -1196,15 +1290,83 @@ sub volume_snapshot_delete {
>
> die "can't delete snapshot for this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
>
> - return 1 if $running;
> + my $cmd = "";
>
> - my $path = $class->filesystem_path($scfg, $volname);
> + if ($scfg->{snapext}) {
>
> - $class->deactivate_volume($storeid, $scfg, $volname, $snap, {});
> + #qemu has already live-committed|streamed the snapshot, therefore we only have to drop the image itself
> + if ($running) {
> + eval { free_snap_image($class, $storeid, $scfg, $volname, $snap) };
> + if ($@) {
> + die "can't delete snapshot $snap of volume $volname: $@\n";
> + }
> + return;
> + }
>
> - my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-d', $snap, $path];
> + my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
> + my $snappath = $snapshots->{$snap}->{file};
> + my $snap_volname = $snapshots->{$snap}->{volname};
> + die "volume $snappath is missing" if !-e $snappath;
> +
> + my $parentsnap = $snapshots->{$snap}->{parent};
> + my $childsnap = $snapshots->{$snap}->{child};
> + my $childpath = $snapshots->{$childsnap}->{file};
> +
> + #if first snapshot, as it should be bigger, we merge the child into it and rename the snapshot to the child
> + if (!$parentsnap) {
> + print "$volname: deleting snapshot '$snap' by committing snapshot '$childsnap'\n";
> + print "running 'qemu-img commit $childpath'\n";
> + $cmd = ['/usr/bin/qemu-img', 'commit', $childpath];
> + eval { run_command($cmd) };
> + if ($@) {
> + warn
> + "The state of $snap is now invalid. Don't try to clone or roll it back. You can only try to delete it again later\n";
> + die "error committing $childsnap to $snap; $@\n";
> + }
> +
> + print "rename $snappath to $childpath\n";
> + rename($snappath, $childpath)
> + || die "rename '$snappath' to '$childpath' failed - $!\n";
should this use `rename_volume` or `rename_snapshot`?
>
> - run_command($cmd);
> + } else {
> + #we rebase the child image on the parent as new backing image
> + my $parentpath = $snapshots->{$parentsnap}->{file};
> + print
> + "$volname: deleting snapshot '$snap' by rebasing '$childsnap' on top of '$parentsnap'\n";
> + print "running 'qemu-img rebase -b $parentpath -F qcow2 -f qcow2 $childpath'\n";
> + $cmd = [
> + '/usr/bin/qemu-img',
> + 'rebase',
> + '-b',
> + $parentpath,
> + '-F',
> + 'qcow2',
> + '-f',
> + 'qcow2',
> + $childpath,
> + ];
> + eval { run_command($cmd) };
> + if ($@) {
> + #in case of abort, the state of the snap is still clean, just a little bit bigger
> + die "error rebasing $childsnap onto $parentsnap; $@\n";
> + }
> + #delete the old snapshot file (not part of the backing chain anymore)
> + eval { free_snap_image($class, $storeid, $scfg, $volname, $snap) };
> + if ($@) {
> + die "error deleting old snapshot volume $snap_volname: $@\n";
> + }
> + }
> +
> + } else {
> +
> + return 1 if $running;
> +
> + my $path = $class->filesystem_path($scfg, $volname);
> + $class->deactivate_volume($storeid, $scfg, $volname, $snap, {});
> +
> + $cmd = ['/usr/bin/qemu-img', 'snapshot', '-d', $snap, $path];
> + run_command($cmd);
> + }
>
> return undef;
> }
> @@ -1484,7 +1646,53 @@ sub status {
> sub volume_snapshot_info {
> my ($class, $scfg, $storeid, $volname) = @_;
>
> - die "volume_snapshot_info is not implemented for $class";
> + my $path = $class->filesystem_path($scfg, $volname);
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> +
> + my $json = PVE::Storage::Common::qemu_img_info($path, undef, 10, 1);
> + die "failed to query file information with qemu-img\n" if !$json;
> + my $json_decode = eval { decode_json($json) };
> + if ($@) {
> + die "Can't decode qemu snapshot list. Invalid JSON: $@\n";
> + }
> + my $info = {};
> + my $order = 0;
> + if (ref($json_decode) eq 'HASH') {
> + #internal snapshot list is a hashref
> + my $snapshots = $json_decode->{snapshots};
> + for my $snap (@$snapshots) {
> + my $snapname = $snap->{name};
> + $info->{$snapname}->{order} = $snap->{id};
> + $info->{$snapname}->{timestamp} = $snap->{'date-sec'};
> +
> + }
> + } elsif (ref($json_decode) eq 'ARRAY') {
> + #no snapshots or external snapshots come as an arrayref
> + my $snapshots = $json_decode;
> + for my $snap (@$snapshots) {
> + my $snapfile = $snap->{filename};
> + my $snapname = parse_snapname($snapfile);
> + $snapname = 'current' if !$snapname;
> + my $snapvolname = $class->get_snap_volname($volname, $snapname);
> +
> + $info->{$snapname}->{order} = $order;
> + $info->{$snapname}->{file} = $snapfile;
> + $info->{$snapname}->{volname} = "$snapvolname";
> + $info->{$snapname}->{volid} = "$storeid:$snapvolname";
> + $info->{$snapname}->{ext} = 1;
> +
> + my $parentfile = $snap->{'backing-filename'};
> + if ($parentfile) {
> + my $parentname = parse_snapname($parentfile);
> + $info->{$snapname}->{parent} = $parentname;
> + $info->{$parentname}->{child} = $snapname;
> + }
> + $order++;
> + }
> + }
> +
> + return $info;
> }
>
> sub activate_storage {
> @@ -2004,7 +2212,7 @@ sub qemu_blockdev_options {
> # the snapshot alone.
> my $format = ($class->parse_volname($volname))[6];
> die "cannot attach only the snapshot of a '$format' image\n"
> - if $options->{'snapshot-name'} && ($format eq 'qcow2' || $format eq 'qed');
> + if $options->{'snapshot-name'} && ($format eq 'qcow2' && !$scfg->{snapext} || $format eq 'qed');
let's make this a bit easier to read:
my $internal_snapshot = $format eq 'qed' || ($format eq 'qcow2' && !$scfg->{snapext});
die ..
if $options->{'snapshot-name'} && $internal_snapshot;
?
and then we can switch to using the new helper and combining that with the format?
>
> # The 'file' driver only works for regular files. The check below is taken from
> # block/file-posix.c:hdev_probe_device() in QEMU. Do not bother with detecting 'host_cdrom'
> @@ -2108,4 +2316,31 @@ sub config_aware_base_mkdir {
> }
> }
>
> +sub get_snap_name {
should this be public?
> + my ($class, $volname, $snapname) = @_;
> +
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> + $name = !$snapname || $snapname eq 'current' ? $name : "snap-$snapname-$name";
this is never called without a snapname, so we can assert that and drop this here..
the naming scheme here still clashes with regular volids unfortunately:
$ pvesm alloc ext4 12344321 snap-foobar-12344321-disk-foofoobar.qcow2 1G
Formatting '/mnt/pve/ext4/images/12344321/snap-foobar-12344321-disk-foofoobar.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=off compression_type=zlib size=1073741824 lazy_refcounts=off refcount_bits=16
successfully created 'ext4:12344321/snap-foobar-12344321-disk-foofoobar.qcow2'
$ pvesm list ext4 -content images -vmid 12344321 | grep foobar
ext4:12344321/snap-foobar-12344321-disk-foofoobar.qcow2 qcow2 images 1073741824 12344321
$ qm set 12344321 --scsi0 ext4:12344321/snap-foobar-12344321-disk-foofoobar.qcow2
should we maybe move snapshot files into a subdir, since `/` is not allowed in volnames?
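The clash can be reproduced in a few lines (a Python re-implementation of the patch's `parse_snapname` pattern, for illustration only): a volume that a user allocates directly under a `snap-...-vm...` name is indistinguishable from a snapshot file.

```python
import re

def parse_snapname(basename):
    # same pattern as parse_snapname() in the patch
    m = re.match(r"^snap-(.*)-vm(.*)$", basename)
    return m.group(1) if m else None

# a directly allocated volname matching the scheme is misdetected
# as belonging to snapshot "foobar":
parse_snapname("snap-foobar-vm-100-disk-0.qcow2")
```
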
> + return $name;
> +}
> +
> +sub get_snap_volname {
should this be public?
> + my ($class, $volname, $snapname) = @_;
> +
> + my $vmid = ($class->parse_volname($volname))[2];
> + my $name = $class->get_snap_name($volname, $snapname);
> + return "$vmid/$name";
> +}
> +
> +sub parse_snapname {
should this be public?
> + my ($name) = @_;
> +
> + my $basename = basename($name);
> + if ($basename =~ m/^snap-(.*)-vm(.*)$/) {
see above..
> + return $1;
> + }
> + return undef;
> +}
> +
> 1;
> --
> 2.39.5
* Re: [pve-devel] [PATCH pve-storage 07/10] storage: volume_snapshot: add $running param
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 07/10] storage: volume_snapshot: add $running param Alexandre Derumier via pve-devel
@ 2025-07-04 11:52 ` Fabian Grünbichler
0 siblings, 0 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-04 11:52 UTC (permalink / raw)
To: Proxmox VE development discussion
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:45 CEST geschrieben:
> This adds a $running param to volume_snapshot,
> it can be used if some extra actions need to be done at the storage
> layer when the snapshot has already been done at the qemu level.
>
> Note: zfs && rbd plugins already used this param in create_base,
> but it was not implemented in volume_snapshot.
it used to be[0], but got dropped. it served as an early return and got
replaced by the "do snapshot with qemu" logic in qemu-server that we are
now improving once more ;)
those two sites have FIXMEs, and since create_base is only called
- for disks newly added to templates
- when creating a template (via conversion, cloning, restoring, ..)
and by definition template guests are never running and never write
to their disks/mountpoints, those $running parameters there can be
dropped.
0: https://git.proxmox.com/?p=pve-storage.git;a=commit;h=f5640e7d3be3068d34599512435954276d6e27f0
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/Storage.pm | 4 ++--
> src/PVE/Storage/ESXiPlugin.pm | 2 +-
> src/PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
> src/PVE/Storage/LVMPlugin.pm | 2 +-
> src/PVE/Storage/LvmThinPlugin.pm | 2 +-
> src/PVE/Storage/PBSPlugin.pm | 2 +-
> src/PVE/Storage/Plugin.pm | 2 +-
> src/PVE/Storage/RBDPlugin.pm | 2 +-
> src/PVE/Storage/ZFSPoolPlugin.pm | 2 +-
> 9 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index fe6eaf7..0396160 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -449,13 +449,13 @@ sub volume_rollback_is_possible {
> }
>
> sub volume_snapshot {
> - my ($cfg, $volid, $snap) = @_;
> + my ($cfg, $volid, $snap, $running) = @_;
>
> my ($storeid, $volname) = parse_volume_id($volid, 1);
> if ($storeid) {
> my $scfg = storage_config($cfg, $storeid);
> my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> - return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap);
> + return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap, $running);
> } elsif ($volid =~ m|^(/.+)$| && -e $volid) {
> die "snapshot file/device '$volid' is not possible\n";
> } else {
> diff --git a/src/PVE/Storage/ESXiPlugin.pm b/src/PVE/Storage/ESXiPlugin.pm
> index ab5242d..e655d7b 100644
> --- a/src/PVE/Storage/ESXiPlugin.pm
> +++ b/src/PVE/Storage/ESXiPlugin.pm
> @@ -555,7 +555,7 @@ sub volume_size_info {
> }
>
> sub volume_snapshot {
> - my ($class, $scfg, $storeid, $volname, $snap) = @_;
> + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
>
> die "creating snapshots is not supported for $class\n";
> }
> diff --git a/src/PVE/Storage/ISCSIDirectPlugin.pm b/src/PVE/Storage/ISCSIDirectPlugin.pm
> index 62e9026..93cfd3c 100644
> --- a/src/PVE/Storage/ISCSIDirectPlugin.pm
> +++ b/src/PVE/Storage/ISCSIDirectPlugin.pm
> @@ -232,7 +232,7 @@ sub volume_resize {
> }
>
> sub volume_snapshot {
> - my ($class, $scfg, $storeid, $volname, $snap) = @_;
> + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
> die "volume snapshot is not possible on iscsi device\n";
> }
>
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 0b506c7..3d07260 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -691,7 +691,7 @@ sub volume_size_info {
> }
>
> sub volume_snapshot {
> - my ($class, $scfg, $storeid, $volname, $snap) = @_;
> + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
>
> die "lvm snapshot is not implemented";
> }
> diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
> index c244c91..e5df0b4 100644
> --- a/src/PVE/Storage/LvmThinPlugin.pm
> +++ b/src/PVE/Storage/LvmThinPlugin.pm
> @@ -339,7 +339,7 @@ sub create_base {
> # sub volume_resize {} reuse code from parent class
>
> sub volume_snapshot {
> - my ($class, $scfg, $storeid, $volname, $snap) = @_;
> + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
>
> my $vg = $scfg->{vgname};
> my $snapvol = "snap_${volname}_$snap";
> diff --git a/src/PVE/Storage/PBSPlugin.pm b/src/PVE/Storage/PBSPlugin.pm
> index 00170f5..45edc46 100644
> --- a/src/PVE/Storage/PBSPlugin.pm
> +++ b/src/PVE/Storage/PBSPlugin.pm
> @@ -966,7 +966,7 @@ sub volume_resize {
> }
>
> sub volume_snapshot {
> - my ($class, $scfg, $storeid, $volname, $snap) = @_;
> + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
> die "volume snapshot is not possible on pbs device";
> }
>
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 81443aa..88c30c2 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -1155,7 +1155,7 @@ sub volume_resize {
> }
>
> sub volume_snapshot {
> - my ($class, $scfg, $storeid, $volname, $snap) = @_;
> + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
>
> die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
>
> diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
> index ce7db50..883b0e4 100644
> --- a/src/PVE/Storage/RBDPlugin.pm
> +++ b/src/PVE/Storage/RBDPlugin.pm
> @@ -868,7 +868,7 @@ sub volume_resize {
> }
>
> sub volume_snapshot {
> - my ($class, $scfg, $storeid, $volname, $snap) = @_;
> + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
>
> my ($vtype, $name, $vmid) = $class->parse_volname($volname);
>
> diff --git a/src/PVE/Storage/ZFSPoolPlugin.pm b/src/PVE/Storage/ZFSPoolPlugin.pm
> index 979cf2c..9cdfa68 100644
> --- a/src/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/src/PVE/Storage/ZFSPoolPlugin.pm
> @@ -480,7 +480,7 @@ sub volume_size_info {
> }
>
> sub volume_snapshot {
> - my ($class, $scfg, $storeid, $volname, $snap) = @_;
> + my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
>
> my $vname = ($class->parse_volname($volname))[1];
>
> --
> 2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support
2025-07-04 6:44 ` [pve-devel] [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support Alexandre Derumier via pve-devel
@ 2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 12:31 ` DERUMIER, Alexandre via pve-devel
2025-07-07 7:16 ` DERUMIER, Alexandre via pve-devel
0 siblings, 2 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-04 11:52 UTC (permalink / raw)
To: Proxmox VE development discussion
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:44 CEST geschrieben:
> and use it for plugin linked clone
>
> This also enable extended_l2=on, as it's mandatory for backing file
> preallocation.
>
> Preallocation was missing previously, so it should increase performance
> for linked clone now (around x5 in randwrite 4k)
>
> cluster_size is set to 128k, as it reduce qcow2 overhead (reduce disk,
> but also memory needed to cache metadatas)
should we make this configurable?
> l2_extended is not enabled yet on base image, but it could help too
> to reduce overhead without impacting performance
>
> bench on 100G qcow2 file:
>
> fio --filename=/dev/sdb --direct=1 --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --name=test
> fio --filename=/dev/sdb --direct=1 --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --name=test
>
> base image:
>
> randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 20215
> randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 22219
> randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20217
> randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 21742
> randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21599
> randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 22037
>
> clone image with backing file:
>
> randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 3912
> randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 21476
> randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20563
> randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 22265
> randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 18016
> randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21611
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/Storage/Common.pm | 17 +++++++++++++----
> src/PVE/Storage/Plugin.pm | 17 +++--------------
> 2 files changed, 16 insertions(+), 18 deletions(-)
>
> diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
> index 29f2e52..78e5320 100644
> --- a/src/PVE/Storage/Common.pm
> +++ b/src/PVE/Storage/Common.pm
> @@ -150,14 +150,23 @@ sub preallocation_cmd_option {
> }
>
> sub qemu_img_create {
> - my ($scfg, $fmt, $size, $path) = @_;
> + my ($scfg, $fmt, $size, $path, $backing_path) = @_;
> +
> + die "size can't be specified if backing file is used" if $size && $backing_path;
should we assert that $backing_path is only used with $fmt eq 'qcow2'?
>
> my $cmd = ['/usr/bin/qemu-img', 'create'];
>
> - my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
> - push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
> + my $options = [];
> +
> + if ($backing_path) {
> + push @$cmd, '-b', $backing_path, '-F', 'qcow2';
and then use $fmt here as well?
> + push @$options, 'extended_l2=on', 'cluster_size=128k';
> + }
>
> - push @$cmd, '-f', $fmt, $path, "${size}K";
> + push @$options, preallocation_cmd_option($scfg, $fmt);
> + push @$cmd, '-o', join(',', @$options) if @$options > 0;
> + push @$cmd, '-f', $fmt, $path;
> + push @$cmd, "${size}K" if $size;
>
> run_command($cmd, errmsg => "unable to create image");
> }
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 80bb077..c35d5e5 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -880,20 +880,9 @@ sub clone_image {
> # Note: we use relative paths, so we need to call chdir before qemu-img
> eval {
> local $CWD = $imagedir;
> -
> - my $cmd = [
> - '/usr/bin/qemu-img',
> - 'create',
> - '-b',
> - "../$basevmid/$basename",
> - '-F',
> - $format,
> - '-f',
> - 'qcow2',
> - $path,
> - ];
> -
> - run_command($cmd);
> + PVE::Storage::Common::qemu_img_create(
> + $scfg, $format, undef, $path, "../$basevmid/$basename",
> + );
> };
> my $err = $@;
>
> --
> 2.39.5
* Re: [pve-devel] [PATCH pve-storage 02/10] common: add qemu_img_create an preallocation_cmd_option
2025-07-04 6:44 ` [pve-devel] [PATCH pve-storage 02/10] common: add qemu_img_create an preallocation_cmd_option Alexandre Derumier via pve-devel
@ 2025-07-04 11:53 ` Fabian Grünbichler
2025-07-04 12:33 ` DERUMIER, Alexandre via pve-devel
[not found] ` <51f988f11e60f9dfaa49658c1ed9ecf72fcfcde4.camel@groupe-cyllene.com>
0 siblings, 2 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-04 11:53 UTC (permalink / raw)
To: Proxmox VE development discussion
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:44 CEST geschrieben:
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/Storage/Common.pm | 52 +++++++++++++++++++++++++++++++++++++++
> src/PVE/Storage/Plugin.pm | 47 +----------------------------------
> 2 files changed, 53 insertions(+), 46 deletions(-)
>
> diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
> index 89a70f4..29f2e52 100644
> --- a/src/PVE/Storage/Common.pm
> +++ b/src/PVE/Storage/Common.pm
> @@ -5,12 +5,26 @@ use warnings;
>
> use PVE::JSONSchema;
> use PVE::Syscall;
> +use PVE::Tools qw(run_command);
>
> use constant {
> FALLOC_FL_KEEP_SIZE => 0x01, # see linux/falloc.h
> FALLOC_FL_PUNCH_HOLE => 0x02, # see linux/falloc.h
> };
>
> +our $QCOW2_PREALLOCATION = {
> + off => 1,
> + metadata => 1,
> + falloc => 1,
> + full => 1,
> +};
> +
> +our $RAW_PREALLOCATION = {
> + off => 1,
> + falloc => 1,
> + full => 1,
> +};
these should probably stay in Plugin.pm
> +
> =pod
>
> =head1 NAME
> @@ -110,4 +124,42 @@ sub deallocate : prototype($$$) {
> }
> }
>
> +sub preallocation_cmd_option {
this as well, since it is storage config dependent
> + my ($scfg, $fmt) = @_;
> +
> + my $prealloc = $scfg->{preallocation};
> +
> + if ($fmt eq 'qcow2') {
> + $prealloc = $prealloc // 'metadata';
> +
> + die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
> + if !$QCOW2_PREALLOCATION->{$prealloc};
> +
> + return "preallocation=$prealloc";
> + } elsif ($fmt eq 'raw') {
> + $prealloc = $prealloc // 'off';
> + $prealloc = 'off' if $prealloc eq 'metadata';
> +
> + die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
> + if !$RAW_PREALLOCATION->{$prealloc};
> +
> + return "preallocation=$prealloc";
> + }
> +
> + return;
> +}
> +
> +sub qemu_img_create {
> + my ($scfg, $fmt, $size, $path) = @_;
then we could lose the $scfg here, and instead add an $options (that the
later patch can then extend further).
> +
> + my $cmd = ['/usr/bin/qemu-img', 'create'];
> +
> + my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
> + push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
> +
> + push @$cmd, '-f', $fmt, $path, "${size}K";
> +
> + run_command($cmd, errmsg => "unable to create image");
> +}
> +
> 1;
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index c2f376b..80bb077 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -38,19 +38,6 @@ our @SHARED_STORAGE = (
> 'iscsi', 'nfs', 'cifs', 'rbd', 'cephfs', 'iscsidirect', 'zfs', 'drbd', 'pbs',
> );
>
> -our $QCOW2_PREALLOCATION = {
> - off => 1,
> - metadata => 1,
> - falloc => 1,
> - full => 1,
> -};
> -
> -our $RAW_PREALLOCATION = {
> - off => 1,
> - falloc => 1,
> - full => 1,
> -};
because we don't know whether somebody relies on this being here..
> -
> our $MAX_VOLUMES_PER_GUEST = 1024;
>
> cfs_register_file(
> @@ -606,31 +593,6 @@ sub parse_config {
> return $cfg;
> }
>
> -sub preallocation_cmd_option {
> - my ($scfg, $fmt) = @_;
> -
> - my $prealloc = $scfg->{preallocation};
> -
> - if ($fmt eq 'qcow2') {
> - $prealloc = $prealloc // 'metadata';
> -
> - die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
> - if !$QCOW2_PREALLOCATION->{$prealloc};
> -
> - return "preallocation=$prealloc";
> - } elsif ($fmt eq 'raw') {
> - $prealloc = $prealloc // 'off';
> - $prealloc = 'off' if $prealloc eq 'metadata';
> -
> - die "preallocation mode '$prealloc' not supported by format '$fmt'\n"
> - if !$RAW_PREALLOCATION->{$prealloc};
> -
> - return "preallocation=$prealloc";
> - }
> -
> - return;
> -}
> -
> # Storage implementation
>
> # called during addition of storage (before the new storage config got written)
> @@ -969,14 +931,7 @@ sub alloc_image {
> umask $old_umask;
> die $err if $err;
> } else {
> - my $cmd = ['/usr/bin/qemu-img', 'create'];
> -
> - my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
> - push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
> -
> - push @$cmd, '-f', $fmt, $path, "${size}K";
> -
> - eval { run_command($cmd, errmsg => "unable to create image"); };
> + eval { PVE::Storage::Common::qemu_img_create($scfg, $fmt, $size, $path) };
> if ($@) {
> unlink $path;
> rmdir $imagedir;
> --
> 2.39.5
* Re: [pve-devel] [PATCH pve-storage 04/10] rename_volume: add source && target snap
2025-07-04 11:52 ` Fabian Grünbichler
@ 2025-07-04 12:04 ` Thomas Lamprecht
2025-07-07 10:34 ` DERUMIER, Alexandre via pve-devel
1 sibling, 0 replies; 46+ messages in thread
From: Thomas Lamprecht @ 2025-07-04 12:04 UTC (permalink / raw)
To: Fabian Grünbichler, Proxmox VE development discussion
Am 04.07.25 um 13:52 schrieb Fabian Grünbichler:
>> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> hat am 04.07.2025 08:45 CEST geschrieben:
>> allow to rename from|to external snapshot volname
>
> we could consider adding a new API method `rename_snapshot` instead:
>
> my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
>
> for the two plugins here it could easily share most of the implementation
> with rename_volume, without blowing up the interface for a fairly limited
> use case?
>
> rename_volume right now is used for re-assigning a volume from one
> owner/vmid to another only, AFAICT, with $target_volname never being
> actually provided by callers. the new call would then never provide
> $target_vmid and never provide $target_volname, while existing ones
> never provide the snapshot parameters. OTOH, just like the existing
> rename_volume, such a rename_snapshot method would only have a
> single use case/call site, unless we plan to also add generic
> snapshot renaming as a feature down the line..
I'm currently not too deep into this code, but IMO it might indeed be a
bit nicer to have this as a separate, specialized method.
The number of call sites is not so important to me; a clearer and less
"multiplexed" API provides benefits on its own, like avoiding some of
the dangerous edge cases you pointed out below.
* Re: [pve-devel] [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support
2025-07-04 11:52 ` Fabian Grünbichler
@ 2025-07-04 12:31 ` DERUMIER, Alexandre via pve-devel
2025-07-07 7:16 ` DERUMIER, Alexandre via pve-devel
1 sibling, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-04 12:31 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
[-- Attachment #1: Type: message/rfc822, Size: 14172 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support
Date: Fri, 4 Jul 2025 12:31:04 +0000
Message-ID: <cd5d34fd36a79b70a38797c8ca4e85076847fd48.camel@groupe-cyllene.com>
>
> >>cluster_size is set to 128k, as it reduce qcow2 overhead (reduce
> >>disk,
> >>but also memory needed to cache metadatas)
>>
>>should we make this configurable?
I'm not sure yet; I chose the best balance between memory and
performance (too big a cluster reduces performance, too small a cluster
increases memory usage). There are 32 sub-allocated clusters of 4k
within a 128k cluster_size, so for a 4k workload it avoids
over-amplification when it needs to read the parent snapshot chain.
If the image is really big, the best way to keep good performance is to
increase the l2-cache-size (maybe make that tunable).
The default qemu cache is 32MB, which is able to handle a 256GB image
with the default 64k cluster size without performance degradation.
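The cache arithmetic behind those numbers can be sketched as follows
(a rough illustration, assuming the usual 8-byte qcow2 L2 entries,
doubled to 16 bytes when extended_l2 sub-cluster allocation is on):

```python
def l2_cache_coverage(cache_bytes: int, cluster_size: int, extended_l2: bool = False) -> int:
    """Bytes of virtual disk covered by a qcow2 L2 metadata cache.

    Each L2 entry maps one cluster; an entry is 8 bytes, or 16 bytes
    when extended_l2 is enabled (assumption based on the qcow2 layout,
    not taken from this patch series).
    """
    entry_size = 16 if extended_l2 else 8
    return cache_bytes // entry_size * cluster_size

MiB, GiB = 1 << 20, 1 << 30

# default 32 MiB cache with the default 64k clusters -> 256 GiB covered
print(l2_cache_coverage(32 * MiB, 64 * 1024) // GiB)         # 256
# extended_l2 doubles the entry size, but the 128k cluster doubles it back
print(l2_cache_coverage(32 * MiB, 128 * 1024, True) // GiB)  # 256
```

So the 128k + extended_l2 combination keeps the same cache coverage as
the qemu defaults while halving the number of L2 entries per byte of
guest data.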
> sub qemu_img_create {
> - my ($scfg, $fmt, $size, $path) = @_;
> + my ($scfg, $fmt, $size, $path, $backing_path) = @_;
> +
> + die "size can't be specified if backing file is used" if $size
> && $backing_path;
>>should we assert that $backing_path is only used with $fmt eq
'qcow2'?
yes indeed, qcow2 is the only format supporting a backing file
>
> my $cmd = ['/usr/bin/qemu-img', 'create'];
>
> - my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
> - push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
> + my $options = [];
> +
> + if ($backing_path) {
> + push @$cmd, '-b', $backing_path, '-F', 'qcow2';
>>and then use $fmt here as well?
ok, will do
* Re: [pve-devel] [PATCH pve-storage 02/10] common: add qemu_img_create an preallocation_cmd_option
2025-07-04 11:53 ` Fabian Grünbichler
@ 2025-07-04 12:33 ` DERUMIER, Alexandre via pve-devel
[not found] ` <51f988f11e60f9dfaa49658c1ed9ecf72fcfcde4.camel@groupe-cyllene.com>
1 sibling, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-04 12:33 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
[-- Attachment #1: Type: message/rfc822, Size: 12066 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 02/10] common: add qemu_img_create an preallocation_cmd_option
Date: Fri, 4 Jul 2025 12:33:46 +0000
Message-ID: <51f988f11e60f9dfaa49658c1ed9ecf72fcfcde4.camel@groupe-cyllene.com>
>>these should probably stay in Plugin.pm
Ok, will do, no problem (Fiona asked me to move it out of Plugin)
* Re: [pve-devel] [PATCH qemu-server 3/3] qcow2: add external snapshot support
2025-07-04 11:52 ` Fabian Grünbichler
@ 2025-07-04 12:46 ` DERUMIER, Alexandre via pve-devel
0 siblings, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-04 12:46 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
[-- Attachment #1: Type: message/rfc822, Size: 12543 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 3/3] qcow2: add external snapshot support
Date: Fri, 4 Jul 2025 12:46:20 +0000
Message-ID: <dd14d131f63fe8553666fe61f96604c0014e18e6.camel@groupe-cyllene.com>
-
> + #we skip snapshot for tpmstate
> + return if $deviceid && $deviceid =~ m/tpmstate0/;
>>I think this is wrong.. this should return 'storage' as well?
Ah yes, indeed. I don't know why I was confused and thought we
couldn't take a storage snapshot of the tpmstate while the VM is running.
I'll fix it.
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-04 11:52 ` Fabian Grünbichler
@ 2025-07-04 13:22 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c38598bae6477dfa6af0db96da054b156698d41c.camel@groupe-cyllene.com>
` (5 subsequent siblings)
6 siblings, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-04 13:22 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
[-- Attachment #1: Type: message/rfc822, Size: 17423 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Fri, 4 Jul 2025 13:22:52 +0000
Message-ID: <c38598bae6477dfa6af0db96da054b156698d41c.camel@groupe-cyllene.com>
> + snapext => { optional => 1 },
>>needs to be "fixed", as the code doesn't handle mixing internal
>>and external snapshots on a single storage..
indeed, I'll fix it
>
>
> +my sub alloc_backed_image {
> + my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
> +
> + my $path = $class->path($scfg, $volname, $storeid);
> + my $backing_path = $class->path($scfg, $volname, $storeid,
> $backing_snap);
>>should we use a relative path here like we do when doing a linked
>>clone? else
>>it basically means that it is no longer possible to move the storage
>>mountpoint,
>>unless I am mistaken?
I hadn't thought about mountpoint changes, but yes, it won't work
anymore without a relative path.
> + #delete external snapshots
> + if ($scfg->{snapext}) {
> + my $snapshots = $class->volume_snapshot_info($scfg,
> $storeid, $volname);
> + for my $snapid (
> + sort { $snapshots->{$b}->{order} <=> $snapshots-
> >{$a}->{order} }
> + keys %$snapshots
> + ) {
> + my $snap = $snapshots->{$snapid};
> + next if $snapid eq 'current';
> + next if !$snap->{ext};
> + free_snap_image($class, $storeid, $scfg, $volname,
> $snapid);
> + }
> + }
> +
>>this is a bit tricky.. once we've deleted the first snapshot, we've
>>basically invalidated
>>the whole image..
Well, we want to delete it anyway, so do we really care about invalidating it?
>> should we try to continue freeing as much as possible? and maybe
even
>>start with the "current" image, so that a partial removal doesn't
>>look like valid image
>>anymore?
Currently, volume_snapshot_info reads the snapshot chain from the
current image (to do only one qemu-img call); that's why I'm removing
snapshots in reverse order.
If something hangs with a partial delete, you can still try again later.
If we want to delete from the current image down to the last snapshot,
I'd need something like calling qemu-img info on each snap file to find
its parent.
Or maybe use something else than volume_snapshot_info here: simply glob
all the VM disk && snap files and delete them in random order, as we
want to delete them anyway.
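The reverse-order walk described above can be sketched like this (a
hypothetical illustration mirroring the quoted Perl loop, not the PVE
code itself; the `snapshots` shape with `order`/`ext` keys follows the
volume_snapshot_info result discussed in this thread):

```python
def snapshots_in_delete_order(snapshots: dict) -> list:
    """Order external snapshots for deletion, newest first.

    `snapshots` maps snapshot name to a dict with an 'order' key
    (position in the backing chain) and an optional 'ext' flag;
    the 'current' entry and non-external snapshots are skipped,
    as in the quoted free-image loop.
    """
    return [
        name
        for name in sorted(snapshots, key=lambda s: snapshots[s]["order"], reverse=True)
        if name != "current" and snapshots[name].get("ext")
    ]

snaps = {
    "current": {"order": 3},
    "snap2": {"order": 2, "ext": 1},
    "snap1": {"order": 1, "ext": 1},
}
print(snapshots_in_delete_order(snaps))  # ['snap2', 'snap1']
```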
>>the naming scheme here still clashes with regular volids
unfortunately:
>>$ pvesm alloc ext4 12344321 snap-foobar-12344321-disk-foofoobar.qcow2
>>1G
>>Formatting '/mnt/pve/ext4/images/12344321/snap-foobar-12344321-disk-
>>foofoobar.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off
>>preallocation=off compression_type=zlib size=1073741824
>>lazy_refcounts=off refcount_bits=16
>>successfully created 'ext4:12344321/snap-foobar-12344321-disk-
>>foofoobar.qcow2'
>>$ pvesm list ext4 -content images -vmid 12344321 | grep foobar
>>ext4:12344321/snap-foobar-12344321-disk-foofoobar.qcow2 qcow2
>>images 1073741824 12344321
>>$ qm set 12344321 --scsi0 ext4:12344321/snap-foobar-12344321-disk-
>>foofoobar.qcow2
>>should we maybe move snapshot files into a subdir, since `/` is not
>>allowed in volnames?
What about LVM? (I think it would be great to have the same scheme for
both, but LVM also has reserved characters like @.)
* Re: [pve-devel] [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 12:31 ` DERUMIER, Alexandre via pve-devel
@ 2025-07-07 7:16 ` DERUMIER, Alexandre via pve-devel
1 sibling, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-07 7:16 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
[-- Attachment #1: Type: message/rfc822, Size: 13967 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support
Date: Mon, 7 Jul 2025 07:16:27 +0000
Message-ID: <9215b2c8bbdea57629265afa1d14ea6bb8ac386d.camel@groupe-cyllene.com>
Ah, sorry, I just noticed that I rebased the wrong patch series version
(I had already done some fixes for Fiona's last review).
Fiona wanted a dedicated sub to create backed images, like:
our $QCOW2_CLUSTERS = {
    backed => ['extended_l2=on', 'cluster_size=128k'],
};

=pod

=head3 qemu_img_create_qcow2_backed

    qemu_img_create_qcow2_backed($scfg, $path, $backing_path, $backing_format)

Create a new qemu qcow2 image C<$path> using an existing backing image
C<$backing_path> with backing format C<$backing_format>.

=cut

sub qemu_img_create_qcow2_backed {
    my ($scfg, $path, $backing_path, $backing_format) = @_;

    my $cmd = [
        '/usr/bin/qemu-img', 'create',
        '-F', $backing_format, '-b', $backing_path,
        '-f', 'qcow2', $path,
    ];

    my $options = $QCOW2_CLUSTERS->{backed};
    push @$options, preallocation_cmd_option($scfg, 'qcow2');
    push @$cmd, '-o', join(',', @$options) if @$options > 0;

    run_command($cmd, errmsg => "unable to create image");
}
* Re: [pve-devel] [PATCH pve-storage 02/10] common: add qemu_img_create an preallocation_cmd_option
[not found] ` <51f988f11e60f9dfaa49658c1ed9ecf72fcfcde4.camel@groupe-cyllene.com>
@ 2025-07-07 7:55 ` Fabian Grünbichler
0 siblings, 0 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-07 7:55 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel
> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> hat am 04.07.2025 14:33 CEST geschrieben:
>
>
> >>these should probably stay in Plugin.pm
>
> Ok, will do, no problem (Fiona asked me to move it out of Plugin)
I'd prefer if the helpers there remain low-level helpers
for now, and not config-handling things including default
values..
but it's a bit of a grey area, sorry for the back and
forth!
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
[not found] ` <c38598bae6477dfa6af0db96da054b156698d41c.camel@groupe-cyllene.com>
@ 2025-07-07 8:17 ` Fabian Grünbichler
2025-07-07 10:18 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c671fe82a7cdab90a3691115a7132d0a35ae79b7.camel@groupe-cyllene.com>
0 siblings, 2 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-07 8:17 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel
> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> hat am 04.07.2025 15:22 CEST geschrieben:
> > + #delete external snapshots
> > + if ($scfg->{snapext}) {
> > + my $snapshots = $class->volume_snapshot_info($scfg,
> > $storeid, $volname);
> > + for my $snapid (
> > + sort { $snapshots->{$b}->{order} <=> $snapshots-
> > >{$a}->{order} }
> > + keys %$snapshots
> > + ) {
> > + my $snap = $snapshots->{$snapid};
> > + next if $snapid eq 'current';
> > + next if !$snap->{ext};
> > + free_snap_image($class, $storeid, $scfg, $volname,
> > $snapid);
> > + }
> > + }
> > +
>
> >>this is a bit tricky.. once we've deleted the first snapshot, we've
> >>basically invalidated
> >>the whole image..
> Well, we want to delete it anyway, we don't care about invalidate it ?
>
> >> should we try to continue freeing as much as possible? and maybe
> even
> >>start with the "current" image, so that a partial removal doesn't
> >>look like valid image
> >>anymore?
>
> currently the volume_snapshot_info is reading the snapshot chain from
> the current image (to do only 1 qemu-img call), that why I'm removing
> snapshots in reverse order.
> If something hang with partial delete, you can still try again later.
> If we want to delete from the current to last snap, I'll need something
> like calling qemu-info info on each snap file to find each parent.
>
> or maybe use something else than volume_snapshot_info here, simply glob
> all the vm disk && snap files and delete them in random order, as we
> want to delete it anyway.
yes, this is exactly what I meant with tricky ;)
if we start deleting snapshots from the "first"/root one, then it's
easier to retry deletion after a partial deletion - but the image still
"looks" like a valid one at first glance (although it is of course
unusable as soon as any snapshot is missing!)
if we start deleting with the image file, then it is immediately clear
that there is no more image to be used - but retrying a partial deletion
is impossible via the PVE API, you need to do it manually.
I just tried, and this would need more work if we want to support the
"retry partial deletion" approach - because:
root.qcow2 <- snap.qcow2 <- current.qcow2
delete root.qcow2
$ qemu-img info --output json --backing-chain -f qcow2 current.qcow2
qemu-img: Could not open 'root.qcow2': Could not open 'root.qcow2': No such file or directory
so we'd need to actually query image by image and detect when the chain
is broken if we want to support that, which might not be worth it if
the snapshot deletion log contains the information about which snapshot
volumes are left over and need to be cleaned up manually?
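The image-by-image probing could look roughly like this (a sketch only:
the `backing_of` mapping stands in for a per-image `qemu-img info` call,
and a missing key models the "Could not open" failure shown above):

```python
def walk_chain(top: str, backing_of: dict):
    """Walk a qcow2 backing chain one image at a time.

    Returns the ordered chain that could still be opened and a flag
    telling whether the chain is broken (a backing file is missing).
    `backing_of` maps image name -> backing image name (or None for
    the root); it is a hypothetical stand-in for `qemu-img info`.
    """
    chain, cur = [], top
    while cur is not None:
        if cur not in backing_of:  # image file gone -> chain is broken
            return chain, True
        chain.append(cur)
        cur = backing_of[cur]
    return chain, False

# root.qcow2 <- snap.qcow2 <- current.qcow2, with root.qcow2 deleted:
images = {"current.qcow2": "snap.qcow2", "snap.qcow2": "root.qcow2"}
print(walk_chain("current.qcow2", images))
# (['current.qcow2', 'snap.qcow2'], True)
```

Unlike `qemu-img info --backing-chain`, which aborts on the first
missing file, this keeps the partial chain, so a retry could still free
the surviving snapshot volumes.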
> >>the naming scheme here still clashes with regular volids
> unfortunately:
>
> >>$ pvesm alloc ext4 12344321 snap-foobar-12344321-disk-foofoobar.qcow2
> >>1G
> >>Formatting '/mnt/pve/ext4/images/12344321/snap-foobar-12344321-disk-
> >>foofoobar.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off
> >>preallocation=off compression_type=zlib size=1073741824
> >>lazy_refcounts=off refcount_bits=16
> >>successfully created 'ext4:12344321/snap-foobar-12344321-disk-
> >>foofoobar.qcow2'
> >>$ pvesm list ext4 -content images -vmid 12344321 | grep foobar
> >>ext4:12344321/snap-foobar-12344321-disk-foofoobar.qcow2 qcow2
> >>images 1073741824 12344321
> >>$ qm set 12344321 --scsi0 ext4:12344321/snap-foobar-12344321-disk-
> >>foofoobar.qcow2
>
> >>should we maybe move snapshot files into a subdir, since `/` is not
> >>allowed in volnames?
>
> what about lvm ? (I think it should be great to have same scheme for
> both, but lvm have also reserved characters like @)
the naming scheme is already different:
<VMID>/<anything except slash and spaces>.<fmt> (dir)
vs
vm-<VMID>-<anything except spaces>[.<qcow2>] (LVM)
the LVM one is in practice limited further by what is allowed in LV
names, like you said. but for LVM it's fine as it is because of the
`vm-` prefix, we can add a second namespace besides it with `snap-`
without the risk of collisions. or we could switch to make it uniform
with LVM thin, which uses
snap_<volname>_<snap>
as snapshot LV name.. that would make it (for example)
snap_vm-100-disk-1.qcow2_foobar
for a snapshot named "foobar" of the volume "vm-100-disk-1.qcow2"
for the DirPlugin we need to add another level for the snapshots
on-disk, else we cannot differentiate between weirdly-named image
files and snapshot files..
so we could add a directory "snapshots" and put all the snapshot
files in there. Since snapshots are never referenced via a volume ID,
this just affects the on-disk structure, not the volume ID format.
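The proposed `snap_<volname>_<snap>` scheme can be sketched as follows
(illustration only; it assumes, as this sketch's own convention, that
the snapshot name itself contains no underscore so the last `_` can act
as the separator):

```python
import re

def snapshot_lv_name(volname: str, snap: str) -> str:
    """Build the LVM-thin-style snapshot name discussed above."""
    return f"snap_{volname}_{snap}"

def parse_snapshot_lv_name(name: str):
    """Split a snapshot name back into (volname, snap).

    The volname may contain underscores, so match greedily up to the
    last separator; returns None if the name is not a snapshot name.
    """
    m = re.match(r"^snap_(.+)_([^_]+)$", name)
    return (m.group(1), m.group(2)) if m else None

n = snapshot_lv_name("vm-100-disk-1.qcow2", "foobar")
print(n)                          # snap_vm-100-disk-1.qcow2_foobar
print(parse_snapshot_lv_name(n))  # ('vm-100-disk-1.qcow2', 'foobar')
```

The `snap_` prefix keeps these names in a separate namespace from the
`vm-` prefix, which is what avoids collisions on the LVM side.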
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-07 8:17 ` Fabian Grünbichler
@ 2025-07-07 10:18 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c671fe82a7cdab90a3691115a7132d0a35ae79b7.camel@groupe-cyllene.com>
1 sibling, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-07 10:18 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
[-- Attachment #1: Type: message/rfc822, Size: 16707 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Mon, 7 Jul 2025 10:18:28 +0000
Message-ID: <c671fe82a7cdab90a3691115a7132d0a35ae79b7.camel@groupe-cyllene.com>
>
> or maybe use something else than volume_snapshot_info here, simply
> glob
> all the vm disk && snap files and delete them in random order, as we
> want to delete it anyway.
yes, this is exactly what I meant with tricky ;)
>>if we start deleting snapshots from the "first"/root one, then it's
>>easier to retry deletion after a partial deletion - but the image
>>still
>>"looks" like a valid one at first glance (although it is of course
>>unusable as soon as any snapshot is missing!)
ok, I see
>>if we start deleting with the image file, then it is immediately
>>clear
>>that there is no more image to be used - but retrying a partial
>>deletion
>>is impossible via the PVE API, you need to do it manually.
>>
>>I just tried, and this would need more work if we want to support the
>>"retry partial deletion" approach - because:
>>
>>root.qcow2 <- snap.qcow2 <- current.qcow2
>>
>>delete root.qcow2
>>
>>$ qemu-img info --output json --backing-chain -f qcow2 current.qcow2
>>qemu-img: Could not open 'root.qcow2': Could not open 'root.qcow2':
>>No such file or directory
>>
>>so we'd need to actually query image by image and detect when the
>>chain
>>is broken, if we want to support that which might not be worth it, if
>>the snapshot deletion log contains the information about which
>>snapshot
>>volumes are leftover and need to be manually cleaned?
yes, I think that it could be manually cleaned, no problem.
I'll revert the delete starting with current.
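The "query image by image" approach discussed above amounts to walking the chain link by link and stopping at the first missing file. A minimal sketch, with the chain modeled as a plain dict instead of real `qemu-img info` calls:

```python
def walk_chain(start, backing_of, exists):
    """Walk a backing chain file by file.

    Returns (chain, broken): `chain` holds the images that could be opened,
    `broken` is the first missing file (or None if the chain is complete).
    In a real implementation each step would be one `qemu-img info
    --output json` call on a single image, not a --backing-chain call.
    """
    chain = []
    cur = start
    while cur is not None:
        if not exists(cur):
            return chain, cur      # first missing link -> chain is broken
        chain.append(cur)
        cur = backing_of.get(cur)  # None means we reached the root image
    return chain, None
```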
>
> what about lvm ? (I think it should be great to have same scheme for
> both, but lvm have also reserved characters like @)
>>the naming scheme is already different:
>><VMID>/<anything except slash and spaces>.<fmt> (dir)
>>
>>vs
>>
>>vm-<VMID>-<anything except spaces>[.<qcow2>] (LVM)
>>the LVM one is in practice limited further by what is allowed in LV
>>names, like you said. but for LVM it's fine as it is because of the
>>`vm-` prefix, we can add a second namespace besides it with `snap-`
>>without the risk of collisions. or we could switch to make it uniform
>>with LVM thin, which uses
>>
>>snap_<volname>_<snap>
>>
>>as snapshot LV name.. that would make it (for example)
>>
>>snap_vm-100-disk-1.qcow2_foobar
>>
>>for a snapshot named "foobar" of the volume "vm-100-disk-1.qcow2"
>>
>>
>>for the DirPlugin we need to add another level for the snapshots
>>on-disk, else we cannot differentiate between weirdly-named image
>>files and snapshot files..
>>
>>so we could add a directory "snapshots" and put all the snapshot
>>files in there. since snapshots are never referenced via a volume ID,
>>this just affects the on disk structure, not the volume ID format.
ok, no problem, I'll do it like this.
Do you want a snapshot subdirectory, like
/var/lib/vz/images/<vmid>/snapshots/
or a global snapshots directory, like
/var/lib/vz/snapshots/<vmid>/
?
* Re: [pve-devel] [PATCH pve-storage 04/10] rename_volume: add source && target snap
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 12:04 ` Thomas Lamprecht
@ 2025-07-07 10:34 ` DERUMIER, Alexandre via pve-devel
1 sibling, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-07 10:34 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, t.lamprecht
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 04/10] rename_volume: add source && target snap
Date: Mon, 7 Jul 2025 10:34:21 +0000
Message-ID: <fc6e2ce2fd3aa270f21224a870416235e064ddb6.camel@groupe-cyllene.com>
>>we could consider adding a new API method `rename_snapshot` instead:
>>
>>my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) =
>>@_;
>>
>>for the two plugins here it could easily share most of the
>>implementation
>>with rename_volume, without blowing up the interface for a fairly
>>limited
>>use case?
>>
>>rename_volume right now is used for re-assigning a volume from one
>>owner/vmid to another only, AFAICT, with $target_volname never being
>>actually provided by callers. the new call would then never provide
>>$target_vmid and never provide $target_volname, while existing ones
>>never provide the snapshot parameters. OTOH, just like the existing
>>rename_volume, such a rename_snapshot method would only have a
>>single use case/call site, unless we plan to also add generic
>>snapshot renaming as a feature down the line..
ok, no problem, I'll add it
>>another missing piece here is that for external snapshots, we need
>>to actually rename all the volumes when reassigning, else only the
>>current volume is renamed, and the rest still have the old
owner/name..
ah yes, indeed, I'll fix it
>
>
> my (
> undef, $source_image, $source_vmid, $base_name, $base_vmid,
> undef, $format,
> ) = $class->parse_volname($source_volname);
> +
> + $source_image = $class->get_snap_volname($source_volname,
> $source_snap) if $source_snap;
>>if you flip these two around, this could just overwrite
>>$source_volname, because otherwise
>>you rely on $source_image and $source_volname being identical always
>>which might not be
>>the case in the future?
>> or is this more correct in general, in case we ever add base image
>>support to non-thin LVM?
it was mostly because source_volname is used just after to find
target_volname.
I'll clean this up in the dedicated rename_snapshot sub
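For a directory storage, the proposed rename_snapshot call could look roughly like this. The on-disk pattern `snap-<snap>-<volname>` and the helper names are assumptions for illustration, not the final implementation:

```python
import os

def snap_file(image_dir, volname, snap):
    # assumed on-disk pattern: snap-<snap>-<volname>
    return os.path.join(image_dir, f"snap-{snap}-{volname}")

def rename_snapshot(image_dir, volname, source_snap, target_snap):
    """Rename one snapshot file of `volname` from source_snap to target_snap."""
    src = snap_file(image_dir, volname, source_snap)
    dst = snap_file(image_dir, volname, target_snap)
    if os.path.exists(dst):
        raise FileExistsError(dst)
    os.rename(src, dst)
    return dst
```

For external snapshots this would have to be applied to every volume in the chain, per the point raised above about reassignment only renaming the current volume.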
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
[not found] ` <c671fe82a7cdab90a3691115a7132d0a35ae79b7.camel@groupe-cyllene.com>
@ 2025-07-07 10:53 ` Fabian Grünbichler
0 siblings, 0 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-07 10:53 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel
> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> wrote on 07.07.2025 12:18 CEST:
> >>for the DirPlugin we need to add another level for the snapshots
> >>on-disk, else we cannot differentiate between weirdly-named image
> >>files and snapshot files..
> >>
> >>so we could add a directory "snapshots" and put all the snapshot
> >>files in there. since snapshots are never referenced via a volume ID,
> >>this just affects the on disk structure, not the volume ID format.
>
> ok, no problem, I'll do it like this.
>
> Do you want a snapshot subdirectory, like
> /var/lib/vz/images/<vmid>/snapshots/
I think this makes the most sense, as it logically groups the snapshot
files below the main ones..
>
> or a global snapshots directory, like
>
> /var/lib/vz/snapshots/<vmid>/
>
> ?
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 13:22 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c38598bae6477dfa6af0db96da054b156698d41c.camel@groupe-cyllene.com>
@ 2025-07-08 8:44 ` DERUMIER, Alexandre via pve-devel
[not found] ` <3d1d8516e3c68de370608033647a38e99ef50f23.camel@groupe-cyllene.com>
` (3 subsequent siblings)
6 siblings, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-08 8:44 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Tue, 8 Jul 2025 08:44:42 +0000
Message-ID: <3d1d8516e3c68de370608033647a38e99ef50f23.camel@groupe-cyllene.com>
> preallocation => { optional => 1 },
> + snapext => { optional => 1 },
>>needs to be "fixed", as the code doesn't handle mixing internal
>>and external snapshots on a single storage..
I think it'll break parsing of already-configured storages without the
snapext option?
(It's breaking current tests in pve-storage)
How do you handle this? Update storage.cfg when upgrading the pve-storage
package?
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
[not found] ` <3d1d8516e3c68de370608033647a38e99ef50f23.camel@groupe-cyllene.com>
@ 2025-07-08 8:56 ` Fabian Grünbichler
2025-07-08 11:37 ` DERUMIER, Alexandre via pve-devel
0 siblings, 1 reply; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-08 8:56 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel
> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> wrote on 08.07.2025 10:44 CEST:
>
>
> > preallocation => { optional => 1 },
> > + snapext => { optional => 1 },
>
> >>needs to be "fixed", as the code doesn't handle mixing internal
> >>and external snapshots on a single storage..
>
> I think it'll break parsing of already-configured storages without the
> snapext option?
I don't think it does?
> (It's breaking current tests in pve-storage)
how?
> How do you handle this ? update storage.cfg when upgrading pve-storage
> package ?
no, it should be enough that the default is disabled..
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-04 11:52 ` Fabian Grünbichler
` (3 preceding siblings ...)
[not found] ` <3d1d8516e3c68de370608033647a38e99ef50f23.camel@groupe-cyllene.com>
@ 2025-07-08 10:04 ` DERUMIER, Alexandre via pve-devel
[not found] ` <27854af70a4fe3a7765d2760098e2f82f3475f17.camel@groupe-cyllene.com>
2025-07-09 12:52 ` DERUMIER, Alexandre via pve-devel
6 siblings, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-08 10:04 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Tue, 8 Jul 2025 10:04:26 +0000
Message-ID: <27854af70a4fe3a7765d2760098e2f82f3475f17.camel@groupe-cyllene.com>
> +my sub alloc_backed_image {
> + my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
> +
> + my $path = $class->path($scfg, $volname, $storeid);
> + my $backing_path = $class->path($scfg, $volname, $storeid,
> $backing_snap);
>>should we use a relative path here like we do when doing a linked
>>clone? else
>>it basically means that it is no longer possible to move the storage
>>mountpoint,
>>unless I am mistaken?
ah, it doesn't play nicely with the snapshots directory, because the
current image is renamed && moved to the snapshot directory, which breaks
the relative path.
1) create snap1
qm snapshot 10000 snap1
/home/store/images/10000/vm-10000-disk-0.qcow2
------>./snapshot/snap-snap1-vm-10000-disk-0.qcow2
2) create a snap2:
a) rename current to snap2
mv /home/store/images/10000/vm-10000-disk-0.qcow2 /home/store/images/10000/snapshots/snap-snap2-vm-10000-disk-0.qcow2
b) create the new current with snap2 backing
/usr/bin/qemu-img create -F qcow2 -b ./snapshots/snap-snap2-vm-10000-disk-0.qcow2 -f qcow2 /home/store/images/10000/vm-10000-disk-0.qcow2
qemu-img: /home/store/images/10000/vm-10000-disk-0.qcow2: Could not open backing file: Could not open '/home/store/images/10000/./snapshots/./snapshots/snap-snap1-vm-10000-disk-0.qcow2': No such file or directory
Could not open backing image.
same with:
qemu-img info ./snapshots/snap-snap2-vm-10000-disk-0.qcow2 --backing-chain
qemu-img: Could not open './snapshots/./snapshots/snap-snap1-vm-10000-disk-0.qcow2': Could not open './snapshots/./snapshots/snap-snap1-vm-10000-disk-0.qcow2': No such file or directory
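The failure above follows from how relative backing references are resolved: relative to the directory of the image that references them, not to the original image directory. A sketch of that resolution rule:

```python
import os

def resolve_backing(image_path, backing_ref):
    """Resolve a backing file reference the way qemu does: a relative
    reference is taken relative to the directory of the *referring* image."""
    if os.path.isabs(backing_ref):
        return backing_ref
    return os.path.join(os.path.dirname(image_path), backing_ref)
```

With every image carrying a backing reference of the form "./snapshots/<file>", a snapshot file that itself lives inside snapshots/ resolves its own backing to .../snapshots/./snapshots/... — exactly the doubled path in the error output above.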
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
[not found] ` <27854af70a4fe3a7765d2760098e2f82f3475f17.camel@groupe-cyllene.com>
@ 2025-07-08 10:59 ` Fabian Grünbichler
2025-07-08 11:35 ` DERUMIER, Alexandre via pve-devel
` (3 more replies)
0 siblings, 4 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-08 10:59 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel; +Cc: Thomas Lamprecht
> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> wrote on 08.07.2025 12:04 CEST:
>
>
> > +my sub alloc_backed_image {
> > + my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
> > +
> > + my $path = $class->path($scfg, $volname, $storeid);
> > + my $backing_path = $class->path($scfg, $volname, $storeid,
> > $backing_snap);
>
> >>should we use a relative path here like we do when doing a linked
> >>clone? else
> >>it basically means that it is no longer possible to move the storage
> >>mountpoint,
> >>unless I am mistaken?
>
> ah, it doesn't play nicely with the snapshots directory, because the
> current image is renamed && moved to the snapshot directory, which
> breaks the relative path.
okay, that means we instead need to become more strict with 'snapext'
storages and restrict the volnames there.. maybe to (vm-|base-)-XXX-*.fmt?
that means only allowing such names when allocating volumes, and filtering
when listing images..
since we want to make that property fixed anyway, we don't have to worry
about existing images..
does that sound okay?
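A hypothetical check for the stricter volname scheme suggested here; the exact pattern (and format list) is still open, so treat this as a sketch, not the final rule:

```python
import re

# assumed pattern: (vm|base)-<vmid>-<anything without slash/space>.<fmt>
VOLNAME_RE = re.compile(r'^(vm|base)-(\d+)-[^/\s]+\.(raw|qcow2|vmdk)$')

def is_strict_volname(name):
    """True if `name` matches the restricted scheme; snapshot files and
    arbitrarily named images would be rejected (and filtered when listing)."""
    return VOLNAME_RE.match(name) is not None
```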
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-08 10:59 ` Fabian Grünbichler
@ 2025-07-08 11:35 ` DERUMIER, Alexandre via pve-devel
[not found] ` <0b2ba0c34d2c8c15d7cb642442b300a3180e1592.camel@groupe-cyllene.com>
` (2 subsequent siblings)
3 siblings, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-08 11:35 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, t.lamprecht
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Tue, 8 Jul 2025 11:35:09 +0000
Message-ID: <0b2ba0c34d2c8c15d7cb642442b300a3180e1592.camel@groupe-cyllene.com>
>>okay, that means we instead need to become more strict with 'snapext'
>>storages and restrict the volnames there.. maybe to (vm-|base-)-XXX-
>>*.fmt?
>>that means only allowing such names when allocating volumes, and
>>filtering
>>when listing images..
>>
>>since we want to make that property fixed anyway, we don't have to
>>worry
>>about existing images..
>>
>>does that sound okay?
yes, I was thinking the same, as it's a new fixed feature.
I'll implement it
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-08 8:56 ` Fabian Grünbichler
@ 2025-07-08 11:37 ` DERUMIER, Alexandre via pve-devel
0 siblings, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-08 11:37 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Tue, 8 Jul 2025 11:37:53 +0000
Message-ID: <e96d21af22a9fc426c5e390a6f459ca3f8e9a7c5.camel@groupe-cyllene.com>
>
> I think it'll break parsing of already-configured storages without the
> snapext option?
>>I don't think it does?
ah, I had tried with
{ fixed => 1 } only,
but it's ok with
{ optional => 1, fixed => 1 }:
pvesm set teststorage --snapext 1
update storage failed: can't change value of fixed parameter 'snapext'
> (It's breaking current tests in pve-storage)
>>how?
run_bwlimit_tests.pl
file /etc/pve/storage.cfg line 2 (skip section 'nolimit'): missing value for required option 'snapext'
file /etc/pve/storage.cfg line 6 (skip section 'd50'): missing value for required option 'snapext'
file /etc/pve/storage.cfg line 10 (skip section 'd50m40r30'): missing value for required option 'snapext'
file /etc/pve/storage.cfg line 14 (skip section 'd20m40r30'): missing value for required option 'snapext'
file /etc/pve/storage.cfg line 18 (skip section 'd200m400r300'): missing value for required option 'snapext'
file /etc/pve/storage.cfg line 22 (skip section 'd10'): missing value for required option 'snapext'
file /etc/pve/storage.cfg line 26 (skip section 'm50'): missing value for required option 'snapext'
file /etc/pve/storage.cfg line 30 (skip section 'd200'): missing value for required option 'snapext'
Use of uninitialized value $type in hash element at ../PVE/Storage/Plugin.pm line 606, <DATA> line 960.
Use of uninitialized value $type in string eq at ../PVE/Storage/Plugin.pm line 611, <DATA> line 960.
Use of uninitialized value $type in string eq at ../PVE/Storage/Plugin.pm line 611, <DATA> line 960.
Use of uninitialized value $type in string eq at ../PVE/Storage/Plugin.pm line 611, <DATA> line 960.
Use of uninitialized value $type in string eq at ../PVE/Storage/Plugin.pm line 611, <DATA> line 960.
Use of uninitialized value $type in string eq at ../PVE/Storage/Plugin.pm line 611, <DATA> line 960.
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
[not found] ` <0b2ba0c34d2c8c15d7cb642442b300a3180e1592.camel@groupe-cyllene.com>
@ 2025-07-08 12:50 ` Thomas Lamprecht
2025-07-08 13:19 ` DERUMIER, Alexandre via pve-devel
0 siblings, 1 reply; 46+ messages in thread
From: Thomas Lamprecht @ 2025-07-08 12:50 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel, f.gruenbichler
Am 08.07.25 um 13:35 schrieb DERUMIER, Alexandre:
>>> okay, that means we instead need to become more strict with 'snapext'
>>> storages and restrict the volnames there.. maybe to (vm-|base-)-XXX-
>>> *.fmt?
>
>>> that means only allowing such names when allocating volumes, and
>>> filtering
>>> when listing images..
>>>
>>> since we want to make that property fixed anyway, we don't have to
>>> worry
>>> about existing images..
>>>
>>> does that sound okay?
>
> yes, I was thinking about the same, as it's a new fixed feature.
>
> I'll implement it
Can we please use an actually telling name though? As "ext" is quite
often used as term for "extension", and we really win nothing with
doing this.
Strongly preferring words to be spelled out in full and separated with
hyphens, instead of something else or being glued together like here.
Like "snapshot-external" or "external-snapshots". This is just an
option in the storage config, not for each volume, so we really do
not need to optimize option name length here.
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-08 12:50 ` Thomas Lamprecht
@ 2025-07-08 13:19 ` DERUMIER, Alexandre via pve-devel
0 siblings, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-08 13:19 UTC (permalink / raw)
To: pve-devel, t.lamprecht, f.gruenbichler; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Tue, 8 Jul 2025 13:19:27 +0000
Message-ID: <e7c2329df09498ae835f09b44848d601ab8cbe6c.camel@groupe-cyllene.com>
>>Can we please use an actually telling name though? As "ext" is quite
>>often used as term for "extension", and we really win nothing with
>>doing this.
>>
>>Strongly preferring words to be spelled out in full and separated
>>with
>>hyphens, instead of something else or being glued together like here.
>>
>>Like "snapshot-external" or "external-snapshots". This is just an
>>option in the storage config, not for each volume, so we really do
>>not need to optimize option name length here.
yes sure, no problem.
I'll use "external-snapshots" if it's ok for everyone.
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-08 10:59 ` Fabian Grünbichler
2025-07-08 11:35 ` DERUMIER, Alexandre via pve-devel
[not found] ` <0b2ba0c34d2c8c15d7cb642442b300a3180e1592.camel@groupe-cyllene.com>
@ 2025-07-08 13:42 ` DERUMIER, Alexandre via pve-devel
[not found] ` <67627e7904281520e1f7152657ed00c7ba3c138b.camel@groupe-cyllene.com>
3 siblings, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-08 13:42 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, t.lamprecht
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Tue, 8 Jul 2025 13:42:08 +0000
Message-ID: <67627e7904281520e1f7152657ed00c7ba3c138b.camel@groupe-cyllene.com>
>>okay, that means we instead need to become more strict with 'snapext'
>>storages and restrict the volnames there.. maybe to (vm-|base-)-XXX-
>>*.fmt?
$plugin->parse_volname($volname) doesn't have a $scfg param currently.
Do you want to extend it? (and do the change in every plugin)
or can I call
my ($storeid, $volname) = parse_volume_id($volid);
my $scfg = storage_config($cfg, $storeid);
in Plugin::parse_name_dir?
sub parse_name_dir {
    my $name = shift;

    if ($scfg->{'external-snapshots'} && $name =~ m/^((vm|base)-(\d+)-\S+\.(raw|qcow2|vmdk|subvol))$/) {
        return ($1, $4, $2 eq 'base'); # (name, format, isBase)
    } elsif ($name =~ m!^((base-)?[^/\s]+\.(raw|qcow2|vmdk|subvol))$!) {
        return ($1, $3, $2); # (name, format, isBase)
    }

    die "unable to parse volume filename '$name'\n";
}
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
[not found] ` <67627e7904281520e1f7152657ed00c7ba3c138b.camel@groupe-cyllene.com>
@ 2025-07-08 14:18 ` Fabian Grünbichler
0 siblings, 0 replies; 46+ messages in thread
From: Fabian Grünbichler @ 2025-07-08 14:18 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel; +Cc: t.lamprecht
> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> wrote on 08.07.2025 15:42 CEST:
>
>
> >>okay, that means we instead need to become more strict with 'snapext'
> >>storages and restrict the volnames there.. maybe to (vm-|base-)-XXX-
> >>*.fmt?
>
> $plugin->parse_volname($volname) don't have $scfg param currently,
>
> Do you want to extend it ? (and do change in every plugin)
ah, I forgot about that.. I guess what we could do instead would be
to restrict parsing to (vm|base)-XXX- in general there, which would
break people who have manually created such volumes and manually
assigned them to VMs. we can detect and warn about that in the
upgrade script, and let them move the disk (or manually rename)
before the upgrade..
we'd run into a similar issue with another planned change anyway,
I'll double check that there are no complications arising from that,
but you can assume for now that it's fine to restrict the volname
like that going forward.
> or can I call
>
> my ($storeid, $volname) = parse_volume_id($volid);
> my $scfg = storage_config($cfg, $storeid);
>
> in the Plugin::parse_namedir ?
no, that is bad ;)
>
>
> sub parse_name_dir {
>     my $name = shift;
>
>     if ($scfg->{'external-snapshots'} && $name =~ m/^((vm|base)-(\d+)-\S+\.(raw|qcow2|vmdk|subvol))$/) {
>         return ($1, $4, $2 eq 'base'); # (name, format, isBase)
>     } elsif ($name =~ m!^((base-)?[^/\s]+\.(raw|qcow2|vmdk|subvol))$!) {
>         return ($1, $3, $2); # (name, format, isBase)
>     }
>
>     die "unable to parse volume filename '$name'\n";
> }
* Re: [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot
2025-07-04 11:51 ` Fabian Grünbichler
@ 2025-07-09 7:24 ` DERUMIER, Alexandre via pve-devel
2025-07-09 8:06 ` DERUMIER, Alexandre via pve-devel
1 sibling, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-09 7:24 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot
Date: Wed, 9 Jul 2025 07:24:43 +0000
Message-ID: <59da27ef546356d1d14c8ab1f8a2437e268a8ff8.camel@groupe-cyllene.com>
> + my $backing_path = $class->path($scfg, $name, $storeid,
> $backing_snap) if $backing_snap;
>>also, this should probably encode a relative path so that renaming
>>the VG and
>>adapting the storage.cfg entry works without breaking the back
>>reference?
About relative paths: I have done a test, and if you only specify the
filename/volname, it's already relative to the parent image's directory
qemu-img info /home/store/images/10000/vm-10000-disk-0.qcow2
image: /home/store/images/10000/vm-10000-disk-0.qcow2
backing file: snap-snap4-vm-10000-disk-0.qcow2 (actual path: /home/store/images/10000/snap-snap4-vm-10000-disk-0.qcow2)
changing the mountpoint:
qemu-img info /home/store2/images/10000/vm-10000-disk-0.qcow2
image: /home/store2/images/10000/vm-10000-disk-0.qcow2
backing file: snap-snap4-vm-10000-disk-0.qcow2 (actual path: /home/store2/images/10000/snap-snap4-vm-10000-disk-0.qcow2)
when specifying a relative path ./:
qemu-img info /home/store/images/10000/vm-10000-disk-0.qcow2
image: /home/store/images/10000/vm-10000-disk-0.qcow2
backing file: ./snap-snap4-vm-10000-disk-0.qcow2 (actual path: /home/store/images/10000/./snap-snap4-vm-10000-disk-0.qcow2)
(same for lvm)
* Re: [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot
2025-07-04 11:51 ` Fabian Grünbichler
2025-07-09 7:24 ` DERUMIER, Alexandre via pve-devel
@ 2025-07-09 8:06 ` DERUMIER, Alexandre via pve-devel
1 sibling, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-09 8:06 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot
Date: Wed, 9 Jul 2025 08:06:20 +0000
Message-ID: <86f06e1f47eb71060a184feb5e684dfbf5741852.camel@groupe-cyllene.com>
> +
> + # we can simply reformat the current lvm volume to avoid
> + # a long safe remove.(not needed here, as the allocated space
> + # is still the same owner)
> + eval { lvm_qcow2_format($class, $storeid, $scfg, $volname,
> $format, $snap) };
>>what if the volume got resized along the way?
Ah indeed, I hadn't thought about that. We'd have extra space in the
LVM volume in this case.
I'll fix it with a clean removal.
* Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
2025-07-04 11:52 ` Fabian Grünbichler
` (5 preceding siblings ...)
[not found] ` <27854af70a4fe3a7765d2760098e2f82f3475f17.camel@groupe-cyllene.com>
@ 2025-07-09 12:52 ` DERUMIER, Alexandre via pve-devel
6 siblings, 0 replies; 46+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-09 12:52 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support
Date: Wed, 9 Jul 2025 12:52:44 +0000
Message-ID: <5640e9be14bbc97cc21edfa77fcc27c7305834d5.camel@groupe-cyllene.com>
> +sub get_snap_name {
>>should this be public?
I'll make it private
> +sub get_snap_volname {
>>should this be public?
> +
> +sub parse_snapname {
>>should this be public?
These two methods are used in volume_snapshot_info(), which is defined in
the base Plugin and used by the LVM plugin too (but the parsing is
different in the LVM plugin, so parse_snapname and get_snap_volname need
to stay public).
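For illustration, a rough sketch of what such a helper pair does (Python pseudocode; the real helpers are Perl in Plugin.pm, and the exact "snap-<name>-<volname>" naming scheme here is an assumption for illustration only):

```python
import re

def get_snap_volname(volname: str, snap: str) -> str:
    """Build the volume name of an external snapshot file
    (hypothetical naming scheme)."""
    return f"snap-{snap}-{volname}"

def parse_snapname(snap_volname: str):
    """Inverse of get_snap_volname: return (snapname, base volname),
    or None if the name is not a snapshot volume."""
    # Non-greedy so the snapshot name may itself contain hyphens,
    # anchored on the 'vm-' prefix of the base volume name.
    m = re.match(r"^snap-(.+?)-(vm-.+)$", snap_volname)
    return (m.group(1), m.group(2)) if m else None
```

Keeping both directions public lets volume_snapshot_info() in the base plugin enumerate a snapshot chain, while a plugin with a different on-disk layout (like LVM) can override just the parsing.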
Thread overview: 46+ messages
[not found] <20250704064507.511884-1-alexandre.derumier@groupe-cyllene.com>
2025-07-04 6:44 ` [pve-devel] [PATCH qemu-server 1/3] qemu_img convert : add external snapshot support Alexandre Derumier via pve-devel
2025-07-04 6:44 ` [pve-devel] [PATCH pve-storage 01/10] tests: add lvmplugin test Alexandre Derumier via pve-devel
2025-07-04 6:44 ` [pve-devel] [PATCH qemu-server 2/3] blockdev: add backing_chain support Alexandre Derumier via pve-devel
2025-07-04 6:44 ` [pve-devel] [PATCH pve-storage 02/10] common: add qemu_img_create an preallocation_cmd_option Alexandre Derumier via pve-devel
2025-07-04 11:53 ` Fabian Grünbichler
2025-07-04 12:33 ` DERUMIER, Alexandre via pve-devel
[not found] ` <51f988f11e60f9dfaa49658c1ed9ecf72fcfcde4.camel@groupe-cyllene.com>
2025-07-07 7:55 ` Fabian Grünbichler
2025-07-04 6:44 ` [pve-devel] [PATCH pve-storage 03/10] common: qemu_img_create: add backing_file support Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 12:31 ` DERUMIER, Alexandre via pve-devel
2025-07-07 7:16 ` DERUMIER, Alexandre via pve-devel
2025-07-04 6:45 ` [pve-devel] [PATCH qemu-server 3/3] qcow2: add external snapshot support Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 12:46 ` DERUMIER, Alexandre via pve-devel
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 04/10] rename_volume: add source && target snap Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 12:04 ` Thomas Lamprecht
2025-07-07 10:34 ` DERUMIER, Alexandre via pve-devel
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 05/10] common: add qemu_img_info helper Alexandre Derumier via pve-devel
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 06/10] common: add qemu-img measure Alexandre Derumier via pve-devel
2025-07-04 11:51 ` Fabian Grünbichler
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 07/10] storage: volume_snapshot: add $running param Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 08/10] qcow2: add external snapshot support Alexandre Derumier via pve-devel
2025-07-04 11:52 ` Fabian Grünbichler
2025-07-04 13:22 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c38598bae6477dfa6af0db96da054b156698d41c.camel@groupe-cyllene.com>
2025-07-07 8:17 ` Fabian Grünbichler
2025-07-07 10:18 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c671fe82a7cdab90a3691115a7132d0a35ae79b7.camel@groupe-cyllene.com>
2025-07-07 10:53 ` Fabian Grünbichler
2025-07-08 8:44 ` DERUMIER, Alexandre via pve-devel
[not found] ` <3d1d8516e3c68de370608033647a38e99ef50f23.camel@groupe-cyllene.com>
2025-07-08 8:56 ` Fabian Grünbichler
2025-07-08 11:37 ` DERUMIER, Alexandre via pve-devel
2025-07-08 10:04 ` DERUMIER, Alexandre via pve-devel
[not found] ` <27854af70a4fe3a7765d2760098e2f82f3475f17.camel@groupe-cyllene.com>
2025-07-08 10:59 ` Fabian Grünbichler
2025-07-08 11:35 ` DERUMIER, Alexandre via pve-devel
[not found] ` <0b2ba0c34d2c8c15d7cb642442b300a3180e1592.camel@groupe-cyllene.com>
2025-07-08 12:50 ` Thomas Lamprecht
2025-07-08 13:19 ` DERUMIER, Alexandre via pve-devel
2025-07-08 13:42 ` DERUMIER, Alexandre via pve-devel
[not found] ` <67627e7904281520e1f7152657ed00c7ba3c138b.camel@groupe-cyllene.com>
2025-07-08 14:18 ` Fabian Grünbichler
2025-07-09 12:52 ` DERUMIER, Alexandre via pve-devel
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 09/10] lvmplugin: add qcow2 snapshot Alexandre Derumier via pve-devel
2025-07-04 11:51 ` Fabian Grünbichler
2025-07-09 7:24 ` DERUMIER, Alexandre via pve-devel
2025-07-09 8:06 ` DERUMIER, Alexandre via pve-devel
2025-07-04 6:45 ` [pve-devel] [PATCH pve-storage 10/10] storage : add volume_support_qemu_snapshot Alexandre Derumier via pve-devel
2025-07-04 11:51 ` Fabian Grünbichler