* [pve-devel] [PATCH pve-storage 01/13] plugin: add qemu_img_create
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 01/13] plugin: add qemu_img_create
Date: Wed, 9 Jul 2025 18:21:46 +0200
Message-ID: <20250709162202.2952597-2-alexandre.derumier@groupe-cyllene.com>
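This factors the qemu-img invocation out of alloc_image into a reusable
helper. A minimal usage sketch (the path and preallocation setting are
assumptions; the -o option is only added when the storage configures
preallocation):
  qemu_img_create($scfg, 'qcow2', 1048576, '/var/lib/vz/images/100/vm-100-disk-0.qcow2');
which would run something like:
  /usr/bin/qemu-img create -o preallocation=metadata -f qcow2 /var/lib/vz/images/100/vm-100-disk-0.qcow2 1048576K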
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Plugin.pm | 32 ++++++++++++++++++++++++--------
1 file changed, 24 insertions(+), 8 deletions(-)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index c2f376b..65a34b1 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -631,6 +631,29 @@ sub preallocation_cmd_option {
return;
}
+=pod
+
+=head3 qemu_img_create
+
+ qemu_img_create($scfg, $fmt, $size, $path)
+
+Create a new qemu image with a specific format C<$fmt> and size C<$size> (in KiB) at the target C<$path>.
+
+=cut
+
+sub qemu_img_create {
+ my ($scfg, $fmt, $size, $path) = @_;
+
+ my $cmd = ['/usr/bin/qemu-img', 'create'];
+
+ my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
+ push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
+
+ push @$cmd, '-f', $fmt, $path, "${size}K";
+
+ run_command($cmd, errmsg => "unable to create image");
+}
+
# Storage implementation
# called during addition of storage (before the new storage config got written)
@@ -969,14 +992,7 @@ sub alloc_image {
umask $old_umask;
die $err if $err;
} else {
- my $cmd = ['/usr/bin/qemu-img', 'create'];
-
- my $prealloc_opt = preallocation_cmd_option($scfg, $fmt);
- push @$cmd, '-o', $prealloc_opt if defined($prealloc_opt);
-
- push @$cmd, '-f', $fmt, $path, "${size}K";
-
- eval { run_command($cmd, errmsg => "unable to create image"); };
+ eval { qemu_img_create($scfg, $fmt, $size, $path) };
if ($@) {
unlink $path;
rmdir $imagedir;
--
2.39.5
* [pve-devel] [PATCH qemu-server 1/4] qemu_img convert : add external snapshot support
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH qemu-server 1/4] qemu_img convert : add external snapshot support
Date: Wed, 9 Jul 2025 18:21:47 +0200
Message-ID: <20250709162202.2952597-3-alexandre.derumier@groupe-cyllene.com>
For external snapshots, we simply use the snapshot volname as the source
and don't pass the internal snapshot option on the command line.
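For illustration (paths taken from the tests below): with an internal
snapshot, the source stays the main image and '-l snapshot.name=foo' is
passed, while with an external snapshot the snapshot file itself becomes
the source:
  qemu-img convert -p -n -f qcow2 -O raw /var/lib/vzsnapext/images/8006/snap-foo-vm-8006-disk-0.qcow2 /var/lib/vz/images/8006/vm-8006-disk-0.raw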
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/QemuServer/QemuImage.pm | 6 ++-
src/test/run_qemu_img_convert_tests.pl | 59 ++++++++++++++++++++++++++
2 files changed, 64 insertions(+), 1 deletion(-)
diff --git a/src/PVE/QemuServer/QemuImage.pm b/src/PVE/QemuServer/QemuImage.pm
index 38f7d52b..2502a32d 100644
--- a/src/PVE/QemuServer/QemuImage.pm
+++ b/src/PVE/QemuServer/QemuImage.pm
@@ -71,11 +71,15 @@ sub convert {
my $dst_format = checked_volume_format($storecfg, $dst_volid);
my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
my $dst_is_iscsi = ($dst_path =~ m|^iscsi://|);
+ my $support_qemu_snapshots = PVE::Storage::volume_support_qemu_snapshot($storecfg, $src_volid);
my $cmd = [];
push @$cmd, '/usr/bin/qemu-img', 'convert', '-p', '-n';
push @$cmd, '-l', "snapshot.name=$snapname"
- if $snapname && $src_format && $src_format eq "qcow2";
+ if $snapname
+ && $src_format eq 'qcow2'
+ && $support_qemu_snapshots
+ && $support_qemu_snapshots eq 'internal';
push @$cmd, '-t', 'none' if $dst_scfg->{type} eq 'zfspool';
push @$cmd, '-T', $cachemode if defined($cachemode);
push @$cmd, '-r', "${bwlimit}K" if defined($bwlimit);
diff --git a/src/test/run_qemu_img_convert_tests.pl b/src/test/run_qemu_img_convert_tests.pl
index b5a457c3..d6118b33 100755
--- a/src/test/run_qemu_img_convert_tests.pl
+++ b/src/test/run_qemu_img_convert_tests.pl
@@ -21,6 +21,15 @@ my $storage_config = {
type => "dir",
shared => 0,
},
+ localsnapext => {
+ content => {
+ images => 1,
+ },
+ path => "/var/lib/vzsnapext",
+ type => "dir",
+ 'external-snapshots' => 1,
+ shared => 0,
+ },
btrfs => {
content => {
images => 1,
@@ -61,6 +70,13 @@ my $storage_config = {
images => 1,
},
},
+ "lvm-store" => {
+ vgname => "pve",
+ type => "lvm",
+ content => {
+ images => 1,
+ },
+ },
"zfs-over-iscsi" => {
type => "zfs",
iscsiprovider => "LIO",
@@ -469,6 +485,49 @@ my $tests = [
"/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
],
},
+ {
+ name => "qcow2_external_snapshot",
+ parameters => [
+ "localsnapext:$vmid/vm-$vmid-disk-0.qcow2",
+ "local:$vmid/vm-$vmid-disk-0.raw",
+ 1024 * 10,
+ { snapname => 'foo' },
+ ],
+ expected => [
+ "/usr/bin/qemu-img",
+ "convert",
+ "-p",
+ "-n",
+ "-f",
+ "qcow2",
+ "-O",
+ "raw",
+ "/var/lib/vzsnapext/images/$vmid/snap-foo-vm-$vmid-disk-0.qcow2",
+ "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
+ ],
+ },
+ {
+ name => "lvmqcow2_external_snapshot",
+ parameters => [
+ "lvm-store:vm-$vmid-disk-0.qcow2",
+ "local:$vmid/vm-$vmid-disk-0.raw",
+ 1024 * 10,
+ { snapname => 'foo' },
+ ],
+ expected => [
+ "/usr/bin/qemu-img",
+ "convert",
+ "-p",
+ "-n",
+ "-f",
+ "qcow2",
+ "-O",
+ "raw",
+ "/dev/pve/snap_vm-8006-disk-0_foo.qcow2",
+ "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
+ ],
+ },
+
];
my $command;
--
2.39.5
* [pve-devel] [PATCH qemu-server 2/4] blockdev: add backing_chain support
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH qemu-server 2/4] blockdev: add backing_chain support
Date: Wed, 9 Jul 2025 18:21:48 +0200
Message-ID: <20250709162202.2952597-4-alexandre.derumier@groupe-cyllene.com>
We need to define node-names for all backing chain images,
to be able to live-rename them with blockdev-reopen.
For linked clones, we don't need to define the base image(s) chain.
They are auto-added with a #block node-name.
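As a rough sketch (node-names and most fields omitted; see the cfg2cmd
test below for the full command line), the drive blockdev for a volume
with one snapshot nests the parent image as 'backing' of the current
format node:
  {
      driver => 'throttle',
      'node-name' => 'drive-scsi0',
      file => {
          driver => 'qcow2',
          file => { driver => 'file', filename => '.../vm-8006-disk-0.qcow2' },
          backing => {
              driver => 'qcow2',
              file => { driver => 'file', filename => '.../snap1-vm-8006-disk-0.qcow2' },
          },
      },
  }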
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/QemuServer/Blockdev.pm | 49 +++++++++++++++++++
src/test/cfg2cmd/simple-backingchain.conf | 25 ++++++++++
src/test/cfg2cmd/simple-backingchain.conf.cmd | 33 +++++++++++++
src/test/run_config2command_tests.pl | 47 ++++++++++++++++++
4 files changed, 154 insertions(+)
create mode 100644 src/test/cfg2cmd/simple-backingchain.conf
create mode 100644 src/test/cfg2cmd/simple-backingchain.conf.cmd
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 5f1fdae3..2a0513fb 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -360,6 +360,46 @@ my sub generate_format_blockdev {
return $blockdev;
}
+my sub generate_backing_blockdev;
+
+sub generate_backing_blockdev {
+ my ($storecfg, $snapshots, $deviceid, $drive, $machine_version, $options) = @_;
+
+ my $snap_id = $options->{'snapshot-name'};
+ my $snapshot = $snapshots->{$snap_id};
+ my $parentid = $snapshot->{parent};
+
+ my $volid = $drive->{file};
+
+ my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive, $machine_version, $options);
+ $snap_file_blockdev->{filename} = $snapshot->{file};
+
+ my $snap_fmt_blockdev =
+ generate_format_blockdev($storecfg, $drive, $snap_file_blockdev, $options);
+
+ if ($parentid) {
+ my $options = { 'snapshot-name' => $parentid };
+ $snap_fmt_blockdev->{backing} = generate_backing_blockdev(
+ $storecfg, $snapshots, $deviceid, $drive, $machine_version, $options,
+ );
+ }
+ return $snap_fmt_blockdev;
+}
+
+my sub generate_backing_chain_blockdev {
+ my ($storecfg, $deviceid, $drive, $machine_version) = @_;
+
+ my $volid = $drive->{file};
+
+ my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
+ my $parentid = $snapshots->{'current'}->{parent};
+ return undef if !$parentid;
+ my $options = { 'snapshot-name' => $parentid };
+ return generate_backing_blockdev(
+ $storecfg, $snapshots, $deviceid, $drive, $machine_version, $options,
+ );
+}
+
sub generate_drive_blockdev {
my ($storecfg, $drive, $machine_version, $options) = @_;
@@ -371,6 +411,15 @@ sub generate_drive_blockdev {
my $child = generate_file_blockdev($storecfg, $drive, $machine_version, $options);
if (!is_nbd($drive)) {
$child = generate_format_blockdev($storecfg, $drive, $child, $options);
+
+ my $support_qemu_snapshots =
+ PVE::Storage::volume_support_qemu_snapshot($storecfg, $drive->{file});
+ if ($support_qemu_snapshots && $support_qemu_snapshots eq 'external') {
+ my $backing_chain = generate_backing_chain_blockdev(
+ $storecfg, "drive-$drive_id", $drive, $machine_version,
+ );
+ $child->{backing} = $backing_chain if $backing_chain;
+ }
}
if ($options->{'zero-initialized'}) {
diff --git a/src/test/cfg2cmd/simple-backingchain.conf b/src/test/cfg2cmd/simple-backingchain.conf
new file mode 100644
index 00000000..2c0b0f2c
--- /dev/null
+++ b/src/test/cfg2cmd/simple-backingchain.conf
@@ -0,0 +1,25 @@
+# TEST: Simple test for external snapshot backing chain
+name: simple
+parent: snap3
+scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
+scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
+
+[snap1]
+name: simple
+scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
+scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
+snaptime: 1748933042
+
+[snap2]
+parent: snap1
+name: simple
+scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
+scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
+snaptime: 1748933043
+
+[snap3]
+parent: snap2
+name: simple
+scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
+scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
+snaptime: 1748933044
diff --git a/src/test/cfg2cmd/simple-backingchain.conf.cmd b/src/test/cfg2cmd/simple-backingchain.conf.cmd
new file mode 100644
index 00000000..40c957f5
--- /dev/null
+++ b/src/test/cfg2cmd/simple-backingchain.conf.cmd
@@ -0,0 +1,33 @@
+/usr/bin/kvm \
+ -id 8006 \
+ -name 'simple,debug-threads=on' \
+ -no-shutdown \
+ -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server=on,wait=off' \
+ -mon 'chardev=qmp,mode=control' \
+ -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect-ms=5000' \
+ -mon 'chardev=qmp-event,mode=control' \
+ -pidfile /var/run/qemu-server/8006.pid \
+ -daemonize \
+ -smp '1,sockets=1,cores=1,maxcpus=1' \
+ -nodefaults \
+ -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
+ -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
+ -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
+ -m 512 \
+ -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
+ -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
+ -global 'PIIX4_PM.disable_s3=1' \
+ -global 'PIIX4_PM.disable_s4=1' \
+ -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
+ -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
+ -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
+ -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
+ -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
+ -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
+ -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
+ -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
+ -blockdev '{"driver":"throttle","file":{"backing":{"backing":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vzsnapext/images/8006/snap1-vm-8006-disk-0.qcow2","node-name":"ea91a385a49a008a4735c0aec5c6749","read-only":false},"node-name":"fa91a385a49a008a4735c0aec5c6749","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vzsnapext/images/8006/snap2-vm-8006-disk-0.qcow2","node-name":"ec0289317073959d450248d8cd7a480","read-only":false},"node-name":"fc0289317073959d450248d8cd7a480","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vzsnapext/images/8006/vm-8006-disk-0.qcow2","node-name":"e74f4959037afb46eddc7313c43dfdd","read-only":false},"node-name":"f74f4959037afb46eddc7313c43dfdd","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
+ -device 'scsi-hd,bus=scsihw0.0,scsi-id=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
+ -blockdev '{"driver":"throttle","file":{"backing":{"backing":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/veegee/snap1-vm-8006-disk-0.qcow2","node-name":"e25f58d3e6e11f2065ad41253988915","read-only":false},"node-name":"f25f58d3e6e11f2065ad41253988915","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/veegee/snap2-vm-8006-disk-0.qcow2","node-name":"e9415bb5e484c1e25d25063b01686fe","read-only":false},"node-name":"f9415bb5e484c1e25d25063b01686fe","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/veegee/vm-8006-disk-0.qcow2","node-name":"e87358a470ca311f94d5cc61d1eb428","read-only":false},"node-name":"f87358a470ca311f94d5cc61d1eb428","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
+ -device 'scsi-hd,bus=scsihw0.0,scsi-id=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
+ -machine 'type=pc+pve0'
diff --git a/src/test/run_config2command_tests.pl b/src/test/run_config2command_tests.pl
index 1262a0df..61302f6b 100755
--- a/src/test/run_config2command_tests.pl
+++ b/src/test/run_config2command_tests.pl
@@ -21,6 +21,7 @@ use PVE::QemuServer::Helpers;
use PVE::QemuServer::Monitor;
use PVE::QemuServer::QMPHelpers;
use PVE::QemuServer::CPUConfig;
+use PVE::Storage;
my $base_env = {
storage_config => {
@@ -34,6 +35,15 @@ my $base_env = {
type => 'dir',
shared => 0,
},
+ localsnapext => {
+ content => {
+ images => 1,
+ },
+ path => '/var/lib/vzsnapext',
+ type => 'dir',
+ shared => 0,
+ snapext => 1,
+ },
noimages => {
content => {
iso => 1,
@@ -264,6 +274,43 @@ $storage_module->mock(
deactivate_volumes => sub {
return;
},
+ volume_snapshot_info => sub {
+ my ($cfg, $volid) = @_;
+
+ my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
+
+ my $snapshots = {};
+ if ($storeid eq 'localsnapext') {
+ $snapshots = {
+ current => {
+ file => 'var/lib/vzsnapext/images/8006/vm-8006-disk-0.qcow2',
+ parent => 'snap2',
+ },
+ snap2 => {
+ file => '/var/lib/vzsnapext/images/8006/snap2-vm-8006-disk-0.qcow2',
+ parent => 'snap1',
+ },
+ snap1 => {
+ file => '/var/lib/vzsnapext/images/8006/snap1-vm-8006-disk-0.qcow2',
+ },
+ };
+ } elsif ($storeid eq 'lvm-store') {
+ $snapshots = {
+ current => {
+ file => '/dev/veegee/vm-8006-disk-0.qcow2',
+ parent => 'snap2',
+ },
+ snap2 => {
+ file => '/dev/veegee/snap2-vm-8006-disk-0.qcow2',
+ parent => 'snap1',
+ },
+ snap1 => {
+ file => '/dev/veegee/snap1-vm-8006-disk-0.qcow2',
+ },
+ };
+ }
+ return $snapshots;
+ },
);
my $file_stat_module = Test::MockModule->new("File::stat");
--
2.39.5
* [pve-devel] [PATCH pve-storage 02/13] plugin: add qemu_img_create_qcow2_backed
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 02/13] plugin: add qemu_img_create_qcow2_backed
Date: Wed, 9 Jul 2025 18:21:49 +0200
Message-ID: <20250709162202.2952597-5-alexandre.derumier@groupe-cyllene.com>
and use it for the plugin linked clone.
This also enables extended_l2=on, as it's mandatory for preallocation
with a backing file.
Preallocation was missing previously, so linked clone performance
should increase now (around 5x in randwrite 4k).
cluster_size is set to 128k, as it reduces qcow2 overhead (less disk
space, but also less memory needed to cache metadata).
l2_extended is not yet enabled on the base image, but it could also
help to reduce overhead without impacting performance.
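For illustration, a linked clone created through the new helper now runs
something like (backing path assumed, preallocation per storage config):
  /usr/bin/qemu-img create -F qcow2 -b ../9000/base-9000-disk-0.qcow2 -f qcow2 vm-100-disk-0.qcow2 -o extended_l2=on,cluster_size=128k,preallocation=metadata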
bench on 100G qcow2 file:
fio --filename=/dev/sdb --direct=1 --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --name=test
fio --filename=/dev/sdb --direct=1 --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --name=test
base image:
randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 20215
randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 22219
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20217
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 21742
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21599
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 22037
clone image with backing file:
randwrite 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 3912
randread 4k: prealloc=metadata, l2_extended=off, cluster_size=64k: 21476
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 20563
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=64k: 22265
randwrite 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 18016
randread 4k: prealloc=metadata, l2_extended=on, cluster_size=128k: 21611
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Plugin.pm | 52 ++++++++++++++++++++++++++++-----------
1 file changed, 38 insertions(+), 14 deletions(-)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 65a34b1..c4b4cd3 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -51,6 +51,10 @@ our $RAW_PREALLOCATION = {
full => 1,
};
+my $QCOW2_CLUSTERS = {
+ backed => ['extended_l2=on', 'cluster_size=128k'],
+};
+
our $MAX_VOLUMES_PER_GUEST = 1024;
cfs_register_file(
@@ -654,6 +658,39 @@ sub qemu_img_create {
run_command($cmd, errmsg => "unable to create image");
}
+=pod
+
+=head3 qemu_img_create_qcow2_backed
+
+ qemu_img_create_qcow2_backed($scfg, $path, $backing_path, $backing_format)
+
+Create a new qcow2 image C<$path> using an existing backing image C<$backing_path> with backing format C<$backing_format>.
+
+=cut
+
+sub qemu_img_create_qcow2_backed {
+ my ($scfg, $path, $backing_path, $backing_format) = @_;
+
+ my $cmd = [
+ '/usr/bin/qemu-img',
+ 'create',
+ '-F',
+ $backing_format,
+ '-b',
+ $backing_path,
+ '-f',
+ 'qcow2',
+ $path,
+ ];
+
+ my $options = $QCOW2_CLUSTERS->{backed};
+
+ push @$options, preallocation_cmd_option($scfg, 'qcow2');
+ push @$cmd, '-o', join(',', @$options) if @$options > 0;
+
+ run_command($cmd, errmsg => "unable to create image");
+}
+
# Storage implementation
# called during addition of storage (before the new storage config got written)
@@ -941,20 +978,7 @@ sub clone_image {
# Note: we use relative paths, so we need to call chdir before qemu-img
eval {
local $CWD = $imagedir;
-
- my $cmd = [
- '/usr/bin/qemu-img',
- 'create',
- '-b',
- "../$basevmid/$basename",
- '-F',
- $format,
- '-f',
- 'qcow2',
- $path,
- ];
-
- run_command($cmd);
+ qemu_img_create_qcow2_backed($scfg, $path, "../$basevmid/$basename", $format);
};
my $err = $@;
--
2.39.5
* [pve-devel] [PATCH pve-storage 03/13] plugin: add qemu_img_info
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 03/13] plugin: add qemu_img_info
Date: Wed, 9 Jul 2025 18:21:50 +0200
Message-ID: <20250709162202.2952597-6-alexandre.derumier@groupe-cyllene.com>
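This factors the qemu-img invocation out of file_size_info into a
reusable helper. A minimal usage sketch (path, format and timeout are
assumptions; decode_json comes from the JSON module):
  my $json = qemu_img_info('/var/lib/vz/images/100/vm-100-disk-0.qcow2', 'qcow2', 10);
  my $info = $json ? eval { decode_json($json) } : undef;
  # e.g. $info->{'virtual-size'} and $info->{format}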
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Common.pm | 33 ++++++++++++++++++++++++++++++++
src/PVE/Storage/Plugin.pm | 40 +++++++++++++++++++--------------------
2 files changed, 53 insertions(+), 20 deletions(-)
diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index 89a70f4..aa89e68 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -5,6 +5,7 @@ use warnings;
use PVE::JSONSchema;
use PVE::Syscall;
+use PVE::Tools qw(run_command);
use constant {
FALLOC_FL_KEEP_SIZE => 0x01, # see linux/falloc.h
@@ -110,4 +111,36 @@ sub deallocate : prototype($$$) {
}
}
+=pod
+
+=head3 run_qemu_img_json
+
+ run_qemu_img_json($cmd, $timeout)
+
+Execute the qemu-img command C<$cmd> with a timeout C<$timeout>.
+Capture the output and return it as a raw JSON string.
+
+=cut
+
+sub run_qemu_img_json {
+ my ($cmd, $timeout) = @_;
+ my $json = '';
+ my $err_output = '';
+ eval {
+ run_command(
+ $cmd,
+ timeout => $timeout,
+ outfunc => sub { $json .= shift },
+ errfunc => sub { $err_output .= shift . "\n" },
+ );
+ };
+ warn $@ if $@;
+ if ($err_output) {
+ # if qemu did not output anything to stdout we die with stderr as an error
+ die $err_output if !$json;
+ # otherwise we warn about it and try to parse the json
+ warn $err_output;
+ }
+ return $json;
+}
1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index c4b4cd3..9c79439 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -691,6 +691,25 @@ sub qemu_img_create_qcow2_backed {
run_command($cmd, errmsg => "unable to create image");
}
+=pod
+
+=head3 qemu_img_info
+
+ qemu_img_info($filename, $file_format, $timeout)
+
+Returns JSON-formatted information about the qemu image C<$filename> with format C<$file_format>.
+
+=cut
+
+sub qemu_img_info {
+ my ($filename, $file_format, $timeout) = @_;
+
+ my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
+ push $cmd->@*, '-f', $file_format if $file_format;
+
+ return PVE::Storage::Common::run_qemu_img_json($cmd, $timeout);
+}
+
# Storage implementation
# called during addition of storage (before the new storage config got written)
@@ -1127,26 +1146,7 @@ sub file_size_info {
"file_size_info: '$filename': falling back to 'raw' from unknown format '$file_format'\n";
$file_format = 'raw';
}
- my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
- push $cmd->@*, '-f', $file_format if $file_format;
-
- my $json = '';
- my $err_output = '';
- eval {
- run_command(
- $cmd,
- timeout => $timeout,
- outfunc => sub { $json .= shift },
- errfunc => sub { $err_output .= shift . "\n" },
- );
- };
- warn $@ if $@;
- if ($err_output) {
- # if qemu did not output anything to stdout we die with stderr as an error
- die $err_output if !$json;
- # otherwise we warn about it and try to parse the json
- warn $err_output;
- }
+ my $json = qemu_img_info($filename, $file_format, $timeout);
if (!$json) {
die "failed to query file information with qemu-img\n" if $untrusted;
# skip decoding if there was no output, e.g. if there was a timeout.
--
2.39.5
* [pve-devel] [PATCH qemu-server 3/4] qcow2: add external snapshot support
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH qemu-server 3/4] qcow2: add external snapshot support
Date: Wed, 9 Jul 2025 18:21:51 +0200
Message-ID: <20250709162202.2952597-7-alexandre.derumier@groupe-cyllene.com>
fixme:
- add tests for internal (was missing) && external qemu snapshots
- is it possible to use blockjob transactions for commit && stream
for atomic disk commit?
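For reference, the flow implemented below, roughly:
taking a snapshot 'snap1' of the live 'current' image:
  1. blockdev_rename: rename current -> snap1 on the storage, blockdev-add
     the renamed node and blockdev-reopen the throttle filter on top of it
  2. blockdev_external_snapshot: preallocate a new current file with snap1
     as backing file, blockdev-add it (backing forced to undef) and switch
     to it with blockdev-snapshot
deleting a snapshot:
  - first snapshot (the original base image): block-commit the child into
    it, then rename it to the child's name
  - intermediate snapshot: block-stream it into its child, then delete the
    old file and nodes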
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/QemuConfig.pm | 4 +-
src/PVE/QemuServer.pm | 132 ++++++++++++---
src/PVE/QemuServer/Blockdev.pm | 296 ++++++++++++++++++++++++++++++++-
src/test/snapshot-test.pm | 4 +-
4 files changed, 402 insertions(+), 34 deletions(-)
diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
index 82295641..e0853d65 100644
--- a/src/PVE/QemuConfig.pm
+++ b/src/PVE/QemuConfig.pm
@@ -398,7 +398,7 @@ sub __snapshot_create_vol_snapshot {
print "snapshotting '$device' ($drive->{file})\n";
- PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $volid, $snapname);
+ PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $drive, $snapname);
}
sub __snapshot_delete_remove_drive {
@@ -435,7 +435,7 @@ sub __snapshot_delete_vol_snapshot {
my $storecfg = PVE::Storage::config();
my $volid = $drive->{file};
- PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $volid, $snapname);
+ PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $drive, $snapname);
push @$unused, $volid;
}
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 92c8fad6..c1e15675 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -4340,20 +4340,64 @@ sub qemu_cpu_hotplug {
}
sub qemu_volume_snapshot {
- my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
+ my ($vmid, $deviceid, $storecfg, $drive, $snap) = @_;
+ my $volid = $drive->{file};
my $running = check_running($vmid);
- if ($running && do_snapshots_with_qemu($storecfg, $volid, $deviceid)) {
+ my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $deviceid, $running);
+
+ if ($do_snapshots_type eq 'internal') {
+ print "internal qemu snapshot\n";
mon_cmd($vmid, 'blockdev-snapshot-internal-sync', device => $deviceid, name => $snap);
- } else {
+ } elsif ($do_snapshots_type eq 'external') {
+ my $storeid = (PVE::Storage::parse_volume_id($volid))[0];
+ my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+ print "external qemu snapshot\n";
+ my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
+ my $parent_snap = $snapshots->{'current'}->{parent};
+ my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+
+ PVE::QemuServer::Blockdev::blockdev_rename(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $deviceid,
+ $drive,
+ 'current',
+ $snap,
+ $parent_snap,
+ );
+ eval {
+ PVE::QemuServer::Blockdev::blockdev_external_snapshot(
+ $storecfg, $vmid, $machine_version, $deviceid, $drive, $snap,
+ );
+ };
+ if ($@) {
+ warn $@ if $@;
+ print "Error creating snapshot. Revert rename\n";
+ eval {
+ PVE::QemuServer::Blockdev::blockdev_rename(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $deviceid,
+ $drive,
+ $snap,
+ 'current',
+ $parent_snap,
+ );
+ };
+ }
+ } elsif ($do_snapshots_type eq 'storage') {
PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
}
}
sub qemu_volume_snapshot_delete {
- my ($vmid, $storecfg, $volid, $snap) = @_;
+ my ($vmid, $storecfg, $drive, $snap) = @_;
+ my $volid = $drive->{file};
my $running = check_running($vmid);
my $attached_deviceid;
@@ -4368,14 +4412,62 @@ sub qemu_volume_snapshot_delete {
);
}
- if ($attached_deviceid && do_snapshots_with_qemu($storecfg, $volid, $attached_deviceid)) {
+ my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $attached_deviceid, $running);
+
+ if ($do_snapshots_type eq 'internal') {
mon_cmd(
$vmid,
'blockdev-snapshot-delete-internal-sync',
device => $attached_deviceid,
name => $snap,
);
- } else {
+ } elsif ($do_snapshots_type eq 'external') {
+ print "delete qemu external snapshot\n";
+
+ my $path = PVE::Storage::path($storecfg, $volid);
+ my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
+ my $parentsnap = $snapshots->{$snap}->{parent};
+ my $childsnap = $snapshots->{$snap}->{child};
+ my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
+
+ # if we delete the first snapshot, we commit, because the first snapshot is the original base image, which should be big.
+ # improve-me: if firstsnap > child: commit; if firstsnap < child: do a stream.
+ if (!$parentsnap) {
+ print "delete first snapshot $snap\n";
+ PVE::QemuServer::Blockdev::blockdev_commit(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $attached_deviceid,
+ $drive,
+ $childsnap,
+ $snap,
+ );
+ PVE::QemuServer::Blockdev::blockdev_rename(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $attached_deviceid,
+ $drive,
+ $snap,
+ $childsnap,
+ $snapshots->{$childsnap}->{child},
+ );
+ } else {
+ #intermediate snapshot, we always stream the snapshot to child snapshot
+ print "stream intermediate snapshot $snap to $childsnap\n";
+ PVE::QemuServer::Blockdev::blockdev_stream(
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $attached_deviceid,
+ $drive,
+ $snap,
+ $parentsnap,
+ $childsnap,
+ );
+ }
+ } elsif ($do_snapshots_type eq 'storage') {
PVE::Storage::volume_snapshot_delete(
$storecfg,
$volid,
@@ -7563,28 +7655,20 @@ sub restore_tar_archive {
warn $@ if $@;
}
-my $qemu_snap_storage = {
- rbd => 1,
-};
-
-sub do_snapshots_with_qemu {
- my ($storecfg, $volid, $deviceid) = @_;
-
- return if $deviceid =~ m/tpmstate0/;
+sub do_snapshots_type {
+ my ($storecfg, $volid, $deviceid, $running) = @_;
- my $storage_name = PVE::Storage::parse_volume_id($volid);
- my $scfg = $storecfg->{ids}->{$storage_name};
- die "could not find storage '$storage_name'\n" if !defined($scfg);
+ #always use storage snapshot for tpmstate
+ return 'storage' if $deviceid && $deviceid =~ m/tpmstate0/;
- if ($qemu_snap_storage->{ $scfg->{type} } && !$scfg->{krbd}) {
- return 1;
- }
+ # we use a storage snapshot if the vm is not running or if the disk is unused
+ return 'storage' if !$running || !$deviceid;
- if ($volid =~ m/\.(qcow2|qed)$/) {
- return 1;
- }
+ my $qemu_snapshot_type = PVE::Storage::volume_support_qemu_snapshot($storecfg, $volid);
+ # if running but qemu snapshots are not supported, we use a storage snapshot
+ return 'storage' if !$qemu_snapshot_type;
- return;
+ return $qemu_snapshot_type;
}
=head3 template_create($vmid, $conf [, $disk])
diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
index 2a0513fb..f5c07e30 100644
--- a/src/PVE/QemuServer/Blockdev.pm
+++ b/src/PVE/QemuServer/Blockdev.pm
@@ -11,6 +11,7 @@ use JSON;
use PVE::JSONSchema qw(json_bool);
use PVE::Storage;
+use PVE::QemuServer::BlockJob;
use PVE::QemuServer::Drive qw(drive_is_cdrom);
use PVE::QemuServer::Helpers;
use PVE::QemuServer::Monitor qw(mon_cmd);
@@ -243,6 +244,9 @@ my sub generate_file_blockdev {
my $blockdev = {};
my $scfg = undef;
+ delete $options->{'snapshot-name'}
+ if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
+
die "generate_file_blockdev called without volid/path\n" if !$drive->{file};
die "generate_file_blockdev called with 'none'\n" if $drive->{file} eq 'none';
# FIXME use overlay and new config option to define storage for temp write device
@@ -322,6 +326,9 @@ my sub generate_format_blockdev {
die "generate_format_blockdev called with 'none'\n" if $drive->{file} eq 'none';
die "generate_format_blockdev called with NBD path\n" if is_nbd($drive);
+ delete($options->{'snapshot-name'})
+ if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
+
my $scfg;
my $format;
my $volid = $drive->{file};
@@ -400,6 +407,17 @@ my sub generate_backing_chain_blockdev {
);
}
+sub generate_throttle_blockdev {
+ my ($drive_id, $child) = @_;
+
+ return {
+ driver => "throttle",
+ 'node-name' => top_node_name($drive_id),
+ 'throttle-group' => throttle_group_id($drive_id),
+ file => $child,
+ };
+}
+
sub generate_drive_blockdev {
my ($storecfg, $drive, $machine_version, $options) = @_;
@@ -442,12 +460,7 @@ sub generate_drive_blockdev {
return $child if $options->{fleecing} || $options->{'tpm-backup'} || $options->{'no-throttle'};
# this is the top filter entry point, use $drive-drive_id as nodename
- return {
- driver => "throttle",
- 'node-name' => top_node_name($drive_id),
- 'throttle-group' => throttle_group_id($drive_id),
- file => $child,
- };
+ return generate_throttle_blockdev($drive_id, $child);
}
sub generate_pbs_blockdev {
@@ -785,4 +798,275 @@ sub set_io_throttle {
}
}
+sub blockdev_external_snapshot {
+ my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $size) = @_;
+
+ print "Creating a new current volume with $snap as backing snap\n";
+
+ my $volid = $drive->{file};
+
+ # preallocate a new current file with a reference to the backing file
+ PVE::Storage::volume_snapshot($storecfg, $volid, $snap, 1);
+
+ #be sure to add drive in write mode
+ delete($drive->{ro});
+
+ my $new_file_blockdev = generate_file_blockdev($storecfg, $drive);
+ my $new_fmt_blockdev = generate_format_blockdev($storecfg, $drive, $new_file_blockdev);
+
+ my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive, $snap);
+ my $snap_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $snap_file_blockdev,
+ { 'snapshot-name' => $snap },
+ );
+
+ # backing needs to be forced to undef in the blockdev, to avoid reopening the backing file on blockdev-add
+ $new_fmt_blockdev->{backing} = undef;
+
+ mon_cmd($vmid, 'blockdev-add', %$new_fmt_blockdev);
+
+ mon_cmd(
+ $vmid, 'blockdev-snapshot',
+ node => $snap_fmt_blockdev->{'node-name'},
+ overlay => $new_fmt_blockdev->{'node-name'},
+ );
+}
+
+sub blockdev_delete {
+ my ($storecfg, $vmid, $drive, $file_blockdev, $fmt_blockdev, $snap) = @_;
+
+ # wrap in eval: reopen auto-removes the old nodename only if it was created at vm start as a command line argument
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $file_blockdev->{'node-name'}) };
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $fmt_blockdev->{'node-name'}) };
+
+ # delete the file (don't use vdisk_free, as we don't want to delete the whole snapshot chain)
+ print "delete old $file_blockdev->{filename}\n";
+
+ my $storage_name = PVE::Storage::parse_volume_id($drive->{file});
+
+ my $volid = $drive->{file};
+ PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snap, 1);
+}
+
+sub blockdev_rename {
+ my (
+ $storecfg,
+ $vmid,
+ $machine_version,
+ $deviceid,
+ $drive,
+ $src_snap,
+ $target_snap,
+ $parent_snap,
+ ) = @_;
+
+ print "rename $src_snap to $target_snap\n";
+
+ my $volid = $drive->{file};
+
+ my $src_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $src_snap },
+ );
+ my $src_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $src_file_blockdev,
+ { 'snapshot-name' => $src_snap },
+ );
+
+ #rename the snapshot
+ PVE::Storage::rename_snapshot($storecfg, $volid, $src_snap, $target_snap);
+
+ my $target_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $target_snap },
+ );
+ my $target_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $target_file_blockdev,
+ { 'snapshot-name' => $target_snap },
+ );
+
+ if ($target_snap eq 'current' || $src_snap eq 'current') {
+ #rename from|to current
+ my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
+
+ #add backing to target
+ if ($parent_snap) {
+ my $parent_fmt_nodename =
+ get_node_name('fmt', $drive_id, $volid, { 'snapshot-name' => $parent_snap });
+ $target_fmt_blockdev->{backing} = $parent_fmt_nodename;
+ }
+ mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
+
+ #reopen the current throttlefilter nodename with the target fmt nodename
+ my $throttle_blockdev =
+ generate_throttle_blockdev($drive_id, $target_fmt_blockdev->{'node-name'});
+ mon_cmd($vmid, 'blockdev-reopen', options => [$throttle_blockdev]);
+ } else {
+ rename($src_file_blockdev->{filename}, $target_file_blockdev->{filename});
+
+ #intermediate snapshot
+ mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
+
+ #reopen the parent node with the new target fmt backing node
+ my $parent_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $parent_snap },
+ );
+ my $parent_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $parent_file_blockdev,
+ { 'snapshot-name' => $parent_snap },
+ );
+ $parent_fmt_blockdev->{backing} = $target_fmt_blockdev->{'node-name'};
+ mon_cmd($vmid, 'blockdev-reopen', options => [$parent_fmt_blockdev]);
+
+ # change the backing-file in the qcow2 metadata
+ mon_cmd(
+ $vmid, 'change-backing-file',
+ device => $deviceid,
+ 'image-node-name' => $parent_fmt_blockdev->{'node-name'},
+ 'backing-file' => $target_file_blockdev->{filename},
+ );
+ }
+
+ # delete old file|fmt nodes
+ # wrap in eval: reopen auto-removes the old nodename only if it was created at vm start as a command line argument
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_file_blockdev->{'node-name'}) };
+ eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_fmt_blockdev->{'node-name'}) };
+}
+
+sub blockdev_commit {
+ my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $src_snap, $target_snap) = @_;
+
+ my $volid = $drive->{file};
+
+ print "block-commit $src_snap to base:$target_snap\n";
+
+ my $target_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $target_snap },
+ );
+ my $target_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $target_file_blockdev,
+ { 'snapshot-name' => $target_snap },
+ );
+
+ my $src_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $src_snap },
+ );
+ my $src_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $src_file_blockdev,
+ { 'snapshot-name' => $src_snap },
+ );
+
+ my $job_id = "commit-$deviceid";
+ my $jobs = {};
+ my $opts = { 'job-id' => $job_id, device => $deviceid };
+
+ $opts->{'base-node'} = $target_fmt_blockdev->{'node-name'};
+ $opts->{'top-node'} = $src_fmt_blockdev->{'node-name'};
+
+ mon_cmd($vmid, "block-commit", %$opts);
+ $jobs->{$job_id} = {};
+
+ # if we commit the current image, the blockjob needs to be in 'complete' mode
+ my $complete = $src_snap && $src_snap ne 'current' ? 'auto' : 'complete';
+
+ eval {
+ PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+ $vmid, undef, $jobs, $complete, 0, 'commit',
+ );
+ };
+ if ($@) {
+ die "Failed to complete block commit: $@\n";
+ }
+
+ blockdev_delete($storecfg, $vmid, $drive, $src_file_blockdev, $src_fmt_blockdev, $src_snap);
+}
+
+sub blockdev_stream {
+ my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap, $target_snap) =
+ @_;
+
+ my $volid = $drive->{file};
+ $target_snap = undef if $target_snap eq 'current';
+
+ my $parent_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $parent_snap },
+ );
+ my $parent_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $parent_file_blockdev,
+ { 'snapshot-name' => $parent_snap },
+ );
+
+ my $target_file_blockdev = generate_file_blockdev(
+ $storecfg,
+ $drive,
+ $machine_version,
+ { 'snapshot-name' => $target_snap },
+ );
+ my $target_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $target_file_blockdev,
+ { 'snapshot-name' => $target_snap },
+ );
+
+ my $snap_file_blockdev =
+ generate_file_blockdev($storecfg, $drive, $machine_version, { 'snapshot-name' => $snap });
+ my $snap_fmt_blockdev = generate_format_blockdev(
+ $storecfg,
+ $drive,
+ $snap_file_blockdev,
+ { 'snapshot-name' => $snap },
+ );
+
+ my $job_id = "stream-$deviceid";
+ my $jobs = {};
+ my $options = { 'job-id' => $job_id, device => $target_fmt_blockdev->{'node-name'} };
+ $options->{'base-node'} = $parent_fmt_blockdev->{'node-name'};
+ $options->{'backing-file'} = $parent_file_blockdev->{filename};
+
+ mon_cmd($vmid, 'block-stream', %$options);
+ $jobs->{$job_id} = {};
+
+ eval {
+ PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
+ $vmid, undef, $jobs, 'auto', 0, 'stream',
+ );
+ };
+ if ($@) {
+ die "Failed to complete block stream: $@\n";
+ }
+
+ blockdev_delete($storecfg, $vmid, $drive, $snap_file_blockdev, $snap_fmt_blockdev, $snap);
+}
+
1;
diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
index 4fce87f1..f61cd64b 100644
--- a/src/test/snapshot-test.pm
+++ b/src/test/snapshot-test.pm
@@ -399,8 +399,8 @@ sub set_migration_caps { } # ignored
# BEGIN redefine PVE::QemuServer methods
-sub do_snapshots_with_qemu {
- return 0;
+sub do_snapshots_type {
+ return 'storage';
}
sub vm_start {
--
2.39.5
* [pve-devel] [PATCH pve-storage 04/13] plugin: add qemu_img_measure
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 04/13] plugin: add qemu_img_measure
Date: Wed, 9 Jul 2025 18:21:52 +0200
Message-ID: <20250709162202.2952597-8-alexandre.derumier@groupe-cyllene.com>
This computes the whole size of a qcow2 volume, data plus metadata.
Needed for qcow2-over-lvm volumes.
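For illustration, measuring a 10G image destined for a backed qcow2
(the size is passed in KiB) runs something like:
  /usr/bin/qemu-img measure --output=json --size 10485760K -O qcow2 -o extended_l2=on,cluster_size=128k
and the returned json contains the 'required' and 'fully-allocated'
sizes.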
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Plugin.pm | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 9c79439..b7c9524 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -710,6 +710,29 @@ sub qemu_img_info {
return PVE::Storage::Common::run_qemu_img_json($cmd, $timeout);
}
+=pod
+
+=head3 qemu_img_measure
+
+ qemu_img_measure($size, $fmt, $timeout, $is_backed)
+
+Returns JSON with the maximum size, including all metadata overhead, for an image with format C<$fmt> and original size C<$size> KiB.
+If the image is backed (C<$is_backed>), the backed-image cluster options are used.
+=cut
+
+sub qemu_img_measure {
+ my ($size, $fmt, $timeout, $is_backed) = @_;
+
+ die "format is missing" if !$fmt;
+
+ my $cmd = ['/usr/bin/qemu-img', 'measure', '--output=json', '--size', "${size}K", '-O', $fmt];
+ if ($is_backed) {
+ my $options = $QCOW2_CLUSTERS->{backed};
+ push $cmd->@*, '-o', join(',', @$options) if @$options > 0;
+ }
+ return PVE::Storage::Common::run_qemu_img_json($cmd, $timeout);
+}
+
# Storage implementation
# called during addition of storage (before the new storage config got written)
--
2.39.5
* [pve-devel] [PATCH qemu-server 4/4] tests: fix efi vm-disk-100-0.raw -> vm-100-disk-0.raw
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH qemu-server 4/4] tests: fix efi vm-disk-100-0.raw -> vm-100-disk-0.raw
Date: Wed, 9 Jul 2025 18:21:53 +0200
Message-ID: <20250709162202.2952597-9-alexandre.derumier@groupe-cyllene.com>
not sure if it was an error, but it now fails with the new, more
restrictive volname parsing
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/test/cfg2cmd/efi-raw-old.conf | 2 +-
src/test/cfg2cmd/efi-raw-old.conf.cmd | 2 +-
src/test/cfg2cmd/efi-raw-template.conf | 2 +-
src/test/cfg2cmd/efi-raw-template.conf.cmd | 2 +-
src/test/cfg2cmd/efi-raw.conf | 2 +-
src/test/cfg2cmd/efi-raw.conf.cmd | 2 +-
src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf | 2 +-
src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd | 2 +-
src/test/cfg2cmd/efi-secboot-and-tpm.conf | 2 +-
src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd | 2 +-
src/test/cfg2cmd/sev-es.conf | 2 +-
src/test/cfg2cmd/sev-es.conf.cmd | 2 +-
src/test/cfg2cmd/sev-std.conf | 2 +-
src/test/cfg2cmd/sev-std.conf.cmd | 2 +-
src/test/run_config2command_tests.pl | 2 +-
15 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/src/test/cfg2cmd/efi-raw-old.conf b/src/test/cfg2cmd/efi-raw-old.conf
index 4566b9c3..621470ed 100644
--- a/src/test/cfg2cmd/efi-raw-old.conf
+++ b/src/test/cfg2cmd/efi-raw-old.conf
@@ -2,4 +2,4 @@
smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
bios: ovmf
machine: pc-i440fx-4.1+pve0
-efidisk0: local:100/vm-disk-100-0.raw
+efidisk0: local:100/vm-100-disk-0.raw
diff --git a/src/test/cfg2cmd/efi-raw-old.conf.cmd b/src/test/cfg2cmd/efi-raw-old.conf.cmd
index 3990de38..b62967bd 100644
--- a/src/test/cfg2cmd/efi-raw-old.conf.cmd
+++ b/src/test/cfg2cmd/efi-raw-old.conf.cmd
@@ -10,7 +10,7 @@
-daemonize \
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-drive 'if=pflash,unit=0,format=raw,readonly=on,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
- -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-disk-100-0.raw' \
+ -drive 'if=pflash,unit=1,id=drive-efidisk0,format=raw,file=/var/lib/vz/images/100/vm-100-disk-0.raw' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
diff --git a/src/test/cfg2cmd/efi-raw-template.conf b/src/test/cfg2cmd/efi-raw-template.conf
index a1815226..3619d3ec 100644
--- a/src/test/cfg2cmd/efi-raw-template.conf
+++ b/src/test/cfg2cmd/efi-raw-template.conf
@@ -1,5 +1,5 @@
# TEST: Test raw efidisk size parameter
smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
bios: ovmf
-efidisk0: local:100/base-disk-100-0.raw
+efidisk0: local:100/base-100-disk-0.raw
template: 1
diff --git a/src/test/cfg2cmd/efi-raw-template.conf.cmd b/src/test/cfg2cmd/efi-raw-template.conf.cmd
index b6064f98..686f9189 100644
--- a/src/test/cfg2cmd/efi-raw-template.conf.cmd
+++ b/src/test/cfg2cmd/efi-raw-template.conf.cmd
@@ -10,7 +10,7 @@
-daemonize \
-object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
-blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
- -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/base-disk-100-0.raw","node-name":"e3bd051dc2860cd423537bc00138c50","read-only":true},"node-name":"f3bd051dc2860cd423537bc00138c50","read-only":true,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/base-100-disk-0.raw","node-name":"e2ab65c8ec567acbeb645244f6c4982","read-only":true},"node-name":"f2ab65c8ec567acbeb645244f6c4982","read-only":true,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
diff --git a/src/test/cfg2cmd/efi-raw.conf b/src/test/cfg2cmd/efi-raw.conf
index 11e6a3e6..37f0358f 100644
--- a/src/test/cfg2cmd/efi-raw.conf
+++ b/src/test/cfg2cmd/efi-raw.conf
@@ -1,4 +1,4 @@
# TEST: Test raw efidisk size parameter
smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
bios: ovmf
-efidisk0: local:100/vm-disk-100-0.raw
+efidisk0: local:100/vm-100-disk-0.raw
diff --git a/src/test/cfg2cmd/efi-raw.conf.cmd b/src/test/cfg2cmd/efi-raw.conf.cmd
index c10df1cb..394ea3a3 100644
--- a/src/test/cfg2cmd/efi-raw.conf.cmd
+++ b/src/test/cfg2cmd/efi-raw.conf.cmd
@@ -11,7 +11,7 @@
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
-blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE.fd"},"node-name":"pflash0","read-only":true}' \
- -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-0.raw","node-name":"e1175f2a490414e7c53337589fde17a","read-only":false},"node-name":"f1175f2a490414e7c53337589fde17a","read-only":false,"size":131072},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
diff --git a/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf b/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf
index 5d4b5f5e..51e525b3 100644
--- a/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf
+++ b/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf
@@ -2,5 +2,5 @@
smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
bios: ovmf
machine: q35
-efidisk0: local:100/vm-disk-100-0.raw,efitype=4m,pre-enrolled-keys=1,size=528K
+efidisk0: local:100/vm-100-disk-0.raw,efitype=4m,pre-enrolled-keys=1,size=528K
tpmstate0: local:108/vm-100-disk-1.raw,size=4M,version=v2.0
diff --git a/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd b/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd
index a9dcd474..844bc622 100644
--- a/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd
+++ b/src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd
@@ -11,7 +11,7 @@
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
-blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE_4M.secboot.fd"},"node-name":"pflash0","read-only":true}' \
- -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-0.raw","node-name":"e1175f2a490414e7c53337589fde17a","read-only":false},"node-name":"f1175f2a490414e7c53337589fde17a","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
diff --git a/src/test/cfg2cmd/efi-secboot-and-tpm.conf b/src/test/cfg2cmd/efi-secboot-and-tpm.conf
index 915424ec..4856b531 100644
--- a/src/test/cfg2cmd/efi-secboot-and-tpm.conf
+++ b/src/test/cfg2cmd/efi-secboot-and-tpm.conf
@@ -1,5 +1,5 @@
# TEST: Test newer 4MB efidisk with secureboot and a TPM device
smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
bios: ovmf
-efidisk0: local:100/vm-disk-100-0.raw,efitype=4m,pre-enrolled-keys=1,size=528K
+efidisk0: local:100/vm-100-disk-0.raw,efitype=4m,pre-enrolled-keys=1,size=528K
tpmstate0: local:108/vm-100-disk-1.raw,size=4M,version=v2.0
diff --git a/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd b/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd
index c65c74f5..c7b5e8d0 100644
--- a/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd
+++ b/src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd
@@ -11,7 +11,7 @@
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
-blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CODE_4M.fd"},"node-name":"pflash0","read-only":true}' \
- -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-0.raw","node-name":"e1175f2a490414e7c53337589fde17a","read-only":false},"node-name":"f1175f2a490414e7c53337589fde17a","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
diff --git a/src/test/cfg2cmd/sev-es.conf b/src/test/cfg2cmd/sev-es.conf
index bdae430e..3da71e76 100644
--- a/src/test/cfg2cmd/sev-es.conf
+++ b/src/test/cfg2cmd/sev-es.conf
@@ -2,5 +2,5 @@
# HW_CAPABILITIES: amd-turin-9005
smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
bios: ovmf
-efidisk0: local:100/vm-disk-100-0.raw,efitype=4m,pre-enrolled-keys=1,size=528K
+efidisk0: local:100/vm-100-disk-0.raw,efitype=4m,pre-enrolled-keys=1,size=528K
amd-sev: type=es
diff --git a/src/test/cfg2cmd/sev-es.conf.cmd b/src/test/cfg2cmd/sev-es.conf.cmd
index a39b6f67..686550f6 100644
--- a/src/test/cfg2cmd/sev-es.conf.cmd
+++ b/src/test/cfg2cmd/sev-es.conf.cmd
@@ -11,7 +11,7 @@
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
-blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd"},"node-name":"pflash0","read-only":true}' \
- -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-0.raw","node-name":"e1175f2a490414e7c53337589fde17a","read-only":false},"node-name":"f1175f2a490414e7c53337589fde17a","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
diff --git a/src/test/cfg2cmd/sev-std.conf b/src/test/cfg2cmd/sev-std.conf
index d636f559..a85a08c9 100644
--- a/src/test/cfg2cmd/sev-std.conf
+++ b/src/test/cfg2cmd/sev-std.conf
@@ -2,5 +2,5 @@
# HW_CAPABILITIES: amd-turin-9005
smbios1: uuid=7b10d7af-b932-4c66-b2c3-3996152ec465
bios: ovmf
-efidisk0: local:100/vm-disk-100-0.raw,efitype=4m,pre-enrolled-keys=1,size=528K
+efidisk0: local:100/vm-100-disk-0.raw,efitype=4m,pre-enrolled-keys=1,size=528K
amd-sev: type=std
diff --git a/src/test/cfg2cmd/sev-std.conf.cmd b/src/test/cfg2cmd/sev-std.conf.cmd
index 3878f15c..7d586d38 100644
--- a/src/test/cfg2cmd/sev-std.conf.cmd
+++ b/src/test/cfg2cmd/sev-std.conf.cmd
@@ -11,7 +11,7 @@
-smbios 'type=1,uuid=7b10d7af-b932-4c66-b2c3-3996152ec465' \
-object '{"id":"throttle-drive-efidisk0","limits":{},"qom-type":"throttle-group"}' \
-blockdev '{"driver":"raw","file":{"driver":"file","filename":"/usr/share/pve-edk2-firmware//OVMF_CVM_CODE_4M.fd"},"node-name":"pflash0","read-only":true}' \
- -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-disk-100-0.raw","node-name":"e3c77a41648168ee29008dc344126a9","read-only":false},"node-name":"f3c77a41648168ee29008dc344126a9","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
+ -blockdev '{"driver":"throttle","file":{"cache":{"direct":true,"no-flush":false},"driver":"raw","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vz/images/100/vm-100-disk-0.raw","node-name":"e1175f2a490414e7c53337589fde17a","read-only":false},"node-name":"f1175f2a490414e7c53337589fde17a","read-only":false,"size":540672},"node-name":"drive-efidisk0","throttle-group":"throttle-drive-efidisk0"}' \
-smp '1,sockets=1,cores=1,maxcpus=1' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
diff --git a/src/test/run_config2command_tests.pl b/src/test/run_config2command_tests.pl
index 61302f6b..16a56987 100755
--- a/src/test/run_config2command_tests.pl
+++ b/src/test/run_config2command_tests.pl
@@ -42,7 +42,7 @@ my $base_env = {
path => '/var/lib/vzsnapext',
type => 'dir',
shared => 0,
- snapext => 1,
+ 'external-snapshots' => 1,
},
noimages => {
content => {
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* [pve-devel] [PATCH pve-storage 05/13] plugin: add qemu_img_resize
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (7 preceding siblings ...)
2025-07-09 16:21 ` [pve-devel] [PATCH qemu-server 4/4] tests: fix efi vm-disk-100-0.raw -> vm-100-disk-0.raw Alexandre Derumier via pve-devel
@ 2025-07-09 16:21 ` Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 06/13] rbd && zfs : create_base : remove $running param from volume_snapshot Alexandre Derumier via pve-devel
` (8 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:21 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 4383 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 05/13] plugin: add qemu_img_resize
Date: Wed, 9 Jul 2025 18:21:54 +0200
Message-ID: <20250709162202.2952597-10-alexandre.derumier@groupe-cyllene.com>
and add the missing preallocation option
https://github.com/qemu/qemu/commit/dc5f690b97ccdffa79fe7169bb26b0ebf06688bf
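For illustration, a minimal sketch of the command this helper builds for
a storage configured with preallocation=falloc ($path and $size are
placeholders, not part of this patch):

    # resulting command array, cf. the hunk below:
    my $cmd = ['/usr/bin/qemu-img', 'resize', '--preallocation=falloc',
        '-f', 'qcow2', $path, $size];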
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Plugin.pm | 28 +++++++++++++++++++++++++---
1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index b7c9524..ed310a2 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -733,6 +733,30 @@ sub qemu_img_measure {
return PVE::Storage::Common::run_qemu_img_json($cmd, $timeout);
}
+=pod
+
+=head3 qemu_img_resize
+
+ qemu_img_resize($scfg, $path, $format, $size, $timeout)
+
+Resize a qemu image C<$path> with format C<$format> to a target size C<$size> in kb.
+The default timeout C<$timeout> is 10 seconds if not specified.
+=cut
+
+sub qemu_img_resize {
+ my ($scfg, $path, $format, $size, $timeout) = @_;
+
+ die "format is missing" if !$format;
+
+ my $prealloc_opt = preallocation_cmd_option($scfg, $format);
+ my $cmd = ['/usr/bin/qemu-img', 'resize'];
+ push $cmd->@*, "--$prealloc_opt" if $prealloc_opt;
+ push $cmd->@*, '-f', $format, $path, $size;
+
+ $timeout = 10 if !$timeout;
+ run_command($cmd, timeout => $timeout);
+}
+
# Storage implementation
# called during addition of storage (before the new storage config got written)
@@ -1284,9 +1308,7 @@ sub volume_resize {
my $format = ($class->parse_volname($volname))[6];
- my $cmd = ['/usr/bin/qemu-img', 'resize', '-f', $format, $path, $size];
-
- run_command($cmd, timeout => 10);
+ qemu_img_resize($scfg, $path, $format, $size, 10);
return undef;
}
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* [pve-devel] [PATCH pve-storage 06/13] rbd && zfs : create_base : remove $running param from volume_snapshot
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (8 preceding siblings ...)
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 05/13] plugin: add qemu_img_resize Alexandre Derumier via pve-devel
@ 2025-07-09 16:21 ` Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 07/13] storage: volume_snapshot: add $running param Alexandre Derumier via pve-devel
` (7 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:21 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 4276 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 06/13] rbd && zfs : create_base : remove $running param from volume_snapshot
Date: Wed, 9 Jul 2025 18:21:55 +0200
Message-ID: <20250709162202.2952597-11-alexandre.derumier@groupe-cyllene.com>
Template guests are never running and never write
to their disks/mountpoints, so the $running parameters there can be
dropped.
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/RBDPlugin.pm | 4 +---
src/PVE/Storage/ZFSPlugin.pm | 4 +---
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
index ce7db50..88367ea 100644
--- a/src/PVE/Storage/RBDPlugin.pm
+++ b/src/PVE/Storage/RBDPlugin.pm
@@ -628,9 +628,7 @@ sub create_base {
eval { $class->unmap_volume($storeid, $scfg, $volname); };
warn $@ if $@;
- my $running = undef; #fixme : is create_base always offline ?
-
- $class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
+ $class->volume_snapshot($scfg, $storeid, $newname, $snap);
my (undef, undef, undef, $protected) = rbd_volume_info($scfg, $storeid, $newname, $snap);
diff --git a/src/PVE/Storage/ZFSPlugin.pm b/src/PVE/Storage/ZFSPlugin.pm
index eed39cd..eedbcdc 100644
--- a/src/PVE/Storage/ZFSPlugin.pm
+++ b/src/PVE/Storage/ZFSPlugin.pm
@@ -286,9 +286,7 @@ sub create_base {
my $guid = $class->zfs_create_lu($scfg, $newname);
$class->zfs_add_lun_mapping_entry($scfg, $newname, $guid);
- my $running = undef; #fixme : is create_base always offline ?
-
- $class->volume_snapshot($scfg, $storeid, $newname, $snap, $running);
+ $class->volume_snapshot($scfg, $storeid, $newname, $snap);
return $newvolname;
}
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* [pve-devel] [PATCH pve-storage 07/13] storage: volume_snapshot: add $running param
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (9 preceding siblings ...)
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 06/13] rbd && zfs : create_base : remove $running param from volume_snapshot Alexandre Derumier via pve-devel
@ 2025-07-09 16:21 ` Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 08/13] storage: add rename_snapshot method Alexandre Derumier via pve-devel
` (6 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:21 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 8434 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 07/13] storage: volume_snapshot: add $running param
Date: Wed, 9 Jul 2025 18:21:56 +0200
Message-ID: <20250709162202.2952597-12-alexandre.derumier@groupe-cyllene.com>
This adds a $running param to volume_snapshot;
it can be used if some extra actions need to be done at the storage
layer when the snapshot has already been taken at the qemu level.
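For illustration, a hedged sketch of the intended call sites (the
qemu-server caller code is an assumption, not part of this patch):

    # offline snapshot: the storage layer does all the work
    PVE::Storage::volume_snapshot($storecfg, $volid, $snap);

    # online snapshot: qemu has already taken the snapshot itself, pass
    # $running so the plugin only performs the remaining storage actions
    PVE::Storage::volume_snapshot($storecfg, $volid, $snap, 1);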
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
ApiChangeLog | 4 ++++
src/PVE/Storage.pm | 4 ++--
src/PVE/Storage/ESXiPlugin.pm | 2 +-
src/PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
src/PVE/Storage/LVMPlugin.pm | 2 +-
src/PVE/Storage/LvmThinPlugin.pm | 2 +-
src/PVE/Storage/PBSPlugin.pm | 2 +-
src/PVE/Storage/Plugin.pm | 2 +-
src/PVE/Storage/RBDPlugin.pm | 2 +-
src/PVE/Storage/ZFSPoolPlugin.pm | 2 +-
10 files changed, 14 insertions(+), 10 deletions(-)
diff --git a/ApiChangeLog b/ApiChangeLog
index 6baedd2..2a01e3f 100644
--- a/ApiChangeLog
+++ b/ApiChangeLog
@@ -22,6 +22,10 @@ Future changes should be documented in here.
Feel free to request allowing more drivers or options on the pve-devel mailing list based on your
needs.
+* Add `running` parameter to `volume_snapshot()`
+ The parameter *can* be used if some extra actions need to be done at the storage
+ layer when the snapshot has already been taken at the qemu level while the VM is running.
+
## Version 11:
* Allow declaring storage features via plugin data
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 7861bf6..7f2da80 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -449,13 +449,13 @@ sub volume_rollback_is_possible {
}
sub volume_snapshot {
- my ($cfg, $volid, $snap) = @_;
+ my ($cfg, $volid, $snap, $running) = @_;
my ($storeid, $volname) = parse_volume_id($volid, 1);
if ($storeid) {
my $scfg = storage_config($cfg, $storeid);
my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
- return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap);
+ return $plugin->volume_snapshot($scfg, $storeid, $volname, $snap, $running);
} elsif ($volid =~ m|^(/.+)$| && -e $volid) {
die "snapshot file/device '$volid' is not possible\n";
} else {
diff --git a/src/PVE/Storage/ESXiPlugin.pm b/src/PVE/Storage/ESXiPlugin.pm
index ab5242d..e655d7b 100644
--- a/src/PVE/Storage/ESXiPlugin.pm
+++ b/src/PVE/Storage/ESXiPlugin.pm
@@ -555,7 +555,7 @@ sub volume_size_info {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "creating snapshots is not supported for $class\n";
}
diff --git a/src/PVE/Storage/ISCSIDirectPlugin.pm b/src/PVE/Storage/ISCSIDirectPlugin.pm
index 62e9026..93cfd3c 100644
--- a/src/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/src/PVE/Storage/ISCSIDirectPlugin.pm
@@ -232,7 +232,7 @@ sub volume_resize {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "volume snapshot is not possible on iscsi device\n";
}
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 1a992e8..72eb0cd 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -691,7 +691,7 @@ sub volume_size_info {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "lvm snapshot is not implemented";
}
diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index c244c91..e5df0b4 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -339,7 +339,7 @@ sub create_base {
# sub volume_resize {} reuse code from parent class
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
my $vg = $scfg->{vgname};
my $snapvol = "snap_${volname}_$snap";
diff --git a/src/PVE/Storage/PBSPlugin.pm b/src/PVE/Storage/PBSPlugin.pm
index 00170f5..45edc46 100644
--- a/src/PVE/Storage/PBSPlugin.pm
+++ b/src/PVE/Storage/PBSPlugin.pm
@@ -966,7 +966,7 @@ sub volume_resize {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "volume snapshot is not possible on pbs device";
}
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index ed310a2..da26c0c 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -1314,7 +1314,7 @@ sub volume_resize {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
index 88367ea..38d61e9 100644
--- a/src/PVE/Storage/RBDPlugin.pm
+++ b/src/PVE/Storage/RBDPlugin.pm
@@ -866,7 +866,7 @@ sub volume_resize {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
my ($vtype, $name, $vmid) = $class->parse_volname($volname);
diff --git a/src/PVE/Storage/ZFSPoolPlugin.pm b/src/PVE/Storage/ZFSPoolPlugin.pm
index 979cf2c..9cdfa68 100644
--- a/src/PVE/Storage/ZFSPoolPlugin.pm
+++ b/src/PVE/Storage/ZFSPoolPlugin.pm
@@ -480,7 +480,7 @@ sub volume_size_info {
}
sub volume_snapshot {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
my $vname = ($class->parse_volname($volname))[1];
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* [pve-devel] [PATCH pve-storage 08/13] storage: add rename_snapshot method
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (10 preceding siblings ...)
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 07/13] storage: volume_snapshot: add $running param Alexandre Derumier via pve-devel
@ 2025-07-09 16:21 ` Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot Alexandre Derumier via pve-devel
` (5 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:21 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 8545 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 08/13] storage: add rename_snapshot method
Date: Wed, 9 Jul 2025 18:21:57 +0200
Message-ID: <20250709162202.2952597-13-alexandre.derumier@groupe-cyllene.com>
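For context, a minimal usage sketch of the new entry point (the volid
and snapshot names are illustrative only):

    # rename the writable 'current' image to the snapshot name, as done
    # when taking an external snapshot of a qcow2 volume
    PVE::Storage::rename_snapshot(
        $storecfg, 'local:100/vm-100-disk-0.qcow2', 'current', 'snap1',
    );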
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
ApiChangeLog | 3 +++
src/PVE/Storage.pm | 25 +++++++++++++++++++++++++
src/PVE/Storage/BTRFSPlugin.pm | 6 ++++++
src/PVE/Storage/ESXiPlugin.pm | 6 ++++++
src/PVE/Storage/LVMPlugin.pm | 6 ++++++
src/PVE/Storage/LvmThinPlugin.pm | 6 ++++++
src/PVE/Storage/Plugin.pm | 16 ++++++++++++++++
src/PVE/Storage/RBDPlugin.pm | 6 ++++++
src/PVE/Storage/ZFSPoolPlugin.pm | 6 ++++++
9 files changed, 80 insertions(+)
diff --git a/ApiChangeLog b/ApiChangeLog
index 2a01e3f..12eef1f 100644
--- a/ApiChangeLog
+++ b/ApiChangeLog
@@ -26,6 +26,9 @@ Future changes should be documented in here.
The parameter *can* be used if some extra actions need to be done at the storage
layer when the snapshot has already been taken at the qemu level while the VM is running.
+* Introduce rename_snapshot() plugin method
+ This method allows renaming a VM disk snapshot to a different snapshot name.
+
## Version 11:
* Allow declaring storage features via plugin data
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 7f2da80..e0b79fa 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -2345,6 +2345,31 @@ sub rename_volume {
);
}
+sub rename_snapshot {
+ my ($cfg, $volid, $source_snap, $target_snap) = @_;
+
+ die "no volid provided\n" if !$volid;
+ die "no source or target snap provided\n" if !$source_snap && !$target_snap;
+
+ my ($storeid, $volname) = parse_volume_id($volid);
+
+ activate_storage($cfg, $storeid);
+
+ my $scfg = storage_config($cfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+ return $plugin->cluster_lock_storage(
+ $storeid,
+ $scfg->{shared},
+ undef,
+ sub {
+ return $plugin->rename_snapshot(
+ $scfg, $storeid, $volname, $source_snap, $target_snap,
+ );
+ },
+ );
+}
+
# Various io-heavy operations require io/bandwidth limits which can be
# configured on multiple levels: The global defaults in datacenter.cfg, and
# per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/src/PVE/Storage/BTRFSPlugin.pm b/src/PVE/Storage/BTRFSPlugin.pm
index 8c79ea4..26eef2b 100644
--- a/src/PVE/Storage/BTRFSPlugin.pm
+++ b/src/PVE/Storage/BTRFSPlugin.pm
@@ -995,6 +995,12 @@ sub rename_volume {
return "${storeid}:$target_volname";
}
+sub rename_snapshot {
+ my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
+
+ die "rename_snapshot is not supported for $class";
+}
+
sub get_import_metadata {
return PVE::Storage::DirPlugin::get_import_metadata(@_);
}
diff --git a/src/PVE/Storage/ESXiPlugin.pm b/src/PVE/Storage/ESXiPlugin.pm
index e655d7b..66ef289 100644
--- a/src/PVE/Storage/ESXiPlugin.pm
+++ b/src/PVE/Storage/ESXiPlugin.pm
@@ -497,6 +497,12 @@ sub rename_volume {
die "renaming volumes is not supported for $class\n";
}
+sub rename_snapshot {
+ my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
+
+ die "rename_snapshot is not supported for $class";
+}
+
sub volume_export_formats {
my ($class, $scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots) = @_;
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 72eb0cd..2441e59 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -855,4 +855,10 @@ sub rename_volume {
return "${storeid}:${target_volname}";
}
+sub rename_snapshot {
+ my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
+
+ die "rename_snapshot is not implemented for $class";
+}
+
1;
diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index e5df0b4..6bc76c9 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -468,4 +468,10 @@ sub volume_import_write {
);
}
+sub rename_snapshot {
+ my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
+
+ die "rename_snapshot is not supported for $class";
+}
+
1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index da26c0c..6b2dc32 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -2046,6 +2046,22 @@ sub rename_volume {
return "${storeid}:${base}${target_vmid}/${target_volname}";
}
+=pod
+
+=head3 rename_snapshot
+
+ $plugin->rename_snapshot($scfg, $storeid, $volname, $source_snap, $target_snap)
+
+Rename a volume source snapshot C<$source_snap> to a target snapshot C<$target_snap>.
+
+=cut
+
+sub rename_snapshot {
+ my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
+
+ die "rename_snapshot is not implemented for $class";
+}
+
my sub blockdev_options_nbd_tcp {
my ($host, $port, $export) = @_;
diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
index 38d61e9..ee33006 100644
--- a/src/PVE/Storage/RBDPlugin.pm
+++ b/src/PVE/Storage/RBDPlugin.pm
@@ -1055,4 +1055,10 @@ sub rename_volume {
return "${storeid}:${base_name}${target_volname}";
}
+sub rename_snapshot {
+ my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
+
+ die "rename_snapshot is not implemented for $class";
+}
+
1;
diff --git a/src/PVE/Storage/ZFSPoolPlugin.pm b/src/PVE/Storage/ZFSPoolPlugin.pm
index 9cdfa68..28d4795 100644
--- a/src/PVE/Storage/ZFSPoolPlugin.pm
+++ b/src/PVE/Storage/ZFSPoolPlugin.pm
@@ -895,4 +895,10 @@ sub rename_volume {
return "${storeid}:${base_name}${target_volname}";
}
+sub rename_snapshot {
+ my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
+
+ die "rename_snapshot is not supported for $class";
+}
+
1;
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (11 preceding siblings ...)
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 08/13] storage: add rename_snapshot method Alexandre Derumier via pve-devel
@ 2025-07-09 16:21 ` Alexandre Derumier via pve-devel
2025-07-15 11:33 ` Wolfgang Bumiller
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 10/13] plugin: fix volname parsing Alexandre Derumier via pve-devel
` (4 subsequent siblings)
17 siblings, 1 reply; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:21 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 7258 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
Date: Wed, 9 Jul 2025 18:21:58 +0200
Message-ID: <20250709162202.2952597-14-alexandre.derumier@groupe-cyllene.com>
Returns whether the volume supports qemu snapshots:
'internal' : do the snapshot with a qemu internal snapshot
'external' : do the snapshot with a qemu external snapshot
undef : qemu snapshots are not supported
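For illustration, a hedged sketch of how a caller could dispatch on the
returned value (the caller code is an assumption, not part of this patch):

    my $type = PVE::Storage::volume_support_qemu_snapshot($storecfg, $volid);
    if (!defined($type)) {
        die "qemu snapshots are not supported on '$volid'\n";
    } elsif ($type eq 'external') {
        # rename the current image and create a new overlay on top of it
    } else {
        # 'internal': use qcow2/rbd internal snapshots as before
    }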
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
ApiChangeLog | 8 ++++++++
src/PVE/Storage.pm | 15 +++++++++++++++
src/PVE/Storage/DirPlugin.pm | 10 ++++++++++
src/PVE/Storage/LVMPlugin.pm | 7 +++++++
src/PVE/Storage/Plugin.pm | 20 ++++++++++++++++++++
src/PVE/Storage/RBDPlugin.pm | 6 ++++++
6 files changed, 66 insertions(+)
diff --git a/ApiChangeLog b/ApiChangeLog
index 12eef1f..68e94fd 100644
--- a/ApiChangeLog
+++ b/ApiChangeLog
@@ -29,6 +29,14 @@ Future changes should be documented in here.
* Introduce rename_snapshot() plugin method
This method allows renaming a VM disk snapshot to a different snapshot name.
+* Introduce volume_support_qemu_snapshot() plugin method
+ This method is used to know whether a snapshot needs to be done by qemu
+ or by the storage API.
+ Returned values are:
+ 'internal' : supports snapshots via qemu internal snapshot
+ 'external' : supports snapshots via qemu external snapshot
+ undef : qemu snapshots are not supported
+
## Version 11:
* Allow declaring storage features via plugin data
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index e0b79fa..b796908 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -2370,6 +2370,21 @@ sub rename_snapshot {
);
}
+sub volume_support_qemu_snapshot {
+ my ($cfg, $volid) = @_;
+
+ my ($storeid, $volname) = parse_volume_id($volid, 1);
+
+ if ($storeid) {
+ my $scfg = storage_config($cfg, $storeid);
+
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+ return $plugin->volume_support_qemu_snapshot($storeid, $scfg, $volname);
+ }
+ return undef;
+}
+
# Various io-heavy operations require io/bandwidth limits which can be
# configured on multiple levels: The global defaults in datacenter.cfg, and
# per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 10e4f70..3e92383 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -314,4 +314,14 @@ sub get_import_metadata {
};
}
+sub volume_support_qemu_snapshot {
+ my ($class, $storeid, $scfg, $volname) = @_;
+
+ my $format = ($class->parse_volname($volname))[6];
+ return if $format ne 'qcow2';
+
+ my $type = $scfg->{'external-snapshots'} ? 'external' : 'internal';
+ return $type;
+}
+
1;
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 2441e59..be411e5 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -861,4 +861,11 @@ sub rename_snapshot {
die "rename_snapshot is not implemented for $class";
}
+sub volume_support_qemu_snapshot {
+ my ($class, $storeid, $scfg, $volname) = @_;
+
+ my $format = ($class->parse_volname($volname))[6];
+ return 'external' if $format eq 'qcow2';
+}
+
1;
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 6b2dc32..aab2024 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -2262,6 +2262,26 @@ sub new_backup_provider {
die "implement me if enabling the feature 'backup-provider' in plugindata()->{features}\n";
}
+=pod
+
+=head3 volume_support_qemu_snapshot
+
+ $type = $plugin->volume_support_qemu_snapshot($storeid, $scfg, $volname)
+
+Returns a string with the type of snapshot that qemu can do for a specific volume:
+
+'internal' : supports snapshots via qemu internal snapshot
+'external' : supports snapshots via qemu external snapshot
+undef : qemu snapshots are not supported
+=cut
+
+sub volume_support_qemu_snapshot {
+ my ($class, $storeid, $scfg, $volname) = @_;
+
+ my $format = ($class->parse_volname($volname))[6];
+ return 'internal' if $format eq 'qcow2';
+}
+
sub config_aware_base_mkdir {
my ($class, $scfg, $path) = @_;
diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
index ee33006..45f8a7f 100644
--- a/src/PVE/Storage/RBDPlugin.pm
+++ b/src/PVE/Storage/RBDPlugin.pm
@@ -1061,4 +1061,10 @@ sub rename_snapshot {
die "rename_snapshot is not implemented for $class";
}
+sub volume_support_qemu_snapshot {
+ my ($class, $storeid, $scfg, $volname) = @_;
+
+ return 'internal' if !$scfg->{krbd};
+}
+
1;
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* [pve-devel] [PATCH pve-storage 10/13] plugin: fix volname parsing
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (12 preceding siblings ...)
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot Alexandre Derumier via pve-devel
@ 2025-07-09 16:21 ` Alexandre Derumier via pve-devel
2025-07-09 16:22 ` [pve-devel] [PATCH pve-storage 11/13] qcow2: add external snapshot support Alexandre Derumier via pve-devel
` (3 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:21 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 3672 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 10/13] plugin: fix volname parsing
Date: Wed, 9 Jul 2025 18:21:59 +0200
Message-ID: <20250709162202.2952597-15-alexandre.derumier@groupe-cyllene.com>
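For context, a sketch of what the stricter parser now accepts (example
names are illustrative):

    parse_name_dir('vm-100-disk-0.qcow2');  # ('vm-100-disk-0.qcow2', 'qcow2', undef)
    parse_name_dir('base-100-disk-0.raw');  # ('base-100-disk-0.raw', 'raw', 'base-')
    parse_name_dir('mydisk.qcow2');         # warns that the name is unsupported, then dies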
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/Plugin.pm | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index aab2024..b65d296 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -811,8 +811,11 @@ sub cluster_lock_storage {
sub parse_name_dir {
my $name = shift;
- if ($name =~ m!^((base-)?[^/\s]+\.(raw|qcow2|vmdk|subvol))$!) {
- return ($1, $3, $2); # (name, format, isBase)
+ if ($name =~ m!^((vm-|base-|subvol-)(\d+)-[^/\s]+\.(raw|qcow2|vmdk|subvol))$!) {
+ my $isbase = $2 eq 'base-' ? $2 : undef;
+ return ($1, $4, $isbase); # (name, format, isBase)
+ } elsif ($name =~ m!^((base-)?[^/\s]+\.(raw|qcow2|vmdk|subvol))$!) {
+ warn "this volume filename is not supported anymore\n";
}
die "unable to parse volume filename '$name'\n";
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* [pve-devel] [PATCH pve-storage 11/13] qcow2: add external snapshot support
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (13 preceding siblings ...)
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 10/13] plugin: fix volname parsing Alexandre Derumier via pve-devel
@ 2025-07-09 16:22 ` Alexandre Derumier via pve-devel
2025-07-09 16:22 ` [pve-devel] [PATCH pve-storage 12/13] lvmplugin: add qcow2 snapshot Alexandre Derumier via pve-devel
` (2 subsequent siblings)
17 siblings, 0 replies; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:22 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 20599 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 11/13] qcow2: add external snapshot support
Date: Wed, 9 Jul 2025 18:22:00 +0200
Message-ID: <20250709162202.2952597-16-alexandre.derumier@groupe-cyllene.com>
add an 'external-snapshots' option to enable the feature
When a snapshot is taken, the current volume is renamed to the snapshot
volname and a new current image is created with the snapshot volume as backing file
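For illustration, taking a snapshot 'snap1' of vm-100-disk-0.qcow2
roughly corresponds to the following on-disk operations (paths are
illustrative; the exact qemu-img options live in
qemu_img_create_qcow2_backed):

    # rename_snapshot(): the current image becomes the snapshot
    #   vm-100-disk-0.qcow2 -> snap-snap1-vm-100-disk-0.qcow2
    # alloc_backed_image(): a new current image is created on top, roughly:
    #   qemu-img create -f qcow2 -b snap-snap1-vm-100-disk-0.qcow2 \
    #     -F qcow2 vm-100-disk-0.qcow2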
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage.pm | 1 -
src/PVE/Storage/CIFSPlugin.pm | 1 +
src/PVE/Storage/DirPlugin.pm | 1 +
src/PVE/Storage/NFSPlugin.pm | 1 +
src/PVE/Storage/Plugin.pm | 304 ++++++++++++++++++++++++++++++++--
5 files changed, 289 insertions(+), 19 deletions(-)
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index b796908..53965ee 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -479,7 +479,6 @@ sub volume_snapshot_rollback {
}
}
-# FIXME PVE 8.x remove $running parameter (needs APIAGE reset)
sub volume_snapshot_delete {
my ($cfg, $volid, $snap, $running) = @_;
diff --git a/src/PVE/Storage/CIFSPlugin.pm b/src/PVE/Storage/CIFSPlugin.pm
index c1441e9..a79f68d 100644
--- a/src/PVE/Storage/CIFSPlugin.pm
+++ b/src/PVE/Storage/CIFSPlugin.pm
@@ -168,6 +168,7 @@ sub options {
bwlimit => { optional => 1 },
preallocation => { optional => 1 },
options => { optional => 1 },
+ 'external-snapshots' => { optional => 1, fixed => 1 },
};
}
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index 3e92383..543aacb 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -95,6 +95,7 @@ sub options {
is_mountpoint => { optional => 1 },
bwlimit => { optional => 1 },
preallocation => { optional => 1 },
+ 'external-snapshots' => { optional => 1, fixed => 1 },
};
}
diff --git a/src/PVE/Storage/NFSPlugin.pm b/src/PVE/Storage/NFSPlugin.pm
index 65c5e11..849b46d 100644
--- a/src/PVE/Storage/NFSPlugin.pm
+++ b/src/PVE/Storage/NFSPlugin.pm
@@ -104,6 +104,7 @@ sub options {
'create-subdirs' => { optional => 1 },
bwlimit => { optional => 1 },
preallocation => { optional => 1 },
+ 'external-snapshots' => { optional => 1, fixed => 1 },
};
}
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index b65d296..0b7989b 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -232,6 +232,11 @@ my $defaultData = {
maximum => 65535,
optional => 1,
},
+ 'external-snapshots' => {
+ type => 'boolean',
+ description => 'Enable external snapshots.',
+ optional => 1,
+ },
},
};
@@ -695,17 +700,20 @@ sub qemu_img_create_qcow2_backed {
=head3 qemu_img_info
- qemu_img_info($filename, $file_format, $timeout)
+ qemu_img_info($filename, $file_format, $timeout, $follow_backing_files)
Returns a json with information about the qemu image C<$filename> in format C<$file_format>.
+If the C<$follow_backing_files> option is defined, returns a json with the whole
+chain of backing images.
=cut
sub qemu_img_info {
- my ($filename, $file_format, $timeout) = @_;
+ my ($filename, $file_format, $timeout, $follow_backing_files) = @_;
my $cmd = ['/usr/bin/qemu-img', 'info', '--output=json', $filename];
push $cmd->@*, '-f', $file_format if $file_format;
+ push $cmd->@*, '--backing-chain' if $follow_backing_files;
return PVE::Storage::Common::run_qemu_img_json($cmd, $timeout);
}
@@ -890,10 +898,22 @@ sub get_subdir {
return "$path/$subdir";
}
+my sub get_snap_name {
+ my ($class, $volname, $snapname) = @_;
+ die "missing snapname\n" if !$snapname;
+
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+ $name = $snapname eq 'current' ? $name : "snap-$snapname-$name";
+ return $name;
+}
+
sub filesystem_path {
my ($class, $scfg, $volname, $snapname) = @_;
my ($vtype, $name, $vmid, undef, undef, $isBase, $format) = $class->parse_volname($volname);
+ $name = get_snap_name($class, $volname, $snapname)
+ if $scfg->{'external-snapshots'} && $snapname;
# Note: qcow2/qed has internal snapshot, so path is always
# the same (with or without snapshot => same file).
@@ -1096,6 +1116,28 @@ sub alloc_image {
return "$vmid/$name";
}
+my sub alloc_backed_image {
+ my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
+
+ my $path = $class->path($scfg, $volname, $storeid);
+ my ($vmid, $backing_format) = ($class->parse_volname($volname))[2, 6];
+
+ my $backing_volname = get_snap_name($class, $volname, $backing_snap);
+ #qemu-img uses a path relative to the base image for the backing file by default
+ eval { qemu_img_create_qcow2_backed($scfg, $path, $backing_volname, $backing_format) };
+ if ($@) {
+ unlink $path;
+ die "$@";
+ }
+}
+
+my sub free_snap_image {
+ my ($class, $storeid, $scfg, $volname, $snap) = @_;
+
+ my $path = $class->path($scfg, $volname, $storeid, $snap);
+ unlink($path) || die "unlink '$path' failed - $!\n";
+}
+
sub free_image {
my ($class, $storeid, $scfg, $volname, $isBase, $format) = @_;
@@ -1118,7 +1160,25 @@ sub free_image {
return undef;
}
+ my $snapshots = undef;
+ if ($scfg->{'external-snapshots'}) {
+ $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ }
unlink($path) || die "unlink '$path' failed - $!\n";
+
+ #delete external snapshots
+ if ($scfg->{'external-snapshots'}) {
+ for my $snapid (
+ sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} }
+ keys %$snapshots
+ ) {
+ my $snap = $snapshots->{$snapid};
+ next if $snapid eq 'current';
+ next if !$snap->{ext};
+ eval { free_snap_image($class, $storeid, $scfg, $volname, $snapid); };
+ warn $@ if $@;
+ }
+ }
}
# try to cleanup directory to not clutter storage with empty $vmid dirs if
@@ -1319,13 +1379,37 @@ sub volume_resize {
sub volume_snapshot {
my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
- die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
+ if ($scfg->{'external-snapshots'}) {
- my $path = $class->filesystem_path($scfg, $volname);
+ die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2)$/;
- my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-c', $snap, $path];
+ my $vmid = ($class->parse_volname($volname))[2];
- run_command($cmd);
+ if (!$running) {
+ #rename volume unless qemu has already done it for us
+ $class->rename_snapshot($scfg, $storeid, $volname, 'current', $snap);
+ }
+ eval { alloc_backed_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ warn "$@ \n";
+ #if running, the revert is done by qemu with blockdev-reopen
+ if (!$running) {
+ eval { $class->rename_snapshot($scfg, $storeid, $volname, $snap, 'current'); };
+ warn $@ if $@;
+ }
+ die "can't allocate new volume $volname with $snap backing image\n";
+ }
+
+ } else {
+
+ die "can't snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
+
+ my $path = $class->filesystem_path($scfg, $volname);
+
+ my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-c', $snap, $path];
+
+ run_command($cmd);
+ }
return undef;
}
@@ -1336,6 +1420,35 @@ sub volume_snapshot {
sub volume_rollback_is_possible {
my ($class, $scfg, $storeid, $volname, $snap, $blockers) = @_;
+ return 1 if !$scfg->{'external-snapshots'};
+
+ #technically, we could manage multiple branches, but it would need a lot more work for snapshot delete:
+ #we would need to implement block-stream from the deleted snapshot to all other child branches;
+ #when online, we would need a transaction over multiple disks when deleting the last snapshot,
+ #to merge it into the currently running file
+
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ my $found;
+ $blockers //= []; # not guaranteed to be set by caller
+ for my $snapid (
+ sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} }
+ keys %$snapshots
+ ) {
+ next if $snapid eq 'current';
+
+ if ($snapid eq $snap) {
+ $found = 1;
+ } elsif ($found) {
+ push $blockers->@*, $snapid;
+ }
+ }
+
+ die "can't rollback, snapshot '$snap' does not exist on '$volname'\n"
+ if !$found;
+
+ die "can't rollback, '$snap' is not most recent snapshot on '$volname'\n"
+ if scalar($blockers->@*) > 0;
+
return 1;
}
@@ -1344,11 +1457,22 @@ sub volume_snapshot_rollback {
die "can't rollback snapshot this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
- my $path = $class->filesystem_path($scfg, $volname);
-
- my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-a', $snap, $path];
+ if ($scfg->{'external-snapshots'}) {
+ #simply delete the current image and recreate it with the snapshot as backing file
+ eval { free_snap_image($class, $storeid, $scfg, $volname, 'current') };
+ if ($@) {
+ die "can't delete old volume $volname: $@\n";
+ }
- run_command($cmd);
+ eval { alloc_backed_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ die "can't allocate new volume $volname: $@\n";
+ }
+ } else {
+ my $path = $class->filesystem_path($scfg, $volname);
+ my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-a', $snap, $path];
+ run_command($cmd);
+ }
return undef;
}
@@ -1358,15 +1482,83 @@ sub volume_snapshot_delete {
die "can't delete snapshot for this image format\n" if $volname !~ m/\.(qcow2|qed)$/;
- return 1 if $running;
+ my $cmd = "";
- my $path = $class->filesystem_path($scfg, $volname);
+ if ($scfg->{'external-snapshots'}) {
- $class->deactivate_volume($storeid, $scfg, $volname, $snap, {});
+ #qemu has already live-committed or streamed the snapshot, therefore we only have to drop the image itself
+ if ($running) {
+ eval { free_snap_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ die "can't delete snapshot $snap of volume $volname: $@\n";
+ }
+ return;
+ }
- my $cmd = ['/usr/bin/qemu-img', 'snapshot', '-d', $snap, $path];
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ my $snappath = $snapshots->{$snap}->{file};
+ my $snap_volname = $snapshots->{$snap}->{volname};
+ die "volume $snappath is missing" if !-e $snappath;
+
+ my $parentsnap = $snapshots->{$snap}->{parent};
+ my $childsnap = $snapshots->{$snap}->{child};
+ my $childpath = $snapshots->{$childsnap}->{file};
+
+ #if it is the first snapshot, which should be the biggest, we commit the child into it and rename the snapshot file to the child
+ if (!$parentsnap) {
+ print "$volname: deleting snapshot '$snap' by commiting snapshot '$childsnap'\n";
+ print "running 'qemu-img commit $childpath'\n";
+ $cmd = ['/usr/bin/qemu-img', 'commit', $childpath];
+ eval { run_command($cmd) };
+ if ($@) {
+ warn
+ "The state of $snap is now invalid. Don't try to clone or rollback it. You can only try to delete it again later\n";
+ die "error commiting $childsnap to $snap; $@\n";
+ }
- run_command($cmd);
+ print "rename $snappath to $childpath\n";
+ rename($snappath, $childpath)
+ || die "rename '$snappath' to '$childpath' failed - $!\n";
+
+ } else {
+ #we rebase the child image on the parent as new backing image
+ my $parentpath = $snapshots->{$parentsnap}->{file};
+ print
+ "$volname: deleting snapshot '$snap' by rebasing '$childsnap' on top of '$parentsnap'\n";
+ print "running 'qemu-img rebase -b $parentpath -F qcow -f qcow2 $childpath'\n";
+ $cmd = [
+ '/usr/bin/qemu-img',
+ 'rebase',
+ '-b',
+ $parentpath,
+ '-F',
+ 'qcow2',
+ '-f',
+ 'qcow2',
+ $childpath,
+ ];
+ eval { run_command($cmd) };
+ if ($@) {
+ #in case of abort, the state of the snap is still clean, just a little bit bigger
+ die "error rebase $childsnap from $parentsnap; $@\n";
+ }
+ #delete the old snapshot file (not part of the backing chain anymore)
+ eval { free_snap_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ die "error delete old snapshot volume $snap_volname: $@\n";
+ }
+ }
+
+ } else {
+
+ return 1 if $running;
+
+ my $path = $class->filesystem_path($scfg, $volname);
+ $class->deactivate_volume($storeid, $scfg, $volname, $snap, {});
+
+ $cmd = ['/usr/bin/qemu-img', 'snapshot', '-d', $snap, $path];
+ run_command($cmd);
+ }
return undef;
}
@@ -1639,6 +1831,27 @@ sub status {
return ($res->{total}, $res->{avail}, $res->{used}, 1);
}
+sub get_snap_volname {
+ my ($class, $volname, $snapname) = @_;
+
+ my $vmid = ($class->parse_volname($volname))[2];
+ my $name = get_snap_name($class, $volname, $snapname);
+ return "$vmid/$name";
+}
+
+#return snapshot name from a file path
+sub get_snapname_from_path {
+ my ($class, $volname, $path) = @_;
+
+ my $basepath = basename($path);
+ if ($basepath =~ m/^snap-(.*)-vm(.*)$/) {
+ return $1;
+ } elsif ($basepath eq basename($volname)) {
+ return 'current';
+ }
+ return undef;
+}
+
# Returns a hash with the snapshot names as keys and the following data:
# id - Unique id to distinguish different snapshots even if the have the same name.
# timestamp - Creation time of the snapshot (seconds since epoch).
@@ -1646,7 +1859,54 @@ sub status {
sub volume_snapshot_info {
my ($class, $scfg, $storeid, $volname) = @_;
- die "volume_snapshot_info is not implemented for $class";
+ my $path = $class->filesystem_path($scfg, $volname);
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ my $json = qemu_img_info($path, undef, 10, 1);
+ die "failed to query file information with qemu-img\n" if !$json;
+ my $json_decode = eval { decode_json($json) };
+ if ($@) {
+ die "Can't decode qemu snapshot list. Invalid JSON: $@\n";
+ }
+ my $info = {};
+ my $order = 0;
+ if (ref($json_decode) eq 'HASH') {
+ #internal snapshots: qemu-img info returns a hashref
+ my $snapshots = $json_decode->{snapshots};
+ for my $snap (@$snapshots) {
+ my $snapname = $snap->{name};
+ $info->{$snapname}->{order} = $snap->{id};
+ $info->{$snapname}->{timestamp} = $snap->{'date-sec'};
+
+ }
+ } elsif (ref($json_decode) eq 'ARRAY') {
+ #no snapshots or external snapshots: qemu-img info returns an arrayref
+ my $snapshots = $json_decode;
+ for my $snap (@$snapshots) {
+ my $snapfile = $snap->{filename};
+ my $snapname = $class->get_snapname_from_path($volname, $snapfile);
+ #not a proxmox snapshot
+ next if !$snapname;
+
+ my $snapvolname = $class->get_snap_volname($volname, $snapname);
+ $info->{$snapname}->{order} = $order;
+ $info->{$snapname}->{file} = $snapfile;
+ $info->{$snapname}->{volname} = "$snapvolname";
+ $info->{$snapname}->{volid} = "$storeid:$snapvolname";
+ $info->{$snapname}->{ext} = 1;
+
+ my $parentfile = $snap->{'backing-filename'};
+ if ($parentfile) {
+ my $parentname = $class->get_snapname_from_path($volname, $parentfile);
+ $info->{$snapname}->{parent} = $parentname;
+ $info->{$parentname}->{child} = $snapname;
+ }
+ $order++;
+ }
+ }
+
+ return $info;
}
sub activate_storage {
@@ -2062,7 +2322,14 @@ Rename a volume source snapshot C<$source_snap> to a target snapshot C<$target_s
sub rename_snapshot {
my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
- die "rename_snapshot is not implemented for $class";
+ my $source_snap_path = $class->filesystem_path($scfg, $volname, $source_snap);
+ my $target_snap_path = $class->filesystem_path($scfg, $volname, $target_snap);
+ print "rename $source_snap_path to $target_snap_path\n";
+
+ die "target snapshot '${target_snap}' already exists\n" if -e $target_snap_path;
+
+ rename($source_snap_path, $target_snap_path)
+ || die "rename '$source_snap_path' to '$target_snap_path' failed - $!\n";
}
my sub blockdev_options_nbd_tcp {
@@ -2170,7 +2437,8 @@ sub qemu_blockdev_options {
# the snapshot alone.
my $format = ($class->parse_volname($volname))[6];
die "cannot attach only the snapshot of a '$format' image\n"
- if $options->{'snapshot-name'} && ($format eq 'qcow2' || $format eq 'qed');
+ if $options->{'snapshot-name'}
+ && ($format eq 'qcow2' && !$scfg->{'external-snapshots'} || $format eq 'qed');
# The 'file' driver only works for regular files. The check below is taken from
# block/file-posix.c:hdev_probe_device() in QEMU. Do not bother with detecting 'host_cdrom'
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* [pve-devel] [PATCH pve-storage 12/13] lvmplugin: add qcow2 snapshot
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (14 preceding siblings ...)
2025-07-09 16:22 ` [pve-devel] [PATCH pve-storage 11/13] qcow2: add external snapshot support Alexandre Derumier via pve-devel
@ 2025-07-09 16:22 ` Alexandre Derumier via pve-devel
2025-07-09 16:22 ` [pve-devel] [PATCH pve-storage 13/13] tests: add lvmplugin test Alexandre Derumier via pve-devel
2025-07-16 15:15 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Thomas Lamprecht
17 siblings, 0 replies; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:22 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 28389 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 12/13] lvmplugin: add qcow2 snapshot
Date: Wed, 9 Jul 2025 18:22:01 +0200
Message-ID: <20250709162202.2952597-17-alexandre.derumier@groupe-cyllene.com>
we format the LVM logical volume with qcow2 to handle the snapshot chain.
like for qcow2 files, when a snapshot is taken, the current LVM volume
is renamed to the snapshot volname, and a new current LVM volume is created
with the snapshot volname as backing file
the snapshot volname is similar to lvmthin: snap_${volname}_${snapname}.qcow2
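For illustration, taking a snapshot 'snap1' of an LVM-backed qcow2
volume vm-100-disk-0.qcow2 roughly amounts to (volume group and names
are illustrative):

    # lvrename: the current LV becomes the snapshot
    #   vm-100-disk-0.qcow2 -> snap_vm-100-disk-0_snap1.qcow2
    # a new LV vm-100-disk-0.qcow2 is then created, sized via
    # 'qemu-img measure', and formatted with the snapshot as backing file:
    #   qemu-img create -f qcow2 -b /dev/vg/snap_vm-100-disk-0_snap1.qcow2 \
    #     -F qcow2 /dev/vg/vm-100-disk-0.qcow2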
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/PVE/Storage/LVMPlugin.pm | 558 +++++++++++++++++++++++++++++------
1 file changed, 466 insertions(+), 92 deletions(-)
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index be411e5..78f2015 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -3,6 +3,7 @@ package PVE::Storage::LVMPlugin;
use strict;
use warnings;
+use File::Basename;
use IO::File;
use PVE::Tools qw(run_command trim);
@@ -11,6 +12,8 @@ use PVE::JSONSchema qw(get_standard_option);
use PVE::Storage::Common;
+use JSON;
+
use base qw(PVE::Storage::Plugin);
# lvm helper functions
@@ -24,6 +27,15 @@ my $ignore_no_medium_warnings = sub {
}
};
+my sub fork_cleanup_worker {
+ my ($cleanup_worker) = @_;
+
+ return if !$cleanup_worker;
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+ $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
+}
+
sub lvm_pv_info {
my ($device) = @_;
@@ -267,6 +279,73 @@ sub lvm_list_volumes {
return $lvs;
}
+my sub free_lvm_volumes {
+ my ($class, $scfg, $storeid, $volnames) = @_;
+
+ my $vg = $scfg->{vgname};
+
+ # we need to zero out LVM data for security reasons
+ # and to allow thin provisioning
+ my $zero_out_worker = sub {
+ # wipe throughput up to 10MB/s by default; may be overwritten with saferemove_throughput
+ my $throughput = '-10485760';
+ if ($scfg->{saferemove_throughput}) {
+ $throughput = $scfg->{saferemove_throughput};
+ }
+ for my $name (@$volnames) {
+ print "zero-out data on image $name (/dev/$vg/del-$name)\n";
+
+ my $cmd = [
+ '/usr/bin/cstream',
+ '-i',
+ '/dev/zero',
+ '-o',
+ "/dev/$vg/del-$name",
+ '-T',
+ '10',
+ '-v',
+ '1',
+ '-b',
+ '1048576',
+ '-t',
+ "$throughput",
+ ];
+ eval {
+ run_command(
+ $cmd,
+ errmsg => "zero out finished (note: 'No space left on device' is ok here)",
+ );
+ };
+ warn $@ if $@;
+
+ $class->cluster_lock_storage(
+ $storeid,
+ $scfg->{shared},
+ undef,
+ sub {
+ my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$name"];
+ run_command($cmd, errmsg => "lvremove '$vg/del-$name' error");
+ },
+ );
+ print "successfully removed volume $name ($vg/del-$name)\n";
+ }
+ };
+
+ if ($scfg->{saferemove}) {
+ for my $name (@$volnames) {
+ # avoid long running task, so we only rename here
+ my $cmd = ['/sbin/lvrename', $vg, $name, "del-$name"];
+ run_command($cmd, errmsg => "lvrename '$vg/$name' error");
+ }
+ return $zero_out_worker;
+ } else {
+ for my $name (@$volnames) {
+ my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
+ run_command($cmd, errmsg => "lvremove '$vg/$name' error");
+ }
+ }
+}
+
# Configuration
sub type {
@@ -276,6 +355,7 @@ sub type {
sub plugindata {
return {
content => [{ images => 1, rootdir => 1 }, { images => 1 }],
+ format => [{ raw => 1, qcow2 => 1 }, 'raw'],
'sensitive-properties' => {},
};
}
@@ -354,26 +434,76 @@ sub parse_volname {
PVE::Storage::Plugin::parse_lvm_name($volname);
if ($volname =~ m/^(vm-(\d+)-\S+)$/) {
- return ('images', $1, $2, undef, undef, undef, 'raw');
+ my $name = $1;
+ my $vmid = $2;
+ my $format = $volname =~ m/\.qcow2$/ ? 'qcow2' : 'raw';
+ return ('images', $name, $vmid, undef, undef, undef, $format);
}
die "unable to parse lvm volume name '$volname'\n";
}
+#return snapshot name from a file path
+sub get_snapname_from_path {
+ my ($class, $volname, $path) = @_;
+
+ my $basepath = basename($path);
+ my $name = ($volname =~ s/\.[^.]+$//r);
+ if ($basepath =~ m/^snap_${name}_(.*)\.qcow2$/) {
+ return $1;
+ } elsif ($basepath eq $volname) {
+ return 'current';
+ }
+ return undef;
+}
+
+my sub get_snap_name {
+ my ($class, $volname, $snapname) = @_;
+
+ die "missing snapname\n" if !$snapname;
+
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+ if ($snapname eq 'current') {
+ return $name;
+ } else {
+ $name =~ s/\.[^.]+$//;
+ return "snap_${name}_${snapname}.qcow2";
+ }
+}
+
+sub get_snap_volname {
+ my ($class, $volname, $snapname) = @_;
+
+ return get_snap_name($class, $volname, $snapname);
+}
+
sub filesystem_path {
my ($class, $scfg, $volname, $snapname) = @_;
- die "lvm snapshot is not implemented" if defined($snapname);
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
- my ($vtype, $name, $vmid) = $class->parse_volname($volname);
+ die "snapshot is working with qcow2 format only" if defined($snapname) && $format ne 'qcow2';
my $vg = $scfg->{vgname};
+ $name = get_snap_name($class, $volname, $snapname) if $snapname;
my $path = "/dev/$vg/$name";
return wantarray ? ($path, $vmid, $vtype) : $path;
}
+sub qemu_blockdev_options {
+ my ($class, $scfg, $storeid, $volname, $machine_version, $options) = @_;
+
+ my ($path) = $class->path($scfg, $volname, $storeid, $options->{'snapshot-name'});
+
+ my $blockdev = { driver => 'host_device', filename => $path };
+
+ return $blockdev;
+}
+
sub create_base {
my ($class, $storeid, $scfg, $volname) = @_;
@@ -395,7 +525,11 @@ sub find_free_diskname {
my $disk_list = [keys %{ $lvs->{$vg} }];
- return PVE::Storage::Plugin::get_next_vm_diskname($disk_list, $storeid, $vmid, undef, $scfg);
+ $add_fmt_suffix = $fmt eq 'qcow2' ? 1 : undef;
+
+ return PVE::Storage::Plugin::get_next_vm_diskname(
+ $disk_list, $storeid, $vmid, $fmt, $scfg, $add_fmt_suffix,
+ );
}
sub lvcreate {
@@ -415,7 +549,12 @@ sub lvcreate {
}
sub lvrename {
- my ($vg, $oldname, $newname) = @_;
+ my ($scfg, $oldname, $newname) = @_;
+
+ my $vg = $scfg->{vgname};
+ my $lvs = lvm_list_volumes($vg);
+ die "target volume '${newname}' already exists\n"
+ if ($lvs->{$vg}->{$newname});
run_command(
['/sbin/lvrename', $vg, $oldname, $newname],
@@ -423,13 +562,46 @@ sub lvrename {
);
}
-sub alloc_image {
- my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
+my sub lvm_qcow2_format {
+ my ($class, $storeid, $scfg, $name, $fmt, $backing_snap, $size) = @_;
+
+ $class->activate_volume($storeid, $scfg, $name);
+ my $path = $class->path($scfg, $name, $storeid);
+
+ if ($backing_snap) {
+ my $backing_path = $class->path($scfg, $name, $storeid, $backing_snap);
+ PVE::Storage::Plugin::qemu_img_create_qcow2_backed($scfg, $path, $backing_path, $fmt);
+ } else {
+ PVE::Storage::Plugin::qemu_img_create($scfg, $fmt, $size, $path);
+ }
+}
+
+my sub calculate_lvm_size {
+ my ($size, $fmt, $backing_snap) = @_;
+ #input size = qcow2 image size in kb
+
+ return $size if $fmt ne 'qcow2';
+
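+ # note: extended_l2=on with a larger cluster_size is meant to reduce COW write amplification when a backing image is present (qcow2 subcluster allocation)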
+ my $options = $backing_snap ? ['extended_l2=on', 'cluster_size=128k'] : [];
+
+ my $json = PVE::Storage::Plugin::qemu_img_measure($size, $fmt, 5, $options);
+ die "failed to query file information with qemu-img measure\n" if !$json;
+ my $info = eval { decode_json($json) };
+ if ($@) {
+ die "Invalid JSON: $@\n";
+ }
+
+ die "Missing fully-allocated value from json" if !$info->{'fully-allocated'};
+
+ return $info->{'fully-allocated'} / 1024;
+}
- die "unsupported format '$fmt'" if $fmt ne 'raw';
+my sub alloc_lvm_image {
+ my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size, $backing_snap) = @_;
- die "illegal name '$name' - should be 'vm-$vmid-*'\n"
- if $name && $name !~ m/^vm-$vmid-/;
+ die "unsupported format '$fmt'" if $fmt ne 'raw' && $fmt ne 'qcow2';
+
+ $class->parse_volname($name);
my $vgs = lvm_vgs();
@@ -438,86 +610,99 @@ sub alloc_image {
die "no such volume group '$vg'\n" if !defined($vgs->{$vg});
my $free = int($vgs->{$vg}->{free});
+ my $lvmsize = calculate_lvm_size($size, $fmt, $backing_snap);
die "not enough free space ($free < $size)\n" if $free < $size;
- $name = $class->find_free_diskname($storeid, $scfg, $vmid)
+ my $tags = ["pve-vm-$vmid"];
+ # tag all snapshot volumes with the main volume's tag for easier activation of the whole group
+ push @$tags, "\@pve-$name" if $fmt eq 'qcow2';
+ lvcreate($vg, $name, $lvmsize, $tags);
+
+ return if $fmt ne 'qcow2';
+
+ # format the LVM volume as qcow2
+ eval { lvm_qcow2_format($class, $storeid, $scfg, $name, $fmt, $backing_snap, $size) };
+ if ($@) {
+ my $err = $@;
+ # no need for a safe cleanup as the volume is still empty
+ eval {
+ my $cmd = ['/sbin/lvremove', '-f', "$vg/$name"];
+ run_command($cmd, errmsg => "lvremove '$vg/$name' error");
+ };
+ die $err;
+ }
+
+}
+
+sub alloc_image {
+ my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
+
+ $name = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt)
if !$name;
- lvcreate($vg, $name, $size, ["pve-vm-$vmid"]);
+ alloc_lvm_image($class, $storeid, $scfg, $vmid, $fmt, $name, $size);
return $name;
}
-sub free_image {
- my ($class, $storeid, $scfg, $volname, $isBase) = @_;
+my sub alloc_snap_image {
+ my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
- my $vg = $scfg->{vgname};
+ my ($vmid, $format) = ($class->parse_volname($volname))[2, 6];
+ my $path = $class->path($scfg, $volname, $storeid, $backing_snap);
- # we need to zero out LVM data for security reasons
- # and to allow thin provisioning
+ # we need to use the same size as the backing image's qcow2 virtual-size
+ my $size = PVE::Storage::Plugin::file_size_info($path, 5, $format);
+ $size = $size / 1024; #we use kb in lvcreate
- my $zero_out_worker = sub {
- print "zero-out data on image $volname (/dev/$vg/del-$volname)\n";
-
- # wipe throughput up to 10MB/s by default; may be overwritten with saferemove_throughput
- my $throughput = '-10485760';
- if ($scfg->{saferemove_throughput}) {
- $throughput = $scfg->{saferemove_throughput};
- }
+ alloc_lvm_image($class, $storeid, $scfg, $vmid, $format, $volname, $size, $backing_snap);
+}
- my $cmd = [
- '/usr/bin/cstream',
- '-i',
- '/dev/zero',
- '-o',
- "/dev/$vg/del-$volname",
- '-T',
- '10',
- '-v',
- '1',
- '-b',
- '1048576',
- '-t',
- "$throughput",
- ];
- eval {
- run_command(
- $cmd,
- errmsg => "zero out finished (note: 'No space left on device' is ok here)",
- );
- };
- warn $@ if $@;
+my sub free_snap_image {
+ my ($class, $storeid, $scfg, $volname, $snap) = @_;
- $class->cluster_lock_storage(
- $storeid,
- $scfg->{shared},
- undef,
- sub {
- my $cmd = ['/sbin/lvremove', '-f', "$vg/del-$volname"];
- run_command($cmd, errmsg => "lvremove '$vg/del-$volname' error");
- },
- );
- print "successfully removed volume $volname ($vg/del-$volname)\n";
- };
+ #activate only the snapshot volume
+ my $path = $class->path($scfg, $volname, $storeid, $snap);
+ my $cmd = ['/sbin/lvchange', '-aly', $path];
+ run_command($cmd, errmsg => "can't activate LV '$path' to zero-out its data");
+ $cmd = ['/sbin/lvchange', '--refresh', $path];
+ run_command($cmd, errmsg => "can't refresh LV '$path' to zero-out its data");
- my $cmd = ['/sbin/lvchange', '-aly', "$vg/$volname"];
- run_command($cmd, errmsg => "can't activate LV '$vg/$volname' to zero-out its data");
- $cmd = ['/sbin/lvchange', '--refresh', "$vg/$volname"];
- run_command($cmd, errmsg => "can't refresh LV '$vg/$volname' to zero-out its data");
+ my $snap_volname = get_snap_name($class, $volname, $snap);
+ return free_lvm_volumes($class, $scfg, $storeid, [$snap_volname]);
+}
- if ($scfg->{saferemove}) {
- # avoid long running task, so we only rename here
- $cmd = ['/sbin/lvrename', $vg, $volname, "del-$volname"];
- run_command($cmd, errmsg => "lvrename '$vg/$volname' error");
- return $zero_out_worker;
- } else {
- my $tmpvg = $scfg->{vgname};
- $cmd = ['/sbin/lvremove', '-f', "$tmpvg/$volname"];
- run_command($cmd, errmsg => "lvremove '$tmpvg/$volname' error");
+sub free_image {
+ my ($class, $storeid, $scfg, $volname, $isBase, $format) = @_;
+
+ my $name = ($class->parse_volname($volname))[1];
+
+ my $volnames = [$volname];
+
+ if ($format eq 'qcow2') {
+ #activate volumes && snapshot volumes
+ my $path = $class->path($scfg, $volname, $storeid);
+ $path = "\@pve-$name" if $format && $format eq 'qcow2';
+ my $cmd = ['/sbin/lvchange', '-aly', $path];
+ run_command($cmd, errmsg => "can't activate LV '$path' to zero-out its data");
+ $cmd = ['/sbin/lvchange', '--refresh', $path];
+ run_command($cmd, errmsg => "can't refresh LV '$path' to zero-out its data");
+
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
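+ # collect all snapshot volumes so they get removed together with the main volume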
+ for my $snapid (
+ sort { $snapshots->{$a}->{order} <=> $snapshots->{$b}->{order} }
+ keys %$snapshots
+ ) {
+ my $snap = $snapshots->{$snapid};
+ next if $snapid eq 'current';
+ next if !$snap->{volid};
+ my ($snap_storeid, $snap_volname) = PVE::Storage::parse_volume_id($snap->{volid});
+ push @$volnames, $snap_volname;
+ }
}
- return undef;
+ return free_lvm_volumes($class, $scfg, $storeid, $volnames);
}
my $check_tags = sub {
@@ -624,6 +809,12 @@ sub activate_volume {
my $lvm_activate_mode = 'ey';
+ #activate volume && all snapshots volumes by tag
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ $path = "\@pve-$name" if $format eq 'qcow2';
+
my $cmd = ['/sbin/lvchange', "-a$lvm_activate_mode", $path];
run_command($cmd, errmsg => "can't activate LV '$path'");
$cmd = ['/sbin/lvchange', '--refresh', $path];
@@ -636,6 +827,10 @@ sub deactivate_volume {
my $path = $class->path($scfg, $volname, $storeid, $snapname);
return if !-b $path;
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+ $path = "\@pve-$name" if $format eq 'qcow2';
+
my $cmd = ['/sbin/lvchange', '-aln', $path];
run_command($cmd, errmsg => "can't deactivate LV '$path'");
}
@@ -643,10 +838,14 @@ sub deactivate_volume {
sub volume_resize {
my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
- $size = ($size / 1024 / 1024) . "M";
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ my $lvmsize = calculate_lvm_size($size / 1024, $format);
+ $lvmsize = "${lvmsize}k";
my $path = $class->path($scfg, $volname);
- my $cmd = ['/sbin/lvextend', '-L', $size, $path];
+ my $cmd = ['/sbin/lvextend', '-L', $lvmsize, $path];
$class->cluster_lock_storage(
$storeid,
@@ -657,6 +856,10 @@ sub volume_resize {
},
);
+ if (!$running && $format eq 'qcow2') {
+ PVE::Storage::Plugin::qemu_img_resize($scfg, $path, $format, $size, 10);
+ }
+
return 1;
}
@@ -693,30 +896,205 @@ sub volume_size_info {
sub volume_snapshot {
my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
- die "lvm snapshot is not implemented";
+ my ($vmid, $format) = ($class->parse_volname($volname))[2, 6];
+
+ die "can't snapshot '$format' volume\n" if $format ne 'qcow2';
+
+ if ($running) {
+ # when running, the rename is done at the qemu level with blockdev-reopen
+ eval { alloc_snap_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ die "can't allocate new volume $volname: $@\n";
+ }
+ return;
+ }
+
+ #rename current volume to snap volume
+ eval { $class->rename_snapshot($scfg, $storeid, $volname, 'current', $snap) };
+ die "error rename $volname to $snap\n" if $@;
+
+ eval { alloc_snap_image($class, $storeid, $scfg, $volname, $snap) };
+ if ($@) {
+ my $err = $@;
+ eval { $class->rename_snapshot($scfg, $storeid, $volname, $snap, 'current') };
+ die $err;
+ }
+}
+
+# Asserts that a rollback to $snap on $volname is possible.
+# If certain snapshots are preventing the rollback and $blockers is an array
+# reference, the snapshot names can be pushed onto $blockers prior to dying.
+sub volume_rollback_is_possible {
+ my ($class, $scfg, $storeid, $volname, $snap, $blockers) = @_;
+
+ my $format = ($class->parse_volname($volname))[6];
+ die "can't rollback snapshot for '$format' volume\n" if $format ne 'qcow2';
+
+ $class->activate_volume($storeid, $scfg, $volname);
+
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ my $found;
+ $blockers //= []; # not guaranteed to be set by caller
+ for my $snapid (
+ sort { $snapshots->{$b}->{order} <=> $snapshots->{$a}->{order} }
+ keys %$snapshots
+ ) {
+ next if $snapid eq 'current';
+
+ if ($snapid eq $snap) {
+ $found = 1;
+ } elsif ($found) {
+ push $blockers->@*, $snapid;
+ }
+ }
+
+ die "can't rollback, snapshot '$snap' does not exist on '$volname'\n"
+ if !$found;
+
+ die "can't rollback, '$snap' is not most recent snapshot on '$volname'\n"
+ if scalar($blockers->@*) > 0;
+
+ return 1;
}
sub volume_snapshot_rollback {
my ($class, $scfg, $storeid, $volname, $snap) = @_;
- die "lvm snapshot rollback is not implemented";
+ my $format = ($class->parse_volname($volname))[6];
+
+ die "can't rollback snapshot for '$format' volume\n" if $format ne 'qcow2';
+
+ my $cleanup_worker = eval { free_snap_image($class, $storeid, $scfg, $volname, 'current'); };
+ die "error deleting snapshot $snap $@\n" if $@;
+
+ eval { alloc_snap_image($class, $storeid, $scfg, $volname, $snap) };
+
+ fork_cleanup_worker($cleanup_worker);
+
+ if ($@) {
+ die "can't allocate new volume $volname: $@\n";
+ }
+
+ return undef;
}
sub volume_snapshot_delete {
- my ($class, $scfg, $storeid, $volname, $snap) = @_;
+ my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;
- die "lvm snapshot delete is not implemented";
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ die "can't delete snapshot for '$format' volume\n" if $format ne 'qcow2';
+
+ if ($running) {
+ my $cleanup_worker = eval { free_snap_image($class, $storeid, $scfg, $volname, $snap); };
+ die "error deleting snapshot $snap $@\n" if $@;
+ fork_cleanup_worker($cleanup_worker);
+ return;
+ }
+
+ my $cmd = "";
+ my $path = $class->filesystem_path($scfg, $volname);
+
+ my $snapshots = $class->volume_snapshot_info($scfg, $storeid, $volname);
+ my $snappath = $snapshots->{$snap}->{file};
+ my $snapvolname = $snapshots->{$snap}->{volname};
+ die "volume $snappath is missing" if !-e $snappath;
+
+ my $parentsnap = $snapshots->{$snap}->{parent};
+
+ my $childsnap = $snapshots->{$snap}->{child};
+ my $childpath = $snapshots->{$childsnap}->{file};
+ my $childvolname = $snapshots->{$childsnap}->{volname};
+
+ my $err = undef;
+ # if deleting the first snapshot: as it should be the biggest image, we commit the child into it and rename the snapshot volume to the child's name
+ if (!$parentsnap) {
+ print "$volname: deleting snapshot '$snap' by commiting snapshot '$childsnap'\n";
+ print "running 'qemu-img commit $childpath'\n";
+ #can't use -d here, as it's an lvm volume
+ $cmd = ['/usr/bin/qemu-img', 'commit', $childpath];
+ eval { run_command($cmd) };
+ if ($@) {
+ warn
+ "The state of $snap is now invalid. Don't try to clone or rollback it. You can only try to delete it again later\n";
+ die "error commiting $childsnap to $snap; $@\n";
+ }
+ print "delete $childvolname\n";
+ my $cleanup_worker =
+ eval { free_snap_image($class, $storeid, $scfg, $volname, $childsnap) };
+ if ($@) {
+ die "error delete old snapshot volume $childvolname: $@\n";
+ }
+
+ print "rename $snapvolname to $childvolname\n";
+ eval { lvrename($scfg, $snapvolname, $childvolname) };
+ if ($@) {
+ warn $@;
+ $err = "error renaming snapshot: $@\n";
+ }
+ fork_cleanup_worker($cleanup_worker);
+
+ } else {
+ # rebase the child image onto the parent as its new backing image
+ my $parentpath = $snapshots->{$parentsnap}->{file};
+ print
+ "$volname: deleting snapshot '$snap' by rebasing '$childsnap' on top of '$parentsnap'\n";
+ print "running 'qemu-img rebase -b $parentpath -F qcow -f qcow2 $childpath'\n";
+ $cmd = [
+ '/usr/bin/qemu-img',
+ 'rebase',
+ '-b',
+ $parentpath,
+ '-F',
+ 'qcow2',
+ '-f',
+ 'qcow2',
+ $childpath,
+ ];
+ eval { run_command($cmd) };
+ if ($@) {
+ #in case of abort, the state of the snap is still clean, just a little bit bigger
+ die "error rebase $childsnap from $parentsnap; $@\n";
+ }
+ #delete the snapshot
+ my $cleanup_worker = eval { free_snap_image($class, $storeid, $scfg, $volname, $snap); };
+ if ($@) {
+ die "error deleting old snapshot volume $snapvolname\n";
+ }
+ fork_cleanup_worker($cleanup_worker);
+ }
+
+ die $err if $err;
}
sub volume_has_feature {
my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
my $features = {
- copy => { base => 1, current => 1 },
- rename => { current => 1 },
+ copy => {
+ base => { qcow2 => 1, raw => 1 },
+ current => { qcow2 => 1, raw => 1 },
+ snap => { qcow2 => 1 },
+ },
+ 'rename' => {
+ current => { qcow2 => 1, raw => 1 },
+ },
+ snapshot => {
+ current => { qcow2 => 1 },
+ snap => { qcow2 => 1 },
+ },
+ # fixme: add later ? (we need to handle basepath, volume activation,...)
+ # template => {
+ # current => { raw => 1, qcow2 => 1},
+ # },
+ # clone => {
+ # base => { qcow2 => 1 },
+ # },
};
- my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
my $key = undef;
if ($snapname) {
@@ -724,7 +1102,7 @@ sub volume_has_feature {
} else {
$key = $isBase ? 'base' : 'current';
}
- return 1 if $features->{$feature}->{$key};
+ return 1 if defined($features->{$feature}->{$key}->{$format});
return undef;
}
@@ -818,14 +1196,7 @@ sub volume_import {
if (my $err = $@) {
my $cleanup_worker = eval { $class->free_image($storeid, $scfg, $volname, 0) };
warn $@ if $@;
-
- if ($cleanup_worker) {
- my $rpcenv = PVE::RPCEnvironment::get();
- my $authuser = $rpcenv->get_user();
-
- $rpcenv->fork_worker('imgdel', undef, $authuser, $cleanup_worker);
- }
-
+ fork_cleanup_worker($cleanup_worker);
die $err;
}
@@ -851,14 +1222,17 @@ sub rename_volume {
die "target volume '${target_volname}' already exists\n"
if ($lvs->{$vg}->{$target_volname});
- lvrename($vg, $source_volname, $target_volname);
+ lvrename($scfg, $source_volname, $target_volname);
return "${storeid}:${target_volname}";
}
sub rename_snapshot {
my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;
- die "rename_snapshot is not implemented for $class";
+ my $source_snap_volname = get_snap_name($class, $volname, $source_snap);
+ my $target_snap_volname = get_snap_name($class, $volname, $target_snap);
+
+ lvrename($scfg, $source_snap_volname, $target_snap_volname);
}
sub volume_support_qemu_snapshot {
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* [pve-devel] [PATCH pve-storage 13/13] tests: add lvmplugin test
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (15 preceding siblings ...)
2025-07-09 16:22 ` [pve-devel] [PATCH pve-storage 12/13] lvmplugin: add qcow2 snapshot Alexandre Derumier via pve-devel
@ 2025-07-09 16:22 ` Alexandre Derumier via pve-devel
2025-07-16 15:15 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Thomas Lamprecht
17 siblings, 0 replies; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:22 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
[-- Attachment #1: Type: message/rfc822, Size: 16650 bytes --]
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH pve-storage 13/13] tests: add lvmplugin test
Date: Wed, 9 Jul 2025 18:22:02 +0200
Message-ID: <20250709162202.2952597-18-alexandre.derumier@groupe-cyllene.com>
use the same template as the zfspoolplugin tests
Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
---
src/test/Makefile | 5 +-
src/test/run_test_lvmplugin.pl | 577 +++++++++++++++++++++++++++++++++
2 files changed, 581 insertions(+), 1 deletion(-)
create mode 100755 src/test/run_test_lvmplugin.pl
diff --git a/src/test/Makefile b/src/test/Makefile
index 12991da..9186303 100644
--- a/src/test/Makefile
+++ b/src/test/Makefile
@@ -1,10 +1,13 @@
all: test
-test: test_zfspoolplugin test_disklist test_bwlimit test_plugin test_ovf
+test: test_zfspoolplugin test_lvmplugin test_disklist test_bwlimit test_plugin test_ovf
test_zfspoolplugin: run_test_zfspoolplugin.pl
./run_test_zfspoolplugin.pl
+test_lvmplugin: run_test_lvmplugin.pl
+ ./run_test_lvmplugin.pl
+
test_disklist: run_disk_tests.pl
./run_disk_tests.pl
diff --git a/src/test/run_test_lvmplugin.pl b/src/test/run_test_lvmplugin.pl
new file mode 100755
index 0000000..e87a3de
--- /dev/null
+++ b/src/test/run_test_lvmplugin.pl
@@ -0,0 +1,577 @@
+#!/usr/bin/perl
+
+use lib '..';
+
+use strict;
+use warnings;
+
+use Data::Dumper qw(Dumper);
+use PVE::Storage;
+use PVE::Cluster;
+use PVE::Tools qw(run_command);
+use Cwd;
+$Data::Dumper::Sortkeys = 1;
+
+my $verbose = undef;
+
+my $storagename = "lvmregression";
+my $vgname = 'regressiontest';
+
+#volsize in GB
+my $volsize = 1;
+my $vmdisk = "vm-102-disk-1";
+
+my $tests = {};
+
+my $cfg = undef;
+my $count = 0;
+my $testnum = 12;
+my $end_test = $testnum;
+my $start_test = 1;
+
+if (@ARGV == 2) {
+ $end_test = $ARGV[1];
+ $start_test = $ARGV[0];
+} elsif (@ARGV == 1) {
+ $start_test = $ARGV[0];
+ $end_test = $ARGV[0];
+}
+
+my $test12 = sub {
+
+ print "\nrun test12 \"path\"\n";
+
+ my @res;
+ my $fail = 0;
+ eval {
+ @res = PVE::Storage::path($cfg, "$storagename:$vmdisk");
+ if ($res[0] ne "\/dev\/regressiontest\/$vmdisk") {
+ $count++;
+ $fail = 1;
+ warn
+ "Test 12 a: path is not correct: expected \'\/dev\/regressiontest\/$vmdisk'\ get \'$res[0]\'";
+ }
+ if ($res[1] ne "102") {
+ if (!$fail) {
+ $count++;
+ $fail = 1;
+ }
+ warn "Test 12 a: owner is not correct: expected \'102\' get \'$res[1]\'";
+ }
+ if ($res[2] ne "images") {
+ if (!$fail) {
+ $count++;
+ $fail = 1;
+ }
+ warn "Test 12 a: owner is not correct: expected \'images\' get \'$res[2]\'";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test 12 a: $@";
+ }
+
+};
+$tests->{12} = $test12;
+
+my $test11 = sub {
+
+ print "\nrun test11 \"deactivate_storage\"\n";
+
+ eval {
+ PVE::Storage::activate_storage($cfg, $storagename);
+ PVE::Storage::deactivate_storage($cfg, $storagename);
+ };
+ if ($@) {
+ $count++;
+ warn "Test 11 a: $@";
+ }
+};
+$tests->{11} = $test11;
+
+my $test10 = sub {
+
+ print "\nrun test10 \"activate_storage\"\n";
+
+ eval { PVE::Storage::activate_storage($cfg, $storagename); };
+ if ($@) {
+ $count++;
+ warn "Test 10 a: $@";
+ }
+};
+$tests->{10} = $test10;
+
+my $test9 = sub {
+
+ print "\nrun test15 \"template_list and vdisk_list\"\n";
+
+ my $hash = Dumper {};
+
+ my $res = Dumper PVE::Storage::template_list($cfg, $storagename, "vztmpl");
+ if ($hash ne $res) {
+ $count++;
+ warn "Test 9 a failed\n";
+ }
+ $res = undef;
+
+ $res = Dumper PVE::Storage::template_list($cfg, $storagename, "iso");
+ if ($hash ne $res) {
+ $count++;
+ warn "Test 9 b failed\n";
+ }
+ $res = undef;
+
+ $res = Dumper PVE::Storage::template_list($cfg, $storagename, "backup");
+ if ($hash ne $res) {
+ $count++;
+ warn "Test 9 c failed\n";
+ }
+
+};
+$tests->{9} = $test9;
+
+my $test8 = sub {
+
+ print "\nrun test8 \"vdisk_free\"\n";
+
+ eval {
+ PVE::Storage::vdisk_free($cfg, "$storagename:$vmdisk");
+
+ eval {
+ run_command("lvs $vgname/$vmdisk", outfunc => sub { }, errfunc => sub { });
+ };
+ if (!$@) {
+ $count++;
+ warn "Test8 a: vdisk still exists\n";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test8 a: $@";
+ }
+
+};
+$tests->{8} = $test8;
+
+my $test7 = sub {
+
+ print "\nrun test7 \"vdisk_alloc\"\n";
+
+ eval {
+ my $tmp_volid =
+ PVE::Storage::vdisk_alloc($cfg, $storagename, "112", "raw", undef, 1024 * 1024);
+
+ if ($tmp_volid ne "$storagename:vm-112-disk-0") {
+ die "volname:$tmp_volid don't match\n";
+ }
+ eval {
+ run_command(
+ "lvs --noheadings -o lv_size $vgname/vm-112-disk-0",
+ outfunc => sub {
+ my $tmp = shift;
+ if ($tmp !~ m/1\.00g/) {
+ die "size don't match\n";
+ }
+ },
+ );
+ };
+ if ($@) {
+ $count++;
+ warn "Test7 a: $@";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test7 a: $@";
+ }
+
+ eval {
+ my $tmp_volid =
+ PVE::Storage::vdisk_alloc($cfg, $storagename, "112", "raw", undef, 2048 * 1024);
+
+ if ($tmp_volid ne "$storagename:vm-112-disk-1") {
+ die "volname:$tmp_volid don't match\n";
+ }
+ eval {
+ run_command(
+ "lvs --noheadings -o lv_size $vgname/vm-112-disk-1",
+ outfunc => sub {
+ my $tmp = shift;
+ if ($tmp !~ m/2\.00g/) {
+ die "size don't match\n";
+ }
+ },
+ );
+ };
+ if ($@) {
+ $count++;
+ warn "Test7 b: $@";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test7 b: $@";
+ }
+
+};
+$tests->{7} = $test7;
+
+my $test6 = sub {
+
+ print "\nrun test6 \"parse_volume_id\"\n";
+
+ eval {
+ my ($store, $disk) = PVE::Storage::parse_volume_id("$storagename:$vmdisk");
+
+ if ($store ne $storagename || $disk ne $vmdisk) {
+ $count++;
+ warn "Test6 a: parsing wrong";
+ }
+
+ };
+ if ($@) {
+ $count++;
+ warn "Test6 a: $@";
+ }
+
+};
+$tests->{6} = $test6;
+
+my $test5 = sub {
+
+ print "\nrun test5 \"parse_volname\"\n";
+
+ eval {
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ PVE::Storage::parse_volname($cfg, "$storagename:$vmdisk");
+
+ if (
+ $vtype ne 'images'
+ || $vmid ne '102'
+ || $name ne $vmdisk
+ || defined($basename)
+ || defined($basevmid)
+ || $isBase
+ || $format ne 'raw'
+ ) {
+ $count++;
+ warn "Test5 a: parsing wrong";
+ }
+
+ };
+ if ($@) {
+ $count++;
+ warn "Test5 a: $@";
+ }
+
+};
+$tests->{5} = $test5;
+
+my $test4 = sub {
+
+ print "\nrun test4 \"volume_rollback_is_possible\"\n";
+
+ eval {
+ my $blockers = [];
+ my $res = undef;
+ eval {
+ $res = PVE::Storage::volume_rollback_is_possible(
+ $cfg, "$storagename:$vmdisk", 'snap1', $blockers,
+ );
+ };
+ if (!$@) {
+ $count++;
+ warn "Test4 a: Rollback shouldn't be possible";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test4 a: $@";
+ }
+
+};
+$tests->{4} = $test4;
+
+my $test3 = sub {
+
+ print "\nrun test3 \"volume_has_feature\"\n";
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'snapshot', "$storagename:$vmdisk", undef, 0,
+ )) {
+ $count++;
+ warn "Test3 a failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 a: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature($cfg, 'clone', "$storagename:$vmdisk", undef, 0)) {
+ $count++;
+ warn "Test3 g failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 g: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'template', "$storagename:$vmdisk", undef, 0,
+ )) {
+ $count++;
+ warn "Test3 l failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 l: $@";
+ }
+
+ eval {
+ if (!PVE::Storage::volume_has_feature($cfg, 'copy', "$storagename:$vmdisk", undef, 0)) {
+ $count++;
+ warn "Test3 r failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 r: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'sparseinit', "$storagename:$vmdisk", undef, 0,
+ )) {
+ $count++;
+ warn "Test3 x failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 x: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'snapshot', "$storagename:$vmdisk", 'test', 0,
+ )) {
+ $count++;
+ warn "Test3 a1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 a1: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature($cfg, 'clone', "$storagename:$vmdisk", 'test', 0)) {
+ $count++;
+ warn "Test3 g1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 g1: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'template', "$storagename:$vmdisk", 'test', 0,
+ )) {
+ $count++;
+ warn "Test3 l1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 l1: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature($cfg, 'copy', "$storagename:$vmdisk", 'test', 0)) {
+ $count++;
+ warn "Test3 r1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 r1: $@";
+ }
+
+ eval {
+ if (PVE::Storage::volume_has_feature(
+ $cfg, 'sparseinit', "$storagename:$vmdisk", 'test', 0,
+ )) {
+ $count++;
+ warn "Test3 x1 failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test3 x1: $@";
+ }
+
+};
+$tests->{3} = $test3;
+
+my $test2 = sub {
+
+ print "\nrun test2 \"volume_resize\"\n";
+ my $newsize = ($volsize + 1) * 1024 * 1024 * 1024;
+
+ eval {
+ eval { PVE::Storage::volume_resize($cfg, "$storagename:$vmdisk", $newsize, 0); };
+ if ($@) {
+ $count++;
+ warn "Test2 a failed";
+ }
+ if ($newsize != PVE::Storage::volume_size_info($cfg, "$storagename:$vmdisk")) {
+ $count++;
+ warn "Test2 a failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test2 a: $@";
+ }
+
+};
+$tests->{2} = $test2;
+
+my $test1 = sub {
+
+ print "\nrun test1 \"volume_size_info\"\n";
+ my $size = ($volsize * 1024 * 1024 * 1024);
+
+ eval {
+ if ($size != PVE::Storage::volume_size_info($cfg, "$storagename:$vmdisk")) {
+ $count++;
+ warn "Test1 a failed";
+ }
+ };
+ if ($@) {
+ $count++;
+ warn "Test1 a : $@";
+ }
+
+};
+$tests->{1} = $test1;
+
+sub setup_lvm_volumes {
+ eval { run_command("vgcreate $vgname /dev/loop1"); };
+
+ print "create lvm volume $vmdisk\n" if $verbose;
+ run_command("lvcreate -L${volsize}G -n $vmdisk $vgname");
+
+ my $vollist = [
+ "$storagename:$vmdisk",
+ ];
+
+ PVE::Storage::activate_volumes($cfg, $vollist);
+}
+
+sub cleanup_lvm_volumes {
+
+ print "destroy $vgname\n" if $verbose;
+ eval { run_command("vgremove $vgname -y"); };
+ if ($@) {
+ print "cleanup failed: $@\nretrying once\n" if $verbose;
+ eval { run_command("vgremove $vgname -y"); };
+ if ($@) {
+ clean_up_lvm();
+ setup_lvm();
+ }
+ }
+}
+
+sub setup_lvm {
+
+ unlink 'lvm.img';
+ eval { run_command("dd if=/dev/zero of=lvm.img bs=1M count=8000"); };
+ if ($@) {
+ clean_up_lvm();
+ }
+ my $pwd = cwd();
+ eval { run_command("losetup /dev/loop1 $pwd\/lvm.img"); };
+ if ($@) {
+ clean_up_lvm();
+ }
+ eval { run_command("pvcreate /dev/loop1"); };
+ if ($@) {
+ clean_up_lvm();
+ }
+}
+
+sub clean_up_lvm {
+
+ eval { run_command("pvremove /dev/loop1 -ff -y"); };
+ if ($@) {
+ warn $@;
+ }
+ eval { run_command("losetup -d /dev/loop1"); };
+ if ($@) {
+ warn $@;
+ }
+
+ unlink 'lvm.img';
+}
+
+sub volume_is_base {
+ my ($cfg, $volid) = @_;
+
+ my (undef, undef, undef, undef, undef, $isBase, undef) =
+ PVE::Storage::parse_volname($cfg, $volid);
+
+ return $isBase;
+}
+
+if ($> != 0) { #EUID
+ warn "not root, skipping lvm tests\n";
+ exit 0;
+}
+
+my $time = time;
+print "Start tests for LVMPlugin\n";
+
+$cfg = {
+ 'ids' => {
+ $storagename => {
+ 'content' => {
+ 'images' => 1,
+ 'rootdir' => 1,
+ },
+ 'vgname' => $vgname,
+ 'type' => 'lvm',
+ },
+ },
+ 'order' => { 'lvmregression' => 1 },
+};
+
+setup_lvm();
+for (my $i = $start_test; $i <= $end_test; $i++) {
+ setup_lvm_volumes();
+
+ eval { $tests->{$i}(); };
+ if (my $err = $@) {
+ warn $err;
+ $count++;
+ }
+ cleanup_lvm_volumes();
+
+}
+clean_up_lvm();
+
+$time = time - $time;
+
+print "Stop tests for LVMPlugin\n";
+print "$count tests failed\n";
+print "Time: ${time}s\n";
+
+exit -1 if $count > 0;
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH qemu-server 2/4] blockdev: add backing_chain support
2025-07-09 16:21 ` [pve-devel] [PATCH qemu-server 2/4] blockdev: add backing_chain support Alexandre Derumier via pve-devel
@ 2025-07-15 9:02 ` Wolfgang Bumiller
0 siblings, 0 replies; 49+ messages in thread
From: Wolfgang Bumiller @ 2025-07-15 9:02 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed, Jul 09, 2025 at 06:21:48PM +0200, Alexandre Derumier via pve-devel wrote:
> From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> To: pve-devel@lists.proxmox.com
> Subject: [PATCH qemu-server 2/4] blockdev: add backing_chain support
> Date: Wed, 9 Jul 2025 18:21:48 +0200
> Message-Id: <20250709162202.2952597-4-alexandre.derumier@groupe-cyllene.com>
> X-Mailer: git-send-email 2.39.5
>
> We need to define node-names for all backing chain images,
> to be able to live-rename them with blockdev-reopen.
>
> For linked clones, we don't need to define the base image(s) chain.
> They are auto added with #block nodename.
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/QemuServer/Blockdev.pm | 49 +++++++++++++++++++
> src/test/cfg2cmd/simple-backingchain.conf | 25 ++++++++++
> src/test/cfg2cmd/simple-backingchain.conf.cmd | 33 +++++++++++++
> src/test/run_config2command_tests.pl | 47 ++++++++++++++++++
> 4 files changed, 154 insertions(+)
> create mode 100644 src/test/cfg2cmd/simple-backingchain.conf
> create mode 100644 src/test/cfg2cmd/simple-backingchain.conf.cmd
>
> diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
> index 5f1fdae3..2a0513fb 100644
> --- a/src/PVE/QemuServer/Blockdev.pm
> +++ b/src/PVE/QemuServer/Blockdev.pm
> @@ -360,6 +360,46 @@ my sub generate_format_blockdev {
> return $blockdev;
> }
>
> +my sub generate_backing_blockdev;
> +
> +sub generate_backing_blockdev {
FYI: You can just use `my sub` once and recurse with `__SUB__->(...)`
instead of using its name.
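For example, a minimal standalone sketch (hypothetical, not from this
patch; bare `my sub` works since perl 5.26, and `__SUB__` needs the
'current_sub' feature from perl 5.16):
    use strict;
    use warnings;
    use feature 'current_sub';
    my sub fac {
        my ($n) = @_;
        return 1 if $n <= 1;
        # recurse without referring to the sub by its name
        return $n * __SUB__->($n - 1);
    }
    print fac(5), "\n"; # prints 120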
> + my ($storecfg, $snapshots, $deviceid, $drive, $machine_version, $options) = @_;
> +
> + my $snap_id = $options->{'snapshot-name'};
> + my $snapshot = $snapshots->{$snap_id};
> + my $parentid = $snapshot->{parent};
> +
> + my $volid = $drive->{file};
> +
> + my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive, $machine_version, $options);
> + $snap_file_blockdev->{filename} = $snapshot->{file};
> +
> + my $snap_fmt_blockdev =
> + generate_format_blockdev($storecfg, $drive, $snap_file_blockdev, $options);
> +
> + if ($parentid) {
> + my $options = { 'snapshot-name' => $parentid };
> + $snap_fmt_blockdev->{backing} = generate_backing_blockdev(
> + $storecfg, $snapshots, $deviceid, $drive, $machine_version, $options,
> + );
> + }
> + return $snap_fmt_blockdev;
> +}
> +
> +my sub generate_backing_chain_blockdev {
> + my ($storecfg, $deviceid, $drive, $machine_version) = @_;
> +
> + my $volid = $drive->{file};
> +
> + my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
> + my $parentid = $snapshots->{'current'}->{parent};
> + return undef if !$parentid;
> + my $options = { 'snapshot-name' => $parentid };
> + return generate_backing_blockdev(
> + $storecfg, $snapshots, $deviceid, $drive, $machine_version, $options,
> + );
> +}
> +
> sub generate_drive_blockdev {
> my ($storecfg, $drive, $machine_version, $options) = @_;
>
> @@ -371,6 +411,15 @@ sub generate_drive_blockdev {
> my $child = generate_file_blockdev($storecfg, $drive, $machine_version, $options);
> if (!is_nbd($drive)) {
> $child = generate_format_blockdev($storecfg, $drive, $child, $options);
> +
> + my $support_qemu_snapshots =
> + PVE::Storage::volume_support_qemu_snapshot($storecfg, $drive->{file});
> + if ($support_qemu_snapshots && $support_qemu_snapshots eq 'external') {
> + my $backing_chain = generate_backing_chain_blockdev(
> + $storecfg, "drive-$drive_id", $drive, $machine_version,
> + );
> + $child->{backing} = $backing_chain if $backing_chain;
> + }
> }
>
> if ($options->{'zero-initialized'}) {
> diff --git a/src/test/cfg2cmd/simple-backingchain.conf b/src/test/cfg2cmd/simple-backingchain.conf
> new file mode 100644
> index 00000000..2c0b0f2c
> --- /dev/null
> +++ b/src/test/cfg2cmd/simple-backingchain.conf
> @@ -0,0 +1,25 @@
> +# TEST: Simple test for external snapshot backing chain
> +name: simple
> +parent: snap3
> +scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
> +scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
> +
> +[snap1]
> +name: simple
> +scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
> +scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
> +snaptime: 1748933042
> +
> +[snap2]
> +parent: snap1
> +name: simple
> +scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
> +scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
> +snaptime: 1748933043
> +
> +[snap3]
> +parent: snap2
> +name: simple
> +scsi0: localsnapext:8006/vm-8006-disk-0.qcow2,size=1G
> +scsi1: lvm-store:vm-8006-disk-0.qcow2,size=1G
> +snaptime: 1748933044
> diff --git a/src/test/cfg2cmd/simple-backingchain.conf.cmd b/src/test/cfg2cmd/simple-backingchain.conf.cmd
> new file mode 100644
> index 00000000..40c957f5
> --- /dev/null
> +++ b/src/test/cfg2cmd/simple-backingchain.conf.cmd
> @@ -0,0 +1,33 @@
> +/usr/bin/kvm \
> + -id 8006 \
> + -name 'simple,debug-threads=on' \
> + -no-shutdown \
> + -chardev 'socket,id=qmp,path=/var/run/qemu-server/8006.qmp,server=on,wait=off' \
> + -mon 'chardev=qmp,mode=control' \
> + -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect-ms=5000' \
> + -mon 'chardev=qmp-event,mode=control' \
> + -pidfile /var/run/qemu-server/8006.pid \
> + -daemonize \
> + -smp '1,sockets=1,cores=1,maxcpus=1' \
> + -nodefaults \
> + -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
> + -vnc 'unix:/var/run/qemu-server/8006.vnc,password=on' \
> + -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep \
> + -m 512 \
> + -object '{"id":"throttle-drive-scsi0","limits":{},"qom-type":"throttle-group"}' \
> + -object '{"id":"throttle-drive-scsi1","limits":{},"qom-type":"throttle-group"}' \
> + -global 'PIIX4_PM.disable_s3=1' \
> + -global 'PIIX4_PM.disable_s4=1' \
> + -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
> + -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
> + -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
> + -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
> + -device 'VGA,id=vga,bus=pci.0,addr=0x2' \
> + -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
> + -iscsi 'initiator-name=iqn.1993-08.org.debian:01:aabbccddeeff' \
> + -device 'lsi,id=scsihw0,bus=pci.0,addr=0x5' \
> + -blockdev '{"driver":"throttle","file":{"backing":{"backing":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vzsnapext/images/8006/snap1-vm-8006-disk-0.qcow2","node-name":"ea91a385a49a008a4735c0aec5c6749","read-only":false},"node-name":"fa91a385a49a008a4735c0aec5c6749","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"file","filename":"/var/lib/vzsnapext/images/8006/snap2-vm-8006-disk-0.qcow2","node-name":"ec0289317073959d450248d8cd7a480","read-only":false},"node-name":"fc0289317073959d450248d8cd7a480","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"io_uring","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driv
er":"file","filename":"/var/lib/vzsnapext/images/8006/vm-8006-disk-0.qcow2","node-name":"e74f4959037afb46eddc7313c43dfdd","read-only":false},"node-name":"f74f4959037afb46eddc7313c43dfdd","read-only":false},"node-name":"drive-scsi0","throttle-group":"throttle-drive-scsi0"}' \
> + -device 'scsi-hd,bus=scsihw0.0,scsi-id=0,drive=drive-scsi0,id=scsi0,write-cache=on' \
> + -blockdev '{"driver":"throttle","file":{"backing":{"backing":{"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/veegee/snap1-vm-8006-disk-0.qcow2","node-name":"e25f58d3e6e11f2065ad41253988915","read-only":false},"node-name":"f25f58d3e6e11f2065ad41253988915","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"/dev/veegee/snap2-vm-8006-disk-0.qcow2","node-name":"e9415bb5e484c1e25d25063b01686fe","read-only":false},"node-name":"f9415bb5e484c1e25d25063b01686fe","read-only":false},"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":{"aio":"native","cache":{"direct":true,"no-flush":false},"detect-zeroes":"on","discard":"ignore","driver":"host_device","filename":"
/dev/veegee/vm-8006-disk-0.qcow2","node-name":"e87358a470ca311f94d5cc61d1eb428","read-only":false},"node-name":"f87358a470ca311f94d5cc61d1eb428","read-only":false},"node-name":"drive-scsi1","throttle-group":"throttle-drive-scsi1"}' \
> + -device 'scsi-hd,bus=scsihw0.0,scsi-id=1,drive=drive-scsi1,id=scsi1,write-cache=on' \
> + -machine 'type=pc+pve0'
> diff --git a/src/test/run_config2command_tests.pl b/src/test/run_config2command_tests.pl
> index 1262a0df..61302f6b 100755
> --- a/src/test/run_config2command_tests.pl
> +++ b/src/test/run_config2command_tests.pl
> @@ -21,6 +21,7 @@ use PVE::QemuServer::Helpers;
> use PVE::QemuServer::Monitor;
> use PVE::QemuServer::QMPHelpers;
> use PVE::QemuServer::CPUConfig;
> +use PVE::Storage;
>
> my $base_env = {
> storage_config => {
> @@ -34,6 +35,15 @@ my $base_env = {
> type => 'dir',
> shared => 0,
> },
> + localsnapext => {
> + content => {
> + images => 1,
> + },
> + path => '/var/lib/vzsnapext',
> + type => 'dir',
> + shared => 0,
> + snapext => 1,
> + },
> noimages => {
> content => {
> iso => 1,
> @@ -264,6 +274,43 @@ $storage_module->mock(
> deactivate_volumes => sub {
> return;
> },
> + volume_snapshot_info => sub {
> + my ($cfg, $volid) = @_;
> +
> + my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
> +
> + my $snapshots = {};
> + if ($storeid eq 'localsnapext') {
> + $snapshots = {
> + current => {
> + file => '/var/lib/vzsnapext/images/8006/vm-8006-disk-0.qcow2',
> + parent => 'snap2',
> + },
> + snap2 => {
> + file => '/var/lib/vzsnapext/images/8006/snap2-vm-8006-disk-0.qcow2',
> + parent => 'snap1',
> + },
> + snap1 => {
> + file => '/var/lib/vzsnapext/images/8006/snap1-vm-8006-disk-0.qcow2',
> + },
> + };
> + } elsif ($storeid eq 'lvm-store') {
> + $snapshots = {
> + current => {
> + file => '/dev/veegee/vm-8006-disk-0.qcow2',
> + parent => 'snap2',
> + },
> + snap2 => {
> + file => '/dev/veegee/snap2-vm-8006-disk-0.qcow2',
> + parent => 'snap1',
> + },
> + snap1 => {
> + file => '/dev/veegee/snap1-vm-8006-disk-0.qcow2',
> + },
> + };
> + }
> + return $snapshots;
> + },
> );
>
> my $file_stat_module = Test::MockModule->new("File::stat");
> --
> 2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot Alexandre Derumier via pve-devel
@ 2025-07-15 11:33 ` Wolfgang Bumiller
2025-07-15 13:59 ` DERUMIER, Alexandre via pve-devel
[not found] ` <4756bd155509ba20a3a6bf16191f1a539ee5b23e.camel@groupe-cyllene.com>
0 siblings, 2 replies; 49+ messages in thread
From: Wolfgang Bumiller @ 2025-07-15 11:33 UTC (permalink / raw)
To: Proxmox VE development discussion; +Cc: Thomas Lamprecht
On Wed, Jul 09, 2025 at 06:21:58PM +0200, Alexandre Derumier via pve-devel wrote:
> From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> To: pve-devel@lists.proxmox.com
> Subject: [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
> Date: Wed, 9 Jul 2025 18:21:58 +0200
> Message-Id: <20250709162202.2952597-14-alexandre.derumier@groupe-cyllene.com>
> X-Mailer: git-send-email 2.39.5
>
> Returns whether the volume supports qemu snapshots:
> 'internal' : do the snapshot with qemu internal snapshot
> 'external' : do the snapshot with qemu external snapshot
> undef : does not support qemu snapshot
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> ApiChangeLog | 8 ++++++++
> src/PVE/Storage.pm | 15 +++++++++++++++
> src/PVE/Storage/DirPlugin.pm | 10 ++++++++++
> src/PVE/Storage/LVMPlugin.pm | 7 +++++++
> src/PVE/Storage/Plugin.pm | 20 ++++++++++++++++++++
> src/PVE/Storage/RBDPlugin.pm | 6 ++++++
> 6 files changed, 66 insertions(+)
>
> diff --git a/ApiChangeLog b/ApiChangeLog
> index 12eef1f..68e94fd 100644
> --- a/ApiChangeLog
> +++ b/ApiChangeLog
> @@ -29,6 +29,14 @@ Future changes should be documented in here.
> * Introduce rename_snapshot() plugin method
> This method allow to rename a vm disk snapshot name to a different snapshot name.
>
> +* Introduce volume_support_qemu_snapshot() plugin method
> + This method is used to know if a snapshot needs to be done by qemu
> + or by the storage api.
> + returned values are:
> + 'internal' : support snapshot with qemu internal snapshot
> + 'external' : support snapshot with qemu external snapshot
> + undef : don't support qemu snapshot
The naming, description and returned values seem a bit of a mixed bag
here.
Also because "internal", from the POV of the storage, happens
"outside" of the storage.
I'd also argue this doesn't *specifically* have anything to do with qemu
itself (we could technically hook qcow2 files with ublk/fuse+loopdev to
use for containers as well...).
Let me try to lay out my thoughts...
- "internal" means they are completely isolated within the volume
itself, the storage has nothing to do with it. (sort of).
- "external" means the storage is used to allocate volumes, but *also*
handles the creation of the *format*
- undef = only the storage itself can make snapshots (which is
technically incorrect because qemu could still just use *qcow2
snapshots*...)
- The "internal" vs `undef` case is just a matter of "is the format
snapshot-capable", which makes me wonder whether this method should
even have 3 cases.
Conceptually, what we want is a way of creating a new volume with a
backing volume, specifically.
If we cannot do this, either the "user" needs to be able to create
snapshots *inside* the volume, or there is no snapshot support.
Also consider that technically the *beginning* of a snapshot chain can
even have a raw format, so I also wouldn't specifically pin the ability
to create a new volume using a backing volume on the current *volume*.
As a side note: Looking at the LVM plugin, it now reports "qcow2"
support, but refuses it unless an option is set. The list of which
formats are supported is currently not storage-config dependent, which
is a bit unfortunate there.
How about one of these (a bit long but bear with me):
supports_snapshots_as_backed_volumes($scfg)
The storage:
- Can create volumes with backing volumes (required for NOT-running
VMs)
- Allows renaming a volume into a snapshot such that calling
`volume_snapshot` then recreates the original name... (BUT! See
note [1] below for why I think this should not be a requirement!)
(for running VMs)
combined with
is_format_available($scfg, "qcow2")
(to directly check for the optional support as in the LVM plugin early)
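As (hypothetical) base-class stubs, just to illustrate the proposed
signatures:
    # default in PVE::Storage::Plugin - plugins opt in explicitly
    sub supports_snapshots_as_backed_volumes {
        my ($class, $scfg) = @_;
        return 0;
    }
    # format support made dependent on the storage config, so e.g. the
    # LVM plugin could report 'qcow2' only when its opt-in option is set
    sub is_format_available {
        my ($class, $scfg, $format) = @_;
        return $format eq 'raw';
    }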
qemu-server's do_snapshot_type() then does:
"storage" if
is tpmstate
OR is not running
OR format isn't qcow2 (this case would be new in qemu-server)
"external" if
supports_snapshots_as_backed_volumes() is true
AND is_format_available($scfg, "qcow2")
"internal" otherwise
Notes:
[1] Do we really need to rename the volume *outside* of the storage?
Why does eg. the LVM plugin need to distinguish between running and not
running at all? All it does is shove some `/dev/` names around, after
all. We should be able to go through the change-file/reopen code in
qemu-server regardless of when/who/where renames the files, no?
Taking a snapshot of a running VM does:
1. rename volume "into" snapshot from out of qemu-server
2. tell qemu to reopen the files under the new names
3. call volume_snapshot which:
- creates volume with the original name and the snapshot as backing
4. have qemu open the disk by the original name for blockdev-snapshot
Can't we make this:
1. tell qemu to reopen the original files with different *node ids*
(since the clashing of those is the only issue we run into when
simply dropping the $running case and skipping the early rename...)
2. call volume_snapshot without a $running parameter
3. continue as before (point 4 above)
> +
> ## Version 11:
>
> * Allow declaring storage features via plugin data
> diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
> index e0b79fa..b796908 100755
> --- a/src/PVE/Storage.pm
> +++ b/src/PVE/Storage.pm
> @@ -2370,6 +2370,21 @@ sub rename_snapshot {
> );
> }
>
> +sub volume_support_qemu_snapshot {
> + my ($cfg, $volid) = @_;
> +
> + my ($storeid, $volname) = parse_volume_id($volid, 1);
> +
> + if ($storeid) {
> + my $scfg = storage_config($cfg, $storeid);
> +
> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> + return $plugin->volume_support_qemu_snapshot($storeid, $scfg, $volname);
> + }
> + return undef;
> +}
> +
> # Various io-heavy operations require io/bandwidth limits which can be
> # configured on multiple levels: The global defaults in datacenter.cfg, and
> # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
> index 10e4f70..3e92383 100644
> --- a/src/PVE/Storage/DirPlugin.pm
> +++ b/src/PVE/Storage/DirPlugin.pm
> @@ -314,4 +314,14 @@ sub get_import_metadata {
> };
> }
>
> +sub volume_support_qemu_snapshot {
> + my ($class, $storeid, $scfg, $volname) = @_;
> +
> + my $format = ($class->parse_volname($volname))[6];
> + return if $format ne 'qcow2';
> +
> + my $type = $scfg->{'external-snapshots'} ? 'external' : 'internal';
> + return $type;
> +}
> +
> 1;
> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
> index 2441e59..be411e5 100644
> --- a/src/PVE/Storage/LVMPlugin.pm
> +++ b/src/PVE/Storage/LVMPlugin.pm
> @@ -861,4 +861,11 @@ sub rename_snapshot {
> die "rename_snapshot is not implemented for $class";
> }
>
> +sub volume_support_qemu_snapshot {
> + my ($class, $storeid, $scfg, $volname) = @_;
> +
> + my $format = ($class->parse_volname($volname))[6];
> + return 'external' if $format eq 'qcow2';
> +}
> +
> 1;
> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
> index 6b2dc32..aab2024 100644
> --- a/src/PVE/Storage/Plugin.pm
> +++ b/src/PVE/Storage/Plugin.pm
> @@ -2262,6 +2262,26 @@ sub new_backup_provider {
> die "implement me if enabling the feature 'backup-provider' in plugindata()->{features}\n";
> }
>
> +=pod
> +
> +=head3 volume_support_qemu_snapshot
> +
> + $blockdev = $plugin->volume_support_qemu_snapshot($storeid, $scfg, $volname)
> +
> +Returns a string with the type of snapshot that qemu can do for a specific volume
> +
> +'internal' : support snapshot with qemu internal snapshot
> +'external' : support snapshot with qemu external snapshot
> +undef : don't support qemu snapshot
> +=cut
> +
> +sub volume_support_qemu_snapshot {
> + my ($class, $storeid, $scfg, $volname) = @_;
> +
> + my $format = ($class->parse_volname($volname))[6];
> + return 'internal' if $format eq 'qcow2';
> +}
> +
> sub config_aware_base_mkdir {
> my ($class, $scfg, $path) = @_;
>
> diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
> index ee33006..45f8a7f 100644
> --- a/src/PVE/Storage/RBDPlugin.pm
> +++ b/src/PVE/Storage/RBDPlugin.pm
> @@ -1061,4 +1061,10 @@ sub rename_snapshot {
> die "rename_snapshot is not implemented for $class";
> }
>
> +sub volume_support_qemu_snapshot {
> + my ($class, $storeid, $scfg, $volname) = @_;
> +
> + return 'internal' if !$scfg->{krbd};
> +}
> +
> 1;
> --
> 2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH qemu-server 3/4] qcow2: add external snapshot support
2025-07-09 16:21 ` [pve-devel] [PATCH qemu-server 3/4] qcow2: add external snapshot support Alexandre Derumier via pve-devel
@ 2025-07-15 13:21 ` Wolfgang Bumiller
2025-07-15 14:21 ` DERUMIER, Alexandre via pve-devel
2025-07-15 14:31 ` Wolfgang Bumiller
1 sibling, 1 reply; 49+ messages in thread
From: Wolfgang Bumiller @ 2025-07-15 13:21 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed, Jul 09, 2025 at 06:21:51PM +0200, Alexandre Derumier via pve-devel wrote:
> From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> To: pve-devel@lists.proxmox.com
> Subject: [PATCH qemu-server 3/4] qcow2: add external snapshot support
> Date: Wed, 9 Jul 2025 18:21:51 +0200
> Message-Id: <20250709162202.2952597-7-alexandre.derumier@groupe-cyllene.com>
> X-Mailer: git-send-email 2.39.5
>
> fixme:
> - add test for internal (was missing) && external qemu snapshots
> - is it possible to use blockjob transactions for commit && stream
> for atomic disk commit?
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/QemuConfig.pm | 4 +-
> src/PVE/QemuServer.pm | 132 ++++++++++++---
> src/PVE/QemuServer/Blockdev.pm | 296 ++++++++++++++++++++++++++++++++-
> src/test/snapshot-test.pm | 4 +-
> 4 files changed, 402 insertions(+), 34 deletions(-)
>
> diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
> index 82295641..e0853d65 100644
> --- a/src/PVE/QemuConfig.pm
> +++ b/src/PVE/QemuConfig.pm
> @@ -398,7 +398,7 @@ sub __snapshot_create_vol_snapshot {
>
> print "snapshotting '$device' ($drive->{file})\n";
>
> - PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $volid, $snapname);
> + PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $drive, $snapname);
> }
>
> sub __snapshot_delete_remove_drive {
> @@ -435,7 +435,7 @@ sub __snapshot_delete_vol_snapshot {
> my $storecfg = PVE::Storage::config();
> my $volid = $drive->{file};
>
> - PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $volid, $snapname);
> + PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $drive, $snapname);
>
> push @$unused, $volid;
> }
> diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
> index 92c8fad6..c1e15675 100644
> --- a/src/PVE/QemuServer.pm
> +++ b/src/PVE/QemuServer.pm
> @@ -4340,20 +4340,64 @@ sub qemu_cpu_hotplug {
> }
>
> sub qemu_volume_snapshot {
> - my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
> + my ($vmid, $deviceid, $storecfg, $drive, $snap) = @_;
>
> + my $volid = $drive->{file};
> my $running = check_running($vmid);
>
> - if ($running && do_snapshots_with_qemu($storecfg, $volid, $deviceid)) {
> + my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $deviceid, $running);
> +
> + if ($do_snapshots_type eq 'internal') {
> + print "internal qemu snapshot\n";
> mon_cmd($vmid, 'blockdev-snapshot-internal-sync', device => $deviceid, name => $snap);
> - } else {
> + } elsif ($do_snapshots_type eq 'external') {
> + my $storeid = (PVE::Storage::parse_volume_id($volid))[0];
> + my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> + print "external qemu snapshot\n";
> + my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
> + my $parent_snap = $snapshots->{'current'}->{parent};
> + my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
> +
> + PVE::QemuServer::Blockdev::blockdev_rename(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $deviceid,
> + $drive,
> + 'current',
> + $snap,
> + $parent_snap,
> + );
> + eval {
> + PVE::QemuServer::Blockdev::blockdev_external_snapshot(
> + $storecfg, $vmid, $machine_version, $deviceid, $drive, $snap,
> + );
> + };
> + if ($@) {
> + warn $@ if $@;
> + print "Error creating snapshot. Revert rename\n";
> + eval {
> + PVE::QemuServer::Blockdev::blockdev_rename(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $deviceid,
> + $drive,
> + $snap,
> + 'current',
> + $parent_snap,
> + );
> + };
> + }
> + } elsif ($do_snapshots_type eq 'storage') {
> PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
> }
> }
>
> sub qemu_volume_snapshot_delete {
> - my ($vmid, $storecfg, $volid, $snap) = @_;
> + my ($vmid, $storecfg, $drive, $snap) = @_;
>
> + my $volid = $drive->{file};
> my $running = check_running($vmid);
> my $attached_deviceid;
>
> @@ -4368,14 +4412,62 @@ sub qemu_volume_snapshot_delete {
> );
> }
>
> - if ($attached_deviceid && do_snapshots_with_qemu($storecfg, $volid, $attached_deviceid)) {
> + my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $attached_deviceid, $running);
> +
> + if ($do_snapshots_type eq 'internal') {
> mon_cmd(
> $vmid,
> 'blockdev-snapshot-delete-internal-sync',
> device => $attached_deviceid,
> name => $snap,
> );
> - } else {
> + } elsif ($do_snapshots_type eq 'external') {
> + print "delete qemu external snapshot\n";
> +
> + my $path = PVE::Storage::path($storecfg, $volid);
> + my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
> + my $parentsnap = $snapshots->{$snap}->{parent};
> + my $childsnap = $snapshots->{$snap}->{child};
> + my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
> +
> + # if we delete the first snapshot, we commit, because the first snapshot is the original base image and it should be big.
> + # improve-me: if firstsnap > child: commit; if firstsnap < child: do a stream.
> + if (!$parentsnap) {
> + print "delete first snapshot $snap\n";
> + PVE::QemuServer::Blockdev::blockdev_commit(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $attached_deviceid,
> + $drive,
> + $childsnap,
> + $snap,
> + );
> + PVE::QemuServer::Blockdev::blockdev_rename(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $attached_deviceid,
> + $drive,
> + $snap,
> + $childsnap,
> + $snapshots->{$childsnap}->{child},
> + );
> + } else {
> + # intermediate snapshot: we always stream the snapshot into its child snapshot
> + print "stream intermediate snapshot $snap to $childsnap\n";
> + PVE::QemuServer::Blockdev::blockdev_stream(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $attached_deviceid,
> + $drive,
> + $snap,
> + $parentsnap,
> + $childsnap,
> + );
> + }
> + } elsif ($do_snapshots_type eq 'storage') {
> PVE::Storage::volume_snapshot_delete(
> $storecfg,
> $volid,
> @@ -7563,28 +7655,20 @@ sub restore_tar_archive {
> warn $@ if $@;
> }
>
> -my $qemu_snap_storage = {
> - rbd => 1,
> -};
> -
> -sub do_snapshots_with_qemu {
> - my ($storecfg, $volid, $deviceid) = @_;
> -
> - return if $deviceid =~ m/tpmstate0/;
> +sub do_snapshots_type {
> + my ($storecfg, $volid, $deviceid, $running) = @_;
>
> - my $storage_name = PVE::Storage::parse_volume_id($volid);
> - my $scfg = $storecfg->{ids}->{$storage_name};
> - die "could not find storage '$storage_name'\n" if !defined($scfg);
> + #always use storage snapshot for tpmstate
> + return 'storage' if $deviceid && $deviceid =~ m/tpmstate0/;
>
> - if ($qemu_snap_storage->{ $scfg->{type} } && !$scfg->{krbd}) {
> - return 1;
> - }
> + # we use a storage snapshot if the vm is not running or if the disk is unused
> + return 'storage' if !$running || !$deviceid;
>
> - if ($volid =~ m/\.(qcow2|qed)$/) {
> - return 1;
> - }
> + my $qemu_snapshot_type = PVE::Storage::volume_support_qemu_snapshot($storecfg, $volid);
> + # if running, but qemu snapshots are not supported, we use a storage snapshot
> + return 'storage' if !$qemu_snapshot_type;
>
> - return;
> + return $qemu_snapshot_type;
> }
>
> =head3 template_create($vmid, $conf [, $disk])
> diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
> index 2a0513fb..f5c07e30 100644
> --- a/src/PVE/QemuServer/Blockdev.pm
> +++ b/src/PVE/QemuServer/Blockdev.pm
> @@ -11,6 +11,7 @@ use JSON;
> use PVE::JSONSchema qw(json_bool);
> use PVE::Storage;
>
> +use PVE::QemuServer::BlockJob;
> use PVE::QemuServer::Drive qw(drive_is_cdrom);
> use PVE::QemuServer::Helpers;
> use PVE::QemuServer::Monitor qw(mon_cmd);
> @@ -243,6 +244,9 @@ my sub generate_file_blockdev {
> my $blockdev = {};
> my $scfg = undef;
>
> + delete $options->{'snapshot-name'}
> + if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
> +
> die "generate_file_blockdev called without volid/path\n" if !$drive->{file};
> die "generate_file_blockdev called with 'none'\n" if $drive->{file} eq 'none';
> # FIXME use overlay and new config option to define storage for temp write device
> @@ -322,6 +326,9 @@ my sub generate_format_blockdev {
> die "generate_format_blockdev called with 'none'\n" if $drive->{file} eq 'none';
> die "generate_format_blockdev called with NBD path\n" if is_nbd($drive);
>
> + delete($options->{'snapshot-name'})
> + if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
> +
> my $scfg;
> my $format;
> my $volid = $drive->{file};
> @@ -400,6 +407,17 @@ my sub generate_backing_chain_blockdev {
> );
> }
>
> +sub generate_throttle_blockdev {
> + my ($drive_id, $child) = @_;
> +
> + return {
> + driver => "throttle",
> + 'node-name' => top_node_name($drive_id),
> + 'throttle-group' => throttle_group_id($drive_id),
> + file => $child,
> + };
> +}
> +
> sub generate_drive_blockdev {
> my ($storecfg, $drive, $machine_version, $options) = @_;
>
> @@ -442,12 +460,7 @@ sub generate_drive_blockdev {
> return $child if $options->{fleecing} || $options->{'tpm-backup'} || $options->{'no-throttle'};
>
> # this is the top filter entry point, use $drive-drive_id as nodename
> - return {
> - driver => "throttle",
> - 'node-name' => top_node_name($drive_id),
> - 'throttle-group' => throttle_group_id($drive_id),
> - file => $child,
> - };
> + return generate_throttle_blockdev($drive_id, $child);
> }
>
> sub generate_pbs_blockdev {
> @@ -785,4 +798,275 @@ sub set_io_throttle {
> }
> }
>
> +sub blockdev_external_snapshot {
> + my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $size) = @_;
> +
> + print "Creating a new current volume with $snap as backing snap\n";
> +
> + my $volid = $drive->{file};
> +
> + # preallocate a new current file with a reference to the backing-file
> + PVE::Storage::volume_snapshot($storecfg, $volid, $snap, 1);
> +
> + #be sure to add drive in write mode
> + delete($drive->{ro});
> +
> + my $new_file_blockdev = generate_file_blockdev($storecfg, $drive);
> + my $new_fmt_blockdev = generate_format_blockdev($storecfg, $drive, $new_file_blockdev);
> +
> + my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive, $snap);
> + my $snap_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $snap_file_blockdev,
> + { 'snapshot-name' => $snap },
> + );
> +
> + # backing needs to be forced to undef in the blockdev, to avoid reopening the backing-file on blockdev-add
> + $new_fmt_blockdev->{backing} = undef;
> +
> + mon_cmd($vmid, 'blockdev-add', %$new_fmt_blockdev);
> +
> + mon_cmd(
> + $vmid, 'blockdev-snapshot',
> + node => $snap_fmt_blockdev->{'node-name'},
> + overlay => $new_fmt_blockdev->{'node-name'},
> + );
> +}
> +
> +sub blockdev_delete {
> + my ($storecfg, $vmid, $drive, $file_blockdev, $fmt_blockdev, $snap) = @_;
> +
> + # wrap in eval: reopen only auto-removes the old nodename if it was created at vm start via command line arguments
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $file_blockdev->{'node-name'}) };
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $fmt_blockdev->{'node-name'}) };
> +
> + # delete the file (don't use vdisk_free as we don't want to delete the whole snapshot chain)
> + print "delete old $file_blockdev->{filename}\n";
> +
> + my $storage_name = PVE::Storage::parse_volume_id($drive->{file});
> +
> + my $volid = $drive->{file};
> + PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snap, 1);
> +}
> +
> +sub blockdev_rename {
> + my (
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $deviceid,
> + $drive,
> + $src_snap,
> + $target_snap,
> + $parent_snap,
> + ) = @_;
> +
> + print "rename $src_snap to $target_snap\n";
> +
> + my $volid = $drive->{file};
> +
> + my $src_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $src_snap },
> + );
> + my $src_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $src_file_blockdev,
> + { 'snapshot-name' => $src_snap },
> + );
> +
> + #rename the snapshot
> + PVE::Storage::rename_snapshot($storecfg, $volid, $src_snap, $target_snap);
> +
> + my $target_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $target_snap },
> + );
> + my $target_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $target_file_blockdev,
> + { 'snapshot-name' => $target_snap },
> + );
> +
> + if ($target_snap eq 'current' || $src_snap eq 'current') {
> + #rename from|to current
> + my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
> +
> + #add backing to target
> + if ($parent_snap) {
> + my $parent_fmt_nodename =
> + get_node_name('fmt', $drive_id, $volid, { 'snapshot-name' => $parent_snap });
> + $target_fmt_blockdev->{backing} = $parent_fmt_nodename;
> + }
> + mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
> +
> + #reopen the current throttlefilter nodename with the target fmt nodename
> + my $throttle_blockdev =
> + generate_throttle_blockdev($drive_id, $target_fmt_blockdev->{'node-name'});
> + mon_cmd($vmid, 'blockdev-reopen', options => [$throttle_blockdev]);
> + } else {
> + rename($src_file_blockdev->{filename}, $target_file_blockdev->{filename});
^ This seems out of place. This does not make sense for all storages and
should already have happened via PVE::Storage::rename_snapshot(), no?
You don't want to literally rename block device nodes?
> +
> + #intermediate snapshot
> + mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
> +
> + #reopen the parent node with the new target fmt backing node
> + my $parent_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $parent_snap },
> + );
> + my $parent_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $parent_file_blockdev,
> + { 'snapshot-name' => $parent_snap },
> + );
> + $parent_fmt_blockdev->{backing} = $target_fmt_blockdev->{'node-name'};
> + mon_cmd($vmid, 'blockdev-reopen', options => [$parent_fmt_blockdev]);
> +
> + # change the backing-file in the qcow2 metadata
> + mon_cmd(
> + $vmid, 'change-backing-file',
> + device => $deviceid,
> + 'image-node-name' => $parent_fmt_blockdev->{'node-name'},
> + 'backing-file' => $target_file_blockdev->{filename},
> + );
> + }
> +
> + # delete old file|fmt nodes
> + # wrap in eval: reopen only auto-removes the old nodename if it was created at vm start via command line arguments
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_file_blockdev->{'node-name'}) };
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_fmt_blockdev->{'node-name'}) };
> +}
> +
> +sub blockdev_commit {
> + my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $src_snap, $target_snap) = @_;
> +
> + my $volid = $drive->{file};
> +
> + print "block-commit $src_snap to base:$target_snap\n";
> +
> + my $target_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $target_snap },
> + );
> + my $target_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $target_file_blockdev,
> + { 'snapshot-name' => $target_snap },
> + );
> +
> + my $src_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $src_snap },
> + );
> + my $src_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $src_file_blockdev,
> + { 'snapshot-name' => $src_snap },
> + );
> +
> + my $job_id = "commit-$deviceid";
> + my $jobs = {};
> + my $opts = { 'job-id' => $job_id, device => $deviceid };
> +
> + $opts->{'base-node'} = $target_fmt_blockdev->{'node-name'};
> + $opts->{'top-node'} = $src_fmt_blockdev->{'node-name'};
> +
> + mon_cmd($vmid, "block-commit", %$opts);
> + $jobs->{$job_id} = {};
> +
> + # if we commit the current image, the blockjob needs to be in 'complete' mode
> + my $complete = $src_snap && $src_snap ne 'current' ? 'auto' : 'complete';
> +
> + eval {
> + PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
> + $vmid, undef, $jobs, $complete, 0, 'commit',
> + );
> + };
> + if ($@) {
> + die "Failed to complete block commit: $@\n";
> + }
> +
> + blockdev_delete($storecfg, $vmid, $drive, $src_file_blockdev, $src_fmt_blockdev, $src_snap);
> +}
> +
> +sub blockdev_stream {
> + my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap, $target_snap) =
> + @_;
> +
> + my $volid = $drive->{file};
> + $target_snap = undef if $target_snap eq 'current';
> +
> + my $parent_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $parent_snap },
> + );
> + my $parent_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $parent_file_blockdev,
> + { 'snapshot-name' => $parent_snap },
> + );
> +
> + my $target_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $target_snap },
> + );
> + my $target_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $target_file_blockdev,
> + { 'snapshot-name' => $target_snap },
> + );
> +
> + my $snap_file_blockdev =
> + generate_file_blockdev($storecfg, $drive, $machine_version, { 'snapshot-name' => $snap });
> + my $snap_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $snap_file_blockdev,
> + { 'snapshot-name' => $snap },
> + );
> +
> + my $job_id = "stream-$deviceid";
> + my $jobs = {};
> + my $options = { 'job-id' => $job_id, device => $target_fmt_blockdev->{'node-name'} };
> + $options->{'base-node'} = $parent_fmt_blockdev->{'node-name'};
> + $options->{'backing-file'} = $parent_file_blockdev->{filename};
> +
> + mon_cmd($vmid, 'block-stream', %$options);
> + $jobs->{$job_id} = {};
> +
> + eval {
> + PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
> + $vmid, undef, $jobs, 'auto', 0, 'stream',
> + );
> + };
> + if ($@) {
> + die "Failed to complete block stream: $@\n";
> + }
> +
> + blockdev_delete($storecfg, $vmid, $drive, $snap_file_blockdev, $snap_fmt_blockdev, $snap);
> +}
> +
> 1;
> diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
> index 4fce87f1..f61cd64b 100644
> --- a/src/test/snapshot-test.pm
> +++ b/src/test/snapshot-test.pm
> @@ -399,8 +399,8 @@ sub set_migration_caps { } # ignored
>
> # BEGIN redefine PVE::QemuServer methods
>
> -sub do_snapshots_with_qemu {
> - return 0;
> +sub do_snapshots_type {
> + return 'storage';
> }
>
> sub vm_start {
> --
> 2.39.5
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
2025-07-15 11:33 ` Wolfgang Bumiller
@ 2025-07-15 13:59 ` DERUMIER, Alexandre via pve-devel
[not found] ` <4756bd155509ba20a3a6bf16191f1a539ee5b23e.camel@groupe-cyllene.com>
1 sibling, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-15 13:59 UTC (permalink / raw)
To: w.bumiller, pve-devel; +Cc: DERUMIER, Alexandre, t.lamprecht
[-- Attachment #1: Type: message/rfc822, Size: 18869 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Cc: "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
Date: Tue, 15 Jul 2025 13:59:33 +0000
Message-ID: <4756bd155509ba20a3a6bf16191f1a539ee5b23e.camel@groupe-cyllene.com>
> +* Introduce volume_support_qemu_snapshot() plugin method
> + This method is used to know if a snapshot needs to be done by qemu
> + or by the storage api.
> + returned values are :
> + 'internal' : support snapshot with qemu internal snapshot
> + 'external' : support snapshot with qemu external snapshot
> + undef : don't support qemu snapshot
>>The naming, description and returned values seem a bit of a mixed bag
>>here.
>>Also because "internal", from the POV of the storage, happens
>>"outside" of the storage.
yes, indeed, I was a little bit out of ideas for the naming, sorry :/
>>Also consider that technically the *beginning* of a snapshot chain can
>>even have a raw format, so I also wouldn't specifically pin the ability
>>to create a new volume using a backing volume on the current *volume*.
yes, Fabian wanted to use only qcow2 for the base image for now, to
avoid mixing both formats in the chain
>>As a side note: Looking at the LVM plugin, it now reports "qcow2"
>>support, but refuses it unless an option is set. The list of which
>>formats are supported is currently not storage-config dependent, which
>>is a bit unfortunate there.
do you mean my last follow-up about the "external-snapshots" option for
lvm? Because Fabian && Thomas wanted it as a safe-guard while it's still
experimental
>>How about one of these (a bit long but bear with me):
>> supports_snapshots_as_backed_volumes($scfg)
>> The storage:
>> - Can create volumes with backing volumes (required for NOT-running VMs)
>> - Allows renaming a volume into a snapshot such that calling
>> `volume_snapshot` then recreates the original name... (BUT! See
>> note [1] below for why I think this should not be a requirement!)
>> (for running VMs)
>>combined with
>>
>> is_format_available($scfg, "qcow2")
>>
>>(to directly check for the optional support as in the LVM plugin early)
>>qemu-server's do_snapshot_type() then does:
>>
>> "storage" if
>> is tpmstate
>> OR is not running
>> OR format isn't qcow2 (this case would be new in qemu-server)
>> "external" if
>> supports_snapshots_as_backed_volumes() is true
>> AND is_format_available($scfg, "qcow2")
>> "internal" otherwise
you also have the case of rbd, where krbd snapshots need to be done at the
storage level, but rbd needs to be done by qemu when it's running.
So I think it's more something like :
-- OR format isn't qcow2 (this case would be new in qemu-server)
++ OR the qemu block driver supports .bdrv_snapshot_create
(currently it's only qcow2 && rbd :
https://github.com/search?q=repo%3Aqemu%2Fqemu%20%20.bdrv_snapshot_create%20&type=code)
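A hypothetical plugin-side sketch of that rbd/krbd case, using the
volume_support_qemu_snapshot() name from this series (method placement and
return values are illustrative only, not the applied implementation):

    sub volume_support_qemu_snapshot {
        my ($class, $scfg, $volname) = @_;
        # with krbd the image is mapped as a kernel block device, so qemu's
        # rbd driver (and its .bdrv_snapshot_create) is not in the I/O path;
        # fall back to the storage API in that case
        return if $scfg->{krbd};
        return 'internal';
    }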
>>Notes:
>>[1] Do we really need to rename the volume *outside* of the storage?
>>Why does eg. the LVM plugin need to distinguish between running and not
>>running at all? All it does is shove some `/dev/` names around, after
>>all. We should be able to go through the change-file/reopen code in
>>qemu-server regardless of when/who/where renames the files, no?
>>
>>Taking a snapshot of a running VM does:
>> 1. rename volume "into" snapshot from out of qemu-server
>> 2. tell qemu to reopen the files under the new names
>> 3. call volume_snapshot which:
>> - creates volume with the original name and the snapshot as backing
>> 4. have qemu open the disk by the original name for blockdev-snapshot
Can't we make this:
>> 1. tell qemu to reopen the original files with different *node ids*
>> (since the clashing of those is the only issue we run into when
>> simply dropping the $running case and skipping the early rename...)
>> 2. call volume_snapshot without a $running parameter
>> 3. continue as before (point 4 above)
ok, so we could try to
1) reopen the current volume (vm-disk-..) with nodename=volname+snap
before renaming it,
2) do the volume_snapshot in storage (rename the current volume to snap,
create a new current volume with the snap backing)
3) add the new current volume blockdev + reopen it with blockdev-snapshot.
Here, I'm not sure if it's working, because AFAIR, when you call
blockdev-snapshot,
mon_cmd(
$vmid, 'blockdev-snapshot',
node => $snap_fmt_blockdev->{'node-name'},
overlay => $new_fmt_blockdev->{'node-name'},
);
it's reading the filename in the snap blockdev node (and in this case,
it's still vm-disk-...) and writes it into the backing_file info of the
new overlay node.
I think I tried this in the past, but I'm not 100% sure. I'll try to test
it again this week.
I'm going on holiday for 3 weeks this Friday, so I won't have time to
send patches before then, but feel free to change|improve|rewrite my code
with something better :)
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH qemu-server 3/4] qcow2: add external snapshot support
2025-07-15 13:21 ` Wolfgang Bumiller
@ 2025-07-15 14:21 ` DERUMIER, Alexandre via pve-devel
0 siblings, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-15 14:21 UTC (permalink / raw)
To: w.bumiller, pve-devel; +Cc: DERUMIER, Alexandre
[-- Attachment #1: Type: message/rfc822, Size: 12734 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 3/4] qcow2: add external snapshot support
Date: Tue, 15 Jul 2025 14:21:59 +0000
Message-ID: <93c06c9fa518ca5db1f4dcb6039e4346f777cb34.camel@groupe-cyllene.com>
> + rename($src_file_blockdev->{filename}, $target_file_blockdev->{filename});
>>
>>^ This seems out of place. This does not make sense for all storages
>>and
>>should already have happened via PVE::Storage::rename_snapshot(), no?
>>You don't want to literally rename block device nodes?
ah sorry, this is a left-over from a previous initial patch series. It's
indeed done in rename_snapshot(); it needs to be removed.
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH qemu-server 3/4] qcow2: add external snapshot support
2025-07-09 16:21 ` [pve-devel] [PATCH qemu-server 3/4] qcow2: add external snapshot support Alexandre Derumier via pve-devel
2025-07-15 13:21 ` Wolfgang Bumiller
@ 2025-07-15 14:31 ` Wolfgang Bumiller
1 sibling, 0 replies; 49+ messages in thread
From: Wolfgang Bumiller @ 2025-07-15 14:31 UTC (permalink / raw)
To: Proxmox VE development discussion
On Wed, Jul 09, 2025 at 06:21:51PM +0200, Alexandre Derumier via pve-devel wrote:
> From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> To: pve-devel@lists.proxmox.com
> Subject: [PATCH qemu-server 3/4] qcow2: add external snapshot support
> Date: Wed, 9 Jul 2025 18:21:51 +0200
> Message-Id: <20250709162202.2952597-7-alexandre.derumier@groupe-cyllene.com>
> X-Mailer: git-send-email 2.39.5
>
> fixme:
> - add test for internal (was missing) && external qemu snapshots
> - is it possible to use blockjob transactions for commit && stream
> for atomic disk commit ?
>
> Signed-off-by: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
> ---
> src/PVE/QemuConfig.pm | 4 +-
> src/PVE/QemuServer.pm | 132 ++++++++++++---
> src/PVE/QemuServer/Blockdev.pm | 296 ++++++++++++++++++++++++++++++++-
> src/test/snapshot-test.pm | 4 +-
> 4 files changed, 402 insertions(+), 34 deletions(-)
>
> diff --git a/src/PVE/QemuConfig.pm b/src/PVE/QemuConfig.pm
> index 82295641..e0853d65 100644
> --- a/src/PVE/QemuConfig.pm
> +++ b/src/PVE/QemuConfig.pm
> @@ -398,7 +398,7 @@ sub __snapshot_create_vol_snapshot {
>
> print "snapshotting '$device' ($drive->{file})\n";
>
> - PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $volid, $snapname);
> + PVE::QemuServer::qemu_volume_snapshot($vmid, $device, $storecfg, $drive, $snapname);
> }
>
> sub __snapshot_delete_remove_drive {
> @@ -435,7 +435,7 @@ sub __snapshot_delete_vol_snapshot {
> my $storecfg = PVE::Storage::config();
> my $volid = $drive->{file};
>
> - PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $volid, $snapname);
> + PVE::QemuServer::qemu_volume_snapshot_delete($vmid, $storecfg, $drive, $snapname);
>
> push @$unused, $volid;
> }
> diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
> index 92c8fad6..c1e15675 100644
> --- a/src/PVE/QemuServer.pm
> +++ b/src/PVE/QemuServer.pm
> @@ -4340,20 +4340,64 @@ sub qemu_cpu_hotplug {
> }
>
> sub qemu_volume_snapshot {
> - my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
> + my ($vmid, $deviceid, $storecfg, $drive, $snap) = @_;
>
> + my $volid = $drive->{file};
> my $running = check_running($vmid);
>
> - if ($running && do_snapshots_with_qemu($storecfg, $volid, $deviceid)) {
> + my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $deviceid, $running);
> +
> + if ($do_snapshots_type eq 'internal') {
> + print "internal qemu snapshot\n";
> mon_cmd($vmid, 'blockdev-snapshot-internal-sync', device => $deviceid, name => $snap);
> - } else {
> + } elsif ($do_snapshots_type eq 'external') {
> + my $storeid = (PVE::Storage::parse_volume_id($volid))[0];
> + my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> + print "external qemu snapshot\n";
> + my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
> + my $parent_snap = $snapshots->{'current'}->{parent};
> + my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
> +
> + PVE::QemuServer::Blockdev::blockdev_rename(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $deviceid,
> + $drive,
> + 'current',
> + $snap,
> + $parent_snap,
> + );
> + eval {
> + PVE::QemuServer::Blockdev::blockdev_external_snapshot(
> + $storecfg, $vmid, $machine_version, $deviceid, $drive, $snap,
> + );
> + };
> + if ($@) {
> + warn $@ if $@;
> + print "Error creating snapshot. Revert rename\n";
> + eval {
> + PVE::QemuServer::Blockdev::blockdev_rename(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $deviceid,
> + $drive,
> + $snap,
> + 'current',
> + $parent_snap,
> + );
> + };
> + }
> + } elsif ($do_snapshots_type eq 'storage') {
> PVE::Storage::volume_snapshot($storecfg, $volid, $snap);
> }
> }
>
> sub qemu_volume_snapshot_delete {
> - my ($vmid, $storecfg, $volid, $snap) = @_;
> + my ($vmid, $storecfg, $drive, $snap) = @_;
>
> + my $volid = $drive->{file};
> my $running = check_running($vmid);
> my $attached_deviceid;
>
> @@ -4368,14 +4412,62 @@ sub qemu_volume_snapshot_delete {
> );
> }
>
> - if ($attached_deviceid && do_snapshots_with_qemu($storecfg, $volid, $attached_deviceid)) {
> + my $do_snapshots_type = do_snapshots_type($storecfg, $volid, $attached_deviceid, $running);
> +
> + if ($do_snapshots_type eq 'internal') {
> mon_cmd(
> $vmid,
> 'blockdev-snapshot-delete-internal-sync',
> device => $attached_deviceid,
> name => $snap,
> );
> - } else {
> + } elsif ($do_snapshots_type eq 'external') {
> + print "delete qemu external snapshot\n";
> +
> + my $path = PVE::Storage::path($storecfg, $volid);
> + my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
> + my $parentsnap = $snapshots->{$snap}->{parent};
> + my $childsnap = $snapshots->{$snap}->{child};
> + my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
> +
> + # if we delete the first snapshot, we commit it, because the first snapshot is the original base image and is likely to be big.
> + # improve-me: if firstsnap > child : commit, if firstsnap < child do a stream.
> + if (!$parentsnap) {
> + print "delete first snapshot $snap\n";
> + PVE::QemuServer::Blockdev::blockdev_commit(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $attached_deviceid,
> + $drive,
> + $childsnap,
> + $snap,
> + );
> + PVE::QemuServer::Blockdev::blockdev_rename(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $attached_deviceid,
> + $drive,
> + $snap,
> + $childsnap,
> + $snapshots->{$childsnap}->{child},
> + );
> + } else {
> + # intermediate snapshot: we always stream the snapshot into its child snapshot
> + print "stream intermediate snapshot $snap to $childsnap\n";
> + PVE::QemuServer::Blockdev::blockdev_stream(
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $attached_deviceid,
> + $drive,
> + $snap,
> + $parentsnap,
> + $childsnap,
> + );
> + }
> + } elsif ($do_snapshots_type eq 'storage') {
> PVE::Storage::volume_snapshot_delete(
> $storecfg,
> $volid,
> @@ -7563,28 +7655,20 @@ sub restore_tar_archive {
> warn $@ if $@;
> }
>
> -my $qemu_snap_storage = {
> - rbd => 1,
> -};
> -
> -sub do_snapshots_with_qemu {
> - my ($storecfg, $volid, $deviceid) = @_;
> -
> - return if $deviceid =~ m/tpmstate0/;
> +sub do_snapshots_type {
> + my ($storecfg, $volid, $deviceid, $running) = @_;
>
> - my $storage_name = PVE::Storage::parse_volume_id($volid);
> - my $scfg = $storecfg->{ids}->{$storage_name};
> - die "could not find storage '$storage_name'\n" if !defined($scfg);
> + #always use storage snapshot for tpmstate
> + return 'storage' if $deviceid && $deviceid =~ m/tpmstate0/;
>
> - if ($qemu_snap_storage->{ $scfg->{type} } && !$scfg->{krbd}) {
> - return 1;
> - }
> + # we use a storage snapshot if the vm is not running or if the disk is unused
> + return 'storage' if !$running || !$deviceid;
>
> - if ($volid =~ m/\.(qcow2|qed)$/) {
> - return 1;
> - }
> + my $qemu_snapshot_type = PVE::Storage::volume_support_qemu_snapshot($storecfg, $volid);
> + # if running, but qemu snapshots are not supported, we use a storage snapshot
> + return 'storage' if !$qemu_snapshot_type;
>
> - return;
> + return $qemu_snapshot_type;
> }
>
> =head3 template_create($vmid, $conf [, $disk])
> diff --git a/src/PVE/QemuServer/Blockdev.pm b/src/PVE/QemuServer/Blockdev.pm
> index 2a0513fb..f5c07e30 100644
> --- a/src/PVE/QemuServer/Blockdev.pm
> +++ b/src/PVE/QemuServer/Blockdev.pm
> @@ -11,6 +11,7 @@ use JSON;
> use PVE::JSONSchema qw(json_bool);
> use PVE::Storage;
>
> +use PVE::QemuServer::BlockJob;
> use PVE::QemuServer::Drive qw(drive_is_cdrom);
> use PVE::QemuServer::Helpers;
> use PVE::QemuServer::Monitor qw(mon_cmd);
> @@ -243,6 +244,9 @@ my sub generate_file_blockdev {
> my $blockdev = {};
> my $scfg = undef;
>
> + delete $options->{'snapshot-name'}
> + if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
> +
> die "generate_file_blockdev called without volid/path\n" if !$drive->{file};
> die "generate_file_blockdev called with 'none'\n" if $drive->{file} eq 'none';
> # FIXME use overlay and new config option to define storage for temp write device
> @@ -322,6 +326,9 @@ my sub generate_format_blockdev {
> die "generate_format_blockdev called with 'none'\n" if $drive->{file} eq 'none';
> die "generate_format_blockdev called with NBD path\n" if is_nbd($drive);
>
> + delete($options->{'snapshot-name'})
> + if $options->{'snapshot-name'} && $options->{'snapshot-name'} eq 'current';
> +
> my $scfg;
> my $format;
> my $volid = $drive->{file};
> @@ -400,6 +407,17 @@ my sub generate_backing_chain_blockdev {
> );
> }
>
> +sub generate_throttle_blockdev {
> + my ($drive_id, $child) = @_;
> +
> + return {
> + driver => "throttle",
> + 'node-name' => top_node_name($drive_id),
> + 'throttle-group' => throttle_group_id($drive_id),
> + file => $child,
> + };
> +}
> +
> sub generate_drive_blockdev {
> my ($storecfg, $drive, $machine_version, $options) = @_;
>
> @@ -442,12 +460,7 @@ sub generate_drive_blockdev {
> return $child if $options->{fleecing} || $options->{'tpm-backup'} || $options->{'no-throttle'};
>
> # this is the top filter entry point, use $drive-drive_id as nodename
> - return {
> - driver => "throttle",
> - 'node-name' => top_node_name($drive_id),
> - 'throttle-group' => throttle_group_id($drive_id),
> - file => $child,
> - };
> + return generate_throttle_blockdev($drive_id, $child);
> }
>
> sub generate_pbs_blockdev {
> @@ -785,4 +798,275 @@ sub set_io_throttle {
> }
> }
>
> +sub blockdev_external_snapshot {
> + my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $size) = @_;
> +
> + print "Creating a new current volume with $snap as backing snap\n";
> +
> + my $volid = $drive->{file};
> +
> + # preallocate a new current file with a reference to the backing-file
> + PVE::Storage::volume_snapshot($storecfg, $volid, $snap, 1);
> +
> + #be sure to add drive in write mode
> + delete($drive->{ro});
> +
> + my $new_file_blockdev = generate_file_blockdev($storecfg, $drive);
> + my $new_fmt_blockdev = generate_format_blockdev($storecfg, $drive, $new_file_blockdev);
> +
> + my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive, $snap);
^ This call is wrong - the 3rd parameter should be $machine_version, and
the snapshot passed as a hash like below.
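I.e. presumably (a sketch of the corrected call, matching the usage in
blockdev_stream further below, with $machine_version as third argument):

    my $snap_file_blockdev = generate_file_blockdev(
        $storecfg, $drive, $machine_version, { 'snapshot-name' => $snap },
    );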
> + my $snap_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $snap_file_blockdev,
> + { 'snapshot-name' => $snap },
> + );
> +
> + # backing needs to be forced to undef in the blockdev, to avoid reopening the backing-file on blockdev-add
> + $new_fmt_blockdev->{backing} = undef;
> +
> + mon_cmd($vmid, 'blockdev-add', %$new_fmt_blockdev);
> +
> + mon_cmd(
> + $vmid, 'blockdev-snapshot',
> + node => $snap_fmt_blockdev->{'node-name'},
> + overlay => $new_fmt_blockdev->{'node-name'},
> + );
> +}
> +
> +sub blockdev_delete {
> + my ($storecfg, $vmid, $drive, $file_blockdev, $fmt_blockdev, $snap) = @_;
> +
> + # wrap in eval: reopen only auto-removes the old nodename if it was created at vm start via command line arguments
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $file_blockdev->{'node-name'}) };
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $fmt_blockdev->{'node-name'}) };
> +
> + # delete the file (don't use vdisk_free as we don't want to delete the whole snapshot chain)
> + print "delete old $file_blockdev->{filename}\n";
> +
> + my $storage_name = PVE::Storage::parse_volume_id($drive->{file});
> +
> + my $volid = $drive->{file};
> + PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snap, 1);
> +}
> +
> +sub blockdev_rename {
> + my (
> + $storecfg,
> + $vmid,
> + $machine_version,
> + $deviceid,
> + $drive,
> + $src_snap,
> + $target_snap,
> + $parent_snap,
> + ) = @_;
> +
> + print "rename $src_snap to $target_snap\n";
> +
> + my $volid = $drive->{file};
> +
> + my $src_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $src_snap },
> + );
> + my $src_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $src_file_blockdev,
> + { 'snapshot-name' => $src_snap },
> + );
> +
> + #rename the snapshot
> + PVE::Storage::rename_snapshot($storecfg, $volid, $src_snap, $target_snap);
> +
> + my $target_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $target_snap },
> + );
> + my $target_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $target_file_blockdev,
> + { 'snapshot-name' => $target_snap },
> + );
> +
> + if ($target_snap eq 'current' || $src_snap eq 'current') {
> + #rename from|to current
> + my $drive_id = PVE::QemuServer::Drive::get_drive_id($drive);
> +
> + #add backing to target
> + if ($parent_snap) {
> + my $parent_fmt_nodename =
> + get_node_name('fmt', $drive_id, $volid, { 'snapshot-name' => $parent_snap });
> + $target_fmt_blockdev->{backing} = $parent_fmt_nodename;
> + }
> + mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
> +
> + #reopen the current throttlefilter nodename with the target fmt nodename
> + my $throttle_blockdev =
> + generate_throttle_blockdev($drive_id, $target_fmt_blockdev->{'node-name'});
> + mon_cmd($vmid, 'blockdev-reopen', options => [$throttle_blockdev]);
> + } else {
> + rename($src_file_blockdev->{filename}, $target_file_blockdev->{filename});
> +
> + #intermediate snapshot
> + mon_cmd($vmid, 'blockdev-add', %$target_fmt_blockdev);
> +
> + #reopen the parent node with the new target fmt backing node
> + my $parent_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $parent_snap },
> + );
> + my $parent_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $parent_file_blockdev,
> + { 'snapshot-name' => $parent_snap },
> + );
> + $parent_fmt_blockdev->{backing} = $target_fmt_blockdev->{'node-name'};
> + mon_cmd($vmid, 'blockdev-reopen', options => [$parent_fmt_blockdev]);
> +
> + # change the backing-file in the qcow2 metadata
> + mon_cmd(
> + $vmid, 'change-backing-file',
> + device => $deviceid,
> + 'image-node-name' => $parent_fmt_blockdev->{'node-name'},
> + 'backing-file' => $target_file_blockdev->{filename},
> + );
> + }
> +
> + # delete old file|fmt nodes
> + # wrap in eval: reopen only auto-removes the old nodename if it was created at vm start via command line arguments
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_file_blockdev->{'node-name'}) };
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' => $src_fmt_blockdev->{'node-name'}) };
> +}
> +
> +sub blockdev_commit {
> + my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $src_snap, $target_snap) = @_;
> +
> + my $volid = $drive->{file};
> +
> + print "block-commit $src_snap to base:$target_snap\n";
> +
> + my $target_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $target_snap },
> + );
> + my $target_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $target_file_blockdev,
> + { 'snapshot-name' => $target_snap },
> + );
> +
> + my $src_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $src_snap },
> + );
> + my $src_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $src_file_blockdev,
> + { 'snapshot-name' => $src_snap },
> + );
> +
> + my $job_id = "commit-$deviceid";
> + my $jobs = {};
> + my $opts = { 'job-id' => $job_id, device => $deviceid };
> +
> + $opts->{'base-node'} = $target_fmt_blockdev->{'node-name'};
> + $opts->{'top-node'} = $src_fmt_blockdev->{'node-name'};
> +
> + mon_cmd($vmid, "block-commit", %$opts);
> + $jobs->{$job_id} = {};
> +
> + # if we commit the current image, the blockjob needs to be in 'complete' mode
> + my $complete = $src_snap && $src_snap ne 'current' ? 'auto' : 'complete';
> +
> + eval {
> + PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
> + $vmid, undef, $jobs, $complete, 0, 'commit',
> + );
> + };
> + if ($@) {
> + die "Failed to complete block commit: $@\n";
> + }
> +
> + blockdev_delete($storecfg, $vmid, $drive, $src_file_blockdev, $src_fmt_blockdev, $src_snap);
> +}
> +
> +sub blockdev_stream {
> + my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap, $target_snap) =
> + @_;
> +
> + my $volid = $drive->{file};
> + $target_snap = undef if $target_snap eq 'current';
> +
> + my $parent_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $parent_snap },
> + );
> + my $parent_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $parent_file_blockdev,
> + { 'snapshot-name' => $parent_snap },
> + );
> +
> + my $target_file_blockdev = generate_file_blockdev(
> + $storecfg,
> + $drive,
> + $machine_version,
> + { 'snapshot-name' => $target_snap },
> + );
> + my $target_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $target_file_blockdev,
> + { 'snapshot-name' => $target_snap },
> + );
> +
> + my $snap_file_blockdev =
> + generate_file_blockdev($storecfg, $drive, $machine_version, { 'snapshot-name' => $snap });
> + my $snap_fmt_blockdev = generate_format_blockdev(
> + $storecfg,
> + $drive,
> + $snap_file_blockdev,
> + { 'snapshot-name' => $snap },
> + );
> +
> + my $job_id = "stream-$deviceid";
> + my $jobs = {};
> + my $options = { 'job-id' => $job_id, device => $target_fmt_blockdev->{'node-name'} };
> + $options->{'base-node'} = $parent_fmt_blockdev->{'node-name'};
> + $options->{'backing-file'} = $parent_file_blockdev->{filename};
> +
> + mon_cmd($vmid, 'block-stream', %$options);
> + $jobs->{$job_id} = {};
> +
> + eval {
> + PVE::QemuServer::BlockJob::qemu_drive_mirror_monitor(
> + $vmid, undef, $jobs, 'auto', 0, 'stream',
> + );
> + };
> + if ($@) {
> + die "Failed to complete block stream: $@\n";
> + }
> +
> + blockdev_delete($storecfg, $vmid, $drive, $snap_file_blockdev, $snap_fmt_blockdev, $snap);
> +}
> +
> 1;
> diff --git a/src/test/snapshot-test.pm b/src/test/snapshot-test.pm
> index 4fce87f1..f61cd64b 100644
> --- a/src/test/snapshot-test.pm
> +++ b/src/test/snapshot-test.pm
> @@ -399,8 +399,8 @@ sub set_migration_caps { } # ignored
>
> # BEGIN redefine PVE::QemuServer methods
>
> -sub do_snapshots_with_qemu {
> - return 0;
> +sub do_snapshots_type {
> + return 'storage';
> }
>
> sub vm_start {
> --
> 2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
[not found] ` <4756bd155509ba20a3a6bf16191f1a539ee5b23e.camel@groupe-cyllene.com>
@ 2025-07-15 14:49 ` Wolfgang Bumiller
2025-07-15 15:38 ` DERUMIER, Alexandre via pve-devel
0 siblings, 1 reply; 49+ messages in thread
From: Wolfgang Bumiller @ 2025-07-15 14:49 UTC (permalink / raw)
To: DERUMIER, Alexandre; +Cc: pve-devel, t.lamprecht
On Tue, Jul 15, 2025 at 01:59:33PM +0000, DERUMIER, Alexandre wrote:
>
> > +* Introduce volume_support_qemu_snapshot() plugin method
> > + This method is used to know if a snapshot needs to be done by qemu
> > + or by the storage api.
> > + returned values are :
> > + 'internal' : support snapshot with qemu internal snapshot
> > + 'external' : support snapshot with qemu external snapshot
> > + undef : don't support qemu snapshot
>
> >>The naming, description and returned values seem a bit of a mixed bag
> >>here.
> >>Also because "internal", from the POV of the storage, happens
> >>"outside" of the storage.
> yes, indeed, I was a little bit out of ideas for the naming, sorry :/
>
>
>
> >>Also consider that technically the *beginning* of a snapshot chain can
> >>even have a raw format, so I also wouldn't specifically pin the ability
> >>to create a new volume using a backing volume on the current *volume*.
>
> yes, Fabian wanted to use only qcow2 for the base image for now, to
> avoid mixing both formats in the chain
>
>
> >>As a side note: Looking at the LVM plugin, it now reports "qcow2"
> >>support, but refuses it unless an option is set. The list of which
> >>formats are supported is currently not storage-config dependent, which
> >>is a bit unfortunate there.
>
> do you mean my last follow-up about the "external-snapshots" option for
> lvm? Because Fabian && Thomas wanted it as a safe-guard while it's still
> experimental
>
>
>
> >>How about one of these (a bit long but bear with me):
> >> supports_snapshots_as_backed_volumes($scfg)
> >> The storage:
> >> - Can create volumes with backing volumes (required for NOT-running VMs)
> >> - Allows renaming a volume into a snapshot such that calling
> >> `volume_snapshot` then recreates the original name... (BUT! See
> >> note [1] below for why I think this should not be a requirement!)
> >> (for running VMs)
> >>combined with
> >>
> >> is_format_available($scfg, "qcow2")
> >>
> >>(to directly check for the optional support as in the LVM plugin early)
> >
> >>qemu-server's do_snapshot_type() then does:
> >>
> >> "storage" if
> >> is tpmstate
> >> OR is not running
> >> OR format isn't qcow2 (this case would be new in qemu-server)
> >> "external" if
> >> supports_snapshots_as_backed_volumes() is true
> >> AND is_format_available($scfg, "qcow2")
> >> "internal" otherwise
>
> you also have the case of rbd, where krbd snapshots need to be done at the
> storage level, but rbd needs to be done by qemu when it's running.
>
> So I think it's more something like :
> -- OR format isn't qcow2 (this case would be new in qemu-server)
> ++ OR qemu block driver support .bdrv_snapshot_create
> (currently it's only qcow2 && rbd :
> https://github.com/search?q=repo%3Aqemu%2Fqemu%20%20.bdrv_snapshot_create%20&type=code)
Right, I forgot about that.
Okay, so I guess we'd keep this qemu-related. In that case:
sub qemu_snapshot_responsibility($scfg, $volname)
-> qemu (previously "internal")
-> storage (previously undef)
-> mixed (previously "external")
We should probably note in the API documentation that "mixed" is
experimental and the expected behavior may change, because, as I
mentioned further down previously, I think the behavior could be a bit
less awkward on the storage side.
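Roughly, the mapping from the helper in this series would then be (a sketch
only, assuming volume_support_qemu_snapshot() keeps its current return
values):

    sub qemu_snapshot_responsibility {
        my ($scfg, $volname) = @_;
        my $type = volume_support_qemu_snapshot($scfg, $volname);
        return 'storage' if !defined($type); # the storage API does everything
        return 'qemu' if $type eq 'internal'; # qemu-internal snapshot
        return 'mixed'; # previously 'external': qemu and storage cooperate
    }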
>
>
>
>
>
> >>Notes:
> >>[1] Do we really need to rename the volume *outside* of the storage?
> >>Why does eg. the LVM plugin need to distinguish between running and not
> >>running at all? All it does is shove some `/dev/` names around, after
> >>all. We should be able to go through the change-file/reopen code in
> >>qemu-server regardless of when/who/where renames the files, no?
> >>
> >>Taking a snapshot of a running VM does:
> >> 1. rename volume "into" snapshot from out of qemu-server
> >> 2. tell qemu to reopen the files under the new names
> >> 3. call volume_snapshot which:
> >> - creates volume with the original name and the snapshot as backing
> >> 4. have qemu open the disk by the original name for blockdev-snapshot
>
>
> Can't we make this:
> >> 1. tell qemu to reopen the original files with different *node ids*
> >> (since the clashing of those is the only issue we run into when
> >> simply dropping the $running case and skipping the early rename...)
> >> 2. call volume_snapshot without a $running parameter
> >> 3. continue as before (point 4 above)
>
> ok, so we could try to
> 1) reopen the current volume (vm-disk-..) with nodename=volname+snap
> before renaming it,
> 2) do the volume_snapshot in storage (rename the current volume to snap,
> create a new current volume with the snap backing)
> 3) add the new current volume blockdev + reopen it with blockdev-snapshot.
>
> Here, I'm not sure if it's working, because AFAIR, when you call
> blockdev-snapshot,
> mon_cmd(
> $vmid, 'blockdev-snapshot',
> node => $snap_fmt_blockdev->{'node-name'},
> overlay => $new_fmt_blockdev->{'node-name'},
> );
>
> it's reading the filename in the snap blockdev node (and in this case,
> it's still vm-disk-...) and writes it into the backing_file info of the
> new overlay node.
>
> I think I tried this in the past, but I'm not 100% sure. I'll try to test
> it again this week.
Yeah I tried some quick tests and it seems to be a bit tricky. Or maybe
I just missed something.
>
>
>
> I'm going on holiday for 3 weeks this Friday, so I won't have time to
> send patches before then, but feel free to change|improve|rewrite my code
> with something better :)
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
2025-07-15 14:49 ` Wolfgang Bumiller
@ 2025-07-15 15:38 ` DERUMIER, Alexandre via pve-devel
2025-07-16 10:21 ` Wolfgang Bumiller
0 siblings, 1 reply; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-15 15:38 UTC (permalink / raw)
To: w.bumiller; +Cc: DERUMIER, Alexandre, pve-devel, t.lamprecht
[-- Attachment #1: Type: message/rfc822, Size: 16281 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>
Cc: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
Date: Tue, 15 Jul 2025 15:38:57 +0000
Message-ID: <82c18a632e730b7c18a93aacd9f4d623d56061be.camel@groupe-cyllene.com>
>>Yeah I tried some quick tests and it seems to be a bit tricky. Or
>>maybe
>>I just missed something.
I've just done some quick tests, and I think I have found a way.
(I'll do more tests tomorrow to see if everything is ok in the guest)
sub qemu_volume_snapshot {
    my ($vmid, $deviceid, $storecfg, $drive, $snap) = @_;
    .....

    } elsif ($do_snapshots_type eq 'external') {
        my $storeid = (PVE::Storage::parse_volume_id($volid))[0];
        my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
        print "external qemu snapshot\n";
        my $snapshots = PVE::Storage::volume_snapshot_info($storecfg, $volid);
        my $parent_snap = $snapshots->{'current'}->{parent};
        my $machine_version = PVE::QemuServer::Machine::get_current_qemu_machine($vmid);

        # no rename here anymore
        PVE::QemuServer::Blockdev::blockdev_external_snapshot(
            $storecfg, $vmid, $machine_version, $deviceid, $drive,
            $snap, $parent_snap,
        );
    ...
}

sub blockdev_external_snapshot {
    my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap, $parent_snap) = @_;

    print "Creating a new current volume with $snap as backing snap\n";

    my $volid = $drive->{file};

    # rename current to snap && preallocate a new current file with a
    # reference to the snap backing-file; we can remove the $running param
    my $running = undef;
    PVE::Storage::volume_snapshot($storecfg, $volid, $snap, undef);

    # call blockdev_rename only to reopen the internal blockdev of the
    # current active image under the snap volname
    my $skip_volume_rename = 1;
    # reopen current to snap
    PVE::QemuServer::Blockdev::blockdev_rename(
        $storecfg,
        $vmid,
        $machine_version,
        $deviceid,
        $drive,
        'current',
        $snap,
        $parent_snap,
        $skip_volume_rename,
    );

    ### the rest is the same as before

    # be sure to add the drive in write mode
    delete($drive->{ro});

    my $new_file_blockdev = generate_file_blockdev($storecfg, $drive);
    my $new_fmt_blockdev = generate_format_blockdev($storecfg, $drive, $new_file_blockdev);

    my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive, $snap);
    my $snap_fmt_blockdev = generate_format_blockdev(
        $storecfg,
        $drive,
        $snap_file_blockdev,
        { 'snapshot-name' => $snap },
    );

    # backing needs to be forced to undef in the blockdev, to avoid
    # reopening the backing-file on blockdev-add
    $new_fmt_blockdev->{backing} = undef;

    mon_cmd($vmid, 'blockdev-add', %$new_fmt_blockdev);

    mon_cmd(
        $vmid, 'blockdev-snapshot',
        node => $snap_fmt_blockdev->{'node-name'},
        overlay => $new_fmt_blockdev->{'node-name'},
    );
}
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot
2025-07-15 15:38 ` DERUMIER, Alexandre via pve-devel
@ 2025-07-16 10:21 ` Wolfgang Bumiller
0 siblings, 0 replies; 49+ messages in thread
From: Wolfgang Bumiller @ 2025-07-16 10:21 UTC (permalink / raw)
To: Proxmox VE development discussion; +Cc: t.lamprecht
On Tue, Jul 15, 2025 at 03:38:57PM +0000, DERUMIER, Alexandre via pve-devel wrote:
> From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
> To: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>
> CC: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>,
> "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>,
> "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
> Subject: Re: [pve-devel] [PATCH pve-storage 09/13] storage: add
> volume_support_qemu_snapshot
> Date: Tue, 15 Jul 2025 15:38:57 +0000
> Message-ID: <82c18a632e730b7c18a93aacd9f4d623d56061be.camel@groupe-cyllene.com>
>
> >>Yeah I tried some quick tests and it seems to be a bit tricky. Or
> >>maybe
> >>I just missed something.
>
> Just have done fast tests, I think I have found a way.
> (I'll do more test tomorrow to see if everything is ok in guest)
>
>
>
> sub qemu_volume_snapshot {
> my ($vmid, $deviceid, $storecfg, $drive, $snap) = @_;
> .....
>
> } elsif ($do_snapshots_type eq 'external') {
> my $storeid = (PVE::Storage::parse_volume_id($volid))[0];
> my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> print "external qemu snapshot\n";
> my $snapshots = PVE::Storage::volume_snapshot_info($storecfg,
> $volid);
> my $parent_snap = $snapshots->{'current'}->{parent};
> my $machine_version =
> PVE::QemuServer::Machine::get_current_qemu_machine($vmid);
>
> #no rename here anymore
>
> PVE::QemuServer::Blockdev::blockdev_external_snapshot(
> $storecfg, $vmid, $machine_version, $deviceid, $drive,
> $snap, $parent_snap
> );
> ...
> }
>
>
> sub blockdev_external_snapshot {
> my ($storecfg, $vmid, $machine_version, $deviceid, $drive, $snap,
> $parent_snap) = @_;
>
> print "Creating a new current volume with $snap as backing snap\n";
>
> my $volid = $drive->{file};
>
> #rename current to snap && preallocate add a new current file with
> reference to snapbacking-file, we can remove the $running param
>
> my $running = undef
> PVE::Storage::volume_snapshot($storecfg, $volid, $snap, undef);
>
> #call the blockdev_rename, only to reopen internal blockdev current
> active image to snap volname
>
> my $skip_volume_rename=1;
> #reopen current to snap
> PVE::QemuServer::Blockdev::blockdev_rename(
> $storecfg,
> $vmid,
> $machine_version,
> $deviceid,
> $drive,
> 'current',
> $snap,
> $parent_snap,
> $skip_volume_rename
> );
>
> ###the rest is the same than before
>
> #be sure to add drive in write mode
> delete($drive->{ro});
>
> my $new_file_blockdev = generate_file_blockdev($storecfg, $drive);
> my $new_fmt_blockdev = generate_format_blockdev($storecfg, $drive,
> $new_file_blockdev);
>
> my $snap_file_blockdev = generate_file_blockdev($storecfg, $drive,
> $snap);
> my $snap_fmt_blockdev = generate_format_blockdev(
> $storecfg,
> $drive,
> $snap_file_blockdev,
> { 'snapshot-name' => $snap },
> );
>
>
> #backing need to be forced to undef in blockdev, to avoid reopen of
> backing-file on blockdev-add
> $new_fmt_blockdev->{backing} = undef;
>
> mon_cmd($vmid, 'blockdev-add', %$new_fmt_blockdev);
>
> mon_cmd(
> $vmid, 'blockdev-snapshot',
> node => $snap_fmt_blockdev->{'node-name'},
> overlay => $new_fmt_blockdev->{'node-name'},
> );
> }
Seems to work - at least in my LVM tests.
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
` (16 preceding siblings ...)
2025-07-09 16:22 ` [pve-devel] [PATCH pve-storage 13/13] tests: add lvmplugin test Alexandre Derumier via pve-devel
@ 2025-07-16 15:15 ` Thomas Lamprecht
2025-07-17 8:01 ` DERUMIER, Alexandre via pve-devel
17 siblings, 1 reply; 49+ messages in thread
From: Thomas Lamprecht @ 2025-07-16 15:15 UTC (permalink / raw)
To: Alexandre Derumier, pve-devel
Am 09.07.25 um 18:21 schrieb Alexandre Derumier:
> This patch series implements qcow2 external snapshot support for files && lvm volumes
>
> The current internal qcow2 snapshots have bad write performance because no metadata can be preallocated.
>
> This is particularly visible on a shared filesystem like ocfs2 or gfs2.
>
> There are also other bugs, like freezes/locks reported by users for years on snapshot deletion on nfs
> (the disk access seems to be frozen for the whole duration of the delete).
>
> This also opens the door to remote snapshot export/import for storage replication.
>
Wolfgang has now applied all patches and some follow-ups; we'll test this a bit
more internally, but if nothing grave comes up this should be included in
the upcoming PVE 9 - great work!
btw. and mostly out of interest: what was the status of the idea of using
qcow2 for thin-provisioning again? I know there was some discussion, but I
could not quickly find what the last status was.
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-16 15:15 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Thomas Lamprecht
@ 2025-07-17 8:01 ` DERUMIER, Alexandre via pve-devel
2025-07-17 14:49 ` Tiago Sousa via pve-devel
[not found] ` <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>
0 siblings, 2 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-17 8:01 UTC (permalink / raw)
To: pve-devel, t.lamprecht; +Cc: DERUMIER, Alexandre
[-- Attachment #1: Type: message/rfc822, Size: 13563 bytes --]
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 08:01:01 +0000
Message-ID: <ed658e13ad9c6e02454ba29f462ca0515d8234da.camel@groupe-cyllene.com>
>>
>>
>>Wolfgang has now applied all patches and some follow-ups; we'll test this
>>a bit more internally, but if nothing grave comes up this should be
>>included in the upcoming PVE 9 - great work!
Fantastic! I would like to thank you guys for your support! So thanks
Fiona, Fabian, Wolfgang, Thomas for the help, review, ideas, and for your
time on this!
>>btw. and mostly out of interest: what was the status of the idea of using
>>qcow2 for thin-provisioning again? I know there was some discussion, but I
>>could not quickly find what the last status was.
for lvm+qcow2 provisioning, it's a really early POC/RFC. (I sent some
patches a year ago).
https://lore.proxmox.com/all/mailman.400.1724670042.302.pve-devel@lists.proxmox.com/
I'll try to work on it after this summer for pve 9.1.
The main thing to work on is how to handle the cluster storage lock for
lvm resize.
Qemu sends an event to qmeventd when the used size reaches a threshold,
and pvestatd can also send an event on disk-full as a safety net.
Then, some kind of daemon needs to resize the lvm volume under a global lock.
I think we can use a local daemon on each node, and maybe take the lock
through pmxcfs, with some kind of queue file with error/retry handling.
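For reference, a minimal sketch of the QEMU side of that flow (node name and
threshold math are illustrative assumptions; mon_cmd as used elsewhere in
qemu-server):

    # arm a write threshold on the image node; QEMU emits a single
    # BLOCK_WRITE_THRESHOLD event once a guest write crosses it
    mon_cmd(
        $vmid, 'block-set-write-threshold',
        'node-name' => 'drive-scsi0',
        'write-threshold' => $provisioned_size - int($chunk_size / 2),
    );

    # whoever consumes the event (qmeventd or a dedicated daemon) must
    # extend the LV and then re-arm the threshold, since it only fires once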
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-17 8:01 ` DERUMIER, Alexandre via pve-devel
@ 2025-07-17 14:49 ` Tiago Sousa via pve-devel
[not found] ` <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>
1 sibling, 0 replies; 49+ messages in thread
From: Tiago Sousa via pve-devel @ 2025-07-17 14:49 UTC (permalink / raw)
To: Proxmox VE development discussion; +Cc: Tiago Sousa
From: Tiago Sousa <joao.sousa@eurotux.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 15:49:05 +0100
Message-ID: <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>
Hi,
I'm starting to develop the thin provisioning feature to integrate with
the new snapshot feature for LVM. The architecture I'm following is very
similar to the one Alexandre mentioned here
https://lore.proxmox.com/pve-devel/mailman.380.1750053104.395.pve-devel@lists.proxmox.com/
However, I have some questions:
1. When qmeventd receives the BLOCK_WRITE_THRESHOLD event, should the
extend request (writing the nodename to the extend queue) be handled
directly in C, or would it be preferable to do it via an API call such
as PUT /nodes/{node}/qemu/{vmid}/extend_request, passing the nodename as
a parameter?
2. If we use a local daemon for each node, how is it decided which node
will perform the extend operation?
Another option is to use a centralized daemon (maybe on the quorum
master) that performs every extend.
3. Is there any specific reason for the event only being triggered at 50%
of the last chunk, in your implementation? I was thinking of implementing
it with 10% of the current provisioned space to be safe. Any opinions on this?
In terms of locking, I'm planning to use cfs_lock_file to write to
the extend queue and cfs_lock_storage to perform the extend on the
target disk.
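For context, the BLOCK_WRITE_THRESHOLD event only fires after a threshold
has been armed via QMP; a minimal sketch of arming it from qemu-server
(block-set-write-threshold is the actual QMP command, while the helper name
and how the node name is obtained are assumptions):

    use PVE::QemuServer::Monitor qw(mon_cmd);

    # arm the BLOCK_WRITE_THRESHOLD event for a block node (sketch)
    sub set_write_threshold {
        my ($vmid, $node_name, $threshold_bytes) = @_;
        # QEMU emits the event only once, so it must be re-armed after each extend
        mon_cmd($vmid, 'block-set-write-threshold',
            'node-name' => $node_name,
            'write-threshold' => int($threshold_bytes));
    }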
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
[not found] ` <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>
@ 2025-07-17 15:08 ` DERUMIER, Alexandre via pve-devel
[not found] ` <47b76678f969ba97926c85af4bf8e50c9dda161d.camel@groupe-cyllene.com>
1 sibling, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-17 15:08 UTC (permalink / raw)
To: joao.sousa, pve-devel; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "joao.sousa@eurotux.com" <joao.sousa@eurotux.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 15:08:45 +0000
Message-ID: <47b76678f969ba97926c85af4bf8e50c9dda161d.camel@groupe-cyllene.com>
>> However, I have some questions:
>>
>> 1. When qmeventd receives the BLOCK_WRITE_THRESHOLD event, should the
>> extend request (writing the nodename to the extend queue) be handled
>> directly in C, or would it be preferable to do it via an API call such
>> as PUT /nodes/{node}/qemu/{vmid}/extend_request, passing the nodename
>> as a parameter?
I think that qmeventd should be as fast as possible. For my first
version, I was doing an exec of "qm extend ....", but I think it would be
even better if qmeventd simply writes the vmid/diskid to a text file
somewhere in /etc/pve (then the other daemon can manage the queue).
>> 2. If we use a local daemon for each node, how is it decided which
>> node will perform the extend operation?
>> Another option is to use a centralized daemon (maybe on the quorum
>> master) that performs every extend.
The local daemon on the node where the VM is running should resize the
LVM volume, because if the resize is done on another node, the LV needs to
be rescanned/refreshed to see the new size. AFAIK, it's not done automatically.
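For illustration, a sketch of the refresh another node would otherwise need
(assuming a shared VG; `lvchange --refresh` reloads the device-mapper table
so the new size becomes visible locally; the helper name is made up):

    use PVE::Tools qw(run_command);

    # make an LV size change done on another node visible locally (sketch)
    sub refresh_lv {
        my ($vg, $lv) = @_;
        run_command(['/sbin/lvchange', '--refresh', "$vg/$lv"],
            errmsg => "unable to refresh logical volume '$vg/$lv'");
    }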
>> 3. Is there any specific reason for the event only being triggered at
>> 50% of the last chunk, in your implementation? I was thinking of
>> implementing it with 10% of the current provisioned space to be safe.
>> Any opinions on this?
I used the same implementation as Red Hat, but it could be a tunable
value; it really depends on how fast your storage is.
As far as I remember, it was 50% of the size of the chunk (1GB),
so when you have 500MB free, it should add another 1GB.
Of course, if you have a fast NVMe and you write 2GB/s, it'll be too short.
If you go with 10% of the provisioned size: with a 2TB qcow2, it'll
grow when 200GB is free (and how much do we want to grow it by?
another 10%?),
but with a 2GB qcow2, it'll only grow when 200MB is free.
So with a fast NVMe, it could fail for a small disk but be OK for a big
disk.
I think that's why Red Hat uses a percentage of a fixed chunk size (which
you can configure depending on your storage speed).
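To make the trade-off concrete, a small computation under the assumptions
above (fixed 1 GiB chunks with a 50% trigger vs. 10% of the provisioned
size):

    my $GiB = 1024 * 1024 * 1024;

    # fixed chunk: extend by 1 GiB once less than 50% of the last chunk is free
    my $chunk_size = 1 * $GiB;
    my $threshold_fixed = 0.5 * $chunk_size; # always 512 MiB of headroom

    # percentage: trigger once free space drops below 10% of the provisioned size
    my $threshold_pct = sub { my ($provisioned) = @_; return 0.10 * $provisioned };

    printf "%.1f GiB\n", $threshold_pct->(2048 * $GiB) / $GiB;       # 2 TiB disk -> 204.8 GiB
    printf "%.1f MiB\n", $threshold_pct->(2 * $GiB) / (1024 * 1024); # 2 GiB disk -> 204.8 MiB

At 2 GB/s, 204.8 MiB of headroom is gone in about a tenth of a second,
while the fixed 512 MiB gives the same margin regardless of disk size.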
>> In terms of locking, I'm planning to use cfs_lock_file to write to
>> the extend queue and cfs_lock_storage to perform the extend on the
>> target disk.
Yes, that's what I had in mind too.
One thing to also handle is the server losing quorum for a few seconds
(so /etc/pve is read-only). I think we need to keep the info in qmeventd's
memory and try to write the queue file once quorum is good again.
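A sketch of that retry logic, reusing the hypothetical
enqueue_extend_request from the sketch further up
(PVE::Cluster::check_cfs_quorum is a real helper; the backlog handling is
an assumption):

    my @pending; # in-memory backlog kept while /etc/pve is read-only

    sub flush_pending {
        return if !@pending;
        # with the noerr flag set, check_cfs_quorum returns false instead of dying
        return if !PVE::Cluster::check_cfs_quorum(1);
        while (my $req = shift @pending) {
            eval { enqueue_extend_request(@$req) };
            if ($@) {
                unshift @pending, $req; # still failing, retry on the next run
                last;
            }
        }
    }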
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
[not found] ` <47b76678f969ba97926c85af4bf8e50c9dda161d.camel@groupe-cyllene.com>
@ 2025-07-17 15:42 ` Tiago Sousa via pve-devel
[not found] ` <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>
1 sibling, 0 replies; 49+ messages in thread
From: Tiago Sousa via pve-devel @ 2025-07-17 15:42 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel; +Cc: Tiago Sousa
From: Tiago Sousa <joao.sousa@eurotux.com>
To: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 16:42:06 +0100
Message-ID: <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>
On 7/17/25 16:08, DERUMIER, Alexandre wrote:
>>> However, I have some questions:
>>>
>>> 1. When qmeventd receives the BLOCK_WRITE_THRESHOLD event, should the
>>> extend request (writing the nodename to the extend queue) be handled
>>> directly in C, or would it be preferable to do it via an API call such
>>> as PUT /nodes/{node}/qemu/{vmid}/extend_request, passing the nodename
>>> as a parameter?
>
> I think that qmeventd should be as fast as possible. For my first
> version, I was doing an exec of "qm extend ....", but I think it would
> be even better if qmeventd simply writes the vmid/diskid to a text file
> somewhere in /etc/pve (then the other daemon can manage the queue).
OK, are you thinking of having a local queue for each node as well?
If there was a single queue for all nodes, how would you manage the
concurrency of writes from each node? (Is there a function similar to
cfs_lock_file, but for C code?)
>>> 2. If we use a local daemon for each node, how is it decided which
>>> node will perform the extend operation?
>>> Another option is to use a centralized daemon (maybe on the quorum
>>> master) that performs every extend.
> The local daemon on the node where the VM is running should resize the
> LVM volume, because if the resize is done on another node, the LV needs
> to be rescanned/refreshed to see the new size. AFAIK, it's not done
> automatically.
OK, at first I was thinking of doing a FIFO-like cluster-wide queue
where the extends would be done in order of arrival. But if I'm
understanding correctly, with a local queue and extend daemon per node,
the extends would be done in order, but not in a global cluster sense,
right? Each extend daemon would be competing to acquire the storage lock
to proceed with its next extend. Please let me know if I'm understanding
your idea correctly.
>>> 3. Is there any specific reason for the event only being triggered at
>>> 50% of the last chunk, in your implementation? I was thinking of
>>> implementing it with 10% of the current provisioned space to be safe.
>>> Any opinions on this?
>
> I used the same implementation as Red Hat, but it could be a tunable
> value; it really depends on how fast your storage is.
> As far as I remember, it was 50% of the size of the chunk (1GB),
> so when you have 500MB free, it should add another 1GB.
>
> Of course, if you have a fast NVMe and you write 2GB/s, it'll be too
> short.
>
> If you go with 10% of the provisioned size: with a 2TB qcow2, it'll
> grow when 200GB is free (and how much do we want to grow it by?
> another 10%?),
> but with a 2GB qcow2, it'll only grow when 200MB is free.
> So with a fast NVMe, it could fail for a small disk but be OK for a
> big disk.
> I think that's why Red Hat uses a percentage of a fixed chunk size
> (which you can configure depending on your storage speed).
You're right, that way makes the most sense.
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
[not found] ` <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>
@ 2025-07-17 15:53 ` DERUMIER, Alexandre via pve-devel
2025-07-17 16:05 ` DERUMIER, Alexandre via pve-devel
1 sibling, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-17 15:53 UTC (permalink / raw)
To: joao.sousa, pve-devel; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "joao.sousa@eurotux.com" <joao.sousa@eurotux.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 15:53:50 +0000
Message-ID: <7d63c3908c3ea00861a4582b217195305ea4ece0.camel@groupe-cyllene.com>
>> OK, are you thinking of having a local queue for each node as well?
>> If there was a single queue for all nodes, how would you manage the
>> concurrency of writes from each node?
I really don't know, to be honest.
What could happen is a live migration, where an event is emitted on
node1, but the VM is transferred to node2 before the resize occurs.
So maybe a central queue could be great here (I think concurrency is not
too much of a problem if we take a write lock when we need to
queue/dequeue),
with each local resize daemon looking at the queue for vmids that are
currently running locally.
Something like that.
>> (Is there a function similar to cfs_lock_file, but for C code?)
The pmxcfs is written in C, so I'm pretty sure it's possible to do.
^_^
(I'm really bad at C and Rust, so maybe some Proxmox developers could
help with this part.)
> > > 2. If we use a local daemon for each node, how is it decided which
> > > node will perform the extend operation?
> > > Another option is to use a centralized daemon (maybe on the quorum
> > > master) that performs every extend.
> The local daemon on the node where the VM is running should resize the
> LVM volume, because if the resize is done on another node, the LV needs
> to be rescanned/refreshed to see the new size. AFAIK, it's not done
> automatically.
>> OK, at first I was thinking of doing a FIFO-like cluster-wide queue
>> where the extends would be done in order of arrival. But if I'm
>> understanding correctly, with a local queue and extend daemon per
>> node, the extends would be done in order, but not in a global cluster
>> sense, right?
>> Each extend daemon would be competing to acquire the storage lock
>> to proceed with its next extend. Please let me know if I'm
>> understanding your idea correctly.
The LVM resize itself needs to be locked globally (because if you resize
different LVM volumes in parallel on different servers, you could end up
with the same disk sectors allocated to different LVs).
So, indeed, each local daemon competes for the resize lock.
FIFO should be used, because we want to resize as fast as possible after
an event is emitted (before the VM runs out of space).
So a global queue is better here as well, with local daemons polling the
queue in FIFO order, and if the VM is local to the daemon, the daemon
does the resize.
Something like that; see the sketch below.
(But all ideas are welcome ^_^)
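A rough sketch of that daemon loop under the assumptions discussed so far
(read_queue, dequeue and extend_volume are hypothetical helpers, while
PVE::QemuServer::check_running and PVE::Cluster::cfs_lock_storage exist):

    sub process_extend_queue {
        my ($storeid) = @_;
        my $entries = read_queue(); # hypothetical: FIFO entries from the global queue
        for my $entry (@$entries) {
            my ($vmid, $drive) = @$entry;
            # only handle VMs running on this node, so the LV is refreshed locally
            next if !PVE::QemuServer::check_running($vmid);
            PVE::Cluster::cfs_lock_storage($storeid, 60, sub {
                extend_volume($vmid, $drive); # hypothetical: lvextend + re-arm threshold
            });
            if ($@) { warn $@; next; } # leave the entry queued, retry on the next run
            dequeue($entry); # hypothetical: remove the entry under the queue lock
        }
    }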
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
[not found] ` <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>
2025-07-17 15:53 ` DERUMIER, Alexandre via pve-devel
@ 2025-07-17 16:05 ` DERUMIER, Alexandre via pve-devel
1 sibling, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-17 16:05 UTC (permalink / raw)
To: joao.sousa, pve-devel; +Cc: DERUMIER, Alexandre
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "joao.sousa@eurotux.com" <joao.sousa@eurotux.com>, "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 17 Jul 2025 16:05:05 +0000
Message-ID: <bb552989f78d0f9c532b1ab09ced5a05299c459f.camel@groupe-cyllene.com>
Also, thinking about it: if a local daemon is down/dead and doesn't
dequeue, maybe some kind of TTL for the objects in the queue should be
used.
If we go with a separate queue for each server, that's not a problem,
but the daemons need to compete for the resize lock (so we can't be sure
of the order of the resizes).
And there is still the case of live migration; here the vmid needs to be
transferred between the different queues after the live migration, to be
resized by the target node.
I really don't know what the best way is.
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-14 11:02 ` Fabian Grünbichler
@ 2025-07-15 4:15 ` DERUMIER, Alexandre via pve-devel
0 siblings, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-15 4:15 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Tue, 15 Jul 2025 04:15:48 +0000
Message-ID: <555c5acfdf30a1914ce4030b51d1cb685e23b097.camel@groupe-cyllene.com>
> > > 4. all snapshot volumes on extsnap dir storages will print
> > > warnings like
> > >
> > > `this volume filename is not supported anymore`
> > >
> > > when hitting `parse_namedir` - those can likely be avoided by
> > > skipping the warning if the name matches the snapshot format and
> > > external-snapshots are enabled..
>
> Have you seen a case where it's displayed?
>
> Because I never call parse_volname() with a $snapvolname, only with
> the main $volname (because we don't want to display snapshot volumes
> in volume lists, for example).
>> IIRC a simple `pvesm list <storage>` does..
sub list_images doesn't use parse_volname/parse_namedir, but:

    foreach my $fn (<$imagedir/[0-9][0-9]*/*>) {
        next if $fn !~ m!^(/.+/(\d+)/([^/]+\.($fmts)))$!;
        $fn = $1; # untaint

so the snapshot files are listed.
The warning `this volume filename is not supported anymore` comes later,
in the API, where PVE::Storage::check_volume_access is called on each
listed volume (and check_volume_access calls parse_volname).
I'm not sure whether the list_images filtering should be improved now
that we are more strict? LVM, for example, has a basic:

    next if $volname !~ m/^vm-(\d+)-/;

For now, I'll fix parse_namedir.
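One possible way to tighten that filter, assuming snapshot files keep the
snap_<volname>_<snapname>.qcow2 naming used by this series (a sketch, not
the actual fix):

    foreach my $fn (<$imagedir/[0-9][0-9]*/*>) {
        next if $fn !~ m!^(/.+/(\d+)/([^/]+\.($fmts)))$!;
        $fn = $1; # untaint
        # skip external snapshot files so they are not listed as standalone volumes
        next if $fn =~ m!/snap_[^/]+_[^/]+\.qcow2$!;
        # ... rest of the loop unchanged
    }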
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-10 15:13 ` Fabian Grünbichler
` (6 preceding siblings ...)
[not found] ` <719c71b1148846e0cdd7e5c149d8b20146b4d043.camel@groupe-cyllene.com>
@ 2025-07-14 15:12 ` Tiago Sousa via pve-devel
7 siblings, 0 replies; 49+ messages in thread
From: Tiago Sousa via pve-devel @ 2025-07-14 15:12 UTC (permalink / raw)
To: Proxmox VE development discussion, Fabian Grünbichler
Cc: Tiago Sousa, Wolfgang Bumiller, Thomas Lamprecht
From: Tiago Sousa <joao.sousa@eurotux.com>
To: "Proxmox VE development discussion" <pve-devel@lists.proxmox.com>, "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
Cc: Wolfgang Bumiller <w.bumiller@proxmox.com>, Thomas Lamprecht <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Mon, 14 Jul 2025 16:12:04 +0100
Message-ID: <cadccf7c-71d0-4d22-8d1f-266257a88064@eurotux.com>
Just adding a few things I noticed during testing, on top of what Fabian
already mentioned:
1. VM data not wiped after deletion:
When I delete a VM and then create a new one with the same settings, the
old OS and data are still there. It looks like the disk wasn't properly
cleaned up.
2. Can’t roll back to older snapshots:
Rolling back only works for the latest snapshot. If I try to go back to
an earlier one, I get this error:
TASK ERROR: can't rollback, 'test01' is not most recent snapshot on
'vm-104-disk-0.qcow2'
3. Rollback doesn't work for LVM-thin volumes:
When trying to roll back a snapshot on an LVM-thin volume, it fails with:
can't rollback snapshot for 'raw' volume
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-14 11:27 ` Thomas Lamprecht
@ 2025-07-14 11:46 ` Fabian Grünbichler
0 siblings, 0 replies; 49+ messages in thread
From: Fabian Grünbichler @ 2025-07-14 11:46 UTC (permalink / raw)
To: Thomas Lamprecht, DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller
> Thomas Lamprecht <t.lamprecht@proxmox.com> wrote on 14.07.2025 13:27 CEST:
>
>
> On 14.07.25 at 13:15, Fabian Grünbichler wrote:
> >> Thomas Lamprecht <t.lamprecht@proxmox.com> wrote on 14.07.2025 13:11 CEST:
> >> Albeit, taking a step back, I'm not so sure if that really helps the user?
> >> I.e., what's something actionable they can do when seeing such a warning?
> >> Is it just to signal "this is a tech preview, be wary"? If so, then I'd rather
> >> just signal that in the storage edit/add window.
> >
> > for LVM, there is no new plugin or storage option, the plugin just gains
> > a new supported format and that has the experimental status. we could
> > guard that (like in the DirPlugin) with the external-snapshots flag, and
> > only allow qcow2 formatted volumes if that is set - that way, users would
> > need to "opt-in" to the experimental behaviour by creating a new storage
> > with that flag set. if we don't do that, then the only thing that we can
> > warn about is qcow2 *being used*, and there is no central place for that
> > as all existing LVM storages would get that support "overnight" (once
> > they upgrade).
>
> Ah okay, I actually thought it already worked that way when I saw the
> new flag getting added.
>
> Might be indeed safer and more telling if we couple allowing it with that
> flag being set. We could then change the default of that flag for a future
> release, once we have more exposure and have maybe polished the UX a bit,
> and if we think it helps users. But it might not be bad to require an
> explicit decision for this.
>
> If we really just enforce that check on volume creation, we could even
> allow changing the flag for an existing storage, as that would be a bit
> more convenient, and, more importantly, better than nudging the admin towards
> adding the same storage twice, which we do not support.
yes, for LVM the parameter would not need to be fixed, like it is for the DirPlugin.
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-14 11:15 ` Fabian Grünbichler
@ 2025-07-14 11:27 ` Thomas Lamprecht
2025-07-14 11:46 ` Fabian Grünbichler
0 siblings, 1 reply; 49+ messages in thread
From: Thomas Lamprecht @ 2025-07-14 11:27 UTC (permalink / raw)
To: Fabian Grünbichler, DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller
On 14.07.25 at 13:15, Fabian Grünbichler wrote:
>> Thomas Lamprecht <t.lamprecht@proxmox.com> wrote on 14.07.2025 13:11 CEST:
>> Albeit, taking a step back, I'm not so sure if that really helps the user?
>> I.e., what's something actionable they can do when seeing such a warning?
>> Is it just to signal "this is a tech preview, be wary"? If so, then I'd rather
>> just signal that in the storage edit/add window.
>
> for LVM, there is no new plugin or storage option, the plugin just gains
> a new supported format and that has the experimental status. we could
> guard that (like in the DirPlugin) with the external-snapshots flag, and
> only allow qcow2 formatted volumes if that is set - that way, users would
> need to "opt-in" to the experimental behaviour by creating a new storage
> with that flag set. if we don't do that, then the only thing that we can
> warn about is qcow2 *being used*, and there is no central place for that
> as all existing LVM storages would get that support "overnight" (once
> they upgrade).
Ah okay, I actually thought it already worked that way when I saw the
new flag getting added.
Might be indeed safer and more telling if we couple allowing it with that
flag being set. We could then change the default of that flag for a future
release, once we have more exposure and have maybe polished the UX a bit,
and if we think it helps users. But it might not be bad to require an
explicit decision for this.
If we really just enforce that check on volume creation, we could even
allow changing the flag for an existing storage, as that would be a bit
more convenient, and, more importantly, better than nudging the admin towards
adding the same storage twice, which we do not support.
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-14 11:11 ` Thomas Lamprecht
@ 2025-07-14 11:15 ` Fabian Grünbichler
2025-07-14 11:27 ` Thomas Lamprecht
0 siblings, 1 reply; 49+ messages in thread
From: Fabian Grünbichler @ 2025-07-14 11:15 UTC (permalink / raw)
To: Thomas Lamprecht, DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller
> Thomas Lamprecht <t.lamprecht@proxmox.com> wrote on 14.07.2025 13:11 CEST:
>
>
> On 14.07.25 at 13:04, Fabian Grünbichler wrote:
> >> so I think that a warning in the UI is fine enough if qcow2 format is
> >> selected.
> > would probably need to go into a few places then (allocating a disk,
> > moving it, restore of backup - at least?)
>
>
> We might be able to add it relatively centrally to the storage selector,
> or a new wrapper component, and then enable an option (or switch to the
> new wrapper component) - Dominik probably would have a good idea where
> this could fit nicely.
>
> Albeit, taking a step back, I'm not so sure if that really helps the user?
> I.e., what's something actionable they can do when seeing such a warning?
> Is it just to signal "this is a tech preview, be wary"? If so, then I'd rather
> just signal that in the storage edit/add window.
for LVM, there is no new plugin or storage option, the plugin just gains
a new supported format and that has the experimental status. we could
guard that (like in the DirPlugin) with the external-snapshots flag, and
only allow qcow2 formatted volumes if that is set - that way, users would
need to "opt-in" to the experimental behaviour by creating a new storage
with that flag set. if we don't do that, then the only thing that we can
warn about is qcow2 *being used*, and there is no central place for that
as all existing LVM storages would get that support "overnight" (once
they upgrade).
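A minimal sketch of what such an opt-in guard could look like in the LVM
plugin's volume allocation path (placement and exact wording are
illustrative, not the actual patch):

    # in PVE::Storage::LVMPlugin::alloc_image (sketch)
    die "qcow2 format on LVM requires the 'external-snapshots' storage option\n"
        if defined($fmt) && $fmt eq 'qcow2' && !$scfg->{'external-snapshots'};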
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-14 11:04 ` Fabian Grünbichler
@ 2025-07-14 11:11 ` Thomas Lamprecht
2025-07-14 11:15 ` Fabian Grünbichler
0 siblings, 1 reply; 49+ messages in thread
From: Thomas Lamprecht @ 2025-07-14 11:11 UTC (permalink / raw)
To: Fabian Grünbichler, DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller
On 14.07.25 at 13:04, Fabian Grünbichler wrote:
>> so I think that a warning in the UI is fine enough if qcow2 format is
>> selected.
> would probably need to go into a few places then (allocating a disk,
> moving it, restore of backup - at least?)
We might be able to add it relatively centrally to the storage selector,
or a new wrapper component, and then enable an option (or switch to the
new wrapper component) - Dominik probably would have a good idea where
this could fit nicely.
Albeit, taking a step back, I'm not so sure if that really helps the user?
I.e., what's something actionable they can do when seeing such a warning?
Is it just to signal "this is a tech preview, be wary"? If so, then I'd rather
just signal that in the storage edit/add window.
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
[not found] ` <719c71b1148846e0cdd7e5c149d8b20146b4d043.camel@groupe-cyllene.com>
@ 2025-07-14 11:04 ` Fabian Grünbichler
2025-07-14 11:11 ` Thomas Lamprecht
0 siblings, 1 reply; 49+ messages in thread
From: Fabian Grünbichler @ 2025-07-14 11:04 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller, t.lamprecht
> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> wrote on 14.07.2025 10:42 CEST:
>
>
> >>6. it's fairly easy to accidentally create qcow2-formatted LVM
> >>volumes, as opposed to the requirement to enable a non-UI storage
> >>option at storage creation time for dir storages, we might want to
> >>add some warning to the UI at least? or we could also guard usage of
> >>the format with a config option..
>
> Personally, this doesn't shock me that much.
>
> The external-snapshots option is there to distinguish between
> internal and external snapshots.
>
> You can also "accidentally" create a qcow2 file instead of a raw file
> on a dir storage.
yes, but that one has been tested for a while; the LVM one is new and
still experimental, and that is not obvious from the UI.
> so I think that a warning in the UI is fine enough if qcow2 format is
> selected.
would probably need to go into a few places then (allocating a disk,
moving it, restore of backup - at least?)
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
[not found] ` <26badbf66613a89e63eaad8b24dd05567900250b.camel@groupe-cyllene.com>
@ 2025-07-14 11:02 ` Fabian Grünbichler
2025-07-15 4:15 ` DERUMIER, Alexandre via pve-devel
0 siblings, 1 reply; 49+ messages in thread
From: Fabian Grünbichler @ 2025-07-14 11:02 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel; +Cc: w.bumiller, t.lamprecht
> DERUMIER, Alexandre <alexandre.derumier@groupe-cyllene.com> wrote on 14.07.2025 10:18 CEST:
>
>
> >>4. all snapshot volumes on extsnap dir storages will print warnings
> >>like
> >>
> >>`this volume filename is not supported anymore`
> >>
> >>when hitting `parse_namedir` - those can likely be avoided by
> >>skipping the warning if the name matches the snapshot format and
> >>external-snapshots are enabled..
>
> Have you seen a case where it's displayed?
>
> Because I never call parse_volname() with a $snapvolname, only with
> the main $volname (because we don't want to display snapshot volumes
> in volume lists, for example).
IIRC a simple `pvesm list <storage>` does..
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-10 15:13 ` Fabian Grünbichler
` (3 preceding siblings ...)
2025-07-14 8:18 ` DERUMIER, Alexandre via pve-devel
@ 2025-07-14 8:42 ` DERUMIER, Alexandre via pve-devel
[not found] ` <26badbf66613a89e63eaad8b24dd05567900250b.camel@groupe-cyllene.com>
` (2 subsequent siblings)
7 siblings, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-14 8:42 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Mon, 14 Jul 2025 08:42:17 +0000
Message-ID: <719c71b1148846e0cdd7e5c149d8b20146b4d043.camel@groupe-cyllene.com>
>>6. it's fairly easy to accidentally create qcow2-formatted LVM
>>volumes, as opposed to the requirement to enable a non-UI storage
>>option at storage creation time for dir storages, we might want to
>>add some warning to the UI at least? or we could also guard usage of
>>the format with a config option..
Personally, this doesn't shock me that much.
The external-snapshots option is there to distinguish between internal
and external snapshots.
You can also "accidentally" create a qcow2 file instead of a raw file on
a dir storage.
So I think that a warning in the UI is fine enough if the qcow2 format
is selected.
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-10 15:13 ` Fabian Grünbichler
` (2 preceding siblings ...)
2025-07-14 6:25 ` DERUMIER, Alexandre via pve-devel
@ 2025-07-14 8:18 ` DERUMIER, Alexandre via pve-devel
2025-07-14 8:42 ` DERUMIER, Alexandre via pve-devel
` (3 subsequent siblings)
7 siblings, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-14 8:18 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Mon, 14 Jul 2025 08:18:53 +0000
Message-ID: <26badbf66613a89e63eaad8b24dd05567900250b.camel@groupe-cyllene.com>
>>4. all snapshot volumes on extsnap dir storages will print warnings
>>like
>>
>>`this volume filename is not supported anymore`
>>
>>when hitting `parse_namedir` - those can likely be avoided by
>>skipping the warning if the name matches the snapshot format and
>>external-snapshots are enabled..
Have you seen a case where it's displayed?
Because I never call parse_volname() with a $snapvolname, only with the
main $volname (because we don't want to display snapshot volumes in
volume lists, for example).
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-10 15:13 ` Fabian Grünbichler
2025-07-10 15:46 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>
@ 2025-07-14 6:25 ` DERUMIER, Alexandre via pve-devel
2025-07-14 8:18 ` DERUMIER, Alexandre via pve-devel
` (4 subsequent siblings)
7 siblings, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-14 6:25 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Mon, 14 Jul 2025 06:25:58 +0000
Message-ID: <e4e612ebca9f43afa3a01699caa81a3798be1fbb.camel@groupe-cyllene.com>
>> 1. missing activation when snapshotting an LVM volume if the VM is
>> not running
Ah yes, I didn't see it: on volume creation, the volume is auto-activated,
but if you start/stop the VM, it's deactivated.
I'm also seeing a new patch to disable auto-activation:
https://git.proxmox.com/?p=pve-storage.git;a=commit;h=2796d6b6395c2c4d0da55df51ca41b0e045af3a0
Do we also need to deactivate it after each action (snapshot,
resize, ...) if the VM is not running?
Not sure, but maybe it needs to be done in qemu-server, as it needs to
know whether the VM is running, and we don't have the $running option
everywhere.
I'll fix the activation for snapshot creation.
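That fix would presumably look something like this in the LVM plugin's
snapshot path (a sketch against this series' API; the real patch may
differ):

    sub volume_snapshot {
        my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;

        # a stopped VM means nothing has activated the LV yet, so activate
        # it before the rename and the qcow2 overlay creation
        $class->activate_volume($storeid, $scfg, $volname, undef, {})
            if !$running;

        # ... existing rename + snapshot overlay creation ...
    }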
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-10 16:29 ` Thomas Lamprecht
@ 2025-07-11 12:04 ` DERUMIER, Alexandre via pve-devel
0 siblings, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-11 12:04 UTC (permalink / raw)
To: pve-devel, t.lamprecht, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Fri, 11 Jul 2025 12:04:44 +0000
Message-ID: <86f74ead797f07da2244caec44ed5da39c07274b.camel@groupe-cyllene.com>
Hi Thomas
On 10.07.25 at 17:46, DERUMIER, Alexandre wrote:
> I'll try to fix all your comments for next week.
>
> I'm going on holiday at the end of next week, from the 18th of July to
> around the 10th of August, so I think it'll be the last time I can work
> on it before next month. But feel free to improve my patches during
> this time.
>> I'm pondering a bit if taking this + Fabian's follow-up in already and
>> then doing the rest of the improvements as further follow-ups would
>> make them a bit easier to review and maybe also develop, what do you
>> think?
>> I'm naturally also fine with waiting until another revision from you
>> next week though, just as an idea.
Yes, sure, we can work with follow-ups based on this series, no problem.
(Not sure I'll have time to fully work on it before vacation anyway;
next week will be really full.)
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
[not found] ` <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>
@ 2025-07-10 16:29 ` Thomas Lamprecht
2025-07-11 12:04 ` DERUMIER, Alexandre via pve-devel
0 siblings, 1 reply; 49+ messages in thread
From: Thomas Lamprecht @ 2025-07-10 16:29 UTC (permalink / raw)
To: DERUMIER, Alexandre, pve-devel, f.gruenbichler; +Cc: w.bumiller
Hi Alexandre,
On 10.07.25 at 17:46, DERUMIER, Alexandre wrote:
> I'll try to fix all your comments for next week.
>
> I'm going on holiday at the end of next week, from the 18th of July to
> around the 10th of August, so I think it'll be the last time I can work
> on it before next month. But feel free to improve my patches during
> this time.
I'm pondering a bit if taking this + Fabian's follow-up in already and
then doing the rest of the improvements as further follow-ups would make
them a bit easier to review and maybe also develop, what do you think?
I'm naturally also fine with waiting until another revision from you
next week though, just as an idea.
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-10 15:13 ` Fabian Grünbichler
@ 2025-07-10 15:46 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>
` (6 subsequent siblings)
7 siblings, 0 replies; 49+ messages in thread
From: DERUMIER, Alexandre via pve-devel @ 2025-07-10 15:46 UTC (permalink / raw)
To: pve-devel, f.gruenbichler; +Cc: DERUMIER, Alexandre, w.bumiller, t.lamprecht
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>, "f.gruenbichler@proxmox.com" <f.gruenbichler@proxmox.com>
Cc: "w.bumiller@proxmox.com" <w.bumiller@proxmox.com>, "t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>
Subject: Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Thu, 10 Jul 2025 15:46:35 +0000
Message-ID: <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>
Hi Fabian,
I'll try to fix all your comments for next week.
I'm going on holiday at the end of next week, from the 18th of July to
around the 10th of August, so I think it'll be the last time I can work on
it before next month. But feel free to improve my patches during this time.
* Re: [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
2025-07-09 16:21 Alexandre Derumier via pve-devel
@ 2025-07-10 15:13 ` Fabian Grünbichler
2025-07-10 15:46 ` DERUMIER, Alexandre via pve-devel
` (7 more replies)
0 siblings, 8 replies; 49+ messages in thread
From: Fabian Grünbichler @ 2025-07-10 15:13 UTC (permalink / raw)
To: Proxmox VE development discussion; +Cc: Wolfgang Bumiller, Thomas Lamprecht
> Alexandre Derumier via pve-devel <pve-devel@lists.proxmox.com> wrote on 09.07.2025 18:21 CEST:
> This patch series implements qcow2 external snapshot support for files && LVM volumes
>
> The current internal qcow2 snapshots have bad write performance because no metadata can be preallocated.
>
> This is particularly visible on a shared filesystem like ocfs2 or gfs2.
>
> Other bugs reported by users for years are freezes/locks when deleting snapshots on NFS
> (disk access seems to be frozen for the whole duration of the delete).
>
> This also opens the door to remote snapshot export/import for storage replication.
>
> Changelog v8:
> storage :
> - fix Fabian comments
> - add rename_snapshot
> - add qemu_resize
> - plugin: restrict volnames
> - plugin: use 'external-snapshots' instead 'snapext'
> qemu-server:
> - fix efi test with wrong volnames vm-disk-100-0.raw
> - use rename_snapshot
> MAIN TODO:
> - add snapshots tests in both pve-storage && qemu-server
> - better handle snapshot failure with multiple disks
sent a few follow-ups - I didn't manage to fully test things, and depending on the outcome of such tests it might be okay to apply the series with those follow-ups and fix up the rest later, or not..
some open issues that I discovered that still need fixing:
1. missing activation when snapshotting an LVM volume if the VM is not running
snapshotting 'drive-scsi0' (test:123/vm-123-disk-0.qcow2)
snapshotting 'drive-scsi1' (lvm:vm-123-disk-0.qcow2)
Renamed "vm-123-disk-0.qcow2" to "snap_vm-123-disk-0_test.qcow2" in volume group "lvm"
failed to stat '/dev/lvm/snap_vm-123-disk-0_test.qcow2' <============
Use of uninitialized value $size in division (/) at /usr/share/perl5/PVE/Storage/LVMPlugin.pm line 671.
Rounding up size to full physical extent 4.00 MiB
Logical volume "vm-123-disk-0.qcow2" created.
Logical volume "vm-123-disk-0.qcow2" successfully removed.
Renamed "snap_vm-123-disk-0_test.qcow2" to "vm-123-disk-0.qcow2" in volume group "lvm"
2. storage migration with external snapshots needs to be implemented or disabled (LVM is raw-only at the moment)
3. moving a 'raw' lvm volume to the same storage with format 'qcow2' fails with "you can't move to the same storage with same format (500)" (UI and CLI, other way round works)
4. all snapshot volumes on extsnap dir storages will print warnings like
`this volume filename is not supported anymore`
when hitting `parse_namedir` - those can likely be avoided by skipping the warning if the name matches the snapshot format and external-snapshots are enabled..
5. the backing file paths are not relative for LVM
6. it's fairly easy to accidentally create qcow2-formatted LVM volumes, as opposed to the requirement to enable a non-UI storage option at storage creation time for dir storages, we might want to add some warning to the UI at least? or we could also guard usage of the format with a config option..
7. the snapshot name related helpers being public would be nice to avoid - one way would be to inline them and duplicate volume_snapshot_info in the LVM plugin, but if a better option is found that would be great
8. renaming a volume needs to be forbidden if snapshots exist, or the whole chain needs to be renamed (this is currently caught higher up in the stack, not sure if we need/want to also check in the storage layer?)
the parse_namedir change also needs a close look to see if some other plugins get broken.. (@Wolfgang - since you are working on related changes!)
I am sure there are more rough edges to be found, so don't consider the list above complete!
>
>
> pve-storage:
>
> Alexandre Derumier (13):
> plugin: add qemu_img_create
> plugin: add qemu_img_create_qcow2_backed
> plugin: add qemu_img_info
> plugin: add qemu_img_measure
> plugin: add qemu_img_resize
> rbd && zfs : create_base : remove $running param from volume_snapshot
> storage: volume_snapshot: add $running param
> storage: add rename_snapshot method
> storage: add volume_support_qemu_snapshot
> plugin: fix volname parsing
> qcow2: add external snapshot support
> lvmplugin: add qcow2 snapshot
> tests: add lvmplugin test
>
> ApiChangeLog | 15 +
> src/PVE/Storage.pm | 45 ++-
> src/PVE/Storage/BTRFSPlugin.pm | 6 +
> src/PVE/Storage/CIFSPlugin.pm | 1 +
> src/PVE/Storage/Common.pm | 33 ++
> src/PVE/Storage/DirPlugin.pm | 11 +
> src/PVE/Storage/ESXiPlugin.pm | 8 +-
> src/PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
> src/PVE/Storage/LVMPlugin.pm | 571 +++++++++++++++++++++-----
> src/PVE/Storage/LvmThinPlugin.pm | 8 +-
> src/PVE/Storage/NFSPlugin.pm | 1 +
> src/PVE/Storage/PBSPlugin.pm | 2 +-
> src/PVE/Storage/Plugin.pm | 518 +++++++++++++++++++++---
> src/PVE/Storage/RBDPlugin.pm | 18 +-
> src/PVE/Storage/ZFSPlugin.pm | 4 +-
> src/PVE/Storage/ZFSPoolPlugin.pm | 8 +-
> src/test/Makefile | 5 +-
> src/test/run_test_lvmplugin.pl | 577 +++++++++++++++++++++++++++
> 18 files changed, 1662 insertions(+), 171 deletions(-)
> create mode 100755 src/test/run_test_lvmplugin.pl
>
> qemu-server :
>
> Alexandre Derumier (4):
> qemu_img convert : add external snapshot support
> blockdev: add backing_chain support
> qcow2: add external snapshot support
> tests: fix efi vm-disk-100-0.raw -> vm-100-disk-0.raw
>
> src/PVE/QemuConfig.pm | 4 +-
> src/PVE/QemuServer.pm | 132 +++++--
> src/PVE/QemuServer/Blockdev.pm | 345 +++++++++++++++++-
> src/PVE/QemuServer/QemuImage.pm | 6 +-
> src/test/cfg2cmd/efi-raw-old.conf | 2 +-
> src/test/cfg2cmd/efi-raw-old.conf.cmd | 2 +-
> src/test/cfg2cmd/efi-raw-template.conf | 2 +-
> src/test/cfg2cmd/efi-raw-template.conf.cmd | 2 +-
> src/test/cfg2cmd/efi-raw.conf | 2 +-
> src/test/cfg2cmd/efi-raw.conf.cmd | 2 +-
> src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf | 2 +-
> .../cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd | 2 +-
> src/test/cfg2cmd/efi-secboot-and-tpm.conf | 2 +-
> src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd | 2 +-
> src/test/cfg2cmd/sev-es.conf | 2 +-
> src/test/cfg2cmd/sev-es.conf.cmd | 2 +-
> src/test/cfg2cmd/sev-std.conf | 2 +-
> src/test/cfg2cmd/sev-std.conf.cmd | 2 +-
> src/test/cfg2cmd/simple-backingchain.conf | 25 ++
> src/test/cfg2cmd/simple-backingchain.conf.cmd | 33 ++
> src/test/run_config2command_tests.pl | 47 +++
> src/test/run_qemu_img_convert_tests.pl | 59 +++
> src/test/snapshot-test.pm | 4 +-
> 23 files changed, 634 insertions(+), 49 deletions(-)
> create mode 100644 src/test/cfg2cmd/simple-backingchain.conf
> create mode 100644 src/test/cfg2cmd/simple-backingchain.conf.cmd
>
>
> --
> 2.39.5
* [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
@ 2025-07-09 16:21 Alexandre Derumier via pve-devel
2025-07-10 15:13 ` Fabian Grünbichler
0 siblings, 1 reply; 49+ messages in thread
From: Alexandre Derumier via pve-devel @ 2025-07-09 16:21 UTC (permalink / raw)
To: pve-devel; +Cc: Alexandre Derumier
From: Alexandre Derumier <alexandre.derumier@groupe-cyllene.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support
Date: Wed, 9 Jul 2025 18:21:45 +0200
Message-ID: <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
This patch series implements qcow2 external snapshot support for files and LVM volumes.
The current internal qcow2 snapshots have poor write performance because no metadata can be preallocated.
This is particularly visible on shared filesystems like ocfs2 or gfs2.
Users have also been reporting freeze/lock bugs for years when deleting snapshots on NFS
(disk access seems to be frozen for the whole duration of the delete).
This also opens the door to remote snapshot export/import for storage replication.
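For context, an external snapshot boils down to creating a fresh qcow2 overlay whose
backing file is the previous image: new writes land in the overlay, whose metadata can
be allocated efficiently, while the old image becomes read-only. A minimal sketch of
such a helper, in the spirit of the qemu_img_create_qcow2_backed patch below (the
function name and signature here are illustrative, not the actual patch code):

use strict;
use warnings;

use PVE::Tools qw(run_command);

# Sketch only: create a qcow2 overlay on top of the current image, as
# external snapshotting does.
sub create_qcow2_overlay {
    my ($overlay_path, $backing_path, $backing_fmt) = @_;

    my $cmd = [
        '/usr/bin/qemu-img', 'create',
        '-b', $backing_path, # previous image becomes the read-only backing file
        '-F', $backing_fmt,  # backing format must be given explicitly
        '-f', 'qcow2',
        $overlay_path,
    ];

    run_command($cmd, errmsg => "unable to create qcow2 overlay");
}

QEMU then reads unmodified clusters through the backing chain, and deleting a snapshot
becomes (roughly) a block-commit of one overlay into its neighbor rather than an
in-place rewrite of qcow2 metadata on the active image.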
Changelog v8:
storage:
- address Fabian's review comments
- add rename_snapshot
- add qemu_resize
- plugin: restrict volnames
- plugin: use 'external-snapshots' instead of 'snapext' (see the config sketch below)
qemu-server:
- fix EFI test using wrong volname vm-disk-100-0.raw
- use rename_snapshot
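The 'external-snapshots' naming above suggests a per-storage switch in
/etc/pve/storage.cfg along these lines (the exact stanza is my assumption based on
the option name, not copied from the patches):

dir: extsnap-dir
    path /var/lib/vz
    content images
    external-snapshots 1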
MAIN TODO:
- add snapshot tests in both pve-storage and qemu-server
- handle snapshot failures with multiple disks more gracefully
pve-storage:
Alexandre Derumier (13):
plugin: add qemu_img_create
plugin: add qemu_img_create_qcow2_backed
plugin: add qemu_img_info
plugin: add qemu_img_measure
plugin: add qemu_img_resize
rbd && zfs : create_base : remove $running param from volume_snapshot
storage: volume_snapshot: add $running param
storage: add rename_snapshot method
storage: add volume_support_qemu_snapshot
plugin: fix volname parsing
qcow2: add external snapshot support
lvmplugin: add qcow2 snapshot
tests: add lvmplugin test
ApiChangeLog | 15 +
src/PVE/Storage.pm | 45 ++-
src/PVE/Storage/BTRFSPlugin.pm | 6 +
src/PVE/Storage/CIFSPlugin.pm | 1 +
src/PVE/Storage/Common.pm | 33 ++
src/PVE/Storage/DirPlugin.pm | 11 +
src/PVE/Storage/ESXiPlugin.pm | 8 +-
src/PVE/Storage/ISCSIDirectPlugin.pm | 2 +-
src/PVE/Storage/LVMPlugin.pm | 571 +++++++++++++++++++++-----
src/PVE/Storage/LvmThinPlugin.pm | 8 +-
src/PVE/Storage/NFSPlugin.pm | 1 +
src/PVE/Storage/PBSPlugin.pm | 2 +-
src/PVE/Storage/Plugin.pm | 518 +++++++++++++++++++++---
src/PVE/Storage/RBDPlugin.pm | 18 +-
src/PVE/Storage/ZFSPlugin.pm | 4 +-
src/PVE/Storage/ZFSPoolPlugin.pm | 8 +-
src/test/Makefile | 5 +-
src/test/run_test_lvmplugin.pl | 577 +++++++++++++++++++++++++++
18 files changed, 1662 insertions(+), 171 deletions(-)
create mode 100755 src/test/run_test_lvmplugin.pl
qemu-server:
Alexandre Derumier (4):
qemu_img convert : add external snapshot support
blockdev: add backing_chain support
qcow2: add external snapshot support
tests: fix efi vm-disk-100-0.raw -> vm-100-disk-0.raw
src/PVE/QemuConfig.pm | 4 +-
src/PVE/QemuServer.pm | 132 +++++--
src/PVE/QemuServer/Blockdev.pm | 345 +++++++++++++++++-
src/PVE/QemuServer/QemuImage.pm | 6 +-
src/test/cfg2cmd/efi-raw-old.conf | 2 +-
src/test/cfg2cmd/efi-raw-old.conf.cmd | 2 +-
src/test/cfg2cmd/efi-raw-template.conf | 2 +-
src/test/cfg2cmd/efi-raw-template.conf.cmd | 2 +-
src/test/cfg2cmd/efi-raw.conf | 2 +-
src/test/cfg2cmd/efi-raw.conf.cmd | 2 +-
src/test/cfg2cmd/efi-secboot-and-tpm-q35.conf | 2 +-
.../cfg2cmd/efi-secboot-and-tpm-q35.conf.cmd | 2 +-
src/test/cfg2cmd/efi-secboot-and-tpm.conf | 2 +-
src/test/cfg2cmd/efi-secboot-and-tpm.conf.cmd | 2 +-
src/test/cfg2cmd/sev-es.conf | 2 +-
src/test/cfg2cmd/sev-es.conf.cmd | 2 +-
src/test/cfg2cmd/sev-std.conf | 2 +-
src/test/cfg2cmd/sev-std.conf.cmd | 2 +-
src/test/cfg2cmd/simple-backingchain.conf | 25 ++
src/test/cfg2cmd/simple-backingchain.conf.cmd | 33 ++
src/test/run_config2command_tests.pl | 47 +++
src/test/run_qemu_img_convert_tests.pl | 59 +++
src/test/snapshot-test.pm | 4 +-
23 files changed, 634 insertions(+), 49 deletions(-)
create mode 100644 src/test/cfg2cmd/simple-backingchain.conf
create mode 100644 src/test/cfg2cmd/simple-backingchain.conf.cmd
--
2.39.5
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 49+ messages in thread
end of thread, other threads:[~2025-07-17 16:04 UTC | newest]
Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20250709162202.2952597-1-alexandre.derumier@groupe-cyllene.com>
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 01/13] plugin: add qemu_img_create Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH qemu-server 1/4] qemu_img convert : add external snapshot support Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH qemu-server 2/4] blockdev: add backing_chain support Alexandre Derumier via pve-devel
2025-07-15 9:02 ` Wolfgang Bumiller
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 02/13] plugin: add qemu_img_create_qcow2_backed Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 03/13] plugin: add qemu_img_info Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH qemu-server 3/4] qcow2: add external snapshot support Alexandre Derumier via pve-devel
2025-07-15 13:21 ` Wolfgang Bumiller
2025-07-15 14:21 ` DERUMIER, Alexandre via pve-devel
2025-07-15 14:31 ` Wolfgang Bumiller
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 04/13] plugin: add qemu_img_measure Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH qemu-server 4/4] tests: fix efi vm-disk-100-0.raw -> vm-100-disk-0.raw Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 05/13] plugin: add qemu_img_resize Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 06/13] rbd && zfs : create_base : remove $running param from volume_snapshot Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 07/13] storage: volume_snapshot: add $running param Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 08/13] storage: add rename_snapshot method Alexandre Derumier via pve-devel
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 09/13] storage: add volume_support_qemu_snapshot Alexandre Derumier via pve-devel
2025-07-15 11:33 ` Wolfgang Bumiller
2025-07-15 13:59 ` DERUMIER, Alexandre via pve-devel
[not found] ` <4756bd155509ba20a3a6bf16191f1a539ee5b23e.camel@groupe-cyllene.com>
2025-07-15 14:49 ` Wolfgang Bumiller
2025-07-15 15:38 ` DERUMIER, Alexandre via pve-devel
2025-07-16 10:21 ` Wolfgang Bumiller
2025-07-09 16:21 ` [pve-devel] [PATCH pve-storage 10/13] plugin: fix volname parsing Alexandre Derumier via pve-devel
2025-07-09 16:22 ` [pve-devel] [PATCH pve-storage 11/13] qcow2: add external snapshot support Alexandre Derumier via pve-devel
2025-07-09 16:22 ` [pve-devel] [PATCH pve-storage 12/13] lvmplugin: add qcow2 snapshot Alexandre Derumier via pve-devel
2025-07-09 16:22 ` [pve-devel] [PATCH pve-storage 13/13] tests: add lvmplugin test Alexandre Derumier via pve-devel
2025-07-16 15:15 ` [pve-devel] [PATCH-SERIES v8 pve-storage/qemu-server] add external qcow2 snapshot support Thomas Lamprecht
2025-07-17 8:01 ` DERUMIER, Alexandre via pve-devel
2025-07-17 14:49 ` Tiago Sousa via pve-devel
[not found] ` <1fddff1a-b806-475a-ac75-1dd0107d1013@eurotux.com>
2025-07-17 15:08 ` DERUMIER, Alexandre via pve-devel
[not found] ` <47b76678f969ba97926c85af4bf8e50c9dda161d.camel@groupe-cyllene.com>
2025-07-17 15:42 ` Tiago Sousa via pve-devel
[not found] ` <58c2db18-c2c2-4c91-9521-bdb42a302e93@eurotux.com>
2025-07-17 15:53 ` DERUMIER, Alexandre via pve-devel
2025-07-17 16:05 ` DERUMIER, Alexandre via pve-devel
2025-07-09 16:21 Alexandre Derumier via pve-devel
2025-07-10 15:13 ` Fabian Grünbichler
2025-07-10 15:46 ` DERUMIER, Alexandre via pve-devel
[not found] ` <c6c3457906642a30ddffc0f6b9d28ea6a745ac7c.camel@groupe-cyllene.com>
2025-07-10 16:29 ` Thomas Lamprecht
2025-07-11 12:04 ` DERUMIER, Alexandre via pve-devel
2025-07-14 6:25 ` DERUMIER, Alexandre via pve-devel
2025-07-14 8:18 ` DERUMIER, Alexandre via pve-devel
2025-07-14 8:42 ` DERUMIER, Alexandre via pve-devel
[not found] ` <26badbf66613a89e63eaad8b24dd05567900250b.camel@groupe-cyllene.com>
2025-07-14 11:02 ` Fabian Grünbichler
2025-07-15 4:15 ` DERUMIER, Alexandre via pve-devel
[not found] ` <719c71b1148846e0cdd7e5c149d8b20146b4d043.camel@groupe-cyllene.com>
2025-07-14 11:04 ` Fabian Grünbichler
2025-07-14 11:11 ` Thomas Lamprecht
2025-07-14 11:15 ` Fabian Grünbichler
2025-07-14 11:27 ` Thomas Lamprecht
2025-07-14 11:46 ` Fabian Grünbichler
2025-07-14 15:12 ` Tiago Sousa via pve-devel
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.