From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH qemu-server] fix #2258: select correct device when removing drive snapshot via QEMU
Date: Wed,  3 Jan 2024 14:41:49 +0100
Message-ID: <20240103134149.86608-1-f.ebner@proxmox.com>

The QMP command needs to be issued for the device where the disk is
currently attached, not for the device where the disk was attached at
the time the snapshot was taken.

Fixes the following scenario with a disk image for which
do_snapshots_with_qemu() is true (i.e. qcow2 or RBD+krbd=0):
1. Take snapshot while disk image is attached to a given bus+ID.
2. Detach disk image.
3. Attach disk image to a different bus+ID.
4. Remove snapshot.

Previously, this would result in an error like:
> blockdev-snapshot-delete-internal-sync' failed - Cannot find device=drive-scsi1 nor node_name=drive-scsi1
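
For illustration, a minimal sketch of the failing call before this patch,
assuming the scenario above with the snapshot taken while the image was
attached as scsi1 and the image later re-attached as virtio1 (the concrete
bus+IDs and example values are just assumptions):

    use PVE::QemuServer::Monitor qw(mon_cmd);

    my ($vmid, $snap) = (100, 'snap1');    # example values
    # The caller still passes the snapshot-time device ID ...
    mon_cmd(
        $vmid,
        'blockdev-snapshot-delete-internal-sync',
        device => 'drive-scsi1',    # attachment at the time of the snapshot
        name   => $snap,
    );
    # ... but the image is now attached as 'drive-virtio1', so QEMU cannot
    # resolve 'drive-scsi1' and fails with the error quoted above.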

While the $running parameter for volume_snapshot_delete() is planned
to be removed on the next storage plugin APIAGE reset, it currently
causes an immediate return in Storage/Plugin.pm. Passing a truthy
value would therefore prevent removing a snapshot from a now-unused
qcow2 disk that was still attached at the time the snapshot was taken.
For that reason, and because some exotic third-party plugin might
still rely on the parameter, it's necessary to keep passing the same
value as before.
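
To make the early return mentioned above concrete, here is a sketch of the
relevant behavior (paraphrased, not the literal Storage/Plugin.pm code): a
truthy $running makes the base plugin assume QEMU already removed the
internal snapshot and return without touching the image.

    sub volume_snapshot_delete {
        my ($class, $scfg, $storeid, $volname, $snap, $running) = @_;

        # truthy $running means "handled via QEMU", so nothing is done here
        # and e.g. 'qemu-img snapshot -d' is never run for a qcow2 image
        return 1 if $running;

        ...;
    }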

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

Nicer to read with a word-based diff, e.g.:
git log -p -w --word-diff=color --word-diff-regex='\w+'

 PVE/QemuServer.pm | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 3b1540b6..82b78fa5 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4754,21 +4754,26 @@ sub qemu_volume_snapshot_delete {
     my ($vmid, $deviceid, $storecfg, $volid, $snap) = @_;
 
     my $running = check_running($vmid);
+    my $attached_deviceid;
 
-    if($running) {
-
-	$running = undef;
+    if ($running) {
 	my $conf = PVE::QemuConfig->load_config($vmid);
 	PVE::QemuConfig->foreach_volume($conf, sub {
 	    my ($ds, $drive) = @_;
-	    $running = 1 if $drive->{file} eq $volid;
+	    $attached_deviceid = "drive-$ds" if $drive->{file} eq $volid;
 	});
     }
 
-    if ($running && do_snapshots_with_qemu($storecfg, $volid, $deviceid)) {
-	mon_cmd($vmid, 'blockdev-snapshot-delete-internal-sync', device => $deviceid, name => $snap);
+    if ($attached_deviceid && do_snapshots_with_qemu($storecfg, $volid, $attached_deviceid)) {
+	mon_cmd(
+	    $vmid,
+	    'blockdev-snapshot-delete-internal-sync',
+	    device => $attached_deviceid,
+	    name => $snap,
+	);
     } else {
-	PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snap, $running);
+	PVE::Storage::volume_snapshot_delete(
+	    $storecfg, $volid, $snap, $attached_deviceid ? 1 : undef);
     }
 }
 
-- 
2.39.2

Thread overview: 2+ messages
2024-01-03 13:41 Fiona Ebner [this message]
2024-01-09  9:27 ` [pve-devel] applied: " Fabian Grünbichler