From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [storage] Lack of deactivate_volume call after cloning from a snapshot
Date: Fri, 03 Oct 2025 12:16:17 +0200	[thread overview]
Message-ID: <1759486487.j2y5qrrzjm.astroid@yuna.none> (raw)
In-Reply-To: <mailman.534.1759272790.390.pve-devel@lists.proxmox.com>

On October 1, 2025 12:39 am, Andrei Perapiolkin via pve-devel wrote:
> Hi,
> 
> 'deactivate_volume' in the storage plugin is not called after cloning
> completes if the clone is initiated from the Proxmox web UI and both VMs
> are on the same host.
> 
> I’ve created a bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=6879
> 
> In my understanding, the problem is related to the following API method,
> located in PVE/API2/Qemu.pm around lines 1459-4568:
> 
>     method({ name => 'clone_vm', path => '{vmid}/clone', ...
> 
> In this API method, deactivation calls are triggered only when the
> $target parameter is provided.
> 
>     if ($target) {
>         if (!$running) {
>             # always deactivate volumes - avoids that LVM LVs are active on several nodes
>             eval {
>                 PVE::Storage::deactivate_volumes($storecfg, $vollist, $snapname);
>             };
>             # but only warn when that fails (e.g., parallel clones keeping them active)
>             log_warn($@) if $@;
>         }
> 
>         PVE::Storage::deactivate_volumes($storecfg, $newvollist);
> 
>         my $newconffile = PVE::QemuConfig->config_file($newid, $target);
>         die "Failed to move config to node '$target' - rename failed: $!\n"
>             if !rename($conffile, $newconffile);
>     }
> 
> https://github.com/proxmox/qemu-server/blob/31d6f5f63bd6edb6c43de3e0213d38199c350cd4/src/PVE/API2/Qemu.pm#L4533C1-L4547C18
> 
> It seems that the web UI, when cloning on the same node, does not provide
> '$target', and as a result no deactivation is triggered for the volume
> snapshot or the target volume.
> 
> Is this expected behavior?
> 
> Would it be OK to move the 'deactivate' section outside of the $target check?
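
For reference, the change being asked about would amount to roughly the
following untested sketch (not what the current code does): both
deactivate_volumes() calls hoisted out of the $target branch so they also run
for same-node clones, with only the config move staying target-specific.

    if (!$running) {
        # always deactivate volumes - avoids that LVM LVs are active on several nodes
        eval { PVE::Storage::deactivate_volumes($storecfg, $vollist, $snapname); };
        # but only warn when that fails (e.g., parallel clones keeping them active)
        log_warn($@) if $@;
    }

    # also deactivate the freshly cloned volumes, regardless of target node
    PVE::Storage::deactivate_volumes($storecfg, $newvollist);

    if ($target) {
        # only the config move remains target-specific
        my $newconffile = PVE::QemuConfig->config_file($newid, $target);
        die "Failed to move config to node '$target' - rename failed: $!\n"
            if !rename($conffile, $newconffile);
    }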

Also filed as https://bugzilla.proxmox.com/show_bug.cgi?id=6879; see my
reply there:

The issue is that multiple such clones can run in parallel, and there is
currently no reference counting that would make deactivation safe in this
scenario. That is also the reason why we don't deactivate volumes in other
places where it is impossible to tell whether any other users remain.
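
To illustrate why, consider what safe deactivation would need: something like
per-volume reference counting, so that only the last user of a (snapshot)
volume deactivates it. The following is a purely illustrative, in-process
sketch; the activate_ref/release_ref helpers are hypothetical and do not exist
in PVE::Storage.

    # hypothetical ref-counting wrappers (illustration only, not existing API)
    my %active_refs;    # volid => number of concurrent users

    sub activate_ref {
        my ($storecfg, $volid, $snapname) = @_;
        $active_refs{$volid} //= 0;
        # only the first user actually activates the volume
        PVE::Storage::activate_volumes($storecfg, [$volid], $snapname)
            if $active_refs{$volid}++ == 0;
    }

    sub release_ref {
        my ($storecfg, $volid, $snapname) = @_;
        # only the last user actually deactivates the volume
        PVE::Storage::deactivate_volumes($storecfg, [$volid], $snapname)
            if --$active_refs{$volid} == 0;
    }

In practice such a count would have to live in state shared between tasks
(parallel clones run as separate workers, possibly on different nodes), which
is why simply hoisting the deactivation out of the $target check is not
sufficient on its own.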

