public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH qemu-server] block job: mirror: always detach the target node upon error
@ 2025-07-28 10:06 Fiona Ebner
  2025-07-28 12:08 ` Hannes Duerr
  2025-07-29  6:08 ` Thomas Lamprecht
  0 siblings, 2 replies; 3+ messages in thread
From: Fiona Ebner @ 2025-07-28 10:06 UTC (permalink / raw)
  To: pve-devel

For example, attempting to live-migrate a disk again after failure
would not work, because a node with the same name would still be
attached.

Mirroring the disk to a shared storage after VM live-migration failure
and then attempting VM live-migration again would result in

> migration status error: failed - Error in migration completion: Input/output error

This is because for migration completion, all attached block devices
are flushed, but the NBD export does not exist on the target side
anymore.

Fixes: 1da91175 ("block job: add blockdev mirror")
Reported-by: Hannes Dürr <h.duerr@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/QemuServer/BlockJob.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
index 9c04600b..f742b184 100644
--- a/src/PVE/QemuServer/BlockJob.pm
+++ b/src/PVE/QemuServer/BlockJob.pm
@@ -28,6 +28,9 @@ sub qemu_handle_concluded_blockjob {
     eval { mon_cmd($vmid, 'job-dismiss', id => $job_id); };
     log_warn("$job_id: failed to dismiss job - $@") if $@;
 
+    # If there was an error, always detach the target.
+    $job->{'detach-node-name'} = $job->{'target-node-name'} if $qmp_info->{error};
+
     if (my $node_name = $job->{'detach-node-name'}) {
         eval { PVE::QemuServer::Blockdev::detach($vmid, $node_name); };
         log_warn($@) if $@;
-- 
2.47.2
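To make the control flow of the hunk above easier to follow, here is a minimal sketch of the patched logic, written in Python purely for illustration (the real code is the Perl sub qemu_handle_concluded_blockjob; the dict-based job/info structures and the `detach` callback, which stands in for PVE::QemuServer::Blockdev::detach, are assumptions of this sketch):

```python
# Hypothetical sketch only; not the actual PVE code.
def handle_concluded_blockjob(job, qmp_info, detach):
    """Mimic the patched cleanup: on job error, always detach the target node.

    job      -- dict with optional 'target-node-name' / 'detach-node-name'
    qmp_info -- dict describing the concluded job; 'error' is set on failure
    detach   -- callback standing in for PVE::QemuServer::Blockdev::detach
    """
    # The patch's addition: an errored mirror must always drop its target,
    # otherwise a node with the same name lingers and blocks a retry.
    if qmp_info.get('error'):
        job['detach-node-name'] = job.get('target-node-name')

    # Pre-existing logic: detach whatever node was scheduled for detachment.
    node_name = job.get('detach-node-name')
    if node_name:
        detach(node_name)
```

Before the patch, the target node was only detached when the caller had explicitly set 'detach-node-name'; after an error it stayed attached, so a retried mirror could not create a node under the same name.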



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


* Re: [pve-devel] [PATCH qemu-server] block job: mirror: always detach the target node upon error
  2025-07-28 10:06 [pve-devel] [PATCH qemu-server] block job: mirror: always detach the target node upon error Fiona Ebner
@ 2025-07-28 12:08 ` Hannes Duerr
  2025-07-29  6:08 ` Thomas Lamprecht
  1 sibling, 0 replies; 3+ messages in thread
From: Hannes Duerr @ 2025-07-28 12:08 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fiona Ebner

I attempted to migrate a VM with its disk on local storage, but this 
failed due to insufficient space on the target node.
I then moved the disk to shared storage and attempted the migration again.
This time, the migration succeeded.

Please consider adding:

Tested-by: Hannes Duerr <h.duerr@proxmox.com>

On 7/28/25 12:07 PM, Fiona Ebner wrote:
> For example, attempting to live-migrate a disk again after failure
> would not work, because a node with the same name would still be
> attached.
>
> Mirroring the disk to a shared storage after VM live-migration failure
> and then attempting VM live-migration again would result in
>
>> migration status error: failed - Error in migration completion: Input/output error
> This is because for migration completion, all attached block devices
> are flushed, but the NBD export does not exist on the target side
> anymore.
>
> Fixes: 1da91175 ("block job: add blockdev mirror")
> Reported-by: Hannes Dürr <h.duerr@proxmox.com>
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
>   src/PVE/QemuServer/BlockJob.pm | 3 +++
>   1 file changed, 3 insertions(+)
>
> diff --git a/src/PVE/QemuServer/BlockJob.pm b/src/PVE/QemuServer/BlockJob.pm
> index 9c04600b..f742b184 100644
> --- a/src/PVE/QemuServer/BlockJob.pm
> +++ b/src/PVE/QemuServer/BlockJob.pm
> @@ -28,6 +28,9 @@ sub qemu_handle_concluded_blockjob {
>       eval { mon_cmd($vmid, 'job-dismiss', id => $job_id); };
>       log_warn("$job_id: failed to dismiss job - $@") if $@;
>   
> +    # If there was an error, always detach the target.
> +    $job->{'detach-node-name'} = $job->{'target-node-name'} if $qmp_info->{error};
> +
>       if (my $node_name = $job->{'detach-node-name'}) {
>           eval { PVE::QemuServer::Blockdev::detach($vmid, $node_name); };
>           log_warn($@) if $@;




* Re: [pve-devel] [PATCH qemu-server] block job: mirror: always detach the target node upon error
  2025-07-28 10:06 [pve-devel] [PATCH qemu-server] block job: mirror: always detach the target node upon error Fiona Ebner
  2025-07-28 12:08 ` Hannes Duerr
@ 2025-07-29  6:08 ` Thomas Lamprecht
  1 sibling, 0 replies; 3+ messages in thread
From: Thomas Lamprecht @ 2025-07-29  6:08 UTC (permalink / raw)
  To: pve-devel, Fiona Ebner

On Mon, 28 Jul 2025 12:06:36 +0200, Fiona Ebner wrote:
> For example, attempting to live-migrate a disk again after failure
> would not work, because a node with the same name would still be
> attached.
> 
> Mirroring the disk to a shared storage after VM live-migration failure
> and then attempting VM live-migration again, would result in
> 
> [...]

Applied, thanks!

[1/1] block job: mirror: always detach the target node upon error
      commit: 8e671e795f43ef6c314a4da7602c80049c275c03




