From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH qemu 1/2] add patch to work around stuck guest IO with iothread and VirtIO block/SCSI
Date: Mon, 11 Dec 2023 14:28:38 +0100
Message-ID: <20231211132839.747351-1-f.ebner@proxmox.com>

When using iothread, after commits
1665d9326f ("virtio-blk: implement BlockDevOps->drained_begin()")
766aa2de0f ("virtio-scsi: implement BlockDevOps->drained_begin()")
it can happen that polling gets stuck when draining. This causes
IO in the guest to get completely stuck.

A workaround for users is stopping and resuming the vCPUs, because
that also stops and resumes the dataplanes, which kicks the host
notifiers.
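
For example (an illustrative sketch, with VM ID 100 as a placeholder),
pausing and resuming the affected VM via the Proxmox CLI should
trigger exactly such a stop/resume cycle:

    # pause the VM (stops the vCPUs and the dataplanes)
    qm suspend 100
    # resume the VM (restarts the dataplanes, kicking the host notifiers)
    qm resume 100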

This can happen with block jobs like backup and drive mirror as well
as with hotplug [2].
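
For instance (an illustration, not part of this patch; the storage
name 'local' is a placeholder), backing up a VM whose disks use an
iothread drains the block nodes and can run into the issue:

    vzdump 100 --storage local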

There are reports in the community forum that might be about this
issue [0][1], and there is also one in the enterprise support channel.

As a workaround in the code, simply re-enable notifications and kick
the virtqueue after draining. Draining is already costly and rare, so
there is no need to worry about a performance penalty here. This
approach was taken from a comment by a QEMU developer [3] (in my
debugging, I had already found that re-enabling notifications works
around the issue, but also kicking the queue is more complete).

[0]: https://forum.proxmox.com/threads/137286/
[1]: https://forum.proxmox.com/threads/137536/
[2]: https://issues.redhat.com/browse/RHEL-3934
[3]: https://issues.redhat.com/browse/RHEL-3934?focusedId=23562096&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-23562096

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 ...work-around-iothread-polling-getting.patch | 66 +++++++++++++++++++
 debian/patches/series                         |  1 +
 2 files changed, 67 insertions(+)
 create mode 100644 debian/patches/pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch

diff --git a/debian/patches/pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch b/debian/patches/pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch
new file mode 100644
index 0000000..3ac10a8
--- /dev/null
+++ b/debian/patches/pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch
@@ -0,0 +1,66 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Fiona Ebner <f.ebner@proxmox.com>
+Date: Tue, 5 Dec 2023 14:05:49 +0100
+Subject: [PATCH] virtio blk/scsi: work around iothread polling getting stuck
+ with drain
+
+When using iothread, after commits
+1665d9326f ("virtio-blk: implement BlockDevOps->drained_begin()")
+766aa2de0f ("virtio-scsi: implement BlockDevOps->drained_begin()")
+it can happen that polling gets stuck when draining. This causes
+IO in the guest to get completely stuck.
+
+A workaround for users is stopping and resuming the vCPUs, because
+that also stops and resumes the dataplanes, which kicks the host
+notifiers.
+
+This can happen with block jobs like backup and drive mirror as well
+as with hotplug [2].
+
+There are reports in the community forum that might be about this
+issue [0][1], and there is also one in the enterprise support channel.
+
+As a workaround in the code, simply re-enable notifications and kick
+the virtqueue after draining. Draining is already costly and rare, so
+there is no need to worry about a performance penalty here. This
+approach was taken from a comment by a QEMU developer [3] (in my
+debugging, I had already found that re-enabling notifications works
+around the issue, but also kicking the queue is more complete).
+
+[0]: https://forum.proxmox.com/threads/137286/
+[1]: https://forum.proxmox.com/threads/137536/
+[2]: https://issues.redhat.com/browse/RHEL-3934
+[3]: https://issues.redhat.com/browse/RHEL-3934?focusedId=23562096&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-23562096
+
+Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
+---
+ hw/block/virtio-blk.c | 2 ++
+ hw/scsi/virtio-scsi.c | 2 ++
+ 2 files changed, 4 insertions(+)
+
+diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
+index 39e7f23fab..22502047d5 100644
+--- a/hw/block/virtio-blk.c
++++ b/hw/block/virtio-blk.c
+@@ -1537,6 +1537,8 @@ static void virtio_blk_drained_end(void *opaque)
+     for (uint16_t i = 0; i < s->conf.num_queues; i++) {
+         VirtQueue *vq = virtio_get_queue(vdev, i);
+         virtio_queue_aio_attach_host_notifier(vq, ctx);
++        virtio_queue_set_notification(vq, 1);
++        virtio_queue_notify(vdev, i);
+     }
+ }
+ 
+diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
+index 45b95ea070..a7bddbf899 100644
+--- a/hw/scsi/virtio-scsi.c
++++ b/hw/scsi/virtio-scsi.c
+@@ -1166,6 +1166,8 @@ static void virtio_scsi_drained_end(SCSIBus *bus)
+     for (uint32_t i = 0; i < total_queues; i++) {
+         VirtQueue *vq = virtio_get_queue(vdev, i);
+         virtio_queue_aio_attach_host_notifier(vq, s->ctx);
++        virtio_queue_set_notification(vq, 1);
++        virtio_queue_notify(vdev, i);
+     }
+ }
+ 
diff --git a/debian/patches/series b/debian/patches/series
index 9938b8e..0e21f1f 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -59,3 +59,4 @@ pve/0042-Revert-block-rbd-implement-bdrv_co_block_status.patch
 pve/0043-alloc-track-fix-deadlock-during-drop.patch
 pve/0044-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch
 pve/0045-savevm-async-don-t-hold-BQL-during-setup.patch
+pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch
-- 
2.39.2

Thread overview: 3+ messages
2023-12-11 13:28 Fiona Ebner [this message]
2023-12-11 13:28 ` [pve-devel] [PATCH qemu 2/2] pick fix for potential deadlock with QMP resize and iothread Fiona Ebner
2023-12-11 16:17 ` [pve-devel] applied-series: [PATCH qemu 1/2] add patch to work around stuck guest IO with iothread and VirtIO block/SCSI Thomas Lamprecht
