* [pve-devel] [PATCH qemu] add patch to fix deadlock with VirtIO block and iothread during QMP stop
@ 2024-02-21 13:01 Fiona Ebner
From: Fiona Ebner @ 2024-02-21 13:01 UTC
To: pve-devel
Backported from commit bfa36802d1 ("virtio-blk: avoid using ioeventfd
state in irqfd conditional"); the backport is needed because the
dataplane -> ioeventfd rework/rename hasn't happened yet in this QEMU
version.
Reported in the community forum [0] and reproduced by running a backup
loop to PBS in suspend mode, with fio generating heavy IO in the guest,
on an RBD storage (with krbd).
[0]: https://forum.proxmox.com/threads/141320
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
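Note: for readers who want to see the failure mode outside of QEMU,
below is a minimal standalone C model of the two conditionals. All
names in it are invented for illustration; this is not QEMU code and it
makes no attempt to model the BQL itself. The old check consults device
state, which the stop path has already flipped, while the fixed check
consults the calling thread's identity, which cannot change under a
running completion:

    /* Build with: cc -pthread model.c */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_t io_thread_id;
    static bool io_started;              /* models s->ioeventfd_started */

    /* Analogous to qemu_in_iothread(): thread identity is stable. */
    static bool in_io_thread(void)
    {
        return pthread_equal(pthread_self(), io_thread_id);
    }

    static void complete_request(void)
    {
        /* Old conditional: device state. During stop, io_started is
         * already false while completions still run in the I/O thread,
         * so this wrongly picks the path that needs the BQL. */
        const char *old = io_started ? "irqfd" : "qdev irq (needs BQL)";
        /* Fixed conditional: execution context, immune to the flip. */
        const char *fixed = in_io_thread() ? "irqfd" : "qdev irq (needs BQL)";
        printf("old check -> %s, fixed check -> %s\n", old, fixed);
    }

    static void *io_thread_fn(void *arg)
    {
        (void)arg;
        io_thread_id = pthread_self();   /* set before any check here */
        complete_request();              /* completion during blk_drain() */
        return NULL;
    }

    int main(void)
    {
        io_started = false;  /* virtio_blk_stop_ioeventfd() cleared it */
        pthread_t t;
        pthread_create(&t, NULL, io_thread_fn, NULL);
        pthread_join(t, NULL);
        return 0;
    }

Run from the modeled I/O thread, the old check picks the BQL-requiring
qdev irq path while the fixed check correctly picks the irqfd path.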
...-using-ioeventfd-state-in-irqfd-cond.patch | 61 +++++++++++++++++++
debian/patches/series | 1 +
2 files changed, 62 insertions(+)
create mode 100644 debian/patches/extra/0013-virtio-blk-avoid-using-ioeventfd-state-in-irqfd-cond.patch
diff --git a/debian/patches/extra/0013-virtio-blk-avoid-using-ioeventfd-state-in-irqfd-cond.patch b/debian/patches/extra/0013-virtio-blk-avoid-using-ioeventfd-state-in-irqfd-cond.patch
new file mode 100644
index 0000000..8109e7d
--- /dev/null
+++ b/debian/patches/extra/0013-virtio-blk-avoid-using-ioeventfd-state-in-irqfd-cond.patch
@@ -0,0 +1,61 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Stefan Hajnoczi <stefanha@redhat.com>
+Date: Mon, 22 Jan 2024 12:26:25 -0500
+Subject: [PATCH] virtio-blk: avoid using ioeventfd state in irqfd conditional
+
+Requests that complete in an IOThread use irqfd to notify the guest
+while requests that complete in the main loop thread use the traditional
+qdev irq code path. The reason for this conditional is that the irq code
+path requires the BQL:
+
+ if (s->ioeventfd_started && !s->ioeventfd_disabled) {
+ virtio_notify_irqfd(vdev, req->vq);
+ } else {
+ virtio_notify(vdev, req->vq);
+ }
+
+There is a corner case where the conditional invokes the irq code path
+instead of the irqfd code path:
+
+ static void virtio_blk_stop_ioeventfd(VirtIODevice *vdev)
+ {
+ ...
+ /*
+ * Set ->ioeventfd_started to false before draining so that host notifiers
+ * are not detached/attached anymore.
+ */
+ s->ioeventfd_started = false;
+
+ /* Wait for virtio_blk_dma_restart_bh() and in flight I/O to complete */
+ blk_drain(s->conf.conf.blk);
+
+During blk_drain() the conditional produces the wrong result because
+ioeventfd_started is false.
+
+Use qemu_in_iothread() instead of checking the ioeventfd state.
+
+Cc: qemu-stable@nongnu.org
+Buglink: https://issues.redhat.com/browse/RHEL-15394
+Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
+Message-ID: <20240122172625.415386-1-stefanha@redhat.com>
+Reviewed-by: Kevin Wolf <kwolf@redhat.com>
+Signed-off-by: Kevin Wolf <kwolf@redhat.com>
+[FE: backport: dataplane -> ioeventfd rework didn't happen yet]
+Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
+---
+ hw/block/virtio-blk.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
+index 39e7f23fab..61bd1f6859 100644
+--- a/hw/block/virtio-blk.c
++++ b/hw/block/virtio-blk.c
+@@ -64,7 +64,7 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
+ iov_discard_undo(&req->inhdr_undo);
+ iov_discard_undo(&req->outhdr_undo);
+ virtqueue_push(req->vq, &req->elem, req->in_len);
+- if (s->dataplane_started && !s->dataplane_disabled) {
++ if (qemu_in_iothread()) {
+ virtio_blk_data_plane_notify(s->dataplane, req->vq);
+ } else {
+ virtio_notify(vdev, req->vq);
diff --git a/debian/patches/series b/debian/patches/series
index 4d75ec3..90553de 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -10,6 +10,7 @@ extra/0009-ui-clipboard-mark-type-as-not-available-when-there-i.patch
extra/0010-virtio-scsi-Attach-event-vq-notifier-with-no_poll.patch
extra/0011-virtio-Re-enable-notifications-after-drain.patch
extra/0012-qemu_init-increase-NOFILE-soft-limit-on-POSIX.patch
+extra/0013-virtio-blk-avoid-using-ioeventfd-state-in-irqfd-cond.patch
bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
bitmap-mirror/0002-drive-mirror-add-support-for-conditional-and-always-.patch
bitmap-mirror/0003-mirror-add-check-for-bitmap-mode-without-bitmap.patch
--
2.39.2
* [pve-devel] applied: [PATCH qemu] add patch to fix deadlock with VirtIO block and iothread during QMP stop
From: Thomas Lamprecht @ 2024-02-21 19:18 UTC
To: Proxmox VE development discussion, Fiona Ebner
On 21/02/2024 at 14:01, Fiona Ebner wrote:
> Backported from commit bfa36802d1 ("virtio-blk: avoid using ioeventfd
> state in irqfd conditional"); the backport is needed because the
> dataplane -> ioeventfd rework/rename hasn't happened yet in this QEMU
> version.
>
> Reported in the community forum [0] and reproduced by running a backup
> loop to PBS in suspend mode, with fio generating heavy IO in the guest,
> on an RBD storage (with krbd).
>
> [0]: https://forum.proxmox.com/threads/141320
>
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> ...-using-ioeventfd-state-in-irqfd-cond.patch | 61 +++++++++++++++++++
> debian/patches/series | 1 +
> 2 files changed, 62 insertions(+)
> create mode 100644 debian/patches/extra/0013-virtio-blk-avoid-using-ioeventfd-state-in-irqfd-cond.patch
>
>
applied, thanks!