From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
	Fiona Ebner <f.ebner@proxmox.com>
Subject: [pve-devel] applied: [PATCH v2 qemu] add patches to work around stuck guest IO with iothread and VirtIO block/SCSI
Date: Fri, 2 Feb 2024 19:42:56 +0100
Message-ID: <c101e725-909f-42f8-898f-c856813d4de9@proxmox.com>
In-Reply-To: <20240125094030.72875-1-f.ebner@proxmox.com>

On 25/01/2024 10:40, Fiona Ebner wrote:
> This essentially repeats commit 6b7c181 ("add patch to work around
> stuck guest IO with iothread and VirtIO block/SCSI") with an added
> fix for the SCSI event virtqueue, which requires special handling.
> This is to avoid the issue [3] that made the revert 2a49e66 ("Revert
> "add patch to work around stuck guest IO with iothread and VirtIO
> block/SCSI"") necessary the first time around.
> 
> When using an iothread, after commits
> 1665d9326f ("virtio-blk: implement BlockDevOps->drained_begin()")
> 766aa2de0f ("virtio-scsi: implement BlockDevOps->drained_begin()")
> polling can get stuck during drain, causing IO in the guest to hang
> completely.
> 
> A workaround for users is stopping and resuming the vCPUs, because
> that also stops and resumes the dataplanes, which in turn kicks the
> host notifiers.
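
For reference, a hedged example of that user-side workaround: with the
QEMU monitor this amounts to issuing "stop" followed by "cont", which
pauses and resumes the vCPUs and, with them, the dataplane.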
> 
> This can happen with block jobs like backup and drive mirror as well
> as with hotplug [2].
> 
> There are reports in the community forum that might be about this
> issue [0][1], and there is also one in the enterprise support channel.
> 
> As a workaround in the code, just re-enable notifications and kick the
> virtqueue after draining. Draining is already costly and rare, so
> there is no need to worry about a performance penalty here.
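
For illustration, a minimal sketch of that idea, built from QEMU's
existing virtio helpers. This is hedged: it shows the shape of the
workaround, not the verbatim upstream patch.

    #include "qemu/osdep.h"
    #include "qemu/event_notifier.h"
    #include "hw/virtio/virtio.h"

    /*
     * Sketch: run after drained_end has re-attached the host notifier.
     * The drain may have interrupted a polling section that left guest
     * notifications disabled, so nothing would ever wake the queue.
     */
    static void virtio_reenable_and_kick(VirtQueue *vq)
    {
        /* Re-enable notifications so future guest requests are seen. */
        virtio_queue_set_notification(vq, 1);

        /* Kick once to process any requests the guest queued while
         * notifications were disabled. */
        event_notifier_set(virtio_queue_get_host_notifier(vq));
    }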
> 
> Take special care to attach the SCSI event virtqueue host notifier
> with the _no_poll() variant like in virtio_scsi_dataplane_start().
> This avoids the issue from the first attempted fix where the iothread
> would suddenly loop with 100% CPU usage whenever some guest IO came in
> [3]. This is necessary because of commit 38738f7dbb ("virtio-scsi:
> don't waste CPU polling the event virtqueue"). See [4] for the
> relevant discussion.
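
And a hedged sketch of that special handling, mirroring the attach
loop in virtio_scsi_dataplane_start(); field names are from QEMU's
VirtIOSCSICommon, and again this is the shape, not the exact patch.

    #include "qemu/osdep.h"
    #include "hw/virtio/virtio.h"
    #include "hw/virtio/virtio-scsi.h"

    static void scsi_attach_notifiers(VirtIOSCSICommon *vs, AioContext *ctx)
    {
        int i;

        virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, ctx);
        /* The event vq must not get the polling handlers: since commit
         * 38738f7dbb it is never polled, and attaching it with polling
         * enabled made the iothread spin at 100% CPU as soon as guest
         * IO came in [3]. */
        virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, ctx);
        for (i = 0; i < vs->conf.num_queues; i++) {
            virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], ctx);
        }
    }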
> 
> [0]: https://forum.proxmox.com/threads/137286/
> [1]: https://forum.proxmox.com/threads/137536/
> [2]: https://issues.redhat.com/browse/RHEL-3934
> [3]: https://forum.proxmox.com/threads/138140/
> [4]: https://lore.kernel.org/qemu-devel/bfc7b20c-2144-46e9-acbc-e726276c5a31@proxmox.com/
> 
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> 
> Changes in v2:
>     * Pick (functionally equivalent) upstream patches to reduce diff.
> 
>  ...ttach-event-vq-notifier-with-no_poll.patch |  62 +++++++++
>  ...-notifications-disabled-during-drain.patch | 126 ++++++++++++++++++
>  debian/patches/series                         |   2 +
>  3 files changed, 190 insertions(+)
>  create mode 100644 debian/patches/extra/0012-virtio-scsi-Attach-event-vq-notifier-with-no_poll.patch
>  create mode 100644 debian/patches/extra/0013-virtio-Keep-notifications-disabled-during-drain.patch
> 
>

basically applied this, thanks.

"Basically", because I went for the v2 [0]: the series file needed some
adaptation due to the recent v8.1.5 patch, so I made a new commit but
kept your message and added an Originally-by trailer; hope that's all
right with you.

[0]: https://lore.kernel.org/qemu-devel/20240202153158.788922-1-hreitz@redhat.com/
     Note that I still only took the first two patches; the clean-up one
     did not apply on top of 8.1.5 and I did not bother checking that
     out closely.



