public inbox for pve-devel@lists.proxmox.com
From: Fiona Ebner <f.ebner@proxmox.com>
To: Thomas Lamprecht <t.lamprecht@proxmox.com>, pve-devel@lists.proxmox.com
Subject: Re: [PATCH qemu] fdmon-io_uring: avoid idle event loop being accounted as IO wait
Date: Wed, 22 Apr 2026 16:55:03 +0200	[thread overview]
Message-ID: <c2117d42-be80-4397-9420-129a3e77706a@proxmox.com> (raw)
In-Reply-To: <20260414211600.4023940-1-t.lamprecht@proxmox.com>

On 14.04.26 at 11:15 PM, Thomas Lamprecht wrote:
> Based on Jens Axboe's succinct reply [0] on the kernel side:
>> It's not "IO pressure", it's the useless iowait metric...
>> [...]
>> If you won't want it, just turn it off with io_uring_set_iowait().
> 
> [0]: https://lore.kernel.org/all/49a977f3-45da-41dd-9fd6-75fd6760a591@kernel.dk/ 
> 

During testing with an RBD storage (no krbd), I found that even with your
patch, the IO wait is still much higher than with pve-qemu-kvm=10.1.2-7
for VMs that are just idling around, and "more idle" still correlates
with "more IO pressure". I'm seeing 0.0% or 0.x% with
pve-qemu-kvm=10.1.2-7 and 40-100% even with the patch.

Maybe there is a separate issue specific to the RBD block driver, but I'm
not sure why there would be. At least, there is no IO pressure when
setting the IORING_ENTER_NO_IOWAIT flag unconditionally every time. My
suggestions below would only lead to setting the flag less often compared
to the current patch, so they won't help with this either. Will need to
investigate more.

> Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
> ---
> 
> Might need some closer checking, but seems to work out OK.
> Thanks @Fiona for some great off-list input w.r.t. this.
> 
> If it works out I might also submit this upstream, but would need to
> recheck the current status (as tbh. I'd be a bit surprised if nobody run
> into this and didn't get annoyed yet) upstream @qemu.
> 
>  ...void-idle-event-loop-being-accounted.patch | 89 +++++++++++++++++++
>  debian/patches/series                         |  1 +
>  2 files changed, 90 insertions(+)
>  create mode 100644 debian/patches/extra/0031-fdmon-io_uring-avoid-idle-event-loop-being-accounted.patch
> 
> diff --git a/debian/patches/extra/0031-fdmon-io_uring-avoid-idle-event-loop-being-accounted.patch b/debian/patches/extra/0031-fdmon-io_uring-avoid-idle-event-loop-being-accounted.patch
> new file mode 100644
> index 0000000..b23cd50
> --- /dev/null
> +++ b/debian/patches/extra/0031-fdmon-io_uring-avoid-idle-event-loop-being-accounted.patch
> @@ -0,0 +1,89 @@
> +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
> +From: Thomas Lamprecht <t.lamprecht@proxmox.com>
> +Date: Wed, 1 Apr 2026 21:08:28 +0200
> +Subject: [PATCH] fdmon-io_uring: avoid idle event loop being accounted as IO
> + wait
> +
> +Since QEMU 10.2, iothreads use io_uring for file descriptor monitoring
> +when available at build time. The blocking wait in io_uring_enter()
> +gets accounted as IO wait by the kernel, causing nearly 100% IO
> +pressure in PSI even when the system is idle.
> +
> +Split io_uring_submit_and_wait() into separate submit and wait calls
> +so the IORING_ENTER_NO_IOWAIT flag can be passed to io_uring_enter().
> +Only set the flag when the timeout is the sole pending SQE, i.e., the

Would change it to: "... sole pending SQE or none at all, i.e. ..."

> +event loop is truly idle with no block layer IO or poll re-arms
> +pending. The flag is gated on IORING_FEAT_NO_IOWAIT so older kernels
> +are unaffected.
> +
> +Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
> +---
> + util/fdmon-io_uring.c | 34 +++++++++++++++++++++++++++++++++-
> + 1 file changed, 33 insertions(+), 1 deletion(-)
> +
> +diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
> +index d0b56127c6..1d24c2dd5d 100644
> +--- a/util/fdmon-io_uring.c
> ++++ b/util/fdmon-io_uring.c
> +@@ -51,6 +51,15 @@
> + #include "aio-posix.h"
> + #include "trace.h"
> + 
> ++/* Compat defines for liburing versions that lack NO_IOWAIT support */
> ++#ifndef IORING_ENTER_NO_IOWAIT
> ++#define IORING_ENTER_NO_IOWAIT (1U << 7)
> ++#endif
> ++
> ++#ifndef IORING_FEAT_NO_IOWAIT
> ++#define IORING_FEAT_NO_IOWAIT  (1U << 17)
> ++#endif
> ++
> + enum {
> +     FDMON_IO_URING_ENTRIES  = 128, /* sq/cq ring size */
> + 
> +@@ -385,6 +394,7 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
> + {
> +     struct __kernel_timespec ts;
> +     unsigned wait_nr = 1; /* block until at least one cqe is ready */
> ++    bool no_iowait;
> +     int ret;
> + 
> +     if (timeout == 0) {
> +@@ -405,14 +415,36 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,

When the timeout SQE is prepared, get_sqe() is called, which calls
io_uring_submit() if the ring is full of pending requests. Those requests
are then no longer detected by the io_uring_sq_ready() call below.

> + 
> +     fill_sq_ring(ctx);
> + 
> ++    /*
> ++     * When only the timeout SQE is pending (or none at all), the event loop
> ++     * is idle and the blocking wait should not be accounted as IO wait in PSI.
> ++     */
> ++    no_iowait = (ctx->fdmon_io_uring.features & IORING_FEAT_NO_IOWAIT) &&
> ++                io_uring_sq_ready(&ctx->fdmon_io_uring) ==
> ++                    (timeout > 0 ? 1 : 0);
> ++

While probably not too relevant in practice, there is a small race where
another thread could add an SQE after the check here and before submitting.

> +     /*
> +      * Loop to handle signals in both cases:
> +      * 1. If no SQEs were submitted, then -EINTR is returned.
> +      * 2. If SQEs were submitted then the number of SQEs submitted is returned
> +      *    rather than -EINTR.
> ++     *
> ++     * Submit and wait are split into separate calls so we can pass
> ++     * IORING_ENTER_NO_IOWAIT when the event loop is idle.
> +      */
> +     do {
> +-        ret = io_uring_submit_and_wait(&ctx->fdmon_io_uring, wait_nr);
> ++        ret = io_uring_submit(&ctx->fdmon_io_uring);

If a signal arrives, the window for -EINTR retries is larger now, since
submitting happens in a separate call before the wait.

> ++
> ++        if (ret >= 0 && wait_nr > io_uring_cq_ready(&ctx->fdmon_io_uring)) {
> ++            unsigned flags = IORING_ENTER_GETEVENTS;
> ++
> ++            if (no_iowait) {
> ++                flags |= IORING_ENTER_NO_IOWAIT;
> ++            }
> ++
> ++            ret = io_uring_enter(ctx->fdmon_io_uring.ring_fd, 0, wait_nr,
> ++                                 flags, NULL);
> ++        }
> +     } while (ret == -EINTR ||
> +              (ret >= 0 && wait_nr > io_uring_cq_ready(&ctx->fdmon_io_uring)));
> + 

My proposal to avoid those issues:

> diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
> index 3427dbd1a9..1ed619bfe2 100644
> --- a/util/fdmon-io_uring.c
> +++ b/util/fdmon-io_uring.c
> @@ -406,7 +406,13 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
>  {
>      struct __kernel_timespec ts;
>      unsigned wait_nr = 1; /* block until at least one cqe is ready */
> -    bool no_iowait;
> +    /*
> +     * Starts out as true if the NO_IOWAIT feature is supported. If a
> +     * non-timeout request is detected, it is set to false to enable accounting.
> +     */
> +    bool no_iowait = (ctx->fdmon_io_uring.features & IORING_FEAT_NO_IOWAIT);
> +    /* Whether there is a stand-alone timeout SQE or not. */
> +    bool standalone_timeout_sqe = false;
>      int ret;
>  
>      if (timeout == 0) {
> @@ -420,6 +426,9 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
>              .tv_nsec = timeout % NANOSECONDS_PER_SECOND,
>          };
>  
> +        /* get_sqe() might call io_uring_submit(), so check early */
> +        standalone_timeout_sqe = io_uring_sq_ready(&ctx->fdmon_io_uring) == 0;
> +
>          sqe = get_sqe(ctx);
>          io_uring_prep_timeout(sqe, &ts, 1, 0);
>          io_uring_sqe_set_data(sqe, NULL);
> @@ -427,13 +436,14 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
>  
>      fill_sq_ring(ctx);
>  
> -    /*
> -     * When only the timeout SQE is pending (or none at all), the event loop
> -     * is idle and the blocking wait should not be accounted as IO wait in PSI.
> -     */
> -    no_iowait = (ctx->fdmon_io_uring.features & IORING_FEAT_NO_IOWAIT) &&
> -                io_uring_sq_ready(&ctx->fdmon_io_uring) ==
> -                    (timeout > 0 ? 1 : 0);
> +    if (standalone_timeout_sqe) { /* Re-check if still stand-alone. */
> +        standalone_timeout_sqe = io_uring_sq_ready(&ctx->fdmon_io_uring) == 1;
> +    }
> +
> +    if (timeout > 0 && !standalone_timeout_sqe) {
> +        /* There is a timeout SQE, but not stand-alone. */
> +        no_iowait = false;
> +    }
>  
>      /*
>       * Loop to handle signals in both cases:
> @@ -447,6 +457,19 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
>      do {
>          ret = io_uring_submit(&ctx->fdmon_io_uring);
>  
> +        /*
> +         * When only the timeout SQE was submitted (or none at all), the event
> +         * loop is idle and the blocking wait should not be accounted as IO wait
> +         * in PSI. If any other request is detected, enable IO wait accounting.
> +         */
> +        if (ret > (standalone_timeout_sqe ? 1 : 0)) {
> +            no_iowait = false;
> +        }
> +
> +        if (standalone_timeout_sqe && ret > 0) {
> +            standalone_timeout_sqe = false;
> +        }
> +
>          if (ret >= 0 && wait_nr > io_uring_cq_ready(&ctx->fdmon_io_uring)) {
>              unsigned flags = IORING_ENTER_GETEVENTS;




Thread overview: 5+ messages
2026-04-14 21:09 Thomas Lamprecht
2026-04-22 14:55 ` Fiona Ebner [this message]
2026-04-23 15:36   ` Fiona Ebner
2026-04-23 20:39     ` Thomas Lamprecht
2026-04-24  9:13       ` Fiona Ebner