From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 22 Apr 2026 16:55:03 +0200
MIME-Version: 1.0
Subject: Re: [PATCH qemu] fdmon-io_uring: avoid idle event loop being
 accounted as IO wait
To: Thomas Lamprecht , pve-devel@lists.proxmox.com
References: <20260414211600.4023940-1-t.lamprecht@proxmox.com>
From: Fiona Ebner
In-Reply-To: <20260414211600.4023940-1-t.lamprecht@proxmox.com>
Content-Type: text/plain; charset=UTF-8
List-Id: Proxmox VE development discussion

On 14.04.26 at 11:15 PM, Thomas Lamprecht wrote:
> Based on Jens Axboe's succinct reply [0] on the kernel side:
>> It's not "IO pressure", it's the useless iowait metric...
>> [...]
>> If you won't want it, just turn it off with io_uring_set_iowait().
>
> [0]: https://lore.kernel.org/all/49a977f3-45da-41dd-9fd6-75fd6760a591@kernel.dk/

During testing with an RBD storage (no krbd), I found that, compared to
pve-qemu-kvm=10.1.2-7, the IO wait is still much higher even with your
patch, for VMs that are just idling around. And "more idle" still
correlates with "more IO pressure". I'm seeing 0.0% or 0.x% with
pve-qemu-kvm=10.1.2-7 and 40-100% even with the patch. Maybe there is a
separate issue specific to the RBD block driver, but I'm not sure why
there would be one. At least, there is no IO pressure when setting the
IORING_ENTER_NO_IOWAIT flag unconditionally every time. My suggestions
below would only lead to setting the flag less often compared to the
current patch, so they won't help with this either. Will need to
investigate more.

> Signed-off-by: Thomas Lamprecht
> ---
>
> Might need some closer checking, but seems to work out OK.
> Thanks @Fiona for some great off-list input w.r.t. this.
>
> If it works out I might also submit this upstream, but would need to
> recheck the current status (as tbh. I'd be a bit surprised if nobody
> ran into this and got annoyed yet) upstream @qemu.
>
>  ...void-idle-event-loop-being-accounted.patch | 89 +++++++++++++++++++
>  debian/patches/series                         |  1 +
>  2 files changed, 90 insertions(+)
>  create mode 100644 debian/patches/extra/0031-fdmon-io_uring-avoid-idle-event-loop-being-accounted.patch
>
> diff --git a/debian/patches/extra/0031-fdmon-io_uring-avoid-idle-event-loop-being-accounted.patch b/debian/patches/extra/0031-fdmon-io_uring-avoid-idle-event-loop-being-accounted.patch
> new file mode 100644
> index 0000000..b23cd50
> --- /dev/null
> +++ b/debian/patches/extra/0031-fdmon-io_uring-avoid-idle-event-loop-being-accounted.patch
> @@ -0,0 +1,89 @@
> +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
> +From: Thomas Lamprecht
> +Date: Wed, 1 Apr 2026 21:08:28 +0200
> +Subject: [PATCH] fdmon-io_uring: avoid idle event loop being accounted as IO
> + wait
> +
> +Since QEMU 10.2, iothreads use io_uring for file descriptor monitoring
> +when available at build time. The blocking wait in io_uring_enter()
> +gets accounted as IO wait by the kernel, causing nearly 100% IO
> +pressure in PSI even when the system is idle.
> +
> +Split io_uring_submit_and_wait() into separate submit and wait calls
> +so the IORING_ENTER_NO_IOWAIT flag can be passed to io_uring_enter().
> +Only set the flag when the timeout is the sole pending SQE, i.e., the

I would change this to: "... sole pending SQE or none at all, i.e. ..."

> +event loop is truly idle with no block layer IO or poll re-arms
> +pending. The flag is gated on IORING_FEAT_NO_IOWAIT so older kernels
> +are unaffected.
> +
> +Signed-off-by: Thomas Lamprecht
> +---
> + util/fdmon-io_uring.c | 34 +++++++++++++++++++++++++++++++++-
> + 1 file changed, 33 insertions(+), 1 deletion(-)
> +
> +diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
> +index d0b56127c6..1d24c2dd5d 100644
> +--- a/util/fdmon-io_uring.c
> ++++ b/util/fdmon-io_uring.c
> +@@ -51,6 +51,15 @@
> + #include "aio-posix.h"
> + #include "trace.h"
> +
> ++/* Compat defines for liburing versions that lack NO_IOWAIT support */
> ++#ifndef IORING_ENTER_NO_IOWAIT
> ++#define IORING_ENTER_NO_IOWAIT (1U << 7)
> ++#endif
> ++
> ++#ifndef IORING_FEAT_NO_IOWAIT
> ++#define IORING_FEAT_NO_IOWAIT (1U << 17)
> ++#endif
> ++
> + enum {
> +     FDMON_IO_URING_ENTRIES = 128, /* sq/cq ring size */
> +
> +@@ -385,6 +394,7 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
> + {
> +     struct __kernel_timespec ts;
> +     unsigned wait_nr = 1; /* block until at least one cqe is ready */
> ++    bool no_iowait;
> +     int ret;
> +
> +     if (timeout == 0) {
> +@@ -405,14 +415,36 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,

When the timeout SQE is prepared, get_sqe() is called, which calls
io_uring_submit() if the ring is full of pending requests. We don't
detect those requests anymore with the io_uring_sq_ready() call below.

> +
> +     fill_sq_ring(ctx);
> +
> ++    /*
> ++     * When only the timeout SQE is pending (or none at all), the event loop
> ++     * is idle and the blocking wait should not be accounted as IO wait in PSI.
> ++     */
> ++    no_iowait = (ctx->fdmon_io_uring.features & IORING_FEAT_NO_IOWAIT) &&
> ++                io_uring_sq_ready(&ctx->fdmon_io_uring) ==
> ++                (timeout > 0 ? 1 : 0);
> ++

While probably not too relevant in practice, there is a small race where
another thread could add an SQE after the check here and before
submitting.

> +     /*
> +      * Loop to handle signals in both cases:
> +      * 1. If no SQEs were submitted, then -EINTR is returned.
> +      * 2. If SQEs were submitted then the number of SQEs submitted is returned
> +      *    rather than -EINTR.
> ++     *
> ++     * Submit and wait are split into separate calls so we can pass
> ++     * IORING_ENTER_NO_IOWAIT when the event loop is idle.
> +      */
> +     do {
> +-        ret = io_uring_submit_and_wait(&ctx->fdmon_io_uring, wait_nr);
> ++        ret = io_uring_submit(&ctx->fdmon_io_uring);

If an interrupt happens, the time window is even larger, since there is
another submit call on retry.

> ++
> ++        if (ret >= 0 && wait_nr > io_uring_cq_ready(&ctx->fdmon_io_uring)) {
> ++            unsigned flags = IORING_ENTER_GETEVENTS;
> ++
> ++            if (no_iowait) {
> ++                flags |= IORING_ENTER_NO_IOWAIT;
> ++            }
> ++
> ++            ret = io_uring_enter(ctx->fdmon_io_uring.ring_fd, 0, wait_nr,
> ++                                 flags, NULL);
> ++        }
> +     } while (ret == -EINTR ||
> +              (ret >= 0 && wait_nr > io_uring_cq_ready(&ctx->fdmon_io_uring)));
> +

My proposal to avoid those issues:

> diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
> index 3427dbd1a9..1ed619bfe2 100644
> --- a/util/fdmon-io_uring.c
> +++ b/util/fdmon-io_uring.c
> @@ -406,7 +406,13 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
>  {
>      struct __kernel_timespec ts;
>      unsigned wait_nr = 1; /* block until at least one cqe is ready */
> -    bool no_iowait;
> +    /*
> +     * Starts out as true if the NO_IOWAIT feature is supported. If a
> +     * non-timeout request is detected, it is set to false to enable accounting.
> +     */
> +    bool no_iowait = (ctx->fdmon_io_uring.features & IORING_FEAT_NO_IOWAIT);
> +    /* Whether there is a stand-alone timeout SQE or not. */
> +    bool standalone_timeout_sqe = false;
>      int ret;
>
>      if (timeout == 0) {
> @@ -420,6 +426,9 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
>          .tv_nsec = timeout % NANOSECONDS_PER_SECOND,
>      };
>
> +    /* get_sqe() might call io_uring_submit(), so check early */
> +    standalone_timeout_sqe = io_uring_sq_ready(&ctx->fdmon_io_uring) == 0;
> +
>      sqe = get_sqe(ctx);
>      io_uring_prep_timeout(sqe, &ts, 1, 0);
>      io_uring_sqe_set_data(sqe, NULL);
> @@ -427,13 +436,14 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
>
>      fill_sq_ring(ctx);
>
> -    /*
> -     * When only the timeout SQE is pending (or none at all), the event loop
> -     * is idle and the blocking wait should not be accounted as IO wait in PSI.
> -     */
> -    no_iowait = (ctx->fdmon_io_uring.features & IORING_FEAT_NO_IOWAIT) &&
> -                io_uring_sq_ready(&ctx->fdmon_io_uring) ==
> -                (timeout > 0 ? 1 : 0);
> +    if (standalone_timeout_sqe) { /* Re-check if still stand-alone. */
> +        standalone_timeout_sqe = io_uring_sq_ready(&ctx->fdmon_io_uring) == 1;
> +    }
> +
> +    if (timeout > 0 && !standalone_timeout_sqe) {
> +        /* There is a timeout SQE, but not stand-alone. */
> +        no_iowait = false;
> +    }
>
>      /*
>       * Loop to handle signals in both cases:
> @@ -447,6 +457,19 @@ static int fdmon_io_uring_wait(AioContext *ctx, AioHandlerList *ready_list,
>      do {
>          ret = io_uring_submit(&ctx->fdmon_io_uring);
>
> +        /*
> +         * When only the timeout SQE was submitted (or none at all), the event
> +         * loop is idle and the blocking wait should not be accounted as IO wait
> +         * in PSI. If any other request is detected, enable IO wait accounting.
> +         */
> +        if (ret > (standalone_timeout_sqe ? 1 : 0)) {
> +            no_iowait = false;
> +        }
> +
> +        if (standalone_timeout_sqe && ret > 0) {
> +            standalone_timeout_sqe = false;
> +        }
> +
>          if (ret >= 0 && wait_nr > io_uring_cq_ready(&ctx->fdmon_io_uring)) {
>              unsigned flags = IORING_ENTER_GETEVENTS;