public inbox for pve-devel@lists.proxmox.com
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] applied: [PATCH qemu 1/3] alloc track: acquire BS AIO context during dropping
Date: Tue,  6 Apr 2021 16:41:57 +0200	[thread overview]
Message-ID: <20210406144159.1934669-1-t.lamprecht@proxmox.com> (raw)

Ran into this when live-restoring a backup configured for IO-threads;
got the good ol':
> qemu: qemu_mutex_unlock_impl: Operation not permitted
error.

Checking the history of the related bdrv_backup_top_drop(*bs) method,
we can see that it used to acquire the AIO context too, but in the
backup path this was problematic, so the acquisition was moved higher
up in the call path in an upstream series from Stefan[0].

That said, this is a completely different code path and it is safe to
do so here. We always run from the main thread's AIO context here, and
we only call it indirectly once, guarded by checking for
`s->drop_state == DropNone` and setting `s->drop_state = DropRequested`
shortly before we schedule track_drop() in a BH.

[0]: https://lists.gnu.org/archive/html/qemu-devel/2020-03/msg09139.html

Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
---

It's for an opt-in, not yet exposed feature, so there is not much to break here.
Also, it was clearly broken before, and this _is_ the right fix IMO; if anything,
one could argue whether this is the right place, but after checking the callers
and testing various scenarios it seems right.

 debian/patches/pve/0047-block-add-alloc-track-driver.patch | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/debian/patches/pve/0047-block-add-alloc-track-driver.patch b/debian/patches/pve/0047-block-add-alloc-track-driver.patch
index 232ad7e..8eed42f 100644
--- a/debian/patches/pve/0047-block-add-alloc-track-driver.patch
+++ b/debian/patches/pve/0047-block-add-alloc-track-driver.patch
@@ -34,7 +34,7 @@ new file mode 100644
 index 0000000000..b579380279
 --- /dev/null
 +++ b/block/alloc-track.c
-@@ -0,0 +1,342 @@
+@@ -0,0 +1,345 @@
 +/*
 + * Node to allow backing images to be applied to any node. Assumes a blank
 + * image to begin with, only new writes are tracked as allocated, thus this
@@ -313,6 +313,8 @@ index 0000000000..b579380279
 +        aio_bh_schedule_oneshot(qemu_get_aio_context(), track_drop, opaque);
 +        return;
 +    }
++    AioContext *aio_context = bdrv_get_aio_context(bs);
++    aio_context_acquire(aio_context);
 +
 +    bdrv_drained_begin(bs);
 +
@@ -324,6 +326,7 @@ index 0000000000..b579380279
 +    bdrv_set_backing_hd(bs, NULL, &error_abort);
 +    bdrv_drained_end(bs);
 +    bdrv_unref(bs);
++    aio_context_release(aio_context);
 +}
 +
 +static int track_change_backing_file(BlockDriverState *bs,
-- 
2.29.2

Thread overview: 3+ messages
2021-04-06 14:41 Thomas Lamprecht [this message]
2021-04-06 14:41 ` [pve-devel] applied: [PATCH qemu 2/3] pbs block driver: run read in the AIO context of the bs Thomas Lamprecht
2021-04-06 14:41 ` [pve-devel] applied: [PATCH qemu 3/3] alloc track: use coroutine version of bdrv_pwrite_zeroes Thomas Lamprecht
