From: Stefan Reiter <s.reiter@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v2 qemu 4/6] PVE: Don't call job_cancel in coroutines
Date: Thu, 29 Oct 2020 14:10:34 +0100 [thread overview]
Message-ID: <20201029131036.11786-5-s.reiter@proxmox.com> (raw)
In-Reply-To: <20201029131036.11786-1-s.reiter@proxmox.com>
...because it hangs on cancelling other jobs in the txn if you do.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
v2:
* use new CoCtxData
* use aio_co_enter vs aio_co_schedule for BH return
* cache job_ctx, since job_cancel_sync might move the job to a different
context (when iothreads are in use), which would make us drop the wrong
AioContext if we accessed job->aio_context again. This is incidentally the
same bug I once fixed upstream; it almost slipped back in...
pve-backup.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/pve-backup.c b/pve-backup.c
index 92eaada0bc..0466145bec 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -332,6 +332,20 @@ static void pvebackup_complete_cb(void *opaque, int ret)
aio_co_enter(qemu_get_aio_context(), co);
}
+/*
+ * job_cancel(_sync) does not like to be called from coroutines, so defer to
+ * main loop processing via a bottom half.
+ */
+static void job_cancel_bh(void *opaque) {
+ CoCtxData *data = (CoCtxData*)opaque;
+ Job *job = (Job*)data->data;
+ AioContext *job_ctx = job->aio_context;
+ aio_context_acquire(job_ctx);
+ job_cancel_sync(job);
+ aio_context_release(job_ctx);
+ aio_co_enter(data->ctx, data->co);
+}
+
static void coroutine_fn pvebackup_co_cancel(void *opaque)
{
Error *cancel_err = NULL;
@@ -357,7 +371,13 @@ static void coroutine_fn pvebackup_co_cancel(void *opaque)
NULL;
if (cancel_job) {
- job_cancel(&cancel_job->job, false);
+ CoCtxData data = {
+ .ctx = qemu_get_current_aio_context(),
+ .co = qemu_coroutine_self(),
+ .data = &cancel_job->job,
+ };
+ aio_bh_schedule_oneshot(data.ctx, job_cancel_bh, &data);
+ qemu_coroutine_yield();
}
qemu_co_mutex_unlock(&backup_state.backup_mutex);
--
2.20.1
Thread overview:
2020-10-29 13:10 [pve-devel] [PATCH v2 0/6] QEMU backup cancellation fixes Stefan Reiter
2020-10-29 13:10 ` [pve-devel] [PATCH v2 qemu 1/6] PVE: fixup: drop CoMutex on error return Stefan Reiter
2020-10-29 13:10 ` [pve-devel] [PATCH v2 qemu 2/6] PVE: Introduce generic CoCtxData struct Stefan Reiter
2020-10-29 13:10 ` [pve-devel] [PATCH v2 qemu 3/6] PVE: Don't expect complete_cb to be called outside coroutine Stefan Reiter
2020-10-29 13:10 ` Stefan Reiter [this message]
2020-10-29 13:10 ` [pve-devel] [PATCH v2 qemu 5/6] PVE: fix and clean up error handling for create_backup_jobs Stefan Reiter
2020-10-29 13:10 ` [pve-devel] [PATCH v2 qemu 6/6] Several fixes for backup abort and error reporting Stefan Reiter
2020-10-29 17:27 ` [pve-devel] applied: " Thomas Lamprecht