From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 09 Jun 2022 17:19:31 +0200
From: Fabian Grünbichler
To: Proxmox VE development discussion
References: <20220609123113.166873-1-f.ebner@proxmox.com>
In-Reply-To: <20220609123113.166873-1-f.ebner@proxmox.com>
Message-Id: <1654787952.zid1tpgfg2.astroid@nora.none>
Subject: [pve-devel] applied: [PATCH v2 qemu] fix #4101: acquire job's aio context before calling job_unref

with Wolfgang's A-B, thanks!

On June 9, 2022 2:31 pm, Fabian Ebner wrote:
> Otherwise, we might run into an abort via bdrv_co_yield_to_drain()
> (can at least happen when a disk with iothread is used):
>
>> #0  0x00007fef4f5dece1 __GI_raise (libc.so.6 + 0x3bce1)
>> #1  0x00007fef4f5c8537 __GI_abort (libc.so.6 + 0x25537)
>> #2  0x00005641bce3c71f error_exit (qemu-system-x86_64 + 0x80371f)
>> #3  0x00005641bce3d02b qemu_mutex_unlock_impl (qemu-system-x86_64 + 0x80402b)
>> #4  0x00005641bcd51655 bdrv_co_yield_to_drain (qemu-system-x86_64 + 0x718655)
>> #5  0x00005641bcd52de8 bdrv_do_drained_begin (qemu-system-x86_64 + 0x719de8)
>> #6  0x00005641bcd47e07 blk_drain (qemu-system-x86_64 + 0x70ee07)
>> #7  0x00005641bcd498cd blk_unref (qemu-system-x86_64 + 0x7108cd)
>> #8  0x00005641bcd31e6f block_job_free (qemu-system-x86_64 + 0x6f8e6f)
>> #9  0x00005641bcd32d65 job_unref (qemu-system-x86_64 + 0x6f9d65)
>> #10 0x00005641bcd93b3d pvebackup_co_complete_stream (qemu-system-x86_64 + 0x75ab3d)
>> #11 0x00005641bce4e353 coroutine_trampoline (qemu-system-x86_64 + 0x815353)
>
> Signed-off-by: Fabian Ebner
> ---
>
> Changes from v1:
> * Fix commit message.
> * Do di->job = NULL before releasing the context.
> * Also keep context before job_ref.
> * Avoid temporarily releasing/re-acquiring aio context in error
>   scenario.
>
> Sent as a direct patch for reviewability.
> Intended to be squashed into
> 0055-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch
>
>  pve-backup.c | 20 +++++++++++---------
>  1 file changed, 11 insertions(+), 9 deletions(-)
>
> diff --git a/pve-backup.c b/pve-backup.c
> index be21027dad..2e22030eec 100644
> --- a/pve-backup.c
> +++ b/pve-backup.c
> @@ -317,8 +317,11 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
>      }
>  
>      if (di->job) {
> +        AioContext *ctx = di->job->job.aio_context;
> +        aio_context_acquire(ctx);
>          job_unref(&di->job->job);
>          di->job = NULL;
> +        aio_context_release(ctx);
>      }
>  
>      // remove self from job list
> @@ -513,13 +516,13 @@ static void create_backup_jobs_bh(void *opaque) {
>              bitmap_mode, false, NULL, &perf, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
>              JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn, &local_err);
>  
> -        aio_context_release(aio_context);
> -
>          di->job = job;
>          if (job) {
>              job_ref(&job->job);
>          }
>  
> +        aio_context_release(aio_context);
> +
>          if (!job || local_err) {
>              error_setg(errp, "backup_job_create failed: %s",
>                        local_err ? error_get_pretty(local_err) : "null");
> @@ -546,17 +549,16 @@ static void create_backup_jobs_bh(void *opaque) {
>              di->target = NULL;
>          }
>  
> -        if (!canceled && di->job) {
> +        if (di->job) {
>              AioContext *ctx = di->job->job.aio_context;
>              aio_context_acquire(ctx);
> -            job_cancel_sync(&di->job->job, true);
> -            aio_context_release(ctx);
> -            canceled = true;
> -        }
> -
> -        if (di->job) {
> +            if (!canceled) {
> +                job_cancel_sync(&di->job->job, true);
> +                canceled = true;
> +            }
>              job_unref(&di->job->job);
>              di->job = NULL;
> +            aio_context_release(ctx);
>          }
>      }
> }
> -- 
> 2.30.2
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel