Date: Thu, 9 Jun 2022 14:37:45 +0200
From: Wolfgang Bumiller
To: Fabian Ebner
Cc: pve-devel@lists.proxmox.com
Message-ID: <20220609123745.lnmkgtweupnooiin@wobu-vie.proxmox.com>
In-Reply-To: <20220609123113.166873-1-f.ebner@proxmox.com>
Subject: Re: [pve-devel] [PATCH v2 qemu] fix #4101: acquire job's aio context before calling job_unref
On Thu, Jun 09, 2022 at 02:31:13PM +0200, Fabian Ebner wrote:
> Otherwise, we might run into an abort via bdrv_co_yield_to_drain()
> (can at least happen when a disk with iothread is used):
>
> #0  0x00007fef4f5dece1 __GI_raise (libc.so.6 + 0x3bce1)
> #1  0x00007fef4f5c8537 __GI_abort (libc.so.6 + 0x25537)
> #2  0x00005641bce3c71f error_exit (qemu-system-x86_64 + 0x80371f)
> #3  0x00005641bce3d02b qemu_mutex_unlock_impl (qemu-system-x86_64 + 0x80402b)
> #4  0x00005641bcd51655 bdrv_co_yield_to_drain (qemu-system-x86_64 + 0x718655)
> #5  0x00005641bcd52de8 bdrv_do_drained_begin (qemu-system-x86_64 + 0x719de8)
> #6  0x00005641bcd47e07 blk_drain (qemu-system-x86_64 + 0x70ee07)
> #7  0x00005641bcd498cd blk_unref (qemu-system-x86_64 + 0x7108cd)
> #8  0x00005641bcd31e6f block_job_free (qemu-system-x86_64 + 0x6f8e6f)
> #9  0x00005641bcd32d65 job_unref (qemu-system-x86_64 + 0x6f9d65)
> #10 0x00005641bcd93b3d pvebackup_co_complete_stream (qemu-system-x86_64 + 0x75ab3d)
> #11 0x00005641bce4e353 coroutine_trampoline (qemu-system-x86_64 + 0x815353)
>
> Signed-off-by: Fabian Ebner

Acked-by: Wolfgang Bumiller

> ---
>
> Changes from v1:
>     * Fix commit message.
>     * Do di->job = NULL before releasing the context.
>     * Also keep context before job_ref.
>     * Avoid temporarily releasing/re-acquiring aio context in error
>       scenario.
>
> Sent as a direct patch for reviewability.
> Intended to be squashed into
> 0055-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch
>
>  pve-backup.c | 20 +++++++++++---------
>  1 file changed, 11 insertions(+), 9 deletions(-)
>
> diff --git a/pve-backup.c b/pve-backup.c
> index be21027dad..2e22030eec 100644
> --- a/pve-backup.c
> +++ b/pve-backup.c
> @@ -317,8 +317,11 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
>      }
>
>      if (di->job) {
> +        AioContext *ctx = di->job->job.aio_context;
> +        aio_context_acquire(ctx);
>          job_unref(&di->job->job);
>          di->job = NULL;
> +        aio_context_release(ctx);
>      }
>
>      // remove self from job list
> @@ -513,13 +516,13 @@ static void create_backup_jobs_bh(void *opaque) {
>              bitmap_mode, false, NULL, &perf, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
>              JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn, &local_err);
>
> -        aio_context_release(aio_context);
> -
>          di->job = job;
>          if (job) {
>              job_ref(&job->job);
>          }
>
> +        aio_context_release(aio_context);
> +
>          if (!job || local_err) {
>              error_setg(errp, "backup_job_create failed: %s",
>                         local_err ? error_get_pretty(local_err) : "null");
> @@ -546,17 +549,16 @@ static void create_backup_jobs_bh(void *opaque) {
>              di->target = NULL;
>          }
>
> -        if (!canceled && di->job) {
> +        if (di->job) {
>              AioContext *ctx = di->job->job.aio_context;
>              aio_context_acquire(ctx);
> -            job_cancel_sync(&di->job->job, true);
> -            aio_context_release(ctx);
> -            canceled = true;
> -        }
> -
> -        if (di->job) {
> +            if (!canceled) {
> +                job_cancel_sync(&di->job->job, true);
> +                canceled = true;
> +            }
>              job_unref(&di->job->job);
>              di->job = NULL;
> +            aio_context_release(ctx);
>          }
>      }
> }
> --
> 2.30.2