From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 471CD7189A
 for ; Thu, 9 Jun 2022 13:56:31 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 355A81B787
 for ; Thu, 9 Jun 2022 13:56:01 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id 70BBC1B77D
 for ; Thu, 9 Jun 2022 13:56:00 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 49B4F43AB0
 for ; Thu, 9 Jun 2022 13:56:00 +0200 (CEST)
From: Fabian Ebner 
To: pve-devel@lists.proxmox.com
Date: Thu, 9 Jun 2022 13:55:38 +0200
Message-Id: <20220609115538.135041-1-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SPAM-LEVEL: Spam detection results: 0
 AWL 0.058 Adjusted score from AWL reputation of From: address
 BAYES_00 -1.9 Bayes spam probability is 0 to 1%
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS -0.001 SPF: sender matches SPF record
 T_SCC_BODY_TEXT_LINE -0.01 -
Subject: [pve-devel] [PATCH qemu] fix #4101: acquire job's aio context before calling job_unref
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
X-List-Received-Date: Thu, 09 Jun 2022 11:56:31 -0000

Otherwise, we might run into an abort via bdrv_co_yield_to_drain()
(can at least happen when a disk with iothread is used):

> #0  0x00007fef4f5dece1 __GI_raise (libc.so.6 + 0x3bce1)
> #1  0x00007fef4f5c8537 __GI_abort (libc.so.6 + 0x25537)
> #2  0x00005641bce3c71f error_exit (qemu-system-x86_64 + 0x80371f)
> #3  0x00005641bce3d02b qemu_mutex_unlock_impl (qemu-system-x86_64 + 0x80402b)
> #4  0x00005641bcd51655 bdrv_co_yield_to_drain (qemu-system-x86_64 + 0x718655)
> #5  0x00005641bcd52de8 bdrv_do_drained_begin (qemu-system-x86_64 + 0x719de8)
> #6  0x00005641bcd47e07 blk_drain (qemu-system-x86_64 + 0x70ee07)
> #7  0x00005641bcd498cd blk_unref (qemu-system-x86_64 + 0x7108cd)
> #8  0x00005641bcd31e6f block_job_free (qemu-system-x86_64 + 0x6f8e6f)
> #9  0x00005641bcd32d65 job_unref (qemu-system-x86_64 + 0x6f9d65)
> #10 0x00005641bcd93b3d pvebackup_co_complete_stream (qemu-system-x86_64 + 0x75ab3d)
> #11 0x00005641bce4e353 coroutine_trampoline (qemu-system-x86_64 + 0x815353)

Signed-off-by: Fabian Ebner
---
 ...ensure-jobs-in-di_list-are-referenced.patch | 18 ++++++++++++------
 ...id-segfault-issues-upon-backup-cancel.patch |  6 +++---
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/debian/patches/pve/0055-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch b/debian/patches/pve/0055-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch
index db86632..ebb7919 100644
--- a/debian/patches/pve/0055-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch
+++ b/debian/patches/pve/0055-PVE-Backup-ensure-jobs-in-di_list-are-referenced.patch
@@ -17,26 +17,29 @@ freed. With unlucky timings it seems possible that:
 Signed-off-by: Fabian Ebner
 Signed-off-by: Wolfgang Bumiller
 ---
- pve-backup.c | 13 +++++++++++++
- 1 file changed, 13 insertions(+)
+ pve-backup.c | 19 +++++++++++++++++++
+ 1 file changed, 19 insertions(+)
 
 diff --git a/pve-backup.c b/pve-backup.c
-index 5bed6f4014..cd45e66a61 100644
+index 5bed6f4014..7b094e5018 100644
 --- a/pve-backup.c
 +++ b/pve-backup.c
-@@ -316,6 +316,11 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
+@@ -316,6 +316,14 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
          }
      }
  
 +    if (di->job) {
++        AioContext *ctx = di->job->job.aio_context;
++        aio_context_acquire(ctx);
 +        job_unref(&di->job->job);
++        aio_context_release(ctx);
 +        di->job = NULL;
 +    }
 +
      // remove self from job list
      backup_state.di_list = g_list_remove(backup_state.di_list, di);
  
-@@ -494,6 +499,9 @@ static void create_backup_jobs_bh(void *opaque) {
+@@ -494,6 +502,9 @@ static void create_backup_jobs_bh(void *opaque) {
          aio_context_release(aio_context);
  
          di->job = job;
@@ -46,13 +49,16 @@ index 5bed6f4014..cd45e66a61 100644
  
          if (!job || local_err) {
              error_setg(errp, "backup_job_create failed: %s",
-@@ -528,6 +536,11 @@ static void create_backup_jobs_bh(void *opaque) {
+@@ -528,6 +539,14 @@ static void create_backup_jobs_bh(void *opaque) {
              aio_context_release(ctx);
              canceled = true;
          }
 +
 +        if (di->job) {
++            AioContext *ctx = di->job->job.aio_context;
++            aio_context_acquire(ctx);
 +            job_unref(&di->job->job);
++            aio_context_release(ctx);
 +            di->job = NULL;
 +        }
      }
diff --git a/debian/patches/pve/0056-PVE-Backup-avoid-segfault-issues-upon-backup-cancel.patch b/debian/patches/pve/0056-PVE-Backup-avoid-segfault-issues-upon-backup-cancel.patch
index e0c91e6..a5de91d 100644
--- a/debian/patches/pve/0056-PVE-Backup-avoid-segfault-issues-upon-backup-cancel.patch
+++ b/debian/patches/pve/0056-PVE-Backup-avoid-segfault-issues-upon-backup-cancel.patch
@@ -37,10 +37,10 @@ Signed-off-by: Wolfgang Bumiller
  1 file changed, 39 insertions(+), 22 deletions(-)
 
 diff --git a/pve-backup.c b/pve-backup.c
-index cd45e66a61..be21027dad 100644
+index 7b094e5018..26f9aad3d8 100644
 --- a/pve-backup.c
 +++ b/pve-backup.c
-@@ -352,15 +352,42 @@ static void pvebackup_complete_cb(void *opaque, int ret)
+@@ -355,15 +355,42 @@ static void pvebackup_complete_cb(void *opaque, int ret)
 
  /*
   * job_cancel(_sync) does not like to be called from coroutines, so defer to
@@ -81,7 +81,7 @@ index cd45e66a61..be21027dad 100644
      aio_co_enter(data->ctx, data->co);
  }
 
-@@ -381,22 +408,12 @@ void coroutine_fn qmp_backup_cancel(Error **errp)
+@@ -384,22 +411,12 @@ void coroutine_fn qmp_backup_cancel(Error **errp)
          proxmox_backup_abort(backup_state.pbs, "backup canceled");
      }
 
-- 
2.30.2
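
[Editor's note, not part of the patch: the following is a minimal standalone C sketch of the
acquire/unref/release pattern the patch applies. It is not QEMU code; all identifiers here
(FakeJob, ctx_lock, fake_job_unref) are made up for illustration. The QEMU-specific point it
mirrors is visible in the backtrace above: job_unref() can end up in blk_drain(), which expects
the job's AioContext to be held, so the context is looked up and acquired before the unref and
released afterwards.]

/* Standalone sketch (not QEMU code) of the locking pattern used above.
 * Teardown of the object must happen while its context lock is held.
 * Compile with: cc -o sketch sketch.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct FakeJob {
    pthread_mutex_t *ctx_lock; /* stands in for the job's AioContext */
    int refcnt;
} FakeJob;

/* Caller must hold *job->ctx_lock; the final unref frees the job. */
static void fake_job_unref(FakeJob *job)
{
    if (--job->refcnt == 0) {
        printf("tearing down job while its context is locked\n");
        free(job);
    }
}

int main(void)
{
    static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

    FakeJob *job = malloc(sizeof(*job));
    job->ctx_lock = &ctx_lock;
    job->refcnt = 1;

    /* Fetch the context pointer before dropping the last reference: once
     * the job is freed it must not be dereferenced again to find the lock. */
    pthread_mutex_t *ctx = job->ctx_lock; /* cf. AioContext *ctx = di->job->job.aio_context; */
    pthread_mutex_lock(ctx);              /* cf. aio_context_acquire(ctx); */
    fake_job_unref(job);                  /* cf. job_unref(&di->job->job); */
    pthread_mutex_unlock(ctx);            /* cf. aio_context_release(ctx); */
    return 0;
}

The ordering is the part mirrored from the patch: save the context from the job first, take the
lock, drop the reference while it is held, and afterwards touch only the saved lock pointer,
never the (possibly freed) job.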