From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <f.ebner@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 065C072BC1
 for <pve-devel@lists.proxmox.com>; Tue, 24 May 2022 13:30:56 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id F0BD21EB14
 for <pve-devel@lists.proxmox.com>; Tue, 24 May 2022 13:30:55 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id 87F6B1EB08
 for <pve-devel@lists.proxmox.com>; Tue, 24 May 2022 13:30:54 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 28F414381D
 for <pve-devel@lists.proxmox.com>; Tue, 24 May 2022 13:30:54 +0200 (CEST)
From: Fabian Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Tue, 24 May 2022 13:30:50 +0200
Message-Id: <20220524113050.179182-1-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.068 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 T_SCC_BODY_TEXT_LINE    -0.01 -
Subject: [pve-devel] [RFC/PATCH qemu] PVE-Backup: avoid segfault issues upon
 backup-cancel
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Tue, 24 May 2022 11:30:56 -0000

When canceling a backup in PVE via a signal, it's easy to run into a
situation where the job is already failing when the backup_cancel QMP
command comes in. With a bit of unlucky timing on top, it can happen
that job_exit() runs between the scheduling and the execution of
job_cancel_bh(). But job_cancel_sync() does not expect the job to be
finalized already (in fact, the job might even have been freed by
then; and even if it hasn't, job_cancel_sync() would dereference
job->txn, which is NULL at that point).

It is not possible to simply call job_cancel() (which is advertised
as being async, but isn't in all cases) in qmp_backup_cancel() for the
same reason job_cancel_sync() cannot be used: it can end up invoking
job_finish_sync(), which uses AIO_WAIT_WHILE and thus hangs when
called from a coroutine. This happens when there are multiple jobs in
the transaction and job->deferred_to_main_loop is true (it is set
before scheduling job_exit()), or when the job has not been started
yet.
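
For context, the deferral pattern this relies on looks roughly like
the following (a simplified sketch based on the existing pve-backup.c
code; the function names are made up for illustration, CoCtxData is
the helper struct already used there):

    /* Runs in the main loop, outside coroutine context. */
    static void do_cancel_bh(void *opaque)
    {
        CoCtxData *data = opaque;
        /* ... cancel the job(s) here, where blocking is allowed ... */
        aio_co_enter(data->ctx, data->co); /* wake the waiting coroutine */
    }

    /* Runs in coroutine context, e.g. from the QMP handler. */
    static void coroutine_fn defer_cancel_to_main_loop(void)
    {
        CoCtxData data = {
            .ctx = qemu_get_current_aio_context(),
            .co = qemu_coroutine_self(),
        };
        aio_bh_schedule_oneshot(data.ctx, do_cancel_bh, &data);
        /* Yield; resumed once the bottom half calls aio_co_enter(). */
        qemu_coroutine_yield();
    }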

Fix the issue by selecting the job to cancel in job_cancel_bh()
itself, picking the first job that is not completed yet. This is not
necessarily the first job in the list, because
pvebackup_co_complete_stream() might not have removed an already
completed job yet when job_cancel_bh() runs.

An alternative would be to keep using only the first job and to check
its status against JOB_STATUS_CONCLUDED|JOB_STATUS_NULL to decide
whether canceling is still necessary and possible, but picking the
first non-completed job seemed more robust.
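
For reference, a rough sketch of what that alternative might have
looked like inside job_cancel_bh() (illustration only, not part of
this patch):

    /* Only consider the first device's job and skip cancellation once
     * it has already reached a terminal state. */
    GList *bdi = g_list_first(backup_state.di_list);
    BlockJob *bj = bdi && bdi->data
        ? ((PVEBackupDevInfo *)bdi->data)->job
        : NULL;
    if (bj) {
        Job *job = &bj->job;
        if (job->status != JOB_STATUS_CONCLUDED &&
            job->status != JOB_STATUS_NULL) {
            AioContext *job_ctx = job->aio_context;
            aio_context_acquire(job_ctx);
            job_cancel_sync(job, true);
            aio_context_release(job_ctx);
        }
    }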

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Intended to be ordered after
0038-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch, or it
could also be squashed into that patch (while lifting the commit
message to the main repo). Of course, I can also send that directly if
this is ACKed.

 pve-backup.c | 72 +++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 52 insertions(+), 20 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 6f05796fad..d0da6b63be 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -345,15 +345,45 @@ static void pvebackup_complete_cb(void *opaque, int ret)
 
 /*
  * job_cancel(_sync) does not like to be called from coroutines, so defer to
- * main loop processing via a bottom half.
+ * main loop processing via a bottom half. Assumes that caller holds
+ * backup_mutex and called job_ref on all jobs in backup_state.di_list.
  */
 static void job_cancel_bh(void *opaque) {
     CoCtxData *data = (CoCtxData*)opaque;
-    Job *job = (Job*)data->data;
-    AioContext *job_ctx = job->aio_context;
-    aio_context_acquire(job_ctx);
-    job_cancel_sync(job, true);
-    aio_context_release(job_ctx);
+
+    /*
+     * It's enough to cancel one job in the transaction, the rest will follow
+     * automatically.
+     */
+    bool canceled = false;
+
+    /*
+     * Be careful to pick a valid job to cancel:
+     * 1. job_cancel_sync() does not expect the job to be finalized already.
+     * 2. job_exit() might run between scheduling and running job_cancel_bh()
+     *    and pvebackup_co_complete_stream() might not have removed the job from
+     *    the list yet (in fact, cannot, because it waits for the backup_mutex).
+     * Requiring !job_is_completed() ensures that no finalized job is picked.
+     */
+    GList *bdi = g_list_first(backup_state.di_list);
+    while (bdi) {
+        if (bdi->data) {
+            BlockJob *bj = ((PVEBackupDevInfo *)bdi->data)->job;
+            if (bj) {
+                Job *job = &bj->job;
+                if (!canceled && !job_is_completed(job)) {
+                    AioContext *job_ctx = job->aio_context;
+                    aio_context_acquire(job_ctx);
+                    job_cancel_sync(job, true);
+                    aio_context_release(job_ctx);
+                    canceled = true;
+                }
+                job_unref(job);
+            }
+        }
+        bdi = g_list_next(bdi);
+    }
+
     aio_co_enter(data->ctx, data->co);
 }
 
@@ -374,23 +404,25 @@ static void coroutine_fn pvebackup_co_cancel(void *opaque)
         proxmox_backup_abort(backup_state.pbs, "backup canceled");
     }
 
-    /* it's enough to cancel one job in the transaction, the rest will follow
-     * automatically */
     GList *bdi = g_list_first(backup_state.di_list);
-    BlockJob *cancel_job = bdi && bdi->data ?
-        ((PVEBackupDevInfo *)bdi->data)->job :
-        NULL;
-
-    if (cancel_job) {
-        CoCtxData data = {
-            .ctx = qemu_get_current_aio_context(),
-            .co = qemu_coroutine_self(),
-            .data = &cancel_job->job,
-        };
-        aio_bh_schedule_oneshot(data.ctx, job_cancel_bh, &data);
-        qemu_coroutine_yield();
+    while (bdi) {
+        if (bdi->data) {
+            BlockJob *bj = ((PVEBackupDevInfo *)bdi->data)->job;
+            if (bj) {
+                /* ensure job is not freed before job_cancel_bh() runs */
+                job_ref(&bj->job);
+            }
+        }
+        bdi = g_list_next(bdi);
     }
 
+    CoCtxData data = {
+        .ctx = qemu_get_current_aio_context(),
+        .co = qemu_coroutine_self(),
+    };
+    aio_bh_schedule_oneshot(data.ctx, job_cancel_bh, &data);
+    qemu_coroutine_yield();
+
     qemu_co_mutex_unlock(&backup_state.backup_mutex);
 }
 
-- 
2.30.2