* [pve-devel] [RFC/PATCH v2 qemu 1/3] PVE-Backup: create jobs: correctly cancel in error scenario
From: Fabian Ebner @ 2022-05-25 11:59 UTC
To: pve-devel
The first call to job_cancel_sync() will cancel and free all jobs in
the transaction, so ensure that it's called only once and get rid of
the job_unref() that would operate on freed memory.
It's also necessary to NULL backup_state.pbs in the error scenario,
because a subsequent backup_cancel QMP call (as happens in PVE when
the backup QMP command fails) would try to call proxmox_backup_abort()
and run into a segfault.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
All patches are intended to be ordered after
0038-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch
or could also be squashed into that (while lifting the commit message
to the main repo). Of course I can also send that directly if this is
ACKed.
I'd be glad if someone could confirm that the cleanup for PBS is
correct like this. I haven't had the time to look into all the details
there yet.
New in v2.
pve-backup.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
index 6f05796fad..dfaf4c93f8 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -509,6 +509,11 @@ static void create_backup_jobs_bh(void *opaque) {
}
if (*errp) {
+ /*
+ * It's enough to cancel one job in the transaction, the rest will
+ * follow automatically.
+ */
+ bool canceled = false;
l = backup_state.di_list;
while (l) {
PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
@@ -519,12 +524,12 @@ static void create_backup_jobs_bh(void *opaque) {
di->target = NULL;
}
- if (di->job) {
+ if (!canceled && di->job) {
AioContext *ctx = di->job->job.aio_context;
aio_context_acquire(ctx);
job_cancel_sync(&di->job->job, true);
- job_unref(&di->job->job);
aio_context_release(ctx);
+ canceled = true;
}
}
}
@@ -974,6 +979,7 @@ err:
if (pbs) {
proxmox_backup_disconnect(pbs);
+ backup_state.pbs = NULL;
}
if (backup_dir) {
--
2.30.2
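
For illustration, a minimal self-contained toy model of the bug this
hunk fixes. This is not QEMU code: Job, job_cancel_sync() and the
transaction linkage below are simplified stand-ins. The point is that
the first cancel frees every job in the transaction, so the loop must
not touch any job pointer afterwards:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a block job; jobs in one transaction are linked. */
typedef struct Job {
    int id;
    struct Job *next_in_txn;
} Job;

/* Toy stand-in for job_cancel_sync(): cancels and frees EVERY job in
 * the transaction, mirroring the behavior described in the commit
 * message. */
static void job_cancel_sync(Job *job)
{
    while (job) {
        Job *next = job->next_in_txn;
        printf("canceling and freeing job %d\n", job->id);
        free(job);
        job = next;
    }
}

int main(void)
{
    Job *j2 = calloc(1, sizeof(Job));
    Job *j1 = calloc(1, sizeof(Job));
    j1->id = 1;
    j1->next_in_txn = j2;
    j2->id = 2;

    Job *di_list[] = { j1, j2 };
    bool canceled = false;

    for (size_t i = 0; i < 2; i++) {
        /* The !canceled guard is the fix: without it, the second
         * iteration would call into j2, which was already freed by the
         * first job_cancel_sync(). The removed job_unref() had the
         * same use-after-free problem. */
        if (!canceled && di_list[i]) {
            job_cancel_sync(di_list[i]);
            canceled = true;
        }
    }
    return 0;
}

Removing the !canceled guard makes the toy program call into j2 after
it was already freed, which is exactly the use-after-free class the
patch eliminates.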
* [pve-devel] [RFC/PATCH v2 qemu 2/3] PVE-Backup: ensure jobs in di_list are referenced
From: Fabian Ebner @ 2022-05-25 11:59 UTC
To: pve-devel
Ensure that qmp_backup_cancel() doesn't pick a job that's already been
freed. With unlucky timing, it seems possible that:
1. job_exit -> job_completed -> job_finalize_single starts
2. pvebackup_co_complete_stream gets spawned in completion callback
3. job_finalize_single finishes -> job's refcount hits zero -> job is
freed
4. qmp_backup_cancel comes in and locks backup_state.backup_mutex
before pvebackup_co_complete_stream can remove the job from the
di_list
5. qmp_backup_cancel will pick a job that's already been freed
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
New in v2.
pve-backup.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/pve-backup.c b/pve-backup.c
index dfaf4c93f8..3cede98b1d 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -314,6 +314,11 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
}
}
+ if (di->job) {
+ job_unref(&di->job->job);
+ di->job = NULL;
+ }
+
// remove self from job list
backup_state.di_list = g_list_remove(backup_state.di_list, di);
@@ -497,6 +502,9 @@ static void create_backup_jobs_bh(void *opaque) {
aio_context_release(aio_context);
di->job = job;
+ if (job) {
+ job_ref(&job->job);
+ }
if (!job || local_err) {
error_setg(errp, "backup_job_create failed: %s",
@@ -531,6 +539,11 @@ static void create_backup_jobs_bh(void *opaque) {
aio_context_release(ctx);
canceled = true;
}
+
+ if (di->job) {
+ job_unref(&di->job->job);
+ di->job = NULL;
+ }
}
}
--
2.30.2
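
The ref/unref pairing above follows a standard reference-counting
pattern. A minimal toy model of why the extra reference taken at job
creation keeps the di_list entry safe; again, this is not QEMU's Job
API, just simplified stand-ins:

#include <stdio.h>
#include <stdlib.h>

/* Toy refcounted job; the real Job has the same ownership idea. */
typedef struct Job {
    int refcnt;
    int id;
} Job;

static Job *job_create(int id)
{
    Job *j = calloc(1, sizeof(Job));
    j->refcnt = 1; /* the job machinery's own reference */
    j->id = id;
    return j;
}

static void job_ref(Job *j)
{
    j->refcnt++;
}

static void job_unref(Job *j)
{
    if (--j->refcnt == 0) {
        printf("job %d freed\n", j->id);
        free(j);
    }
}

int main(void)
{
    Job *j = job_create(1);
    job_ref(j); /* di_list takes its own reference (the patch's job_ref) */

    /* The job_exit/finalize path drops the machinery's reference... */
    job_unref(j);

    /* ...but the job is still alive here, so qmp_backup_cancel can
     * safely inspect it while it is still in di_list. */
    printf("job %d still valid, refcnt=%d\n", j->id, j->refcnt);

    job_unref(j); /* removal from di_list drops the last reference */
    return 0;
}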
* [pve-devel] [RFC/PATCH v2 qemu 3/3] PVE-Backup: avoid segfault issues upon backup-cancel
From: Fabian Ebner @ 2022-05-25 11:59 UTC
To: pve-devel
When canceling a backup in PVE via a signal, it's easy to run into a
situation where the job is already failing when the backup_cancel QMP
command comes in. With a bit of unlucky timing on top, it can happen
that job_exit() runs between the scheduling of job_cancel_bh() and the
execution of job_cancel_bh(). But job_cancel_sync() does not expect
that the job is already finalized (in fact, the job might've been
freed already, but even if it isn't, job_cancel_sync() would try to
dereference job->txn, which would be NULL at that point).
It is not possible to simply use job_cancel() (which is advertised as
being async, but isn't in all cases) in qmp_backup_cancel() for the
same reason job_cancel_sync() cannot be used: namely, because it can
invoke job_finish_sync() (which uses AIO_WAIT_WHILE and thus hangs if
called from a coroutine). This happens when there are multiple jobs in
the transaction and job->deferred_to_main_loop is true (it is set
before scheduling job_exit()), or if the job has not been started yet.
Fix the issue by selecting the job to cancel in job_cancel_bh() itself
using the first job that's not completed yet. This is not necessarily
the first job in the list, because pvebackup_co_complete_stream()
might not yet have removed a completed job when job_cancel_bh() runs.
An alternative would be to continue using only the first job and
checking against JOB_STATUS_CONCLUDED or JOB_STATUS_NULL to decide if
it's still necessary and possible to cancel, but the approach of
using the first non-completed job seemed more robust.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
Changes from v1:
* ref/unref is now done by the previous patch and is wider in scope,
making sure a job in the di_list is always referenced.
pve-backup.c | 61 +++++++++++++++++++++++++++++++++-------------------
1 file changed, 39 insertions(+), 22 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
index 3cede98b1d..f65b872177 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -350,15 +350,42 @@ static void pvebackup_complete_cb(void *opaque, int ret)
/*
* job_cancel(_sync) does not like to be called from coroutines, so defer to
- * main loop processing via a bottom half.
+ * main loop processing via a bottom half. Assumes that caller holds
+ * backup_mutex.
*/
static void job_cancel_bh(void *opaque) {
CoCtxData *data = (CoCtxData*)opaque;
- Job *job = (Job*)data->data;
- AioContext *job_ctx = job->aio_context;
- aio_context_acquire(job_ctx);
- job_cancel_sync(job, true);
- aio_context_release(job_ctx);
+
+ /*
+ * Be careful to pick a valid job to cancel:
+ * 1. job_cancel_sync() does not expect the job to be finalized already.
+ * 2. job_exit() might run between scheduling and running job_cancel_bh()
+ * and pvebackup_co_complete_stream() might not have removed the job from
+ * the list yet (in fact, cannot, because it waits for the backup_mutex).
+ * Requiring !job_is_completed() ensures that no finalized job is picked.
+ */
+ GList *bdi = g_list_first(backup_state.di_list);
+ while (bdi) {
+ if (bdi->data) {
+ BlockJob *bj = ((PVEBackupDevInfo *)bdi->data)->job;
+ if (bj) {
+ Job *job = &bj->job;
+ if (!job_is_completed(job)) {
+ AioContext *job_ctx = job->aio_context;
+ aio_context_acquire(job_ctx);
+ job_cancel_sync(job, true);
+ aio_context_release(job_ctx);
+ /*
+ * It's enough to cancel one job in the transaction, the
+ * rest will follow automatically.
+ */
+ break;
+ }
+ }
+ }
+ bdi = g_list_next(bdi);
+ }
+
aio_co_enter(data->ctx, data->co);
}
@@ -379,22 +406,12 @@ static void coroutine_fn pvebackup_co_cancel(void *opaque)
proxmox_backup_abort(backup_state.pbs, "backup canceled");
}
- /* it's enough to cancel one job in the transaction, the rest will follow
- * automatically */
- GList *bdi = g_list_first(backup_state.di_list);
- BlockJob *cancel_job = bdi && bdi->data ?
- ((PVEBackupDevInfo *)bdi->data)->job :
- NULL;
-
- if (cancel_job) {
- CoCtxData data = {
- .ctx = qemu_get_current_aio_context(),
- .co = qemu_coroutine_self(),
- .data = &cancel_job->job,
- };
- aio_bh_schedule_oneshot(data.ctx, job_cancel_bh, &data);
- qemu_coroutine_yield();
- }
+ CoCtxData data = {
+ .ctx = qemu_get_current_aio_context(),
+ .co = qemu_coroutine_self(),
+ };
+ aio_bh_schedule_oneshot(data.ctx, job_cancel_bh, &data);
+ qemu_coroutine_yield();
qemu_co_mutex_unlock(&backup_state.backup_mutex);
}
--
2.30.2
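
The job selection added to job_cancel_bh() boils down to a linear scan
for the first not-yet-completed job. A condensed sketch of just that
selection logic, with a plain array standing in for the GList and a
'completed' flag standing in for job_is_completed():

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy stand-in: 'completed' plays the role of job_is_completed(). */
typedef struct {
    int id;
    bool completed;
} Job;

/* Return the first job that is safe to cancel, i.e. not finalized.
 * Returns NULL if every remaining entry is completed (or missing). */
static Job *pick_cancel_job(Job **jobs, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (jobs[i] && !jobs[i]->completed)
            return jobs[i];
    }
    return NULL;
}

int main(void)
{
    /* First entry already completed but not yet removed from the list,
     * exactly the window described in the commit message. */
    Job a = { .id = 1, .completed = true };
    Job b = { .id = 2, .completed = false };
    Job *di_list[] = { &a, &b };

    Job *victim = pick_cancel_job(di_list, 2);
    if (victim)
        printf("would cancel job %d\n", victim->id);
    else
        printf("nothing left to cancel\n");
    return 0;
}

Here job 2 is picked, and in the real code canceling that one job then
propagates through the transaction, matching the "cancel one job, the
rest will follow" comment in the patch.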
* [pve-devel] applied-series: [RFC/PATCH v2 qemu 1/3] PVE-Backup: create jobs: correctly cancel in error scenario
From: Wolfgang Bumiller @ 2022-06-08 12:04 UTC
To: Fabian Ebner; +Cc: pve-devel
On Wed, May 25, 2022 at 01:59:37PM +0200, Fabian Ebner wrote:
> The first call to job_cancel_sync() will cancel and free all jobs in
> the transaction, so ensure that it's called only once and get rid of
> the job_unref() that would operate on freed memory.
>
> It's also necessary to NULL backup_state.pbs in the error scenario,
> because a subsequent backup_cancel QMP call (as happens in PVE when
> the backup QMP command fails) would try to call proxmox_backup_abort()
> and run into a segfault.
>
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Series LGTM, applied, thanks