From: Fabian Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [RFC/PATCH v2 qemu 1/3] PVE-Backup: create jobs: correctly cancel in error scenario
Date: Wed, 25 May 2022 13:59:37 +0200
Message-ID: <20220525115939.347193-1-f.ebner@proxmox.com>

The first call to job_cancel_sync() will cancel and free all jobs in
the transaction, so ensure that it's called only once and get rid of
the job_unref() that would operate on freed memory.
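
To illustrate (just a sketch, using the same 'di' as in the loop in
the diff below): the old cleanup loop did, for every device with a
created job,

    job_cancel_sync(&di->job->job, true);
    job_unref(&di->job->job);

but already the first job_cancel_sync() cancels and frees every job
in the transaction, so the job_unref() right after it, as well as any
later iteration touching di->job, operates on freed memory.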

It's also necessary to NULL backup_state.pbs in the error scenario,
because a subsequent backup_cancel QMP call (as happens in PVE when
the backup QMP command fails) would try to call proxmox_backup_abort()
and run into a segfault.
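
For context, a rough sketch of why the stale pointer is fatal
(assuming the backup_cancel handler guards the abort with a pointer
check; the reason string here is made up):

    if (backup_state.pbs) {
        /*
         * pbs was already disconnected by the error path, so the
         * handle is dangling -> proxmox_backup_abort() segfaults
         */
        proxmox_backup_abort(backup_state.pbs, "backup canceled");
    }

Clearing backup_state.pbs right after proxmox_backup_disconnect()
makes this check fail instead of aborting on a dangling handle.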

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

All patches are intended to be ordered after
0038-PVE-Backup-Don-t-block-on-finishing-and-cleanup-crea.patch,
or they could also be squashed into that patch (while lifting the
commit messages into the main repo). Of course, I can also send that
directly if this is ACKed.

I'd be glad if someone could confirm that the PBS cleanup is correct
like this; I haven't had the time to look into all the details there
yet.

New in v2.

 pve-backup.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 6f05796fad..dfaf4c93f8 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -509,6 +509,11 @@ static void create_backup_jobs_bh(void *opaque) {
     }
 
     if (*errp) {
+        /*
+         * It's enough to cancel one job in the transaction, the rest will
+         * follow automatically.
+         */
+        bool canceled = false;
         l = backup_state.di_list;
         while (l) {
             PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
@@ -519,12 +524,12 @@ static void create_backup_jobs_bh(void *opaque) {
                 di->target = NULL;
             }
 
-            if (di->job) {
+            if (!canceled && di->job) {
                 AioContext *ctx = di->job->job.aio_context;
                 aio_context_acquire(ctx);
                 job_cancel_sync(&di->job->job, true);
-                job_unref(&di->job->job);
                 aio_context_release(ctx);
+                canceled = true;
             }
         }
     }
@@ -974,6 +979,7 @@ err:
 
     if (pbs) {
         proxmox_backup_disconnect(pbs);
+        backup_state.pbs = NULL;
     }
 
     if (backup_dir) {
-- 
2.30.2


Thread overview: 4+ messages
2022-05-25 11:59 Fabian Ebner [this message]
2022-05-25 11:59 ` [pve-devel] [RFC/PATCH v2 qemu 2/3] PVE-Backup: ensure jobs in di_list are referenced Fabian Ebner
2022-05-25 11:59 ` [pve-devel] [RFC/PATCH v2 qemu 3/3] PVE-Backup: avoid segfault issues upon backup-cancel Fabian Ebner
2022-06-08 12:04 ` [pve-devel] applied-series: [RFC/PATCH v2 qemu 1/3] PVE-Backup: create jobs: correctly cancel in error scenario Wolfgang Bumiller
