From: Stefan Reiter <s.reiter@proxmox.com>
To: Wolfgang Bumiller <w.bumiller@proxmox.com>
Cc: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH qemu 1/2] PVE: Don't expect complete_cb to be called outside coroutine
Date: Tue, 27 Oct 2020 15:57:03 +0100
Message-ID: <852869e0-e1fc-de3f-439f-881954630b52@proxmox.com>
In-Reply-To: <20201027141622.xom5xghfujman3fb@olga.proxmox.com>

On 10/27/20 3:16 PM, Wolfgang Bumiller wrote:
> On Thu, Oct 22, 2020 at 02:11:17PM +0200, Stefan Reiter wrote:
>> We're at the mercy of the rest of QEMU here, and it sometimes decides to
>> call pvebackup_complete_cb from a coroutine. This really doesn't matter
>> to us, so don't assert and crash on it.
>>
>> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
>> ---
>>   pve-backup.c | 7 +++----
>>   1 file changed, 3 insertions(+), 4 deletions(-)
>>
>> diff --git a/pve-backup.c b/pve-backup.c
>> index 53cf23ed5a..9179754dcb 100644
>> --- a/pve-backup.c
>> +++ b/pve-backup.c
>> @@ -318,19 +318,18 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
>>   
>>   static void pvebackup_complete_cb(void *opaque, int ret)
>>   {
>> -    assert(!qemu_in_coroutine());
>> -
>>       PVEBackupDevInfo *di = opaque;
>>       di->completed_ret = ret;
>>   
>>       /*
>>        * Schedule stream cleanup in async coroutine. close_image and finish might
>> -     * take a while, so we can't block on them here.
>> +     * take a while, so we can't block on them here. This way it also doesn't
>> +     * matter if we're already running in a coroutine or not.
>>        * Note: di is a pointer to an entry in the global backup_state struct, so
>>        * it stays valid.
>>        */
>>       Coroutine *co = qemu_coroutine_create(pvebackup_co_complete_stream, di);
>> -    aio_co_schedule(qemu_get_aio_context(), co);
>> +    aio_co_enter(qemu_get_aio_context(), co);
> 
> Shouldn't this be decided based on `qemu_in_coroutine()`? Or are we
> allowed to call enter regardless, I forgot...?
> 

We are allowed to call it either way: if we're in a coroutine in the 
same context, this effectively becomes a direct call; if not, the 
coroutine is scheduled correctly.
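
For reference, the dispatch logic looks roughly like this (a sketch 
paraphrased from QEMU's util/async.c around this version; exact 
details may differ between releases):

void aio_co_enter(AioContext *ctx, Coroutine *co)
{
    /* Called from a different context: defer via the scheduling
     * path, same as aio_co_schedule() did before this patch. */
    if (ctx != qemu_get_current_aio_context()) {
        aio_co_schedule(ctx, co);
        return;
    }

    if (qemu_in_coroutine()) {
        /* Same context, already inside a coroutine: queue co to be
         * entered once the current coroutine yields or terminates. */
        Coroutine *self = qemu_coroutine_self();
        assert(self != co);
        QSIMPLEQ_INSERT_TAIL(&self->co_queue_wakeup, co, co_queue_next);
    } else {
        /* Same context, outside a coroutine: enter it directly. */
        aio_context_acquire(ctx);
        qemu_aio_coroutine_enter(ctx, co);
        aio_context_release(ctx);
    }
}

So pvebackup_complete_cb() doesn't have to check qemu_in_coroutine() 
itself; aio_co_enter() already covers all three cases.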

>>   }
>>   
>>   static void coroutine_fn pvebackup_co_cancel(void *opaque)
>> -- 
>> 2.20.1



