From mboxrd@z Thu Jan 1 00:00:00 1970
From: Fiona Ebner
To: pve-devel@lists.proxmox.com
Date: Wed, 8 Feb 2023 15:07:46 +0100
Message-Id: <20230208140746.165320-1-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.30.2
Subject: [pve-devel] [RFC] PVE-Backup: create jobs in a drained section
List-Id: Proxmox VE development discussion

With the drive-backup QMP command,
upstream QEMU uses a drained section for the source drive when creating the
backup job. Do the same here to avoid potential subtle bugs in the future,
should upstream start relying on that behavior.

There, the drained section extends until after the job is started, but this
cannot be done here for multi-disk backups (at most, the first job could be
started within the section). The important thing is that the
copy-before-write node is in place and the bcs bitmap is initialized, which
both happen during job creation (ensured by the "block/backup: move bcs
bitmap initialization to job creation" PVE patch).

Inserting the copy-before-write node is already protected with a drained
section, which is why there should be no actual issue right now. While that
drained section does not extend until the bcs bitmap initialization, this
should also not be a problem currently, because the job is not created from
a coroutine (and even if it were, there would need to be a yield point in
between).

Signed-off-by: Fiona Ebner
---

@Thomas: The stuff with making sure source and target use the same
AioContext that I talked about off-list doesn't seem to be required, or, to
be more precise, it already happens: when copy-before-write opens the
target, it is opened in the same AioContext. There is even an assertion in
block-copy that source and target use the same AioContext, so we couldn't
mess it up without noticing.
 pve-backup.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/pve-backup.c b/pve-backup.c
index 3ca4f74cb8..75791c2313 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -508,12 +508,16 @@ static void create_backup_jobs_bh(void *opaque) {
         AioContext *aio_context = bdrv_get_aio_context(di->bs);
         aio_context_acquire(aio_context);
 
+        bdrv_drained_begin(di->bs);
+
         BlockJob *job = backup_job_create(
             NULL, di->bs, di->target, backup_state.speed, sync_mode, di->bitmap,
             bitmap_mode, false, NULL, &backup_state.perf, BLOCKDEV_ON_ERROR_REPORT,
             BLOCKDEV_ON_ERROR_REPORT, JOB_DEFAULT, pvebackup_complete_cb, di,
             backup_state.txn, &local_err);
 
+        bdrv_drained_end(di->bs);
+
         aio_context_release(aio_context);
 
         di->job = job;
-- 
2.30.2