* [pve-devel] [PATCH pve-qemu 1/5] Add transaction patches and fix for blocking finish
From: Stefan Reiter @ 2020-09-28 15:48 UTC (permalink / raw)
To: pve-devel
With the transaction patches, patch 0026-PVE-Backup-modify-job-api.patch
is no longer necessary, so drop it and rebase all following patches on
top.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
The three patches marked "qemu" in this email chain are the same as the ones being
added here as files; they are sent separately for easier review and commenting. I
sent the full patch-of-patches as well, since the "modify job API" patch needs to be
dropped for transaction support to work.
The fix for the bug discussed previously (VM hang on server-side verify) is patch 4
of 5, now rebased on top of transaction support, which makes some of the changes
easier since less code requires fixing.
Patch 5 makes the log output for backups look a bit nicer using the new "finishing"
flag, but it creates no hard dependency (old and new versions on both sides should
continue to work together fine).
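
For reference, the new sequential-transaction flow (patches 1 and 3 of 5, included
below) boils down to roughly the following - a simplified sketch of the
create_backup_jobs()/qmp_backup() logic from the series, with error handling elided:

    /* one transaction per backup run; member jobs run strictly in order */
    JobTxn *txn = job_txn_new_seq();

    for (GList *l = di_list; l; l = g_list_next(l)) {
        PVEBackupDevInfo *di = l->data;
        /* jobs are registered with 'txn' at creation time and stay in
         * the CREATED state until started */
        di->job = backup_job_create(
            NULL, di->bs, di->target, speed, sync_mode, di->bitmap,
            bitmap_mode, false, NULL, BLOCKDEV_ON_ERROR_REPORT,
            BLOCKDEV_ON_ERROR_REPORT, JOB_DEFAULT, pvebackup_complete_cb,
            di, txn, &local_err);
    }

    /* start the first job only; each successful completion starts the
     * next member, and any failure cancels the remaining jobs */
    job_txn_start_seq(txn);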
...E-Backup-add-vma-backup-format-code.patch} | 0
.../pve/0026-PVE-Backup-modify-job-api.patch | 92 -----
...Backup-add-backup-dump-block-driver.patch} | 8 +-
...kup-proxmox-backup-patches-for-qemu.patch} | 10 +-
...store-new-command-to-restore-from-p.patch} | 0
...coroutines-to-fix-AIO-freeze-cleanu.patch} | 18 +-
...-support-for-sync-bitmap-mode-never.patch} | 18 +-
...support-for-conditional-and-always-.patch} | 0
...heck-for-bitmap-mode-without-bitmap.patch} | 4 +-
...to-bdrv_dirty_bitmap_merge_internal.patch} | 0
...-iotests-add-test-for-bitmap-mirror.patch} | 0
...0036-mirror-move-some-checks-to-qmp.patch} | 4 +-
...rty-bitmap-tracking-for-incremental.patch} | 24 +-
...ame-incremental-to-use-dirty-bitmap.patch} | 16 +-
...h => 0039-PVE-fixup-pbs-restore-API.patch} | 0
...rty-counter-for-non-incremental-bac.patch} | 4 +-
...se-proxmox_backup_check_incremental.patch} | 4 +-
...kup-add-compress-and-encrypt-option.patch} | 10 +-
...-driver-to-map-backup-archives-into.patch} | 0
...d-query_proxmox_support-QMP-command.patch} | 4 +-
...ssing-crypt-and-compress-parameters.patch} | 8 +-
...ite-callback-with-big-blocks-correc.patch} | 4 +-
...block-handling-to-PBS-dump-callback.patch} | 2 +-
...-add-query-pbs-bitmap-info-QMP-call.patch} | 28 +-
...t-stderr-to-journal-when-daemonized.patch} | 8 +-
...d-sequential-job-transaction-support.patch | 82 ++++
...-transaction-to-synchronize-job-stat.patch | 290 ++++++++++++++
...ore-coroutines-and-don-t-block-on-fi.patch | 363 ++++++++++++++++++
debian/patches/series | 52 +--
29 files changed, 849 insertions(+), 204 deletions(-)
rename debian/patches/pve/{0027-PVE-Backup-add-vma-backup-format-code.patch => 0026-PVE-Backup-add-vma-backup-format-code.patch} (100%)
delete mode 100644 debian/patches/pve/0026-PVE-Backup-modify-job-api.patch
rename debian/patches/pve/{0028-PVE-Backup-add-backup-dump-block-driver.patch => 0027-PVE-Backup-add-backup-dump-block-driver.patch} (97%)
rename debian/patches/pve/{0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch => 0028-PVE-Backup-proxmox-backup-patches-for-qemu.patch} (99%)
rename debian/patches/pve/{0030-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch => 0029-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch} (100%)
rename debian/patches/pve/{0031-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch => 0030-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch} (98%)
rename debian/patches/pve/{0032-drive-mirror-add-support-for-sync-bitmap-mode-never.patch => 0031-drive-mirror-add-support-for-sync-bitmap-mode-never.patch} (97%)
rename debian/patches/pve/{0033-drive-mirror-add-support-for-conditional-and-always-.patch => 0032-drive-mirror-add-support-for-conditional-and-always-.patch} (100%)
rename debian/patches/pve/{0034-mirror-add-check-for-bitmap-mode-without-bitmap.patch => 0033-mirror-add-check-for-bitmap-mode-without-bitmap.patch} (90%)
rename debian/patches/pve/{0035-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch => 0034-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch} (100%)
rename debian/patches/pve/{0036-iotests-add-test-for-bitmap-mirror.patch => 0035-iotests-add-test-for-bitmap-mirror.patch} (100%)
rename debian/patches/pve/{0037-mirror-move-some-checks-to-qmp.patch => 0036-mirror-move-some-checks-to-qmp.patch} (99%)
rename debian/patches/pve/{0038-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch => 0037-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch} (96%)
rename debian/patches/pve/{0039-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch => 0038-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch} (91%)
rename debian/patches/pve/{0040-PVE-fixup-pbs-restore-API.patch => 0039-PVE-fixup-pbs-restore-API.patch} (100%)
rename debian/patches/pve/{0041-PVE-always-set-dirty-counter-for-non-incremental-bac.patch => 0040-PVE-always-set-dirty-counter-for-non-incremental-bac.patch} (91%)
rename debian/patches/pve/{0042-PVE-use-proxmox_backup_check_incremental.patch => 0041-PVE-use-proxmox_backup_check_incremental.patch} (93%)
rename debian/patches/pve/{0043-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch => 0042-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch} (93%)
rename debian/patches/pve/{0044-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch => 0043-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch} (100%)
rename debian/patches/pve/{0045-PVE-add-query_proxmox_support-QMP-command.patch => 0044-PVE-add-query_proxmox_support-QMP-command.patch} (94%)
rename debian/patches/pve/{0046-pbs-fix-missing-crypt-and-compress-parameters.patch => 0045-pbs-fix-missing-crypt-and-compress-parameters.patch} (89%)
rename debian/patches/pve/{0047-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch => 0046-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch} (96%)
rename debian/patches/pve/{0048-PVE-add-zero-block-handling-to-PBS-dump-callback.patch => 0047-PVE-add-zero-block-handling-to-PBS-dump-callback.patch} (98%)
rename debian/patches/pve/{0049-PVE-add-query-pbs-bitmap-info-QMP-call.patch => 0048-PVE-add-query-pbs-bitmap-info-QMP-call.patch} (94%)
rename debian/patches/pve/{0050-PVE-redirect-stderr-to-journal-when-daemonized.patch => 0049-PVE-redirect-stderr-to-journal-when-daemonized.patch} (89%)
create mode 100644 debian/patches/pve/0050-PVE-Add-sequential-job-transaction-support.patch
create mode 100644 debian/patches/pve/0051-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch
create mode 100644 debian/patches/pve/0052-PVE-Backup-Use-more-coroutines-and-don-t-block-on-fi.patch
diff --git a/debian/patches/pve/0027-PVE-Backup-add-vma-backup-format-code.patch b/debian/patches/pve/0026-PVE-Backup-add-vma-backup-format-code.patch
similarity index 100%
rename from debian/patches/pve/0027-PVE-Backup-add-vma-backup-format-code.patch
rename to debian/patches/pve/0026-PVE-Backup-add-vma-backup-format-code.patch
diff --git a/debian/patches/pve/0026-PVE-Backup-modify-job-api.patch b/debian/patches/pve/0026-PVE-Backup-modify-job-api.patch
deleted file mode 100644
index 716a5d5..0000000
--- a/debian/patches/pve/0026-PVE-Backup-modify-job-api.patch
+++ /dev/null
@@ -1,92 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Wolfgang Bumiller <w.bumiller@proxmox.com>
-Date: Mon, 6 Apr 2020 12:16:56 +0200
-Subject: [PATCH] PVE-Backup: modify job api
-
-Introduce a pause_count parameter to start a backup in
-paused mode. This way backups of multiple drives can be
-started up sequentially via the completion callback while
-having been started at the same point in time.
-
-Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
----
- block/backup.c | 3 +++
- block/replication.c | 2 +-
- blockdev.c | 3 ++-
- include/block/block_int.h | 1 +
- job.c | 2 +-
- 5 files changed, 8 insertions(+), 3 deletions(-)
-
-diff --git a/block/backup.c b/block/backup.c
-index 4f13bb20a5..5f373a4f9b 100644
---- a/block/backup.c
-+++ b/block/backup.c
-@@ -338,6 +338,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
- BlockdevOnError on_target_error,
- int creation_flags,
- BlockCompletionFunc *cb, void *opaque,
-+ int pause_count,
- JobTxn *txn, Error **errp)
- {
- int64_t len, target_len;
-@@ -471,6 +472,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
- block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
- &error_abort);
-
-+ job->common.job.pause_count += pause_count;
-+
- return &job->common;
-
- error:
-diff --git a/block/replication.c b/block/replication.c
-index 0c70215784..59270a0468 100644
---- a/block/replication.c
-+++ b/block/replication.c
-@@ -560,7 +560,7 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
- 0, MIRROR_SYNC_MODE_NONE, NULL, 0, false, NULL,
- BLOCKDEV_ON_ERROR_REPORT,
- BLOCKDEV_ON_ERROR_REPORT, JOB_INTERNAL,
-- backup_job_completed, bs, NULL, &local_err);
-+ backup_job_completed, bs, 0, NULL, &local_err);
- if (local_err) {
- error_propagate(errp, local_err);
- backup_job_cleanup(bs);
-diff --git a/blockdev.c b/blockdev.c
-index 3848a9c8ab..5107c5445e 100644
---- a/blockdev.c
-+++ b/blockdev.c
-@@ -2832,7 +2832,8 @@ static BlockJob *do_backup_common(BackupCommon *backup,
- backup->filter_node_name,
- backup->on_source_error,
- backup->on_target_error,
-- job_flags, NULL, NULL, txn, errp);
-+ job_flags, NULL, NULL, 0, txn, errp);
-+
- return job;
- }
-
-diff --git a/include/block/block_int.h b/include/block/block_int.h
-index 38dec0275b..5094ae1e95 100644
---- a/include/block/block_int.h
-+++ b/include/block/block_int.h
-@@ -1254,6 +1254,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
- BlockdevOnError on_target_error,
- int creation_flags,
- BlockCompletionFunc *cb, void *opaque,
-+ int pause_count,
- JobTxn *txn, Error **errp);
-
- BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
-diff --git a/job.c b/job.c
-index 53be57a3a0..e82253e041 100644
---- a/job.c
-+++ b/job.c
-@@ -918,7 +918,7 @@ void job_start(Job *job)
- job->co = qemu_coroutine_create(job_co_entry, job);
- job->pause_count--;
- job->busy = true;
-- job->paused = false;
-+ job->paused = job->pause_count > 0;
- job_state_transition(job, JOB_STATUS_RUNNING);
- aio_co_enter(job->aio_context, job->co);
- }
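
For context, the patch removed above implemented sequential backups by creating
every job with pause_count = 1 and resuming the next job from the completion
callback - roughly the following pattern, reconstructed from the old
create_backup_jobs()/pvebackup_run_next_job() code that patch 3/5 removes:

    /* old approach: jobs stay paused after job_start() (pause_count > 0) */
    BlockJob *job = backup_job_create(
        NULL, di->bs, di->target, speed, sync_mode, di->bitmap,
        bitmap_mode, false, NULL, BLOCKDEV_ON_ERROR_REPORT,
        BLOCKDEV_ON_ERROR_REPORT, JOB_DEFAULT, pvebackup_complete_cb,
        di, 1 /* pause_count */, NULL, &local_err);
    job_start(&job->job);

    /* later, from pvebackup_run_next_job() after the previous job ends: */
    if (job_should_pause(&job->job)) {
        if (pvebackup_error_or_canceled()) {
            job_cancel_sync(&job->job);
        } else {
            job_resume(&job->job);   /* let the next drive's backup run */
        }
    }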
diff --git a/debian/patches/pve/0028-PVE-Backup-add-backup-dump-block-driver.patch b/debian/patches/pve/0027-PVE-Backup-add-backup-dump-block-driver.patch
similarity index 97%
rename from debian/patches/pve/0028-PVE-Backup-add-backup-dump-block-driver.patch
rename to debian/patches/pve/0027-PVE-Backup-add-backup-dump-block-driver.patch
index af81be5..cfd297b 100644
--- a/debian/patches/pve/0028-PVE-Backup-add-backup-dump-block-driver.patch
+++ b/debian/patches/pve/0027-PVE-Backup-add-backup-dump-block-driver.patch
@@ -203,7 +203,7 @@ index 0000000000..93d7f46950
+ return bs;
+}
diff --git a/block/backup.c b/block/backup.c
-index 5f373a4f9b..1bcc7faa32 100644
+index 4f13bb20a5..cd42236b79 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -32,24 +32,6 @@
@@ -231,7 +231,7 @@ index 5f373a4f9b..1bcc7faa32 100644
static const BlockJobDriver backup_job_driver;
static void backup_progress_bytes_callback(int64_t bytes, void *opaque)
-@@ -423,6 +405,11 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
+@@ -422,6 +404,11 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
goto error;
}
@@ -244,7 +244,7 @@ index 5f373a4f9b..1bcc7faa32 100644
* If source is in backing chain of target assume that target is going to be
* used for "image fleecing", i.e. it should represent a kind of snapshot of
diff --git a/include/block/block_int.h b/include/block/block_int.h
-index 5094ae1e95..dc72197388 100644
+index 38dec0275b..1efb1f527c 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -62,6 +62,36 @@
@@ -285,7 +285,7 @@ index 5094ae1e95..dc72197388 100644
BDRV_TRACKED_READ,
BDRV_TRACKED_WRITE,
diff --git a/job.c b/job.c
-index e82253e041..bcbbb0be02 100644
+index 53be57a3a0..b8139c80a4 100644
--- a/job.c
+++ b/job.c
@@ -269,7 +269,8 @@ static bool job_started(Job *job)
diff --git a/debian/patches/pve/0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch b/debian/patches/pve/0028-PVE-Backup-proxmox-backup-patches-for-qemu.patch
similarity index 99%
rename from debian/patches/pve/0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch
rename to debian/patches/pve/0028-PVE-Backup-proxmox-backup-patches-for-qemu.patch
index 46bede6..d41b675 100644
--- a/debian/patches/pve/0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch
+++ b/debian/patches/pve/0028-PVE-Backup-proxmox-backup-patches-for-qemu.patch
@@ -16,11 +16,11 @@ Subject: [PATCH] PVE-Backup: proxmox backup patches for qemu
monitor/hmp-cmds.c | 44 ++
proxmox-backup-client.c | 182 +++++++
proxmox-backup-client.h | 52 ++
- pve-backup.c | 959 +++++++++++++++++++++++++++++++++
+ pve-backup.c | 955 +++++++++++++++++++++++++++++++++
qapi/block-core.json | 109 ++++
qapi/common.json | 13 +
qapi/misc.json | 13 -
- 16 files changed, 1443 insertions(+), 15 deletions(-)
+ 16 files changed, 1439 insertions(+), 15 deletions(-)
create mode 100644 proxmox-backup-client.c
create mode 100644 proxmox-backup-client.h
create mode 100644 pve-backup.c
@@ -112,7 +112,7 @@ index 4c8c375172..d485c3ac79 100644
+ hmp_handle_error(mon, error);
+}
diff --git a/blockdev.c b/blockdev.c
-index 5107c5445e..3c427fc4be 100644
+index 3848a9c8ab..681da7c8b6 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -36,6 +36,7 @@
@@ -188,7 +188,7 @@ index 2b58ac4a1c..9e58b6a5fc 100644
{
diff --git a/include/block/block_int.h b/include/block/block_int.h
-index dc72197388..5ff3b1186f 100644
+index 1efb1f527c..8dda6f769d 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -64,7 +64,7 @@
@@ -524,7 +524,7 @@ index 0000000000..b311bf8de8
+#endif /* PROXMOX_BACKUP_CLIENT_H */
diff --git a/pve-backup.c b/pve-backup.c
new file mode 100644
-index 0000000000..9ae89fb679
+index 0000000000..55441eb9d1
--- /dev/null
+++ b/pve-backup.c
@@ -0,0 +1,955 @@
diff --git a/debian/patches/pve/0030-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch b/debian/patches/pve/0029-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
similarity index 100%
rename from debian/patches/pve/0030-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
rename to debian/patches/pve/0029-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
diff --git a/debian/patches/pve/0031-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch b/debian/patches/pve/0030-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch
similarity index 98%
rename from debian/patches/pve/0031-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch
rename to debian/patches/pve/0030-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch
index 6f29231..0d874ce 100644
--- a/debian/patches/pve/0031-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch
+++ b/debian/patches/pve/0030-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch
@@ -36,7 +36,7 @@ Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
1 file changed, 320 insertions(+), 318 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
-index 9ae89fb679..bb917ee972 100644
+index 55441eb9d1..d40f3f2fd6 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -11,11 +11,27 @@
@@ -746,7 +746,7 @@ index 9ae89fb679..bb917ee972 100644
"previous backup not finished");
return;
}
-@@ -631,7 +692,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
+@@ -627,7 +688,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
if (dev_id < 0)
goto err;
@@ -755,7 +755,7 @@ index 9ae89fb679..bb917ee972 100644
goto err;
}
-@@ -652,7 +713,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
+@@ -648,7 +709,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
l = g_list_next(l);
@@ -764,7 +764,7 @@ index 9ae89fb679..bb917ee972 100644
goto err;
}
-@@ -717,9 +778,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
+@@ -713,9 +774,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
}
/* initialize global backup_state now */
@@ -775,7 +775,7 @@ index 9ae89fb679..bb917ee972 100644
if (backup_state.stat.error) {
error_free(backup_state.stat.error);
-@@ -742,7 +801,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
+@@ -738,7 +797,7 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
backup_state.stat.transferred = 0;
backup_state.stat.zero_bytes = 0;
@@ -784,7 +784,7 @@ index 9ae89fb679..bb917ee972 100644
backup_state.speed = (task->has_speed && task->speed > 0) ? task->speed : 0;
-@@ -751,48 +810,6 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
+@@ -747,48 +806,6 @@ static void coroutine_fn pvebackup_co_start(void *opaque)
backup_state.di_list = di_list;
@@ -833,7 +833,7 @@ index 9ae89fb679..bb917ee972 100644
uuid_info = g_malloc0(sizeof(*uuid_info));
uuid_info->UUID = uuid_str;
-@@ -835,8 +852,6 @@ err:
+@@ -831,8 +848,6 @@ err:
rmdir(backup_dir);
}
@@ -842,7 +842,7 @@ index 9ae89fb679..bb917ee972 100644
task->result = NULL;
return;
}
-@@ -880,32 +895,31 @@ UuidInfo *qmp_backup(
+@@ -876,32 +891,31 @@ UuidInfo *qmp_backup(
.errp = errp,
};
@@ -890,7 +890,7 @@ index 9ae89fb679..bb917ee972 100644
}
info->has_status = true;
-@@ -941,19 +955,7 @@ static void coroutine_fn pvebackup_co_query(void *opaque)
+@@ -937,19 +951,7 @@ static void coroutine_fn pvebackup_co_query(void *opaque)
info->has_transferred = true;
info->transferred = backup_state.stat.transferred;
diff --git a/debian/patches/pve/0032-drive-mirror-add-support-for-sync-bitmap-mode-never.patch b/debian/patches/pve/0031-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
similarity index 97%
rename from debian/patches/pve/0032-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
rename to debian/patches/pve/0031-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
index d9dc74e..f09b953 100644
--- a/debian/patches/pve/0032-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
+++ b/debian/patches/pve/0031-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
@@ -249,10 +249,10 @@ index e8e8844afc..100e828639 100644
&local_err);
if (local_err) {
diff --git a/blockdev.c b/blockdev.c
-index 3c427fc4be..28ed750ba5 100644
+index 681da7c8b6..02d58e7645 100644
--- a/blockdev.c
+++ b/blockdev.c
-@@ -2877,6 +2877,10 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
+@@ -2876,6 +2876,10 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
BlockDriverState *target,
bool has_replaces, const char *replaces,
enum MirrorSyncMode sync,
@@ -263,7 +263,7 @@ index 3c427fc4be..28ed750ba5 100644
BlockMirrorBackingMode backing_mode,
bool zero_target,
bool has_speed, int64_t speed,
-@@ -2895,6 +2899,7 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
+@@ -2894,6 +2898,7 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
Error **errp)
{
int job_flags = JOB_DEFAULT;
@@ -271,7 +271,7 @@ index 3c427fc4be..28ed750ba5 100644
if (!has_speed) {
speed = 0;
-@@ -2949,6 +2954,29 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
+@@ -2948,6 +2953,29 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
sync = MIRROR_SYNC_MODE_FULL;
}
@@ -301,7 +301,7 @@ index 3c427fc4be..28ed750ba5 100644
if (has_replaces) {
BlockDriverState *to_replace_bs;
AioContext *replace_aio_context;
-@@ -2986,8 +3014,8 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
+@@ -2985,8 +3013,8 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
* and will allow to check whether the node still exist at mirror completion
*/
mirror_start(job_id, bs, target,
@@ -312,7 +312,7 @@ index 3c427fc4be..28ed750ba5 100644
on_source_error, on_target_error, unmap, filter_node_name,
copy_mode, errp);
}
-@@ -3128,6 +3156,8 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
+@@ -3127,6 +3155,8 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
blockdev_mirror_common(arg->has_job_id ? arg->job_id : NULL, bs, target_bs,
arg->has_replaces, arg->replaces, arg->sync,
@@ -321,7 +321,7 @@ index 3c427fc4be..28ed750ba5 100644
backing_mode, zero_target,
arg->has_speed, arg->speed,
arg->has_granularity, arg->granularity,
-@@ -3149,6 +3179,8 @@ void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
+@@ -3148,6 +3178,8 @@ void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
const char *device, const char *target,
bool has_replaces, const char *replaces,
MirrorSyncMode sync,
@@ -330,7 +330,7 @@ index 3c427fc4be..28ed750ba5 100644
bool has_speed, int64_t speed,
bool has_granularity, uint32_t granularity,
bool has_buf_size, int64_t buf_size,
-@@ -3198,7 +3230,8 @@ void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
+@@ -3197,7 +3229,8 @@ void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
}
blockdev_mirror_common(has_job_id ? job_id : NULL, bs, target_bs,
@@ -341,7 +341,7 @@ index 3c427fc4be..28ed750ba5 100644
has_granularity, granularity,
has_buf_size, buf_size,
diff --git a/include/block/block_int.h b/include/block/block_int.h
-index 5ff3b1186f..befdae125b 100644
+index 8dda6f769d..279bd4ab61 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -1245,7 +1245,9 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
diff --git a/debian/patches/pve/0033-drive-mirror-add-support-for-conditional-and-always-.patch b/debian/patches/pve/0032-drive-mirror-add-support-for-conditional-and-always-.patch
similarity index 100%
rename from debian/patches/pve/0033-drive-mirror-add-support-for-conditional-and-always-.patch
rename to debian/patches/pve/0032-drive-mirror-add-support-for-conditional-and-always-.patch
diff --git a/debian/patches/pve/0034-mirror-add-check-for-bitmap-mode-without-bitmap.patch b/debian/patches/pve/0033-mirror-add-check-for-bitmap-mode-without-bitmap.patch
similarity index 90%
rename from debian/patches/pve/0034-mirror-add-check-for-bitmap-mode-without-bitmap.patch
rename to debian/patches/pve/0033-mirror-add-check-for-bitmap-mode-without-bitmap.patch
index aca79a5..607c90b 100644
--- a/debian/patches/pve/0034-mirror-add-check-for-bitmap-mode-without-bitmap.patch
+++ b/debian/patches/pve/0033-mirror-add-check-for-bitmap-mode-without-bitmap.patch
@@ -15,10 +15,10 @@ Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
1 file changed, 3 insertions(+)
diff --git a/blockdev.c b/blockdev.c
-index 28ed750ba5..4665321bd8 100644
+index 02d58e7645..0d480f02c7 100644
--- a/blockdev.c
+++ b/blockdev.c
-@@ -2975,6 +2975,9 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
+@@ -2974,6 +2974,9 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
if (bdrv_dirty_bitmap_check(bitmap, BDRV_BITMAP_ALLOW_RO, errp)) {
return;
}
diff --git a/debian/patches/pve/0035-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch b/debian/patches/pve/0034-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch
similarity index 100%
rename from debian/patches/pve/0035-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch
rename to debian/patches/pve/0034-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch
diff --git a/debian/patches/pve/0036-iotests-add-test-for-bitmap-mirror.patch b/debian/patches/pve/0035-iotests-add-test-for-bitmap-mirror.patch
similarity index 100%
rename from debian/patches/pve/0036-iotests-add-test-for-bitmap-mirror.patch
rename to debian/patches/pve/0035-iotests-add-test-for-bitmap-mirror.patch
diff --git a/debian/patches/pve/0037-mirror-move-some-checks-to-qmp.patch b/debian/patches/pve/0036-mirror-move-some-checks-to-qmp.patch
similarity index 99%
rename from debian/patches/pve/0037-mirror-move-some-checks-to-qmp.patch
rename to debian/patches/pve/0036-mirror-move-some-checks-to-qmp.patch
index 1633911..cd96a02 100644
--- a/debian/patches/pve/0037-mirror-move-some-checks-to-qmp.patch
+++ b/debian/patches/pve/0036-mirror-move-some-checks-to-qmp.patch
@@ -59,10 +59,10 @@ index fb12ccb932..dfce442e97 100644
if (bitmap_mode != BITMAP_SYNC_MODE_NEVER) {
diff --git a/blockdev.c b/blockdev.c
-index 4665321bd8..1db0cbcad5 100644
+index 0d480f02c7..be87d65c02 100644
--- a/blockdev.c
+++ b/blockdev.c
-@@ -2954,7 +2954,36 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
+@@ -2953,7 +2953,36 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
sync = MIRROR_SYNC_MODE_FULL;
}
diff --git a/debian/patches/pve/0038-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch b/debian/patches/pve/0037-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
similarity index 96%
rename from debian/patches/pve/0038-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
rename to debian/patches/pve/0037-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
index 858712d..1e550f4 100644
--- a/debian/patches/pve/0038-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
+++ b/debian/patches/pve/0037-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
@@ -132,7 +132,7 @@ index b311bf8de8..20fd6b1719 100644
diff --git a/pve-backup.c b/pve-backup.c
-index bb917ee972..7b5558e28e 100644
+index d40f3f2fd6..d50f03a050 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -28,6 +28,8 @@
@@ -262,7 +262,7 @@ index bb917ee972..7b5558e28e 100644
bool has_format;
BackupFormat format;
bool has_config_file;
-@@ -621,6 +654,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -617,6 +650,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
}
size_t total = 0;
@@ -270,7 +270,7 @@ index bb917ee972..7b5558e28e 100644
l = di_list;
while (l) {
-@@ -658,6 +692,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -654,6 +688,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
int dump_cb_block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE; // Hardcoded (4M)
firewall_name = "fw.conf";
@@ -279,7 +279,7 @@ index bb917ee972..7b5558e28e 100644
char *pbs_err = NULL;
pbs = proxmox_backup_new(
task->backup_file,
-@@ -677,7 +713,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -673,7 +709,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
goto err;
}
@@ -289,7 +289,7 @@ index bb917ee972..7b5558e28e 100644
goto err;
/* register all devices */
-@@ -688,9 +725,32 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -684,9 +721,32 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
const char *devname = bdrv_get_device_name(di->bs);
@@ -324,7 +324,7 @@ index bb917ee972..7b5558e28e 100644
if (!(di->target = bdrv_backup_dump_create(dump_cb_block_size, di->size, pvebackup_co_dump_pbs_cb, di, task->errp))) {
goto err;
-@@ -699,6 +759,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -695,6 +755,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
di->dev_id = dev_id;
}
} else if (format == BACKUP_FORMAT_VMA) {
@@ -333,7 +333,7 @@ index bb917ee972..7b5558e28e 100644
vmaw = vma_writer_create(task->backup_file, uuid, &local_err);
if (!vmaw) {
if (local_err) {
-@@ -726,6 +788,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -722,6 +784,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
}
}
} else if (format == BACKUP_FORMAT_DIR) {
@@ -342,7 +342,7 @@ index bb917ee972..7b5558e28e 100644
if (mkdir(task->backup_file, 0640) != 0) {
error_setg_errno(task->errp, errno, "can't create directory '%s'\n",
task->backup_file);
-@@ -798,8 +862,10 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -794,8 +858,10 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
char *uuid_str = g_strdup(backup_state.stat.uuid_str);
backup_state.stat.total = total;
@@ -353,7 +353,7 @@ index bb917ee972..7b5558e28e 100644
qemu_mutex_unlock(&backup_state.stat.lock);
-@@ -823,6 +889,10 @@ err:
+@@ -819,6 +885,10 @@ err:
PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
l = g_list_next(l);
@@ -364,7 +364,7 @@ index bb917ee972..7b5558e28e 100644
if (di->target) {
bdrv_unref(di->target);
}
-@@ -864,6 +934,7 @@ UuidInfo *qmp_backup(
+@@ -860,6 +930,7 @@ UuidInfo *qmp_backup(
bool has_fingerprint, const char *fingerprint,
bool has_backup_id, const char *backup_id,
bool has_backup_time, int64_t backup_time,
@@ -372,7 +372,7 @@ index bb917ee972..7b5558e28e 100644
bool has_format, BackupFormat format,
bool has_config_file, const char *config_file,
bool has_firewall_file, const char *firewall_file,
-@@ -882,6 +953,8 @@ UuidInfo *qmp_backup(
+@@ -878,6 +949,8 @@ UuidInfo *qmp_backup(
.backup_id = backup_id,
.has_backup_time = has_backup_time,
.backup_time = backup_time,
@@ -381,7 +381,7 @@ index bb917ee972..7b5558e28e 100644
.has_format = has_format,
.format = format,
.has_config_file = has_config_file,
-@@ -950,10 +1023,14 @@ BackupStatus *qmp_query_backup(Error **errp)
+@@ -946,10 +1019,14 @@ BackupStatus *qmp_query_backup(Error **errp)
info->has_total = true;
info->total = backup_state.stat.total;
diff --git a/debian/patches/pve/0039-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch b/debian/patches/pve/0038-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch
similarity index 91%
rename from debian/patches/pve/0039-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch
rename to debian/patches/pve/0038-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch
index f812313..f1e2f55 100644
--- a/debian/patches/pve/0039-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch
+++ b/debian/patches/pve/0038-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch
@@ -11,7 +11,7 @@ Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
2 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
-index 7b5558e28e..9e767d724c 100644
+index d50f03a050..7bf54b4c5d 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -557,8 +557,8 @@ typedef struct QmpBackupTask {
@@ -25,7 +25,7 @@ index 7b5558e28e..9e767d724c 100644
bool has_format;
BackupFormat format;
bool has_config_file;
-@@ -692,7 +692,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -688,7 +688,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
int dump_cb_block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE; // Hardcoded (4M)
firewall_name = "fw.conf";
@@ -34,7 +34,7 @@ index 7b5558e28e..9e767d724c 100644
char *pbs_err = NULL;
pbs = proxmox_backup_new(
-@@ -726,9 +726,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -722,9 +722,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
const char *devname = bdrv_get_device_name(di->bs);
BdrvDirtyBitmap *bitmap = bdrv_find_dirty_bitmap(di->bs, PBS_BITMAP_NAME);
@@ -46,7 +46,7 @@ index 7b5558e28e..9e767d724c 100644
if (bitmap == NULL) {
bitmap = bdrv_create_dirty_bitmap(di->bs, dump_cb_block_size, PBS_BITMAP_NAME, task->errp);
if (!bitmap) {
-@@ -738,7 +738,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -734,7 +734,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
bdrv_set_dirty_bitmap(bitmap, 0, di->size);
dirty += di->size;
} else {
@@ -55,7 +55,7 @@ index 7b5558e28e..9e767d724c 100644
dirty += bdrv_get_dirty_count(bitmap);
}
di->bitmap = bitmap;
-@@ -747,7 +747,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -743,7 +743,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
bdrv_release_dirty_bitmap(bitmap);
}
@@ -64,7 +64,7 @@ index 7b5558e28e..9e767d724c 100644
if (dev_id < 0) {
goto err;
}
-@@ -865,7 +865,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -861,7 +861,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
backup_state.stat.dirty = dirty;
backup_state.stat.transferred = 0;
backup_state.stat.zero_bytes = 0;
@@ -73,7 +73,7 @@ index 7b5558e28e..9e767d724c 100644
qemu_mutex_unlock(&backup_state.stat.lock);
-@@ -934,7 +934,7 @@ UuidInfo *qmp_backup(
+@@ -930,7 +930,7 @@ UuidInfo *qmp_backup(
bool has_fingerprint, const char *fingerprint,
bool has_backup_id, const char *backup_id,
bool has_backup_time, int64_t backup_time,
@@ -82,7 +82,7 @@ index 7b5558e28e..9e767d724c 100644
bool has_format, BackupFormat format,
bool has_config_file, const char *config_file,
bool has_firewall_file, const char *firewall_file,
-@@ -953,8 +953,8 @@ UuidInfo *qmp_backup(
+@@ -949,8 +949,8 @@ UuidInfo *qmp_backup(
.backup_id = backup_id,
.has_backup_time = has_backup_time,
.backup_time = backup_time,
diff --git a/debian/patches/pve/0040-PVE-fixup-pbs-restore-API.patch b/debian/patches/pve/0039-PVE-fixup-pbs-restore-API.patch
similarity index 100%
rename from debian/patches/pve/0040-PVE-fixup-pbs-restore-API.patch
rename to debian/patches/pve/0039-PVE-fixup-pbs-restore-API.patch
diff --git a/debian/patches/pve/0041-PVE-always-set-dirty-counter-for-non-incremental-bac.patch b/debian/patches/pve/0040-PVE-always-set-dirty-counter-for-non-incremental-bac.patch
similarity index 91%
rename from debian/patches/pve/0041-PVE-always-set-dirty-counter-for-non-incremental-bac.patch
rename to debian/patches/pve/0040-PVE-always-set-dirty-counter-for-non-incremental-bac.patch
index 9087e9e..c7b267e 100644
--- a/debian/patches/pve/0041-PVE-always-set-dirty-counter-for-non-incremental-bac.patch
+++ b/debian/patches/pve/0040-PVE-always-set-dirty-counter-for-non-incremental-bac.patch
@@ -9,10 +9,10 @@ Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
-index 9e767d724c..c108f6a745 100644
+index 7bf54b4c5d..1f2a0bbe8c 100644
--- a/pve-backup.c
+++ b/pve-backup.c
-@@ -742,9 +742,13 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -738,9 +738,13 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
dirty += bdrv_get_dirty_count(bitmap);
}
di->bitmap = bitmap;
diff --git a/debian/patches/pve/0042-PVE-use-proxmox_backup_check_incremental.patch b/debian/patches/pve/0041-PVE-use-proxmox_backup_check_incremental.patch
similarity index 93%
rename from debian/patches/pve/0042-PVE-use-proxmox_backup_check_incremental.patch
rename to debian/patches/pve/0041-PVE-use-proxmox_backup_check_incremental.patch
index bbf32d0..c55357f 100644
--- a/debian/patches/pve/0042-PVE-use-proxmox_backup_check_incremental.patch
+++ b/debian/patches/pve/0041-PVE-use-proxmox_backup_check_incremental.patch
@@ -10,10 +10,10 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
-index c108f6a745..aa62a1da16 100644
+index 1f2a0bbe8c..1cd9d31d7c 100644
--- a/pve-backup.c
+++ b/pve-backup.c
-@@ -734,12 +734,16 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -730,12 +730,16 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
if (!bitmap) {
goto err;
}
diff --git a/debian/patches/pve/0043-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch b/debian/patches/pve/0042-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch
similarity index 93%
rename from debian/patches/pve/0043-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch
rename to debian/patches/pve/0042-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch
index 8aa745d..90cb3e1 100644
--- a/debian/patches/pve/0043-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch
+++ b/debian/patches/pve/0042-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch
@@ -25,7 +25,7 @@ index fdc85a5c0e..43aa87487b 100644
false, NULL, false, NULL, !!devlist,
devlist, qdict_haskey(qdict, "speed"), speed, &error);
diff --git a/pve-backup.c b/pve-backup.c
-index aa62a1da16..343035b5c9 100644
+index 1cd9d31d7c..bfb648d6b5 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -567,6 +567,10 @@ typedef struct QmpBackupTask {
@@ -39,7 +39,7 @@ index aa62a1da16..343035b5c9 100644
bool has_speed;
int64_t speed;
Error **errp;
-@@ -694,6 +698,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -690,6 +694,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
bool use_dirty_bitmap = task->has_use_dirty_bitmap && task->use_dirty_bitmap;
@@ -47,7 +47,7 @@ index aa62a1da16..343035b5c9 100644
char *pbs_err = NULL;
pbs = proxmox_backup_new(
task->backup_file,
-@@ -703,8 +708,10 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -699,8 +704,10 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
task->has_password ? task->password : NULL,
task->has_keyfile ? task->keyfile : NULL,
task->has_key_password ? task->key_password : NULL,
@@ -59,7 +59,7 @@ index aa62a1da16..343035b5c9 100644
if (!pbs) {
error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
-@@ -943,6 +950,8 @@ UuidInfo *qmp_backup(
+@@ -939,6 +946,8 @@ UuidInfo *qmp_backup(
bool has_backup_id, const char *backup_id,
bool has_backup_time, int64_t backup_time,
bool has_use_dirty_bitmap, bool use_dirty_bitmap,
@@ -68,7 +68,7 @@ index aa62a1da16..343035b5c9 100644
bool has_format, BackupFormat format,
bool has_config_file, const char *config_file,
bool has_firewall_file, const char *firewall_file,
-@@ -971,6 +980,8 @@ UuidInfo *qmp_backup(
+@@ -967,6 +976,8 @@ UuidInfo *qmp_backup(
.firewall_file = firewall_file,
.has_devlist = has_devlist,
.devlist = devlist,
diff --git a/debian/patches/pve/0044-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch b/debian/patches/pve/0043-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
similarity index 100%
rename from debian/patches/pve/0044-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
rename to debian/patches/pve/0043-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
diff --git a/debian/patches/pve/0045-PVE-add-query_proxmox_support-QMP-command.patch b/debian/patches/pve/0044-PVE-add-query_proxmox_support-QMP-command.patch
similarity index 94%
rename from debian/patches/pve/0045-PVE-add-query_proxmox_support-QMP-command.patch
rename to debian/patches/pve/0044-PVE-add-query_proxmox_support-QMP-command.patch
index 514d942..1549af3 100644
--- a/debian/patches/pve/0045-PVE-add-query_proxmox_support-QMP-command.patch
+++ b/debian/patches/pve/0044-PVE-add-query_proxmox_support-QMP-command.patch
@@ -14,10 +14,10 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
2 files changed, 29 insertions(+)
diff --git a/pve-backup.c b/pve-backup.c
-index 343035b5c9..b0e7b51eef 100644
+index bfb648d6b5..2539ae1520 100644
--- a/pve-backup.c
+++ b/pve-backup.c
-@@ -1055,3 +1055,10 @@ BackupStatus *qmp_query_backup(Error **errp)
+@@ -1051,3 +1051,10 @@ BackupStatus *qmp_query_backup(Error **errp)
return info;
}
diff --git a/debian/patches/pve/0046-pbs-fix-missing-crypt-and-compress-parameters.patch b/debian/patches/pve/0045-pbs-fix-missing-crypt-and-compress-parameters.patch
similarity index 89%
rename from debian/patches/pve/0046-pbs-fix-missing-crypt-and-compress-parameters.patch
rename to debian/patches/pve/0045-pbs-fix-missing-crypt-and-compress-parameters.patch
index 950bdbe..f0ff58a 100644
--- a/debian/patches/pve/0046-pbs-fix-missing-crypt-and-compress-parameters.patch
+++ b/debian/patches/pve/0045-pbs-fix-missing-crypt-and-compress-parameters.patch
@@ -9,10 +9,10 @@ Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
-index b0e7b51eef..77eb475563 100644
+index 2539ae1520..0e293a4f5e 100644
--- a/pve-backup.c
+++ b/pve-backup.c
-@@ -962,6 +962,8 @@ UuidInfo *qmp_backup(
+@@ -958,6 +958,8 @@ UuidInfo *qmp_backup(
.backup_file = backup_file,
.has_password = has_password,
.password = password,
@@ -21,7 +21,7 @@ index b0e7b51eef..77eb475563 100644
.has_key_password = has_key_password,
.key_password = key_password,
.has_fingerprint = has_fingerprint,
-@@ -972,6 +974,10 @@ UuidInfo *qmp_backup(
+@@ -968,6 +970,10 @@ UuidInfo *qmp_backup(
.backup_time = backup_time,
.has_use_dirty_bitmap = has_use_dirty_bitmap,
.use_dirty_bitmap = use_dirty_bitmap,
@@ -32,7 +32,7 @@ index b0e7b51eef..77eb475563 100644
.has_format = has_format,
.format = format,
.has_config_file = has_config_file,
-@@ -980,8 +986,6 @@ UuidInfo *qmp_backup(
+@@ -976,8 +982,6 @@ UuidInfo *qmp_backup(
.firewall_file = firewall_file,
.has_devlist = has_devlist,
.devlist = devlist,
diff --git a/debian/patches/pve/0047-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch b/debian/patches/pve/0046-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch
similarity index 96%
rename from debian/patches/pve/0047-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch
rename to debian/patches/pve/0046-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch
index e761eea..5f73a01 100644
--- a/debian/patches/pve/0047-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch
+++ b/debian/patches/pve/0046-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch
@@ -17,7 +17,7 @@ Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
1 file changed, 22 insertions(+), 8 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
-index 77eb475563..40d8136f1a 100644
+index 0e293a4f5e..8999692418 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -67,6 +67,7 @@ opts_init(pvebackup_init);
@@ -65,7 +65,7 @@ index 77eb475563..40d8136f1a 100644
return size;
}
-@@ -730,6 +742,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -726,6 +738,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
l = g_list_next(l);
diff --git a/debian/patches/pve/0048-PVE-add-zero-block-handling-to-PBS-dump-callback.patch b/debian/patches/pve/0047-PVE-add-zero-block-handling-to-PBS-dump-callback.patch
similarity index 98%
rename from debian/patches/pve/0048-PVE-add-zero-block-handling-to-PBS-dump-callback.patch
rename to debian/patches/pve/0047-PVE-add-zero-block-handling-to-PBS-dump-callback.patch
index d1661d4..d09b817 100644
--- a/debian/patches/pve/0048-PVE-add-zero-block-handling-to-PBS-dump-callback.patch
+++ b/debian/patches/pve/0047-PVE-add-zero-block-handling-to-PBS-dump-callback.patch
@@ -20,7 +20,7 @@ Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
-index 40d8136f1a..7c99554514 100644
+index 8999692418..562fcc20f7 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -8,6 +8,7 @@
diff --git a/debian/patches/pve/0049-PVE-add-query-pbs-bitmap-info-QMP-call.patch b/debian/patches/pve/0048-PVE-add-query-pbs-bitmap-info-QMP-call.patch
similarity index 94%
rename from debian/patches/pve/0049-PVE-add-query-pbs-bitmap-info-QMP-call.patch
rename to debian/patches/pve/0048-PVE-add-query-pbs-bitmap-info-QMP-call.patch
index 31ab8a8..c8581c8 100644
--- a/debian/patches/pve/0049-PVE-add-query-pbs-bitmap-info-QMP-call.patch
+++ b/debian/patches/pve/0048-PVE-add-query-pbs-bitmap-info-QMP-call.patch
@@ -68,7 +68,7 @@ index 3ff014d32a..c3227a1498 100644
info->zero_bytes, zero_per);
diff --git a/pve-backup.c b/pve-backup.c
-index 7c99554514..c6d8a51406 100644
+index 562fcc20f7..04c21c80aa 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -46,6 +46,7 @@ static struct PVEBackupState {
@@ -79,7 +79,7 @@ index 7c99554514..c6d8a51406 100644
} stat;
int64_t speed;
VmaWriter *vmaw;
-@@ -674,7 +675,6 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -670,7 +671,6 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
}
size_t total = 0;
@@ -87,7 +87,7 @@ index 7c99554514..c6d8a51406 100644
l = di_list;
while (l) {
-@@ -695,18 +695,33 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -691,18 +691,33 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
uuid_generate(uuid);
@@ -124,7 +124,7 @@ index 7c99554514..c6d8a51406 100644
}
int dump_cb_block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE; // Hardcoded (4M)
-@@ -733,12 +748,12 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -729,12 +744,12 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
"proxmox_backup_new failed: %s", pbs_err);
proxmox_backup_free_error(pbs_err);
@@ -139,7 +139,7 @@ index 7c99554514..c6d8a51406 100644
/* register all devices */
l = di_list;
-@@ -749,6 +764,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -745,6 +760,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
di->block_size = dump_cb_block_size;
const char *devname = bdrv_get_device_name(di->bs);
@@ -148,7 +148,7 @@ index 7c99554514..c6d8a51406 100644
BdrvDirtyBitmap *bitmap = bdrv_find_dirty_bitmap(di->bs, PBS_BITMAP_NAME);
bool expect_only_dirty = false;
-@@ -757,49 +774,59 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -753,49 +770,59 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
if (bitmap == NULL) {
bitmap = bdrv_create_dirty_bitmap(di->bs, dump_cb_block_size, PBS_BITMAP_NAME, task->errp);
if (!bitmap) {
@@ -218,7 +218,7 @@ index 7c99554514..c6d8a51406 100644
}
/* register all devices for vma writer */
-@@ -809,7 +836,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -805,7 +832,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
l = g_list_next(l);
if (!(di->target = bdrv_backup_dump_create(VMA_CLUSTER_SIZE, di->size, pvebackup_co_dump_vma_cb, di, task->errp))) {
@@ -227,7 +227,7 @@ index 7c99554514..c6d8a51406 100644
}
const char *devname = bdrv_get_device_name(di->bs);
-@@ -817,16 +844,14 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -813,16 +840,14 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
if (di->dev_id <= 0) {
error_set(task->errp, ERROR_CLASS_GENERIC_ERROR,
"register_stream failed");
@@ -246,7 +246,7 @@ index 7c99554514..c6d8a51406 100644
}
backup_dir = task->backup_file;
-@@ -843,18 +868,18 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -839,18 +864,18 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
di->size, flags, false, &local_err);
if (local_err) {
error_propagate(task->errp, local_err);
@@ -268,7 +268,7 @@ index 7c99554514..c6d8a51406 100644
}
-@@ -862,7 +887,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -858,7 +883,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
if (task->has_config_file) {
if (pvebackup_co_add_config(task->config_file, config_name, format, backup_dir,
vmaw, pbs, task->errp) != 0) {
@@ -277,7 +277,7 @@ index 7c99554514..c6d8a51406 100644
}
}
-@@ -870,12 +895,11 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -866,12 +891,11 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
if (task->has_firewall_file) {
if (pvebackup_co_add_config(task->firewall_file, firewall_name, format, backup_dir,
vmaw, pbs, task->errp) != 0) {
@@ -292,7 +292,7 @@ index 7c99554514..c6d8a51406 100644
if (backup_state.stat.error) {
error_free(backup_state.stat.error);
-@@ -895,10 +919,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -891,10 +915,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
char *uuid_str = g_strdup(backup_state.stat.uuid_str);
backup_state.stat.total = total;
@@ -304,7 +304,7 @@ index 7c99554514..c6d8a51406 100644
qemu_mutex_unlock(&backup_state.stat.lock);
-@@ -915,6 +938,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+@@ -911,6 +934,9 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
task->result = uuid_info;
return;
@@ -314,7 +314,7 @@ index 7c99554514..c6d8a51406 100644
err:
l = di_list;
-@@ -1078,9 +1104,40 @@ BackupStatus *qmp_query_backup(Error **errp)
+@@ -1074,9 +1100,40 @@ BackupStatus *qmp_query_backup(Error **errp)
return info;
}
diff --git a/debian/patches/pve/0050-PVE-redirect-stderr-to-journal-when-daemonized.patch b/debian/patches/pve/0049-PVE-redirect-stderr-to-journal-when-daemonized.patch
similarity index 89%
rename from debian/patches/pve/0050-PVE-redirect-stderr-to-journal-when-daemonized.patch
rename to debian/patches/pve/0049-PVE-redirect-stderr-to-journal-when-daemonized.patch
index f73de53..a4ca3c4 100644
--- a/debian/patches/pve/0050-PVE-redirect-stderr-to-journal-when-daemonized.patch
+++ b/debian/patches/pve/0049-PVE-redirect-stderr-to-journal-when-daemonized.patch
@@ -11,10 +11,10 @@ fitting.
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/Makefile.objs b/Makefile.objs
-index b7d58e592e..105f23bff7 100644
+index 240eb503f2..c7ba4e11e7 100644
--- a/Makefile.objs
+++ b/Makefile.objs
-@@ -55,6 +55,7 @@ common-obj-y += net/
+@@ -54,6 +54,7 @@ common-obj-y += net/
common-obj-y += qdev-monitor.o
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
@@ -23,7 +23,7 @@ index b7d58e592e..105f23bff7 100644
common-obj-$(CONFIG_LINUX) += fsdev/
diff --git a/os-posix.c b/os-posix.c
-index 3cd52e1e70..ab4d052c62 100644
+index 3572db3f44..b45dde63ac 100644
--- a/os-posix.c
+++ b/os-posix.c
@@ -28,6 +28,8 @@
@@ -35,7 +35,7 @@ index 3cd52e1e70..ab4d052c62 100644
#include "qemu-common.h"
/* Needed early for CONFIG_BSD etc. */
-@@ -309,9 +311,10 @@ void os_setup_post(void)
+@@ -312,9 +314,10 @@ void os_setup_post(void)
dup2(fd, 0);
dup2(fd, 1);
diff --git a/debian/patches/pve/0050-PVE-Add-sequential-job-transaction-support.patch b/debian/patches/pve/0050-PVE-Add-sequential-job-transaction-support.patch
new file mode 100644
index 0000000..67b053d
--- /dev/null
+++ b/debian/patches/pve/0050-PVE-Add-sequential-job-transaction-support.patch
@@ -0,0 +1,82 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Stefan Reiter <s.reiter@proxmox.com>
+Date: Thu, 20 Aug 2020 14:31:59 +0200
+Subject: [PATCH] PVE: Add sequential job transaction support
+
+---
+ include/qemu/job.h | 12 ++++++++++++
+ job.c | 24 ++++++++++++++++++++++++
+ 2 files changed, 36 insertions(+)
+
+diff --git a/include/qemu/job.h b/include/qemu/job.h
+index 32aabb1c60..f7a6a0926a 100644
+--- a/include/qemu/job.h
++++ b/include/qemu/job.h
+@@ -280,6 +280,18 @@ typedef enum JobCreateFlags {
+ */
+ JobTxn *job_txn_new(void);
+
++/**
++ * Create a new transaction and set it to sequential mode, i.e. run all jobs
++ * one after the other instead of at the same time.
++ */
++JobTxn *job_txn_new_seq(void);
++
++/**
++ * Helper method to start the first job in a sequential transaction to kick it
++ * off. Other jobs will be run after this one completes.
++ */
++void job_txn_start_seq(JobTxn *txn);
++
+ /**
+ * Release a reference that was previously acquired with job_txn_add_job or
+ * job_txn_new. If it's the last reference to the object, it will be freed.
+diff --git a/job.c b/job.c
+index b8139c80a4..97ee97a192 100644
+--- a/job.c
++++ b/job.c
+@@ -72,6 +72,8 @@ struct JobTxn {
+
+ /* Reference count */
+ int refcnt;
++
++ bool sequential;
+ };
+
+ /* Right now, this mutex is only needed to synchronize accesses to job->busy
+@@ -102,6 +104,25 @@ JobTxn *job_txn_new(void)
+ return txn;
+ }
+
++JobTxn *job_txn_new_seq(void)
++{
++ JobTxn *txn = job_txn_new();
++ txn->sequential = true;
++ return txn;
++}
++
++void job_txn_start_seq(JobTxn *txn)
++{
++ assert(txn->sequential);
++ assert(!txn->aborting);
++
++ Job *first = QLIST_FIRST(&txn->jobs);
++ assert(first);
++ assert(first->status == JOB_STATUS_CREATED);
++
++ job_start(first);
++}
++
+ static void job_txn_ref(JobTxn *txn)
+ {
+ txn->refcnt++;
+@@ -841,6 +862,9 @@ static void job_completed_txn_success(Job *job)
+ */
+ QLIST_FOREACH(other_job, &txn->jobs, txn_list) {
+ if (!job_is_completed(other_job)) {
++ if (txn->sequential) {
++ job_start(other_job);
++ }
+ return;
+ }
+ assert(other_job->ret == 0);
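
A short usage note on the two helpers added above (hedged; job_create()'s
transaction handling is pre-existing QEMU behavior, not changed here): jobs join
the transaction when they are created, so a caller only needs to create all jobs
against the same sequential transaction and then kick off the first one, e.g.:

    JobTxn *txn = job_txn_new_seq();
    /* ... create all member jobs, passing 'txn' down to job_create() ... */
    job_txn_start_seq(txn);   /* runs job 1; completion chains to job 2, ... */
    job_txn_unref(txn);       /* drop the creator's reference once done with it */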
diff --git a/debian/patches/pve/0051-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch b/debian/patches/pve/0051-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch
new file mode 100644
index 0000000..0e33331
--- /dev/null
+++ b/debian/patches/pve/0051-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch
@@ -0,0 +1,290 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Stefan Reiter <s.reiter@proxmox.com>
+Date: Thu, 20 Aug 2020 14:25:00 +0200
+Subject: [PATCH] PVE-Backup: Use a transaction to synchronize job states
+
+By using a JobTxn, we can sync dirty bitmaps only when *all* jobs were
+successful - meaning we don't need to remove them when the backup fails,
+since QEMU's BITMAP_SYNC_MODE_ON_SUCCESS will now handle that for us.
+
+To keep the rate-limiting and IO impact from before, we use a sequential
+transaction, so drives will still be backed up one after the other.
+---
+ pve-backup.c | 167 +++++++++++++++------------------------------------
+ 1 file changed, 49 insertions(+), 118 deletions(-)
+
+diff --git a/pve-backup.c b/pve-backup.c
+index 04c21c80aa..9562e9c98d 100644
+--- a/pve-backup.c
++++ b/pve-backup.c
+@@ -52,6 +52,7 @@ static struct PVEBackupState {
+ VmaWriter *vmaw;
+ ProxmoxBackupHandle *pbs;
+ GList *di_list;
++ JobTxn *txn;
+ QemuMutex backup_mutex;
+ CoMutex dump_callback_mutex;
+ } backup_state;
+@@ -71,32 +72,12 @@ typedef struct PVEBackupDevInfo {
+ size_t size;
+ uint64_t block_size;
+ uint8_t dev_id;
+- bool completed;
+ char targetfile[PATH_MAX];
+ BdrvDirtyBitmap *bitmap;
+ BlockDriverState *target;
++ BlockJob *job;
+ } PVEBackupDevInfo;
+
+-static void pvebackup_run_next_job(void);
+-
+-static BlockJob *
+-lookup_active_block_job(PVEBackupDevInfo *di)
+-{
+- if (!di->completed && di->bs) {
+- for (BlockJob *job = block_job_next(NULL); job; job = block_job_next(job)) {
+- if (job->job.driver->job_type != JOB_TYPE_BACKUP) {
+- continue;
+- }
+-
+- BackupBlockJob *bjob = container_of(job, BackupBlockJob, common);
+- if (bjob && bjob->source_bs == di->bs) {
+- return job;
+- }
+- }
+- }
+- return NULL;
+-}
+-
+ static void pvebackup_propagate_error(Error *err)
+ {
+ qemu_mutex_lock(&backup_state.stat.lock);
+@@ -272,18 +253,6 @@ static void coroutine_fn pvebackup_co_cleanup(void *unused)
+ if (local_err != NULL) {
+ pvebackup_propagate_error(local_err);
+ }
+- } else {
+- // on error or cancel we cannot ensure synchronization of dirty
+- // bitmaps with backup server, so remove all and do full backup next
+- GList *l = backup_state.di_list;
+- while (l) {
+- PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+- l = g_list_next(l);
+-
+- if (di->bitmap) {
+- bdrv_release_dirty_bitmap(di->bitmap);
+- }
+- }
+ }
+
+ proxmox_backup_disconnect(backup_state.pbs);
+@@ -322,8 +291,6 @@ static void pvebackup_complete_cb(void *opaque, int ret)
+
+ qemu_mutex_lock(&backup_state.backup_mutex);
+
+- di->completed = true;
+-
+ if (ret < 0) {
+ Error *local_err = NULL;
+ error_setg(&local_err, "job failed with err %d - %s", ret, strerror(-ret));
+@@ -336,20 +303,17 @@ static void pvebackup_complete_cb(void *opaque, int ret)
+
+ block_on_coroutine_fn(pvebackup_complete_stream, di);
+
+- // remove self from job queue
++ // remove self from job list
+ backup_state.di_list = g_list_remove(backup_state.di_list, di);
+
+- if (di->bitmap && ret < 0) {
+- // on error or cancel we cannot ensure synchronization of dirty
+- // bitmaps with backup server, so remove all and do full backup next
+- bdrv_release_dirty_bitmap(di->bitmap);
+- }
+-
+ g_free(di);
+
+- qemu_mutex_unlock(&backup_state.backup_mutex);
++ /* call cleanup if we're the last job */
++ if (!g_list_first(backup_state.di_list)) {
++ block_on_coroutine_fn(pvebackup_co_cleanup, NULL);
++ }
+
+- pvebackup_run_next_job();
++ qemu_mutex_unlock(&backup_state.backup_mutex);
+ }
+
+ static void pvebackup_cancel(void)
+@@ -371,36 +335,28 @@ static void pvebackup_cancel(void)
+ proxmox_backup_abort(backup_state.pbs, "backup canceled");
+ }
+
+- qemu_mutex_unlock(&backup_state.backup_mutex);
+-
+- for(;;) {
+-
+- BlockJob *next_job = NULL;
+-
+- qemu_mutex_lock(&backup_state.backup_mutex);
+-
+- GList *l = backup_state.di_list;
+- while (l) {
+- PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+- l = g_list_next(l);
++ /* it's enough to cancel one job in the transaction, the rest will follow
++ * automatically */
++ GList *bdi = g_list_first(backup_state.di_list);
++ BlockJob *cancel_job = bdi && bdi->data ?
++ ((PVEBackupDevInfo *)bdi->data)->job :
++ NULL;
+
+- BlockJob *job = lookup_active_block_job(di);
+- if (job != NULL) {
+- next_job = job;
+- break;
+- }
+- }
++ /* ref the job before releasing the mutex, just to be safe */
++ if (cancel_job) {
++ job_ref(&cancel_job->job);
++ }
+
+- qemu_mutex_unlock(&backup_state.backup_mutex);
++ /* job_cancel_sync may enter the job, so we need to release the
++ * backup_mutex to avoid deadlock */
++ qemu_mutex_unlock(&backup_state.backup_mutex);
+
+- if (next_job) {
+- AioContext *aio_context = next_job->job.aio_context;
+- aio_context_acquire(aio_context);
+- job_cancel_sync(&next_job->job);
+- aio_context_release(aio_context);
+- } else {
+- break;
+- }
++ if (cancel_job) {
++ AioContext *aio_context = cancel_job->job.aio_context;
++ aio_context_acquire(aio_context);
++ job_cancel_sync(&cancel_job->job);
++ job_unref(&cancel_job->job);
++ aio_context_release(aio_context);
+ }
+ }
+
+@@ -459,51 +415,19 @@ static int coroutine_fn pvebackup_co_add_config(
+ goto out;
+ }
+
+-bool job_should_pause(Job *job);
+-
+-static void pvebackup_run_next_job(void)
+-{
+- assert(!qemu_in_coroutine());
+-
+- qemu_mutex_lock(&backup_state.backup_mutex);
+-
+- GList *l = backup_state.di_list;
+- while (l) {
+- PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+- l = g_list_next(l);
+-
+- BlockJob *job = lookup_active_block_job(di);
+-
+- if (job) {
+- qemu_mutex_unlock(&backup_state.backup_mutex);
+-
+- AioContext *aio_context = job->job.aio_context;
+- aio_context_acquire(aio_context);
+-
+- if (job_should_pause(&job->job)) {
+- bool error_or_canceled = pvebackup_error_or_canceled();
+- if (error_or_canceled) {
+- job_cancel_sync(&job->job);
+- } else {
+- job_resume(&job->job);
+- }
+- }
+- aio_context_release(aio_context);
+- return;
+- }
+- }
+-
+- block_on_coroutine_fn(pvebackup_co_cleanup, NULL); // no more jobs, run cleanup
+-
+- qemu_mutex_unlock(&backup_state.backup_mutex);
+-}
+-
+ static bool create_backup_jobs(void) {
+
+ assert(!qemu_in_coroutine());
+
+ Error *local_err = NULL;
+
++ /* create job transaction to synchronize bitmap commit and cancel all
++ * jobs in case one errors */
++ if (backup_state.txn) {
++ job_txn_unref(backup_state.txn);
++ }
++ backup_state.txn = job_txn_new_seq();
++
+ /* create and start all jobs (paused state) */
+ GList *l = backup_state.di_list;
+ while (l) {
+@@ -524,7 +448,7 @@ static bool create_backup_jobs(void) {
+ BlockJob *job = backup_job_create(
+ NULL, di->bs, di->target, backup_state.speed, sync_mode, di->bitmap,
+ bitmap_mode, false, NULL, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
+- JOB_DEFAULT, pvebackup_complete_cb, di, 1, NULL, &local_err);
++ JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn, &local_err);
+
+ aio_context_release(aio_context);
+
+@@ -536,7 +460,8 @@ static bool create_backup_jobs(void) {
+ pvebackup_propagate_error(create_job_err);
+ break;
+ }
+- job_start(&job->job);
++
++ di->job = job;
+
+ bdrv_unref(di->target);
+ di->target = NULL;
+@@ -554,6 +479,10 @@ static bool create_backup_jobs(void) {
+ bdrv_unref(di->target);
+ di->target = NULL;
+ }
++
++ if (di->job) {
++ job_unref(&di->job->job);
++ }
+ }
+ }
+
+@@ -944,10 +873,6 @@ err:
+ PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+ l = g_list_next(l);
+
+- if (di->bitmap) {
+- bdrv_release_dirty_bitmap(di->bitmap);
+- }
+-
+ if (di->target) {
+ bdrv_unref(di->target);
+ }
+@@ -1036,9 +961,15 @@ UuidInfo *qmp_backup(
+ block_on_coroutine_fn(pvebackup_co_prepare, &task);
+
+ if (*errp == NULL) {
+- create_backup_jobs();
++ bool errors = create_backup_jobs();
+ qemu_mutex_unlock(&backup_state.backup_mutex);
+- pvebackup_run_next_job();
++
++ if (!errors) {
++ /* start the first job in the transaction
++ * note: this might directly enter the job, so we need to do this
++ * after unlocking the backup_mutex */
++ job_txn_start_seq(backup_state.txn);
++ }
+ } else {
+ qemu_mutex_unlock(&backup_state.backup_mutex);
+ }
diff --git a/debian/patches/pve/0052-PVE-Backup-Use-more-coroutines-and-don-t-block-on-fi.patch b/debian/patches/pve/0052-PVE-Backup-Use-more-coroutines-and-don-t-block-on-fi.patch
new file mode 100644
index 0000000..8ae58fe
--- /dev/null
+++ b/debian/patches/pve/0052-PVE-Backup-Use-more-coroutines-and-don-t-block-on-fi.patch
@@ -0,0 +1,363 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Stefan Reiter <s.reiter@proxmox.com>
+Date: Mon, 28 Sep 2020 13:40:51 +0200
+Subject: [PATCH] PVE-Backup: Use more coroutines and don't block on finishing
+
+proxmox_backup_co_finish is already async, but previously we would wait
+for the coroutine using block_on_coroutine_fn(). Avoid this by
+scheduling pvebackup_co_complete_stream (and thus pvebackup_co_cleanup)
+as a real coroutine when calling from pvebackup_complete_cb. This is ok,
+since complete_stream uses the backup_mutex internally to synchronize,
+and other streams can happily continue writing in the meantime anyway.
+
+To accommodate, backup_mutex is converted to a CoMutex. This means
+converting every user to a coroutine. This is not just useful here, but
+will come in handy once this series[0] is merged, and QMP calls can be
+yield-able coroutines too. Then we can also finally get rid of
+block_on_coroutine_fn.
+
+Cases of aio_context_acquire/release from within what is now a coroutine
+are changed to aio_co_reschedule_self, which works since a running
+coroutine always holds the aio lock for the context it is running in.
+
+job_cancel_sync is changed to regular job_cancel, since job_cancel_sync
+uses AIO_WAIT_WHILE internally, which is forbidden in coroutines.
+
+create_backup_jobs cannot be run from a coroutine, so it is run
+directly. This does however mean that it runs unlocked, as it cannot
+acquire a CoMutex (see its comment for the rationale on why that's ok).
+
+To communicate the finishing state, a new property is introduced to
+query-backup: 'finishing'. A new state is explicitly not used, since
+that would break compatibility with older qemu-server versions.
+
+[0] https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg03515.html
+---
+ proxmox-backup-client.h | 1 +
+ pve-backup.c | 124 ++++++++++++++++++++++------------------
+ qapi/block-core.json | 5 +-
+ 3 files changed, 74 insertions(+), 56 deletions(-)
+
+diff --git a/proxmox-backup-client.h b/proxmox-backup-client.h
+index 20fd6b1719..a4781c5851 100644
+--- a/proxmox-backup-client.h
++++ b/proxmox-backup-client.h
+@@ -5,6 +5,7 @@
+ #include "qemu/coroutine.h"
+ #include "proxmox-backup-qemu.h"
+
++// FIXME: Remove once coroutines are supported for QMP
+ void block_on_coroutine_fn(CoroutineEntry *entry, void *entry_arg);
+
+ int coroutine_fn
+diff --git a/pve-backup.c b/pve-backup.c
+index 9562e9c98d..53cf23ed5a 100644
+--- a/pve-backup.c
++++ b/pve-backup.c
+@@ -33,7 +33,9 @@ const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
+
+ static struct PVEBackupState {
+ struct {
+- // Everithing accessed from qmp_backup_query command is protected using lock
++ // Everything accessed from qmp_backup_query command is protected using
++ // this lock. Do NOT hold this lock for long times, as it is sometimes
++ // acquired from coroutines, and thus any wait time may block the guest.
+ QemuMutex lock;
+ Error *error;
+ time_t start_time;
+@@ -47,20 +49,21 @@ static struct PVEBackupState {
+ size_t reused;
+ size_t zero_bytes;
+ GList *bitmap_list;
++ bool finishing;
+ } stat;
+ int64_t speed;
+ VmaWriter *vmaw;
+ ProxmoxBackupHandle *pbs;
+ GList *di_list;
+ JobTxn *txn;
+- QemuMutex backup_mutex;
++ CoMutex backup_mutex;
+ CoMutex dump_callback_mutex;
+ } backup_state;
+
+ static void pvebackup_init(void)
+ {
+ qemu_mutex_init(&backup_state.stat.lock);
+- qemu_mutex_init(&backup_state.backup_mutex);
++ qemu_co_mutex_init(&backup_state.backup_mutex);
+ qemu_co_mutex_init(&backup_state.dump_callback_mutex);
+ }
+
+@@ -72,6 +75,7 @@ typedef struct PVEBackupDevInfo {
+ size_t size;
+ uint64_t block_size;
+ uint8_t dev_id;
++ int completed_ret; // INT_MAX if not completed
+ char targetfile[PATH_MAX];
+ BdrvDirtyBitmap *bitmap;
+ BlockDriverState *target;
+@@ -227,12 +231,12 @@ pvebackup_co_dump_vma_cb(
+ }
+
+ // assumes the caller holds backup_mutex
+-static void coroutine_fn pvebackup_co_cleanup(void *unused)
++static void coroutine_fn pvebackup_co_cleanup(void)
+ {
+ assert(qemu_in_coroutine());
+
+ qemu_mutex_lock(&backup_state.stat.lock);
+- backup_state.stat.end_time = time(NULL);
++ backup_state.stat.finishing = true;
+ qemu_mutex_unlock(&backup_state.stat.lock);
+
+ if (backup_state.vmaw) {
+@@ -261,12 +265,29 @@ static void coroutine_fn pvebackup_co_cleanup(void *unused)
+
+ g_list_free(backup_state.di_list);
+ backup_state.di_list = NULL;
++
++ qemu_mutex_lock(&backup_state.stat.lock);
++ backup_state.stat.end_time = time(NULL);
++ backup_state.stat.finishing = false;
++ qemu_mutex_unlock(&backup_state.stat.lock);
+ }
+
+-// assumes the caller holds backup_mutex
+-static void coroutine_fn pvebackup_complete_stream(void *opaque)
++static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
+ {
+ PVEBackupDevInfo *di = opaque;
++ int ret = di->completed_ret;
++
++ qemu_co_mutex_lock(&backup_state.backup_mutex);
++
++ if (ret < 0) {
++ Error *local_err = NULL;
++ error_setg(&local_err, "job failed with err %d - %s", ret, strerror(-ret));
++ pvebackup_propagate_error(local_err);
++ }
++
++ di->bs = NULL;
++
++ assert(di->target == NULL);
+
+ bool error_or_canceled = pvebackup_error_or_canceled();
+
+@@ -281,27 +302,6 @@ static void coroutine_fn pvebackup_complete_stream(void *opaque)
+ pvebackup_propagate_error(local_err);
+ }
+ }
+-}
+-
+-static void pvebackup_complete_cb(void *opaque, int ret)
+-{
+- assert(!qemu_in_coroutine());
+-
+- PVEBackupDevInfo *di = opaque;
+-
+- qemu_mutex_lock(&backup_state.backup_mutex);
+-
+- if (ret < 0) {
+- Error *local_err = NULL;
+- error_setg(&local_err, "job failed with err %d - %s", ret, strerror(-ret));
+- pvebackup_propagate_error(local_err);
+- }
+-
+- di->bs = NULL;
+-
+- assert(di->target == NULL);
+-
+- block_on_coroutine_fn(pvebackup_complete_stream, di);
+
+ // remove self from job list
+ backup_state.di_list = g_list_remove(backup_state.di_list, di);
+@@ -310,21 +310,36 @@ static void pvebackup_complete_cb(void *opaque, int ret)
+
+ /* call cleanup if we're the last job */
+ if (!g_list_first(backup_state.di_list)) {
+- block_on_coroutine_fn(pvebackup_co_cleanup, NULL);
++ pvebackup_co_cleanup();
+ }
+
+- qemu_mutex_unlock(&backup_state.backup_mutex);
++ qemu_co_mutex_unlock(&backup_state.backup_mutex);
+ }
+
+-static void pvebackup_cancel(void)
++static void pvebackup_complete_cb(void *opaque, int ret)
+ {
+ assert(!qemu_in_coroutine());
+
++ PVEBackupDevInfo *di = opaque;
++ di->completed_ret = ret;
++
++ /*
++ * Schedule stream cleanup in async coroutine. close_image and finish might
++ * take a while, so we can't block on them here.
++ * Note: di is a pointer to an entry in the global backup_state struct, so
++ * it stays valid.
++ */
++ Coroutine *co = qemu_coroutine_create(pvebackup_co_complete_stream, di);
++ aio_co_schedule(qemu_get_aio_context(), co);
++}
++
++static void coroutine_fn pvebackup_co_cancel(void *opaque)
++{
+ Error *cancel_err = NULL;
+ error_setg(&cancel_err, "backup canceled");
+ pvebackup_propagate_error(cancel_err);
+
+- qemu_mutex_lock(&backup_state.backup_mutex);
++ qemu_co_mutex_lock(&backup_state.backup_mutex);
+
+ if (backup_state.vmaw) {
+ /* make sure vma writer does not block anymore */
+@@ -342,27 +357,16 @@ static void pvebackup_cancel(void)
+ ((PVEBackupDevInfo *)bdi->data)->job :
+ NULL;
+
+- /* ref the job before releasing the mutex, just to be safe */
+ if (cancel_job) {
+- job_ref(&cancel_job->job);
++ job_cancel(&cancel_job->job, false);
+ }
+
+- /* job_cancel_sync may enter the job, so we need to release the
+- * backup_mutex to avoid deadlock */
+- qemu_mutex_unlock(&backup_state.backup_mutex);
+-
+- if (cancel_job) {
+- AioContext *aio_context = cancel_job->job.aio_context;
+- aio_context_acquire(aio_context);
+- job_cancel_sync(&cancel_job->job);
+- job_unref(&cancel_job->job);
+- aio_context_release(aio_context);
+- }
++ qemu_co_mutex_unlock(&backup_state.backup_mutex);
+ }
+
+ void qmp_backup_cancel(Error **errp)
+ {
+- pvebackup_cancel();
++ block_on_coroutine_fn(pvebackup_co_cancel, NULL);
+ }
+
+ // assumes the caller holds backup_mutex
+@@ -415,6 +419,14 @@ static int coroutine_fn pvebackup_co_add_config(
+ goto out;
+ }
+
++/*
++ * backup_job_create can *not* be run from a coroutine (and requires an
++ * acquired AioContext), so this can't either.
++ * This does imply that this function cannot run with backup_mutex acquired.
++ * That is ok because it is only ever called between setting up the backup_state
++ * struct and starting the jobs, and from within a QMP call. This means that no
++ * other QMP call can interrupt, and no background job is running yet.
++ */
+ static bool create_backup_jobs(void) {
+
+ assert(!qemu_in_coroutine());
+@@ -523,11 +535,12 @@ typedef struct QmpBackupTask {
+ UuidInfo *result;
+ } QmpBackupTask;
+
+-// assumes the caller holds backup_mutex
+ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+ {
+ assert(qemu_in_coroutine());
+
++ qemu_co_mutex_lock(&backup_state.backup_mutex);
++
+ QmpBackupTask *task = opaque;
+
+ task->result = NULL; // just to be sure
+@@ -616,6 +629,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+ }
+ di->size = size;
+ total += size;
++
++ di->completed_ret = INT_MAX;
+ }
+
+ uuid_generate(uuid);
+@@ -847,6 +862,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+ backup_state.stat.dirty = total - backup_state.stat.reused;
+ backup_state.stat.transferred = 0;
+ backup_state.stat.zero_bytes = 0;
++ backup_state.stat.finishing = false;
+
+ qemu_mutex_unlock(&backup_state.stat.lock);
+
+@@ -861,6 +877,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
+ uuid_info->UUID = uuid_str;
+
+ task->result = uuid_info;
++
++ qemu_co_mutex_unlock(&backup_state.backup_mutex);
+ return;
+
+ err_mutex:
+@@ -903,6 +921,8 @@ err:
+ }
+
+ task->result = NULL;
++
++ qemu_co_mutex_unlock(&backup_state.backup_mutex);
+ return;
+ }
+
+@@ -956,22 +976,15 @@ UuidInfo *qmp_backup(
+ .errp = errp,
+ };
+
+- qemu_mutex_lock(&backup_state.backup_mutex);
+-
+ block_on_coroutine_fn(pvebackup_co_prepare, &task);
+
+ if (*errp == NULL) {
+ bool errors = create_backup_jobs();
+- qemu_mutex_unlock(&backup_state.backup_mutex);
+
+ if (!errors) {
+- /* start the first job in the transaction
+- * note: this might directly enter the job, so we need to do this
+- * after unlocking the backup_mutex */
++ // start the first job in the transaction
+ job_txn_start_seq(backup_state.txn);
+ }
+- } else {
+- qemu_mutex_unlock(&backup_state.backup_mutex);
+ }
+
+ return task.result;
+@@ -1025,6 +1038,7 @@ BackupStatus *qmp_query_backup(Error **errp)
+ info->transferred = backup_state.stat.transferred;
+ info->has_reused = true;
+ info->reused = backup_state.stat.reused;
++ info->finishing = backup_state.stat.finishing;
+
+ qemu_mutex_unlock(&backup_state.stat.lock);
+
+diff --git a/qapi/block-core.json b/qapi/block-core.json
+index 5fc42e87f3..b31ad8d989 100644
+--- a/qapi/block-core.json
++++ b/qapi/block-core.json
+@@ -784,12 +784,15 @@
+ #
+ # @uuid: uuid for this backup job
+ #
++# @finishing: if status='active' and finishing=true, then the backup process is
++# waiting for the target to finish.
++#
+ ##
+ { 'struct': 'BackupStatus',
+ 'data': {'*status': 'str', '*errmsg': 'str', '*total': 'int', '*dirty': 'int',
+ '*transferred': 'int', '*zero-bytes': 'int', '*reused': 'int',
+ '*start-time': 'int', '*end-time': 'int',
+- '*backup-file': 'str', '*uuid': 'str' } }
++ '*backup-file': 'str', '*uuid': 'str', 'finishing': 'bool' } }
+
+ ##
+ # @BackupFormat:
diff --git a/debian/patches/series b/debian/patches/series
index b8657bf..021c0d7 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -26,28 +26,30 @@ pve/0022-PVE-Up-Config-file-posix-make-locking-optiono-on-cre.patch
pve/0023-PVE-monitor-disable-oob-capability.patch
pve/0024-PVE-Compat-4.0-used-balloon-qemu-4-0-config-size-fal.patch
pve/0025-PVE-Allow-version-code-in-machine-type.patch
-pve/0026-PVE-Backup-modify-job-api.patch
-pve/0027-PVE-Backup-add-vma-backup-format-code.patch
-pve/0028-PVE-Backup-add-backup-dump-block-driver.patch
-pve/0029-PVE-Backup-proxmox-backup-patches-for-qemu.patch
-pve/0030-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
-pve/0031-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch
-pve/0032-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
-pve/0033-drive-mirror-add-support-for-conditional-and-always-.patch
-pve/0034-mirror-add-check-for-bitmap-mode-without-bitmap.patch
-pve/0035-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch
-pve/0036-iotests-add-test-for-bitmap-mirror.patch
-pve/0037-mirror-move-some-checks-to-qmp.patch
-pve/0038-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
-pve/0039-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch
-pve/0040-PVE-fixup-pbs-restore-API.patch
-pve/0041-PVE-always-set-dirty-counter-for-non-incremental-bac.patch
-pve/0042-PVE-use-proxmox_backup_check_incremental.patch
-pve/0043-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch
-pve/0044-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
-pve/0045-PVE-add-query_proxmox_support-QMP-command.patch
-pve/0046-pbs-fix-missing-crypt-and-compress-parameters.patch
-pve/0047-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch
-pve/0048-PVE-add-zero-block-handling-to-PBS-dump-callback.patch
-pve/0049-PVE-add-query-pbs-bitmap-info-QMP-call.patch
-pve/0050-PVE-redirect-stderr-to-journal-when-daemonized.patch
+pve/0026-PVE-Backup-add-vma-backup-format-code.patch
+pve/0027-PVE-Backup-add-backup-dump-block-driver.patch
+pve/0028-PVE-Backup-proxmox-backup-patches-for-qemu.patch
+pve/0029-PVE-Backup-pbs-restore-new-command-to-restore-from-p.patch
+pve/0030-PVE-Backup-avoid-coroutines-to-fix-AIO-freeze-cleanu.patch
+pve/0031-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
+pve/0032-drive-mirror-add-support-for-conditional-and-always-.patch
+pve/0033-mirror-add-check-for-bitmap-mode-without-bitmap.patch
+pve/0034-mirror-switch-to-bdrv_dirty_bitmap_merge_internal.patch
+pve/0035-iotests-add-test-for-bitmap-mirror.patch
+pve/0036-mirror-move-some-checks-to-qmp.patch
+pve/0037-PVE-Backup-Add-dirty-bitmap-tracking-for-incremental.patch
+pve/0038-PVE-backup-rename-incremental-to-use-dirty-bitmap.patch
+pve/0039-PVE-fixup-pbs-restore-API.patch
+pve/0040-PVE-always-set-dirty-counter-for-non-incremental-bac.patch
+pve/0041-PVE-use-proxmox_backup_check_incremental.patch
+pve/0042-PVE-fixup-pbs-backup-add-compress-and-encrypt-option.patch
+pve/0043-PVE-Add-PBS-block-driver-to-map-backup-archives-into.patch
+pve/0044-PVE-add-query_proxmox_support-QMP-command.patch
+pve/0045-pbs-fix-missing-crypt-and-compress-parameters.patch
+pve/0046-PVE-handle-PBS-write-callback-with-big-blocks-correc.patch
+pve/0047-PVE-add-zero-block-handling-to-PBS-dump-callback.patch
+pve/0048-PVE-add-query-pbs-bitmap-info-QMP-call.patch
+pve/0049-PVE-redirect-stderr-to-journal-when-daemonized.patch
+pve/0050-PVE-Add-sequential-job-transaction-support.patch
+pve/0051-PVE-Backup-Use-a-transaction-to-synchronize-job-stat.patch
+pve/0052-PVE-Backup-Use-more-coroutines-and-don-t-block-on-fi.patch
--
2.20.1
* [pve-devel] [PATCH qemu 2/5] PVE: Add sequential job transaction support
2020-09-28 15:48 [pve-devel] [PATCH pve-qemu 1/5] Add transaction patches and fix for blocking finish Stefan Reiter
@ 2020-09-28 15:48 ` Stefan Reiter
2020-09-28 15:48 ` [pve-devel] [PATCH qemu 3/5] PVE-Backup: Use a transaction to synchronize job states Stefan Reiter
` (3 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Stefan Reiter @ 2020-09-28 15:48 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
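
For review, a minimal usage sketch of the new API (illustrative pseudo-C,
not part of the diff; error handling elided, and the jobs are assumed to
have been created elsewhere but not yet started):

    JobTxn *txn = job_txn_new_seq();

    /* attach jobs in the order they should run; they must still be in
     * JOB_STATUS_CREATED, i.e. created but not yet started */
    job_txn_add_job(txn, job1);
    job_txn_add_job(txn, job2);

    /* start only the first job; job_completed_txn_success() then starts
     * each following job once its predecessor completes successfully */
    job_txn_start_seq(txn);

    /* drop the creator's reference; the jobs keep the transaction alive */
    job_txn_unref(txn);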
include/qemu/job.h | 12 ++++++++++++
job.c | 24 ++++++++++++++++++++++++
2 files changed, 36 insertions(+)
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 32aabb1c60..f7a6a0926a 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -280,6 +280,18 @@ typedef enum JobCreateFlags {
*/
JobTxn *job_txn_new(void);
+/**
+ * Create a new transaction and set it to sequential mode, i.e. run all jobs
+ * one after the other instead of at the same time.
+ */
+JobTxn *job_txn_new_seq(void);
+
+/**
+ * Helper method to start the first job in a sequential transaction to kick it
+ * off. Other jobs will be run after this one completes.
+ */
+void job_txn_start_seq(JobTxn *txn);
+
/**
* Release a reference that was previously acquired with job_txn_add_job or
* job_txn_new. If it's the last reference to the object, it will be freed.
diff --git a/job.c b/job.c
index b8139c80a4..97ee97a192 100644
--- a/job.c
+++ b/job.c
@@ -72,6 +72,8 @@ struct JobTxn {
/* Reference count */
int refcnt;
+
+ bool sequential;
};
/* Right now, this mutex is only needed to synchronize accesses to job->busy
@@ -102,6 +104,25 @@ JobTxn *job_txn_new(void)
return txn;
}
+JobTxn *job_txn_new_seq(void)
+{
+ JobTxn *txn = job_txn_new();
+ txn->sequential = true;
+ return txn;
+}
+
+void job_txn_start_seq(JobTxn *txn)
+{
+ assert(txn->sequential);
+ assert(!txn->aborting);
+
+ Job *first = QLIST_FIRST(&txn->jobs);
+ assert(first);
+ assert(first->status == JOB_STATUS_CREATED);
+
+ job_start(first);
+}
+
static void job_txn_ref(JobTxn *txn)
{
txn->refcnt++;
@@ -841,6 +862,9 @@ static void job_completed_txn_success(Job *job)
*/
QLIST_FOREACH(other_job, &txn->jobs, txn_list) {
if (!job_is_completed(other_job)) {
+ if (txn->sequential) {
+ job_start(other_job);
+ }
return;
}
assert(other_job->ret == 0);
--
2.20.1
* [pve-devel] [PATCH qemu 3/5] PVE-Backup: Use a transaction to synchronize job states
2020-09-28 15:48 [pve-devel] [PATCH pve-qemu 1/5] Add transaction patches and fix for blocking finish Stefan Reiter
2020-09-28 15:48 ` [pve-devel] [PATCH qemu 2/5] PVE: Add sequential job transaction support Stefan Reiter
@ 2020-09-28 15:48 ` Stefan Reiter
2020-09-28 15:48 ` [pve-devel] [PATCH qemu 4/5] PVE-Backup: Use more coroutines and don't block on finishing Stefan Reiter
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Stefan Reiter @ 2020-09-28 15:48 UTC (permalink / raw)
To: pve-devel
By using a JobTxn, we can sync dirty bitmaps only when *all* jobs were
successful - meaning we don't need to remove them when the backup fails,
since QEMU's BITMAP_SYNC_MODE_ON_SUCCESS will now handle that for us.
To keep the rate-limiting and IO impact from before, we use a sequential
transaction, so drives will still be backed up one after the other.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
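
For review, a sketch of the resulting control flow (illustrative comment
only, not part of the diff; assumes three drives):

    /*
     * job_txn_start_seq(txn)       -> drive 1 job runs
     * drive 1 succeeds             -> txn starts drive 2 job
     * drive 2 succeeds             -> txn starts drive 3 job
     * drive 3 succeeds             -> whole txn succeeds; with
     *                                 BITMAP_SYNC_MODE_ON_SUCCESS the dirty
     *                                 bitmaps are synced at this point
     *
     * any job fails / is canceled  -> txn cancels the remaining jobs; no
     *                                 bitmap is cleared, so the next backup
     *                                 can still be incremental
     */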
pve-backup.c | 169 +++++++++++++++------------------------------------
1 file changed, 50 insertions(+), 119 deletions(-)
diff --git a/pve-backup.c b/pve-backup.c
index 04c21c80aa..9562e9c98d 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -52,6 +52,7 @@ static struct PVEBackupState {
VmaWriter *vmaw;
ProxmoxBackupHandle *pbs;
GList *di_list;
+ JobTxn *txn;
QemuMutex backup_mutex;
CoMutex dump_callback_mutex;
} backup_state;
@@ -71,32 +72,12 @@ typedef struct PVEBackupDevInfo {
size_t size;
uint64_t block_size;
uint8_t dev_id;
- bool completed;
char targetfile[PATH_MAX];
BdrvDirtyBitmap *bitmap;
BlockDriverState *target;
+ BlockJob *job;
} PVEBackupDevInfo;
-static void pvebackup_run_next_job(void);
-
-static BlockJob *
-lookup_active_block_job(PVEBackupDevInfo *di)
-{
- if (!di->completed && di->bs) {
- for (BlockJob *job = block_job_next(NULL); job; job = block_job_next(job)) {
- if (job->job.driver->job_type != JOB_TYPE_BACKUP) {
- continue;
- }
-
- BackupBlockJob *bjob = container_of(job, BackupBlockJob, common);
- if (bjob && bjob->source_bs == di->bs) {
- return job;
- }
- }
- }
- return NULL;
-}
-
static void pvebackup_propagate_error(Error *err)
{
qemu_mutex_lock(&backup_state.stat.lock);
@@ -272,18 +253,6 @@ static void coroutine_fn pvebackup_co_cleanup(void *unused)
if (local_err != NULL) {
pvebackup_propagate_error(local_err);
}
- } else {
- // on error or cancel we cannot ensure synchronization of dirty
- // bitmaps with backup server, so remove all and do full backup next
- GList *l = backup_state.di_list;
- while (l) {
- PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
- l = g_list_next(l);
-
- if (di->bitmap) {
- bdrv_release_dirty_bitmap(di->bitmap);
- }
- }
}
proxmox_backup_disconnect(backup_state.pbs);
@@ -322,8 +291,6 @@ static void pvebackup_complete_cb(void *opaque, int ret)
qemu_mutex_lock(&backup_state.backup_mutex);
- di->completed = true;
-
if (ret < 0) {
Error *local_err = NULL;
error_setg(&local_err, "job failed with err %d - %s", ret, strerror(-ret));
@@ -336,20 +303,17 @@ static void pvebackup_complete_cb(void *opaque, int ret)
block_on_coroutine_fn(pvebackup_complete_stream, di);
- // remove self from job queue
+ // remove self from job list
backup_state.di_list = g_list_remove(backup_state.di_list, di);
- if (di->bitmap && ret < 0) {
- // on error or cancel we cannot ensure synchronization of dirty
- // bitmaps with backup server, so remove all and do full backup next
- bdrv_release_dirty_bitmap(di->bitmap);
- }
-
g_free(di);
- qemu_mutex_unlock(&backup_state.backup_mutex);
+ /* call cleanup if we're the last job */
+ if (!g_list_first(backup_state.di_list)) {
+ block_on_coroutine_fn(pvebackup_co_cleanup, NULL);
+ }
- pvebackup_run_next_job();
+ qemu_mutex_unlock(&backup_state.backup_mutex);
}
static void pvebackup_cancel(void)
@@ -371,36 +335,28 @@ static void pvebackup_cancel(void)
proxmox_backup_abort(backup_state.pbs, "backup canceled");
}
+ /* it's enough to cancel one job in the transaction, the rest will follow
+ * automatically */
+ GList *bdi = g_list_first(backup_state.di_list);
+ BlockJob *cancel_job = bdi && bdi->data ?
+ ((PVEBackupDevInfo *)bdi->data)->job :
+ NULL;
+
+ /* ref the job before releasing the mutex, just to be safe */
+ if (cancel_job) {
+ job_ref(&cancel_job->job);
+ }
+
+ /* job_cancel_sync may enter the job, so we need to release the
+ * backup_mutex to avoid deadlock */
qemu_mutex_unlock(&backup_state.backup_mutex);
- for(;;) {
-
- BlockJob *next_job = NULL;
-
- qemu_mutex_lock(&backup_state.backup_mutex);
-
- GList *l = backup_state.di_list;
- while (l) {
- PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
- l = g_list_next(l);
-
- BlockJob *job = lookup_active_block_job(di);
- if (job != NULL) {
- next_job = job;
- break;
- }
- }
-
- qemu_mutex_unlock(&backup_state.backup_mutex);
-
- if (next_job) {
- AioContext *aio_context = next_job->job.aio_context;
- aio_context_acquire(aio_context);
- job_cancel_sync(&next_job->job);
- aio_context_release(aio_context);
- } else {
- break;
- }
+ if (cancel_job) {
+ AioContext *aio_context = cancel_job->job.aio_context;
+ aio_context_acquire(aio_context);
+ job_cancel_sync(&cancel_job->job);
+ job_unref(&cancel_job->job);
+ aio_context_release(aio_context);
}
}
@@ -459,51 +415,19 @@ static int coroutine_fn pvebackup_co_add_config(
goto out;
}
-bool job_should_pause(Job *job);
-
-static void pvebackup_run_next_job(void)
-{
- assert(!qemu_in_coroutine());
-
- qemu_mutex_lock(&backup_state.backup_mutex);
-
- GList *l = backup_state.di_list;
- while (l) {
- PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
- l = g_list_next(l);
-
- BlockJob *job = lookup_active_block_job(di);
-
- if (job) {
- qemu_mutex_unlock(&backup_state.backup_mutex);
-
- AioContext *aio_context = job->job.aio_context;
- aio_context_acquire(aio_context);
-
- if (job_should_pause(&job->job)) {
- bool error_or_canceled = pvebackup_error_or_canceled();
- if (error_or_canceled) {
- job_cancel_sync(&job->job);
- } else {
- job_resume(&job->job);
- }
- }
- aio_context_release(aio_context);
- return;
- }
- }
-
- block_on_coroutine_fn(pvebackup_co_cleanup, NULL); // no more jobs, run cleanup
-
- qemu_mutex_unlock(&backup_state.backup_mutex);
-}
-
static bool create_backup_jobs(void) {
assert(!qemu_in_coroutine());
Error *local_err = NULL;
+ /* create job transaction to synchronize bitmap commit and cancel all
+ * jobs in case one errors */
+ if (backup_state.txn) {
+ job_txn_unref(backup_state.txn);
+ }
+ backup_state.txn = job_txn_new_seq();
+
/* create and start all jobs (paused state) */
GList *l = backup_state.di_list;
while (l) {
@@ -524,7 +448,7 @@ static bool create_backup_jobs(void) {
BlockJob *job = backup_job_create(
NULL, di->bs, di->target, backup_state.speed, sync_mode, di->bitmap,
bitmap_mode, false, NULL, BLOCKDEV_ON_ERROR_REPORT, BLOCKDEV_ON_ERROR_REPORT,
- JOB_DEFAULT, pvebackup_complete_cb, di, 1, NULL, &local_err);
+ JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn, &local_err);
aio_context_release(aio_context);
@@ -536,7 +460,8 @@ static bool create_backup_jobs(void) {
pvebackup_propagate_error(create_job_err);
break;
}
- job_start(&job->job);
+
+ di->job = job;
bdrv_unref(di->target);
di->target = NULL;
@@ -554,6 +479,10 @@ static bool create_backup_jobs(void) {
bdrv_unref(di->target);
di->target = NULL;
}
+
+ if (di->job) {
+ job_unref(&di->job->job);
+ }
}
}
@@ -944,10 +873,6 @@ err:
PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
l = g_list_next(l);
- if (di->bitmap) {
- bdrv_release_dirty_bitmap(di->bitmap);
- }
-
if (di->target) {
bdrv_unref(di->target);
}
@@ -1036,9 +961,15 @@ UuidInfo *qmp_backup(
block_on_coroutine_fn(pvebackup_co_prepare, &task);
if (*errp == NULL) {
- create_backup_jobs();
+ bool errors = create_backup_jobs();
qemu_mutex_unlock(&backup_state.backup_mutex);
- pvebackup_run_next_job();
+
+ if (!errors) {
+ /* start the first job in the transaction
+ * note: this might directly enter the job, so we need to do this
+ * after unlocking the backup_mutex */
+ job_txn_start_seq(backup_state.txn);
+ }
} else {
qemu_mutex_unlock(&backup_state.backup_mutex);
}
--
2.20.1
* [pve-devel] [PATCH qemu 4/5] PVE-Backup: Use more coroutines and don't block on finishing
2020-09-28 15:48 [pve-devel] [PATCH pve-qemu 1/5] Add transaction patches and fix for blocking finish Stefan Reiter
2020-09-28 15:48 ` [pve-devel] [PATCH qemu 2/5] PVE: Add sequential job transaction support Stefan Reiter
2020-09-28 15:48 ` [pve-devel] [PATCH qemu 3/5] PVE-Backup: Use a transaction to synchronize job states Stefan Reiter
@ 2020-09-28 15:48 ` Stefan Reiter
2020-09-28 15:48 ` [pve-devel] [PATCH qemu-server 5/5] vzdump: log 'finishing' state Stefan Reiter
2020-09-29 14:22 ` [pve-devel] applied: [PATCH pve-qemu 1/5] Add transaction patches and fix for blocking finish Thomas Lamprecht
4 siblings, 0 replies; 7+ messages in thread
From: Stefan Reiter @ 2020-09-28 15:48 UTC (permalink / raw)
To: pve-devel
proxmox_backup_co_finish is already async, but previously we would wait
for the coroutine using block_on_coroutine_fn(). Avoid this by
scheduling pvebackup_co_complete_stream (and thus pvebackup_co_cleanup)
as a real coroutine when calling from pvebackup_complete_cb. This is ok,
since complete_stream uses the backup_mutex internally to synchronize,
and other streams can happily continue writing in the meantime anyway.
To accommodate, backup_mutex is converted to a CoMutex. This means
converting every user to a coroutine. This is not just useful here, but
will come in handy once this series[0] is merged, and QMP calls can be
yield-able coroutines too. Then we can also finally get rid of
block_on_coroutine_fn.
Cases of aio_context_acquire/release from within what is now a coroutine
are changed to aio_co_reschedule_self, which works since a running
coroutine always holds the aio lock for the context it is running in.
job_cancel_sync is changed to regular job_cancel, since job_cancel_sync
uses AIO_WAIT_WHILE internally, which is forbidden in coroutines.
create_backup_jobs cannot be run from a coroutine, so it is run
directly. This does however mean that it runs unlocked, as it cannot
acquire a CoMutex (see its comment for the rationale on why that's ok).
To communicate the finishing state, a new property is introduced to
query-backup: 'finishing'. A new state is explicitly not used, since
that would break compatibility with older qemu-server versions.
[0] https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg03515.html
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
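
For review, minimal sketches of the two conversion patterns applied
throughout this patch (assumed QEMU coroutine API; illustrative, not part
of the diff):

    /* 1) QemuMutex -> CoMutex: qemu_co_mutex_lock() yields the current
     * coroutine instead of blocking the thread, so every locker has to run
     * in coroutine context (hence the pvebackup_co_* conversions below). */
    static CoMutex lock; /* qemu_co_mutex_init(&lock) at startup */

    static void coroutine_fn guarded_work(void)
    {
        qemu_co_mutex_lock(&lock);
        /* ... may yield here without stalling the main loop ... */
        qemu_co_mutex_unlock(&lock);
    }

    /* 2) aio_context_acquire/release -> aio_co_reschedule_self: a running
     * coroutine already holds the lock of the AioContext it runs in, so
     * hopping between contexts replaces explicit acquire/release. */
    static void coroutine_fn work_in_context(AioContext *ctx)
    {
        aio_co_reschedule_self(ctx);                    /* move to ctx */
        /* ... work that must run in ctx ... */
        aio_co_reschedule_self(qemu_get_aio_context()); /* move back */
    }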
proxmox-backup-client.h | 1 +
pve-backup.c | 124 ++++++++++++++++++++++------------------
qapi/block-core.json | 5 +-
3 files changed, 74 insertions(+), 56 deletions(-)
diff --git a/proxmox-backup-client.h b/proxmox-backup-client.h
index 20fd6b1719..a4781c5851 100644
--- a/proxmox-backup-client.h
+++ b/proxmox-backup-client.h
@@ -5,6 +5,7 @@
#include "qemu/coroutine.h"
#include "proxmox-backup-qemu.h"
+// FIXME: Remove once coroutines are supported for QMP
void block_on_coroutine_fn(CoroutineEntry *entry, void *entry_arg);
int coroutine_fn
diff --git a/pve-backup.c b/pve-backup.c
index 9562e9c98d..53cf23ed5a 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -33,7 +33,9 @@ const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
static struct PVEBackupState {
struct {
- // Everithing accessed from qmp_backup_query command is protected using lock
+ // Everything accessed from qmp_backup_query command is protected using
+ // this lock. Do NOT hold this lock for long times, as it is sometimes
+ // acquired from coroutines, and thus any wait time may block the guest.
QemuMutex lock;
Error *error;
time_t start_time;
@@ -47,20 +49,21 @@ static struct PVEBackupState {
size_t reused;
size_t zero_bytes;
GList *bitmap_list;
+ bool finishing;
} stat;
int64_t speed;
VmaWriter *vmaw;
ProxmoxBackupHandle *pbs;
GList *di_list;
JobTxn *txn;
- QemuMutex backup_mutex;
+ CoMutex backup_mutex;
CoMutex dump_callback_mutex;
} backup_state;
static void pvebackup_init(void)
{
qemu_mutex_init(&backup_state.stat.lock);
- qemu_mutex_init(&backup_state.backup_mutex);
+ qemu_co_mutex_init(&backup_state.backup_mutex);
qemu_co_mutex_init(&backup_state.dump_callback_mutex);
}
@@ -72,6 +75,7 @@ typedef struct PVEBackupDevInfo {
size_t size;
uint64_t block_size;
uint8_t dev_id;
+ int completed_ret; // INT_MAX if not completed
char targetfile[PATH_MAX];
BdrvDirtyBitmap *bitmap;
BlockDriverState *target;
@@ -227,12 +231,12 @@ pvebackup_co_dump_vma_cb(
}
// assumes the caller holds backup_mutex
-static void coroutine_fn pvebackup_co_cleanup(void *unused)
+static void coroutine_fn pvebackup_co_cleanup(void)
{
assert(qemu_in_coroutine());
qemu_mutex_lock(&backup_state.stat.lock);
- backup_state.stat.end_time = time(NULL);
+ backup_state.stat.finishing = true;
qemu_mutex_unlock(&backup_state.stat.lock);
if (backup_state.vmaw) {
@@ -261,12 +265,29 @@ static void coroutine_fn pvebackup_co_cleanup(void *unused)
g_list_free(backup_state.di_list);
backup_state.di_list = NULL;
+
+ qemu_mutex_lock(&backup_state.stat.lock);
+ backup_state.stat.end_time = time(NULL);
+ backup_state.stat.finishing = false;
+ qemu_mutex_unlock(&backup_state.stat.lock);
}
-// assumes the caller holds backup_mutex
-static void coroutine_fn pvebackup_complete_stream(void *opaque)
+static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
{
PVEBackupDevInfo *di = opaque;
+ int ret = di->completed_ret;
+
+ qemu_co_mutex_lock(&backup_state.backup_mutex);
+
+ if (ret < 0) {
+ Error *local_err = NULL;
+ error_setg(&local_err, "job failed with err %d - %s", ret, strerror(-ret));
+ pvebackup_propagate_error(local_err);
+ }
+
+ di->bs = NULL;
+
+ assert(di->target == NULL);
bool error_or_canceled = pvebackup_error_or_canceled();
@@ -281,27 +302,6 @@ static void coroutine_fn pvebackup_complete_stream(void *opaque)
pvebackup_propagate_error(local_err);
}
}
-}
-
-static void pvebackup_complete_cb(void *opaque, int ret)
-{
- assert(!qemu_in_coroutine());
-
- PVEBackupDevInfo *di = opaque;
-
- qemu_mutex_lock(&backup_state.backup_mutex);
-
- if (ret < 0) {
- Error *local_err = NULL;
- error_setg(&local_err, "job failed with err %d - %s", ret, strerror(-ret));
- pvebackup_propagate_error(local_err);
- }
-
- di->bs = NULL;
-
- assert(di->target == NULL);
-
- block_on_coroutine_fn(pvebackup_complete_stream, di);
// remove self from job list
backup_state.di_list = g_list_remove(backup_state.di_list, di);
@@ -310,21 +310,36 @@ static void pvebackup_complete_cb(void *opaque, int ret)
/* call cleanup if we're the last job */
if (!g_list_first(backup_state.di_list)) {
- block_on_coroutine_fn(pvebackup_co_cleanup, NULL);
+ pvebackup_co_cleanup();
}
- qemu_mutex_unlock(&backup_state.backup_mutex);
+ qemu_co_mutex_unlock(&backup_state.backup_mutex);
}
-static void pvebackup_cancel(void)
+static void pvebackup_complete_cb(void *opaque, int ret)
{
assert(!qemu_in_coroutine());
+ PVEBackupDevInfo *di = opaque;
+ di->completed_ret = ret;
+
+ /*
+ * Schedule stream cleanup in async coroutine. close_image and finish might
+ * take a while, so we can't block on them here.
+ * Note: di is a pointer to an entry in the global backup_state struct, so
+ * it stays valid.
+ */
+ Coroutine *co = qemu_coroutine_create(pvebackup_co_complete_stream, di);
+ aio_co_schedule(qemu_get_aio_context(), co);
+}
+
+static void coroutine_fn pvebackup_co_cancel(void *opaque)
+{
Error *cancel_err = NULL;
error_setg(&cancel_err, "backup canceled");
pvebackup_propagate_error(cancel_err);
- qemu_mutex_lock(&backup_state.backup_mutex);
+ qemu_co_mutex_lock(&backup_state.backup_mutex);
if (backup_state.vmaw) {
/* make sure vma writer does not block anymore */
@@ -342,27 +357,16 @@ static void pvebackup_cancel(void)
((PVEBackupDevInfo *)bdi->data)->job :
NULL;
- /* ref the job before releasing the mutex, just to be safe */
if (cancel_job) {
- job_ref(&cancel_job->job);
+ job_cancel(&cancel_job->job, false);
}
- /* job_cancel_sync may enter the job, so we need to release the
- * backup_mutex to avoid deadlock */
- qemu_mutex_unlock(&backup_state.backup_mutex);
-
- if (cancel_job) {
- AioContext *aio_context = cancel_job->job.aio_context;
- aio_context_acquire(aio_context);
- job_cancel_sync(&cancel_job->job);
- job_unref(&cancel_job->job);
- aio_context_release(aio_context);
- }
+ qemu_co_mutex_unlock(&backup_state.backup_mutex);
}
void qmp_backup_cancel(Error **errp)
{
- pvebackup_cancel();
+ block_on_coroutine_fn(pvebackup_co_cancel, NULL);
}
// assumes the caller holds backup_mutex
@@ -415,6 +419,14 @@ static int coroutine_fn pvebackup_co_add_config(
goto out;
}
+/*
+ * backup_job_create can *not* be run from a coroutine (and requires an
+ * acquired AioContext), so this can't either.
+ * This does imply that this function cannot run with backup_mutex acquired.
+ * That is ok because it is only ever called between setting up the backup_state
+ * struct and starting the jobs, and from within a QMP call. This means that no
+ * other QMP call can interrupt, and no background job is running yet.
+ */
static bool create_backup_jobs(void) {
assert(!qemu_in_coroutine());
@@ -523,11 +535,12 @@ typedef struct QmpBackupTask {
UuidInfo *result;
} QmpBackupTask;
-// assumes the caller holds backup_mutex
static void coroutine_fn pvebackup_co_prepare(void *opaque)
{
assert(qemu_in_coroutine());
+ qemu_co_mutex_lock(&backup_state.backup_mutex);
+
QmpBackupTask *task = opaque;
task->result = NULL; // just to be sure
@@ -616,6 +629,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
}
di->size = size;
total += size;
+
+ di->completed_ret = INT_MAX;
}
uuid_generate(uuid);
@@ -847,6 +862,7 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
backup_state.stat.dirty = total - backup_state.stat.reused;
backup_state.stat.transferred = 0;
backup_state.stat.zero_bytes = 0;
+ backup_state.stat.finishing = false;
qemu_mutex_unlock(&backup_state.stat.lock);
@@ -861,6 +877,8 @@ static void coroutine_fn pvebackup_co_prepare(void *opaque)
uuid_info->UUID = uuid_str;
task->result = uuid_info;
+
+ qemu_co_mutex_unlock(&backup_state.backup_mutex);
return;
err_mutex:
@@ -903,6 +921,8 @@ err:
}
task->result = NULL;
+
+ qemu_co_mutex_unlock(&backup_state.backup_mutex);
return;
}
@@ -956,22 +976,15 @@ UuidInfo *qmp_backup(
.errp = errp,
};
- qemu_mutex_lock(&backup_state.backup_mutex);
-
block_on_coroutine_fn(pvebackup_co_prepare, &task);
if (*errp == NULL) {
bool errors = create_backup_jobs();
- qemu_mutex_unlock(&backup_state.backup_mutex);
if (!errors) {
- /* start the first job in the transaction
- * note: this might directly enter the job, so we need to do this
- * after unlocking the backup_mutex */
+ // start the first job in the transaction
job_txn_start_seq(backup_state.txn);
}
- } else {
- qemu_mutex_unlock(&backup_state.backup_mutex);
}
return task.result;
@@ -1025,6 +1038,7 @@ BackupStatus *qmp_query_backup(Error **errp)
info->transferred = backup_state.stat.transferred;
info->has_reused = true;
info->reused = backup_state.stat.reused;
+ info->finishing = backup_state.stat.finishing;
qemu_mutex_unlock(&backup_state.stat.lock);
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 5fc42e87f3..b31ad8d989 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -784,12 +784,15 @@
#
# @uuid: uuid for this backup job
#
+# @finishing: if status='active' and finishing=true, then the backup process is
+# waiting for the target to finish.
+#
##
{ 'struct': 'BackupStatus',
'data': {'*status': 'str', '*errmsg': 'str', '*total': 'int', '*dirty': 'int',
'*transferred': 'int', '*zero-bytes': 'int', '*reused': 'int',
'*start-time': 'int', '*end-time': 'int',
- '*backup-file': 'str', '*uuid': 'str' } }
+ '*backup-file': 'str', '*uuid': 'str', 'finishing': 'bool' } }
##
# @BackupFormat:
--
2.20.1
* [pve-devel] [PATCH qemu-server 5/5] vzdump: log 'finishing' state
2020-09-28 15:48 [pve-devel] [PATCH pve-qemu 1/5] Add transaction patches and fix for blocking finish Stefan Reiter
` (2 preceding siblings ...)
2020-09-28 15:48 ` [pve-devel] [PATCH qemu 4/5] PVE-Backup: Use more coroutines and don't block on finishing Stefan Reiter
@ 2020-09-28 15:48 ` Stefan Reiter
2020-09-29 16:25 ` [pve-devel] applied: " Thomas Lamprecht
2020-09-29 14:22 ` [pve-devel] applied: [PATCH pve-qemu 1/5] Add transaction patches and fix for blocking finish Thomas Lamprecht
4 siblings, 1 reply; 7+ messages in thread
From: Stefan Reiter @ 2020-09-28 15:48 UTC (permalink / raw)
To: pve-devel
...and avoid printing 100% status twice
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
PVE/VZDump/QemuServer.pm | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 7297795..575abb3 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -360,6 +360,7 @@ my $query_backup_status_loop = sub {
}
my $first_round = 1;
+ my $last_finishing = 0;
while(1) {
my $status = mon_cmd($vmid, 'query-backup');
@@ -401,7 +402,9 @@ my $query_backup_status_loop = sub {
my $res = $status->{status} || 'unknown';
if ($res ne 'active') {
- $self->loginfo($statusline);
+ if ($last_percent < 100) {
+ $self->loginfo($statusline);
+ }
if ($res ne 'done') {
die (($status->{errmsg} || "unknown error") . "\n") if $res eq 'error';
die "got unexpected status '$res'\n";
@@ -421,6 +424,11 @@ my $query_backup_status_loop = sub {
$last_transferred = $transferred if $transferred;
$last_time = $ctime;
$last_reused = $reused;
+
+ if (!$last_finishing && $status->{finishing}) {
+ $self->loginfo("Waiting for server to finish verification...");
+ }
+ $last_finishing = $status->{finishing};
}
sleep(1);
$first_round = 0 if $first_round;
--
2.20.1
* [pve-devel] applied: [PATCH pve-qemu 1/5] Add transaction patches and fix for blocking finish
2020-09-28 15:48 [pve-devel] [PATCH pve-qemu 1/5] Add transaction patches and fix for blocking finish Stefan Reiter
` (3 preceding siblings ...)
2020-09-28 15:48 ` [pve-devel] [PATCH qemu-server 5/5] vzdump: log 'finishing' state Stefan Reiter
@ 2020-09-29 14:22 ` Thomas Lamprecht
4 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2020-09-29 14:22 UTC (permalink / raw)
To: Proxmox VE development discussion, Stefan Reiter
On 28.09.20 17:48, Stefan Reiter wrote:
> With the transaction patches, patch 0026-PVE-Backup-modify-job-api.patch
> is no longer necessary, so drop it and rebase all following patches on
> top.
>
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>
applied, thanks!
* [pve-devel] applied: [PATCH qemu-server 5/5] vzdump: log 'finishing' state
2020-09-28 15:48 ` [pve-devel] [PATCH qemu-server 5/5] vzdump: log 'finishing' state Stefan Reiter
@ 2020-09-29 16:25 ` Thomas Lamprecht
0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2020-09-29 16:25 UTC (permalink / raw)
To: Proxmox VE development discussion, Stefan Reiter
On 28.09.20 17:48, Stefan Reiter wrote:
> ...and avoid printing 100% status twice
>
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> PVE/VZDump/QemuServer.pm | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
>
applied, thanks! But I did s/verification/backup validation/ to avoid some
possible confusion with the more costly/in-depth server verification.