From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH qemu 2/2] squash some related patches
Date: Mon,  2 Jun 2025 12:22:27 +0200
Message-ID: <20250602102227.120732-3-f.ebner@proxmox.com>
In-Reply-To: <20250602102227.120732-1-f.ebner@proxmox.com>

In particular for backup and savevm-async, a lot can be grouped
together; many patches were just later fixups and improvements. The
backup patches are still grouped in three: the original patch, the
addition of fleecing, and the addition of the external backup API
(the preparatory patches there could be split up and squashed too,
but they are relatively new, so that is something for next time).

The code with all patches applied is still the same.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
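A note for reviewers: that the fully applied series is unchanged can be
double-checked by applying the old and the new series to the same upstream
tree and diffing the results. A minimal sketch, assuming POSIX tools and
hypothetical directory names (series-old, series-new, qemu-upstream; the
actual build uses the pve-qemu tooling):

    #!/bin/sh
    # Sketch: verify that two patch series produce identical trees.
    # Assumes an unpacked upstream tree in ./qemu-upstream and the two
    # series in ./series-old and ./series-new, each holding the patches
    # plus a quilt-style "series" file (all paths are hypothetical).
    set -e

    apply_series() {
        rm -rf "$2"
        cp -a qemu-upstream "$2"
        while read -r p; do
            # Apply each patch listed in the series file, in order.
            # patch reads from its own redirect, not the loop's stdin.
            patch -d "$2" -p1 --no-backup-if-mismatch <"$1/$p"
        done <"$1/series"
    }

    apply_series series-old tree-old
    apply_series series-new tree-new

    # No output from diff means the fully applied trees are identical.
    diff -ru tree-old tree-new && echo "series are equivalent"
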
 ...async-for-background-state-snapshots.patch | 128 +--
 ...add-optional-buffer-size-to-QEMUFile.patch |   6 +-
 ...ckup-Proxmox-backup-patches-for-QEMU.patch |  33 +-
 ...igrate-dirty-bitmap-state-via-savevm.patch |   4 +-
 .../0038-block-add-alloc-track-driver.patch   |  46 +-
 ...0042-PVE-backup-add-fleecing-option.patch} | 267 ++++--
 ...rror-out-when-auto-remove-is-not-set.patch |  43 -
 ...-version-deprecation-for-Proxmox-VE.patch} |   0
 ...d-seemingly-superfluous-child-permis.patch |  84 --
 ...oid-timer-storms-on-periodic-timers.patch} |   0
 ...ve-error-when-copy-before-write-fail.patch | 117 ---
 ...-full-64-bit-target-value-of-the-co.patch} |   0
 ...up-fixup-error-handling-for-fleecing.patch | 103 --
 ...hpet-accept-64-bit-reads-and-writes.patch} |   0
 ...r-out-setting-up-snapshot-access-for.patch | 135 ---
 ...-read-only-bits-directly-in-new_val.patch} |   0
 ...device-name-in-device-info-structure.patch | 135 ---
 ...t-remove-unnecessary-variable-index.patch} |   0
 ...de-device-name-in-error-when-setting.patch |  25 -
 ...e-high-bits-of-comparator-in-32-bit.patch} |   0
 ...nd-cleanup-persistence-of-interrupt.patch} |   0
 ...-out-helper-to-clear-backup-state-s.patch} |   0
 ...-out-helper-to-initialize-backup-st.patch} |   0
 ...ackup-add-target-ID-in-backup-state.patch} |   0
 ...vice-info-allow-caller-to-specify-f.patch} |   0
 ...ment-backup-access-setup-and-teardow.patch | 898 ++++++++++++++++++
 ...rove-setting-state-of-snapshot-opera.patch |  81 --
 ...ame-saved_vm_running-to-vm_needs_sta.patch |  71 --
 ...rove-runstate-preservation-cleanup-e.patch | 120 ---
 ...se-dedicated-iothread-for-state-file.patch | 177 ----
 ...at-failure-to-set-iothread-context-a.patch |  33 -
 ...-up-directly-in-setup_snapshot_acces.patch |  41 -
 ...ment-backup-access-setup-and-teardow.patch | 495 ----------
 ...or-out-get_single_device_info-helper.patch | 122 ---
 ...ment-bitmap-support-for-external-bac.patch | 470 ---------
 ...p-access-api-indicate-situation-wher.patch |  57 --
 ...p-access-api-explicit-bitmap-mode-pa.patch |  84 --
 ...kup-access-api-simplify-bitmap-logic.patch | 206 ----
 debian/patches/series                         |  46 +-
 39 files changed, 1221 insertions(+), 2806 deletions(-)
 rename debian/patches/pve/{0044-PVE-backup-add-fleecing-option.patch => 0042-PVE-backup-add-fleecing-option.patch} (59%)
 delete mode 100644 debian/patches/pve/0042-alloc-track-error-out-when-auto-remove-is-not-set.patch
 rename debian/patches/pve/{0050-adapt-machine-version-deprecation-for-Proxmox-VE.patch => 0043-adapt-machine-version-deprecation-for-Proxmox-VE.patch} (100%)
 delete mode 100644 debian/patches/pve/0043-alloc-track-avoid-seemingly-superfluous-child-permis.patch
 rename debian/patches/pve/{0051-Revert-hpet-avoid-timer-storms-on-periodic-timers.patch => 0044-Revert-hpet-avoid-timer-storms-on-periodic-timers.patch} (100%)
 delete mode 100644 debian/patches/pve/0045-PVE-backup-improve-error-when-copy-before-write-fail.patch
 rename debian/patches/pve/{0052-Revert-hpet-store-full-64-bit-target-value-of-the-co.patch => 0045-Revert-hpet-store-full-64-bit-target-value-of-the-co.patch} (100%)
 delete mode 100644 debian/patches/pve/0046-PVE-backup-fixup-error-handling-for-fleecing.patch
 rename debian/patches/pve/{0053-Revert-hpet-accept-64-bit-reads-and-writes.patch => 0046-Revert-hpet-accept-64-bit-reads-and-writes.patch} (100%)
 delete mode 100644 debian/patches/pve/0047-PVE-backup-factor-out-setting-up-snapshot-access-for.patch
 rename debian/patches/pve/{0054-Revert-hpet-place-read-only-bits-directly-in-new_val.patch => 0047-Revert-hpet-place-read-only-bits-directly-in-new_val.patch} (100%)
 delete mode 100644 debian/patches/pve/0048-PVE-backup-save-device-name-in-device-info-structure.patch
 rename debian/patches/pve/{0055-Revert-hpet-remove-unnecessary-variable-index.patch => 0048-Revert-hpet-remove-unnecessary-variable-index.patch} (100%)
 delete mode 100644 debian/patches/pve/0049-PVE-backup-include-device-name-in-error-when-setting.patch
 rename debian/patches/pve/{0056-Revert-hpet-ignore-high-bits-of-comparator-in-32-bit.patch => 0049-Revert-hpet-ignore-high-bits-of-comparator-in-32-bit.patch} (100%)
 rename debian/patches/pve/{0057-Revert-hpet-fix-and-cleanup-persistence-of-interrupt.patch => 0050-Revert-hpet-fix-and-cleanup-persistence-of-interrupt.patch} (100%)
 rename debian/patches/pve/{0064-PVE-backup-factor-out-helper-to-clear-backup-state-s.patch => 0051-PVE-backup-factor-out-helper-to-clear-backup-state-s.patch} (100%)
 rename debian/patches/pve/{0065-PVE-backup-factor-out-helper-to-initialize-backup-st.patch => 0052-PVE-backup-factor-out-helper-to-initialize-backup-st.patch} (100%)
 rename debian/patches/pve/{0066-PVE-backup-add-target-ID-in-backup-state.patch => 0053-PVE-backup-add-target-ID-in-backup-state.patch} (100%)
 rename debian/patches/pve/{0067-PVE-backup-get-device-info-allow-caller-to-specify-f.patch => 0054-PVE-backup-get-device-info-allow-caller-to-specify-f.patch} (100%)
 create mode 100644 debian/patches/pve/0055-PVE-backup-implement-backup-access-setup-and-teardow.patch
 delete mode 100644 debian/patches/pve/0058-savevm-async-improve-setting-state-of-snapshot-opera.patch
 delete mode 100644 debian/patches/pve/0059-savevm-async-rename-saved_vm_running-to-vm_needs_sta.patch
 delete mode 100644 debian/patches/pve/0060-savevm-async-improve-runstate-preservation-cleanup-e.patch
 delete mode 100644 debian/patches/pve/0061-savevm-async-use-dedicated-iothread-for-state-file.patch
 delete mode 100644 debian/patches/pve/0062-savevm-async-treat-failure-to-set-iothread-context-a.patch
 delete mode 100644 debian/patches/pve/0063-PVE-backup-clean-up-directly-in-setup_snapshot_acces.patch
 delete mode 100644 debian/patches/pve/0068-PVE-backup-implement-backup-access-setup-and-teardow.patch
 delete mode 100644 debian/patches/pve/0069-PVE-backup-factor-out-get_single_device_info-helper.patch
 delete mode 100644 debian/patches/pve/0070-PVE-backup-implement-bitmap-support-for-external-bac.patch
 delete mode 100644 debian/patches/pve/0071-PVE-backup-backup-access-api-indicate-situation-wher.patch
 delete mode 100644 debian/patches/pve/0072-PVE-backup-backup-access-api-explicit-bitmap-mode-pa.patch
 delete mode 100644 debian/patches/pve/0073-PVE-backup-backup-access-api-simplify-bitmap-logic.patch

diff --git a/debian/patches/pve/0017-PVE-add-savevm-async-for-background-state-snapshots.patch b/debian/patches/pve/0017-PVE-add-savevm-async-for-background-state-snapshots.patch
index 6e19c3d..622191d 100644
--- a/debian/patches/pve/0017-PVE-add-savevm-async-for-background-state-snapshots.patch
+++ b/debian/patches/pve/0017-PVE-add-savevm-async-for-background-state-snapshots.patch
@@ -30,21 +30,24 @@ Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
      adapt to QAPI and other changes for 8.2
      make sure to not call vm_start() from coroutine
      stop CPU throttling after finishing
-     force raw format when loading state as suggested by Friedrich Weber]
+     force raw format when loading state as suggested by Friedrich Weber
+     improve setting state in savevm-end handler
+     improve runstate preservation
+     use dedicated iothread for state file to avoid deadlock, bug #6262]
 Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
 ---
  hmp-commands-info.hx         |  13 +
- hmp-commands.hx              |  17 ++
+ hmp-commands.hx              |  17 +
  include/migration/snapshot.h |   2 +
  include/monitor/hmp.h        |   3 +
  migration/meson.build        |   1 +
- migration/savevm-async.c     | 572 +++++++++++++++++++++++++++++++++++
+ migration/savevm-async.c     | 581 +++++++++++++++++++++++++++++++++++
  monitor/hmp-cmds.c           |  38 +++
- qapi/migration.json          |  34 +++
+ qapi/migration.json          |  34 ++
  qapi/misc.json               |  18 ++
  qemu-options.hx              |  12 +
  system/vl.c                  |  10 +
- 11 files changed, 720 insertions(+)
+ 11 files changed, 729 insertions(+)
  create mode 100644 migration/savevm-async.c
 
 diff --git a/hmp-commands-info.hx b/hmp-commands-info.hx
@@ -142,10 +145,10 @@ index cf66c78681..46e92249a1 100644
    'threadinfo.c',
 diff --git a/migration/savevm-async.c b/migration/savevm-async.c
 new file mode 100644
-index 0000000000..7c0e857519
+index 0000000000..56e0fa6c69
 --- /dev/null
 +++ b/migration/savevm-async.c
-@@ -0,0 +1,572 @@
+@@ -0,0 +1,581 @@
 +#include "qemu/osdep.h"
 +#include "migration/channel-savevm-async.h"
 +#include "migration/migration.h"
@@ -200,12 +203,13 @@ index 0000000000..7c0e857519
 +    int state;
 +    Error *error;
 +    Error *blocker;
-+    int saved_vm_running;
++    int vm_needs_start;
 +    QEMUFile *file;
 +    int64_t total_time;
 +    QEMUBH *finalize_bh;
 +    Coroutine *co;
 +    QemuCoSleep target_close_wait;
++    IOThread *iothread;
 +} snap_state;
 +
 +static bool savevm_aborted(void)
@@ -327,6 +331,7 @@ index 0000000000..7c0e857519
 +     */
 +    blk_set_aio_context(snap_state.target, qemu_get_aio_context(), NULL);
 +
++    snap_state.vm_needs_start = runstate_is_running();
 +    ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
 +    if (ret < 0) {
 +        save_snapshot_error("vm_stop_force_state error %d", ret);
@@ -373,9 +378,9 @@ index 0000000000..7c0e857519
 +        save_snapshot_error("process_savevm_cleanup: invalid state: %d",
 +                            snap_state.state);
 +    }
-+    if (snap_state.saved_vm_running) {
++    if (snap_state.vm_needs_start) {
 +        vm_start();
-+        snap_state.saved_vm_running = false;
++        snap_state.vm_needs_start = false;
 +    }
 +
 +    DPRINTF("timing: process_savevm_finalize (full) took %ld ms\n",
@@ -404,16 +409,13 @@ index 0000000000..7c0e857519
 +        uint64_t threshold = 400 * 1000;
 +
 +        /*
-+         * pending_{estimate,exact} are expected to be called without iothread
-+         * lock. Similar to what is done in migration.c, call the exact variant
-+         * only once pend_precopy in the estimate is below the threshold.
++         * Similar to what is done in migration.c, call the exact variant only
++         * once pend_precopy in the estimate is below the threshold.
 +         */
-+        bql_unlock();
 +        qemu_savevm_state_pending_estimate(&pend_precopy, &pend_postcopy);
 +        if (pend_precopy <= threshold) {
 +            qemu_savevm_state_pending_exact(&pend_precopy, &pend_postcopy);
 +        }
-+        bql_lock();
 +        pending_size = pend_precopy + pend_postcopy;
 +
 +        /*
@@ -480,11 +482,17 @@ index 0000000000..7c0e857519
 +    qemu_bh_schedule(snap_state.finalize_bh);
 +}
 +
++static void savevm_cleanup_iothread(void) {
++    if (snap_state.iothread) {
++        iothread_destroy(snap_state.iothread);
++        snap_state.iothread = NULL;
++    }
++}
++
 +void qmp_savevm_start(const char *statefile, Error **errp)
 +{
 +    Error *local_err = NULL;
 +    MigrationState *ms = migrate_get_current();
-+    AioContext *iohandler_ctx = iohandler_get_aio_context();
 +    BlockDriverState *target_bs = NULL;
 +    int ret = 0;
 +
@@ -501,7 +509,6 @@ index 0000000000..7c0e857519
 +    }
 +
 +    /* initialize snapshot info */
-+    snap_state.saved_vm_running = runstate_is_running();
 +    snap_state.bs_pos = 0;
 +    snap_state.total_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
 +    snap_state.blocker = NULL;
@@ -513,13 +520,27 @@ index 0000000000..7c0e857519
 +    }
 +
 +    if (!statefile) {
++        snap_state.vm_needs_start = runstate_is_running();
 +        vm_stop(RUN_STATE_SAVE_VM);
 +        snap_state.state = SAVE_STATE_COMPLETED;
 +        return;
 +    }
 +
 +    if (qemu_savevm_state_blocked(errp)) {
-+        return;
++        goto fail;
++    }
++
++    if (snap_state.iothread) {
++        /* This is not expected, so warn about it, but no point in re-creating a new iothread. */
++        warn_report("iothread for snapshot already exists - re-using");
++    } else {
++        snap_state.iothread =
++            iothread_create("__proxmox_savevm_async_iothread__", &local_err);
++        if (!snap_state.iothread) {
++            error_setg(errp, "creating iothread failed: %s",
++                       local_err ? error_get_pretty(local_err) : "unknown error");
++            goto fail;
++        }
 +    }
 +
 +    /* Open the image */
@@ -529,12 +550,12 @@ index 0000000000..7c0e857519
 +    snap_state.target = blk_new_open(statefile, NULL, options, bdrv_oflags, &local_err);
 +    if (!snap_state.target) {
 +        error_setg(errp, "failed to open '%s'", statefile);
-+        goto restart;
++        goto fail;
 +    }
 +    target_bs = blk_bs(snap_state.target);
 +    if (!target_bs) {
 +        error_setg(errp, "failed to open '%s' - no block driver state", statefile);
-+        goto restart;
++        goto fail;
 +    }
 +
 +    QIOChannel *ioc = QIO_CHANNEL(qio_channel_savevm_async_new(snap_state.target,
@@ -543,7 +564,7 @@ index 0000000000..7c0e857519
 +
 +    if (!snap_state.file) {
 +        error_setg(errp, "failed to open '%s'", statefile);
-+        goto restart;
++        goto fail;
 +    }
 +
 +    /*
@@ -551,8 +572,8 @@ index 0000000000..7c0e857519
 +     * State is cleared in process_savevm_co, but has to be initialized
 +     * here (blocking main thread, from QMP) to avoid race conditions.
 +     */
-+    if (migrate_init(ms, errp)) {
-+        return;
++    if (migrate_init(ms, errp) != 0) {
++        goto fail;
 +    }
 +    memset(&mig_stats, 0, sizeof(mig_stats));
 +    ms->to_dst_file = snap_state.file;
@@ -567,56 +588,49 @@ index 0000000000..7c0e857519
 +    if (ret != 0) {
 +        error_setg_errno(errp, -ret, "savevm state setup failed: %s",
 +                         local_err ? error_get_pretty(local_err) : "unknown error");
-+        return;
++        goto fail;
 +    }
 +
-+    /* Async processing from here on out happens in iohandler context, so let
-+     * the target bdrv have its home there.
-+     */
-+    ret = blk_set_aio_context(snap_state.target, iohandler_ctx, &local_err);
++    ret = blk_set_aio_context(snap_state.target, snap_state.iothread->ctx, &local_err);
 +    if (ret != 0) {
-+        warn_report("failed to set iohandler context for VM state target: %s %s",
-+                    local_err ? error_get_pretty(local_err) : "unknown error",
-+                    strerror(-ret));
++        error_setg_errno(errp, -ret, "failed to set iothread context for VM state target: %s",
++                         local_err ? error_get_pretty(local_err) : "unknown error");
++        goto fail;
 +    }
 +
 +    snap_state.co = qemu_coroutine_create(&process_savevm_co, NULL);
-+    aio_co_schedule(iohandler_ctx, snap_state.co);
++    aio_co_schedule(snap_state.iothread->ctx, snap_state.co);
 +
 +    return;
 +
-+restart:
-+
++fail:
++    savevm_cleanup_iothread();
 +    save_snapshot_error("setup failed");
-+
-+    if (snap_state.saved_vm_running) {
-+        vm_start();
-+        snap_state.saved_vm_running = false;
-+    }
 +}
 +
 +static void coroutine_fn wait_for_close_co(void *opaque)
 +{
 +    int64_t timeout;
 +
-+    if (!snap_state.target) {
++    if (snap_state.target) {
++        /* wait until cleanup is done before returning, this ensures that after this
++         * call exits the statefile will be closed and can be removed immediately */
++        DPRINTF("savevm-end: waiting for cleanup\n");
++        timeout = 30L * 1000 * 1000 * 1000;
++        qemu_co_sleep_ns_wakeable(&snap_state.target_close_wait,
++                                  QEMU_CLOCK_REALTIME, timeout);
++        if (snap_state.target) {
++            save_snapshot_error("timeout waiting for target file close in "
++                                "qmp_savevm_end");
++            /* we cannot assume the snapshot finished in this case, so leave the
++             * state alone - caller has to figure something out */
++            return;
++        }
++    } else {
 +        DPRINTF("savevm-end: no target file open\n");
-+        return;
 +    }
 +
-+    /* wait until cleanup is done before returning, this ensures that after this
-+     * call exits the statefile will be closed and can be removed immediately */
-+    DPRINTF("savevm-end: waiting for cleanup\n");
-+    timeout = 30L * 1000 * 1000 * 1000;
-+    qemu_co_sleep_ns_wakeable(&snap_state.target_close_wait,
-+                              QEMU_CLOCK_REALTIME, timeout);
-+    if (snap_state.target) {
-+        save_snapshot_error("timeout waiting for target file close in "
-+                            "qmp_savevm_end");
-+        /* we cannot assume the snapshot finished in this case, so leave the
-+         * state alone - caller has to figure something out */
-+        return;
-+    }
++    savevm_cleanup_iothread();
 +
 +    // File closed and no other error, so ensure next snapshot can be started.
 +    if (snap_state.state != SAVE_STATE_ERROR) {
@@ -641,13 +655,11 @@ index 0000000000..7c0e857519
 +        return;
 +    }
 +
-+    if (snap_state.saved_vm_running) {
++    if (snap_state.vm_needs_start) {
 +        vm_start();
-+        snap_state.saved_vm_running = false;
++        snap_state.vm_needs_start = false;
 +    }
 +
-+    snap_state.state = SAVE_STATE_DONE;
-+
 +    qemu_coroutine_enter(wait_for_close);
 +}
 +
diff --git a/debian/patches/pve/0018-PVE-add-optional-buffer-size-to-QEMUFile.patch b/debian/patches/pve/0018-PVE-add-optional-buffer-size-to-QEMUFile.patch
index 13d522a..f5a0d96 100644
--- a/debian/patches/pve/0018-PVE-add-optional-buffer-size-to-QEMUFile.patch
+++ b/debian/patches/pve/0018-PVE-add-optional-buffer-size-to-QEMUFile.patch
@@ -184,10 +184,10 @@ index f5b9f430e0..0179b90698 100644
  
  G_DEFINE_AUTOPTR_CLEANUP_FUNC(QEMUFile, qemu_fclose)
 diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index 7c0e857519..fbcf74f9e2 100644
+index 56e0fa6c69..730b815494 100644
 --- a/migration/savevm-async.c
 +++ b/migration/savevm-async.c
-@@ -391,7 +391,7 @@ void qmp_savevm_start(const char *statefile, Error **errp)
+@@ -409,7 +409,7 @@ void qmp_savevm_start(const char *statefile, Error **errp)
  
      QIOChannel *ioc = QIO_CHANNEL(qio_channel_savevm_async_new(snap_state.target,
                                                                 &snap_state.bs_pos));
@@ -196,7 +196,7 @@ index 7c0e857519..fbcf74f9e2 100644
  
      if (!snap_state.file) {
          error_setg(errp, "failed to open '%s'", statefile);
-@@ -535,7 +535,8 @@ int load_snapshot_from_blockdev(const char *filename, Error **errp)
+@@ -544,7 +544,8 @@ int load_snapshot_from_blockdev(const char *filename, Error **errp)
      bdrv_op_block_all(bs, blocker);
  
      /* restore the VM state */
diff --git a/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch b/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
index 060525f..3a4ca0f 100644
--- a/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
+++ b/debian/patches/pve/0030-PVE-Backup-Proxmox-backup-patches-for-QEMU.patch
@@ -94,11 +94,11 @@ Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
  monitor/hmp-cmds.c             |   72 +++
  proxmox-backup-client.c        |  146 +++++
  proxmox-backup-client.h        |   60 ++
- pve-backup.c                   | 1091 ++++++++++++++++++++++++++++++++
+ pve-backup.c                   | 1096 ++++++++++++++++++++++++++++++++
  qapi/block-core.json           |  233 +++++++
  qapi/common.json               |   14 +
  qapi/machine.json              |   16 +-
- 14 files changed, 1710 insertions(+), 14 deletions(-)
+ 14 files changed, 1715 insertions(+), 14 deletions(-)
  create mode 100644 proxmox-backup-client.c
  create mode 100644 proxmox-backup-client.h
  create mode 100644 pve-backup.c
@@ -586,10 +586,10 @@ index 0000000000..8cbf645b2c
 +#endif /* PROXMOX_BACKUP_CLIENT_H */
 diff --git a/pve-backup.c b/pve-backup.c
 new file mode 100644
-index 0000000000..36e5042860
+index 0000000000..e931cb9203
 --- /dev/null
 +++ b/pve-backup.c
-@@ -0,0 +1,1091 @@
+@@ -0,0 +1,1096 @@
 +#include "proxmox-backup-client.h"
 +#include "vma.h"
 +
@@ -678,6 +678,7 @@ index 0000000000..36e5042860
 +    size_t size;
 +    uint64_t block_size;
 +    uint8_t dev_id;
++    char* device_name;
 +    int completed_ret; // INT_MAX if not completed
 +    BdrvDirtyBitmap *bitmap;
 +    BlockDriverState *target;
@@ -911,6 +912,8 @@ index 0000000000..36e5042860
 +    }
 +
 +    di->bs = NULL;
++    g_free(di->device_name);
++    di->device_name = NULL;
 +
 +    assert(di->target == NULL);
 +
@@ -1200,6 +1203,8 @@ index 0000000000..36e5042860
 +            }
 +            PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
 +            di->bs = bs;
++            di->device_name = g_strdup(bdrv_get_device_name(bs));
++
 +            di_list = g_list_append(di_list, di);
 +            d++;
 +        }
@@ -1213,6 +1218,7 @@ index 0000000000..36e5042860
 +
 +            PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
 +            di->bs = bs;
++            di->device_name = g_strdup(bdrv_get_device_name(bs));
 +            di_list = g_list_append(di_list, di);
 +        }
 +    }
@@ -1379,9 +1385,6 @@ index 0000000000..36e5042860
 +
 +            di->block_size = dump_cb_block_size;
 +
-+            bdrv_graph_co_rdlock();
-+            const char *devname = bdrv_get_device_name(di->bs);
-+            bdrv_graph_co_rdunlock();
 +            PBSBitmapAction action = PBS_BITMAP_ACTION_NOT_USED;
 +            size_t dirty = di->size;
 +
@@ -1396,7 +1399,8 @@ index 0000000000..36e5042860
 +                    }
 +                    action = PBS_BITMAP_ACTION_NEW;
 +                } else {
-+                    expect_only_dirty = proxmox_backup_check_incremental(pbs, devname, di->size) != 0;
++                    expect_only_dirty =
++                        proxmox_backup_check_incremental(pbs, di->device_name, di->size) != 0;
 +                }
 +
 +                if (expect_only_dirty) {
@@ -1420,7 +1424,8 @@ index 0000000000..36e5042860
 +                }
 +            }
 +
-+            int dev_id = proxmox_backup_co_register_image(pbs, devname, di->size, expect_only_dirty, errp);
++            int dev_id = proxmox_backup_co_register_image(pbs, di->device_name, di->size,
++                                                          expect_only_dirty, errp);
 +            if (dev_id < 0) {
 +                goto err_mutex;
 +            }
@@ -1432,7 +1437,7 @@ index 0000000000..36e5042860
 +            di->dev_id = dev_id;
 +
 +            PBSBitmapInfo *info = g_malloc(sizeof(*info));
-+            info->drive = g_strdup(devname);
++            info->drive = g_strdup(di->device_name);
 +            info->action = action;
 +            info->size = di->size;
 +            info->dirty = dirty;
@@ -1455,10 +1460,7 @@ index 0000000000..36e5042860
 +                goto err_mutex;
 +            }
 +
-+            bdrv_graph_co_rdlock();
-+            const char *devname = bdrv_get_device_name(di->bs);
-+            bdrv_graph_co_rdunlock();
-+            di->dev_id = vma_writer_register_stream(vmaw, devname, di->size);
++            di->dev_id = vma_writer_register_stream(vmaw, di->device_name, di->size);
 +            if (di->dev_id <= 0) {
 +                error_set(errp, ERROR_CLASS_GENERIC_ERROR,
 +                          "register_stream failed");
@@ -1569,6 +1571,9 @@ index 0000000000..36e5042860
 +            bdrv_co_unref(di->target);
 +        }
 +
++        g_free(di->device_name);
++        di->device_name = NULL;
++
 +        g_free(di);
 +    }
 +    g_list_free(di_list);
diff --git a/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch b/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
index 45a968b..cf3897f 100644
--- a/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
+++ b/debian/patches/pve/0034-PVE-Migrate-dirty-bitmap-state-via-savevm.patch
@@ -180,10 +180,10 @@ index 0000000000..a97187e4d7
 +                         NULL);
 +}
 diff --git a/pve-backup.c b/pve-backup.c
-index 36e5042860..11ea4b5c56 100644
+index e931cb9203..366b015589 100644
 --- a/pve-backup.c
 +++ b/pve-backup.c
-@@ -1084,6 +1084,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
+@@ -1089,6 +1089,7 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
      ret->pbs_library_version = g_strdup(proxmox_backup_qemu_version());
      ret->pbs_dirty_bitmap = true;
      ret->pbs_dirty_bitmap_savevm = true;
diff --git a/debian/patches/pve/0038-block-add-alloc-track-driver.patch b/debian/patches/pve/0038-block-add-alloc-track-driver.patch
index 5a527e4..5def145 100644
--- a/debian/patches/pve/0038-block-add-alloc-track-driver.patch
+++ b/debian/patches/pve/0038-block-add-alloc-track-driver.patch
@@ -31,21 +31,22 @@ Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
 [FE: adapt to changed function signatures
      make error return value consistent with QEMU
      avoid premature break during read
-     adhere to block graph lock requirements]
+     adhere to block graph lock requirements
+     avoid superfluous child permission update]
 Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
 ---
- block/alloc-track.c | 366 ++++++++++++++++++++++++++++++++++++++++++++
+ block/alloc-track.c | 343 ++++++++++++++++++++++++++++++++++++++++++++
  block/meson.build   |   1 +
- block/stream.c      |  34 ++++
- 3 files changed, 401 insertions(+)
+ block/stream.c      |  34 +++++
+ 3 files changed, 378 insertions(+)
  create mode 100644 block/alloc-track.c
 
 diff --git a/block/alloc-track.c b/block/alloc-track.c
 new file mode 100644
-index 0000000000..cb9829ae36
+index 0000000000..718aaabf2a
 --- /dev/null
 +++ b/block/alloc-track.c
-@@ -0,0 +1,366 @@
+@@ -0,0 +1,343 @@
 +/*
 + * Node to allow backing images to be applied to any node. Assumes a blank
 + * image to begin with, only new writes are tracked as allocated, thus this
@@ -73,16 +74,9 @@ index 0000000000..cb9829ae36
 +
 +#define TRACK_OPT_AUTO_REMOVE "auto-remove"
 +
-+typedef enum DropState {
-+    DropNone,
-+    DropInProgress,
-+} DropState;
-+
 +typedef struct {
 +    BdrvDirtyBitmap *bitmap;
 +    uint64_t granularity;
-+    DropState drop_state;
-+    bool auto_remove;
 +} BDRVAllocTrackState;
 +
 +static QemuOptsList runtime_opts = {
@@ -134,7 +128,11 @@ index 0000000000..cb9829ae36
 +        goto fail;
 +    }
 +
-+    s->auto_remove = qemu_opt_get_bool(opts, TRACK_OPT_AUTO_REMOVE, false);
++    if (!qemu_opt_get_bool(opts, TRACK_OPT_AUTO_REMOVE, false)) {
++        error_setg(errp, "alloc-track: requires auto-remove option to be set to on");
++        ret = -EINVAL;
++        goto fail;
++    }
 +
 +    /* open the target (write) node, backing will be attached by block layer */
 +    file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
@@ -182,8 +180,6 @@ index 0000000000..cb9829ae36
 +        goto fail;
 +    }
 +
-+    s->drop_state = DropNone;
-+
 +fail:
 +    if (ret < 0) {
 +        bdrv_graph_wrlock();
@@ -334,18 +330,8 @@ index 0000000000..cb9829ae36
 +                 BlockReopenQueue *reopen_queue, uint64_t perm, uint64_t shared,
 +                 uint64_t *nperm, uint64_t *nshared)
 +{
-+    BDRVAllocTrackState *s = bs->opaque;
-+
 +    *nshared = BLK_PERM_ALL;
 +
-+    /* in case we're currently dropping ourselves, claim to not use any
-+     * permissions at all - which is fine, since from this point on we will
-+     * never issue a read or write anymore */
-+    if (s->drop_state == DropInProgress) {
-+        *nperm = 0;
-+        return;
-+    }
-+
 +    if (role & BDRV_CHILD_DATA) {
 +        *nperm = perm & DEFAULT_PERM_PASSTHROUGH;
 +    } else {
@@ -371,14 +357,6 @@ index 0000000000..cb9829ae36
 +     * kinda fits better, but in the long-term, a special parameter would be
 +     * nice (or done via qemu-server via upcoming blockdev-replace QMP command).
 +     */
-+    if (backing_file == NULL) {
-+        BDRVAllocTrackState *s = bs->opaque;
-+        bdrv_drained_begin(bs);
-+        s->drop_state = DropInProgress;
-+        bdrv_child_refresh_perms(bs, bs->file, &error_abort);
-+        bdrv_drained_end(bs);
-+    }
-+
 +    return 0;
 +}
 +
diff --git a/debian/patches/pve/0044-PVE-backup-add-fleecing-option.patch b/debian/patches/pve/0042-PVE-backup-add-fleecing-option.patch
similarity index 59%
rename from debian/patches/pve/0044-PVE-backup-add-fleecing-option.patch
rename to debian/patches/pve/0042-PVE-backup-add-fleecing-option.patch
index b6eaade..1fadfa5 100644
--- a/debian/patches/pve/0044-PVE-backup-add-fleecing-option.patch
+++ b/debian/patches/pve/0042-PVE-backup-add-fleecing-option.patch
@@ -61,12 +61,91 @@ it has no parent, so just pass the one from the original bs.
 
 Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
 Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
+[FE: improve error when cbw fails as reported by Friedrich Weber]
+Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
 ---
+ block/copy-before-write.c      |  18 ++--
+ block/copy-before-write.h      |   1 +
  block/monitor/block-hmp-cmds.c |   1 +
- pve-backup.c                   | 134 ++++++++++++++++++++++++++++++++-
- qapi/block-core.json           |  10 ++-
- 3 files changed, 141 insertions(+), 4 deletions(-)
+ pve-backup.c                   | 175 ++++++++++++++++++++++++++++++++-
+ qapi/block-core.json           |  10 +-
+ 5 files changed, 195 insertions(+), 10 deletions(-)
 
+diff --git a/block/copy-before-write.c b/block/copy-before-write.c
+index fd470f5f92..5c23b578ef 100644
+--- a/block/copy-before-write.c
++++ b/block/copy-before-write.c
+@@ -27,6 +27,7 @@
+ #include "qobject/qjson.h"
+ 
+ #include "system/block-backend.h"
++#include "qemu/atomic.h"
+ #include "qemu/cutils.h"
+ #include "qapi/error.h"
+ #include "block/block_int.h"
+@@ -75,7 +76,8 @@ typedef struct BDRVCopyBeforeWriteState {
+      * @snapshot_error is normally zero. But on first copy-before-write failure
+      * when @on_cbw_error == ON_CBW_ERROR_BREAK_SNAPSHOT, @snapshot_error takes
+      * value of this error (<0). After that all in-flight and further
+-     * snapshot-API requests will fail with that error.
++     * snapshot-API requests will fail with that error. To be accessed with
++     * atomics.
+      */
+     int snapshot_error;
+ } BDRVCopyBeforeWriteState;
+@@ -115,7 +117,7 @@ static coroutine_fn int cbw_do_copy_before_write(BlockDriverState *bs,
+         return 0;
+     }
+ 
+-    if (s->snapshot_error) {
++    if (qatomic_read(&s->snapshot_error)) {
+         return 0;
+     }
+ 
+@@ -139,9 +141,7 @@ static coroutine_fn int cbw_do_copy_before_write(BlockDriverState *bs,
+     WITH_QEMU_LOCK_GUARD(&s->lock) {
+         if (ret < 0) {
+             assert(s->on_cbw_error == ON_CBW_ERROR_BREAK_SNAPSHOT);
+-            if (!s->snapshot_error) {
+-                s->snapshot_error = ret;
+-            }
++            qatomic_cmpxchg(&s->snapshot_error, 0, ret);
+         } else {
+             bdrv_set_dirty_bitmap(s->done_bitmap, off, end - off);
+         }
+@@ -215,7 +215,7 @@ cbw_snapshot_read_lock(BlockDriverState *bs, int64_t offset, int64_t bytes,
+ 
+     QEMU_LOCK_GUARD(&s->lock);
+ 
+-    if (s->snapshot_error) {
++    if (qatomic_read(&s->snapshot_error)) {
+         g_free(req);
+         return NULL;
+     }
+@@ -595,6 +595,12 @@ void bdrv_cbw_drop(BlockDriverState *bs)
+     bdrv_unref(bs);
+ }
+ 
++int bdrv_cbw_snapshot_error(BlockDriverState *bs)
++{
++    BDRVCopyBeforeWriteState *s = bs->opaque;
++    return qatomic_read(&s->snapshot_error);
++}
++
+ static void cbw_init(void)
+ {
+     bdrv_register(&bdrv_cbw_filter);
+diff --git a/block/copy-before-write.h b/block/copy-before-write.h
+index 2a5d4ba693..969da3620f 100644
+--- a/block/copy-before-write.h
++++ b/block/copy-before-write.h
+@@ -44,5 +44,6 @@ BlockDriverState *bdrv_cbw_append(BlockDriverState *source,
+                                   BlockCopyState **bcs,
+                                   Error **errp);
+ void bdrv_cbw_drop(BlockDriverState *bs);
++int bdrv_cbw_snapshot_error(BlockDriverState *bs);
+ 
+ #endif /* COPY_BEFORE_WRITE_H */
 diff --git a/block/monitor/block-hmp-cmds.c b/block/monitor/block-hmp-cmds.c
 index 4f30f99644..66d16d342f 100644
 --- a/block/monitor/block-hmp-cmds.c
@@ -80,7 +159,7 @@ index 4f30f99644..66d16d342f 100644
  
      hmp_handle_error(mon, error);
 diff --git a/pve-backup.c b/pve-backup.c
-index 11ea4b5c56..257cd0b0e1 100644
+index 366b015589..9b66788ab5 100644
 --- a/pve-backup.c
 +++ b/pve-backup.c
 @@ -7,6 +7,7 @@
@@ -107,17 +186,12 @@ index 11ea4b5c56..257cd0b0e1 100644
      size_t size;
      uint64_t block_size;
      uint8_t dev_id;
-@@ -354,6 +362,22 @@ static void pvebackup_complete_cb(void *opaque, int ret)
-     PVEBackupDevInfo *di = opaque;
-     di->completed_ret = ret;
+@@ -352,11 +360,44 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
+     qemu_co_mutex_unlock(&backup_state.backup_mutex);
+ }
  
-+    /*
-+     * Handle block-graph specific cleanup (for fleecing) outside of the coroutine, because the work
-+     * won't be done as a coroutine anyways:
-+     * - For snapshot_access, allows doing bdrv_unref() directly. Doing it via bdrv_co_unref() would
-+     *   just spawn a BH calling bdrv_unref().
-+     * - For cbw, draining would need to spawn a BH.
-+     */
++static void cleanup_snapshot_access(PVEBackupDevInfo *di)
++{
 +    if (di->fleecing.snapshot_access) {
 +        bdrv_unref(di->fleecing.snapshot_access);
 +        di->fleecing.snapshot_access = NULL;
@@ -126,11 +200,104 @@ index 11ea4b5c56..257cd0b0e1 100644
 +        bdrv_cbw_drop(di->fleecing.cbw);
 +        di->fleecing.cbw = NULL;
 +    }
++}
++
+ static void pvebackup_complete_cb(void *opaque, int ret)
+ {
+     PVEBackupDevInfo *di = opaque;
+     di->completed_ret = ret;
+ 
++    if (di->fleecing.cbw) {
++        /*
++         * With fleecing, failure for cbw does not fail the guest write, but only sets the snapshot
++         * error, making further requests to the snapshot fail with EACCES, which then also fail the
++         * job. But that code is not the root cause and just confusing, so update it.
++         */
++        int snapshot_error = bdrv_cbw_snapshot_error(di->fleecing.cbw);
++        if (di->completed_ret == -EACCES && snapshot_error) {
++            di->completed_ret = snapshot_error;
++        }
++    }
++
++    /*
++     * Handle block-graph specific cleanup (for fleecing) outside of the coroutine, because the work
++     * won't be done as a coroutine anyways:
++     * - For snapshot_access, allows doing bdrv_unref() directly. Doing it via bdrv_co_unref() would
++     *   just spawn a BH calling bdrv_unref().
++     * - For cbw, draining would need to spawn a BH.
++     */
++    cleanup_snapshot_access(di);
 +
      /*
       * Needs to happen outside of coroutine, because it takes the graph write lock.
       */
-@@ -520,9 +544,77 @@ static void create_backup_jobs_bh(void *opaque) {
+@@ -487,6 +528,65 @@ static int coroutine_fn pvebackup_co_add_config(
+     goto out;
+ }
+ 
++/*
++ * Setup a snapshot-access block node for a device with associated fleecing image.
++ */
++static int setup_snapshot_access(PVEBackupDevInfo *di, Error **errp)
++{
++    Error *local_err = NULL;
++
++    if (!di->fleecing.bs) {
++        error_setg(errp, "no associated fleecing image");
++        return -1;
++    }
++
++    QDict *cbw_opts = qdict_new();
++    qdict_put_str(cbw_opts, "driver", "copy-before-write");
++    qdict_put_str(cbw_opts, "file", bdrv_get_node_name(di->bs));
++    qdict_put_str(cbw_opts, "target", bdrv_get_node_name(di->fleecing.bs));
++
++    if (di->bitmap) {
++        /*
++         * Only guest writes to parts relevant for the backup need to be intercepted with
++         * old data being copied to the fleecing image.
++         */
++        qdict_put_str(cbw_opts, "bitmap.node", bdrv_get_node_name(di->bs));
++        qdict_put_str(cbw_opts, "bitmap.name", bdrv_dirty_bitmap_name(di->bitmap));
++    }
++    /*
++     * Fleecing storage is supposed to be fast and it's better to break backup than guest
++     * writes. Certain guest drivers like VirtIO-win have 60 seconds timeout by default, so
++     * abort a bit before that.
++     */
++    qdict_put_str(cbw_opts, "on-cbw-error", "break-snapshot");
++    qdict_put_int(cbw_opts, "cbw-timeout", 45);
++
++    di->fleecing.cbw = bdrv_insert_node(di->bs, cbw_opts, BDRV_O_RDWR, &local_err);
++
++    if (!di->fleecing.cbw) {
++        error_setg(errp, "appending cbw node for fleecing failed: %s",
++                   local_err ? error_get_pretty(local_err) : "unknown error");
++        return -1;
++    }
++
++    QDict *snapshot_access_opts = qdict_new();
++    qdict_put_str(snapshot_access_opts, "driver", "snapshot-access");
++    qdict_put_str(snapshot_access_opts, "file", bdrv_get_node_name(di->fleecing.cbw));
++
++    di->fleecing.snapshot_access =
++        bdrv_open(NULL, NULL, snapshot_access_opts, BDRV_O_RDWR | BDRV_O_UNMAP, &local_err);
++    if (!di->fleecing.snapshot_access) {
++        bdrv_cbw_drop(di->fleecing.cbw);
++        di->fleecing.cbw = NULL;
++
++        error_setg(errp, "setting up snapshot access for fleecing failed: %s",
++                   local_err ? error_get_pretty(local_err) : "unknown error");
++        return -1;
++    }
++
++    return 0;
++}
++
+ /*
+  * backup_job_create can *not* be run from a coroutine, so this can't either.
+  * The caller is responsible that backup_mutex is held nonetheless.
+@@ -523,9 +623,42 @@ static void create_backup_jobs_bh(void *opaque) {
          }
          bdrv_drained_begin(di->bs);
  
@@ -138,50 +305,15 @@ index 11ea4b5c56..257cd0b0e1 100644
 +
 +        BlockDriverState *source_bs = di->bs;
 +        bool discard_source = false;
-+        bdrv_graph_co_rdlock();
-+        const char *job_id = bdrv_get_device_name(di->bs);
-+        bdrv_graph_co_rdunlock();
 +        if (di->fleecing.bs) {
-+            QDict *cbw_opts = qdict_new();
-+            qdict_put_str(cbw_opts, "driver", "copy-before-write");
-+            qdict_put_str(cbw_opts, "file", bdrv_get_node_name(di->bs));
-+            qdict_put_str(cbw_opts, "target", bdrv_get_node_name(di->fleecing.bs));
-+
-+            if (di->bitmap) {
-+                /*
-+                 * Only guest writes to parts relevant for the backup need to be intercepted with
-+                 * old data being copied to the fleecing image.
-+                 */
-+                qdict_put_str(cbw_opts, "bitmap.node", bdrv_get_node_name(di->bs));
-+                qdict_put_str(cbw_opts, "bitmap.name", bdrv_dirty_bitmap_name(di->bitmap));
-+            }
-+            /*
-+             * Fleecing storage is supposed to be fast and it's better to break backup than guest
-+             * writes. Certain guest drivers like VirtIO-win have 60 seconds timeout by default, so
-+             * abort a bit before that.
-+             */
-+            qdict_put_str(cbw_opts, "on-cbw-error", "break-snapshot");
-+            qdict_put_int(cbw_opts, "cbw-timeout", 45);
-+
-+            di->fleecing.cbw = bdrv_insert_node(di->bs, cbw_opts, BDRV_O_RDWR, &local_err);
-+
-+            if (!di->fleecing.cbw) {
-+                error_setg(errp, "appending cbw node for fleecing failed: %s",
++            if (setup_snapshot_access(di, &local_err) < 0) {
++                error_setg(errp, "%s - setting up snapshot access for fleecing failed: %s",
++                           di->device_name,
 +                           local_err ? error_get_pretty(local_err) : "unknown error");
++                bdrv_drained_end(di->bs);
 +                break;
 +            }
 +
-+            QDict *snapshot_access_opts = qdict_new();
-+            qdict_put_str(snapshot_access_opts, "driver", "snapshot-access");
-+            qdict_put_str(snapshot_access_opts, "file", bdrv_get_node_name(di->fleecing.cbw));
-+
-+            di->fleecing.snapshot_access =
-+                bdrv_open(NULL, NULL, snapshot_access_opts, BDRV_O_RDWR | BDRV_O_UNMAP, &local_err);
-+            if (!di->fleecing.snapshot_access) {
-+                error_setg(errp, "setting up snapshot access for fleecing failed: %s",
-+                           local_err ? error_get_pretty(local_err) : "unknown error");
-+                break;
-+            }
 +            source_bs = di->fleecing.snapshot_access;
 +            discard_source = true;
 +
@@ -205,12 +337,20 @@ index 11ea4b5c56..257cd0b0e1 100644
          BlockJob *job = backup_job_create(
 -            NULL, di->bs, di->target, backup_state.speed, sync_mode, di->bitmap,
 -            bitmap_mode, false, NULL, &backup_state.perf, BLOCKDEV_ON_ERROR_REPORT,
-+            job_id, source_bs, di->target, backup_state.speed, sync_mode, di->bitmap,
++            di->device_name, source_bs, di->target, backup_state.speed, sync_mode, di->bitmap,
 +            bitmap_mode, false, discard_source, NULL, &perf, BLOCKDEV_ON_ERROR_REPORT,
              BLOCKDEV_ON_ERROR_REPORT, JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn,
              &local_err);
  
-@@ -578,6 +670,14 @@ static void create_backup_jobs_bh(void *opaque) {
+@@ -539,6 +672,7 @@ static void create_backup_jobs_bh(void *opaque) {
+         }
+ 
+         if (!job || local_err) {
++            cleanup_snapshot_access(di);
+             error_setg(errp, "backup_job_create failed: %s",
+                        local_err ? error_get_pretty(local_err) : "null");
+             break;
+@@ -581,6 +715,14 @@ static void create_backup_jobs_bh(void *opaque) {
      aio_co_enter(data->ctx, data->co);
  }
  
@@ -225,7 +365,7 @@ index 11ea4b5c56..257cd0b0e1 100644
  /*
   * Returns a list of device infos, which needs to be freed by the caller. In
   * case of an error, errp will be set, but the returned value might still be a
-@@ -585,6 +685,7 @@ static void create_backup_jobs_bh(void *opaque) {
+@@ -588,6 +730,7 @@ static void create_backup_jobs_bh(void *opaque) {
   */
  static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
      const char *devlist,
@@ -233,11 +373,10 @@ index 11ea4b5c56..257cd0b0e1 100644
      Error **errp)
  {
      gchar **devs = NULL;
-@@ -608,6 +709,31 @@ static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
-             }
-             PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
+@@ -613,6 +756,30 @@ static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
              di->bs = bs;
-+
+             di->device_name = g_strdup(bdrv_get_device_name(bs));
+ 
 +            if (fleecing && device_uses_fleecing(*d)) {
 +                g_autofree gchar *fleecing_devid = g_strconcat(*d, "-fleecing", NULL);
 +                BlockBackend *fleecing_blk = blk_by_name(fleecing_devid);
@@ -265,7 +404,7 @@ index 11ea4b5c56..257cd0b0e1 100644
              di_list = g_list_append(di_list, di);
              d++;
          }
-@@ -657,6 +783,7 @@ UuidInfo coroutine_fn *qmp_backup(
+@@ -663,6 +830,7 @@ UuidInfo coroutine_fn *qmp_backup(
      const char *devlist,
      bool has_speed, int64_t speed,
      bool has_max_workers, int64_t max_workers,
@@ -273,7 +412,7 @@ index 11ea4b5c56..257cd0b0e1 100644
      Error **errp)
  {
      assert(qemu_in_coroutine());
-@@ -685,7 +812,7 @@ UuidInfo coroutine_fn *qmp_backup(
+@@ -691,7 +859,7 @@ UuidInfo coroutine_fn *qmp_backup(
      format = has_format ? format : BACKUP_FORMAT_VMA;
  
      bdrv_graph_co_rdlock();
@@ -282,7 +421,7 @@ index 11ea4b5c56..257cd0b0e1 100644
      bdrv_graph_co_rdunlock();
      if (local_err) {
          error_propagate(errp, local_err);
-@@ -1088,5 +1215,6 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
+@@ -1093,5 +1261,6 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
      ret->query_bitmap_info = true;
      ret->pbs_masterkey = true;
      ret->backup_max_workers = true;
diff --git a/debian/patches/pve/0042-alloc-track-error-out-when-auto-remove-is-not-set.patch b/debian/patches/pve/0042-alloc-track-error-out-when-auto-remove-is-not-set.patch
deleted file mode 100644
index c65479d..0000000
--- a/debian/patches/pve/0042-alloc-track-error-out-when-auto-remove-is-not-set.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Tue, 26 Mar 2024 14:57:51 +0100
-Subject: [PATCH] alloc-track: error out when auto-remove is not set
-
-Since replacing the node now happens in the stream job, where the
-option cannot be read from (it's internal to the driver), it will
-always be treated as on.
-
-qemu-server will always set it, make sure to have other users notice
-the change (should they even exist). The option can be fully dropped
-in the future while adding a version guard in qemu-server.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
----
- block/alloc-track.c | 7 +++++--
- 1 file changed, 5 insertions(+), 2 deletions(-)
-
-diff --git a/block/alloc-track.c b/block/alloc-track.c
-index cb9829ae36..30fac992fa 100644
---- a/block/alloc-track.c
-+++ b/block/alloc-track.c
-@@ -34,7 +34,6 @@ typedef struct {
-     BdrvDirtyBitmap *bitmap;
-     uint64_t granularity;
-     DropState drop_state;
--    bool auto_remove;
- } BDRVAllocTrackState;
- 
- static QemuOptsList runtime_opts = {
-@@ -86,7 +85,11 @@ static int track_open(BlockDriverState *bs, QDict *options, int flags,
-         goto fail;
-     }
- 
--    s->auto_remove = qemu_opt_get_bool(opts, TRACK_OPT_AUTO_REMOVE, false);
-+    if (!qemu_opt_get_bool(opts, TRACK_OPT_AUTO_REMOVE, false)) {
-+        error_setg(errp, "alloc-track: requires auto-remove option to be set to on");
-+        ret = -EINVAL;
-+        goto fail;
-+    }
- 
-     /* open the target (write) node, backing will be attached by block layer */
-     file = bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
diff --git a/debian/patches/pve/0050-adapt-machine-version-deprecation-for-Proxmox-VE.patch b/debian/patches/pve/0043-adapt-machine-version-deprecation-for-Proxmox-VE.patch
similarity index 100%
rename from debian/patches/pve/0050-adapt-machine-version-deprecation-for-Proxmox-VE.patch
rename to debian/patches/pve/0043-adapt-machine-version-deprecation-for-Proxmox-VE.patch
diff --git a/debian/patches/pve/0043-alloc-track-avoid-seemingly-superfluous-child-permis.patch b/debian/patches/pve/0043-alloc-track-avoid-seemingly-superfluous-child-permis.patch
deleted file mode 100644
index 5113d34..0000000
--- a/debian/patches/pve/0043-alloc-track-avoid-seemingly-superfluous-child-permis.patch
+++ /dev/null
@@ -1,84 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Wed, 27 Mar 2024 11:15:39 +0100
-Subject: [PATCH] alloc-track: avoid seemingly superfluous child permission
- update
-
-Doesn't seem necessary nowadays (maybe after commit "alloc-track: fix
-deadlock during drop" where the dropping is not rescheduled and delayed
-anymore or some upstream change). Should there really be some issue,
-instead of having a drop state, this could also be just based off the
-fact whether there is still a backing child.
-
-Dumping the cumulative (shared) permissions for the BDS with a debug
-print yields the same values after this patch and with QEMU 8.1,
-namely 3 and 5.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
----
- block/alloc-track.c | 26 --------------------------
- 1 file changed, 26 deletions(-)
-
-diff --git a/block/alloc-track.c b/block/alloc-track.c
-index 30fac992fa..718aaabf2a 100644
---- a/block/alloc-track.c
-+++ b/block/alloc-track.c
-@@ -25,15 +25,9 @@
- 
- #define TRACK_OPT_AUTO_REMOVE "auto-remove"
- 
--typedef enum DropState {
--    DropNone,
--    DropInProgress,
--} DropState;
--
- typedef struct {
-     BdrvDirtyBitmap *bitmap;
-     uint64_t granularity;
--    DropState drop_state;
- } BDRVAllocTrackState;
- 
- static QemuOptsList runtime_opts = {
-@@ -137,8 +131,6 @@ static int track_open(BlockDriverState *bs, QDict *options, int flags,
-         goto fail;
-     }
- 
--    s->drop_state = DropNone;
--
- fail:
-     if (ret < 0) {
-         bdrv_graph_wrlock();
-@@ -289,18 +281,8 @@ track_child_perm(BlockDriverState *bs, BdrvChild *c, BdrvChildRole role,
-                  BlockReopenQueue *reopen_queue, uint64_t perm, uint64_t shared,
-                  uint64_t *nperm, uint64_t *nshared)
- {
--    BDRVAllocTrackState *s = bs->opaque;
--
-     *nshared = BLK_PERM_ALL;
- 
--    /* in case we're currently dropping ourselves, claim to not use any
--     * permissions at all - which is fine, since from this point on we will
--     * never issue a read or write anymore */
--    if (s->drop_state == DropInProgress) {
--        *nperm = 0;
--        return;
--    }
--
-     if (role & BDRV_CHILD_DATA) {
-         *nperm = perm & DEFAULT_PERM_PASSTHROUGH;
-     } else {
-@@ -326,14 +308,6 @@ track_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
-      * kinda fits better, but in the long-term, a special parameter would be
-      * nice (or done via qemu-server via upcoming blockdev-replace QMP command).
-      */
--    if (backing_file == NULL) {
--        BDRVAllocTrackState *s = bs->opaque;
--        bdrv_drained_begin(bs);
--        s->drop_state = DropInProgress;
--        bdrv_child_refresh_perms(bs, bs->file, &error_abort);
--        bdrv_drained_end(bs);
--    }
--
-     return 0;
- }
- 
diff --git a/debian/patches/pve/0051-Revert-hpet-avoid-timer-storms-on-periodic-timers.patch b/debian/patches/pve/0044-Revert-hpet-avoid-timer-storms-on-periodic-timers.patch
similarity index 100%
rename from debian/patches/pve/0051-Revert-hpet-avoid-timer-storms-on-periodic-timers.patch
rename to debian/patches/pve/0044-Revert-hpet-avoid-timer-storms-on-periodic-timers.patch
diff --git a/debian/patches/pve/0045-PVE-backup-improve-error-when-copy-before-write-fail.patch b/debian/patches/pve/0045-PVE-backup-improve-error-when-copy-before-write-fail.patch
deleted file mode 100644
index e6cecc4..0000000
--- a/debian/patches/pve/0045-PVE-backup-improve-error-when-copy-before-write-fail.patch
+++ /dev/null
@@ -1,117 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Mon, 29 Apr 2024 14:43:58 +0200
-Subject: [PATCH] PVE backup: improve error when copy-before-write fails for
- fleecing
-
-With fleecing, failure for copy-before-write does not fail the guest
-write, but only sets the snapshot error that is associated to the
-copy-before-write filter, making further requests to the snapshot
-access fail with EACCES, which then also fails the job. But that error
-code is not the root cause of why the backup failed, so bubble up the
-original snapshot error instead.
-
-Reported-by: Friedrich Weber <f.weber@proxmox.com>
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-Tested-by: Friedrich Weber <f.weber@proxmox.com>
----
- block/copy-before-write.c | 18 ++++++++++++------
- block/copy-before-write.h |  1 +
- pve-backup.c              |  9 +++++++++
- 3 files changed, 22 insertions(+), 6 deletions(-)
-
-diff --git a/block/copy-before-write.c b/block/copy-before-write.c
-index fd470f5f92..5c23b578ef 100644
---- a/block/copy-before-write.c
-+++ b/block/copy-before-write.c
-@@ -27,6 +27,7 @@
- #include "qobject/qjson.h"
- 
- #include "system/block-backend.h"
-+#include "qemu/atomic.h"
- #include "qemu/cutils.h"
- #include "qapi/error.h"
- #include "block/block_int.h"
-@@ -75,7 +76,8 @@ typedef struct BDRVCopyBeforeWriteState {
-      * @snapshot_error is normally zero. But on first copy-before-write failure
-      * when @on_cbw_error == ON_CBW_ERROR_BREAK_SNAPSHOT, @snapshot_error takes
-      * value of this error (<0). After that all in-flight and further
--     * snapshot-API requests will fail with that error.
-+     * snapshot-API requests will fail with that error. To be accessed with
-+     * atomics.
-      */
-     int snapshot_error;
- } BDRVCopyBeforeWriteState;
-@@ -115,7 +117,7 @@ static coroutine_fn int cbw_do_copy_before_write(BlockDriverState *bs,
-         return 0;
-     }
- 
--    if (s->snapshot_error) {
-+    if (qatomic_read(&s->snapshot_error)) {
-         return 0;
-     }
- 
-@@ -139,9 +141,7 @@ static coroutine_fn int cbw_do_copy_before_write(BlockDriverState *bs,
-     WITH_QEMU_LOCK_GUARD(&s->lock) {
-         if (ret < 0) {
-             assert(s->on_cbw_error == ON_CBW_ERROR_BREAK_SNAPSHOT);
--            if (!s->snapshot_error) {
--                s->snapshot_error = ret;
--            }
-+            qatomic_cmpxchg(&s->snapshot_error, 0, ret);
-         } else {
-             bdrv_set_dirty_bitmap(s->done_bitmap, off, end - off);
-         }
-@@ -215,7 +215,7 @@ cbw_snapshot_read_lock(BlockDriverState *bs, int64_t offset, int64_t bytes,
- 
-     QEMU_LOCK_GUARD(&s->lock);
- 
--    if (s->snapshot_error) {
-+    if (qatomic_read(&s->snapshot_error)) {
-         g_free(req);
-         return NULL;
-     }
-@@ -595,6 +595,12 @@ void bdrv_cbw_drop(BlockDriverState *bs)
-     bdrv_unref(bs);
- }
- 
-+int bdrv_cbw_snapshot_error(BlockDriverState *bs)
-+{
-+    BDRVCopyBeforeWriteState *s = bs->opaque;
-+    return qatomic_read(&s->snapshot_error);
-+}
-+
- static void cbw_init(void)
- {
-     bdrv_register(&bdrv_cbw_filter);
-diff --git a/block/copy-before-write.h b/block/copy-before-write.h
-index 2a5d4ba693..969da3620f 100644
---- a/block/copy-before-write.h
-+++ b/block/copy-before-write.h
-@@ -44,5 +44,6 @@ BlockDriverState *bdrv_cbw_append(BlockDriverState *source,
-                                   BlockCopyState **bcs,
-                                   Error **errp);
- void bdrv_cbw_drop(BlockDriverState *bs);
-+int bdrv_cbw_snapshot_error(BlockDriverState *bs);
- 
- #endif /* COPY_BEFORE_WRITE_H */
-diff --git a/pve-backup.c b/pve-backup.c
-index 257cd0b0e1..ffcfaa649d 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -374,6 +374,15 @@ static void pvebackup_complete_cb(void *opaque, int ret)
-         di->fleecing.snapshot_access = NULL;
-     }
-     if (di->fleecing.cbw) {
-+        /*
-+         * With fleecing, failure for cbw does not fail the guest write, but only sets the snapshot
-+         * error, making further requests to the snapshot fail with EACCES, which then also fail the
-+         * job. But that code is not the root cause and just confusing, so update it.
-+         */
-+        int snapshot_error = bdrv_cbw_snapshot_error(di->fleecing.cbw);
-+        if (di->completed_ret == -EACCES && snapshot_error) {
-+            di->completed_ret = snapshot_error;
-+        }
-         bdrv_cbw_drop(di->fleecing.cbw);
-         di->fleecing.cbw = NULL;
-     }
diff --git a/debian/patches/pve/0052-Revert-hpet-store-full-64-bit-target-value-of-the-co.patch b/debian/patches/pve/0045-Revert-hpet-store-full-64-bit-target-value-of-the-co.patch
similarity index 100%
rename from debian/patches/pve/0052-Revert-hpet-store-full-64-bit-target-value-of-the-co.patch
rename to debian/patches/pve/0045-Revert-hpet-store-full-64-bit-target-value-of-the-co.patch
diff --git a/debian/patches/pve/0046-PVE-backup-fixup-error-handling-for-fleecing.patch b/debian/patches/pve/0046-PVE-backup-fixup-error-handling-for-fleecing.patch
deleted file mode 100644
index 49a2256..0000000
--- a/debian/patches/pve/0046-PVE-backup-fixup-error-handling-for-fleecing.patch
+++ /dev/null
@@ -1,103 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Thu, 7 Nov 2024 17:51:14 +0100
-Subject: [PATCH] PVE backup: fixup error handling for fleecing
-
-The drained section needs to be terminated before breaking out of the
-loop in the error scenarios. Otherwise, guest IO on the drive would
-become stuck.
-
-If the job is created successfully, then the job completion callback
-will clean up the snapshot access block nodes. In case failure
-happened before the job is created, there was no cleanup for the
-snapshot access block nodes yet. Add it.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
----
- pve-backup.c | 38 +++++++++++++++++++++++++-------------
- 1 file changed, 25 insertions(+), 13 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index ffcfaa649d..718a31e4ca 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -357,22 +357,23 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
-     qemu_co_mutex_unlock(&backup_state.backup_mutex);
- }
- 
--static void pvebackup_complete_cb(void *opaque, int ret)
-+static void cleanup_snapshot_access(PVEBackupDevInfo *di)
- {
--    PVEBackupDevInfo *di = opaque;
--    di->completed_ret = ret;
--
--    /*
--     * Handle block-graph specific cleanup (for fleecing) outside of the coroutine, because the work
--     * won't be done as a coroutine anyways:
--     * - For snapshot_access, allows doing bdrv_unref() directly. Doing it via bdrv_co_unref() would
--     *   just spawn a BH calling bdrv_unref().
--     * - For cbw, draining would need to spawn a BH.
--     */
-     if (di->fleecing.snapshot_access) {
-         bdrv_unref(di->fleecing.snapshot_access);
-         di->fleecing.snapshot_access = NULL;
-     }
-+    if (di->fleecing.cbw) {
-+        bdrv_cbw_drop(di->fleecing.cbw);
-+        di->fleecing.cbw = NULL;
-+    }
-+}
-+
-+static void pvebackup_complete_cb(void *opaque, int ret)
-+{
-+    PVEBackupDevInfo *di = opaque;
-+    di->completed_ret = ret;
-+
-     if (di->fleecing.cbw) {
-         /*
-          * With fleecing, failure for cbw does not fail the guest write, but only sets the snapshot
-@@ -383,10 +384,17 @@ static void pvebackup_complete_cb(void *opaque, int ret)
-         if (di->completed_ret == -EACCES && snapshot_error) {
-             di->completed_ret = snapshot_error;
-         }
--        bdrv_cbw_drop(di->fleecing.cbw);
--        di->fleecing.cbw = NULL;
-     }
- 
-+    /*
-+     * Handle block-graph specific cleanup (for fleecing) outside of the coroutine, because the work
-+     * won't be done as a coroutine anyways:
-+     * - For snapshot_access, allows doing bdrv_unref() directly. Doing it via bdrv_co_unref() would
-+     *   just spawn a BH calling bdrv_unref().
-+     * - For cbw, draining would need to spawn a BH.
-+     */
-+    cleanup_snapshot_access(di);
-+
-     /*
-      * Needs to happen outside of coroutine, because it takes the graph write lock.
-      */
-@@ -587,6 +595,7 @@ static void create_backup_jobs_bh(void *opaque) {
-             if (!di->fleecing.cbw) {
-                 error_setg(errp, "appending cbw node for fleecing failed: %s",
-                            local_err ? error_get_pretty(local_err) : "unknown error");
-+                bdrv_drained_end(di->bs);
-                 break;
-             }
- 
-@@ -599,6 +608,8 @@ static void create_backup_jobs_bh(void *opaque) {
-             if (!di->fleecing.snapshot_access) {
-                 error_setg(errp, "setting up snapshot access for fleecing failed: %s",
-                            local_err ? error_get_pretty(local_err) : "unknown error");
-+                cleanup_snapshot_access(di);
-+                bdrv_drained_end(di->bs);
-                 break;
-             }
-             source_bs = di->fleecing.snapshot_access;
-@@ -637,6 +648,7 @@ static void create_backup_jobs_bh(void *opaque) {
-         }
- 
-         if (!job || local_err) {
-+            cleanup_snapshot_access(di);
-             error_setg(errp, "backup_job_create failed: %s",
-                        local_err ? error_get_pretty(local_err) : "null");
-             break;
diff --git a/debian/patches/pve/0053-Revert-hpet-accept-64-bit-reads-and-writes.patch b/debian/patches/pve/0046-Revert-hpet-accept-64-bit-reads-and-writes.patch
similarity index 100%
rename from debian/patches/pve/0053-Revert-hpet-accept-64-bit-reads-and-writes.patch
rename to debian/patches/pve/0046-Revert-hpet-accept-64-bit-reads-and-writes.patch
diff --git a/debian/patches/pve/0047-PVE-backup-factor-out-setting-up-snapshot-access-for.patch b/debian/patches/pve/0047-PVE-backup-factor-out-setting-up-snapshot-access-for.patch
deleted file mode 100644
index af8e0dd..0000000
--- a/debian/patches/pve/0047-PVE-backup-factor-out-setting-up-snapshot-access-for.patch
+++ /dev/null
@@ -1,135 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Thu, 7 Nov 2024 17:51:15 +0100
-Subject: [PATCH] PVE backup: factor out setting up snapshot access for
- fleecing
-
-Avoids some line bloat in the create_backup_jobs_bh() function and is
-in preparation for setting up the snapshot access independently of
-fleecing, in particular that will be useful for providing access to
-the snapshot via NBD.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
----
- pve-backup.c | 95 ++++++++++++++++++++++++++++++++--------------------
- 1 file changed, 58 insertions(+), 37 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 718a31e4ca..3551bd6c92 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -525,6 +525,62 @@ static int coroutine_fn pvebackup_co_add_config(
-     goto out;
- }
- 
-+/*
-+ * Setup a snapshot-access block node for a device with associated fleecing image.
-+ */
-+static int setup_snapshot_access(PVEBackupDevInfo *di, Error **errp)
-+{
-+    Error *local_err = NULL;
-+
-+    if (!di->fleecing.bs) {
-+        error_setg(errp, "no associated fleecing image");
-+        return -1;
-+    }
-+
-+    QDict *cbw_opts = qdict_new();
-+    qdict_put_str(cbw_opts, "driver", "copy-before-write");
-+    qdict_put_str(cbw_opts, "file", bdrv_get_node_name(di->bs));
-+    qdict_put_str(cbw_opts, "target", bdrv_get_node_name(di->fleecing.bs));
-+
-+    if (di->bitmap) {
-+        /*
-+         * Only guest writes to parts relevant for the backup need to be intercepted with
-+         * old data being copied to the fleecing image.
-+         */
-+        qdict_put_str(cbw_opts, "bitmap.node", bdrv_get_node_name(di->bs));
-+        qdict_put_str(cbw_opts, "bitmap.name", bdrv_dirty_bitmap_name(di->bitmap));
-+    }
-+    /*
-+     * Fleecing storage is supposed to be fast and it's better to break backup than guest
-+     * writes. Certain guest drivers like VirtIO-win have 60 seconds timeout by default, so
-+     * abort a bit before that.
-+     */
-+    qdict_put_str(cbw_opts, "on-cbw-error", "break-snapshot");
-+    qdict_put_int(cbw_opts, "cbw-timeout", 45);
-+
-+    di->fleecing.cbw = bdrv_insert_node(di->bs, cbw_opts, BDRV_O_RDWR, &local_err);
-+
-+    if (!di->fleecing.cbw) {
-+        error_setg(errp, "appending cbw node for fleecing failed: %s",
-+                   local_err ? error_get_pretty(local_err) : "unknown error");
-+        return -1;
-+    }
-+
-+    QDict *snapshot_access_opts = qdict_new();
-+    qdict_put_str(snapshot_access_opts, "driver", "snapshot-access");
-+    qdict_put_str(snapshot_access_opts, "file", bdrv_get_node_name(di->fleecing.cbw));
-+
-+    di->fleecing.snapshot_access =
-+        bdrv_open(NULL, NULL, snapshot_access_opts, BDRV_O_RDWR | BDRV_O_UNMAP, &local_err);
-+    if (!di->fleecing.snapshot_access) {
-+        error_setg(errp, "setting up snapshot access for fleecing failed: %s",
-+                   local_err ? error_get_pretty(local_err) : "unknown error");
-+        return -1;
-+    }
-+
-+    return 0;
-+}
-+
- /*
-  * backup_job_create can *not* be run from a coroutine, so this can't either.
-  * The caller is responsible that backup_mutex is held nonetheless.
-@@ -569,49 +625,14 @@ static void create_backup_jobs_bh(void *opaque) {
-         const char *job_id = bdrv_get_device_name(di->bs);
-         bdrv_graph_co_rdunlock();
-         if (di->fleecing.bs) {
--            QDict *cbw_opts = qdict_new();
--            qdict_put_str(cbw_opts, "driver", "copy-before-write");
--            qdict_put_str(cbw_opts, "file", bdrv_get_node_name(di->bs));
--            qdict_put_str(cbw_opts, "target", bdrv_get_node_name(di->fleecing.bs));
--
--            if (di->bitmap) {
--                /*
--                 * Only guest writes to parts relevant for the backup need to be intercepted with
--                 * old data being copied to the fleecing image.
--                 */
--                qdict_put_str(cbw_opts, "bitmap.node", bdrv_get_node_name(di->bs));
--                qdict_put_str(cbw_opts, "bitmap.name", bdrv_dirty_bitmap_name(di->bitmap));
--            }
--            /*
--             * Fleecing storage is supposed to be fast and it's better to break backup than guest
--             * writes. Certain guest drivers like VirtIO-win have 60 seconds timeout by default, so
--             * abort a bit before that.
--             */
--            qdict_put_str(cbw_opts, "on-cbw-error", "break-snapshot");
--            qdict_put_int(cbw_opts, "cbw-timeout", 45);
--
--            di->fleecing.cbw = bdrv_insert_node(di->bs, cbw_opts, BDRV_O_RDWR, &local_err);
--
--            if (!di->fleecing.cbw) {
--                error_setg(errp, "appending cbw node for fleecing failed: %s",
--                           local_err ? error_get_pretty(local_err) : "unknown error");
--                bdrv_drained_end(di->bs);
--                break;
--            }
--
--            QDict *snapshot_access_opts = qdict_new();
--            qdict_put_str(snapshot_access_opts, "driver", "snapshot-access");
--            qdict_put_str(snapshot_access_opts, "file", bdrv_get_node_name(di->fleecing.cbw));
--
--            di->fleecing.snapshot_access =
--                bdrv_open(NULL, NULL, snapshot_access_opts, BDRV_O_RDWR | BDRV_O_UNMAP, &local_err);
--            if (!di->fleecing.snapshot_access) {
-+            if (setup_snapshot_access(di, &local_err) < 0) {
-                 error_setg(errp, "setting up snapshot access for fleecing failed: %s",
-                            local_err ? error_get_pretty(local_err) : "unknown error");
-                 cleanup_snapshot_access(di);
-                 bdrv_drained_end(di->bs);
-                 break;
-             }
-+
-             source_bs = di->fleecing.snapshot_access;
-             discard_source = true;
- 
diff --git a/debian/patches/pve/0054-Revert-hpet-place-read-only-bits-directly-in-new_val.patch b/debian/patches/pve/0047-Revert-hpet-place-read-only-bits-directly-in-new_val.patch
similarity index 100%
rename from debian/patches/pve/0054-Revert-hpet-place-read-only-bits-directly-in-new_val.patch
rename to debian/patches/pve/0047-Revert-hpet-place-read-only-bits-directly-in-new_val.patch
diff --git a/debian/patches/pve/0048-PVE-backup-save-device-name-in-device-info-structure.patch b/debian/patches/pve/0048-PVE-backup-save-device-name-in-device-info-structure.patch
deleted file mode 100644
index 3523164..0000000
--- a/debian/patches/pve/0048-PVE-backup-save-device-name-in-device-info-structure.patch
+++ /dev/null
@@ -1,135 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Thu, 7 Nov 2024 17:51:16 +0100
-Subject: [PATCH] PVE backup: save device name in device info structure
-
-The device name needs to be queried while holding the graph read lock
-and since it doesn't change during the whole operation, just get it
-once during setup and avoid the need to query it again in different
-places.
-
-Also in preparation to use it more often in error messages and for the
-upcoming external backup access API.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
----
- pve-backup.c | 29 +++++++++++++++--------------
- 1 file changed, 15 insertions(+), 14 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 3551bd6c92..25f65952a0 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -94,6 +94,7 @@ typedef struct PVEBackupDevInfo {
-     size_t size;
-     uint64_t block_size;
-     uint8_t dev_id;
-+    char* device_name;
-     int completed_ret; // INT_MAX if not completed
-     BdrvDirtyBitmap *bitmap;
-     BlockDriverState *target;
-@@ -327,6 +328,8 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
-     }
- 
-     di->bs = NULL;
-+    g_free(di->device_name);
-+    di->device_name = NULL;
- 
-     assert(di->target == NULL);
- 
-@@ -621,9 +624,6 @@ static void create_backup_jobs_bh(void *opaque) {
- 
-         BlockDriverState *source_bs = di->bs;
-         bool discard_source = false;
--        bdrv_graph_co_rdlock();
--        const char *job_id = bdrv_get_device_name(di->bs);
--        bdrv_graph_co_rdunlock();
-         if (di->fleecing.bs) {
-             if (setup_snapshot_access(di, &local_err) < 0) {
-                 error_setg(errp, "setting up snapshot access for fleecing failed: %s",
-@@ -654,7 +654,7 @@ static void create_backup_jobs_bh(void *opaque) {
-         }
- 
-         BlockJob *job = backup_job_create(
--            job_id, source_bs, di->target, backup_state.speed, sync_mode, di->bitmap,
-+            di->device_name, source_bs, di->target, backup_state.speed, sync_mode, di->bitmap,
-             bitmap_mode, false, discard_source, NULL, &perf, BLOCKDEV_ON_ERROR_REPORT,
-             BLOCKDEV_ON_ERROR_REPORT, JOB_DEFAULT, pvebackup_complete_cb, di, backup_state.txn,
-             &local_err);
-@@ -751,6 +751,7 @@ static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
-             }
-             PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
-             di->bs = bs;
-+            di->device_name = g_strdup(bdrv_get_device_name(bs));
- 
-             if (fleecing && device_uses_fleecing(*d)) {
-                 g_autofree gchar *fleecing_devid = g_strconcat(*d, "-fleecing", NULL);
-@@ -789,6 +790,7 @@ static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
- 
-             PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
-             di->bs = bs;
-+            di->device_name = g_strdup(bdrv_get_device_name(bs));
-             di_list = g_list_append(di_list, di);
-         }
-     }
-@@ -956,9 +958,6 @@ UuidInfo coroutine_fn *qmp_backup(
- 
-             di->block_size = dump_cb_block_size;
- 
--            bdrv_graph_co_rdlock();
--            const char *devname = bdrv_get_device_name(di->bs);
--            bdrv_graph_co_rdunlock();
-             PBSBitmapAction action = PBS_BITMAP_ACTION_NOT_USED;
-             size_t dirty = di->size;
- 
-@@ -973,7 +972,8 @@ UuidInfo coroutine_fn *qmp_backup(
-                     }
-                     action = PBS_BITMAP_ACTION_NEW;
-                 } else {
--                    expect_only_dirty = proxmox_backup_check_incremental(pbs, devname, di->size) != 0;
-+                    expect_only_dirty =
-+                        proxmox_backup_check_incremental(pbs, di->device_name, di->size) != 0;
-                 }
- 
-                 if (expect_only_dirty) {
-@@ -997,7 +997,8 @@ UuidInfo coroutine_fn *qmp_backup(
-                 }
-             }
- 
--            int dev_id = proxmox_backup_co_register_image(pbs, devname, di->size, expect_only_dirty, errp);
-+            int dev_id = proxmox_backup_co_register_image(pbs, di->device_name, di->size,
-+                                                          expect_only_dirty, errp);
-             if (dev_id < 0) {
-                 goto err_mutex;
-             }
-@@ -1009,7 +1010,7 @@ UuidInfo coroutine_fn *qmp_backup(
-             di->dev_id = dev_id;
- 
-             PBSBitmapInfo *info = g_malloc(sizeof(*info));
--            info->drive = g_strdup(devname);
-+            info->drive = g_strdup(di->device_name);
-             info->action = action;
-             info->size = di->size;
-             info->dirty = dirty;
-@@ -1032,10 +1033,7 @@ UuidInfo coroutine_fn *qmp_backup(
-                 goto err_mutex;
-             }
- 
--            bdrv_graph_co_rdlock();
--            const char *devname = bdrv_get_device_name(di->bs);
--            bdrv_graph_co_rdunlock();
--            di->dev_id = vma_writer_register_stream(vmaw, devname, di->size);
-+            di->dev_id = vma_writer_register_stream(vmaw, di->device_name, di->size);
-             if (di->dev_id <= 0) {
-                 error_set(errp, ERROR_CLASS_GENERIC_ERROR,
-                           "register_stream failed");
-@@ -1146,6 +1144,9 @@ err:
-             bdrv_co_unref(di->target);
-         }
- 
-+        g_free(di->device_name);
-+        di->device_name = NULL;
-+
-         g_free(di);
-     }
-     g_list_free(di_list);
diff --git a/debian/patches/pve/0055-Revert-hpet-remove-unnecessary-variable-index.patch b/debian/patches/pve/0048-Revert-hpet-remove-unnecessary-variable-index.patch
similarity index 100%
rename from debian/patches/pve/0055-Revert-hpet-remove-unnecessary-variable-index.patch
rename to debian/patches/pve/0048-Revert-hpet-remove-unnecessary-variable-index.patch
diff --git a/debian/patches/pve/0049-PVE-backup-include-device-name-in-error-when-setting.patch b/debian/patches/pve/0049-PVE-backup-include-device-name-in-error-when-setting.patch
deleted file mode 100644
index 477beca..0000000
--- a/debian/patches/pve/0049-PVE-backup-include-device-name-in-error-when-setting.patch
+++ /dev/null
@@ -1,25 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Thu, 7 Nov 2024 17:51:17 +0100
-Subject: [PATCH] PVE backup: include device name in error when setting up
- snapshot access fails
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
----
- pve-backup.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 25f65952a0..b411d58a9a 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -626,7 +626,8 @@ static void create_backup_jobs_bh(void *opaque) {
-         bool discard_source = false;
-         if (di->fleecing.bs) {
-             if (setup_snapshot_access(di, &local_err) < 0) {
--                error_setg(errp, "setting up snapshot access for fleecing failed: %s",
-+                error_setg(errp, "%s - setting up snapshot access for fleecing failed: %s",
-+                           di->device_name,
-                            local_err ? error_get_pretty(local_err) : "unknown error");
-                 cleanup_snapshot_access(di);
-                 bdrv_drained_end(di->bs);
diff --git a/debian/patches/pve/0056-Revert-hpet-ignore-high-bits-of-comparator-in-32-bit.patch b/debian/patches/pve/0049-Revert-hpet-ignore-high-bits-of-comparator-in-32-bit.patch
similarity index 100%
rename from debian/patches/pve/0056-Revert-hpet-ignore-high-bits-of-comparator-in-32-bit.patch
rename to debian/patches/pve/0049-Revert-hpet-ignore-high-bits-of-comparator-in-32-bit.patch
diff --git a/debian/patches/pve/0057-Revert-hpet-fix-and-cleanup-persistence-of-interrupt.patch b/debian/patches/pve/0050-Revert-hpet-fix-and-cleanup-persistence-of-interrupt.patch
similarity index 100%
rename from debian/patches/pve/0057-Revert-hpet-fix-and-cleanup-persistence-of-interrupt.patch
rename to debian/patches/pve/0050-Revert-hpet-fix-and-cleanup-persistence-of-interrupt.patch
diff --git a/debian/patches/pve/0064-PVE-backup-factor-out-helper-to-clear-backup-state-s.patch b/debian/patches/pve/0051-PVE-backup-factor-out-helper-to-clear-backup-state-s.patch
similarity index 100%
rename from debian/patches/pve/0064-PVE-backup-factor-out-helper-to-clear-backup-state-s.patch
rename to debian/patches/pve/0051-PVE-backup-factor-out-helper-to-clear-backup-state-s.patch
diff --git a/debian/patches/pve/0065-PVE-backup-factor-out-helper-to-initialize-backup-st.patch b/debian/patches/pve/0052-PVE-backup-factor-out-helper-to-initialize-backup-st.patch
similarity index 100%
rename from debian/patches/pve/0065-PVE-backup-factor-out-helper-to-initialize-backup-st.patch
rename to debian/patches/pve/0052-PVE-backup-factor-out-helper-to-initialize-backup-st.patch
diff --git a/debian/patches/pve/0066-PVE-backup-add-target-ID-in-backup-state.patch b/debian/patches/pve/0053-PVE-backup-add-target-ID-in-backup-state.patch
similarity index 100%
rename from debian/patches/pve/0066-PVE-backup-add-target-ID-in-backup-state.patch
rename to debian/patches/pve/0053-PVE-backup-add-target-ID-in-backup-state.patch
diff --git a/debian/patches/pve/0067-PVE-backup-get-device-info-allow-caller-to-specify-f.patch b/debian/patches/pve/0054-PVE-backup-get-device-info-allow-caller-to-specify-f.patch
similarity index 100%
rename from debian/patches/pve/0067-PVE-backup-get-device-info-allow-caller-to-specify-f.patch
rename to debian/patches/pve/0054-PVE-backup-get-device-info-allow-caller-to-specify-f.patch
diff --git a/debian/patches/pve/0055-PVE-backup-implement-backup-access-setup-and-teardow.patch b/debian/patches/pve/0055-PVE-backup-implement-backup-access-setup-and-teardow.patch
new file mode 100644
index 0000000..c4273a1
--- /dev/null
+++ b/debian/patches/pve/0055-PVE-backup-implement-backup-access-setup-and-teardow.patch
@@ -0,0 +1,898 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Fiona Ebner <f.ebner@proxmox.com>
+Date: Thu, 3 Apr 2025 14:30:46 +0200
+Subject: [PATCH] PVE backup: implement backup access setup and teardown API
+ for external providers
+
+For external backup providers, the state of the VM's disk images at
+the time the backup is started is preserved via a snapshot-access
+block node. Old data is moved to the fleecing image when new guest
+writes come in. The snapshot-access block node, as well as the
+associated bitmap in case of incremental backup, will be exported via
+NBD to the external provider. The NBD export will be done by the
+management layer, the missing functionality is setting up and tearing
+down the snapshot-access block nodes, which this patch adds.
+
+It is necessary to also set up fleecing for EFI and TPM disks, so that
+old data can be moved out of the way when a new guest write comes in.
+
+There can only be one regular backup or one active backup access at
+a time, because both require replacing the original block node of the
+drive. Thus the backup state is re-used, and checks are added to
+prohibit regular backup while snapshot access is active and vice
+versa.
+
+The block nodes added by the backup-access-setup QMP call are not
+tracked anywhere else (unlike for regular backup, there is no job they
+are associated with). This requires adding a callback for teardown when
+QEMU exits, i.e. in qemu_cleanup(). Otherwise, there will be an
+assertion failure that the block graph is not empty when QEMU exits
+before the backup-access-teardown QMP command is called.
+
+The code for qmp_backup_access_setup() was based on the existing
+qmp_backup() routine.
+
+The return value for the setup QMP command contains information about
+the snapshot-access block nodes that can be used by the management
+layer to set up the NBD exports.
+
+There can be one dirty bitmap for each backup target ID for each
+device (which are tracked in the backup_access_bitmaps hash table).
+The QMP user can specify the ID of the bitmap it likes to use. This ID
+is then compared to the current one for the given target and device.
+If they match, the bitmap is re-used (should it still exist on the
+drive, otherwise re-created). If there is a mismatch, the old bitmap
+is removed and a new one is created.
+
+The return value of the QMP command includes information about what
+bitmap action was taken, similar to what the query-backup QMP command
+returns for regular backup. It also includes the bitmap name and
+associated block node, so the management layer can then set up an NBD
+export with the bitmap.
+
+While the backup access is active, a background bitmap is also
+required. This is necessary to implement bitmap handling according to
+the original reference [0]. In particular:
+
+- in the error case, new writes since the backup access was set up are
+  in the background bitmap. Because of failure, the previously tracked
+  writes from the backup access bitmap are still required too. Thus,
+  the bitmap is merged with the background bitmap to get all new
+  writes since the last backup.
+
+- in the success case, continue tracking for the next incremental
+  backup in the backup access bitmap. New writes since the backup
+  access was set up are in the background bitmap. Because the backup
+  was successfully, clear the backup access bitmap and merge back the
+  background bitmap to get only the new writes.
+
+Since QEMU cannot know if the backup was successful or not (except if
+failure already happens during the setup QMP command), the management
+layer needs to tell it via the teardown QMP command.
+
+The bitmap action is also recorded in the device info now.
+
+The backup-access API keeps track of which bitmap names were used for
+which devices and thus knows when a bitmap went missing. Propagate
+this information to the QMP user with a new 'missing-recreated'
+variant for the taken bitmap action.
+
+[0]: https://lore.kernel.org/qemu-devel/b68833dd-8864-4d72-7c61-c134a9835036@ya.ru/
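+
+For illustration, a QMP exchange with an external provider could look
+like the following (the device name and target ID here are made up):
+
+    { "execute": "backup-access-setup",
+      "arguments": { "target-id": "my-provider",
+                     "devices": [ { "device": "drive-scsi0",
+                                    "bitmap-mode": "use" } ] } }
+
+and, after the external provider has finished its backup:
+
+    { "execute": "backup-access-teardown",
+      "arguments": { "target-id": "my-provider", "success": true } }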
+
+Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
+Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
+---
+ pve-backup.c         | 519 +++++++++++++++++++++++++++++++++++++++----
+ pve-backup.h         |  16 ++
+ qapi/block-core.json |  99 ++++++++-
+ system/runstate.c    |   6 +
+ 4 files changed, 596 insertions(+), 44 deletions(-)
+ create mode 100644 pve-backup.h
+
+diff --git a/pve-backup.c b/pve-backup.c
+index bd81621d51..0450303017 100644
+--- a/pve-backup.c
++++ b/pve-backup.c
+@@ -1,4 +1,5 @@
+ #include "proxmox-backup-client.h"
++#include "pve-backup.h"
+ #include "vma.h"
+ 
+ #include "qemu/osdep.h"
+@@ -14,6 +15,7 @@
+ #include "qobject/qdict.h"
+ #include "qapi/qmp/qerror.h"
+ #include "qemu/cutils.h"
++#include "qemu/error-report.h"
+ 
+ #if defined(CONFIG_MALLOC_TRIM)
+ #include <malloc.h>
+@@ -40,6 +42,7 @@
+  */
+ 
+ const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
++const char *BACKGROUND_BITMAP_NAME = "backup-access-background-bitmap";
+ 
+ static struct PVEBackupState {
+     struct {
+@@ -98,8 +101,11 @@ typedef struct PVEBackupDevInfo {
+     char* device_name;
+     int completed_ret; // INT_MAX if not completed
+     BdrvDirtyBitmap *bitmap;
++    BdrvDirtyBitmap *background_bitmap; // used for external backup access
++    PBSBitmapAction bitmap_action;
+     BlockDriverState *target;
+     BlockJob *job;
++    BackupAccessSetupBitmapMode requested_bitmap_mode;
+ } PVEBackupDevInfo;
+ 
+ static void pvebackup_propagate_error(Error *err)
+@@ -361,6 +367,67 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
+     qemu_co_mutex_unlock(&backup_state.backup_mutex);
+ }
+ 
++/*
++ * New writes since the backup access was set up are in the background bitmap. Because of failure,
++ * the previously tracked writes in di->bitmap are still required too. Thus, merge with the
++ * background bitmap to get all new writes since the last backup.
++ */
++static void handle_backup_access_bitmaps_in_error_case(PVEBackupDevInfo *di)
++{
++    Error *local_err = NULL;
++
++    if (di->bs && di->background_bitmap) {
++        bdrv_drained_begin(di->bs);
++        if (di->bitmap) {
++            bdrv_enable_dirty_bitmap(di->bitmap);
++            if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap, NULL, &local_err)) {
++                warn_report("backup access: %s - could not merge bitmaps in error path - %s",
++                            di->device_name,
++                            local_err ? error_get_pretty(local_err) : "unknown error");
++                /*
++                 * Could not merge, drop original bitmap too.
++                 */
++                bdrv_release_dirty_bitmap(di->bitmap);
++            }
++        } else {
++            warn_report("backup access: %s - expected bitmap not present", di->device_name);
++        }
++        bdrv_release_dirty_bitmap(di->background_bitmap);
++        bdrv_drained_end(di->bs);
++    }
++}
++
++/*
++ * Continue tracking for next incremental backup in di->bitmap. New writes since the backup access
++ * was set up are in the background bitmap. Because the backup was successful, clear di->bitmap and
++ * merge back the background bitmap to get only the new writes.
++ */
++static void handle_backup_access_bitmaps_after_success(PVEBackupDevInfo *di)
++{
++    Error *local_err = NULL;
++
++    if (di->bs && di->background_bitmap) {
++        bdrv_drained_begin(di->bs);
++        if (di->bitmap) {
++            bdrv_enable_dirty_bitmap(di->bitmap);
++            bdrv_clear_dirty_bitmap(di->bitmap, NULL);
++            if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap, NULL, &local_err)) {
++                warn_report("backup access: %s - could not merge bitmaps after backup - %s",
++                            di->device_name,
++                            local_err ? error_get_pretty(local_err) : "unknown error");
++                /*
++                 * Could not merge, drop original bitmap too.
++                 */
++                bdrv_release_dirty_bitmap(di->bitmap);
++            }
++        } else {
++            warn_report("backup access: %s - expected bitmap not present", di->device_name);
++        }
++        bdrv_release_dirty_bitmap(di->background_bitmap);
++        bdrv_drained_end(di->bs);
++    }
++}
++
+ static void cleanup_snapshot_access(PVEBackupDevInfo *di)
+ {
+     if (di->fleecing.snapshot_access) {
+@@ -588,6 +655,51 @@ static int setup_snapshot_access(PVEBackupDevInfo *di, Error **errp)
+     return 0;
+ }
+ 
++static void setup_all_snapshot_access_bh(void *opaque)
++{
++    assert(!qemu_in_coroutine());
++
++    CoCtxData *data = (CoCtxData*)opaque;
++    Error **errp = (Error**)data->data;
++
++    Error *local_err = NULL;
++
++    GList *l = backup_state.di_list;
++    while (l) {
++        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
++        l = g_list_next(l);
++
++        bdrv_drained_begin(di->bs);
++
++        if (di->bitmap) {
++            BdrvDirtyBitmap *background_bitmap =
++                bdrv_create_dirty_bitmap(di->bs, PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE,
++                                         BACKGROUND_BITMAP_NAME, &local_err);
++            if (!background_bitmap) {
++                error_setg(errp, "%s - creating background bitmap for backup access failed: %s",
++                           di->device_name,
++                           local_err ? error_get_pretty(local_err) : "unknown error");
++                bdrv_drained_end(di->bs);
++                break;
++            }
++            di->background_bitmap = background_bitmap;
++            bdrv_disable_dirty_bitmap(di->bitmap);
++        }
++
++        if (setup_snapshot_access(di, &local_err) < 0) {
++            bdrv_drained_end(di->bs);
++            error_setg(errp, "%s - setting up snapshot access failed: %s", di->device_name,
++                       local_err ? error_get_pretty(local_err) : "unknown error");
++            break;
++        }
++
++        bdrv_drained_end(di->bs);
++    }
++
++    /* return */
++    aio_co_enter(data->ctx, data->co);
++}
++
+ /*
+  * backup_job_create can *not* be run from a coroutine, so this can't either.
+  * The caller is responsible that backup_mutex is held nonetheless.
+@@ -724,6 +836,62 @@ static bool fleecing_no_efi_tpm(const char *device_id)
+     return strncmp(device_id, "drive-efidisk", 13) && strncmp(device_id, "drive-tpmstate", 14);
+ }
+ 
++static bool fleecing_all(const char *device_id)
++{
++    return true;
++}
++
++static PVEBackupDevInfo coroutine_fn GRAPH_RDLOCK *get_single_device_info(
++    const char *device,
++    bool (*device_uses_fleecing)(const char*),
++    Error **errp)
++{
++    BlockBackend *blk = blk_by_name(device);
++    if (!blk) {
++        error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
++                  "Device '%s' not found", device);
++        return NULL;
++    }
++    BlockDriverState *bs = blk_bs(blk);
++    if (!bdrv_co_is_inserted(bs)) {
++        error_setg(errp, "Device '%s' has no medium", device);
++        return NULL;
++    }
++    PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
++    di->bs = bs;
++    di->device_name = g_strdup(bdrv_get_device_name(bs));
++
++    if (device_uses_fleecing && device_uses_fleecing(device)) {
++        g_autofree gchar *fleecing_devid = g_strconcat(device, "-fleecing", NULL);
++        BlockBackend *fleecing_blk = blk_by_name(fleecing_devid);
++        if (!fleecing_blk) {
++            error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
++                      "Device '%s' not found", fleecing_devid);
++            goto fail;
++        }
++        BlockDriverState *fleecing_bs = blk_bs(fleecing_blk);
++        if (!bdrv_co_is_inserted(fleecing_bs)) {
++            error_setg(errp, "Device '%s' has no medium", fleecing_devid);
++            goto fail;
++        }
++        /*
++         * Fleecing image needs to be the same size to act as a cbw target.
++         */
++        if (bs->total_sectors != fleecing_bs->total_sectors) {
++            error_setg(errp, "Size mismatch for '%s' - sector count %ld != %ld",
++                       fleecing_devid, fleecing_bs->total_sectors, bs->total_sectors);
++            goto fail;
++        }
++        di->fleecing.bs = fleecing_bs;
++    }
++
++    return di;
++fail:
++    g_free(di->device_name);
++    g_free(di);
++    return NULL;
++}
++
+ /*
+  * Returns a list of device infos, which needs to be freed by the caller. In
+  * case of an error, errp will be set, but the returned value might still be a
+@@ -742,45 +910,10 @@ static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
+ 
+         gchar **d = devs;
+         while (d && *d) {
+-            BlockBackend *blk = blk_by_name(*d);
+-            if (!blk) {
+-                error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
+-                          "Device '%s' not found", *d);
+-                goto err;
+-            }
+-            BlockDriverState *bs = blk_bs(blk);
+-            if (!bdrv_co_is_inserted(bs)) {
+-                error_setg(errp, "Device '%s' has no medium", *d);
++            PVEBackupDevInfo *di = get_single_device_info(*d, device_uses_fleecing, errp);
++            if (!di) {
+                 goto err;
+             }
+-            PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
+-            di->bs = bs;
+-            di->device_name = g_strdup(bdrv_get_device_name(bs));
+-
+-            if (device_uses_fleecing && device_uses_fleecing(*d)) {
+-                g_autofree gchar *fleecing_devid = g_strconcat(*d, "-fleecing", NULL);
+-                BlockBackend *fleecing_blk = blk_by_name(fleecing_devid);
+-                if (!fleecing_blk) {
+-                    error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
+-                              "Device '%s' not found", fleecing_devid);
+-                    goto err;
+-                }
+-                BlockDriverState *fleecing_bs = blk_bs(fleecing_blk);
+-                if (!bdrv_co_is_inserted(fleecing_bs)) {
+-                    error_setg(errp, "Device '%s' has no medium", fleecing_devid);
+-                    goto err;
+-                }
+-                /*
+-                 * Fleecing image needs to be the same size to act as a cbw target.
+-                 */
+-                if (bs->total_sectors != fleecing_bs->total_sectors) {
+-                    error_setg(errp, "Size mismatch for '%s' - sector count %ld != %ld",
+-                               fleecing_devid, fleecing_bs->total_sectors, bs->total_sectors);
+-                    goto err;
+-                }
+-                di->fleecing.bs = fleecing_bs;
+-            }
+-
+             di_list = g_list_append(di_list, di);
+             d++;
+         }
+@@ -839,8 +972,9 @@ static void clear_backup_state_bitmap_list(void) {
+  */
+ static void initialize_backup_state_stat(
+     const char *backup_file,
+-    uuid_t uuid,
+-    size_t total)
++    uuid_t *uuid,
++    size_t total,
++    bool starting)
+ {
+     if (backup_state.stat.error) {
+         error_free(backup_state.stat.error);
+@@ -855,15 +989,19 @@ static void initialize_backup_state_stat(
+     }
+     backup_state.stat.backup_file = g_strdup(backup_file);
+ 
+-    uuid_copy(backup_state.stat.uuid, uuid);
+-    uuid_unparse_lower(uuid, backup_state.stat.uuid_str);
++    if (uuid) {
++        uuid_copy(backup_state.stat.uuid, *uuid);
++        uuid_unparse_lower(*uuid, backup_state.stat.uuid_str);
++    } else {
++        backup_state.stat.uuid_str[0] = '\0';
++    }
+ 
+     backup_state.stat.total = total;
+     backup_state.stat.dirty = total - backup_state.stat.reused;
+     backup_state.stat.transferred = 0;
+     backup_state.stat.zero_bytes = 0;
+     backup_state.stat.finishing = false;
+-    backup_state.stat.starting = true;
++    backup_state.stat.starting = starting;
+ }
+ 
+ /*
+@@ -876,6 +1014,299 @@ static void backup_state_set_target_id(const char *target_id) {
+     backup_state.target_id = g_strdup(target_id);
+ }
+ 
++BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
++    const char *target_id,
++    BackupAccessSourceDeviceList *devices,
++    Error **errp)
++{
++    assert(qemu_in_coroutine());
++
++    qemu_co_mutex_lock(&backup_state.backup_mutex);
++
++    Error *local_err = NULL;
++    GList *di_list = NULL;
++    GList *l;
++
++    if (backup_state.di_list) {
++        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
++                  "previous backup for target '%s' not finished", backup_state.target_id);
++        qemu_co_mutex_unlock(&backup_state.backup_mutex);
++        return NULL;
++    }
++
++    bdrv_graph_co_rdlock();
++    for (BackupAccessSourceDeviceList *it = devices; it; it = it->next) {
++        PVEBackupDevInfo *di = get_single_device_info(it->value->device, fleecing_all, &local_err);
++        if (!di) {
++            bdrv_graph_co_rdunlock();
++            error_propagate(errp, local_err);
++            goto err;
++        }
++        di->requested_bitmap_mode = it->value->bitmap_mode;
++        di_list = g_list_append(di_list, di);
++    }
++    bdrv_graph_co_rdunlock();
++    assert(di_list);
++
++    size_t total = 0;
++
++    l = di_list;
++    while (l) {
++        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
++        l = g_list_next(l);
++
++        ssize_t size = bdrv_getlength(di->bs);
++        if (size < 0) {
++            error_setg_errno(errp, -size, "bdrv_getlength failed");
++            goto err;
++        }
++        di->size = size;
++        total += size;
++
++        di->completed_ret = INT_MAX;
++    }
++
++    qemu_mutex_lock(&backup_state.stat.lock);
++    backup_state.stat.reused = 0;
++
++    /* clear previous backup's bitmap_list */
++    clear_backup_state_bitmap_list();
++
++    const char *bitmap_name = target_id;
++
++    /* create bitmaps if requested */
++    l = di_list;
++    while (l) {
++        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
++        l = g_list_next(l);
++
++        di->block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE;
++
++        PBSBitmapAction action = PBS_BITMAP_ACTION_NOT_USED;
++        size_t dirty = di->size;
++
++        if (di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NONE ||
++            di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW) {
++            BdrvDirtyBitmap *old_bitmap = bdrv_find_dirty_bitmap(di->bs, bitmap_name);
++            if (old_bitmap) {
++                bdrv_release_dirty_bitmap(old_bitmap);
++                action = PBS_BITMAP_ACTION_NOT_USED_REMOVED; // set below for new
++            }
++        }
++
++        BdrvDirtyBitmap *bitmap = NULL;
++        if (di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW ||
++            di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_USE) {
++            bitmap = bdrv_find_dirty_bitmap(di->bs, bitmap_name);
++            if (!bitmap) {
++                bitmap = bdrv_create_dirty_bitmap(di->bs, PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE,
++                                                  bitmap_name, errp);
++                if (!bitmap) {
++                    qemu_mutex_unlock(&backup_state.stat.lock);
++                    goto err;
++                }
++                bdrv_set_dirty_bitmap(bitmap, 0, di->size);
++                if (di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_USE) {
++                    action = PBS_BITMAP_ACTION_MISSING_RECREATED;
++                } else {
++                    action = PBS_BITMAP_ACTION_NEW;
++                }
++            } else {
++                if (di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW) {
++                    qemu_mutex_unlock(&backup_state.stat.lock);
++                    error_setg(errp, "internal error - removed old bitmap still present");
++                    goto err;
++                }
++                /* track clean chunks as reused */
++                dirty = MIN(bdrv_get_dirty_count(bitmap), di->size);
++                backup_state.stat.reused += di->size - dirty;
++                action = PBS_BITMAP_ACTION_USED;
++            }
++        }
++
++        PBSBitmapInfo *info = g_malloc(sizeof(*info));
++        info->drive = g_strdup(di->device_name);
++        info->action = action;
++        info->size = di->size;
++        info->dirty = dirty;
++        backup_state.stat.bitmap_list = g_list_append(backup_state.stat.bitmap_list, info);
++
++        di->bitmap = bitmap;
++        di->bitmap_action = action;
++    }
++
++    /* starting=false, because there is no associated QEMU job */
++    initialize_backup_state_stat(NULL, NULL, total, false);
++
++    qemu_mutex_unlock(&backup_state.stat.lock);
++
++    backup_state_set_target_id(target_id);
++
++    backup_state.vmaw = NULL;
++    backup_state.pbs = NULL;
++
++    backup_state.di_list = di_list;
++
++    /* Run setup_all_snapshot_access_bh outside of coroutine (in BH) but keep
++     * backup_mutex locked. This is fine, a CoMutex can be held across yield
++     * points, and we'll release it as soon as the BH reschedules us.
++     */
++    CoCtxData waker = {
++        .co = qemu_coroutine_self(),
++        .ctx = qemu_get_current_aio_context(),
++        .data = &local_err,
++    };
++    aio_bh_schedule_oneshot(waker.ctx, setup_all_snapshot_access_bh, &waker);
++    qemu_coroutine_yield();
++
++    if (local_err) {
++        error_propagate(errp, local_err);
++        goto err;
++    }
++
++    qemu_co_mutex_unlock(&backup_state.backup_mutex);
++
++    BackupAccessInfoList *bai_head = NULL, **p_bai_next = &bai_head;
++
++    l = di_list;
++    while (l) {
++        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
++        l = g_list_next(l);
++
++        BackupAccessInfoList *info = g_malloc0(sizeof(*info));
++        info->value = g_malloc0(sizeof(*info->value));
++        info->value->node_name = g_strdup(bdrv_get_node_name(di->fleecing.snapshot_access));
++        info->value->device = g_strdup(di->device_name);
++        info->value->size = di->size;
++        if (di->bitmap) {
++            info->value->bitmap_node_name = g_strdup(bdrv_get_node_name(di->bs));
++            info->value->bitmap_name = g_strdup(bitmap_name);
++            info->value->bitmap_action = di->bitmap_action;
++            info->value->has_bitmap_action = true;
++        }
++
++        *p_bai_next = info;
++        p_bai_next = &info->next;
++    }
++
++    return bai_head;
++
++err:
++
++    l = di_list;
++    while (l) {
++        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
++        l = g_list_next(l);
++
++        handle_backup_access_bitmaps_in_error_case(di);
++
++        g_free(di->device_name);
++        di->device_name = NULL;
++
++        g_free(di);
++    }
++    g_list_free(di_list);
++    backup_state.di_list = NULL;
++
++    qemu_co_mutex_unlock(&backup_state.backup_mutex);
++    return NULL;
++}
++
++/*
++ * Caller needs to hold the backup mutex or the BQL.
++ */
++void backup_access_teardown(bool success)
++{
++    GList *l = backup_state.di_list;
++
++    qemu_mutex_lock(&backup_state.stat.lock);
++    backup_state.stat.finishing = true;
++    qemu_mutex_unlock(&backup_state.stat.lock);
++
++    while (l) {
++        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
++        l = g_list_next(l);
++
++        if (di->fleecing.snapshot_access) {
++            bdrv_unref(di->fleecing.snapshot_access);
++            di->fleecing.snapshot_access = NULL;
++        }
++        if (di->fleecing.cbw) {
++            bdrv_cbw_drop(di->fleecing.cbw);
++            di->fleecing.cbw = NULL;
++        }
++
++        if (success) {
++            handle_backup_access_bitmaps_after_success(di);
++        } else {
++            handle_backup_access_bitmaps_in_error_case(di);
++        }
++
++        g_free(di->device_name);
++        di->device_name = NULL;
++
++        g_free(di);
++    }
++    g_list_free(backup_state.di_list);
++    backup_state.di_list = NULL;
++
++    qemu_mutex_lock(&backup_state.stat.lock);
++    backup_state.stat.end_time = time(NULL);
++    backup_state.stat.finishing = false;
++    qemu_mutex_unlock(&backup_state.stat.lock);
++}
++
++// Not done in a coroutine, because bdrv_co_unref() and bdrv_cbw_drop() would just spawn BHs anyway.
++// Caller needs to hold the backup_state.backup_mutex lock
++static void backup_access_teardown_bh(void *opaque)
++{
++    CoCtxData *data = (CoCtxData*)opaque;
++
++    backup_access_teardown(*((bool*)data->data));
++
++    /* return */
++    aio_co_enter(data->ctx, data->co);
++}
++
++void coroutine_fn qmp_backup_access_teardown(const char *target_id, bool success, Error **errp)
++{
++    assert(qemu_in_coroutine());
++
++    qemu_co_mutex_lock(&backup_state.backup_mutex);
++
++    if (!backup_state.target_id) { // nothing to do
++        qemu_co_mutex_unlock(&backup_state.backup_mutex);
++        return;
++    }
++
++    /*
++     * Continue with target_id == NULL, used by the callback registered for qemu_cleanup()
++     */
++    if (target_id && strcmp(backup_state.target_id, target_id)) {
++        error_setg(errp, "cannot teardown backup access - got target %s instead of %s",
++                   target_id, backup_state.target_id);
++        qemu_co_mutex_unlock(&backup_state.backup_mutex);
++        return;
++    }
++
++    if (!strcmp(backup_state.target_id, "Proxmox VE")) {
++        error_setg(errp, "cannot teardown backup access for PVE - use backup-cancel instead");
++        qemu_co_mutex_unlock(&backup_state.backup_mutex);
++        return;
++    }
++
++    CoCtxData waker = {
++        .co = qemu_coroutine_self(),
++        .ctx = qemu_get_current_aio_context(),
++        .data = &success,
++    };
++    aio_bh_schedule_oneshot(waker.ctx, backup_access_teardown_bh, &waker);
++    qemu_coroutine_yield();
++
++    qemu_co_mutex_unlock(&backup_state.backup_mutex);
++    return;
++}
++
+ UuidInfo coroutine_fn *qmp_backup(
+     const char *backup_file,
+     const char *password,
+@@ -1068,6 +1499,7 @@ UuidInfo coroutine_fn *qmp_backup(
+             }
+ 
+             di->dev_id = dev_id;
++            di->bitmap_action = action;
+ 
+             PBSBitmapInfo *info = g_malloc(sizeof(*info));
+             info->drive = g_strdup(di->device_name);
+@@ -1119,7 +1551,7 @@ UuidInfo coroutine_fn *qmp_backup(
+         }
+     }
+     /* initialize global backup_state now */
+-    initialize_backup_state_stat(backup_file, uuid, total);
++    initialize_backup_state_stat(backup_file, &uuid, total, true);
+     char *uuid_str = g_strdup(backup_state.stat.uuid_str);
+ 
+     qemu_mutex_unlock(&backup_state.stat.lock);
+@@ -1298,5 +1730,6 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
+     ret->pbs_masterkey = true;
+     ret->backup_max_workers = true;
+     ret->backup_fleecing = true;
++    ret->backup_access_api = true;
+     return ret;
+ }
+diff --git a/pve-backup.h b/pve-backup.h
+new file mode 100644
+index 0000000000..9ebeef7c8f
+--- /dev/null
++++ b/pve-backup.h
+@@ -0,0 +1,16 @@
++/*
++ * Backup code used by Proxmox VE
++ *
++ * Copyright (C) Proxmox Server Solutions
++ *
++ * This work is licensed under the terms of the GNU GPL, version 2 or later.
++ * See the COPYING file in the top-level directory.
++ *
++ */
++
++#ifndef PVE_BACKUP_H
++#define PVE_BACKUP_H
++
++void backup_access_teardown(bool success);
++
++#endif /* PVE_BACKUP_H */
+diff --git a/qapi/block-core.json b/qapi/block-core.json
+index 9bdcfa31ea..2fb51215f2 100644
+--- a/qapi/block-core.json
++++ b/qapi/block-core.json
+@@ -1023,6 +1023,9 @@
+ #
+ # @pbs-library-version: Running version of libproxmox-backup-qemu0 library.
+ #
++# @backup-access-api: Whether backup access API for external providers is
++#     supported or not.
++#
+ # @backup-fleecing: Whether backup fleecing is supported or not.
+ #
+ # @backup-max-workers: Whether the 'max-workers' @BackupPerf setting is
+@@ -1036,6 +1039,7 @@
+             'pbs-dirty-bitmap-migration': 'bool',
+             'pbs-masterkey': 'bool',
+             'pbs-library-version': 'str',
++            'backup-access-api': 'bool',
+             'backup-fleecing': 'bool',
+             'backup-max-workers': 'bool' } }
+ 
+@@ -1067,9 +1071,16 @@
+ #           base snapshot did not match the base given for the current job or
+ #           the crypt mode has changed.
+ #
++# @missing-recreated: A bitmap for incremental backup was expected to be
++#     present, but was missing and thus got recreated. For example, this can
++#     happen if the drive was re-attached or if the bitmap was deleted for some
++#     other reason. PBS does not currently keep track of this; the backup-access
++#     mechanism does.
++#
+ ##
+ { 'enum': 'PBSBitmapAction',
+-  'data': ['not-used', 'not-used-removed', 'new', 'used', 'invalid'] }
++  'data': ['not-used', 'not-used-removed', 'new', 'used', 'invalid',
++           'missing-recreated'] }
+ 
+ ##
+ # @PBSBitmapInfo:
+@@ -1102,6 +1113,92 @@
+ ##
+ { 'command': 'query-pbs-bitmap-info', 'returns': ['PBSBitmapInfo'] }
+ 
++##
++# @BackupAccessInfo:
++#
++# Info associated with a snapshot access for backup.  For more information about
++# the bitmap see @BackupAccessBitmapMode.
++#
++# @node-name: the block node name of the snapshot-access node.
++#
++# @device: the device on top of which the snapshot access was created.
++#
++# @size: the size of the block device in bytes.
++#
++# @bitmap-node-name: the block node name the dirty bitmap is associated with.
++#
++# @bitmap-name: the name of the dirty bitmap associated with the backup access.
++#
++# @bitmap-action: the action taken on the dirty bitmap.
++#
++##
++{ 'struct': 'BackupAccessInfo',
++  'data': { 'node-name': 'str', 'device': 'str', 'size': 'size',
++            '*bitmap-node-name': 'str', '*bitmap-name': 'str',
++            '*bitmap-action': 'PBSBitmapAction' } }
++
++##
++# @BackupAccessSourceDevice:
++#
++# Source block device information for creating a backup access.
++#
++# @device: the block device name.
++#
++# @bitmap-mode: controls whether the bitmap should be reused, recreated,
++#     or not used. The default is to not use a bitmap.
++#
++##
++{ 'struct': 'BackupAccessSourceDevice',
++  'data': { 'device': 'str', '*bitmap-mode': 'BackupAccessSetupBitmapMode' } }
++
++##
++# @BackupAccessSetupBitmapMode:
++#
++# How to setup a bitmap for a device for @backup-access-setup.
++#
++# @none: do not use a bitmap. Removes an existing bitmap if present.
++#
++# @new: create and use a new bitmap.
++#
++# @use: try to re-use an existing bitmap. Create a new one if it doesn't exist.
++##
++{ 'enum': 'BackupAccessSetupBitmapMode',
++  'data': ['none', 'new', 'use' ] }
++
++##
++# @backup-access-setup:
++#
++# Set up snapshot access to VM drives for an external backup provider.  No other
++# backup or backup access can be done before tearing down the backup access.
++#
++# @target-id: the unique ID of the backup target.
++#
++# @devices: list of devices for which to create the backup access.  Also
++#     controls whether to use/create a bitmap for the device.  Check the
++#     @bitmap-action in the result to see what action was actually taken for the
++#     bitmap.  Each target controls its own bitmaps.
++#
++# Returns: a list of @BackupAccessInfo, one for each device.
++#
++##
++{ 'command': 'backup-access-setup',
++  'data': { 'target-id': 'str', 'devices': [ 'BackupAccessSourceDevice' ] },
++  'returns': [ 'BackupAccessInfo' ], 'coroutine': true }
++
++##
++# @backup-access-teardown:
++#
++# Tear down previously set up snapshot access for the given target.
++#
++# @target-id: the ID of the backup target.
++#
++# @success: whether the backup done by the external provider was successful.
++#
++##
++{ 'command': 'backup-access-teardown',
++  'data': { 'target-id': 'str', 'success': 'bool' },
++  'coroutine': true }
++
+ ##
+ # @BlockDeviceTimedStats:
+ #
+diff --git a/system/runstate.c b/system/runstate.c
+index 272801d307..cf775213bd 100644
+--- a/system/runstate.c
++++ b/system/runstate.c
+@@ -60,6 +60,7 @@
+ #include "system/system.h"
+ #include "system/tpm.h"
+ #include "trace.h"
++#include "pve-backup.h"
+ 
+ static NotifierList exit_notifiers =
+     NOTIFIER_LIST_INITIALIZER(exit_notifiers);
+@@ -921,6 +922,11 @@ void qemu_cleanup(int status)
+      * requests happening from here on anyway.
+      */
+     bdrv_drain_all_begin();
++    /*
++     * The backup access is set up by a QMP command, but is neither owned by a monitor nor
++     * associated to a BlockBackend. Need to tear it down manually here.
++     */
++    backup_access_teardown(false);
+     job_cancel_sync_all();
+     bdrv_close_all();
+ 
diff --git a/debian/patches/pve/0058-savevm-async-improve-setting-state-of-snapshot-opera.patch b/debian/patches/pve/0058-savevm-async-improve-setting-state-of-snapshot-opera.patch
deleted file mode 100644
index 8751ede..0000000
--- a/debian/patches/pve/0058-savevm-async-improve-setting-state-of-snapshot-opera.patch
+++ /dev/null
@@ -1,81 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Mon, 31 Mar 2025 16:55:02 +0200
-Subject: [PATCH] savevm-async: improve setting state of snapshot operation in
- savevm-end handler
-
-One of the callers of wait_for_close_co() already sets the state to
-SAVE_STATE_DONE before, but that is not fully correct, because at that
-moment, the operation is not fully done. In particular, if closing the
-target later fails, the state would even be set to SAVE_STATE_ERROR
-afterwards. DONE -> ERROR is not a valid state transition. Although,
-it should not matter in practice as long as the relevant QMP commands
-are sequential.
-
-The other caller does not set the state and so there seems to be a
-race that could lead to the state not getting set at all. This is
-because before this commit, the wait_for_close_co() function could
-return early when there is no target file anymore. This can only
-happen when canceling and needs to happen right around the time when
-the snapshot is already finishing and closing the target.
-
-Simply avoid the early return and always set the state within the
-wait_for_close_co() function rather than at the call site.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
----
- migration/savevm-async.c | 33 +++++++++++++++------------------
- 1 file changed, 15 insertions(+), 18 deletions(-)
-
-diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index fbcf74f9e2..2163bf50b1 100644
---- a/migration/savevm-async.c
-+++ b/migration/savevm-async.c
-@@ -451,23 +451,22 @@ static void coroutine_fn wait_for_close_co(void *opaque)
- {
-     int64_t timeout;
- 
--    if (!snap_state.target) {
--        DPRINTF("savevm-end: no target file open\n");
--        return;
--    }
--
--    /* wait until cleanup is done before returning, this ensures that after this
--     * call exits the statefile will be closed and can be removed immediately */
--    DPRINTF("savevm-end: waiting for cleanup\n");
--    timeout = 30L * 1000 * 1000 * 1000;
--    qemu_co_sleep_ns_wakeable(&snap_state.target_close_wait,
--                              QEMU_CLOCK_REALTIME, timeout);
-     if (snap_state.target) {
--        save_snapshot_error("timeout waiting for target file close in "
--                            "qmp_savevm_end");
--        /* we cannot assume the snapshot finished in this case, so leave the
--         * state alone - caller has to figure something out */
--        return;
-+        /* wait until cleanup is done before returning, this ensures that after this
-+         * call exits the statefile will be closed and can be removed immediately */
-+        DPRINTF("savevm-end: waiting for cleanup\n");
-+        timeout = 30L * 1000 * 1000 * 1000;
-+        qemu_co_sleep_ns_wakeable(&snap_state.target_close_wait,
-+                                  QEMU_CLOCK_REALTIME, timeout);
-+        if (snap_state.target) {
-+            save_snapshot_error("timeout waiting for target file close in "
-+                                "qmp_savevm_end");
-+            /* we cannot assume the snapshot finished in this case, so leave the
-+             * state alone - caller has to figure something out */
-+            return;
-+        }
-+    } else {
-+        DPRINTF("savevm-end: no target file open\n");
-     }
- 
-     // File closed and no other error, so ensure next snapshot can be started.
-@@ -498,8 +497,6 @@ void qmp_savevm_end(Error **errp)
-         snap_state.saved_vm_running = false;
-     }
- 
--    snap_state.state = SAVE_STATE_DONE;
--
-     qemu_coroutine_enter(wait_for_close);
- }
- 
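The rule the patch enforces can be stated compactly: SAVE_STATE_DONE is
terminal for a snapshot operation, so a later error may never demote it. A
minimal sketch of that invariant, assuming the SAVE_STATE_* constants from
migration/savevm-async.c (the helper itself is illustrative, not part of the
patch):

    /* Illustrative only: the transition rule described above. */
    static bool save_state_transition_valid(int from, int to)
    {
        /* Once the state file is closed successfully, a later failure
         * must not turn DONE back into ERROR. */
        return !(from == SAVE_STATE_DONE && to == SAVE_STATE_ERROR);
    }
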
diff --git a/debian/patches/pve/0059-savevm-async-rename-saved_vm_running-to-vm_needs_sta.patch b/debian/patches/pve/0059-savevm-async-rename-saved_vm_running-to-vm_needs_sta.patch
deleted file mode 100644
index 706089c..0000000
--- a/debian/patches/pve/0059-savevm-async-rename-saved_vm_running-to-vm_needs_sta.patch
+++ /dev/null
@@ -1,71 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Mon, 31 Mar 2025 16:55:03 +0200
-Subject: [PATCH] savevm-async: rename saved_vm_running to vm_needs_start
-
-This is what the variable actually expresses. Otherwise, setting it
-to false after starting the VM doesn't make sense.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
----
- migration/savevm-async.c | 16 ++++++++--------
- 1 file changed, 8 insertions(+), 8 deletions(-)
-
-diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index 2163bf50b1..657a4ec9f6 100644
---- a/migration/savevm-async.c
-+++ b/migration/savevm-async.c
-@@ -52,7 +52,7 @@ static struct SnapshotState {
-     int state;
-     Error *error;
-     Error *blocker;
--    int saved_vm_running;
-+    int vm_needs_start;
-     QEMUFile *file;
-     int64_t total_time;
-     QEMUBH *finalize_bh;
-@@ -225,9 +225,9 @@ static void process_savevm_finalize(void *opaque)
-         save_snapshot_error("process_savevm_cleanup: invalid state: %d",
-                             snap_state.state);
-     }
--    if (snap_state.saved_vm_running) {
-+    if (snap_state.vm_needs_start) {
-         vm_start();
--        snap_state.saved_vm_running = false;
-+        snap_state.vm_needs_start = false;
-     }
- 
-     DPRINTF("timing: process_savevm_finalize (full) took %ld ms\n",
-@@ -353,7 +353,7 @@ void qmp_savevm_start(const char *statefile, Error **errp)
-     }
- 
-     /* initialize snapshot info */
--    snap_state.saved_vm_running = runstate_is_running();
-+    snap_state.vm_needs_start = runstate_is_running();
-     snap_state.bs_pos = 0;
-     snap_state.total_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
-     snap_state.blocker = NULL;
-@@ -441,9 +441,9 @@ restart:
- 
-     save_snapshot_error("setup failed");
- 
--    if (snap_state.saved_vm_running) {
-+    if (snap_state.vm_needs_start) {
-         vm_start();
--        snap_state.saved_vm_running = false;
-+        snap_state.vm_needs_start = false;
-     }
- }
- 
-@@ -492,9 +492,9 @@ void qmp_savevm_end(Error **errp)
-         return;
-     }
- 
--    if (snap_state.saved_vm_running) {
-+    if (snap_state.vm_needs_start) {
-         vm_start();
--        snap_state.saved_vm_running = false;
-+        snap_state.vm_needs_start = false;
-     }
- 
-     qemu_coroutine_enter(wait_for_close);
diff --git a/debian/patches/pve/0060-savevm-async-improve-runstate-preservation-cleanup-e.patch b/debian/patches/pve/0060-savevm-async-improve-runstate-preservation-cleanup-e.patch
deleted file mode 100644
index 8afd450..0000000
--- a/debian/patches/pve/0060-savevm-async-improve-runstate-preservation-cleanup-e.patch
+++ /dev/null
@@ -1,120 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Mon, 31 Mar 2025 16:55:04 +0200
-Subject: [PATCH] savevm-async: improve runstate preservation, cleanup error
- handling
-
-Determine if VM needs to be started after finishing right before
-actually stopping the VM instead of at the beginning.
-
-In qmp_savevm_start(), the only path stopping the VM returns right
-afterwards, so there is no need for the vm_start() handling after
-errors.
-
-Lastly, improve the code style for checking whether migrate_init()
-failed by explicitly comparing against 0.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-[WB: squashed error handling commits, rename goto branch instead of
-     inlining it]
-Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
----
- migration/savevm-async.c | 25 ++++++++++---------------
- 1 file changed, 10 insertions(+), 15 deletions(-)
-
-diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index 657a4ec9f6..d70ae737bd 100644
---- a/migration/savevm-async.c
-+++ b/migration/savevm-async.c
-@@ -179,6 +179,7 @@ static void process_savevm_finalize(void *opaque)
-      */
-     blk_set_aio_context(snap_state.target, qemu_get_aio_context(), NULL);
- 
-+    snap_state.vm_needs_start = runstate_is_running();
-     ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
-     if (ret < 0) {
-         save_snapshot_error("vm_stop_force_state error %d", ret);
-@@ -353,7 +354,6 @@ void qmp_savevm_start(const char *statefile, Error **errp)
-     }
- 
-     /* initialize snapshot info */
--    snap_state.vm_needs_start = runstate_is_running();
-     snap_state.bs_pos = 0;
-     snap_state.total_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
-     snap_state.blocker = NULL;
-@@ -365,13 +365,14 @@ void qmp_savevm_start(const char *statefile, Error **errp)
-     }
- 
-     if (!statefile) {
-+        snap_state.vm_needs_start = runstate_is_running();
-         vm_stop(RUN_STATE_SAVE_VM);
-         snap_state.state = SAVE_STATE_COMPLETED;
-         return;
-     }
- 
-     if (qemu_savevm_state_blocked(errp)) {
--        return;
-+        goto fail;
-     }
- 
-     /* Open the image */
-@@ -381,12 +382,12 @@ void qmp_savevm_start(const char *statefile, Error **errp)
-     snap_state.target = blk_new_open(statefile, NULL, options, bdrv_oflags, &local_err);
-     if (!snap_state.target) {
-         error_setg(errp, "failed to open '%s'", statefile);
--        goto restart;
-+        goto fail;
-     }
-     target_bs = blk_bs(snap_state.target);
-     if (!target_bs) {
-         error_setg(errp, "failed to open '%s' - no block driver state", statefile);
--        goto restart;
-+        goto fail;
-     }
- 
-     QIOChannel *ioc = QIO_CHANNEL(qio_channel_savevm_async_new(snap_state.target,
-@@ -395,7 +396,7 @@ void qmp_savevm_start(const char *statefile, Error **errp)
- 
-     if (!snap_state.file) {
-         error_setg(errp, "failed to open '%s'", statefile);
--        goto restart;
-+        goto fail;
-     }
- 
-     /*
-@@ -403,8 +404,8 @@ void qmp_savevm_start(const char *statefile, Error **errp)
-      * State is cleared in process_savevm_co, but has to be initialized
-      * here (blocking main thread, from QMP) to avoid race conditions.
-      */
--    if (migrate_init(ms, errp)) {
--        return;
-+    if (migrate_init(ms, errp) != 0) {
-+        goto fail;
-     }
-     memset(&mig_stats, 0, sizeof(mig_stats));
-     ms->to_dst_file = snap_state.file;
-@@ -419,7 +420,7 @@ void qmp_savevm_start(const char *statefile, Error **errp)
-     if (ret != 0) {
-         error_setg_errno(errp, -ret, "savevm state setup failed: %s",
-                          local_err ? error_get_pretty(local_err) : "unknown error");
--        return;
-+        goto fail;
-     }
- 
-     /* Async processing from here on out happens in iohandler context, so let
-@@ -437,14 +438,8 @@ void qmp_savevm_start(const char *statefile, Error **errp)
- 
-     return;
- 
--restart:
--
-+fail:
-     save_snapshot_error("setup failed");
--
--    if (snap_state.vm_needs_start) {
--        vm_start();
--        snap_state.vm_needs_start = false;
--    }
- }
- 
- static void coroutine_fn wait_for_close_co(void *opaque)
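The consolidated error path is the classic single-exit idiom; a stripped-down
illustration with hypothetical helper names (open_target, init_migration and
record_setup_error stand in for the real calls):

    static int do_setup(Error **errp)
    {
        /* Explicit comparison against 0, as the patch adopts. */
        if (open_target(errp) != 0) {
            goto fail;
        }
        if (init_migration(errp) != 0) {
            goto fail;
        }
        return 0;

    fail:
        /* One place to flag the failure, instead of per-branch cleanup. */
        record_setup_error();
        return -1;
    }
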
diff --git a/debian/patches/pve/0061-savevm-async-use-dedicated-iothread-for-state-file.patch b/debian/patches/pve/0061-savevm-async-use-dedicated-iothread-for-state-file.patch
deleted file mode 100644
index 1e2c9e1..0000000
--- a/debian/patches/pve/0061-savevm-async-use-dedicated-iothread-for-state-file.patch
+++ /dev/null
@@ -1,177 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Mon, 31 Mar 2025 16:55:06 +0200
-Subject: [PATCH] savevm-async: use dedicated iothread for state file
-
-Having the state file be in the iohandler context means that a
-blk_drain_all() call in the main thread or vCPU thread that happens
-while the snapshot is running will result in a deadlock.
-
-For example, the main thread might be stuck in:
-
-> 0  0x00007300ac9552d6 in __ppoll (fds=0x64bd5a411a50, nfds=2, timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:42
-> 1  0x000064bd51af3cad in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/poll2.h:64
-> 2  0x000064bd51ad8799 in fdmon_poll_wait (ctx=0x64bd58d968a0, ready_list=0x7ffcfcc15558, timeout=-1) at ../util/fdmon-poll.c:79
-> 3  0x000064bd51ad7c3d in aio_poll (ctx=0x64bd58d968a0, blocking=blocking@entry=true) at ../util/aio-posix.c:671
-> 4  0x000064bd519a0b5d in bdrv_drain_all_begin () at ../block/io.c:531
-> 5  bdrv_drain_all_begin () at ../block/io.c:510
-> 6  0x000064bd519943c4 in blk_drain_all () at ../block/block-backend.c:2085
-> 7  0x000064bd5160fc5a in virtio_scsi_dataplane_stop (vdev=0x64bd5a215190) at ../hw/scsi/virtio-scsi-dataplane.c:213
-> 8  0x000064bd51664e90 in virtio_bus_stop_ioeventfd (bus=0x64bd5a215110) at ../hw/virtio/virtio-bus.c:259
-> 9  0x000064bd5166511b in virtio_bus_stop_ioeventfd (bus=<optimized out>) at ../hw/virtio/virtio-bus.c:251
-> 10 virtio_bus_reset (bus=<optimized out>) at ../hw/virtio/virtio-bus.c:107
-> 11 0x000064bd51667431 in virtio_pci_reset (qdev=<optimized out>) at ../hw/virtio/virtio-pci.c:2296
-...
-> 34 0x000064bd517aa951 in pc_machine_reset (machine=<optimized out>, type=<optimized out>) at ../hw/i386/pc.c:1722
-> 35 0x000064bd516aa4c4 in qemu_system_reset (reason=reason@entry=SHUTDOWN_CAUSE_GUEST_RESET) at ../system/runstate.c:525
-> 36 0x000064bd516aaeb9 in main_loop_should_exit (status=<synthetic pointer>) at ../system/runstate.c:801
-> 37 qemu_main_loop () at ../system/runstate.c:834
-
-which is in block/io.c:
-
-> /* Now poll the in-flight requests */
-> AIO_WAIT_WHILE_UNLOCKED(NULL, bdrv_drain_all_poll());
-
-The working theory is that the deadlock happens because the IO is issued
-from the process_savevm_co() coroutine, which doesn't get scheduled
-again to complete in-flight requests when the main thread is stuck
-there polling. The main thread itself is the one that would need to
-schedule it. In case of a vCPU triggering the VirtIO SCSI dataplane
-stop, which happens during (Linux) boot, the vCPU thread will hold the
-big QEMU lock (BQL) blocking the main thread from making progress
-scheduling the process_savevm_co() coroutine.
-
-This change should also help in general to reduce load on the main
-thread and to avoid it getting stuck on IO, i.e. the same benefits as
-using a dedicated IO thread for regular drives. This is particularly
-interesting when the VM state storage is a network storage like NFS.
-
-With some luck, it could also help with bug #6262 [0]. The failure
-there happens while issuing/right after the savevm-start QMP command,
-so the most likely culprit is the process_savevm_co() coroutine that was
-previously scheduled to the iohandler context. Likely someone polls
-the iohandler context and wants to enter the already scheduled
-coroutine, leading to the abort():
-> qemu_aio_coroutine_enter: Co-routine was already scheduled in 'aio_co_schedule'
-With a dedicated iothread, there hopefully is no such race.
-
-The comment above querying the pending bytes wrongly talked about the
-"iothread lock", but should've been "iohandler lock". This was even
-renamed to BQL (big QEMU lock) a few releases ago. Even if that was
-not a typo to begin with, there are no AioContext locks anymore.
-
-[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=6262
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-[WB: update to the changed error handling in the previous commit]
-Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
----
- migration/savevm-async.c | 41 ++++++++++++++++++++++++++++------------
- 1 file changed, 29 insertions(+), 12 deletions(-)
-
-diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index d70ae737bd..2236f32784 100644
---- a/migration/savevm-async.c
-+++ b/migration/savevm-async.c
-@@ -58,6 +58,7 @@ static struct SnapshotState {
-     QEMUBH *finalize_bh;
-     Coroutine *co;
-     QemuCoSleep target_close_wait;
-+    IOThread *iothread;
- } snap_state;
- 
- static bool savevm_aborted(void)
-@@ -257,16 +258,13 @@ static void coroutine_fn process_savevm_co(void *opaque)
-         uint64_t threshold = 400 * 1000;
- 
-         /*
--         * pending_{estimate,exact} are expected to be called without iothread
--         * lock. Similar to what is done in migration.c, call the exact variant
--         * only once pend_precopy in the estimate is below the threshold.
-+         * Similar to what is done in migration.c, call the exact variant only
-+         * once pend_precopy in the estimate is below the threshold.
-          */
--        bql_unlock();
-         qemu_savevm_state_pending_estimate(&pend_precopy, &pend_postcopy);
-         if (pend_precopy <= threshold) {
-             qemu_savevm_state_pending_exact(&pend_precopy, &pend_postcopy);
-         }
--        bql_lock();
-         pending_size = pend_precopy + pend_postcopy;
- 
-         /*
-@@ -333,11 +331,17 @@ static void coroutine_fn process_savevm_co(void *opaque)
-     qemu_bh_schedule(snap_state.finalize_bh);
- }
- 
-+static void savevm_cleanup_iothread(void) {
-+    if (snap_state.iothread) {
-+        iothread_destroy(snap_state.iothread);
-+        snap_state.iothread = NULL;
-+    }
-+}
-+
- void qmp_savevm_start(const char *statefile, Error **errp)
- {
-     Error *local_err = NULL;
-     MigrationState *ms = migrate_get_current();
--    AioContext *iohandler_ctx = iohandler_get_aio_context();
-     BlockDriverState *target_bs = NULL;
-     int ret = 0;
- 
-@@ -375,6 +379,19 @@ void qmp_savevm_start(const char *statefile, Error **errp)
-         goto fail;
-     }
- 
-+    if (snap_state.iothread) {
-+        /* This is not expected, so warn about it, but there is no point in creating a new one. */
-+        warn_report("iothread for snapshot already exists - re-using");
-+    } else {
-+        snap_state.iothread =
-+            iothread_create("__proxmox_savevm_async_iothread__", &local_err);
-+        if (!snap_state.iothread) {
-+            error_setg(errp, "creating iothread failed: %s",
-+                       local_err ? error_get_pretty(local_err) : "unknown error");
-+            goto fail;
-+        }
-+    }
-+
-     /* Open the image */
-     QDict *options = NULL;
-     options = qdict_new();
-@@ -423,22 +440,20 @@ void qmp_savevm_start(const char *statefile, Error **errp)
-         goto fail;
-     }
- 
--    /* Async processing from here on out happens in iohandler context, so let
--     * the target bdrv have its home there.
--     */
--    ret = blk_set_aio_context(snap_state.target, iohandler_ctx, &local_err);
-+    ret = blk_set_aio_context(snap_state.target, snap_state.iothread->ctx, &local_err);
-     if (ret != 0) {
--        warn_report("failed to set iohandler context for VM state target: %s %s",
-+        warn_report("failed to set iothread context for VM state target: %s %s",
-                     local_err ? error_get_pretty(local_err) : "unknown error",
-                     strerror(-ret));
-     }
- 
-     snap_state.co = qemu_coroutine_create(&process_savevm_co, NULL);
--    aio_co_schedule(iohandler_ctx, snap_state.co);
-+    aio_co_schedule(snap_state.iothread->ctx, snap_state.co);
- 
-     return;
- 
- fail:
-+    savevm_cleanup_iothread();
-     save_snapshot_error("setup failed");
- }
- 
-@@ -464,6 +479,8 @@ static void coroutine_fn wait_for_close_co(void *opaque)
-         DPRINTF("savevm-end: no target file open\n");
-     }
- 
-+    savevm_cleanup_iothread();
-+
-     // File closed and no other error, so ensure next snapshot can be started.
-     if (snap_state.state != SAVE_STATE_ERROR) {
-         snap_state.state = SAVE_STATE_DONE;
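Condensed, the lifecycle the patch introduces is: create a dedicated IOThread
at setup, pin the state-file block backend to its AioContext, and destroy the
thread again during cleanup. A sketch using the same QEMU APIs the patch
itself calls (iothread_create(), blk_set_aio_context(), iothread_destroy());
it is not standalone-buildable outside the QEMU tree and trims error
reporting:

    static IOThread *state_iothread;

    static int attach_state_target(BlockBackend *target, Error **errp)
    {
        state_iothread =
            iothread_create("__proxmox_savevm_async_iothread__", errp);
        if (!state_iothread) {
            return -1;
        }
        /* State-file IO now completes even while the main thread or a
         * vCPU thread is blocked polling in a drained section. */
        return blk_set_aio_context(target, state_iothread->ctx, errp);
    }

    static void detach_state_target(void)
    {
        if (state_iothread) {
            iothread_destroy(state_iothread);
            state_iothread = NULL;
        }
    }
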
diff --git a/debian/patches/pve/0062-savevm-async-treat-failure-to-set-iothread-context-a.patch b/debian/patches/pve/0062-savevm-async-treat-failure-to-set-iothread-context-a.patch
deleted file mode 100644
index 1def7ac..0000000
--- a/debian/patches/pve/0062-savevm-async-treat-failure-to-set-iothread-context-a.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Mon, 31 Mar 2025 16:55:07 +0200
-Subject: [PATCH] savevm-async: treat failure to set iothread context as a hard
- failure
-
-This is not expected to ever fail and there might be assumptions about
-having the expected context down the line.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-[WB: update to changed error handling]
-Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
----
- migration/savevm-async.c | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/migration/savevm-async.c b/migration/savevm-async.c
-index 2236f32784..730b815494 100644
---- a/migration/savevm-async.c
-+++ b/migration/savevm-async.c
-@@ -442,9 +442,9 @@ void qmp_savevm_start(const char *statefile, Error **errp)
- 
-     ret = blk_set_aio_context(snap_state.target, snap_state.iothread->ctx, &local_err);
-     if (ret != 0) {
--        warn_report("failed to set iothread context for VM state target: %s %s",
--                    local_err ? error_get_pretty(local_err) : "unknown error",
--                    strerror(-ret));
-+        error_setg_errno(errp, -ret, "failed to set iothread context for VM state target: %s",
-+                         local_err ? error_get_pretty(local_err) : "unknown error");
-+        goto fail;
-     }
- 
-     snap_state.co = qemu_coroutine_create(&process_savevm_co, NULL);
diff --git a/debian/patches/pve/0063-PVE-backup-clean-up-directly-in-setup_snapshot_acces.patch b/debian/patches/pve/0063-PVE-backup-clean-up-directly-in-setup_snapshot_acces.patch
deleted file mode 100644
index a5582bd..0000000
--- a/debian/patches/pve/0063-PVE-backup-clean-up-directly-in-setup_snapshot_acces.patch
+++ /dev/null
@@ -1,41 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Thu, 3 Apr 2025 14:30:41 +0200
-Subject: [PATCH] PVE backup: clean up directly in setup_snapshot_access() when
- it fails
-
-The only thing that might need to be cleaned up after
-setup_snapshot_access() failed is dropping the cbw filter. Do so in
-the single branch it matters inside setup_snapshot_access() itself.
-This avoids the need that callers of setup_snapshot_access() use
-cleanup_snapshot_access() when the call failed.
-
-Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
----
- pve-backup.c | 4 +++-
- 1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index b411d58a9a..9b66788ab5 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -576,6 +576,9 @@ static int setup_snapshot_access(PVEBackupDevInfo *di, Error **errp)
-     di->fleecing.snapshot_access =
-         bdrv_open(NULL, NULL, snapshot_access_opts, BDRV_O_RDWR | BDRV_O_UNMAP, &local_err);
-     if (!di->fleecing.snapshot_access) {
-+        bdrv_cbw_drop(di->fleecing.cbw);
-+        di->fleecing.cbw = NULL;
-+
-         error_setg(errp, "setting up snapshot access for fleecing failed: %s",
-                    local_err ? error_get_pretty(local_err) : "unknown error");
-         return -1;
-@@ -629,7 +632,6 @@ static void create_backup_jobs_bh(void *opaque) {
-                 error_setg(errp, "%s - setting up snapshot access for fleecing failed: %s",
-                            di->device_name,
-                            local_err ? error_get_pretty(local_err) : "unknown error");
--                cleanup_snapshot_access(di);
-                 bdrv_drained_end(di->bs);
-                 break;
-             }
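The ownership rule this establishes generalizes: a setup function that fails
must undo its own partial work, so callers see either full success or no
change, and never pair a failed setup with a cleanup call. A generic sketch
with hypothetical names (acquire_a/acquire_b stand in for the cbw filter and
the snapshot-access node):

    static int setup_resources(struct resources *r, Error **errp)
    {
        r->a = acquire_a();
        if (!r->a) {
            error_setg(errp, "acquiring a failed");
            return -1;
        }
        r->b = acquire_b();
        if (!r->b) {
            /* Clean up directly at the failure site. */
            release_a(r->a);
            r->a = NULL;
            error_setg(errp, "acquiring b failed");
            return -1;
        }
        return 0;
    }
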
diff --git a/debian/patches/pve/0068-PVE-backup-implement-backup-access-setup-and-teardow.patch b/debian/patches/pve/0068-PVE-backup-implement-backup-access-setup-and-teardow.patch
deleted file mode 100644
index 9e665ca..0000000
--- a/debian/patches/pve/0068-PVE-backup-implement-backup-access-setup-and-teardow.patch
+++ /dev/null
@@ -1,495 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Thu, 3 Apr 2025 14:30:46 +0200
-Subject: [PATCH] PVE backup: implement backup access setup and teardown API
- for external providers
-
-For external backup providers, the state of the VM's disk images at
-the time the backup is started is preserved via a snapshot-access
-block node. Old data is moved to the fleecing image when new guest
-writes come in. The snapshot-access block node, as well as the
-associated bitmap in case of incremental backup, will be exported via
-NBD to the external provider. The NBD export will be done by the
-management layer, the missing functionality is setting up and tearing
-down the snapshot-access block nodes, which this patch adds.
-
-It is necessary to also set up fleecing for EFI and TPM disks, so that
-old data can be moved out of the way when a new guest write comes in.
-
-There can only be one regular backup or one active backup access at
-a time, because both require replacing the original block node of the
-drive. Thus the backup state is re-used, and checks are added to
-prohibit regular backup while snapshot access is active and vice
-versa.
-
-The block nodes added by the backup-access-setup QMP call are not
-tracked anywhere else (there is no job they are associated to like for
-regular backup). This requires adding a callback for teardown when
-QEMU exits, i.e. in qemu_cleanup(). Otherwise, there will be an
-assertion failure that the block graph is not empty when QEMU exits
-before the backup-access-teardown QMP command is called.
-
-The code for the qmp_backup_access_setup() was based on the existing
-qmp_backup() routine.
-
-The return value for the setup QMP command contains information about
-the snapshot-access block nodes that can be used by the management
-layer to set up the NBD exports.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
----
- pve-backup.c         | 264 ++++++++++++++++++++++++++++++++++++++++++-
- pve-backup.h         |  16 +++
- qapi/block-core.json |  49 ++++++++
- system/runstate.c    |   6 +
- 4 files changed, 329 insertions(+), 6 deletions(-)
- create mode 100644 pve-backup.h
-
-diff --git a/pve-backup.c b/pve-backup.c
-index bd81621d51..78ed6c980c 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -1,4 +1,5 @@
- #include "proxmox-backup-client.h"
-+#include "pve-backup.h"
- #include "vma.h"
- 
- #include "qemu/osdep.h"
-@@ -588,6 +589,36 @@ static int setup_snapshot_access(PVEBackupDevInfo *di, Error **errp)
-     return 0;
- }
- 
-+static void setup_all_snapshot_access_bh(void *opaque)
-+{
-+    assert(!qemu_in_coroutine());
-+
-+    CoCtxData *data = (CoCtxData*)opaque;
-+    Error **errp = (Error**)data->data;
-+
-+    Error *local_err = NULL;
-+
-+    GList *l = backup_state.di_list;
-+    while (l) {
-+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-+        l = g_list_next(l);
-+
-+        bdrv_drained_begin(di->bs);
-+
-+        if (setup_snapshot_access(di, &local_err) < 0) {
-+            bdrv_drained_end(di->bs);
-+            error_setg(errp, "%s - setting up snapshot access failed: %s", di->device_name,
-+                       local_err ? error_get_pretty(local_err) : "unknown error");
-+            break;
-+        }
-+
-+        bdrv_drained_end(di->bs);
-+    }
-+
-+    /* return */
-+    aio_co_enter(data->ctx, data->co);
-+}
-+
- /*
-  * backup_job_create can *not* be run from a coroutine, so this can't either.
-  * The caller is responsible that backup_mutex is held nonetheless.
-@@ -724,6 +755,11 @@ static bool fleecing_no_efi_tpm(const char *device_id)
-     return strncmp(device_id, "drive-efidisk", 13) && strncmp(device_id, "drive-tpmstate", 14);
- }
- 
-+static bool fleecing_all(const char *device_id)
-+{
-+    return true;
-+}
-+
- /*
-  * Returns a list of device infos, which needs to be freed by the caller. In
-  * case of an error, errp will be set, but the returned value might still be a
-@@ -839,8 +875,9 @@ static void clear_backup_state_bitmap_list(void) {
-  */
- static void initialize_backup_state_stat(
-     const char *backup_file,
--    uuid_t uuid,
--    size_t total)
-+    uuid_t *uuid,
-+    size_t total,
-+    bool starting)
- {
-     if (backup_state.stat.error) {
-         error_free(backup_state.stat.error);
-@@ -855,15 +892,19 @@ static void initialize_backup_state_stat(
-     }
-     backup_state.stat.backup_file = g_strdup(backup_file);
- 
--    uuid_copy(backup_state.stat.uuid, uuid);
--    uuid_unparse_lower(uuid, backup_state.stat.uuid_str);
-+    if (uuid) {
-+        uuid_copy(backup_state.stat.uuid, *uuid);
-+        uuid_unparse_lower(*uuid, backup_state.stat.uuid_str);
-+    } else {
-+        backup_state.stat.uuid_str[0] = '\0';
-+    }
- 
-     backup_state.stat.total = total;
-     backup_state.stat.dirty = total - backup_state.stat.reused;
-     backup_state.stat.transferred = 0;
-     backup_state.stat.zero_bytes = 0;
-     backup_state.stat.finishing = false;
--    backup_state.stat.starting = true;
-+    backup_state.stat.starting = starting;
- }
- 
- /*
-@@ -876,6 +917,216 @@ static void backup_state_set_target_id(const char *target_id) {
-     backup_state.target_id = g_strdup(target_id);
- }
- 
-+BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-+    const char *target_id,
-+    const char *devlist,
-+    Error **errp)
-+{
-+    assert(qemu_in_coroutine());
-+
-+    qemu_co_mutex_lock(&backup_state.backup_mutex);
-+
-+    Error *local_err = NULL;
-+    GList *di_list = NULL;
-+    GList *l;
-+
-+    if (backup_state.di_list) {
-+        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
-+                  "previous backup for target '%s' not finished", backup_state.target_id);
-+        qemu_co_mutex_unlock(&backup_state.backup_mutex);
-+        return NULL;
-+    }
-+
-+    bdrv_graph_co_rdlock();
-+    di_list = get_device_info(devlist, fleecing_all, &local_err);
-+    bdrv_graph_co_rdunlock();
-+    if (local_err) {
-+        error_propagate(errp, local_err);
-+        goto err;
-+    }
-+    assert(di_list);
-+
-+    size_t total = 0;
-+
-+    l = di_list;
-+    while (l) {
-+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-+        l = g_list_next(l);
-+
-+        ssize_t size = bdrv_getlength(di->bs);
-+        if (size < 0) {
-+            error_setg_errno(errp, -size, "bdrv_getlength failed");
-+            goto err;
-+        }
-+        di->size = size;
-+        total += size;
-+
-+        di->completed_ret = INT_MAX;
-+    }
-+
-+    qemu_mutex_lock(&backup_state.stat.lock);
-+    backup_state.stat.reused = 0;
-+
-+    /* clear previous backup's bitmap_list */
-+    clear_backup_state_bitmap_list();
-+
-+    /* starting=false, because there is no associated QEMU job */
-+    initialize_backup_state_stat(NULL, NULL, total, false);
-+
-+    qemu_mutex_unlock(&backup_state.stat.lock);
-+
-+    backup_state_set_target_id(target_id);
-+
-+    backup_state.vmaw = NULL;
-+    backup_state.pbs = NULL;
-+
-+    backup_state.di_list = di_list;
-+
-+    /* Run setup_all_snapshot_access_bh outside of coroutine (in BH) but keep
-+     * backup_mutex locked. This is fine; a CoMutex can be held across yield
-+     * points, and we'll release it as soon as the BH reschedules us.
-+     */
-+    CoCtxData waker = {
-+        .co = qemu_coroutine_self(),
-+        .ctx = qemu_get_current_aio_context(),
-+        .data = &local_err,
-+    };
-+    aio_bh_schedule_oneshot(waker.ctx, setup_all_snapshot_access_bh, &waker);
-+    qemu_coroutine_yield();
-+
-+    if (local_err) {
-+        error_propagate(errp, local_err);
-+        goto err;
-+    }
-+
-+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
-+
-+    BackupAccessInfoList *bai_head = NULL, **p_bai_next = &bai_head;
-+
-+    l = di_list;
-+    while (l) {
-+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-+        l = g_list_next(l);
-+
-+        BackupAccessInfoList *info = g_malloc0(sizeof(*info));
-+        info->value = g_malloc0(sizeof(*info->value));
-+        info->value->node_name = g_strdup(bdrv_get_node_name(di->fleecing.snapshot_access));
-+        info->value->device = g_strdup(di->device_name);
-+        info->value->size = di->size;
-+
-+        *p_bai_next = info;
-+        p_bai_next = &info->next;
-+    }
-+
-+    return bai_head;
-+
-+err:
-+
-+    l = di_list;
-+    while (l) {
-+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-+        l = g_list_next(l);
-+
-+        g_free(di->device_name);
-+        di->device_name = NULL;
-+
-+        g_free(di);
-+    }
-+    g_list_free(di_list);
-+    backup_state.di_list = NULL;
-+
-+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
-+    return NULL;
-+}
-+
-+/*
-+ * Caller needs to hold the backup mutex or the BQL.
-+ */
-+void backup_access_teardown(void)
-+{
-+    GList *l = backup_state.di_list;
-+
-+    qemu_mutex_lock(&backup_state.stat.lock);
-+    backup_state.stat.finishing = true;
-+    qemu_mutex_unlock(&backup_state.stat.lock);
-+
-+    while (l) {
-+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-+        l = g_list_next(l);
-+
-+        if (di->fleecing.snapshot_access) {
-+            bdrv_unref(di->fleecing.snapshot_access);
-+            di->fleecing.snapshot_access = NULL;
-+        }
-+        if (di->fleecing.cbw) {
-+            bdrv_cbw_drop(di->fleecing.cbw);
-+            di->fleecing.cbw = NULL;
-+        }
-+
-+        g_free(di->device_name);
-+        di->device_name = NULL;
-+
-+        g_free(di);
-+    }
-+    g_list_free(backup_state.di_list);
-+    backup_state.di_list = NULL;
-+
-+    qemu_mutex_lock(&backup_state.stat.lock);
-+    backup_state.stat.end_time = time(NULL);
-+    backup_state.stat.finishing = false;
-+    qemu_mutex_unlock(&backup_state.stat.lock);
-+}
-+
-+// Not done in a coroutine, because bdrv_co_unref() and cbw_drop() would just spawn BHs anyway.
-+// Caller needs to hold the backup_state.backup_mutex lock
-+static void backup_access_teardown_bh(void *opaque)
-+{
-+    CoCtxData *data = (CoCtxData*)opaque;
-+
-+    backup_access_teardown();
-+
-+    /* return */
-+    aio_co_enter(data->ctx, data->co);
-+}
-+
-+void coroutine_fn qmp_backup_access_teardown(const char *target_id, Error **errp)
-+{
-+    assert(qemu_in_coroutine());
-+
-+    qemu_co_mutex_lock(&backup_state.backup_mutex);
-+
-+    if (!backup_state.target_id) { // nothing to do
-+        qemu_co_mutex_unlock(&backup_state.backup_mutex);
-+        return;
-+    }
-+
-+    /*
-+     * Continue with target_id == NULL, used by the callback registered for qemu_cleanup()
-+     */
-+    if (target_id && strcmp(backup_state.target_id, target_id)) {
-+        error_setg(errp, "cannot teardown backup access - got target %s instead of %s",
-+                   target_id, backup_state.target_id);
-+        qemu_co_mutex_unlock(&backup_state.backup_mutex);
-+        return;
-+    }
-+
-+    if (!strcmp(backup_state.target_id, "Proxmox VE")) {
-+        error_setg(errp, "cannot teardown backup access for PVE - use backup-cancel instead");
-+        qemu_co_mutex_unlock(&backup_state.backup_mutex);
-+        return;
-+    }
-+
-+    CoCtxData waker = {
-+        .co = qemu_coroutine_self(),
-+        .ctx = qemu_get_current_aio_context(),
-+    };
-+    aio_bh_schedule_oneshot(waker.ctx, backup_access_teardown_bh, &waker);
-+    qemu_coroutine_yield();
-+
-+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
-+    return;
-+}
-+
- UuidInfo coroutine_fn *qmp_backup(
-     const char *backup_file,
-     const char *password,
-@@ -1119,7 +1370,7 @@ UuidInfo coroutine_fn *qmp_backup(
-         }
-     }
-     /* initialize global backup_state now */
--    initialize_backup_state_stat(backup_file, uuid, total);
-+    initialize_backup_state_stat(backup_file, &uuid, total, true);
-     char *uuid_str = g_strdup(backup_state.stat.uuid_str);
- 
-     qemu_mutex_unlock(&backup_state.stat.lock);
-@@ -1298,5 +1549,6 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
-     ret->pbs_masterkey = true;
-     ret->backup_max_workers = true;
-     ret->backup_fleecing = true;
-+    ret->backup_access_api = true;
-     return ret;
- }
-diff --git a/pve-backup.h b/pve-backup.h
-new file mode 100644
-index 0000000000..4033bc848f
---- /dev/null
-+++ b/pve-backup.h
-@@ -0,0 +1,16 @@
-+/*
-+ * Backup code used by Proxmox VE
-+ *
-+ * Copyright (C) Proxmox Server Solutions
-+ *
-+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
-+ * See the COPYING file in the top-level directory.
-+ *
-+ */
-+
-+#ifndef PVE_BACKUP_H
-+#define PVE_BACKUP_H
-+
-+void backup_access_teardown(void);
-+
-+#endif /* PVE_BACKUP_H */
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 9bdcfa31ea..8d499650a8 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -1023,6 +1023,9 @@
- #
- # @pbs-library-version: Running version of libproxmox-backup-qemu0 library.
- #
-+# @backup-access-api: Whether backup access API for external providers is
-+#     supported or not.
-+#
- # @backup-fleecing: Whether backup fleecing is supported or not.
- #
- # @backup-max-workers: Whether the 'max-workers' @BackupPerf setting is
-@@ -1036,6 +1039,7 @@
-             'pbs-dirty-bitmap-migration': 'bool',
-             'pbs-masterkey': 'bool',
-             'pbs-library-version': 'str',
-+            'backup-access-api': 'bool',
-             'backup-fleecing': 'bool',
-             'backup-max-workers': 'bool' } }
- 
-@@ -1102,6 +1106,51 @@
- ##
- { 'command': 'query-pbs-bitmap-info', 'returns': ['PBSBitmapInfo'] }
- 
-+##
-+# @BackupAccessInfo:
-+#
-+# Info associated to a snapshot access for backup.  For more information about
-+# the bitmap see @BackupAccessBitmapMode.
-+#
-+# @node-name: the block node name of the snapshot-access node.
-+#
-+# @device: the device on top of which the snapshot access was created.
-+#
-+# @size: the size of the block device in bytes.
-+#
-+##
-+{ 'struct': 'BackupAccessInfo',
-+  'data': { 'node-name': 'str', 'device': 'str', 'size': 'size' } }
-+
-+##
-+# @backup-access-setup:
-+#
-+# Set up snapshot access to VM drives for an external backup provider.  No other
-+# backup or backup access can be done before tearing down the backup access.
-+#
-+# @target-id: the unique ID of the backup target.
-+#
-+# @devlist: list of block device names (separated by ',', ';' or ':'). By
-+#     default the backup includes all writable block devices.
-+#
-+# Returns: a list of @BackupAccessInfo, one for each device.
-+#
-+##
-+{ 'command': 'backup-access-setup',
-+  'data': { 'target-id': 'str', '*devlist': 'str' },
-+  'returns': [ 'BackupAccessInfo' ], 'coroutine': true }
-+
-+##
-+# @backup-access-teardown:
-+#
-+# Tear down snapshot access previously set up for the same target.
-+#
-+# @target-id: the ID of the backup target.
-+#
-+##
-+{ 'command': 'backup-access-teardown', 'data': { 'target-id': 'str' },
-+  'coroutine': true }
-+
- ##
- # @BlockDeviceTimedStats:
- #
-diff --git a/system/runstate.c b/system/runstate.c
-index 272801d307..73026a3884 100644
---- a/system/runstate.c
-+++ b/system/runstate.c
-@@ -60,6 +60,7 @@
- #include "system/system.h"
- #include "system/tpm.h"
- #include "trace.h"
-+#include "pve-backup.h"
- 
- static NotifierList exit_notifiers =
-     NOTIFIER_LIST_INITIALIZER(exit_notifiers);
-@@ -921,6 +922,11 @@ void qemu_cleanup(int status)
-      * requests happening from here on anyway.
-      */
-     bdrv_drain_all_begin();
-+    /*
-+     * The backup access is set up by a QMP command, but is neither owned by a monitor nor
-+     * associated to a BlockBackend. Need to tear it down manually here.
-+     */
-+    backup_access_teardown();
-     job_cancel_sync_all();
-     bdrv_close_all();
- 
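Both QMP handlers in the patch rely on the same coroutine/BH hand-off: the
coroutine_fn handler schedules a bottom half for work that must not run in
coroutine context, then yields until the BH wakes it. Distilled from the
patch itself (CoCtxData and the aio_* calls are the ones used there), with
the worker body elided:

    static void do_work_bh(void *opaque)
    {
        CoCtxData *data = opaque;
        /* ... block-graph work that must run outside a coroutine ... */
        aio_co_enter(data->ctx, data->co); /* wake the yielded coroutine */
    }

    static void coroutine_fn run_in_bh_and_wait(void)
    {
        CoCtxData waker = {
            .co = qemu_coroutine_self(),
            .ctx = qemu_get_current_aio_context(),
        };
        aio_bh_schedule_oneshot(waker.ctx, do_work_bh, &waker);
        qemu_coroutine_yield(); /* resumed by aio_co_enter() above */
    }
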
diff --git a/debian/patches/pve/0069-PVE-backup-factor-out-get_single_device_info-helper.patch b/debian/patches/pve/0069-PVE-backup-factor-out-get_single_device_info-helper.patch
deleted file mode 100644
index 85346c3..0000000
--- a/debian/patches/pve/0069-PVE-backup-factor-out-get_single_device_info-helper.patch
+++ /dev/null
@@ -1,122 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Thu, 3 Apr 2025 14:30:47 +0200
-Subject: [PATCH] PVE backup: factor out get_single_device_info() helper
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-[WB: free di and di->device_name on error]
-Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
----
- pve-backup.c | 90 +++++++++++++++++++++++++++++++---------------------
- 1 file changed, 53 insertions(+), 37 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 78ed6c980c..681d25b97e 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -760,6 +760,57 @@ static bool fleecing_all(const char *device_id)
-     return true;
- }
- 
-+static PVEBackupDevInfo coroutine_fn GRAPH_RDLOCK *get_single_device_info(
-+    const char *device,
-+    bool (*device_uses_fleecing)(const char*),
-+    Error **errp)
-+{
-+    BlockBackend *blk = blk_by_name(device);
-+    if (!blk) {
-+        error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
-+                  "Device '%s' not found", device);
-+        return NULL;
-+    }
-+    BlockDriverState *bs = blk_bs(blk);
-+    if (!bdrv_co_is_inserted(bs)) {
-+        error_setg(errp, "Device '%s' has no medium", device);
-+        return NULL;
-+    }
-+    PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
-+    di->bs = bs;
-+    di->device_name = g_strdup(bdrv_get_device_name(bs));
-+
-+    if (device_uses_fleecing && device_uses_fleecing(device)) {
-+        g_autofree gchar *fleecing_devid = g_strconcat(device, "-fleecing", NULL);
-+        BlockBackend *fleecing_blk = blk_by_name(fleecing_devid);
-+        if (!fleecing_blk) {
-+            error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
-+                      "Device '%s' not found", fleecing_devid);
-+            goto fail;
-+        }
-+        BlockDriverState *fleecing_bs = blk_bs(fleecing_blk);
-+        if (!bdrv_co_is_inserted(fleecing_bs)) {
-+            error_setg(errp, "Device '%s' has no medium", fleecing_devid);
-+            goto fail;
-+        }
-+        /*
-+         * Fleecing image needs to be the same size to act as a cbw target.
-+         */
-+        if (bs->total_sectors != fleecing_bs->total_sectors) {
-+            error_setg(errp, "Size mismatch for '%s' - sector count %ld != %ld",
-+                       fleecing_devid, fleecing_bs->total_sectors, bs->total_sectors);
-+            goto fail;
-+        }
-+        di->fleecing.bs = fleecing_bs;
-+    }
-+
-+    return di;
-+fail:
-+    g_free(di->device_name);
-+    g_free(di);
-+    return NULL;
-+}
-+
- /*
-  * Returns a list of device infos, which needs to be freed by the caller. In
-  * case of an error, errp will be set, but the returned value might still be a
-@@ -778,45 +829,10 @@ static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
- 
-         gchar **d = devs;
-         while (d && *d) {
--            BlockBackend *blk = blk_by_name(*d);
--            if (!blk) {
--                error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
--                          "Device '%s' not found", *d);
-+            PVEBackupDevInfo *di = get_single_device_info(*d, device_uses_fleecing, errp);
-+            if (!di) {
-                 goto err;
-             }
--            BlockDriverState *bs = blk_bs(blk);
--            if (!bdrv_co_is_inserted(bs)) {
--                error_setg(errp, "Device '%s' has no medium", *d);
--                goto err;
--            }
--            PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
--            di->bs = bs;
--            di->device_name = g_strdup(bdrv_get_device_name(bs));
--
--            if (device_uses_fleecing && device_uses_fleecing(*d)) {
--                g_autofree gchar *fleecing_devid = g_strconcat(*d, "-fleecing", NULL);
--                BlockBackend *fleecing_blk = blk_by_name(fleecing_devid);
--                if (!fleecing_blk) {
--                    error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
--                              "Device '%s' not found", fleecing_devid);
--                    goto err;
--                }
--                BlockDriverState *fleecing_bs = blk_bs(fleecing_blk);
--                if (!bdrv_co_is_inserted(fleecing_bs)) {
--                    error_setg(errp, "Device '%s' has no medium", fleecing_devid);
--                    goto err;
--                }
--                /*
--                 * Fleecing image needs to be the same size to act as a cbw target.
--                 */
--                if (bs->total_sectors != fleecing_bs->total_sectors) {
--                    error_setg(errp, "Size mismatch for '%s' - sector count %ld != %ld",
--                               fleecing_devid, fleecing_bs->total_sectors, bs->total_sectors);
--                    goto err;
--                }
--                di->fleecing.bs = fleecing_bs;
--            }
--
-             di_list = g_list_append(di_list, di);
-             d++;
-         }
diff --git a/debian/patches/pve/0070-PVE-backup-implement-bitmap-support-for-external-bac.patch b/debian/patches/pve/0070-PVE-backup-implement-bitmap-support-for-external-bac.patch
deleted file mode 100644
index 597eabe..0000000
--- a/debian/patches/pve/0070-PVE-backup-implement-bitmap-support-for-external-bac.patch
+++ /dev/null
@@ -1,470 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Thu, 3 Apr 2025 14:30:48 +0200
-Subject: [PATCH] PVE backup: implement bitmap support for external backup
- access
-
-There can be one dirty bitmap for each backup target ID for each
-device (which are tracked in the backup_access_bitmaps hash table).
-The QMP user can specify the ID of the bitmap it would like to use. This ID
-is then compared to the current one for the given target and device.
-If they match, the bitmap is re-used (should it still exist on the
-drive, otherwise re-created). If there is a mismatch, the old bitmap
-is removed and a new one is created.
-
-The return value of the QMP command includes information about what
-bitmap action was taken, similar to what the query-backup QMP command
-returns for regular backup. It also includes the bitmap name and
-associated block node, so the management layer can then set up an NBD
-export with the bitmap.
-
-While the backup access is active, a background bitmap is also
-required. This is necessary to implement bitmap handling according to
-the original reference [0]. In particular:
-
-- in the error case, new writes since the backup access was set up are
-  in the background bitmap. Because of failure, the previously tracked
-  writes from the backup access bitmap are still required too. Thus,
-  the bitmap is merged with the background bitmap to get all new
-  writes since the last backup.
-
-- in the success case, continue tracking for the next incremental
-  backup in the backup access bitmap. New writes since the backup
-  access was set up are in the background bitmap. Because the backup
-  was successful, clear the backup access bitmap and merge back the
-  background bitmap to get only the new writes.
-
-Since QEMU cannot know if the backup was successful or not (except if
-failure already happens during the setup QMP command), the management
-layer needs to tell it via the teardown QMP command.
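
Both cases reduce to one merge routine that differs only in whether the
access bitmap is cleared first; a condensed sketch, assuming the
bdrv_*_dirty_bitmap() APIs and the PVEBackupDevInfo fields used in this
patch (the real code is in handle_backup_access_bitmaps_in_error_case() and
handle_backup_access_bitmaps_after_success() below):

    static void fold_background_bitmap(PVEBackupDevInfo *di, bool success)
    {
        Error *local_err = NULL;

        if (success) {
            /* Start the next incremental cycle fresh: after the merge,
             * only writes made while the access was active remain. */
            bdrv_clear_dirty_bitmap(di->bitmap, NULL);
        }
        /* In the error case the old contents stay, so the merge yields
         * all writes since the last successful backup. */
        if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap,
                                     NULL, &local_err)) {
            /* Merge failed, so the bitmap can no longer be trusted. */
            bdrv_release_dirty_bitmap(di->bitmap);
        }
        bdrv_release_dirty_bitmap(di->background_bitmap);
    }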
-
-The bitmap action is also recorded in the device info now.
-
-[0]: https://lore.kernel.org/qemu-devel/b68833dd-8864-4d72-7c61-c134a9835036@ya.ru/
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
----
- pve-backup.c         | 196 +++++++++++++++++++++++++++++++++++++++++--
- pve-backup.h         |   2 +-
- qapi/block-core.json |  36 ++++++--
- system/runstate.c    |   2 +-
- 4 files changed, 220 insertions(+), 16 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 681d25b97e..bd4d5e2579 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -15,6 +15,7 @@
- #include "qobject/qdict.h"
- #include "qapi/qmp/qerror.h"
- #include "qemu/cutils.h"
-+#include "qemu/error-report.h"
- 
- #if defined(CONFIG_MALLOC_TRIM)
- #include <malloc.h>
-@@ -41,6 +42,7 @@
-  */
- 
- const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
-+const char *BACKGROUND_BITMAP_NAME = "backup-access-background-bitmap";
- 
- static struct PVEBackupState {
-     struct {
-@@ -72,6 +74,7 @@ static struct PVEBackupState {
-     CoMutex backup_mutex;
-     CoMutex dump_callback_mutex;
-     char *target_id;
-+    GHashTable *backup_access_bitmaps; // key=target_id, value=bitmap_name
- } backup_state;
- 
- static void pvebackup_init(void)
-@@ -99,8 +102,11 @@ typedef struct PVEBackupDevInfo {
-     char* device_name;
-     int completed_ret; // INT_MAX if not completed
-     BdrvDirtyBitmap *bitmap;
-+    BdrvDirtyBitmap *background_bitmap; // used for external backup access
-+    PBSBitmapAction bitmap_action;
-     BlockDriverState *target;
-     BlockJob *job;
-+    char *requested_bitmap_name; // used by external backup access during initialization
- } PVEBackupDevInfo;
- 
- static void pvebackup_propagate_error(Error *err)
-@@ -362,6 +368,67 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
-     qemu_co_mutex_unlock(&backup_state.backup_mutex);
- }
- 
-+/*
-+ * New writes since the backup access was set up are in the background bitmap. Because of failure,
-+ * the previously tracked writes in di->bitmap are still required too. Thus, merge with the
-+ * background bitmap to get all new writes since the last backup.
-+ */
-+static void handle_backup_access_bitmaps_in_error_case(PVEBackupDevInfo *di)
-+{
-+    Error *local_err = NULL;
-+
-+    if (di->bs && di->background_bitmap) {
-+        bdrv_drained_begin(di->bs);
-+        if (di->bitmap) {
-+            bdrv_enable_dirty_bitmap(di->bitmap);
-+            if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap, NULL, &local_err)) {
-+                warn_report("backup access: %s - could not merge bitmaps in error path - %s",
-+                            di->device_name,
-+                            local_err ? error_get_pretty(local_err) : "unknown error");
-+                /*
-+                 * Could not merge, drop original bitmap too.
-+                 */
-+                bdrv_release_dirty_bitmap(di->bitmap);
-+            }
-+        } else {
-+            warn_report("backup access: %s - expected bitmap not present", di->device_name);
-+        }
-+        bdrv_release_dirty_bitmap(di->background_bitmap);
-+        bdrv_drained_end(di->bs);
-+    }
-+}
-+
-+/*
-+ * Continue tracking for next incremental backup in di->bitmap. New writes since the backup access
-+ * was set up are in the background bitmap. Because the backup was successful, clear di->bitmap and
-+ * merge back the background bitmap to get only the new writes.
-+ */
-+static void handle_backup_access_bitmaps_after_success(PVEBackupDevInfo *di)
-+{
-+    Error *local_err = NULL;
-+
-+    if (di->bs && di->background_bitmap) {
-+        bdrv_drained_begin(di->bs);
-+        if (di->bitmap) {
-+            bdrv_enable_dirty_bitmap(di->bitmap);
-+            bdrv_clear_dirty_bitmap(di->bitmap, NULL);
-+            if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap, NULL, &local_err)) {
-+                warn_report("backup access: %s - could not merge bitmaps after backup - %s",
-+                            di->device_name,
-+                            local_err ? error_get_pretty(local_err) : "unknown error");
-+                /*
-+                 * Could not merge, drop original bitmap too.
-+                 */
-+                bdrv_release_dirty_bitmap(di->bitmap);
-+            }
-+        } else {
-+            warn_report("backup access: %s - expected bitmap not present", di->device_name);
-+        }
-+        bdrv_release_dirty_bitmap(di->background_bitmap);
-+        bdrv_drained_end(di->bs);
-+    }
-+}
-+
- static void cleanup_snapshot_access(PVEBackupDevInfo *di)
- {
-     if (di->fleecing.snapshot_access) {
-@@ -605,6 +672,21 @@ static void setup_all_snapshot_access_bh(void *opaque)
- 
-         bdrv_drained_begin(di->bs);
- 
-+        if (di->bitmap) {
-+            BdrvDirtyBitmap *background_bitmap =
-+                bdrv_create_dirty_bitmap(di->bs, PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE,
-+                                         BACKGROUND_BITMAP_NAME, &local_err);
-+            if (!background_bitmap) {
-+                error_setg(errp, "%s - creating background bitmap for backup access failed: %s",
-+                           di->device_name,
-+                           local_err ? error_get_pretty(local_err) : "unknown error");
-+                bdrv_drained_end(di->bs);
-+                break;
-+            }
-+            di->background_bitmap = background_bitmap;
-+            bdrv_disable_dirty_bitmap(di->bitmap);
-+        }
-+
-         if (setup_snapshot_access(di, &local_err) < 0) {
-             bdrv_drained_end(di->bs);
-             error_setg(errp, "%s - setting up snapshot access failed: %s", di->device_name,
-@@ -935,7 +1017,7 @@ static void backup_state_set_target_id(const char *target_id) {
- 
- BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-     const char *target_id,
--    const char *devlist,
-+    BackupAccessSourceDeviceList *devices,
-     Error **errp)
- {
-     assert(qemu_in_coroutine());
-@@ -954,12 +1036,17 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-     }
- 
-     bdrv_graph_co_rdlock();
--    di_list = get_device_info(devlist, fleecing_all, &local_err);
--    bdrv_graph_co_rdunlock();
--    if (local_err) {
--        error_propagate(errp, local_err);
--        goto err;
-+    for (BackupAccessSourceDeviceList *it = devices; it; it = it->next) {
-+        PVEBackupDevInfo *di = get_single_device_info(it->value->device, fleecing_all, &local_err);
-+        if (!di) {
-+            bdrv_graph_co_rdunlock();
-+            error_propagate(errp, local_err);
-+            goto err;
-+        }
-+        di->requested_bitmap_name = g_strdup(it->value->bitmap_name);
-+        di_list = g_list_append(di_list, di);
-     }
-+    bdrv_graph_co_rdunlock();
-     assert(di_list);
- 
-     size_t total = 0;
-@@ -986,6 +1073,78 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-     /* clear previous backup's bitmap_list */
-     clear_backup_state_bitmap_list();
- 
-+    if (!backup_state.backup_access_bitmaps) {
-+        backup_state.backup_access_bitmaps =
-+            g_hash_table_new_full(g_str_hash, g_str_equal, free, free);
-+    }
-+
-+    /* create bitmaps if requested */
-+    l = di_list;
-+    while (l) {
-+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-+        l = g_list_next(l);
-+
-+        di->block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE;
-+
-+        PBSBitmapAction action = PBS_BITMAP_ACTION_NOT_USED;
-+        size_t dirty = di->size;
-+
-+        const char *old_bitmap_name =
-+            (const char*)g_hash_table_lookup(backup_state.backup_access_bitmaps, target_id);
-+
-+        bool same_bitmap_name = old_bitmap_name
-+            && di->requested_bitmap_name
-+            && strcmp(di->requested_bitmap_name, old_bitmap_name) == 0;
-+
-+        if (old_bitmap_name && !same_bitmap_name) {
-+            BdrvDirtyBitmap *old_bitmap = bdrv_find_dirty_bitmap(di->bs, old_bitmap_name);
-+            if (!old_bitmap) {
-+                warn_report("setup backup access: expected old bitmap '%s' not found for drive "
-+                            "'%s'", old_bitmap_name, di->device_name);
-+            } else {
-+                g_hash_table_remove(backup_state.backup_access_bitmaps, target_id);
-+                bdrv_release_dirty_bitmap(old_bitmap);
-+                action = PBS_BITMAP_ACTION_NOT_USED_REMOVED;
-+            }
-+        }
-+
-+        BdrvDirtyBitmap *bitmap = NULL;
-+        if (di->requested_bitmap_name) {
-+            bitmap = bdrv_find_dirty_bitmap(di->bs, di->requested_bitmap_name);
-+            if (!bitmap) {
-+                bitmap = bdrv_create_dirty_bitmap(di->bs, PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE,
-+                                                  di->requested_bitmap_name, errp);
-+                if (!bitmap) {
-+                    qemu_mutex_unlock(&backup_state.stat.lock);
-+                    goto err;
-+                }
-+                bdrv_set_dirty_bitmap(bitmap, 0, di->size);
-+                action = PBS_BITMAP_ACTION_NEW;
-+            } else {
-+                /* track clean chunks as reused */
-+                dirty = MIN(bdrv_get_dirty_count(bitmap), di->size);
-+                backup_state.stat.reused += di->size - dirty;
-+                action = PBS_BITMAP_ACTION_USED;
-+            }
-+
-+            if (!same_bitmap_name) {
-+                g_hash_table_insert(backup_state.backup_access_bitmaps,
-+                                    strdup(target_id), strdup(di->requested_bitmap_name));
-+            }
-+
-+        }
-+
-+        PBSBitmapInfo *info = g_malloc(sizeof(*info));
-+        info->drive = g_strdup(di->device_name);
-+        info->action = action;
-+        info->size = di->size;
-+        info->dirty = dirty;
-+        backup_state.stat.bitmap_list = g_list_append(backup_state.stat.bitmap_list, info);
-+
-+        di->bitmap = bitmap;
-+        di->bitmap_action = action;
-+    }
-+
-     /* starting=false, because there is no associated QEMU job */
-     initialize_backup_state_stat(NULL, NULL, total, false);
- 
-@@ -1029,6 +1188,12 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-         info->value->node_name = g_strdup(bdrv_get_node_name(di->fleecing.snapshot_access));
-         info->value->device = g_strdup(di->device_name);
-         info->value->size = di->size;
-+        if (di->requested_bitmap_name) {
-+            info->value->bitmap_node_name = g_strdup(bdrv_get_node_name(di->bs));
-+            info->value->bitmap_name = g_strdup(di->requested_bitmap_name);
-+            info->value->bitmap_action = di->bitmap_action;
-+            info->value->has_bitmap_action = true;
-+        }
- 
-         *p_bai_next = info;
-         p_bai_next = &info->next;
-@@ -1043,6 +1208,8 @@ err:
-         PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
-         l = g_list_next(l);
- 
-+        handle_backup_access_bitmaps_in_error_case(di);
-+
-         g_free(di->device_name);
-         di->device_name = NULL;
- 
-@@ -1058,7 +1225,7 @@ err:
- /*
-  * Caller needs to hold the backup mutex or the BQL.
-  */
--void backup_access_teardown(void)
-+void backup_access_teardown(bool success)
- {
-     GList *l = backup_state.di_list;
- 
-@@ -1079,9 +1246,18 @@ void backup_access_teardown(void)
-             di->fleecing.cbw = NULL;
-         }
- 
-+        if (success) {
-+            handle_backup_access_bitmaps_after_success(di);
-+        } else {
-+            handle_backup_access_bitmaps_in_error_case(di);
-+        }
-+
-         g_free(di->device_name);
-         di->device_name = NULL;
- 
-+        g_free(di->requested_bitmap_name);
-+        di->requested_bitmap_name = NULL;
-+
-         g_free(di);
-     }
-     g_list_free(backup_state.di_list);
-@@ -1099,13 +1275,13 @@ static void backup_access_teardown_bh(void *opaque)
- {
-     CoCtxData *data = (CoCtxData*)opaque;
- 
--    backup_access_teardown();
-+    backup_access_teardown(*((bool*)data->data));
- 
-     /* return */
-     aio_co_enter(data->ctx, data->co);
- }
- 
--void coroutine_fn qmp_backup_access_teardown(const char *target_id, Error **errp)
-+void coroutine_fn qmp_backup_access_teardown(const char *target_id, bool success, Error **errp)
- {
-     assert(qemu_in_coroutine());
- 
-@@ -1135,6 +1311,7 @@ void coroutine_fn qmp_backup_access_teardown(const char *target_id, Error **errp
-     CoCtxData waker = {
-         .co = qemu_coroutine_self(),
-         .ctx = qemu_get_current_aio_context(),
-+        .data = &success,
-     };
-     aio_bh_schedule_oneshot(waker.ctx, backup_access_teardown_bh, &waker);
-     qemu_coroutine_yield();
-@@ -1335,6 +1512,7 @@ UuidInfo coroutine_fn *qmp_backup(
-             }
- 
-             di->dev_id = dev_id;
-+            di->bitmap_action = action;
- 
-             PBSBitmapInfo *info = g_malloc(sizeof(*info));
-             info->drive = g_strdup(di->device_name);
-diff --git a/pve-backup.h b/pve-backup.h
-index 4033bc848f..9ebeef7c8f 100644
---- a/pve-backup.h
-+++ b/pve-backup.h
-@@ -11,6 +11,6 @@
- #ifndef PVE_BACKUP_H
- #define PVE_BACKUP_H
- 
--void backup_access_teardown(void);
-+void backup_access_teardown(bool success);
- 
- #endif /* PVE_BACKUP_H */
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 8d499650a8..61307ed1d7 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -1118,9 +1118,33 @@
- #
- # @size: the size of the block device in bytes.
- #
-+# @bitmap-node-name: the block node name the dirty bitmap is associated to.
-+#
-+# @bitmap-name: the name of the dirty bitmap associated to the backup access.
-+#
-+# @bitmap-action: the action taken on the dirty bitmap.
-+#
- ##
- { 'struct': 'BackupAccessInfo',
--  'data': { 'node-name': 'str', 'device': 'str', 'size': 'size' } }
-+  'data': { 'node-name': 'str', 'device': 'str', 'size': 'size',
-+            '*bitmap-node-name': 'str', '*bitmap-name': 'str',
-+            '*bitmap-action': 'PBSBitmapAction' } }
-+
-+##
-+# @BackupAccessSourceDevice:
-+#
-+# Source block device information for creating a backup access.
-+#
-+# @device: the block device name.
-+#
-+# @bitmap-name: use/create a bitmap with this name for the device. Re-using the
-+#     same name allows for making incremental backups. Check the @bitmap-action
-+#     in the result to see if you can actually re-use the bitmap or if it had to
-+#     be newly created.
-+#
-+##
-+{ 'struct': 'BackupAccessSourceDevice',
-+  'data': { 'device': 'str', '*bitmap-name': 'str' } }
- 
- ##
- # @backup-access-setup:
-@@ -1130,14 +1154,13 @@
- #
- # @target-id: the unique ID of the backup target.
- #
--# @devlist: list of block device names (separated by ',', ';' or ':'). By
--#     default the backup includes all writable block devices.
-+# @devices: list of devices for which to create the backup access.
- #
- # Returns: a list of @BackupAccessInfo, one for each device.
- #
- ##
- { 'command': 'backup-access-setup',
--  'data': { 'target-id': 'str', '*devlist': 'str' },
-+  'data': { 'target-id': 'str', 'devices': [ 'BackupAccessSourceDevice' ] },
-   'returns': [ 'BackupAccessInfo' ], 'coroutine': true }
- 
- ##
-@@ -1147,8 +1170,11 @@
- #
- # @target-id: the ID of the backup target.
- #
-+# @success: whether the backup done by the external provider was successful.
-+#
- ##
--{ 'command': 'backup-access-teardown', 'data': { 'target-id': 'str' },
-+{ 'command': 'backup-access-teardown',
-+  'data': { 'target-id': 'str', 'success': 'bool' },
-   'coroutine': true }
- 
- ##
-diff --git a/system/runstate.c b/system/runstate.c
-index 73026a3884..cf775213bd 100644
---- a/system/runstate.c
-+++ b/system/runstate.c
-@@ -926,7 +926,7 @@ void qemu_cleanup(int status)
-      * The backup access is set up by a QMP command, but is neither owned by a monitor nor
-      * associated to a BlockBackend. Need to tear it down manually here.
-      */
--    backup_access_teardown();
-+    backup_access_teardown(false);
-     job_cancel_sync_all();
-     bdrv_close_all();
- 
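
For orientation, a single element of the list returned by backup-access-setup
would at this stage look roughly as follows (hypothetical QMP output; the node,
device and bitmap names are invented). The three bitmap members are optional
and only present when a bitmap was requested for the device:

  { "node-name": "snapshot-access-0",
    "device": "drive-scsi0",
    "size": 34359738368,
    "bitmap-node-name": "drive-scsi0-fmt",
    "bitmap-name": "backup-ext0",
    "bitmap-action": "new" }
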
diff --git a/debian/patches/pve/0071-PVE-backup-backup-access-api-indicate-situation-wher.patch b/debian/patches/pve/0071-PVE-backup-backup-access-api-indicate-situation-wher.patch
deleted file mode 100644
index d33d173..0000000
--- a/debian/patches/pve/0071-PVE-backup-backup-access-api-indicate-situation-wher.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Thu, 3 Apr 2025 14:30:49 +0200
-Subject: [PATCH] PVE backup: backup-access api: indicate situation where a
- bitmap was recreated
-
-The backup-access api keeps track of what bitmap names got used for
-which devices and thus knows when a bitmap went missing. Propagate
-this information to the QMP user with a new 'missing-recreated'
-variant for the taken bitmap action.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
----
- pve-backup.c         | 6 +++++-
- qapi/block-core.json | 9 ++++++++-
- 2 files changed, 13 insertions(+), 2 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index bd4d5e2579..788647a22d 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -1119,7 +1119,11 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-                     goto err;
-                 }
-                 bdrv_set_dirty_bitmap(bitmap, 0, di->size);
--                action = PBS_BITMAP_ACTION_NEW;
-+                if (same_bitmap_name) {
-+                    action = PBS_BITMAP_ACTION_MISSING_RECREATED;
-+                } else {
-+                    action = PBS_BITMAP_ACTION_NEW;
-+                }
-             } else {
-                 /* track clean chunks as reused */
-                 dirty = MIN(bdrv_get_dirty_count(bitmap), di->size);
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index 61307ed1d7..d94c856160 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -1071,9 +1071,16 @@
- #           base snapshot did not match the base given for the current job or
- #           the crypt mode has changed.
- #
-+# @missing-recreated: A bitmap for incremental backup was expected to be
-+#     present, but was missing and thus got recreated. For example, this can
-+#     happen if the drive was re-attached or if the bitmap was deleted for some
-+#     other reason. PBS does not currently keep track of this; the backup-access
-+#     mechanism does.
-+#
- ##
- { 'enum': 'PBSBitmapAction',
--  'data': ['not-used', 'not-used-removed', 'new', 'used', 'invalid'] }
-+  'data': ['not-used', 'not-used-removed', 'new', 'used', 'invalid',
-+           'missing-recreated'] }
- 
- ##
- # @PBSBitmapInfo:
diff --git a/debian/patches/pve/0072-PVE-backup-backup-access-api-explicit-bitmap-mode-pa.patch b/debian/patches/pve/0072-PVE-backup-backup-access-api-explicit-bitmap-mode-pa.patch
deleted file mode 100644
index 5d38ad2..0000000
--- a/debian/patches/pve/0072-PVE-backup-backup-access-api-explicit-bitmap-mode-pa.patch
+++ /dev/null
@@ -1,84 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Wolfgang Bumiller <w.bumiller@proxmox.com>
-Date: Thu, 3 Apr 2025 14:30:50 +0200
-Subject: [PATCH] PVE backup: backup-access-api: explicit bitmap-mode parameter
-
-This allows to explicitly request to re-create a bitmap under the same
-name.
-
-Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
- [TL: fix trailing whitespace error]
-Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
----
- pve-backup.c         | 17 ++++++++++++++++-
- qapi/block-core.json | 20 +++++++++++++++++++-
- 2 files changed, 35 insertions(+), 2 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index 788647a22d..f657aba68d 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -1043,7 +1043,16 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-             error_propagate(errp, local_err);
-             goto err;
-         }
--        di->requested_bitmap_name = g_strdup(it->value->bitmap_name);
-+        if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NONE) {
-+            di->bitmap_action = PBS_BITMAP_ACTION_NOT_USED;
-+        } else {
-+            di->requested_bitmap_name = g_strdup(it->value->bitmap_name);
-+            if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW) {
-+                di->bitmap_action = PBS_BITMAP_ACTION_NEW;
-+            } else if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_USE) {
-+                di->bitmap_action = PBS_BITMAP_ACTION_USED;
-+            }
-+        }
-         di_list = g_list_append(di_list, di);
-     }
-     bdrv_graph_co_rdunlock();
-@@ -1096,6 +1105,12 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-             && di->requested_bitmap_name
-             && strcmp(di->requested_bitmap_name, old_bitmap_name) == 0;
- 
-+        /* special case: if we explicitly requested a *new* bitmap, treat an
-+         * existing bitmap as having a different name */
-+        if (di->bitmap_action == PBS_BITMAP_ACTION_NEW) {
-+            same_bitmap_name = false;
-+        }
-+
-         if (old_bitmap_name && !same_bitmap_name) {
-             BdrvDirtyBitmap *old_bitmap = bdrv_find_dirty_bitmap(di->bs, old_bitmap_name);
-             if (!old_bitmap) {
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index d94c856160..cde92071a1 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -1149,9 +1149,27 @@
- #     in the result to see if you can actually re-use the bitmap or if it had to
- #     be newly created.
- #
-+# @bitmap-mode: used to control whether the bitmap should be reused or
-+#     recreated.
-+#
- ##
- { 'struct': 'BackupAccessSourceDevice',
--  'data': { 'device': 'str', '*bitmap-name': 'str' } }
-+  'data': { 'device': 'str', '*bitmap-name': 'str',
-+            '*bitmap-mode': 'BackupAccessSetupBitmapMode' } }
-+
-+##
-+# @BackupAccessSetupBitmapMode:
-+#
-+# How to setup a bitmap for a device for @backup-access-setup.
-+#
-+# @none: do not use a bitmap. Removes an existing bitmap if present.
-+#
-+# @new: create and use a new bitmap.
-+#
-+# @use: try to re-use an existing bitmap. Create a new one if it doesn't exist.
-+##
-+{ 'enum': 'BackupAccessSetupBitmapMode',
-+  'data': ['none', 'new', 'use' ] }
- 
- ##
- # @backup-access-setup:
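
To illustrate the new parameter, a source device entry that explicitly asks for
a fresh bitmap under an already-used name could look like this (hypothetical
sketch; with mode "new", an existing bitmap of the same name is dropped and
recreated instead of being re-used):

  { "device": "drive-scsi0",
    "bitmap-name": "backup-ext0",
    "bitmap-mode": "new" }
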
diff --git a/debian/patches/pve/0073-PVE-backup-backup-access-api-simplify-bitmap-logic.patch b/debian/patches/pve/0073-PVE-backup-backup-access-api-simplify-bitmap-logic.patch
deleted file mode 100644
index 1074469..0000000
--- a/debian/patches/pve/0073-PVE-backup-backup-access-api-simplify-bitmap-logic.patch
+++ /dev/null
@@ -1,206 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Fiona Ebner <f.ebner@proxmox.com>
-Date: Fri, 4 Apr 2025 15:31:36 +0200
-Subject: [PATCH] PVE backup: backup-access api: simplify bitmap logic
-
-Currently, only one bitmap name per target is planned to be used.
-Simply use the target ID itself as the bitmap name. This allows to
-simplify the logic quite a bit and there also is no need for the
-backup_access_bitmaps hash table anymore.
-
-For the return value, the bitmap names are still passed along for
-convenience in the caller.
-
-Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
-Tested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
-Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
----
- pve-backup.c         | 72 ++++++++++++--------------------------------
- qapi/block-core.json | 15 ++++-----
- 2 files changed, 26 insertions(+), 61 deletions(-)
-
-diff --git a/pve-backup.c b/pve-backup.c
-index f657aba68d..0450303017 100644
---- a/pve-backup.c
-+++ b/pve-backup.c
-@@ -74,7 +74,6 @@ static struct PVEBackupState {
-     CoMutex backup_mutex;
-     CoMutex dump_callback_mutex;
-     char *target_id;
--    GHashTable *backup_access_bitmaps; // key=target_id, value=bitmap_name
- } backup_state;
- 
- static void pvebackup_init(void)
-@@ -106,7 +105,7 @@ typedef struct PVEBackupDevInfo {
-     PBSBitmapAction bitmap_action;
-     BlockDriverState *target;
-     BlockJob *job;
--    char *requested_bitmap_name; // used by external backup access during initialization
-+    BackupAccessSetupBitmapMode requested_bitmap_mode;
- } PVEBackupDevInfo;
- 
- static void pvebackup_propagate_error(Error *err)
-@@ -1043,16 +1042,7 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-             error_propagate(errp, local_err);
-             goto err;
-         }
--        if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NONE) {
--            di->bitmap_action = PBS_BITMAP_ACTION_NOT_USED;
--        } else {
--            di->requested_bitmap_name = g_strdup(it->value->bitmap_name);
--            if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW) {
--                di->bitmap_action = PBS_BITMAP_ACTION_NEW;
--            } else if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_USE) {
--                di->bitmap_action = PBS_BITMAP_ACTION_USED;
--            }
--        }
-+        di->requested_bitmap_mode = it->value->bitmap_mode;
-         di_list = g_list_append(di_list, di);
-     }
-     bdrv_graph_co_rdunlock();
-@@ -1082,10 +1072,7 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-     /* clear previous backup's bitmap_list */
-     clear_backup_state_bitmap_list();
- 
--    if (!backup_state.backup_access_bitmaps) {
--        backup_state.backup_access_bitmaps =
--            g_hash_table_new_full(g_str_hash, g_str_equal, free, free);
--    }
-+    const char *bitmap_name = target_id;
- 
-     /* create bitmaps if requested */
-     l = di_list;
-@@ -1098,59 +1085,43 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-         PBSBitmapAction action = PBS_BITMAP_ACTION_NOT_USED;
-         size_t dirty = di->size;
- 
--        const char *old_bitmap_name =
--            (const char*)g_hash_table_lookup(backup_state.backup_access_bitmaps, target_id);
--
--        bool same_bitmap_name = old_bitmap_name
--            && di->requested_bitmap_name
--            && strcmp(di->requested_bitmap_name, old_bitmap_name) == 0;
--
--        /* special case: if we explicitly requested a *new* bitmap, treat an
--         * existing bitmap as having a different name */
--        if (di->bitmap_action == PBS_BITMAP_ACTION_NEW) {
--            same_bitmap_name = false;
--        }
--
--        if (old_bitmap_name && !same_bitmap_name) {
--            BdrvDirtyBitmap *old_bitmap = bdrv_find_dirty_bitmap(di->bs, old_bitmap_name);
--            if (!old_bitmap) {
--                warn_report("setup backup access: expected old bitmap '%s' not found for drive "
--                            "'%s'", old_bitmap_name, di->device_name);
--            } else {
--                g_hash_table_remove(backup_state.backup_access_bitmaps, target_id);
-+        if (di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NONE ||
-+            di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW) {
-+            BdrvDirtyBitmap *old_bitmap = bdrv_find_dirty_bitmap(di->bs, bitmap_name);
-+            if (old_bitmap) {
-                 bdrv_release_dirty_bitmap(old_bitmap);
--                action = PBS_BITMAP_ACTION_NOT_USED_REMOVED;
-+                action = PBS_BITMAP_ACTION_NOT_USED_REMOVED; // set below for new
-             }
-         }
- 
-         BdrvDirtyBitmap *bitmap = NULL;
--        if (di->requested_bitmap_name) {
--            bitmap = bdrv_find_dirty_bitmap(di->bs, di->requested_bitmap_name);
-+        if (di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW ||
-+            di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_USE) {
-+            bitmap = bdrv_find_dirty_bitmap(di->bs, bitmap_name);
-             if (!bitmap) {
-                 bitmap = bdrv_create_dirty_bitmap(di->bs, PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE,
--                                                  di->requested_bitmap_name, errp);
-+                                                  bitmap_name, errp);
-                 if (!bitmap) {
-                     qemu_mutex_unlock(&backup_state.stat.lock);
-                     goto err;
-                 }
-                 bdrv_set_dirty_bitmap(bitmap, 0, di->size);
--                if (same_bitmap_name) {
-+                if (di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_USE) {
-                     action = PBS_BITMAP_ACTION_MISSING_RECREATED;
-                 } else {
-                     action = PBS_BITMAP_ACTION_NEW;
-                 }
-             } else {
-+                if (di->requested_bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW) {
-+                    qemu_mutex_unlock(&backup_state.stat.lock);
-+                    error_setg(errp, "internal error - removed old bitmap still present");
-+                    goto err;
-+                }
-                 /* track clean chunks as reused */
-                 dirty = MIN(bdrv_get_dirty_count(bitmap), di->size);
-                 backup_state.stat.reused += di->size - dirty;
-                 action = PBS_BITMAP_ACTION_USED;
-             }
--
--            if (!same_bitmap_name) {
--                g_hash_table_insert(backup_state.backup_access_bitmaps,
--                                    strdup(target_id), strdup(di->requested_bitmap_name));
--            }
--
-         }
- 
-         PBSBitmapInfo *info = g_malloc(sizeof(*info));
-@@ -1207,9 +1178,9 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
-         info->value->node_name = g_strdup(bdrv_get_node_name(di->fleecing.snapshot_access));
-         info->value->device = g_strdup(di->device_name);
-         info->value->size = di->size;
--        if (di->requested_bitmap_name) {
-+        if (di->bitmap) {
-             info->value->bitmap_node_name = g_strdup(bdrv_get_node_name(di->bs));
--            info->value->bitmap_name = g_strdup(di->requested_bitmap_name);
-+            info->value->bitmap_name = g_strdup(bitmap_name);
-             info->value->bitmap_action = di->bitmap_action;
-             info->value->has_bitmap_action = true;
-         }
-@@ -1274,9 +1245,6 @@ void backup_access_teardown(bool success)
-         g_free(di->device_name);
-         di->device_name = NULL;
- 
--        g_free(di->requested_bitmap_name);
--        di->requested_bitmap_name = NULL;
--
-         g_free(di);
-     }
-     g_list_free(backup_state.di_list);
-diff --git a/qapi/block-core.json b/qapi/block-core.json
-index cde92071a1..2fb51215f2 100644
---- a/qapi/block-core.json
-+++ b/qapi/block-core.json
-@@ -1144,18 +1144,12 @@
- #
- # @device: the block device name.
- #
--# @bitmap-name: use/create a bitmap with this name for the device. Re-using the
--#     same name allows for making incremental backups. Check the @bitmap-action
--#     in the result to see if you can actually re-use the bitmap or if it had to
--#     be newly created.
--#
- # @bitmap-mode: used to control whether the bitmap should be reused or
--#     recreated.
-+#     recreated or not used. Default is not using a bitmap.
- #
- ##
- { 'struct': 'BackupAccessSourceDevice',
--  'data': { 'device': 'str', '*bitmap-name': 'str',
--            '*bitmap-mode': 'BackupAccessSetupBitmapMode' } }
-+  'data': { 'device': 'str', '*bitmap-mode': 'BackupAccessSetupBitmapMode' } }
- 
- ##
- # @BackupAccessSetupBitmapMode:
-@@ -1179,7 +1173,10 @@
- #
- # @target-id: the unique ID of the backup target.
- #
--# @devices: list of devices for which to create the backup access.
-+# @devices: list of devices for which to create the backup access.  Also
-+#     controls whether to use/create a bitmap for the device.  Check the
-+#     @bitmap-action in the result to see what action was actually taken for the
-+#     bitmap.  Each target controls its own bitmaps.
- #
- # Returns: a list of @BackupAccessInfo, one for each device.
- #
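
Putting it together, a full cycle for an external backup provider against the
simplified API might look roughly like this (hypothetical QMP exchange; the
target and device names are invented, and the bitmap name reported back is
simply the target ID):

  { "execute": "backup-access-setup",
    "arguments": { "target-id": "external-0",
                   "devices": [ { "device": "drive-scsi0",
                                  "bitmap-mode": "use" } ] } }

  ... the provider reads from the returned snapshot-access node(s) ...

  { "execute": "backup-access-teardown",
    "arguments": { "target-id": "external-0", "success": true } }

If the bitmap requested with mode "use" went missing in the meantime, the setup
result reports bitmap-action "missing-recreated", signalling that the provider
has to fall back to reading the device in full.
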
diff --git a/debian/patches/series b/debian/patches/series
index fd151eb..88c3144 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -47,35 +47,17 @@ pve/0038-block-add-alloc-track-driver.patch
 pve/0039-Revert-block-rbd-workaround-for-ceph-issue-53784.patch
 pve/0040-Revert-block-rbd-fix-handling-of-holes-in-.bdrv_co_b.patch
 pve/0041-Revert-block-rbd-implement-bdrv_co_block_status.patch
-pve/0042-alloc-track-error-out-when-auto-remove-is-not-set.patch
-pve/0043-alloc-track-avoid-seemingly-superfluous-child-permis.patch
-pve/0044-PVE-backup-add-fleecing-option.patch
-pve/0045-PVE-backup-improve-error-when-copy-before-write-fail.patch
-pve/0046-PVE-backup-fixup-error-handling-for-fleecing.patch
-pve/0047-PVE-backup-factor-out-setting-up-snapshot-access-for.patch
-pve/0048-PVE-backup-save-device-name-in-device-info-structure.patch
-pve/0049-PVE-backup-include-device-name-in-error-when-setting.patch
-pve/0050-adapt-machine-version-deprecation-for-Proxmox-VE.patch
-pve/0051-Revert-hpet-avoid-timer-storms-on-periodic-timers.patch
-pve/0052-Revert-hpet-store-full-64-bit-target-value-of-the-co.patch
-pve/0053-Revert-hpet-accept-64-bit-reads-and-writes.patch
-pve/0054-Revert-hpet-place-read-only-bits-directly-in-new_val.patch
-pve/0055-Revert-hpet-remove-unnecessary-variable-index.patch
-pve/0056-Revert-hpet-ignore-high-bits-of-comparator-in-32-bit.patch
-pve/0057-Revert-hpet-fix-and-cleanup-persistence-of-interrupt.patch
-pve/0058-savevm-async-improve-setting-state-of-snapshot-opera.patch
-pve/0059-savevm-async-rename-saved_vm_running-to-vm_needs_sta.patch
-pve/0060-savevm-async-improve-runstate-preservation-cleanup-e.patch
-pve/0061-savevm-async-use-dedicated-iothread-for-state-file.patch
-pve/0062-savevm-async-treat-failure-to-set-iothread-context-a.patch
-pve/0063-PVE-backup-clean-up-directly-in-setup_snapshot_acces.patch
-pve/0064-PVE-backup-factor-out-helper-to-clear-backup-state-s.patch
-pve/0065-PVE-backup-factor-out-helper-to-initialize-backup-st.patch
-pve/0066-PVE-backup-add-target-ID-in-backup-state.patch
-pve/0067-PVE-backup-get-device-info-allow-caller-to-specify-f.patch
-pve/0068-PVE-backup-implement-backup-access-setup-and-teardow.patch
-pve/0069-PVE-backup-factor-out-get_single_device_info-helper.patch
-pve/0070-PVE-backup-implement-bitmap-support-for-external-bac.patch
-pve/0071-PVE-backup-backup-access-api-indicate-situation-wher.patch
-pve/0072-PVE-backup-backup-access-api-explicit-bitmap-mode-pa.patch
-pve/0073-PVE-backup-backup-access-api-simplify-bitmap-logic.patch
+pve/0042-PVE-backup-add-fleecing-option.patch
+pve/0043-adapt-machine-version-deprecation-for-Proxmox-VE.patch
+pve/0044-Revert-hpet-avoid-timer-storms-on-periodic-timers.patch
+pve/0045-Revert-hpet-store-full-64-bit-target-value-of-the-co.patch
+pve/0046-Revert-hpet-accept-64-bit-reads-and-writes.patch
+pve/0047-Revert-hpet-place-read-only-bits-directly-in-new_val.patch
+pve/0048-Revert-hpet-remove-unnecessary-variable-index.patch
+pve/0049-Revert-hpet-ignore-high-bits-of-comparator-in-32-bit.patch
+pve/0050-Revert-hpet-fix-and-cleanup-persistence-of-interrupt.patch
+pve/0051-PVE-backup-factor-out-helper-to-clear-backup-state-s.patch
+pve/0052-PVE-backup-factor-out-helper-to-initialize-backup-st.patch
+pve/0053-PVE-backup-add-target-ID-in-backup-state.patch
+pve/0054-PVE-backup-get-device-info-allow-caller-to-specify-f.patch
+pve/0055-PVE-backup-implement-backup-access-setup-and-teardow.patch
-- 
2.39.5


