From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH qemu v4 04/27] PVE backup: implement bitmap support for external backup access
Date: Thu, 14 Nov 2024 16:07:31 +0100	[thread overview]
Message-ID: <20241114150754.374376-5-f.ebner@proxmox.com> (raw)
In-Reply-To: <20241114150754.374376-1-f.ebner@proxmox.com>

There can be one dirty bitmap for each backup target ID (these are
tracked in the backup_access_bitmaps hash table). The QMP user can
specify the ID of the bitmap it would like to use. This ID is then
compared to the current one for the given target. If they match, the
bitmap is re-used, should it still exist on the drive, and re-created
otherwise. If there is a mismatch, the old bitmap is removed and a new
one is created.

The return value of the QMP command includes information about what
bitmap action was taken, similar to what the query-backup QMP command
returns for regular backups. It also includes the bitmap name and the
associated block node, so the management layer can then set up an NBD
export with the bitmap.
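
For illustration, an invocation and result might look roughly like the
following sketch (device name, node names, size and the reported action
are hypothetical and depend on the actual setup):

    -> { "execute": "backup-access-setup",
         "arguments": { "target-id": "external-provider",
                        "devlist": "drive-scsi0",
                        "bitmap-name": "external-provider-bitmap" } }
    <- { "return": [ { "node-name": "snapshot-access-0",
                       "device": "drive-scsi0",
                       "size": 34359738368,
                       "bitmap-node-name": "drive-scsi0-node",
                       "bitmap-name": "external-provider-bitmap",
                       "bitmap-action": "new" } ] }

The management layer could then attach the bitmap to its NBD export,
e.g. by passing the bitmap node and name to block-export-add.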

While the backup access is active, a background bitmap is also
required. This is necessary to implement bitmap handling according to
the original reference [0]. In particular:

- in the error case, new writes since the backup access was set up are
  in the background bitmap. Because of the failure, the previously
  tracked writes from the backup access bitmap are still required too.
  Thus, the backup access bitmap is merged with the background bitmap to
  get all new writes since the last backup.

- in the success case, tracking for the next incremental backup
  continues in the backup access bitmap. New writes since the backup
  access was set up are in the background bitmap. Because the backup was
  successful, the backup access bitmap is cleared and the background
  bitmap is merged back in to get only the new writes.

Since QEMU cannot know whether the backup was successful (except when
the failure already happens during the setup QMP command), the
management layer needs to tell it via the teardown QMP command.
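
A minimal sketch of the corresponding teardown call, assuming the
external backup completed successfully (the target-id is hypothetical
and must match the one used during setup):

    -> { "execute": "backup-access-teardown",
         "arguments": { "target-id": "external-provider",
                        "success": true } }
    <- { "return": {} }

With success=true, the backup access bitmap is cleared and the
background bitmap merged back in; with success=false, the background
bitmap is merged into the backup access bitmap instead, as described
above.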

The bitmap action is also recorded in the device info now.

[0]: https://lore.kernel.org/qemu-devel/b68833dd-8864-4d72-7c61-c134a9835036@ya.ru/

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---

No changes in v4.

 pve-backup.c         | 175 ++++++++++++++++++++++++++++++++++++++++++-
 pve-backup.h         |   2 +-
 qapi/block-core.json |  22 +++++-
 system/runstate.c    |   2 +-
 4 files changed, 193 insertions(+), 8 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 06a5ca9eda..4cadde48ac 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -15,6 +15,7 @@
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qerror.h"
 #include "qemu/cutils.h"
+#include "qemu/error-report.h"
 
 #if defined(CONFIG_MALLOC_TRIM)
 #include <malloc.h>
@@ -41,6 +42,7 @@
  */
 
 const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
+const char *BACKGROUND_BITMAP_NAME = "backup-access-background-bitmap";
 
 static struct PVEBackupState {
     struct {
@@ -72,6 +74,7 @@ static struct PVEBackupState {
     CoMutex backup_mutex;
     CoMutex dump_callback_mutex;
     char *target_id;
+    GHashTable *backup_access_bitmaps; // key=target_id, value=bitmap_name
 } backup_state;
 
 static void pvebackup_init(void)
@@ -99,6 +102,8 @@ typedef struct PVEBackupDevInfo {
     char* device_name;
     int completed_ret; // INT_MAX if not completed
     BdrvDirtyBitmap *bitmap;
+    BdrvDirtyBitmap *background_bitmap; // used for external backup access
+    PBSBitmapAction bitmap_action;
     BlockDriverState *target;
     BlockJob *job;
 } PVEBackupDevInfo;
@@ -362,6 +367,67 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
     qemu_co_mutex_unlock(&backup_state.backup_mutex);
 }
 
+/*
+ * New writes since the backup access was set up are in the background bitmap. Because of failure,
+ * the previously tracked writes in di->bitmap are still required too. Thus, merge with the
+ * background bitmap to get all new writes since the last backup.
+ */
+static void handle_backup_access_bitmaps_in_error_case(PVEBackupDevInfo *di)
+{
+    Error *local_err = NULL;
+
+    if (di->bs && di->background_bitmap) {
+        bdrv_drained_begin(di->bs);
+        if (di->bitmap) {
+            bdrv_enable_dirty_bitmap(di->bitmap);
+            if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap, NULL, &local_err)) {
+                warn_report("backup access: %s - could not merge bitmaps in error path - %s",
+                            di->device_name,
+                            local_err ? error_get_pretty(local_err) : "unknown error");
+                /*
+                 * Could not merge, drop original bitmap too.
+                 */
+                bdrv_release_dirty_bitmap(di->bitmap);
+            }
+        } else {
+            warn_report("backup access: %s - expected bitmap not present", di->device_name);
+        }
+        bdrv_release_dirty_bitmap(di->background_bitmap);
+        bdrv_drained_end(di->bs);
+    }
+}
+
+/*
+ * Continue tracking for next incremental backup in di->bitmap. New writes since the backup access
+ * was set up are in the background bitmap. Because the backup was successful, clear di->bitmap and
+ * merge back the background bitmap to get only the new writes.
+ */
+static void handle_backup_access_bitmaps_after_success(PVEBackupDevInfo *di)
+{
+    Error *local_err = NULL;
+
+    if (di->bs && di->background_bitmap) {
+        bdrv_drained_begin(di->bs);
+        if (di->bitmap) {
+            bdrv_enable_dirty_bitmap(di->bitmap);
+            bdrv_clear_dirty_bitmap(di->bitmap, NULL);
+            if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap, NULL, &local_err)) {
+                warn_report("backup access: %s - could not merge bitmaps after backup - %s",
+                            di->device_name,
+                            local_err ? error_get_pretty(local_err) : "unknown error");
+                /*
+                 * Could not merge, drop original bitmap too.
+                 */
+                bdrv_release_dirty_bitmap(di->bitmap);
+            }
+        } else {
+            warn_report("backup access: %s - expected bitmap not present", di->device_name);
+        }
+        bdrv_release_dirty_bitmap(di->background_bitmap);
+        bdrv_drained_end(di->bs);
+    }
+}
+
 static void cleanup_snapshot_access(PVEBackupDevInfo *di)
 {
     if (di->fleecing.snapshot_access) {
@@ -602,6 +668,21 @@ static void setup_all_snapshot_access_bh(void *opaque)
 
         bdrv_drained_begin(di->bs);
 
+        if (di->bitmap) {
+            BdrvDirtyBitmap *background_bitmap =
+                bdrv_create_dirty_bitmap(di->bs, PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE,
+                                         BACKGROUND_BITMAP_NAME, &local_err);
+            if (!background_bitmap) {
+                error_setg(errp, "%s - creating background bitmap for backup access failed: %s",
+                           di->device_name,
+                           local_err ? error_get_pretty(local_err) : "unknown error");
+                bdrv_drained_end(di->bs);
+                break;
+            }
+            di->background_bitmap = background_bitmap;
+            bdrv_disable_dirty_bitmap(di->bitmap);
+        }
+
         if (setup_snapshot_access(di, &local_err) < 0) {
             cleanup_snapshot_access(di);
             bdrv_drained_end(di->bs);
@@ -850,6 +931,7 @@ err:
 BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
     const char *target_id,
     const char *devlist,
+    const char *bitmap_name,
     Error **errp)
 {
     assert(qemu_in_coroutine());
@@ -909,6 +991,77 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
         backup_state.stat.bitmap_list = NULL;
     }
 
+    if (!backup_state.backup_access_bitmaps) {
+        backup_state.backup_access_bitmaps =
+            g_hash_table_new_full(g_str_hash, g_str_equal, free, free);
+    }
+
+    /* create bitmaps if requested */
+    l = di_list;
+    while (l) {
+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+        l = g_list_next(l);
+
+        di->block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE;
+
+        PBSBitmapAction action = PBS_BITMAP_ACTION_NOT_USED;
+        size_t dirty = di->size;
+
+        const char *old_bitmap_name =
+            (const char*)g_hash_table_lookup(backup_state.backup_access_bitmaps, target_id);
+
+        bool same_bitmap_name =
+            old_bitmap_name && bitmap_name && strcmp(bitmap_name, old_bitmap_name) == 0;
+
+        if (old_bitmap_name && !same_bitmap_name) {
+            BdrvDirtyBitmap *old_bitmap = bdrv_find_dirty_bitmap(di->bs, old_bitmap_name);
+            if (!old_bitmap) {
+                warn_report("setup backup access: expected old bitmap '%s' not found for drive "
+                            "'%s'", old_bitmap_name, di->device_name);
+            } else {
+                g_hash_table_remove(backup_state.backup_access_bitmaps, target_id);
+                bdrv_release_dirty_bitmap(old_bitmap);
+                action = PBS_BITMAP_ACTION_NOT_USED_REMOVED;
+            }
+        }
+
+        BdrvDirtyBitmap *bitmap = NULL;
+        if (bitmap_name) {
+            bitmap = bdrv_find_dirty_bitmap(di->bs, bitmap_name);
+            if (!bitmap) {
+                bitmap = bdrv_create_dirty_bitmap(di->bs, PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE,
+                                                  bitmap_name, errp);
+                if (!bitmap) {
+                    qemu_mutex_unlock(&backup_state.stat.lock);
+                    goto err;
+                }
+                bdrv_set_dirty_bitmap(bitmap, 0, di->size);
+                action = same_bitmap_name ? PBS_BITMAP_ACTION_INVALID : PBS_BITMAP_ACTION_NEW;
+            } else {
+                /* track clean chunks as reused */
+                dirty = MIN(bdrv_get_dirty_count(bitmap), di->size);
+                backup_state.stat.reused += di->size - dirty;
+                action = PBS_BITMAP_ACTION_USED;
+            }
+
+            if (!same_bitmap_name) {
+                g_hash_table_insert(backup_state.backup_access_bitmaps,
+                                    strdup(target_id), strdup(bitmap_name));
+            }
+
+        }
+
+        PBSBitmapInfo *info = g_malloc(sizeof(*info));
+        info->drive = g_strdup(di->device_name);
+        info->action = action;
+        info->size = di->size;
+        info->dirty = dirty;
+        backup_state.stat.bitmap_list = g_list_append(backup_state.stat.bitmap_list, info);
+
+        di->bitmap = bitmap;
+        di->bitmap_action = action;
+    }
+
     /* initialize global backup_state now */
 
     if (backup_state.stat.error) {
@@ -978,6 +1131,12 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
         info->value->node_name = g_strdup(bdrv_get_node_name(di->fleecing.snapshot_access));
         info->value->device = g_strdup(di->device_name);
         info->value->size = di->size;
+        if (bitmap_name) {
+            info->value->bitmap_node_name = g_strdup(bdrv_get_node_name(di->bs));
+            info->value->bitmap_name = g_strdup(bitmap_name);
+            info->value->bitmap_action = di->bitmap_action;
+            info->value->has_bitmap_action = true;
+        }
 
         *p_bai_next = info;
         p_bai_next = &info->next;
@@ -992,6 +1151,8 @@ err:
         PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
         l = g_list_next(l);
 
+        handle_backup_access_bitmaps_in_error_case(di);
+
         g_free(di->device_name);
         di->device_name = NULL;
 
@@ -1007,7 +1168,7 @@ err:
 /*
  * Caller needs to hold the backup mutex or the BQL.
  */
-void backup_access_teardown(void)
+void backup_access_teardown(bool success)
 {
     GList *l = backup_state.di_list;
 
@@ -1024,6 +1185,12 @@ void backup_access_teardown(void)
             di->fleecing.cbw = NULL;
         }
 
+        if (success) {
+            handle_backup_access_bitmaps_after_success(di);
+        } else {
+            handle_backup_access_bitmaps_in_error_case(di);
+        }
+
         g_free(di->device_name);
         di->device_name = NULL;
 
@@ -1039,13 +1206,13 @@ static void backup_access_teardown_bh(void *opaque)
 {
     CoCtxData *data = (CoCtxData*)opaque;
 
-    backup_access_teardown();
+    backup_access_teardown(*((bool*)data->data));
 
     /* return */
     aio_co_enter(data->ctx, data->co);
 }
 
-void coroutine_fn qmp_backup_access_teardown(const char *target_id, Error **errp)
+void coroutine_fn qmp_backup_access_teardown(const char *target_id, bool success, Error **errp)
 {
     assert(qemu_in_coroutine());
 
@@ -1075,6 +1242,7 @@ void coroutine_fn qmp_backup_access_teardown(const char *target_id, Error **errp
     CoCtxData waker = {
         .co = qemu_coroutine_self(),
         .ctx = qemu_get_current_aio_context(),
+        .data = &success,
     };
     aio_bh_schedule_oneshot(waker.ctx, backup_access_teardown_bh, &waker);
     qemu_coroutine_yield();
@@ -1284,6 +1452,7 @@ UuidInfo coroutine_fn *qmp_backup(
             }
 
             di->dev_id = dev_id;
+            di->bitmap_action = action;
 
             PBSBitmapInfo *info = g_malloc(sizeof(*info));
             info->drive = g_strdup(di->device_name);
diff --git a/pve-backup.h b/pve-backup.h
index 4033bc848f..9ebeef7c8f 100644
--- a/pve-backup.h
+++ b/pve-backup.h
@@ -11,6 +11,6 @@
 #ifndef PVE_BACKUP_H
 #define PVE_BACKUP_H
 
-void backup_access_teardown(void);
+void backup_access_teardown(bool success);
 
 #endif /* PVE_BACKUP_H */
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 4ddcc3216e..d069e46ed9 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1110,9 +1110,17 @@
 #
 # @size: the size of the block device in bytes.
 #
+# @bitmap-node-name: the block node name the dirty bitmap is associated to.
+#
+# @bitmap-name: the name of the dirty bitmap associated to the backup access.
+#
+# @bitmap-action: the action taken on the dirty bitmap.
+#
 ##
 { 'struct': 'BackupAccessInfo',
-  'data': { 'node-name': 'str', 'device': 'str', 'size': 'size' } }
+  'data': { 'node-name': 'str', 'device': 'str', 'size': 'size',
+            '*bitmap-node-name': 'str', '*bitmap-name': 'str',
+            '*bitmap-action': 'PBSBitmapAction' } }
 
 ##
 # @backup-access-setup:
@@ -1125,11 +1133,16 @@
 # @devlist: list of block device names (separated by ',', ';' or ':'). By
 #     default the backup includes all writable block devices.
 #
+# @bitmap-name: use/create a bitmap with this name. Re-using the same name
+#     allows for making incremental backups. Check the @bitmap-action in the
+#     result to see if you can actually re-use the bitmap or if it had to be
+#     newly created.
+#
 # Returns: a list of @BackupAccessInfo, one for each device.
 #
 ##
 { 'command': 'backup-access-setup',
-  'data': { 'target-id': 'str', '*devlist': 'str' },
+  'data': { 'target-id': 'str', '*devlist': 'str', '*bitmap-name': 'str' },
   'returns': [ 'BackupAccessInfo' ], 'coroutine': true }
 
 ##
@@ -1139,8 +1152,11 @@
 #
 # @target-id: the ID of the backup target.
 #
+# @success: whether the backup done by the external provider was successful.
+#
 ##
-{ 'command': 'backup-access-teardown', 'data': { 'target-id': 'str' },
+{ 'command': 'backup-access-teardown',
+  'data': { 'target-id': 'str', 'success': 'bool' },
   'coroutine': true }
 
 ##
diff --git a/system/runstate.c b/system/runstate.c
index 7e641e4484..b61996dd7a 100644
--- a/system/runstate.c
+++ b/system/runstate.c
@@ -873,7 +873,7 @@ void qemu_cleanup(int status)
      * The backup access is set up by a QMP command, but is neither owned by a monitor nor
      * associated to a BlockBackend. Need to tear it down manually here.
      */
-    backup_access_teardown();
+    backup_access_teardown(false);
     job_cancel_sync_all();
     bdrv_close_all();
 
-- 
2.39.5




