public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH v8 storage 0/9] backup provider API
@ 2025-04-03 12:30 Wolfgang Bumiller
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

v7: https://lore.proxmox.com/pve-devel/20250401173435.221892-1-f.ebner@proxmox.com/

I picked this up since Fiona is currently not available.

This contains 1 "major" change (and incorporates some minor doc feedback
from Max (sans the `=head` rewrite, feel free to send follow-up patches
for this @Max).

The change is that instead of having a 

    sub backup_vm_available_bitmaps($self, $volumes)

which fills bitmap names into the passed-along $volumes array,
there is now a

    sub backup_vm_query_incremental($self, $volumes)

which should *return* a new hash (modifying parameters in place is a bit
awkward if a plugin is not written in Perl, e.g. our Rust perlmod...).
Instead of bitmap *names*, this hash contains bitmap *modes*, which are:

    - new: create a new bitmap, or *recreate* an existing one
    - use: use an existing bitmap, or create one if none exists

Bitmap names have two issues:
1. It is not clear whether they are actually useful.
2. They do not provide a way to distinguish between the above modes.
   If the backup provider loses access to a drive which previously used
   a bitmap, the only way to recreate the bitmap was to *rename* it,
   which meant keeping track of those names.

   So now the only thing we need to keep track of (which for some backup
   providers can *probably* happen without any extra state) is which
   drive a bitmap belongs to. Tracking names, on the other hand, is
   quite an awkward interface (see the sketch below).
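
A minimal Perl sketch of a plugin-side implementation of the new hook
(the per-drive state in $self->{last_backup_ok} is hypothetical; a real
plugin decides based on whatever state it keeps):

    sub backup_vm_query_incremental {
        my ($self, $volumes) = @_;

        # Return a new hash mapping each drive to a bitmap mode instead of
        # modifying $volumes: 'use' keeps an existing bitmap (creating one
        # if none exists yet), 'new' (re)creates the bitmap from scratch.
        my $modes = {};
        for my $drive (sort keys %$volumes) {
            $modes->{$drive} =
                $self->{last_backup_ok}->{$drive} ? 'use' : 'new';
        }
        return $modes;
    }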

Changes in v8:
- storage: replace backup_vm_available_bitmaps() with backup_vm_query_incremental()
- qemu: add an explicit optional 'bitmap-mode' parameter to
  qmp_backup_access_setup()
- qemu-server: adapt to the storage API change and pass the bitmap-mode
  to the qmp-backup-access-setup command
- storage: adapt directory example
- storage: change the previous-info file to be a JSON hash mapping disks
  to their last backups
- qemu-server: fix 'bwlimit'/'bwlimiit' typo
- qemu: fix a leak on error in `get_single_device_info()`

build order:
1. qemu
2. storage
3. a) qemu-server: depends on qemu + storage
   b) pve-container: depends on storage
4. manager: depends on qemu-server/container I guess



* [pve-devel] [PATCH v8 qemu 01/10] PVE backup: clean up directly in setup_snapshot_access() when it fails
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

The only thing that might need to be cleaned up after
setup_snapshot_access() fails is dropping the cbw filter. Do so in
the single branch where it matters, inside setup_snapshot_access()
itself. This avoids the need for callers of setup_snapshot_access()
to use cleanup_snapshot_access() when the call fails.

Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
No changes to v7.

 pve-backup.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/pve-backup.c b/pve-backup.c
index 32352fb5ec..2408f182bc 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -576,6 +576,9 @@ static int setup_snapshot_access(PVEBackupDevInfo *di, Error **errp)
     di->fleecing.snapshot_access =
         bdrv_open(NULL, NULL, snapshot_access_opts, BDRV_O_RDWR | BDRV_O_UNMAP, &local_err);
     if (!di->fleecing.snapshot_access) {
+        bdrv_cbw_drop(di->fleecing.cbw);
+        di->fleecing.cbw = NULL;
+
         error_setg(errp, "setting up snapshot access for fleecing failed: %s",
                    local_err ? error_get_pretty(local_err) : "unknown error");
         return -1;
@@ -629,7 +632,6 @@ static void create_backup_jobs_bh(void *opaque) {
                 error_setg(errp, "%s - setting up snapshot access for fleecing failed: %s",
                            di->device_name,
                            local_err ? error_get_pretty(local_err) : "unknown error");
-                cleanup_snapshot_access(di);
                 bdrv_drained_end(di->bs);
                 break;
             }
-- 
2.39.5




* [pve-devel] [PATCH v8 qemu 02/10] PVE backup: factor out helper to clear backup state's bitmap list
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
No changes to v7.

 pve-backup.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 2408f182bc..915649b5f9 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -811,6 +811,23 @@ err:
     return di_list;
 }
 
+/*
+ * To be called with the backup_state.stat mutex held.
+ */
+static void clear_backup_state_bitmap_list(void) {
+
+    if (backup_state.stat.bitmap_list) {
+        GList *bl = backup_state.stat.bitmap_list;
+        while (bl) {
+            g_free(((PBSBitmapInfo *)bl->data)->drive);
+            g_free(bl->data);
+            bl = g_list_next(bl);
+        }
+        g_list_free(backup_state.stat.bitmap_list);
+        backup_state.stat.bitmap_list = NULL;
+    }
+}
+
 UuidInfo coroutine_fn *qmp_backup(
     const char *backup_file,
     const char *password,
@@ -898,16 +915,7 @@ UuidInfo coroutine_fn *qmp_backup(
     backup_state.stat.reused = 0;
 
     /* clear previous backup's bitmap_list */
-    if (backup_state.stat.bitmap_list) {
-        GList *bl = backup_state.stat.bitmap_list;
-        while (bl) {
-            g_free(((PBSBitmapInfo *)bl->data)->drive);
-            g_free(bl->data);
-            bl = g_list_next(bl);
-        }
-        g_list_free(backup_state.stat.bitmap_list);
-        backup_state.stat.bitmap_list = NULL;
-    }
+    clear_backup_state_bitmap_list();
 
     if (format == BACKUP_FORMAT_PBS) {
         if (!password) {
-- 
2.39.5




* [pve-devel] [PATCH v8 qemu 03/10] PVE backup: factor out helper to initialize backup state stat struct
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
No changes to v7.

 pve-backup.c | 62 ++++++++++++++++++++++++++++++++--------------------
 1 file changed, 38 insertions(+), 24 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 915649b5f9..88a981f81c 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -828,6 +828,43 @@ static void clear_backup_state_bitmap_list(void) {
     }
 }
 
+/*
+ * Initializes most of the backup state 'stat' struct. Note that 'reused' and
+ * 'bitmap_list' are not changed by this function and need to be handled by
+ * the caller. In particular, 'reused' needs to be set before calling this
+ * function.
+ *
+ * To be called with the backup_state.stat mutex held.
+ */
+static void initialize_backup_state_stat(
+    const char *backup_file,
+    uuid_t uuid,
+    size_t total)
+{
+    if (backup_state.stat.error) {
+        error_free(backup_state.stat.error);
+        backup_state.stat.error = NULL;
+    }
+
+    backup_state.stat.start_time = time(NULL);
+    backup_state.stat.end_time = 0;
+
+    if (backup_state.stat.backup_file) {
+        g_free(backup_state.stat.backup_file);
+    }
+    backup_state.stat.backup_file = g_strdup(backup_file);
+
+    uuid_copy(backup_state.stat.uuid, uuid);
+    uuid_unparse_lower(uuid, backup_state.stat.uuid_str);
+
+    backup_state.stat.total = total;
+    backup_state.stat.dirty = total - backup_state.stat.reused;
+    backup_state.stat.transferred = 0;
+    backup_state.stat.zero_bytes = 0;
+    backup_state.stat.finishing = false;
+    backup_state.stat.starting = true;
+}
+
 UuidInfo coroutine_fn *qmp_backup(
     const char *backup_file,
     const char *password,
@@ -1070,32 +1107,9 @@ UuidInfo coroutine_fn *qmp_backup(
         }
     }
     /* initialize global backup_state now */
-    /* note: 'reused' and 'bitmap_list' are initialized earlier */
-
-    if (backup_state.stat.error) {
-        error_free(backup_state.stat.error);
-        backup_state.stat.error = NULL;
-    }
-
-    backup_state.stat.start_time = time(NULL);
-    backup_state.stat.end_time = 0;
-
-    if (backup_state.stat.backup_file) {
-        g_free(backup_state.stat.backup_file);
-    }
-    backup_state.stat.backup_file = g_strdup(backup_file);
-
-    uuid_copy(backup_state.stat.uuid, uuid);
-    uuid_unparse_lower(uuid, backup_state.stat.uuid_str);
+    initialize_backup_state_stat(backup_file, uuid, total);
     char *uuid_str = g_strdup(backup_state.stat.uuid_str);
 
-    backup_state.stat.total = total;
-    backup_state.stat.dirty = total - backup_state.stat.reused;
-    backup_state.stat.transferred = 0;
-    backup_state.stat.zero_bytes = 0;
-    backup_state.stat.finishing = false;
-    backup_state.stat.starting = true;
-
     qemu_mutex_unlock(&backup_state.stat.lock);
 
     backup_state.speed = (has_speed && speed > 0) ? speed : 0;
-- 
2.39.5




* [pve-devel] [PATCH v8 qemu 04/10] PVE backup: add target ID in backup state
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

In preparation for allowing multiple backup providers and potentially
multiple targets for a given provider. Each backup target can then
have its own dirty bitmap, and there can be additional checks that the
current backup state is actually associated with the expected target.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
No changes to v7.

 pve-backup.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/pve-backup.c b/pve-backup.c
index 88a981f81c..8789a0667a 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -70,6 +70,7 @@ static struct PVEBackupState {
     JobTxn *txn;
     CoMutex backup_mutex;
     CoMutex dump_callback_mutex;
+    char *target_id;
 } backup_state;
 
 static void pvebackup_init(void)
@@ -865,6 +866,16 @@ static void initialize_backup_state_stat(
     backup_state.stat.starting = true;
 }
 
+/*
+ * To be called with the backup_state mutex held.
+ */
+static void backup_state_set_target_id(const char *target_id) {
+    if (backup_state.target_id) {
+        g_free(backup_state.target_id);
+    }
+    backup_state.target_id = g_strdup(target_id);
+}
+
 UuidInfo coroutine_fn *qmp_backup(
     const char *backup_file,
     const char *password,
@@ -904,7 +915,7 @@ UuidInfo coroutine_fn *qmp_backup(
 
     if (backup_state.di_list) {
         error_set(errp, ERROR_CLASS_GENERIC_ERROR,
-                  "previous backup not finished");
+                  "previous backup for target '%s' not finished", backup_state.target_id);
         qemu_co_mutex_unlock(&backup_state.backup_mutex);
         return NULL;
     }
@@ -1122,6 +1133,8 @@ UuidInfo coroutine_fn *qmp_backup(
     backup_state.vmaw = vmaw;
     backup_state.pbs = pbs;
 
+    backup_state_set_target_id("Proxmox");
+
     backup_state.di_list = di_list;
 
     uuid_info = g_malloc0(sizeof(*uuid_info));
-- 
2.39.5




* [pve-devel] [PATCH v8 qemu 05/10] PVE backup: get device info: allow caller to specify filter for which devices use fleecing
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

For providing snapshot-access to external backup providers, EFI and
TPM also need an associated fleecing image. The new caller will thus
need a different filter.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
No changes to v7.

 pve-backup.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 8789a0667a..755f1abcf1 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -719,7 +719,7 @@ static void create_backup_jobs_bh(void *opaque) {
 /*
  * EFI disk and TPM state are small and it's just not worth setting up fleecing for them.
  */
-static bool device_uses_fleecing(const char *device_id)
+static bool fleecing_no_efi_tpm(const char *device_id)
 {
     return strncmp(device_id, "drive-efidisk", 13) && strncmp(device_id, "drive-tpmstate", 14);
 }
@@ -731,7 +731,7 @@ static bool device_uses_fleecing(const char *device_id)
  */
 static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
     const char *devlist,
-    bool fleecing,
+    bool (*device_uses_fleecing)(const char*),
     Error **errp)
 {
     gchar **devs = NULL;
@@ -757,7 +757,7 @@ static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
             di->bs = bs;
             di->device_name = g_strdup(bdrv_get_device_name(bs));
 
-            if (fleecing && device_uses_fleecing(*d)) {
+            if (device_uses_fleecing && device_uses_fleecing(*d)) {
                 g_autofree gchar *fleecing_devid = g_strconcat(*d, "-fleecing", NULL);
                 BlockBackend *fleecing_blk = blk_by_name(fleecing_devid);
                 if (!fleecing_blk) {
@@ -924,7 +924,8 @@ UuidInfo coroutine_fn *qmp_backup(
     format = has_format ? format : BACKUP_FORMAT_VMA;
 
     bdrv_graph_co_rdlock();
-    di_list = get_device_info(devlist, has_fleecing && fleecing, &local_err);
+    di_list = get_device_info(devlist, (has_fleecing && fleecing) ? fleecing_no_efi_tpm : NULL,
+                              &local_err);
     bdrv_graph_co_rdunlock();
     if (local_err) {
         error_propagate(errp, local_err);
-- 
2.39.5




* [pve-devel] [PATCH v8 qemu 06/10] PVE backup: implement backup access setup and teardown API for external providers
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

For external backup providers, the state of the VM's disk images at
the time the backup is started is preserved via a snapshot-access
block node. Old data is moved to the fleecing image when new guest
writes come in. The snapshot-access block node, as well as the
associated bitmap in case of incremental backup, will be exported via
NBD to the external provider. The NBD export will be done by the
management layer; the missing functionality is setting up and tearing
down the snapshot-access block nodes, which this patch adds.

It is necessary to also set up fleecing for EFI and TPM disks, so that
old data can be moved out of the way when a new guest write comes in.

There can only be one regular backup or one active backup access at
a time, because both require replacing the original block node of the
drive. Thus the backup state is re-used, and checks are added to
prohibit regular backup while snapshot access is active and vice
versa.

The block nodes added by the backup-access-setup QMP call are not
tracked anywhere else (there is no job they are associated with, as
there is for regular backup). This requires adding a callback for
teardown when QEMU exits, i.e. in qemu_cleanup(). Otherwise, there
will be an assertion failure that the block graph is not empty when
QEMU exits before the backup-access-teardown QMP command is called.

The code for qmp_backup_access_setup() is based on the existing
qmp_backup() routine.

The return value for the setup QMP command contains information about
the snapshot-access block nodes that can be used by the management
layer to set up the NBD exports.
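
For illustration, a rough sketch of how a management-layer caller might
drive this (using qemu-server's mon_cmd helper; the target ID and device
names here are made up):

    use PVE::QemuServer::Monitor qw(mon_cmd);

    my $vmid = 100; # example VM ID

    # Set up snapshot access for the listed drives; the matching
    # 'drive-X-fleecing' block devices must already be attached.
    my $access_info = mon_cmd($vmid, 'backup-access-setup',
        'target-id' => 'provider0:store0',
        devlist => 'drive-scsi0;drive-efidisk0',
    );
    for my $info ($access_info->@*) {
        # $info->{'node-name'} is the snapshot-access block node to export
        # via NBD, $info->{device} the original drive, $info->{size} its
        # size in bytes.
    }
    # ... external provider reads the data via the NBD exports ...
    mon_cmd($vmid, 'backup-access-teardown', 'target-id' => 'provider0:store0');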

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
No changes to v7.

 pve-backup.c         | 264 ++++++++++++++++++++++++++++++++++++++++++-
 pve-backup.h         |  16 +++
 qapi/block-core.json |  49 ++++++++
 system/runstate.c    |   6 +
 4 files changed, 329 insertions(+), 6 deletions(-)
 create mode 100644 pve-backup.h

diff --git a/pve-backup.c b/pve-backup.c
index 755f1abcf1..091b5bd231 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -1,4 +1,5 @@
 #include "proxmox-backup-client.h"
+#include "pve-backup.h"
 #include "vma.h"
 
 #include "qemu/osdep.h"
@@ -588,6 +589,36 @@ static int setup_snapshot_access(PVEBackupDevInfo *di, Error **errp)
     return 0;
 }
 
+static void setup_all_snapshot_access_bh(void *opaque)
+{
+    assert(!qemu_in_coroutine());
+
+    CoCtxData *data = (CoCtxData*)opaque;
+    Error **errp = (Error**)data->data;
+
+    Error *local_err = NULL;
+
+    GList *l =  backup_state.di_list;
+    while (l) {
+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+        l = g_list_next(l);
+
+        bdrv_drained_begin(di->bs);
+
+        if (setup_snapshot_access(di, &local_err) < 0) {
+            bdrv_drained_end(di->bs);
+            error_setg(errp, "%s - setting up snapshot access failed: %s", di->device_name,
+                       local_err ? error_get_pretty(local_err) : "unknown error");
+            break;
+        }
+
+        bdrv_drained_end(di->bs);
+    }
+
+    /* return */
+    aio_co_enter(data->ctx, data->co);
+}
+
 /*
  * backup_job_create can *not* be run from a coroutine, so this can't either.
  * The caller is responsible that backup_mutex is held nonetheless.
@@ -724,6 +755,11 @@ static bool fleecing_no_efi_tpm(const char *device_id)
     return strncmp(device_id, "drive-efidisk", 13) && strncmp(device_id, "drive-tpmstate", 14);
 }
 
+static bool fleecing_all(const char *device_id)
+{
+    return true;
+}
+
 /*
  * Returns a list of device infos, which needs to be freed by the caller. In
  * case of an error, errp will be set, but the returned value might still be a
@@ -839,8 +875,9 @@ static void clear_backup_state_bitmap_list(void) {
  */
 static void initialize_backup_state_stat(
     const char *backup_file,
-    uuid_t uuid,
-    size_t total)
+    uuid_t *uuid,
+    size_t total,
+    bool starting)
 {
     if (backup_state.stat.error) {
         error_free(backup_state.stat.error);
@@ -855,15 +892,19 @@ static void initialize_backup_state_stat(
     }
     backup_state.stat.backup_file = g_strdup(backup_file);
 
-    uuid_copy(backup_state.stat.uuid, uuid);
-    uuid_unparse_lower(uuid, backup_state.stat.uuid_str);
+    if (uuid) {
+        uuid_copy(backup_state.stat.uuid, *uuid);
+        uuid_unparse_lower(*uuid, backup_state.stat.uuid_str);
+    } else {
+        backup_state.stat.uuid_str[0] = '\0';
+    }
 
     backup_state.stat.total = total;
     backup_state.stat.dirty = total - backup_state.stat.reused;
     backup_state.stat.transferred = 0;
     backup_state.stat.zero_bytes = 0;
     backup_state.stat.finishing = false;
-    backup_state.stat.starting = true;
+    backup_state.stat.starting = starting;
 }
 
 /*
@@ -876,6 +917,216 @@ static void backup_state_set_target_id(const char *target_id) {
     backup_state.target_id = g_strdup(target_id);
 }
 
+BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
+    const char *target_id,
+    const char *devlist,
+    Error **errp)
+{
+    assert(qemu_in_coroutine());
+
+    qemu_co_mutex_lock(&backup_state.backup_mutex);
+
+    Error *local_err = NULL;
+    GList *di_list = NULL;
+    GList *l;
+
+    if (backup_state.di_list) {
+        error_set(errp, ERROR_CLASS_GENERIC_ERROR,
+                  "previous backup for target '%s' not finished", backup_state.target_id);
+        qemu_co_mutex_unlock(&backup_state.backup_mutex);
+        return NULL;
+    }
+
+    bdrv_graph_co_rdlock();
+    di_list = get_device_info(devlist, fleecing_all, &local_err);
+    bdrv_graph_co_rdunlock();
+    if (local_err) {
+        error_propagate(errp, local_err);
+        goto err;
+    }
+    assert(di_list);
+
+    size_t total = 0;
+
+    l = di_list;
+    while (l) {
+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+        l = g_list_next(l);
+
+        ssize_t size = bdrv_getlength(di->bs);
+        if (size < 0) {
+            error_setg_errno(errp, -size, "bdrv_getlength failed");
+            goto err;
+        }
+        di->size = size;
+        total += size;
+
+        di->completed_ret = INT_MAX;
+    }
+
+    qemu_mutex_lock(&backup_state.stat.lock);
+    backup_state.stat.reused = 0;
+
+    /* clear previous backup's bitmap_list */
+    clear_backup_state_bitmap_list();
+
+    /* starting=false, because there is no associated QEMU job */
+    initialize_backup_state_stat(NULL, NULL, total, false);
+
+    qemu_mutex_unlock(&backup_state.stat.lock);
+
+    backup_state_set_target_id(target_id);
+
+    backup_state.vmaw = NULL;
+    backup_state.pbs = NULL;
+
+    backup_state.di_list = di_list;
+
+    /* Run setup_all_snapshot_access_bh outside of coroutine (in BH) but keep
+    * backup_mutex locked. This is fine, a CoMutex can be held across yield
+    * points, and we'll release it as soon as the BH reschedules us.
+    */
+    CoCtxData waker = {
+        .co = qemu_coroutine_self(),
+        .ctx = qemu_get_current_aio_context(),
+        .data = &local_err,
+    };
+    aio_bh_schedule_oneshot(waker.ctx, setup_all_snapshot_access_bh, &waker);
+    qemu_coroutine_yield();
+
+    if (local_err) {
+        error_propagate(errp, local_err);
+        goto err;
+    }
+
+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
+
+    BackupAccessInfoList *bai_head = NULL, **p_bai_next = &bai_head;
+
+    l = di_list;
+    while (l) {
+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+        l = g_list_next(l);
+
+        BackupAccessInfoList *info = g_malloc0(sizeof(*info));
+        info->value = g_malloc0(sizeof(*info->value));
+        info->value->node_name = g_strdup(bdrv_get_node_name(di->fleecing.snapshot_access));
+        info->value->device = g_strdup(di->device_name);
+        info->value->size = di->size;
+
+        *p_bai_next = info;
+        p_bai_next = &info->next;
+    }
+
+    return bai_head;
+
+err:
+
+    l = di_list;
+    while (l) {
+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+        l = g_list_next(l);
+
+        g_free(di->device_name);
+        di->device_name = NULL;
+
+        g_free(di);
+    }
+    g_list_free(di_list);
+    backup_state.di_list = NULL;
+
+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
+    return NULL;
+}
+
+/*
+ * Caller needs to hold the backup mutex or the BQL.
+ */
+void backup_access_teardown(void)
+{
+    GList *l = backup_state.di_list;
+
+    qemu_mutex_lock(&backup_state.stat.lock);
+    backup_state.stat.finishing = true;
+    qemu_mutex_unlock(&backup_state.stat.lock);
+
+    while (l) {
+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+        l = g_list_next(l);
+
+        if (di->fleecing.snapshot_access) {
+            bdrv_unref(di->fleecing.snapshot_access);
+            di->fleecing.snapshot_access = NULL;
+        }
+        if (di->fleecing.cbw) {
+            bdrv_cbw_drop(di->fleecing.cbw);
+            di->fleecing.cbw = NULL;
+        }
+
+        g_free(di->device_name);
+        di->device_name = NULL;
+
+        g_free(di);
+    }
+    g_list_free(backup_state.di_list);
+    backup_state.di_list = NULL;
+
+    qemu_mutex_lock(&backup_state.stat.lock);
+    backup_state.stat.end_time = time(NULL);
+    backup_state.stat.finishing = false;
+    qemu_mutex_unlock(&backup_state.stat.lock);
+}
+
+// Not done in a coroutine, because bdrv_co_unref() and cbw_drop() would just spawn BHs anyways.
+// Caller needs to hold the backup_state.backup_mutex lock
+static void backup_access_teardown_bh(void *opaque)
+{
+    CoCtxData *data = (CoCtxData*)opaque;
+
+    backup_access_teardown();
+
+    /* return */
+    aio_co_enter(data->ctx, data->co);
+}
+
+void coroutine_fn qmp_backup_access_teardown(const char *target_id, Error **errp)
+{
+    assert(qemu_in_coroutine());
+
+    qemu_co_mutex_lock(&backup_state.backup_mutex);
+
+    if (!backup_state.target_id) { // nothing to do
+        qemu_co_mutex_unlock(&backup_state.backup_mutex);
+        return;
+    }
+
+    /*
+     * Continue with target_id == NULL, used by the callback registered for qemu_cleanup()
+     */
+    if (target_id && strcmp(backup_state.target_id, target_id)) {
+        error_setg(errp, "cannot teardown backup access - got target %s instead of %s",
+                   target_id, backup_state.target_id);
+        qemu_co_mutex_unlock(&backup_state.backup_mutex);
+        return;
+    }
+
+    if (!strcmp(backup_state.target_id, "Proxmox VE")) {
+        error_setg(errp, "cannot teardown backup access for PVE - use backup-cancel instead");
+        qemu_co_mutex_unlock(&backup_state.backup_mutex);
+        return;
+    }
+
+    CoCtxData waker = {
+        .co = qemu_coroutine_self(),
+        .ctx = qemu_get_current_aio_context(),
+    };
+    aio_bh_schedule_oneshot(waker.ctx, backup_access_teardown_bh, &waker);
+    qemu_coroutine_yield();
+
+    qemu_co_mutex_unlock(&backup_state.backup_mutex);
+    return;
+}
+
 UuidInfo coroutine_fn *qmp_backup(
     const char *backup_file,
     const char *password,
@@ -1119,7 +1370,7 @@ UuidInfo coroutine_fn *qmp_backup(
         }
     }
     /* initialize global backup_state now */
-    initialize_backup_state_stat(backup_file, uuid, total);
+    initialize_backup_state_stat(backup_file, &uuid, total, true);
     char *uuid_str = g_strdup(backup_state.stat.uuid_str);
 
     qemu_mutex_unlock(&backup_state.stat.lock);
@@ -1298,5 +1549,6 @@ ProxmoxSupportStatus *qmp_query_proxmox_support(Error **errp)
     ret->pbs_masterkey = true;
     ret->backup_max_workers = true;
     ret->backup_fleecing = true;
+    ret->backup_access_api = true;
     return ret;
 }
diff --git a/pve-backup.h b/pve-backup.h
new file mode 100644
index 0000000000..4033bc848f
--- /dev/null
+++ b/pve-backup.h
@@ -0,0 +1,16 @@
+/*
+ * Backup code used by Proxmox VE
+ *
+ * Copyright (C) Proxmox Server Solutions
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef PVE_BACKUP_H
+#define PVE_BACKUP_H
+
+void backup_access_teardown(void);
+
+#endif /* PVE_BACKUP_H */
diff --git a/qapi/block-core.json b/qapi/block-core.json
index c581f1f238..3f092221ce 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1019,6 +1019,9 @@
 #
 # @pbs-library-version: Running version of libproxmox-backup-qemu0 library.
 #
+# @backup-access-api: Whether backup access API for external providers is
+#     supported or not.
+#
 # @backup-fleecing: Whether backup fleecing is supported or not.
 #
 # @backup-max-workers: Whether the 'max-workers' @BackupPerf setting is
@@ -1032,6 +1035,7 @@
             'pbs-dirty-bitmap-migration': 'bool',
             'pbs-masterkey': 'bool',
             'pbs-library-version': 'str',
+            'backup-access-api': 'bool',
             'backup-fleecing': 'bool',
             'backup-max-workers': 'bool' } }
 
@@ -1098,6 +1102,51 @@
 ##
 { 'command': 'query-pbs-bitmap-info', 'returns': ['PBSBitmapInfo'] }
 
+##
+# @BackupAccessInfo:
+#
+# Info associated to a snapshot access for backup.  For more information about
+# the bitmap see @BackupAccessBitmapMode.
+#
+# @node-name: the block node name of the snapshot-access node.
+#
+# @device: the device on top of which the snapshot access was created.
+#
+# @size: the size of the block device in bytes.
+#
+##
+{ 'struct': 'BackupAccessInfo',
+  'data': { 'node-name': 'str', 'device': 'str', 'size': 'size' } }
+
+##
+# @backup-access-setup:
+#
+# Set up snapshot access to VM drives for an external backup provider.  No other
+# backup or backup access can be done before tearing down the backup access.
+#
+# @target-id: the unique ID of the backup target.
+#
+# @devlist: list of block device names (separated by ',', ';' or ':'). By
+#     default the backup includes all writable block devices.
+#
+# Returns: a list of @BackupAccessInfo, one for each device.
+#
+##
+{ 'command': 'backup-access-setup',
+  'data': { 'target-id': 'str', '*devlist': 'str' },
+  'returns': [ 'BackupAccessInfo' ], 'coroutine': true }
+
+##
+# @backup-access-teardown:
+#
+# Tear down previously setup snapshot access for the same target.
+#
+# @target-id: the ID of the backup target.
+#
+##
+{ 'command': 'backup-access-teardown', 'data': { 'target-id': 'str' },
+  'coroutine': true }
+
 ##
 # @BlockDeviceTimedStats:
 #
diff --git a/system/runstate.c b/system/runstate.c
index c2c9afa905..6f93d7c2fb 100644
--- a/system/runstate.c
+++ b/system/runstate.c
@@ -60,6 +60,7 @@
 #include "sysemu/sysemu.h"
 #include "sysemu/tpm.h"
 #include "trace.h"
+#include "pve-backup.h"
 
 static NotifierList exit_notifiers =
     NOTIFIER_LIST_INITIALIZER(exit_notifiers);
@@ -920,6 +921,11 @@ void qemu_cleanup(int status)
      * requests happening from here on anyway.
      */
     bdrv_drain_all_begin();
+    /*
+     * The backup access is set up by a QMP command, but is neither owned by a monitor nor
+     * associated to a BlockBackend. Need to tear it down manually here.
+     */
+    backup_access_teardown();
     job_cancel_sync_all();
     bdrv_close_all();
 
-- 
2.39.5




* [pve-devel] [PATCH v8 qemu 07/10] PVE backup: factor out get_single_device_info() helper
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[WB: free di and di->device_name on error]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
Changes in v8: described in the trailers above ^

 pve-backup.c | 90 +++++++++++++++++++++++++++++++---------------------
 1 file changed, 53 insertions(+), 37 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 091b5bd231..8b7414f057 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -760,6 +760,57 @@ static bool fleecing_all(const char *device_id)
     return true;
 }
 
+static PVEBackupDevInfo coroutine_fn GRAPH_RDLOCK *get_single_device_info(
+    const char *device,
+    bool (*device_uses_fleecing)(const char*),
+    Error **errp)
+{
+    BlockBackend *blk = blk_by_name(device);
+    if (!blk) {
+        error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
+                  "Device '%s' not found", device);
+        return NULL;
+    }
+    BlockDriverState *bs = blk_bs(blk);
+    if (!bdrv_co_is_inserted(bs)) {
+        error_setg(errp, "Device '%s' has no medium", device);
+        return NULL;
+    }
+    PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
+    di->bs = bs;
+    di->device_name = g_strdup(bdrv_get_device_name(bs));
+
+    if (device_uses_fleecing && device_uses_fleecing(device)) {
+        g_autofree gchar *fleecing_devid = g_strconcat(device, "-fleecing", NULL);
+        BlockBackend *fleecing_blk = blk_by_name(fleecing_devid);
+        if (!fleecing_blk) {
+            error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
+                      "Device '%s' not found", fleecing_devid);
+            goto fail;
+        }
+        BlockDriverState *fleecing_bs = blk_bs(fleecing_blk);
+        if (!bdrv_co_is_inserted(fleecing_bs)) {
+            error_setg(errp, "Device '%s' has no medium", fleecing_devid);
+            goto fail;
+        }
+        /*
+         * Fleecing image needs to be the same size to act as a cbw target.
+         */
+        if (bs->total_sectors != fleecing_bs->total_sectors) {
+            error_setg(errp, "Size mismatch for '%s' - sector count %ld != %ld",
+                       fleecing_devid, fleecing_bs->total_sectors, bs->total_sectors);
+            goto fail;
+        }
+        di->fleecing.bs = fleecing_bs;
+    }
+
+    return di;
+fail:
+    g_free(di->device_name);
+    g_free(di);
+    return NULL;
+}
+
 /*
  * Returns a list of device infos, which needs to be freed by the caller. In
  * case of an error, errp will be set, but the returned value might still be a
@@ -778,45 +829,10 @@ static GList coroutine_fn GRAPH_RDLOCK *get_device_info(
 
         gchar **d = devs;
         while (d && *d) {
-            BlockBackend *blk = blk_by_name(*d);
-            if (!blk) {
-                error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
-                          "Device '%s' not found", *d);
+            PVEBackupDevInfo *di = get_single_device_info(*d, device_uses_fleecing, errp);
+            if (!di) {
                 goto err;
             }
-            BlockDriverState *bs = blk_bs(blk);
-            if (!bdrv_co_is_inserted(bs)) {
-                error_setg(errp, "Device '%s' has no medium", *d);
-                goto err;
-            }
-            PVEBackupDevInfo *di = g_new0(PVEBackupDevInfo, 1);
-            di->bs = bs;
-            di->device_name = g_strdup(bdrv_get_device_name(bs));
-
-            if (device_uses_fleecing && device_uses_fleecing(*d)) {
-                g_autofree gchar *fleecing_devid = g_strconcat(*d, "-fleecing", NULL);
-                BlockBackend *fleecing_blk = blk_by_name(fleecing_devid);
-                if (!fleecing_blk) {
-                    error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,
-                              "Device '%s' not found", fleecing_devid);
-                    goto err;
-                }
-                BlockDriverState *fleecing_bs = blk_bs(fleecing_blk);
-                if (!bdrv_co_is_inserted(fleecing_bs)) {
-                    error_setg(errp, "Device '%s' has no medium", fleecing_devid);
-                    goto err;
-                }
-                /*
-                 * Fleecing image needs to be the same size to act as a cbw target.
-                 */
-                if (bs->total_sectors != fleecing_bs->total_sectors) {
-                    error_setg(errp, "Size mismatch for '%s' - sector count %ld != %ld",
-                               fleecing_devid, fleecing_bs->total_sectors, bs->total_sectors);
-                    goto err;
-                }
-                di->fleecing.bs = fleecing_bs;
-            }
-
             di_list = g_list_append(di_list, di);
             d++;
         }
-- 
2.39.5




* [pve-devel] [PATCH v8 qemu 08/10] PVE backup: implement bitmap support for external backup access
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

There can be one dirty bitmap for each backup target ID for each
device (tracked in the backup_access_bitmaps hash table). The QMP
user can specify the ID of the bitmap it would like to use. This ID is
then compared to the current one for the given target and device. If
they match, the bitmap is re-used, should it still exist on the drive
(otherwise it is re-created). If there is a mismatch, the old bitmap
is removed and a new one is created.

The return value of the QMP command includes information about what
bitmap action was taken, similar to what the query-backup QMP command
returns for regular backup. It also includes the bitmap name and
associated block node, so the management layer can then set up an NBD
export with the bitmap.

While the backup access is active, a background bitmap is also
required. This is necessary to implement bitmap handling according to
the original reference [0]. In particular:

- in the error case, new writes since the backup access was set up are
  in the background bitmap. Because of failure, the previously tracked
  writes from the backup access bitmap are still required too. Thus,
  the bitmap is merged with the background bitmap to get all new
  writes since the last backup.

- in the success case, continue tracking for the next incremental
  backup in the backup access bitmap. New writes since the backup
  access was set up are in the background bitmap. Because the backup
  was successful, clear the backup access bitmap and merge back the
  background bitmap to get only the new writes.

Since QEMU cannot know if the backup was successful or not (except if
failure already happens during the setup QMP command), the management
layer needs to tell it via the teardown QMP command.

The bitmap action is also recorded in the device info now.

[0]: https://lore.kernel.org/qemu-devel/b68833dd-8864-4d72-7c61-c134a9835036@ya.ru/
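
A hedged sketch of the updated call sequence from the management layer
(again via qemu-server's mon_cmd; target ID, device and bitmap names are
made up):

    use JSON;
    use PVE::QemuServer::Monitor qw(mon_cmd);

    my $vmid = 100; # example VM ID

    my $access_info = mon_cmd($vmid, 'backup-access-setup',
        'target-id' => 'provider0:store0',
        devices => [
            { device => 'drive-scsi0', 'bitmap-name' => 'provider0-store0' },
        ],
    );
    # Each result entry now also carries 'bitmap-node-name', 'bitmap-name'
    # and 'bitmap-action' ('new', 'used', ...), telling the caller whether
    # an incremental backup based on the bitmap is possible.

    # ... export via NBD (including the bitmap), run the provider's backup ...

    # Report the outcome: on success, QEMU clears the backup access bitmap
    # and merges the background bitmap back in; on failure, it merges the
    # background bitmap into the backup access bitmap instead.
    mon_cmd($vmid, 'backup-access-teardown',
        'target-id' => 'provider0:store0', success => JSON::true);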

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
No changes to v7.

 pve-backup.c         | 196 +++++++++++++++++++++++++++++++++++++++++--
 pve-backup.h         |   2 +-
 qapi/block-core.json |  36 ++++++--
 system/runstate.c    |   2 +-
 4 files changed, 220 insertions(+), 16 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 8b7414f057..0490d1f421 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -15,6 +15,7 @@
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qerror.h"
 #include "qemu/cutils.h"
+#include "qemu/error-report.h"
 
 #if defined(CONFIG_MALLOC_TRIM)
 #include <malloc.h>
@@ -41,6 +42,7 @@
  */
 
 const char *PBS_BITMAP_NAME = "pbs-incremental-dirty-bitmap";
+const char *BACKGROUND_BITMAP_NAME = "backup-access-background-bitmap";
 
 static struct PVEBackupState {
     struct {
@@ -72,6 +74,7 @@ static struct PVEBackupState {
     CoMutex backup_mutex;
     CoMutex dump_callback_mutex;
     char *target_id;
+    GHashTable *backup_access_bitmaps; // key=target_id, value=bitmap_name
 } backup_state;
 
 static void pvebackup_init(void)
@@ -99,8 +102,11 @@ typedef struct PVEBackupDevInfo {
     char* device_name;
     int completed_ret; // INT_MAX if not completed
     BdrvDirtyBitmap *bitmap;
+    BdrvDirtyBitmap *background_bitmap; // used for external backup access
+    PBSBitmapAction bitmap_action;
     BlockDriverState *target;
     BlockJob *job;
+    char *requested_bitmap_name; // used by external backup access during initialization
 } PVEBackupDevInfo;
 
 static void pvebackup_propagate_error(Error *err)
@@ -362,6 +368,67 @@ static void coroutine_fn pvebackup_co_complete_stream(void *opaque)
     qemu_co_mutex_unlock(&backup_state.backup_mutex);
 }
 
+/*
+ * New writes since the backup access was set up are in the background bitmap. Because of failure,
+ * the previously tracked writes in di->bitmap are still required too. Thus, merge with the
+ * background bitmap to get all new writes since the last backup.
+ */
+static void handle_backup_access_bitmaps_in_error_case(PVEBackupDevInfo *di)
+{
+    Error *local_err = NULL;
+
+    if (di->bs && di->background_bitmap) {
+        bdrv_drained_begin(di->bs);
+        if (di->bitmap) {
+            bdrv_enable_dirty_bitmap(di->bitmap);
+            if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap, NULL, &local_err)) {
+                warn_report("backup access: %s - could not merge bitmaps in error path - %s",
+                            di->device_name,
+                            local_err ? error_get_pretty(local_err) : "unknown error");
+                /*
+                 * Could not merge, drop original bitmap too.
+                 */
+                bdrv_release_dirty_bitmap(di->bitmap);
+            }
+        } else {
+            warn_report("backup access: %s - expected bitmap not present", di->device_name);
+        }
+        bdrv_release_dirty_bitmap(di->background_bitmap);
+        bdrv_drained_end(di->bs);
+    }
+}
+
+/*
+ * Continue tracking for next incremental backup in di->bitmap. New writes since the backup access
+ * was set up are in the background bitmap. Because the backup was successful, clear di->bitmap and
+ * merge back the background bitmap to get only the new writes.
+ */
+static void handle_backup_access_bitmaps_after_success(PVEBackupDevInfo *di)
+{
+    Error *local_err = NULL;
+
+    if (di->bs && di->background_bitmap) {
+        bdrv_drained_begin(di->bs);
+        if (di->bitmap) {
+            bdrv_enable_dirty_bitmap(di->bitmap);
+            bdrv_clear_dirty_bitmap(di->bitmap, NULL);
+            if (!bdrv_merge_dirty_bitmap(di->bitmap, di->background_bitmap, NULL, &local_err)) {
+                warn_report("backup access: %s - could not merge bitmaps after backup - %s",
+                            di->device_name,
+                            local_err ? error_get_pretty(local_err) : "unknown error");
+                /*
+                 * Could not merge, drop original bitmap too.
+                 */
+                bdrv_release_dirty_bitmap(di->bitmap);
+            }
+        } else {
+            warn_report("backup access: %s - expected bitmap not present", di->device_name);
+        }
+        bdrv_release_dirty_bitmap(di->background_bitmap);
+        bdrv_drained_end(di->bs);
+    }
+}
+
 static void cleanup_snapshot_access(PVEBackupDevInfo *di)
 {
     if (di->fleecing.snapshot_access) {
@@ -605,6 +672,21 @@ static void setup_all_snapshot_access_bh(void *opaque)
 
         bdrv_drained_begin(di->bs);
 
+        if (di->bitmap) {
+            BdrvDirtyBitmap *background_bitmap =
+                bdrv_create_dirty_bitmap(di->bs, PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE,
+                                         BACKGROUND_BITMAP_NAME, &local_err);
+            if (!background_bitmap) {
+                error_setg(errp, "%s - creating background bitmap for backup access failed: %s",
+                           di->device_name,
+                           local_err ? error_get_pretty(local_err) : "unknown error");
+                bdrv_drained_end(di->bs);
+                break;
+            }
+            di->background_bitmap = background_bitmap;
+            bdrv_disable_dirty_bitmap(di->bitmap);
+        }
+
         if (setup_snapshot_access(di, &local_err) < 0) {
             bdrv_drained_end(di->bs);
             error_setg(errp, "%s - setting up snapshot access failed: %s", di->device_name,
@@ -935,7 +1017,7 @@ static void backup_state_set_target_id(const char *target_id) {
 
 BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
     const char *target_id,
-    const char *devlist,
+    BackupAccessSourceDeviceList *devices,
     Error **errp)
 {
     assert(qemu_in_coroutine());
@@ -954,12 +1036,17 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
     }
 
     bdrv_graph_co_rdlock();
-    di_list = get_device_info(devlist, fleecing_all, &local_err);
-    bdrv_graph_co_rdunlock();
-    if (local_err) {
-        error_propagate(errp, local_err);
-        goto err;
+    for (BackupAccessSourceDeviceList *it = devices; it; it = it->next) {
+        PVEBackupDevInfo *di = get_single_device_info(it->value->device, fleecing_all, &local_err);
+        if (!di) {
+            bdrv_graph_co_rdunlock();
+            error_propagate(errp, local_err);
+            goto err;
+        }
+        di->requested_bitmap_name = g_strdup(it->value->bitmap_name);
+        di_list = g_list_append(di_list, di);
     }
+    bdrv_graph_co_rdunlock();
     assert(di_list);
 
     size_t total = 0;
@@ -986,6 +1073,78 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
     /* clear previous backup's bitmap_list */
     clear_backup_state_bitmap_list();
 
+    if (!backup_state.backup_access_bitmaps) {
+        backup_state.backup_access_bitmaps =
+            g_hash_table_new_full(g_str_hash, g_str_equal, free, free);
+    }
+
+    /* create bitmaps if requested */
+    l = di_list;
+    while (l) {
+        PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
+        l = g_list_next(l);
+
+        di->block_size = PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE;
+
+        PBSBitmapAction action = PBS_BITMAP_ACTION_NOT_USED;
+        size_t dirty = di->size;
+
+        const char *old_bitmap_name =
+            (const char*)g_hash_table_lookup(backup_state.backup_access_bitmaps, target_id);
+
+        bool same_bitmap_name = old_bitmap_name
+            && di->requested_bitmap_name
+            && strcmp(di->requested_bitmap_name, old_bitmap_name) == 0;
+
+        if (old_bitmap_name && !same_bitmap_name) {
+            BdrvDirtyBitmap *old_bitmap = bdrv_find_dirty_bitmap(di->bs, old_bitmap_name);
+            if (!old_bitmap) {
+                warn_report("setup backup access: expected old bitmap '%s' not found for drive "
+                            "'%s'", old_bitmap_name, di->device_name);
+            } else {
+                g_hash_table_remove(backup_state.backup_access_bitmaps, target_id);
+                bdrv_release_dirty_bitmap(old_bitmap);
+                action = PBS_BITMAP_ACTION_NOT_USED_REMOVED;
+            }
+        }
+
+        BdrvDirtyBitmap *bitmap = NULL;
+        if (di->requested_bitmap_name) {
+            bitmap = bdrv_find_dirty_bitmap(di->bs, di->requested_bitmap_name);
+            if (!bitmap) {
+                bitmap = bdrv_create_dirty_bitmap(di->bs, PROXMOX_BACKUP_DEFAULT_CHUNK_SIZE,
+                                                  di->requested_bitmap_name, errp);
+                if (!bitmap) {
+                    qemu_mutex_unlock(&backup_state.stat.lock);
+                    goto err;
+                }
+                bdrv_set_dirty_bitmap(bitmap, 0, di->size);
+                action = PBS_BITMAP_ACTION_NEW;
+            } else {
+                /* track clean chunks as reused */
+                dirty = MIN(bdrv_get_dirty_count(bitmap), di->size);
+                backup_state.stat.reused += di->size - dirty;
+                action = PBS_BITMAP_ACTION_USED;
+            }
+
+            if (!same_bitmap_name) {
+                g_hash_table_insert(backup_state.backup_access_bitmaps,
+                                    strdup(target_id), strdup(di->requested_bitmap_name));
+            }
+
+        }
+
+        PBSBitmapInfo *info = g_malloc(sizeof(*info));
+        info->drive = g_strdup(di->device_name);
+        info->action = action;
+        info->size = di->size;
+        info->dirty = dirty;
+        backup_state.stat.bitmap_list = g_list_append(backup_state.stat.bitmap_list, info);
+
+        di->bitmap = bitmap;
+        di->bitmap_action = action;
+    }
+
     /* starting=false, because there is no associated QEMU job */
     initialize_backup_state_stat(NULL, NULL, total, false);
 
@@ -1029,6 +1188,12 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
         info->value->node_name = g_strdup(bdrv_get_node_name(di->fleecing.snapshot_access));
         info->value->device = g_strdup(di->device_name);
         info->value->size = di->size;
+        if (di->requested_bitmap_name) {
+            info->value->bitmap_node_name = g_strdup(bdrv_get_node_name(di->bs));
+            info->value->bitmap_name = g_strdup(di->requested_bitmap_name);
+            info->value->bitmap_action = di->bitmap_action;
+            info->value->has_bitmap_action = true;
+        }
 
         *p_bai_next = info;
         p_bai_next = &info->next;
@@ -1043,6 +1208,8 @@ err:
         PVEBackupDevInfo *di = (PVEBackupDevInfo *)l->data;
         l = g_list_next(l);
 
+        handle_backup_access_bitmaps_in_error_case(di);
+
         g_free(di->device_name);
         di->device_name = NULL;
 
@@ -1058,7 +1225,7 @@ err:
 /*
  * Caller needs to hold the backup mutex or the BQL.
  */
-void backup_access_teardown(void)
+void backup_access_teardown(bool success)
 {
     GList *l = backup_state.di_list;
 
@@ -1079,9 +1246,18 @@ void backup_access_teardown(void)
             di->fleecing.cbw = NULL;
         }
 
+        if (success) {
+            handle_backup_access_bitmaps_after_success(di);
+        } else {
+            handle_backup_access_bitmaps_in_error_case(di);
+        }
+
         g_free(di->device_name);
         di->device_name = NULL;
 
+        g_free(di->requested_bitmap_name);
+        di->requested_bitmap_name = NULL;
+
         g_free(di);
     }
     g_list_free(backup_state.di_list);
@@ -1099,13 +1275,13 @@ static void backup_access_teardown_bh(void *opaque)
 {
     CoCtxData *data = (CoCtxData*)opaque;
 
-    backup_access_teardown();
+    backup_access_teardown(*((bool*)data->data));
 
     /* return */
     aio_co_enter(data->ctx, data->co);
 }
 
-void coroutine_fn qmp_backup_access_teardown(const char *target_id, Error **errp)
+void coroutine_fn qmp_backup_access_teardown(const char *target_id, bool success, Error **errp)
 {
     assert(qemu_in_coroutine());
 
@@ -1135,6 +1311,7 @@ void coroutine_fn qmp_backup_access_teardown(const char *target_id, Error **errp
     CoCtxData waker = {
         .co = qemu_coroutine_self(),
         .ctx = qemu_get_current_aio_context(),
+        .data = &success,
     };
     aio_bh_schedule_oneshot(waker.ctx, backup_access_teardown_bh, &waker);
     qemu_coroutine_yield();
@@ -1335,6 +1512,7 @@ UuidInfo coroutine_fn *qmp_backup(
             }
 
             di->dev_id = dev_id;
+            di->bitmap_action = action;
 
             PBSBitmapInfo *info = g_malloc(sizeof(*info));
             info->drive = g_strdup(di->device_name);
diff --git a/pve-backup.h b/pve-backup.h
index 4033bc848f..9ebeef7c8f 100644
--- a/pve-backup.h
+++ b/pve-backup.h
@@ -11,6 +11,6 @@
 #ifndef PVE_BACKUP_H
 #define PVE_BACKUP_H
 
-void backup_access_teardown(void);
+void backup_access_teardown(bool success);
 
 #endif /* PVE_BACKUP_H */
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 3f092221ce..873db3f276 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1114,9 +1114,33 @@
 #
 # @size: the size of the block device in bytes.
 #
+# @bitmap-node-name: the block node name the dirty bitmap is associated to.
+#
+# @bitmap-name: the name of the dirty bitmap associated to the backup access.
+#
+# @bitmap-action: the action taken on the dirty bitmap.
+#
 ##
 { 'struct': 'BackupAccessInfo',
-  'data': { 'node-name': 'str', 'device': 'str', 'size': 'size' } }
+  'data': { 'node-name': 'str', 'device': 'str', 'size': 'size',
+            '*bitmap-node-name': 'str', '*bitmap-name': 'str',
+            '*bitmap-action': 'PBSBitmapAction' } }
+
+##
+# @BackupAccessSourceDevice:
+#
+# Source block device information for creating a backup access.
+#
+# @device: the block device name.
+#
+# @bitmap-name: use/create a bitmap with this name for the device. Re-using the
+#     same name allows for making incremental backups. Check the @bitmap-action
+#     in the result to see if you can actually re-use the bitmap or if it had to
+#     be newly created.
+#
+##
+{ 'struct': 'BackupAccessSourceDevice',
+  'data': { 'device': 'str', '*bitmap-name': 'str' } }
 
 ##
 # @backup-access-setup:
@@ -1126,14 +1150,13 @@
 #
 # @target-id: the unique ID of the backup target.
 #
-# @devlist: list of block device names (separated by ',', ';' or ':'). By
-#     default the backup includes all writable block devices.
+# @devices: list of devices for which to create the backup access.
 #
 # Returns: a list of @BackupAccessInfo, one for each device.
 #
 ##
 { 'command': 'backup-access-setup',
-  'data': { 'target-id': 'str', '*devlist': 'str' },
+  'data': { 'target-id': 'str', 'devices': [ 'BackupAccessSourceDevice' ] },
   'returns': [ 'BackupAccessInfo' ], 'coroutine': true }
 
 ##
@@ -1143,8 +1166,11 @@
 #
 # @target-id: the ID of the backup target.
 #
+# @success: whether the backup done by the external provider was successful.
+#
 ##
-{ 'command': 'backup-access-teardown', 'data': { 'target-id': 'str' },
+{ 'command': 'backup-access-teardown',
+  'data': { 'target-id': 'str', 'success': 'bool' },
   'coroutine': true }
 
 ##
diff --git a/system/runstate.c b/system/runstate.c
index 6f93d7c2fb..ef3277930f 100644
--- a/system/runstate.c
+++ b/system/runstate.c
@@ -925,7 +925,7 @@ void qemu_cleanup(int status)
      * The backup access is set up by a QMP command, but is neither owned by a monitor nor
      * associated to a BlockBackend. Need to tear it down manually here.
      */
-    backup_access_teardown();
+    backup_access_teardown(false);
     job_cancel_sync_all();
     bdrv_close_all();
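
For illustration, a rough sketch of how a QMP client on the Proxmox VE side
might drive the new flag once the external provider has finished. This is a
hedged example: the 'backup-provider' target ID and the mon_cmd() helper are
assumptions here, not part of this patch:

    # tear the backup access down, reporting whether the provider succeeded;
    # the success flag decides how dirty bitmaps are handled during teardown
    my $success = $err ? JSON::false : JSON::true;
    mon_cmd($vmid, 'backup-access-teardown',
        'target-id' => 'backup-provider', success => $success);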
 
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu 09/10] PVE backup: backup-access api: indicate situation where a bitmap was recreated
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (7 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 08/10] PVE backup: implement bitmap support for external backup access Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 10/10] PVE backup: backup-access-api: explicit bitmap-mode parameter Wolfgang Bumiller
                   ` (29 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

The backup-access api keeps track of what bitmap names got used for
which devices and thus knows when a bitmap went missing. Propagate
this information to the QMP user with a new 'missing-recreated'
variant for the bitmap action that was taken.
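
For illustration, a hedged sketch of how a QMP client could react to the new
variant in the result of backup-access-setup ($backup_access_info and
$log_warn are hypothetical names for this example):

    for my $di ($backup_access_info->@*) {
        next if ($di->{'bitmap-action'} // '') ne 'missing-recreated';
        # the bitmap was expected but gone, so this device gets a full backup
        $log_warn->("bitmap for '$di->{device}' was recreated, doing a full backup");
    }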

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
No changes to v7.

 pve-backup.c         | 6 +++++-
 qapi/block-core.json | 9 ++++++++-
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 0490d1f421..8909842292 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -1119,7 +1119,11 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
                     goto err;
                 }
                 bdrv_set_dirty_bitmap(bitmap, 0, di->size);
-                action = PBS_BITMAP_ACTION_NEW;
+                if (same_bitmap_name) {
+                    action = PBS_BITMAP_ACTION_MISSING_RECREATED;
+                } else {
+                    action = PBS_BITMAP_ACTION_NEW;
+                }
             } else {
                 /* track clean chunks as reused */
                 dirty = MIN(bdrv_get_dirty_count(bitmap), di->size);
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 873db3f276..58586170d9 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1067,9 +1067,16 @@
 #           base snapshot did not match the base given for the current job or
 #           the crypt mode has changed.
 #
+# @missing-recreated: A bitmap for incremental backup was expected to be
+#     present, but was missing and thus got recreated. For example, this can
+#     happen if the drive was re-attached or if the bitmap was deleted for some
+#     other reason. PBS does not currently keep track of this; the backup-access
+#     mechanism does.
+#
 ##
 { 'enum': 'PBSBitmapAction',
-  'data': ['not-used', 'not-used-removed', 'new', 'used', 'invalid'] }
+  'data': ['not-used', 'not-used-removed', 'new', 'used', 'invalid',
+           'missing-recreated'] }
 
 ##
 # @PBSBitmapInfo:
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu 10/10] PVE backup: backup-access-api: explicit bitmap-mode parameter
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (8 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 09/10] PVE backup: backup-access api: indicate situation where a bitmap was recreated Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 1/8] add storage_has_feature() helper function Wolfgang Bumiller
                   ` (28 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

This allows explicitly requesting that a bitmap be re-created under the same
name.
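
For illustration, the arguments a client might pass to backup-access-setup
with the new parameter. This is a sketch only; the mon_cmd() helper, the
target ID and the bitmap name are assumptions for this example:

    my $devices = [
        # force a fresh bitmap on scsi0, re-use (or create) one on scsi1
        { device => 'drive-scsi0', 'bitmap-name' => 'abc', 'bitmap-mode' => 'new' },
        { device => 'drive-scsi1', 'bitmap-name' => 'abc', 'bitmap-mode' => 'use' },
    ];
    mon_cmd($vmid, 'backup-access-setup',
        'target-id' => 'backup-provider', devices => $devices);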

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
New in v8

 pve-backup.c         | 17 ++++++++++++++++-
 qapi/block-core.json | 20 +++++++++++++++++++-
 2 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/pve-backup.c b/pve-backup.c
index 8909842292..18bcf29533 100644
--- a/pve-backup.c
+++ b/pve-backup.c
@@ -1043,7 +1043,16 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
             error_propagate(errp, local_err);
             goto err;
         }
-        di->requested_bitmap_name = g_strdup(it->value->bitmap_name);
+        if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NONE) {
+            di->bitmap_action = PBS_BITMAP_ACTION_NOT_USED;
+        } else {
+            di->requested_bitmap_name = g_strdup(it->value->bitmap_name);
+            if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_NEW) {
+                di->bitmap_action = PBS_BITMAP_ACTION_NEW;
+            } else if (it->value->bitmap_mode == BACKUP_ACCESS_SETUP_BITMAP_MODE_USE) {
+                di->bitmap_action = PBS_BITMAP_ACTION_USED;
+            }
+        }
         di_list = g_list_append(di_list, di);
     }
     bdrv_graph_co_rdunlock();
@@ -1096,6 +1105,12 @@ BackupAccessInfoList *coroutine_fn qmp_backup_access_setup(
             && di->requested_bitmap_name
             && strcmp(di->requested_bitmap_name, old_bitmap_name) == 0;
 
+        /* special case: if we explicitly requested a *new* bitmap, treat an
+         * existing bitmap as having a different name */
+        if (di->bitmap_action == PBS_BITMAP_ACTION_NEW) {
+            same_bitmap_name = false;
+        }
+
         if (old_bitmap_name && !same_bitmap_name) {
             BdrvDirtyBitmap *old_bitmap = bdrv_find_dirty_bitmap(di->bs, old_bitmap_name);
             if (!old_bitmap) {
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 58586170d9..e1c79649fb 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1145,9 +1145,27 @@
 #     in the result to see if you can actually re-use the bitmap or if it had to
 #     be newly created.
 #
+# @bitmap-mode: used to control whether the bitmap should be reused or
+#     recreated.
+#
 ##
 { 'struct': 'BackupAccessSourceDevice',
-  'data': { 'device': 'str', '*bitmap-name': 'str' } }
+  'data': { 'device': 'str', '*bitmap-name': 'str',
+            '*bitmap-mode': 'BackupAccessSetupBitmapMode' } }
+
+##
+# @BackupAccessSetupBitmapMode:
+#
+# How to set up a bitmap for a device for @backup-access-setup.
+# 
+# @none: do not use a bitmap. Removes an existing bitmap if present.
+#
+# @new: create and use a new bitmap.
+#
+# @use: try to re-use an existing bitmap. Create a new one if it doesn't exist.
+##
+{ 'enum': 'BackupAccessSetupBitmapMode',
+  'data': ['none', 'new', 'use' ] }
 
 ##
 # @backup-access-setup:
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 storage 1/8] add storage_has_feature() helper function
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (9 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 10/10] PVE backup: backup-access-api: explicit bitmap-mode parameter Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 2/8] common: add deallocate " Wolfgang Bumiller
                   ` (27 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Which looks up whether a storage supports a given feature in its
'plugindata'. This is intentionally kept simple and not implemented
as a plugin method for now. Should it ever become more complex,
requiring plugins to override the default implementation, it can
later be changed to a method.
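
As an illustration, a plugin declares a feature in its plugindata() and
callers query it via the new helper. The 'backup-provider' feature is the one
used later in this series; the content declaration is just an example:

    sub plugindata {
        return {
            content => [ { backup => 1 }, { backup => 1 } ],
            features => { 'backup-provider' => 1 },
        };
    }

    # caller side
    if (PVE::Storage::storage_has_feature($cfg, $storeid, 'backup-provider')) {
        # the storage is backed by an external backup provider
    }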

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 src/PVE/Storage.pm        |  8 ++++++++
 src/PVE/Storage/Plugin.pm | 10 ++++++++++
 2 files changed, 18 insertions(+)

diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index c5d4ff8..8cbfb4f 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -213,6 +213,14 @@ sub storage_check_enabled {
     return storage_check_node($cfg, $storeid, $node, $noerr);
 }
 
+sub storage_has_feature {
+    my ($cfg, $storeid, $feature) = @_;
+
+    my $scfg = storage_config($cfg, $storeid);
+
+    return PVE::Storage::Plugin::storage_has_feature($scfg->{type}, $feature);
+}
+
 # storage_can_replicate:
 # return true if storage supports replication
 # (volumes allocated with vdisk_alloc() has replication feature)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 65cf43f..80daeea 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -246,6 +246,16 @@ sub dirs_hash_to_string {
     return join(',', map { "$_=$hash->{$_}" } sort keys %$hash);
 }
 
+sub storage_has_feature {
+    my ($type, $feature) = @_;
+
+    my $data = $defaultData->{plugindata}->{$type};
+    if (my $features = $data->{features}) {
+	return $features->{$feature};
+    }
+    return;
+}
+
 sub default_format {
     my ($scfg) = @_;
 
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 storage 2/8] common: add deallocate helper function
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (10 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 1/8] add storage_has_feature() helper function Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 3/8] plugin: introduce new_backup_provider() method Wolfgang Bumiller
                   ` (26 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

For punching holes via fallocate. This will be useful for the external
backup provider API to discard parts of the source. The 'file-handle'
mechanism there uses a fuse mount, which does not implement the
BLKDISCARD ioctl, but does implement fallocate.
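
As an illustration, a minimal sketch of punching a hole into an already
backed-up range of a fleecing image accessed through the fuse mount (the path
and range are made up for this example):

    use PVE::Storage::Common;

    open(my $fh, '+<', '/run/pve/vm-104-fleece.raw') # hypothetical path
        or die "open failed - $!\n";
    # discard the first MiB after it has been backed up successfully
    PVE::Storage::Common::deallocate($fh, 0, 1024 * 1024);
    close($fh);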

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 src/PVE/Storage/Common.pm | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/src/PVE/Storage/Common.pm b/src/PVE/Storage/Common.pm
index 3ae20dd..32ed26a 100644
--- a/src/PVE/Storage/Common.pm
+++ b/src/PVE/Storage/Common.pm
@@ -3,6 +3,13 @@ package PVE::Storage::Common;
 use strict;
 use warnings;
 
+use PVE::Syscall;
+
+use constant {
+    FALLOC_FL_KEEP_SIZE => 0x01, # see linux/falloc.h
+    FALLOC_FL_PUNCH_HOLE => 0x02, # see linux/falloc.h
+};
+
 =pod
 
 =head1 NAME
@@ -51,4 +58,27 @@ sub align_size_up : prototype($$) {
     return $aligned_size;
 }
 
+=pod
+
+=head3 deallocate
+
+    deallocate($file_handle, $offset, $length)
+
+Deallocates the range with C<$length> many bytes starting from offset C<$offset>
+for the file associated to the file handle C<$file_handle>. Dies on failure.
+
+=cut
+
+sub deallocate : prototype($$$) {
+    my ($file_handle, $offset, $length) = @_;
+
+    my $mode = FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE;
+    $offset = int($offset);
+    $length = int($length);
+
+    if (syscall(PVE::Syscall::fallocate, fileno($file_handle), $mode, $offset, $length) != 0) {
+	die "fallocate: punch hole failed (offset: $offset, length: $length) - $!\n";
+    }
+}
+
 1;
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 storage 3/8] plugin: introduce new_backup_provider() method
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (11 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 2/8] common: add deallocate " Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 4/8] config api/plugins: let plugins define sensitive properties themselves Wolfgang Bumiller
                   ` (25 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

The new_backup_provider() method can be used by storage plugins for
external backup providers. If the method returns a provider, Proxmox
VE will use callbacks to that provider for backups and restore instead
of using its usual backup/restore mechanisms.

The backup provider API is split into two parts, both of which again
need different implementations for VM and LXC guests:

1. Backup API

In Proxmox VE, a backup job consists of backup tasks for individual
guests. There are methods for initialization and cleanup of the job,
i.e. job_init() and job_cleanup() and for each guest backup, i.e.
backup_init() and backup_cleanup().

The backup_get_mechanism() method is used to decide on the backup
mechanism. Currently, 'file-handle' or 'nbd' for VMs, and 'directory'
for containers are possible. The method also lets the plugin indicate
whether to use a bitmap for incremental VM backup or not. It is enough
to implement one mechanism for VMs and one mechanism for containers.

Next, there are methods for backing up the guest's configuration and
data, backup_vm() for VM backup and backup_container() for container
backup, with the latter running as the (unprivileged) container root
user.

Finally, there are some helpers, e.g. for getting the provider name or
the volume ID for the backup target, as well as for handling the backup
log.

The backup transaction looks as follows:

First, job_init() is called that can be used to check backup server
availability and prepare the connection. Then for each guest
backup_init() followed by backup_vm() or backup_container() and finally
backup_cleanup(). Afterwards job_cleanup() is called. For containers,
there is an additional backup_container_prepare() call while still
privileged. The actual backup_container() call happens as the
(unprivileged) container root user, so that the file owner and group IDs
match the container's perspective.
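
As an illustration of that transaction, a rough sketch of the caller-side
order for VM backups (error handling trimmed, variable names made up):

    $provider->job_init($start_time);
    for my $vmid ($vmids->@*) {
        $provider->backup_init($vmid, 'qemu', $start_time);
        eval { $provider->backup_vm($vmid, $guest_config, $volumes, $info) };
        my $err = $@;
        $provider->backup_cleanup($vmid, 'qemu', !$err, $err ? { error => $err } : {});
    }
    $provider->job_cleanup();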

1.1 Backup Mechanisms

VM:

Access to the data on the VM's disk from the time the backup started
is made available via a so-called "snapshot access". This is either
the full image, or in case a bitmap is used, the dirty parts of the
image since the last time the bitmap was used for a successful backup.
Reading outside of the dirty parts will result in an error. After
backing up each part of the disk, it should be discarded in the export
to avoid unnecessary space usage on the Proxmox VE side (there is an
associated fleecing image).

VM mechanism 'file-handle':

The snapshot access is exposed via a file descriptor. A subroutine to
read the dirty regions for incremental backup is provided as well.

VM mechanism 'nbd':

The snapshot access and, if used, bitmap are exported via NBD.

Container mechanism 'directory':

A copy or snapshot of the container's filesystem state is made
available as a directory. The method is executed inside the user
namespace associated to the container.

2. Restore API

The restore_get_mechanism() method is used to decide on the restore
mechanism. Currently, 'qemu-img' for VMs, and 'directory' or 'tar' for
containers are possible. It is enough to implement one mechanism for
VMs and one mechanism for containers.

Next, methods for extracting the guest and firewall configuration and
the implementations of the restore mechanism via a pair of methods: an
init method, for making the data available to Proxmox VE and a cleanup
method that is called after restore.

2.1. Restore Mechanisms

VM mechanism 'qemu-img':

The backup provider gives a path to the disk image that will be
restored. The path needs to be something 'qemu-img' can deal with,
e.g. can also be an NBD URI or similar.

Container mechanism 'directory':

The backup provider gives the path to a directory with the full
filesystem structure of the container.

Container mechanism 'tar':

The backup provider gives the path to a (potentially compressed) tar
archive with the full filesystem structure of the container.

See the PVE::BackupProvider::Plugin module for the full API
documentation.
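
To give an idea of the shape of an implementation, a minimal skeleton of a
provider module. The module name and chosen mechanisms are made up, and all
other methods from the base module still need to be implemented:

    package PVE::BackupProvider::Plugin::Example;

    use strict;
    use warnings;

    use base qw(PVE::BackupProvider::Plugin::Base);

    sub new {
        my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;

        return bless {
            scfg => $scfg,
            storeid => $storeid,
            log => $log_function,
        }, $class;
    }

    sub provider_name { return "example"; }

    sub backup_get_mechanism {
        my ($self, $vmid, $vmtype) = @_;

        return 'nbd' if $vmtype eq 'qemu';
        return 'directory' if $vmtype eq 'lxc';
        die "unsupported guest type '$vmtype'\n";
    }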

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[WB: replace backup_vm_available_bitmaps with
     backup_vm_query_incremental, which instead of a bitmap name
     provides a bitmap mode that is 'new' (create or *recreate* a
     bitmap) or 'use' (use an existing bitmap, or create one if none
     exists)]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
Changes in v8: described in the trailers above ^

 src/PVE/BackupProvider/Makefile        |    3 +
 src/PVE/BackupProvider/Plugin/Base.pm  | 1126 ++++++++++++++++++++++++
 src/PVE/BackupProvider/Plugin/Makefile |    5 +
 src/PVE/Makefile                       |    1 +
 src/PVE/Storage.pm                     |    8 +
 src/PVE/Storage/Plugin.pm              |   15 +
 6 files changed, 1158 insertions(+)
 create mode 100644 src/PVE/BackupProvider/Makefile
 create mode 100644 src/PVE/BackupProvider/Plugin/Base.pm
 create mode 100644 src/PVE/BackupProvider/Plugin/Makefile

diff --git a/src/PVE/BackupProvider/Makefile b/src/PVE/BackupProvider/Makefile
new file mode 100644
index 0000000..f018cef
--- /dev/null
+++ b/src/PVE/BackupProvider/Makefile
@@ -0,0 +1,3 @@
+.PHONY: install
+install:
+	make -C Plugin install
diff --git a/src/PVE/BackupProvider/Plugin/Base.pm b/src/PVE/BackupProvider/Plugin/Base.pm
new file mode 100644
index 0000000..e382d57
--- /dev/null
+++ b/src/PVE/BackupProvider/Plugin/Base.pm
@@ -0,0 +1,1126 @@
+package PVE::BackupProvider::Plugin::Base;
+
+use strict;
+use warnings;
+
+=head1 NAME
+
+PVE::BackupProvider::Plugin::Base - Base Plugin for Backup Provider API
+
+=head1 SYNOPSIS
+
+    use base qw(PVE::BackupProvider::Plugin::Base);
+
+=head1 DESCRIPTION
+
+This module serves as the base for any module implementing the API that Proxmox
+VE uses to interface with external backup providers. The API is used for
+creating and restoring backups. A backup provider also needs to provide a
+storage plugin for integration with the front-end. The API here is used by the
+backup stack in the backend.
+
+=head2 1. Backup API
+
+In Proxmox VE, a backup job consists of backup tasks for individual guests.
+There are methods for initialization and cleanup of the job, i.e. job_init() and
+job_cleanup(), and for each guest backup, i.e. backup_init() and
+backup_cleanup().
+
+The backup_get_mechanism() method is used to decide on the backup mechanism.
+Currently, 'file-handle' or 'nbd' for VMs, and 'directory' for containers are
+possible. The method also lets the plugin indicate whether to use a bitmap for
+incremental VM backup or not. It is enough to implement one mechanism for VMs
+and one mechanism for containers.
+
+Next, there are methods for backing up the guest's configuration and data,
+backup_vm() for VM backup and backup_container() for container backup.
+
+Finally, some helpers like provider_name() for getting the name of the backup
+provider and backup_handle_log_file() for handling the backup task log.
+
+The backup transaction looks as follows:
+
+First, job_init() is called that can be used to check backup server availability
+and prepare the connection. Then for each guest backup_init() followed by
+backup_vm() or backup_container() and finally backup_cleanup(). Afterwards
+job_cleanup() is called. For containers, there is an additional
+backup_container_prepare() call while still privileged. The actual
+backup_container() call happens as the (unprivileged) container root user, so
+that the file owner and group IDs match the container's perspective.
+
+=head3 1.1 Backup Mechanisms
+
+VM:
+
+Access to the data on the VM's disk is made available via a "snapshot access"
+abstraction. This is effectively a snapshot of the data from the time the backup
+is started. New guest writes after the backup started do not affect this. The
+"snapshot access" represents either the full image, or in case a bitmap is used,
+the dirty parts of the image since the last time the bitmap was used for a
+successful backup.
+
+NOTE: If a bitmap is used, the "snapshot access" is really only the dirty parts
+of the image. You have to query the bitmap to see which parts of the image are
+accessible/present. Reading or doing any other operation (like querying the
+block allocation status via NBD) outside of the dirty parts of the image will
+result in an error. In particular, if there were no new writes since the last
+successful backup, i.e. the bitmap is fully clean, then the image cannot be
+accessed at all; you can only query the dirty bitmap.
+
+After backing up each part of the disk, it should be discarded in the export to
+avoid unnecessary space usage on the Proxmox VE side (there is an associated
+fleecing image).
+
+VM mechanism 'file-handle':
+
+The snapshot access is exposed via a file descriptor. A subroutine to read the
+dirty regions for incremental backup is provided as well.
+
+VM mechanism 'nbd':
+
+The snapshot access and, if used, bitmap are exported via NBD. For the
+specification of the NBD metadata context for dirty bitmaps, see:
+L<https://qemu.readthedocs.io/en/master/interop/nbd.html>
+
+Container mechanism 'directory':
+
+A copy or snapshot of the container's filesystem state is made available as a
+directory.
+
+=head2 2. Restore API
+
+The restore_get_mechanism() method is used to decide on the restore mechanism.
+Currently, 'qemu-img' for VMs, and 'directory' or 'tar' for containers are
+possible. It is enough to implement one mechanism for VMs and one mechanism for
+containers.
+
+Next, methods for extracting the guest and firewall configuration and the
+implementations of the restore mechanism via a pair of methods: an init method,
+for making the data available to Proxmox VE and a cleanup method that is called
+after restore.
+
+=head3 2.1. Restore Mechanisms
+
+VM mechanism 'qemu-img':
+
+The backup provider gives a path to the disk image that will be restored. The
+path needs to be something 'qemu-img' can deal with, e.g. can also be an NBD URI
+or similar.
+
+Container mechanism 'directory':
+
+The backup provider gives the path to a directory with the full filesystem
+structure of the container.
+
+Container mechanism 'tar':
+
+The backup provider gives the path to a (potentially compressed) tar archive
+with the full filesystem structure of the container.
+
+=head1 METHODS
+
+=cut
+
+# plugin methods
+
+=over
+
+=item C<new>
+
+The constructor. Returns a blessed instance of the backup provider class.
+
+Parameters:
+
+=over
+
+=item C<$storage_plugin>
+
+The associated storage plugin class.
+
+=item C<$scfg>
+
+The storage configuration of the associated storage.
+
+=item C<$storeid>
+
+The storage ID of the associated storage.
+
+=item C<$log_function>
+
+The function signature is C<$log_function($log_level, $message)>. This log
+function can be used to write to the backup task log in Proxmox VE.
+
+=over
+
+=item C<$log_level>
+
+Either C<info>, C<warn> or C<err> for informational messages, warnings or error
+messages.
+
+=item C<$message>
+
+The message to be printed.
+
+=back
+
+=back
+
+=back
+
+=cut
+sub new {
+    my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<provider_name>
+
+Returns the name of the backup provider. It will be printed in some log lines.
+
+=back
+
+=cut
+sub provider_name {
+    my ($self) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<job_init>
+
+Called when the job is started. Can be used to check the backup server
+availability and prepare for the upcoming backup tasks of individual guests. For
+example, to establish a connection to be used during C<backup_container()> or
+C<backup_vm()>.
+
+Parameters:
+
+=over
+
+=item C<$start_time>
+
+Unix time-stamp of when the job started.
+
+=back
+
+=back
+
+=cut
+sub job_init {
+    my ($self, $start_time) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<job_cleanup>
+
+Called when the job is finished to allow for any potential cleanup related to
+the backup server. Called in both success and failure scenarios.
+
+=back
+
+=cut
+sub job_cleanup {
+    my ($self) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<backup_init>
+
+Called before the backup of the given guest is made. The archive name is
+determined for the backup task and returned to the caller via a hash reference:
+
+    my $res = $backup_provider->backup_init($vmid, $vmtype, $start_time);
+    my $archive_name = $res->{'archive-name'};
+
+The archive name must contain only characters from the
+C<$PVE::Storage::SAFE_CHAR_CLASS_RE> character class as well as forward slash
+C</> and colon C<:>.
+
+Use C<$self> to remember it for the C<backup_container()> or C<backup_vm()>
+method that will be called later.
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the guest being backed up.
+
+=item C<$vmtype>
+
+The type of the guest being backed up. Currently, either C<qemu> or C<lxc>.
+
+=item C<$start_time>
+
+Unix time-stamp of when the guest backup started.
+
+=back
+
+=back
+
+=cut
+sub backup_init {
+    my ($self, $vmid, $vmtype, $start_time) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<backup_cleanup>
+
+Called when the guest backup is finished. Called in both success and failure
+scenarios. In the success case, statistics about the task after completion of
+the backup are returned via a hash reference. Currently, only the archive size
+is part of the result:
+
+    my $res = $backup_provider->backup_cleanup($vmid, $vmtype, $success, $info);
+    my $stats = $res->{stats};
+    my $archive_size = $stats->{'archive-size'};
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the guest being backed up.
+
+=item C<$vmtype>
+
+The type of the guest being backed up. Currently, either C<qemu> or C<lxc>.
+Might be C<undef> in phase C<abort> for certain error scenarios.
+
+=item C<$success>
+
+Boolean indicating whether the job was successful or not. Success means that all
+individual guest backups were successful.
+
+=item C<$info>
+
+A hash reference with optional information. Currently, the error message in case
+of a failure.
+
+=over
+
+=item C<< $info->{error} >>
+
+Present if there was a failure. The error message indicating the failure.
+
+=back
+
+=back
+
+=back
+
+=cut
+sub backup_cleanup {
+    my ($self, $vmid, $vmtype, $success, $info) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<backup_get_mechanism>
+
+Tell the caller what mechanism to use for backing up the guest. The backup
+method for the guest, i.e. C<backup_vm> for guest type C<qemu> or
+C<backup_container> for guest type C<lxc>, will later be called with
+mechanism-specific information. See those methods for more information.
+
+Returns the mechanism:
+
+    my $mechanism = $backup_provider->backup_get_mechanism($vmid, $vmtype);
+
+Currently C<nbd> and C<file-handle> for guest type C<qemu> and C<directory> for
+guest type C<lxc> are possible. If there is no support for one of the guest
+types, the method should either C<die> or return C<undef>.
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the guest being backed up.
+
+=item C<$vmtype>
+
+The type of the guest being backed up. Currently, either C<qemu> or C<lxc>.
+
+=back
+
+=back
+
+=cut
+sub backup_get_mechanism {
+    my ($self, $vmid, $vmtype) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<backup_handle_log_file>
+
+Handle the backup's log file which contains the task log for the backup. For
+example, a provider might want to upload a copy to the backup server.
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the guest being backed up.
+
+=item C<$filename>
+
+Path to the file with the backup log.
+
+=back
+
+=back
+
+=cut
+sub backup_handle_log_file {
+    my ($self, $vmid, $filename) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<backup_vm_query_incremental>
+
+Queries which devices can be backed up incrementally.
+If incremental backup is not supported, simply return nothing (or C<undef>).
+
+It cannot be guaranteed that the device on the QEMU side still has the bitmap
+used for an incremental backup.
+For example, the VM might not be running, or the device might have been resized
+or detached and re-attached. The C<$volumes> parameter in C<backup_vm()>
+will contain the effective bitmap mode, see the C<backup_vm()> method for
+details.
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the guest being backed up.
+
+=item C<$volumes>
+
+Hash reference with information about the VM's volumes.
+
+=over
+
+=item C<< $volumes->{$device_name} >>
+
+Hash reference with information about the VM volume associated to the device
+C<$device_name>.
+
+=over
+
+=item C<< $volumes->{$device_name}->{size} >>
+
+Size of the volume in bytes. If the size does not match what you expect on the
+backup server side, the bitmap will not exist anymore on the QEMU side. In this
+case, it can be decided early to use a new bitmap name, but it is also possible
+to re-use the same name, in which case a bitmap with that name will be newly
+created on the volume.
+
+=back
+
+=back
+
+=back
+
+Return value:
+
+This should return a hash mapping the C<$device_name>s found in the C<$volumes>
+hash to either C<new> (to create a new bitmap, or force recreation if one
+already exists) or C<use> (to use an existing bitmap, or create one if it does
+not exist). Volumes which do not appear in the return value will not use a
+bitmap and existing ones will be discarded.
+
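+For example, a provider that prefers incremental backups wherever possible
+could simply request to re-use a bitmap for every volume offered in
+C<$volumes>. A minimal sketch:
+
+    # use an existing bitmap (or create one) for each device
+    return { map { $_ => 'use' } keys $volumes->%* };
+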
+=back
+
+=cut
+sub backup_vm_query_incremental {
+    my ($self, $vmid, $volumes) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<backup_vm>
+
+Used when the guest type is C<qemu>. Back up the virtual machine's configuration
+and volumes that were made available according to the mechanism returned by
+C<backup_get_mechanism>. Returns when done backing up. Ideally, the method
+should log the progress during backup.
+
+Access to the data on the VM's disk is made available via a "snapshot access"
+abstraction. This is effectively a snapshot of the data from the time the backup
+is started. New guest writes after the backup started do not affect this. The
+"snapshot access" represents either the full image, or in case a bitmap is used,
+the dirty parts of the image since the last time the bitmap was used for a
+successful backup.
+
+NOTE: If a bitmap is used, the "snapshot access" is really only the dirty parts
+of the image. You have to query the bitmap to see which parts of the image are
+accessible/present. Reading or doing any other operation (like querying the
+block allocation status via NBD) outside of the dirty parts of the image will
+result in an error. In particular, if there were no new writes since the last
+successful backup, i.e. the bitmap is fully clean, then the image cannot be
+accessed at all; you can only query the dirty bitmap.
+
+After backing up each part of the disk, it should be discarded in the export to
+avoid unnecessary space usage on the Proxmox VE side (there is an associated
+fleecing image).
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the guest being backed up.
+
+=item C<$guest_config>
+
+The guest configuration as raw data.
+
+=item C<$volumes>
+
+Hash reference with information about the VM's volumes. Some parameters are
+mechanism-specific.
+
+=over
+
+=item C<< $volumes->{$device_name} >>
+
+Hash reference with information about the VM volume associated to the device
+C<$device_name>. The device name needs to be remembered for restoring. The
+device name is also the name of the NBD export when the C<nbd> mechanism is
+used.
+
+=item C<< $volumes->{$device_name}->{size} >>
+
+Size of the volume in bytes.
+
+=item C<< $volumes->{$device_name}->{'bitmap-mode'} >>
+
+How a bitmap is used for the current volume.
+
+=over
+
+=item C<none>
+
+No bitmap is used.
+
+=item C<new>
+
+A bitmap has been newly created on the volume.
+
+=item C<reuse>
+
+The bitmap with the same ID as requested is being re-used.
+
+=back
+
+=back
+
+Mechanism-specific parameters for mechanism:
+
+=over
+
+=item C<file-handle>
+
+=over
+
+=item C<< $volumes->{$device_name}->{'file-handle'} >>
+
+File handle the backup data can be read from. Discards should be issued via the
+C<PVE::Storage::Common::deallocate()> function for ranges that already have been
+backed-up successfully to reduce space usage on the source-side.
+
+=item C<< $volumes->{$device_name}->{'next-dirty-region'} >>
+
+A function that will return the offset and length of the next dirty region as a
+two-element list. After the last dirty region, it will return C<undef>. If no
+bitmap is used, it will return C<(0, $size)> and then C<undef>. If a bitmap is
+used, these are the dirty regions according to the bitmap.
+
+=back
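+
+To illustrate, a rough sketch of a read loop for this mechanism (the
+C<$backup_chunk> callback is hypothetical, and real code would read large
+regions in smaller chunks):
+
+    my $fh = $volumes->{$device_name}->{'file-handle'};
+    my $next_dirty_region = $volumes->{$device_name}->{'next-dirty-region'};
+    while (1) {
+        my ($offset, $length) = $next_dirty_region->();
+        last if !defined($offset);
+        sysseek($fh, $offset, 0) // die "seek failed - $!\n";
+        sysread($fh, my $data, $length) // die "read failed - $!\n";
+        $backup_chunk->($device_name, $offset, $data);
+        # hand the range back to reduce space usage on the Proxmox VE side
+        PVE::Storage::Common::deallocate($fh, $offset, $length);
+    }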
+
+=item C<nbd>
+
+For the specification of the NBD metadata context for dirty bitmaps, see:
+L<https://qemu.readthedocs.io/en/master/interop/nbd.html>
+
+=over
+
+=item C<< $volumes->{$device_name}->{'nbd-path'} >>
+
+The path to the Unix socket providing the NBD export with the backup data and,
+if a bitmap is used, bitmap data. Discards should be issued after reading the
+data to reduce space usage on the source side.
+
+=item C<< $volumes->{$device_name}->{'bitmap-name'} >>
+
+The name of the bitmap in case a bitmap is used.
+
+=back
+
+=back
+
+=item C<$info>
+
+A hash reference containing optional parameters.
+
+Optional parameters:
+
+=over
+
+=item C<< $info->{'bandwidth-limit'} >>
+
+The requested bandwidth limit. The value is in bytes/second. The backup
+provider is expected to honor this rate limit for IO on the backup source and
+network traffic. A value of C<0> or C<undef>, or the absence of this key from
+the hash, means that there is no limit.
+
+=item C<< $info->{'firewall-config'} >>
+
+Present if the firewall configuration exists. The guest's firewall
+configuration as raw data.
+
+=back
+
+=back
+
+=back
+
+=cut
+sub backup_vm {
+    my ($self, $vmid, $guest_config, $volumes, $info) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<backup_container_prepare>
+
+Called right before C<backup_container()> is called. The method
+C<backup_container()> is called as the ID-mapped root user of the container, so
+as a potentially unprivileged user. The hook is still called as a privileged
+user to allow for the necessary preparation.
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the guest being backed up.
+
+=item C<$info>
+
+The same information that's passed along to C<backup_container()>, see the
+description there.
+
+=back
+
+=back
+
+=cut
+sub backup_container_prepare {
+    my ($self, $vmid, $info) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<backup_container>
+
+Used when the guest type is C<lxc>. Back up the container filesystem structure
+that is made available for the mechanism returned by C<backup_get_mechanism>.
+Returns when done backing up. Ideally, the method should log the progress during
+backup.
+
+Note that this method is executed as the ID-mapped root user of the container,
+so a potentially unprivileged user. The ID is passed along as part of C<$info>.
+Use the C<backup_container_prepare()> method for preparation. For example, to
+make credentials available to the potentially unprivileged user.
+
+Note that changes to C<$self> made during this method will not be visible in
+later method calls. This is because the method is executed in a separate
+execution context after forking. Use the C<backup_container_prepare()> method
+if you need persistent changes to C<$self>.
+
+Parameters:
+
+=over
+
+=item C<$vmid>
+
+The ID of the guest being backed up.
+
+=item C<$guest_config>
+
+Guest configuration as raw data.
+
+=item C<$exclude_patterns>
+
+A list of glob patterns of files and directories to be excluded. C<**> is used
+to match the current directory and subdirectories. See also the following (note
+that PBS implements more than required here, like explicit inclusion when
+starting with a C<!>):
+L<vzdump documentation|https://pve.proxmox.com/pve-docs/chapter-vzdump.html#_file_exclusions>
+and
+L<PBS documentation|https://pbs.proxmox.com/docs/backup-client.html#excluding-files-directories-from-a-backup>
+
+=item C<$info>
+
+A hash reference containing optional and mechanism-specific parameters.
+
+Optional parameters:
+
+=over
+
+=item C<< $info->{'bandwidth-limit'} >>
+
+The requested bandwidth limit. The value is in bytes/second. The backup
+provider is expected to honor this rate limit for IO on the backup source and
+network traffic. A value of C<0> or C<undef>, or the absence of this key from
+the hash, means that there is no limit.
+
+=item C<< $info->{'firewall-config'} >>
+
+Present if the firewall configuration exists. The guest's firewall
+configuration as raw data.
+
+=back
+
+Mechanism-specific parameters for mechanism:
+
+=over
+
+=item C<directory>
+
+=over
+
+=item C<< $info->{directory} >>
+
+Path to the directory with the container's file system structure.
+
+=item C<< $info->{sources} >>
+
+List of paths (for separate mount points, including "." for the root) inside the
+directory to be backed up.
+
+=item C<< $info->{'backup-user-id'} >>
+
+The user ID of the ID-mapped root user of the container. For example, C<100000>
+for unprivileged containers by default.
+
+=back
+
+=back
+
+=back
+
+=back
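+
+To illustrate the C<directory> mechanism, a rough sketch that streams each
+source as a tar archive (the C<$upload_tar> callback is hypothetical; real
+code would also honor C<$exclude_patterns> and the bandwidth limit):
+
+    for my $source ($info->{sources}->@*) {
+        open(my $tar, '-|', 'tar', 'cf', '-', '-C', $info->{directory}, $source)
+            or die "starting tar failed - $!\n";
+        $upload_tar->($source, $tar); # stream the archive to the backend
+        close($tar) or die "backing up '$source' failed\n";
+    }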
+
+=cut
+sub backup_container {
+    my ($self, $vmid, $guest_config, $exclude_patterns, $info) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<restore_get_mechanism>
+
+Tell the caller what mechanism to use for restoring the guest. The restore
+methods for the guest, i.e. C<restore_qemu_img_init> and
+C<restore_qemu_img_cleanup> for guest type C<qemu>, or C<restore_container_init>
+and C<restore_container_cleanup> for guest type C<lxc> will be called with
+mechanism-specific information and their return value might also depend on the
+mechanism. See those methods for more information. Returns
+C<($mechanism, $vmtype)>:
+
+=over
+
+=item C<$mechanism>
+
+Currently, C<'qemu-img'> for guest type C<'qemu'> and either C<'tar'> or
+C<'directory'> for type C<'lxc'> are possible.
+
+=item C<$vmtype>
+
+Either C<qemu> or C<lxc> depending on what type the guest in the backed-up
+archive is.
+
+=back
+
+Parameters:
+
+=over
+
+=item C<$volname>
+
+The volume ID of the archive being restored.
+
+=back
+
+=back
+
+=cut
+sub restore_get_mechanism {
+    my ($self, $volname) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<archive_get_guest_config>
+
+Extract the guest configuration from the given backup. Returns the raw contents
+of the backed-up configuration file. Note that this method is called
+independently from C<restore_container_init()> or C<restore_vm_init()>.
+
+Parameters:
+
+=over
+
+=item C<$volname>
+
+The volume ID of the archive being restored.
+
+=back
+
+=back
+
+=cut
+sub archive_get_guest_config {
+    my ($self, $volname) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<archive_get_firewall_config>
+
+Extract the guest's firewall configuration from the given backup. Returns the
+raw contents of the backed-up configuration file. Returns C<undef> if there is
+no firewall config in the archive, and dies if the configuration can't be
+extracted. Note that this method is called independently from
+C<restore_container_init()> or C<restore_vm_init()>.
+
+Parameters:
+
+=over
+
+=item C<$volname>
+
+The volume ID of the archive being restored.
+
+=back
+
+=back
+
+=cut
+sub archive_get_firewall_config {
+    my ($self, $volname) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<restore_vm_init>
+
+Prepare a VM archive for restore. Returns the basic information about the
+volumes in the backup as a hash reference with the following structure:
+
+    {
+	$device_nameA => { size => $sizeA },
+	$device_nameB => { size => $sizeB },
+	...
+    }
+
+=over
+
+=item C<$device_name>
+
+The device name that was given as an argument to the backup routine when the
+backup was created.
+
+=item C<$size>
+
+The virtual size of the VM volume that was backed up. A volume with this size is
+created for the restore operation. In particular, for the C<qemu-img> mechanism,
+this should be the size of the block device referenced by the C<qemu-img-path>
+returned by C<restore_vm_volume>.
+
+=back
+
+Parameters:
+
+=over
+
+=item C<$volname>
+
+The volume ID of the archive being restored.
+
+=back
+
+=back
+
+=cut
+sub restore_vm_init {
+    my ($self, $volname) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<restore_vm_cleanup>
+
+For VM backups, clean up after the restore. Called in both success and
+failure scenarios.
+
+Parameters:
+
+=over
+
+=item C<$volname>
+
+The volume ID of the archive being restored.
+
+=back
+
+=back
+
+=cut
+sub restore_vm_cleanup {
+    my ($self, $volname) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<restore_vm_volume_init>
+
+Prepare a VM volume in the archive for restore. Returns a hash reference with
+the mechanism-specific information for the restore:
+
+=over
+
+=item C<qemu-img>
+
+    { 'qemu-img-path' => $path }
+
+The volume will be restored using the C<qemu-img convert> command.
+
+=over
+
+=item C<$path>
+
+A path to the volume that C<qemu-img> can use as a source for the
+C<qemu-img convert> command. For example, the path could also be an NBD URI. The
+image contents are interpreted as being in C<raw> format and copied verbatim.
+Other formats like C<qcow2> will not be detected currently.
+
+=back
+
+=back
+
+Parameters:
+
+=over
+
+=item C<$volname>
+
+The volume ID of the archive being restored.
+
+=item C<$device_name>
+
+The device name associated to the volume that should be prepared for the
+restore. Same as the argument to the backup routine when the backup was created.
+
+=item C<$info>
+
+A hash reference with optional and mechanism-specific parameters. Currently
+empty.
+
+=back
+
+=back
+
+=cut
+sub restore_vm_volume_init {
+    my ($self, $volname, $device_name, $info) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<restore_vm_volume_cleanup>
+
+For VM backups, clean up after the restore of a given volume. Called in both
+success and failure scenarios.
+
+Parameters:
+
+=over
+
+=item C<$volname>
+
+The volume ID of the archive being restored.
+
+=item C<$device_name>
+
+The device name associated to the volume that should be prepared for the
+restore. Same as the argument to the backup routine when the backup was created.
+
+=item C<$info>
+
+A hash reference with optional and mechanism-specific parameters. Currently
+empty.
+
+=back
+
+=back
+
+=cut
+sub restore_vm_volume_cleanup {
+    my ($self, $volname, $device_name, $info) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<restore_container_init>
+
+Prepare a container archive for restore. Returns a hash reference with the
+mechanism-specific information for the restore:
+
+=over
+
+=item C<tar>
+
+    { 'tar-path' => $path }
+
+The archive will be restored via the C<tar> command.
+
+=over
+
+=item C<$path>
+
+The path to the tar archive containing the full filesystem structure of the
+container.
+
+=back
+
+=item C<directory>
+
+    { 'archive-directory' => $path }
+
+The archive will be restored via C<rsync> from a directory containing the full
+filesystem structure of the container.
+
+=over
+
+=item C<$path>
+
+The path to the directory containing the full filesystem structure of the
+container.
+
+=back
+
+=back
+
+Parameters:
+
+=over
+
+=item C<$volname>
+
+The volume ID of the archive being restored.
+
+=item C<$info>
+
+A hash reference with optional and mechanism-specific parameters. Currently
+empty.
+
+=back
+
+=back
+
+=cut
+sub restore_container_init {
+    my ($self, $volname, $info) = @_;
+
+    die "implement me in subclass";
+}
+
+=over
+
+=item C<restore_container_cleanup>
+
+For container backups, clean up after the restore. Called in both success and
+failure scenarios.
+
+Parameters:
+
+=over
+
+=item C<$volname>
+
+The volume ID of the archive being restored.
+
+=item C<$info>
+
+A hash reference with optional and mechanism-specific parameters. Currently
+empty.
+
+=back
+
+=back
+
+=cut
+sub restore_container_cleanup {
+    my ($self, $volname, $info) = @_;
+
+    die "implement me in subclass";
+}
+
+1;
diff --git a/src/PVE/BackupProvider/Plugin/Makefile b/src/PVE/BackupProvider/Plugin/Makefile
new file mode 100644
index 0000000..bbd7431
--- /dev/null
+++ b/src/PVE/BackupProvider/Plugin/Makefile
@@ -0,0 +1,5 @@
+SOURCES = Base.pm
+
+.PHONY: install
+install:
+	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/BackupProvider/Plugin/$$i; done
diff --git a/src/PVE/Makefile b/src/PVE/Makefile
index 0af3081..9e9f6aa 100644
--- a/src/PVE/Makefile
+++ b/src/PVE/Makefile
@@ -9,6 +9,7 @@ install:
 	make -C Storage install
 	make -C GuestImport install
 	make -C API2 install
+	make -C BackupProvider install
 	make -C CLI install
 
 .PHONY: test
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 8cbfb4f..7fd97b7 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -2027,6 +2027,14 @@ sub volume_export_start {
     PVE::Tools::run_command($cmds, %$run_command_params);
 }
 
+sub new_backup_provider {
+    my ($cfg, $storeid, $log_function) = @_;
+
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+    return $plugin->new_backup_provider($scfg, $storeid, $log_function);
+}
+
 # bash completion helper
 
 sub complete_storage {
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index 80daeea..df2ddc5 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -1868,6 +1868,21 @@ sub rename_volume {
     return "${storeid}:${base}${target_vmid}/${target_volname}";
 }
 
+# Used by storage plugins for external backup providers. See PVE::BackupProvider::Plugin for the API
+# the provider needs to implement.
+#
+# $scfg - the storage configuration
+# $storeid - the storage ID
+# $log_function($log_level, $message) - this log function can be used to write to the backup task
+#   log in Proxmox VE. $log_level is 'info', 'warn' or 'err', $message is the message to be printed.
+#
+# Returns a blessed reference to the backup provider class.
+sub new_backup_provider {
+    my ($class, $scfg, $storeid, $log_function) = @_;
+
+    die "implement me if enabling the feature 'backup-provider' in plugindata()->{features}\n";
+}
+
 sub config_aware_base_mkdir {
     my ($class, $scfg, $path) = @_;
 
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 storage 4/8] config api/plugins: let plugins define sensitive properties themselves
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (12 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 3/8] plugin: introduce new_backup_provider() method Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 5/8] plugin api: bump api version and age Wolfgang Bumiller
                   ` (24 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Hard-coding a list of sensitive properties means that custom plugins
cannot define their own sensitive properties for the on_add/on_update
hooks.

Have plugins declare the list of their sensitive properties in the
plugin data. For backwards compatibility, return the previously
hard-coded list if no such declaration is present.
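
As an illustration, an out-of-tree plugin could declare its own sensitive
properties like this (the 'api-token' property is a made-up example):

    sub plugindata {
        return {
            content => [ { backup => 1 }, { backup => 1 } ],
            'sensitive-properties' => { 'api-token' => 1 },
        };
    }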

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 src/PVE/API2/Storage/Config.pm       |  4 ++--
 src/PVE/Storage/BTRFSPlugin.pm       |  1 +
 src/PVE/Storage/CIFSPlugin.pm        |  1 +
 src/PVE/Storage/CephFSPlugin.pm      |  1 +
 src/PVE/Storage/DirPlugin.pm         |  1 +
 src/PVE/Storage/ESXiPlugin.pm        |  1 +
 src/PVE/Storage/GlusterfsPlugin.pm   |  1 +
 src/PVE/Storage/ISCSIDirectPlugin.pm |  1 +
 src/PVE/Storage/ISCSIPlugin.pm       |  1 +
 src/PVE/Storage/LVMPlugin.pm         |  1 +
 src/PVE/Storage/LvmThinPlugin.pm     |  1 +
 src/PVE/Storage/NFSPlugin.pm         |  1 +
 src/PVE/Storage/PBSPlugin.pm         |  5 +++++
 src/PVE/Storage/Plugin.pm            | 12 ++++++++++++
 src/PVE/Storage/RBDPlugin.pm         |  1 +
 src/PVE/Storage/ZFSPlugin.pm         |  1 +
 src/PVE/Storage/ZFSPoolPlugin.pm     |  1 +
 17 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/src/PVE/API2/Storage/Config.pm b/src/PVE/API2/Storage/Config.pm
index e04b6ab..7facc62 100755
--- a/src/PVE/API2/Storage/Config.pm
+++ b/src/PVE/API2/Storage/Config.pm
@@ -190,8 +190,6 @@ __PACKAGE__->register_method ({
 	return &$api_storage_config($cfg, $param->{storage});
     }});
 
-my $sensitive_params = [qw(password encryption-key master-pubkey keyring)];
-
 __PACKAGE__->register_method ({
     name => 'create',
     protected => 1,
@@ -239,6 +237,7 @@ __PACKAGE__->register_method ({
 	# fix me in section config create never need an empty entity.
 	delete $param->{nodes} if !$param->{nodes};
 
+	my $sensitive_params = PVE::Storage::Plugin::sensitive_properties($type);
 	my $sensitive = extract_sensitive_params($param, $sensitive_params, []);
 
 	my $plugin = PVE::Storage::Plugin->lookup($type);
@@ -344,6 +343,7 @@ __PACKAGE__->register_method ({
 	    my $scfg = PVE::Storage::storage_config($cfg, $storeid);
 	    $type = $scfg->{type};
 
+	    my $sensitive_params = PVE::Storage::Plugin::sensitive_properties($type);
 	    my $sensitive = extract_sensitive_params($param, $sensitive_params, $delete);
 
 	    my $plugin = PVE::Storage::Plugin->lookup($type);
diff --git a/src/PVE/Storage/BTRFSPlugin.pm b/src/PVE/Storage/BTRFSPlugin.pm
index 1966b6f..5ed910d 100644
--- a/src/PVE/Storage/BTRFSPlugin.pm
+++ b/src/PVE/Storage/BTRFSPlugin.pm
@@ -45,6 +45,7 @@ sub plugindata {
 	    { images => 1, rootdir => 1 },
 	],
 	format => [ { raw => 1, subvol => 1 }, 'raw', ],
+	'sensitive-properties' => {},
     };
 }
 
diff --git a/src/PVE/Storage/CIFSPlugin.pm b/src/PVE/Storage/CIFSPlugin.pm
index 475065a..f47861e 100644
--- a/src/PVE/Storage/CIFSPlugin.pm
+++ b/src/PVE/Storage/CIFSPlugin.pm
@@ -101,6 +101,7 @@ sub plugindata {
 	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1,
 		   backup => 1, snippets => 1, import => 1}, { images => 1 }],
 	format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
+	'sensitive-properties' => { password => 1 },
     };
 }
 
diff --git a/src/PVE/Storage/CephFSPlugin.pm b/src/PVE/Storage/CephFSPlugin.pm
index 36c64ea..73edecb 100644
--- a/src/PVE/Storage/CephFSPlugin.pm
+++ b/src/PVE/Storage/CephFSPlugin.pm
@@ -118,6 +118,7 @@ sub plugindata {
     return {
 	content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
 		     { backup => 1 }],
+	'sensitive-properties' => { keyring => 1 },
     };
 }
 
diff --git a/src/PVE/Storage/DirPlugin.pm b/src/PVE/Storage/DirPlugin.pm
index fb23e0a..532701b 100644
--- a/src/PVE/Storage/DirPlugin.pm
+++ b/src/PVE/Storage/DirPlugin.pm
@@ -26,6 +26,7 @@ sub plugindata {
 	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, none => 1, import => 1 },
 		     { images => 1,  rootdir => 1 }],
 	format => [ { raw => 1, qcow2 => 1, vmdk => 1, subvol => 1 } , 'raw' ],
+	'sensitive-properties' => {},
     };
 }
 
diff --git a/src/PVE/Storage/ESXiPlugin.pm b/src/PVE/Storage/ESXiPlugin.pm
index c8412c4..6131c51 100644
--- a/src/PVE/Storage/ESXiPlugin.pm
+++ b/src/PVE/Storage/ESXiPlugin.pm
@@ -31,6 +31,7 @@ sub plugindata {
     return {
 	content => [ { import => 1 }, { import => 1 }],
 	format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
+	'sensitive-properties' => { password => 1 },
     };
 }
 
diff --git a/src/PVE/Storage/GlusterfsPlugin.pm b/src/PVE/Storage/GlusterfsPlugin.pm
index 9d17180..18493cb 100644
--- a/src/PVE/Storage/GlusterfsPlugin.pm
+++ b/src/PVE/Storage/GlusterfsPlugin.pm
@@ -100,6 +100,7 @@ sub plugindata {
 	content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1},
 		     { images => 1 }],
 	format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
+	'sensitive-properties' => {},
     };
 }
 
diff --git a/src/PVE/Storage/ISCSIDirectPlugin.pm b/src/PVE/Storage/ISCSIDirectPlugin.pm
index 60bc94e..829e0c4 100644
--- a/src/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/src/PVE/Storage/ISCSIDirectPlugin.pm
@@ -60,6 +60,7 @@ sub plugindata {
     return {
 	content => [ {images => 1, none => 1}, { images => 1 }],
 	select_existing => 1,
+	'sensitive-properties' => {},
     };
 }
 
diff --git a/src/PVE/Storage/ISCSIPlugin.pm b/src/PVE/Storage/ISCSIPlugin.pm
index 2c608a4..39cb4a7 100644
--- a/src/PVE/Storage/ISCSIPlugin.pm
+++ b/src/PVE/Storage/ISCSIPlugin.pm
@@ -305,6 +305,7 @@ sub plugindata {
     return {
 	content => [ {images => 1, none => 1}, { images => 1 }],
 	select_existing => 1,
+	'sensitive-properties' => {},
     };
 }
 
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 38f7fa1..2ebec88 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -218,6 +218,7 @@ sub type {
 sub plugindata {
     return {
 	content => [ {images => 1, rootdir => 1}, { images => 1 }],
+	'sensitive-properties' => {},
     };
 }
 
diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index 4b23623..49a4dcb 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -31,6 +31,7 @@ sub type {
 sub plugindata {
     return {
 	content => [ {images => 1, rootdir => 1}, { images => 1, rootdir => 1}],
+	'sensitive-properties' => {},
     };
 }
 
diff --git a/src/PVE/Storage/NFSPlugin.pm b/src/PVE/Storage/NFSPlugin.pm
index 72e9c6d..cb2ae18 100644
--- a/src/PVE/Storage/NFSPlugin.pm
+++ b/src/PVE/Storage/NFSPlugin.pm
@@ -56,6 +56,7 @@ sub plugindata {
 	content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup => 1, snippets => 1, import => 1 },
 		     { images => 1 }],
 	format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
+	'sensitive-properties' => {},
     };
 }   
 
diff --git a/src/PVE/Storage/PBSPlugin.pm b/src/PVE/Storage/PBSPlugin.pm
index 0808bcc..9f75794 100644
--- a/src/PVE/Storage/PBSPlugin.pm
+++ b/src/PVE/Storage/PBSPlugin.pm
@@ -30,6 +30,11 @@ sub type {
 sub plugindata {
     return {
 	content => [ {backup => 1, none => 1}, { backup => 1 }],
+	'sensitive-properties' => {
+	    'encryption-key' => 1,
+	    'master-pubkey' => 1,
+	    password => 1,
+	},
     };
 }
 
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index df2ddc5..0d9558c 100644
--- a/src/PVE/Storage/Plugin.pm
+++ b/src/PVE/Storage/Plugin.pm
@@ -246,6 +246,18 @@ sub dirs_hash_to_string {
     return join(',', map { "$_=$hash->{$_}" } sort keys %$hash);
 }
 
+sub sensitive_properties {
+    my ($type) = @_;
+
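+    # For example, given the PBS plugin's plugindata in this series, this returns
+    # ['encryption-key', 'master-pubkey', 'password'].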
+    my $data = $defaultData->{plugindata}->{$type};
+    if (my $sensitive_properties = $data->{'sensitive-properties'}) {
+	return [sort keys $sensitive_properties->%*];
+    }
+
+    # For backwards compatibility. This list was hardcoded in the API module previously.
+    return [qw(encryption-key keyring master-pubkey password)];
+}
+
 sub storage_has_feature {
     my ($type, $feature) = @_;
 
diff --git a/src/PVE/Storage/RBDPlugin.pm b/src/PVE/Storage/RBDPlugin.pm
index 42eefc6..c78db00 100644
--- a/src/PVE/Storage/RBDPlugin.pm
+++ b/src/PVE/Storage/RBDPlugin.pm
@@ -380,6 +380,7 @@ sub type {
 sub plugindata {
     return {
 	content => [ {images => 1, rootdir => 1}, { images => 1 }],
+	'sensitive-properties' => { keyring => 1 },
     };
 }
 
diff --git a/src/PVE/Storage/ZFSPlugin.pm b/src/PVE/Storage/ZFSPlugin.pm
index d4dc2a4..94cb11f 100644
--- a/src/PVE/Storage/ZFSPlugin.pm
+++ b/src/PVE/Storage/ZFSPlugin.pm
@@ -175,6 +175,7 @@ sub type {
 sub plugindata {
     return {
 	content => [ {images => 1}, { images => 1 }],
+	'sensitive-properties' => {},
     };
 }
 
diff --git a/src/PVE/Storage/ZFSPoolPlugin.pm b/src/PVE/Storage/ZFSPoolPlugin.pm
index 3669fe1..26fb0a4 100644
--- a/src/PVE/Storage/ZFSPoolPlugin.pm
+++ b/src/PVE/Storage/ZFSPoolPlugin.pm
@@ -22,6 +22,7 @@ sub plugindata {
     return {
 	content => [ {images => 1, rootdir => 1}, {images => 1 , rootdir => 1}],
 	format => [ { raw => 1, subvol => 1 } , 'raw' ],
+	'sensitive-properties' => {},
     };
 }
 
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 storage 5/8] plugin api: bump api version and age
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (13 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 4/8] config api/plugins: let plugins define sensitive properties themselves Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 6/8] extract backup config: delegate to backup provider for storages that support it Wolfgang Bumiller
                   ` (23 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Changes for version 11:

* Allow declaring storage features via plugin data.
* Introduce new_backup_provider() plugin method.
* Allow declaring sensitive properties via plugin data.

See the api changelog file for details.
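
As a minimal sketch, a third-party plugin opting in to the new interface
could declare (illustrative names and values, not part of this series):

    sub api { return 11; }

    sub plugindata {
        return {
            content => [ { backup => 1, none => 1 }, { backup => 1 } ],
            features => { 'backup-provider' => 1 },
            'sensitive-properties' => { 'ssh-private-key' => 1 },
        };
    }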

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 ApiChangeLog       | 32 ++++++++++++++++++++++++++++++++
 src/PVE/Storage.pm |  4 ++--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/ApiChangeLog b/ApiChangeLog
index 98b5893..987da54 100644
--- a/ApiChangeLog
+++ b/ApiChangeLog
@@ -6,6 +6,38 @@ without breaking anything unaware of it.)
 
 Future changes should be documented in here.
 
+##  Version 11:
+
+* Allow declaring storage features via plugin data
+
+  A new `storage_has_feature()` helper function was added that checks a storage plugin's features.
+  Plugins can indicate support for certain features in their `plugindata`. The first such feature is
+  `backup-provider`, see below for more details. To declare support for this feature, return
+  `features => { 'backup-provider' => 1 }` as part of the plugin data.
+
+* Introduce new_backup_provider() plugin method
+
+  Proxmox VE now supports a `Backup Provider API` that can be used to implement custom backup
+  solutions tightly integrated in the Proxmox VE stack. See the `PVE::BackupProvider::Plugin::Base`
+  module for detailed documentation. A backup provider also needs to implement an associated storage
+  plugin for user-facing integration in Proxmox VE. Such a plugin needs to opt in to the
+  `backup-provider` feature (see above) and implement the new_backup_provider() method, returning a
+  blessed reference to the backup provider class. The rest of the plugin methods, e.g. listing
+  content, providing usage information, etc., follow the same API as usual.
+
+* Allow declaring sensitive properties via plugin data
+
+  A new `sensitive_properties()` helper function was added to get the list of sensitive properties
+  a plugin uses via the plugin's `plugindata`. The sensitive properties are passed separately from
+  other properties to the `on_add_hook()` and `on_update_hook()` methods and should not be written
+  to the storage configuration file directly, but stored in the more restricted
+  `/etc/pve/priv/storage` directory on the Proxmox Cluster File System. For example, to declare that
+  a `ssh-private-key` property used by the plugin is sensitive, return
+  `'sensitive-properties' => { 'ssh-private-key' => 1 }` as part of the plugin data. The list of
+  sensitive properties was hard-coded previously, as `encryption-key`, `keyring`, `master-pubkey`,
+  `password`. For backwards compatibility, this list is still used if a plugin doesn't declare its
+  own sensitive properties.
+
 ##  Version 10:
 
 * a new `rename_volume` method has been added
diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 7fd97b7..10a4abc 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -42,11 +42,11 @@ use PVE::Storage::BTRFSPlugin;
 use PVE::Storage::ESXiPlugin;
 
 # Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 10;
+use constant APIVER => 11;
 # Age is the number of versions we're backward compatible with.
 # This is like having 'current=APIVER' and age='APIAGE' in libtool,
 # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 1;
+use constant APIAGE => 2;
 
 our $KNOWN_EXPORT_FORMATS = ['raw+size', 'tar+size', 'qcow2+size', 'vmdk+size', 'zfs', 'btrfs'];
 
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 storage 6/8] extract backup config: delegate to backup provider for storages that support it
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (14 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 5/8] plugin api: bump api version and age Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [POC v8 storage 7/8] add backup provider example Wolfgang Bumiller
                   ` (22 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 src/PVE/Storage.pm | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/src/PVE/Storage.pm b/src/PVE/Storage.pm
index 10a4abc..7174f0f 100755
--- a/src/PVE/Storage.pm
+++ b/src/PVE/Storage.pm
@@ -1759,6 +1759,17 @@ sub extract_vzdump_config {
 	    storage_check_enabled($cfg, $storeid);
 	    return PVE::Storage::PBSPlugin->extract_vzdump_config($scfg, $volname, $storeid);
 	}
+
+	if (storage_has_feature($cfg, $storeid, 'backup-provider')) {
+	    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+	    my $log_function = sub {
+		my ($log_level, $message) = @_;
+		my $prefix = $log_level eq 'err' ? 'ERROR' : uc($log_level);
+		print "$prefix: $message\n";
+	    };
+	    my $backup_provider = $plugin->new_backup_provider($scfg, $storeid, $log_function);
+	    return $backup_provider->archive_get_guest_config($volname, $storeid);
+	}
     }
 
     my $archive = abs_filesystem_path($cfg, $volid);
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [POC v8 storage 7/8] add backup provider example
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (15 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 6/8] extract backup config: delegate to backup provider for storages that support it Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-04  6:58   ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [POC v8 storage 8/8] Borg example plugin Wolfgang Bumiller
                   ` (21 subsequent siblings)
  38 siblings, 1 reply; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

The example uses a simple directory structure to save the backups,
grouped by guest ID. VM backups are saved as configuration files and
qcow2 images, with backing files when doing incremental backups.
Container backups are saved as configuration files and a tar file or
squashfs image (added to test the 'directory' restore mechanism).

Whether to use incremental VM backups and which backup mechanisms to
use can be configured in the storage configuration.

The 'nbdinfo' binary from the 'libnbd-bin' package is required for
backup mechanism 'nbd' for VM backups; the 'mksquashfs' binary from the
'squashfs-tools' package is required for backup mechanism 'squashfs' for
containers.
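
For illustration, a matching entry for this example plugin might look like
this in /etc/pve/storage.cfg (hypothetical storage ID and path):

    backup-provider-dir-example: example-backup
        path /mnt/backup-provider-example
        vm-backup-mechanism file-handle
        vm-backup-mode incremental
        lxc-backup-mode squashfs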

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[WB: update from backup_vm_available_bitmaps() to
     backup_vm_query_incremental(), the previous-info file is now a
     json file mapping the individual volumes instead of a single
     backup id to support toggling the backup=0|1 property on
     individual drives between backups]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
Changes in v8: described in the trailers above ^

 .../BackupProvider/Plugin/DirectoryExample.pm | 809 ++++++++++++++++++
 src/PVE/BackupProvider/Plugin/Makefile        |   2 +-
 .../Custom/BackupProviderDirExamplePlugin.pm  | 308 +++++++
 src/PVE/Storage/Custom/Makefile               |   5 +
 src/PVE/Storage/Makefile                      |   1 +
 5 files changed, 1124 insertions(+), 1 deletion(-)
 create mode 100644 src/PVE/BackupProvider/Plugin/DirectoryExample.pm
 create mode 100644 src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm
 create mode 100644 src/PVE/Storage/Custom/Makefile

diff --git a/src/PVE/BackupProvider/Plugin/DirectoryExample.pm b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm
new file mode 100644
index 0000000..4c5c8f6
--- /dev/null
+++ b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm
@@ -0,0 +1,809 @@
+package PVE::BackupProvider::Plugin::DirectoryExample;
+
+use strict;
+use warnings;
+
+use Fcntl qw(SEEK_SET S_ISSOCK);
+use File::Path qw(make_path remove_tree);
+use File::stat ();
+use IO::File;
+use IPC::Open3;
+use JSON qw(from_json to_json);
+use POSIX ();
+
+use PVE::RESTEnvironment qw(log_warn);
+use PVE::Storage::Common;
+use PVE::Storage::Plugin;
+use PVE::Tools qw(file_get_contents file_read_firstline file_set_contents run_command);
+
+use base qw(PVE::BackupProvider::Plugin::Base);
+
+# Private helpers
+
+my sub log_info {
+    my ($self, $message) = @_;
+
+    $self->{'log-function'}->('info', $message);
+}
+
+my sub log_warning {
+    my ($self, $message) = @_;
+
+    $self->{'log-function'}->('warn', $message);
+}
+
+my sub log_error {
+    my ($self, $message) = @_;
+
+    $self->{'log-function'}->('err', $message);
+}
+
+# NOTE: This is just for proof-of-concept testing! A backup provider plugin should either use the
+# 'nbd' backup mechanism and use the NBD protocol or use the 'file-handle' mechanism. There should
+# be no need to use /dev/nbdX nodes for proper plugins.
+my sub bind_next_free_dev_nbd_node {
+    my ($options) = @_;
+
+    # /dev/nbdX devices are reserved in a file. Those reservations expire after $expiretime.
+    # This avoids race conditions between allocation and use.
+
+    die "file '/sys/module/nbd' does not exist - 'nbd' kernel module not loaded?"
+	if !-e "/sys/module/nbd";
+
+    my $line = PVE::Tools::file_read_firstline("/sys/module/nbd/parameters/nbds_max")
+	or die "could not read 'nbds_max' parameter file for 'nbd' kernel module\n";
+    my ($nbds_max) = ($line =~ m/(\d+)/)
+	or die "could not determine 'nbds_max' parameter for 'nbd' kernel module\n";
+
+    my $filename = "/run/qemu-server/reserved-dev-nbd-nodes";
+
+    my $code = sub {
+	my $expiretime = 60;
+	my $ctime = time();
+
+	my $used = {};
+	my $latest = [0, 0];
+
+	if (my $fh = IO::File->new ($filename, "r")) {
+	    while (my $line = <$fh>) {
+		if ($line =~ m/^(\d+)\s(\d+)$/) {
+		    my ($n, $timestamp) = ($1, $2);
+
+		    $latest = [$n, $timestamp] if $latest->[1] <= $timestamp;
+
+		    if (($timestamp + $expiretime) > $ctime) {
+			$used->{$n} = $timestamp; # not expired
+		    }
+		}
+	    }
+	}
+
+	my $new_n;
+	for (my $count = 0; $count < $nbds_max; $count++) {
+	    my $n = ($latest->[0] + $count) % $nbds_max;
+	    my $block_device = "/dev/nbd${n}";
+	    next if $used->{$n}; # reserved
+	    next if !-e $block_device;
+
+	    my $st = File::stat::stat("/run/lock/qemu-nbd-nbd${n}");
+	    next if defined($st) && S_ISSOCK($st->mode) && $st->uid == 0; # in use
+
+	    # Used to avoid looping if there are other issues than the NBD node being in use
+	    my $socket_error = 0;
+	    eval {
+		my $errfunc = sub {
+		    my ($line) = @_;
+		    $socket_error = 1 if $line =~ m/^qemu-nbd: Failed to set NBD socket$/;
+		    log_warn($line);
+		};
+		run_command(["qemu-nbd", "-c", $block_device, $options->@*], errfunc => $errfunc);
+	    };
+	    if (my $err = $@) {
+		die $err if !$socket_error;
+		log_warn("unable to bind $block_device - trying next one");
+		next;
+	    }
+	    $used->{$n} = $ctime;
+	    $new_n = $n;
+	    last;
+	}
+
+	my $data = "";
+	$data .= "$_ $used->{$_}\n" for keys $used->%*;
+
+	PVE::Tools::file_set_contents($filename, $data);
+
+	return defined($new_n) ? "/dev/nbd${new_n}" : undef;
+    };
+
+    my $block_device =
+	PVE::Tools::lock_file('/run/lock/qemu-server/reserved-dev-nbd-nodes.lock', 10, $code);
+    die $@ if $@;
+
+    die "unable to find free /dev/nbdX block device node\n" if !$block_device;
+
+    return $block_device;
+}
+
+# Backup Provider API
+
+sub new {
+    my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;
+
+    my $self = bless {
+	scfg => $scfg,
+	storeid => $storeid,
+	'storage-plugin' => $storage_plugin,
+	'log-function' => $log_function,
+    }, $class;
+
+    return $self;
+}
+
+sub provider_name {
+    my ($self) = @_;
+
+    return 'dir provider example';
+}
+
+# Hooks
+
+sub job_init {
+    my ($self, $start_time) = @_;
+
+    log_info($self, "job init called");
+
+    if (!-e '/sys/module/nbd/parameters') {
+	die "required 'nbd' kernel module not loaded - use 'modprobe nbd nbds_max=128' to load it"
+	    ." manually\n";
+    }
+
+    log_info($self, "backup provider initialized successfully for new job $start_time");
+
+    return;
+}
+
+sub job_cleanup {
+    my ($self) = @_;
+
+    log_info($self, "job cleanup called");
+
+    return;
+}
+
+sub backup_init {
+    my ($self, $vmid, $vmtype, $backup_time) = @_;
+
+    my $archive_subdir = "${vmtype}-${backup_time}";
+    my $archive = "${vmid}/${archive_subdir}";
+
+    log_info($self, "backup start hook called");
+
+    my $backup_dir = $self->{scfg}->{path} . "/" . $archive;
+
+    make_path($backup_dir);
+    die "unable to create directory $backup_dir\n" if !-d $backup_dir;
+
+    $self->{$vmid}->{'backup-time'} = $backup_time;
+    $self->{$vmid}->{'backup-dir'} = $backup_dir;
+
+    $self->{$vmid}->{'archive-subdir'} = $archive_subdir;
+    $self->{$vmid}->{archive} = $archive;
+    return { 'archive-name' => $archive };
+}
+
+my sub get_previous_info_tainted {
+    my ($self, $vmid) = @_;
+
+    my $previous_info_file = "$self->{scfg}->{path}/$vmid/previous-info";
+
+    return eval { from_json(file_get_contents($previous_info_file)) } // {};
+}
+
+my sub update_previous_info {
+    my ($self, $vmid) = @_;
+
+    my $previous_info_file = "$self->{scfg}->{path}/$vmid/previous-info";
+
+    if (defined(my $info = $self->{$vmid}->{previous})) {
+	file_set_contents($previous_info_file, to_json($info));
+    } else {
+	unlink($previous_info_file);
+    }
+}
+
+
+sub backup_cleanup {
+    my ($self, $vmid, $vmtype, $success, $info) = @_;
+
+    if ($success) {
+	log_info($self, "backup cleanup called - success");
+	eval {
+	    update_previous_info($self, $vmid);
+	};
+	if (my $err = $@) {
+	    log_error($self, "failed to update previous-info file: $err");
+	}
+	my $size = 0;
+	my $backup_dir = $self->{$vmid}->{'backup-dir'};
+	my @backup_files = glob("$backup_dir/*");
+	$size += -s $_ for @backup_files;
+	my $stats = { 'archive-size' => $size };
+	return { 'stats' => $stats };
+    } else {
+	log_info($self, "backup cleanup called - failure");
+
+	$self->{$vmid}->{failed} = 1;
+
+	if (my $dir = $self->{$vmid}->{'backup-dir'}) {
+	    eval { remove_tree($dir) };
+	    log_warning($self, "unable to clean up $dir - $@") if $@;
+	}
+
+	# Restore old previous-info so next attempt can re-use bitmap again
+	if (my $info = $self->{$vmid}->{'old-previous-info'}) {
+	    my $previous_info_dir = "$self->{scfg}->{path}/$vmid/";
+	    my $previous_info_file = "$previous_info_dir/previous-info";
+	    file_set_contents($previous_info_file, $info);
+	}
+    }
+}
+
+sub backup_container_prepare {
+    my ($self, $vmid, $info) = @_;
+
+    my $dir = $self->{$vmid}->{'backup-dir'};
+    chown($info->{'backup-user-id'}, -1, $dir) or die "unable to change owner for $dir\n";
+
+    return;
+}
+
+sub backup_vm_query_incremental {
+    my ($self, $vmid, $volumes) = @_;
+
+    # Try to use the last backup's disks for incremental backup if the storage
+    # is configured for incremental VM backup. Need to start fresh if there is
+    # no previous backup or the associated backup doesn't exist.
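+    # The result maps device names to bitmap modes, e.g. (illustrative):
+    #   { 'drive-scsi0' => 'use', 'drive-scsi1' => 'new' }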
+
+    return if $self->{'storage-plugin'}->get_vm_backup_mode($self->{scfg}) ne 'incremental';
+
+    my $out = {};
+
+    my $info = get_previous_info_tainted($self, $vmid);
+    for my $device_name (keys $volumes->%*) {
+	my $prev_file = $info->{$device_name};
+	next if !defined $prev_file;
+	# it's type-time/disk.qcow2
+	next if $prev_file !~ m!^([^/]+/[^/]+\.qcow2)$!;
+	$prev_file = $1; # untaint
+
+	my $full_path = "$self->{scfg}->{path}/$vmid/$prev_file";
+
+	if (-e $full_path) {
+	    $self->{$vmid}->{previous}->{$device_name} = $prev_file;
+	    $out->{$device_name} = 'use';
+	} else {
+	    $out->{$device_name} = 'new';
+	}
+    }
+
+    return $out;
+}
+
+sub backup_get_mechanism {
+    my ($self, $vmid, $vmtype) = @_;
+
+    return 'directory' if $vmtype eq 'lxc';
+    return $self->{'storage-plugin'}->get_vm_backup_mechanism($self->{scfg}) if $vmtype eq 'qemu';
+
+    die "unsupported guest type '$vmtype'\n";
+}
+
+sub backup_handle_log_file {
+    my ($self, $vmid, $filename) = @_;
+
+    my $log_dir = $self->{$vmid}->{'backup-dir'};
+    if ($self->{$vmid}->{failed}) {
+	$log_dir .= ".failed";
+    }
+    make_path($log_dir);
+    die "unable to create directory $log_dir\n" if !-d $log_dir;
+
+    my $data = file_get_contents($filename);
+    my $target = "${log_dir}/backup.log";
+    file_set_contents($target, $data);
+}
+
+my sub backup_file {
+    my ($self, $vmid, $device_name, $size, $in_fh, $bitmap_mode, $next_dirty_region, $bandwidth_limit) = @_;
+
+    # TODO honor bandwidth_limit
+
+    my $target = "$self->{$vmid}->{'backup-dir'}/${device_name}.qcow2";
+
+    my $create_cmd = ["qemu-img", "create", "-f", "qcow2", $target, $size];
+    if (my $previous_file = $self->{$vmid}->{previous}->{$device_name}) {
+	my $target_base = "../$previous_file";
+	push $create_cmd->@*, "-b", $target_base, "-F", "qcow2";
+    }
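+    # For an incremental backup, the resulting command is e.g. (illustrative):
+    #   qemu-img create -f qcow2 <target>.qcow2 <size> -b ../qemu-<time>/<device>.qcow2 -F qcow2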
+    run_command($create_cmd);
+
+    my $nbd_node;
+    eval {
+	# allows to easily write to qcow2 target
+	$nbd_node = bind_next_free_dev_nbd_node([$target, '--format=qcow2']);
+	# FIXME use nbdfuse like in qemu-server rather than qemu-nbd. Seems like there is a race and
+	# sysseek() can fail with "Invalid argument" if done too early...
+	sleep 1;
+
+	my $block_size = 4 * 1024 * 1024; # 4 MiB
+
+	my $out_fh = IO::File->new($nbd_node, "r+")
+	    or die "unable to open NBD backup target - $!\n";
+
+	my $buffer = '';
+	my $skip_discard;
+
+	while (scalar((my $region_offset, my $region_length) = $next_dirty_region->())) {
+	    sysseek($in_fh, $region_offset, SEEK_SET)
+		// die "unable to seek '$region_offset' in NBD backup source - $!\n";
+	    sysseek($out_fh, $region_offset, SEEK_SET)
+		// die "unable to seek '$region_offset' in NBD backup target - $!\n";
+
+	    my $local_offset = 0; # within the region
+	    while ($local_offset < $region_length) {
+		my $remaining = $region_length - $local_offset;
+		my $request_size = $remaining < $block_size ? $remaining : $block_size;
+		my $offset = $region_offset + $local_offset;
+
+		my $read = sysread($in_fh, $buffer, $request_size);
+		die "failed to read from backup source - $!\n" if !defined($read);
+		die "premature EOF while reading backup source\n" if $read == 0;
+
+		my $written = 0;
+		while ($written < $read) {
+		    my $res = syswrite($out_fh, $buffer, $request_size - $written, $written);
+		    die "failed to write to backup target - $!\n" if !defined($res);
+		    die "unable to progress writing to backup target\n" if $res == 0;
+		    $written += $res;
+		}
+
+		if (!$skip_discard) {
+		    eval { PVE::Storage::Common::deallocate($in_fh, $offset, $request_size); };
+		    if (my $err = $@) {
+			# Just assume that if one request didn't work, others won't either.
+			log_warning(
+			    $self, "discard source failed (skipping further discards) - $err");
+			$skip_discard = 1;
+		    }
+		}
+
+		$local_offset += $request_size;
+	    }
+	}
+	$out_fh->sync();
+    };
+    my $err = $@;
+
+    $self->{$vmid}->{previous}->{$device_name} = "$self->{$vmid}->{'archive-subdir'}/${device_name}.qcow2";
+
+    if ($nbd_node) {
+	eval { run_command(['qemu-nbd', '-d', $nbd_node]); };
+	log_warning($self, "unable to disconnect NBD backup target - $@") if $@;
+    }
+
+    die $err if $err;
+}
+
+my sub backup_nbd {
+    my ($self, $vmid, $device_name, $size, $nbd_path, $bitmap_mode, $bitmap_name, $bandwidth_limit) = @_;
+
+    # TODO honor bandwidth_limit
+
+    die "need 'nbdinfo' binary from package libnbd-bin\n" if !-e "/usr/bin/nbdinfo";
+
+    my $nbd_info_uri = "nbd+unix:///${device_name}?socket=${nbd_path}";
+    my $qemu_nbd_uri = "nbd:unix:${nbd_path}:exportname=${device_name}";
+
+    my $cpid;
+    my $error_fh;
+    my $next_dirty_region;
+
+    # If there is no dirty bitmap, it can be treated as if there's a full dirty one. The output of
+    # nbdinfo is a list of tuples with offset, length, type, description. The first bit of 'type' is
+    # set when the bitmap is dirty, see QEMU's docs/interop/nbd.txt
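+    # An illustrative (not verbatim) map line, meaning a dirty 1 MiB region at offset 0:
+    #   0  1048576  1  "dirty"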
+    my $dirty_bitmap = [];
+    if ($bitmap_mode ne 'none') {
+	my $input = IO::File->new();
+	my $info = IO::File->new();
+	$error_fh = IO::File->new();
+	my $nbdinfo_cmd = ["nbdinfo", $nbd_info_uri, "--map=qemu:dirty-bitmap:${bitmap_name}"];
+	$cpid = open3($input, $info, $error_fh, $nbdinfo_cmd->@*)
+	    or die "failed to spawn nbdinfo child - $!\n";
+
+	$next_dirty_region = sub {
+	    my ($offset, $length, $type);
+	    do {
+		my $line = <$info>;
+		return if !$line;
+		die "unexpected output from nbdinfo - $line\n"
+		    if $line !~ m/^\s*(\d+)\s*(\d+)\s*(\d+)/; # also untaints
+		($offset, $length, $type) = ($1, $2, $3);
+	    } while (($type & 0x1) == 0); # not dirty
+	    return ($offset, $length);
+	};
+    } else {
+	my $done = 0;
+	$next_dirty_region = sub {
+	    return if $done;
+	    $done = 1;
+	    return (0, $size);
+	};
+    }
+
+    my $nbd_node;
+    eval {
+	$nbd_node = bind_next_free_dev_nbd_node([$qemu_nbd_uri, "--format=raw", "--discard=on"]);
+
+	my $in_fh = IO::File->new($nbd_node, 'r+')
+	    or die "unable to open NBD backup source '$nbd_node' - $!\n";
+
+	backup_file(
+	    $self,
+	    $vmid,
+	    $device_name,
+	    $size,
+	    $in_fh,
+	    $bitmap_mode,
+	    $next_dirty_region,
+	    $bandwidth_limit,
+	);
+    };
+    my $err = $@;
+
+    eval { run_command(["qemu-nbd", "-d", $nbd_node ]); };
+    log_warning($self, "unable to disconnect NBD backup source - $@") if $@;
+
+    if ($cpid) {
+	my $waited;
+	my $wait_limit = 5;
+	for ($waited = 0; $waited < $wait_limit && waitpid($cpid, POSIX::WNOHANG) == 0; $waited++) {
+	    kill 15, $cpid if $waited == 0;
+	    sleep 1;
+	}
+	if ($waited == $wait_limit) {
+	    kill 9, $cpid;
+	    sleep 1;
+	    log_warning($self, "unable to collect nbdinfo child process")
+		if waitpid($cpid, POSIX::WNOHANG) == 0;
+	}
+    }
+
+    die $err if $err;
+}
+
+my sub backup_vm_volume {
+    my ($self, $vmid, $device_name, $info, $bandwidth_limit) = @_;
+
+    my $backup_mechanism = $self->{'storage-plugin'}->get_vm_backup_mechanism($self->{scfg});
+
+    if ($backup_mechanism eq 'nbd') {
+	backup_nbd(
+	    $self,
+	    $vmid,
+	    $device_name,
+	    $info->{size},
+	    $info->{'nbd-path'},
+	    $info->{'bitmap-mode'},
+	    $info->{'bitmap-name'},
+	    $bandwidth_limit,
+	);
+    } elsif ($backup_mechanism eq 'file-handle') {
+	backup_file(
+	    $self,
+	    $vmid,
+	    $device_name,
+	    $info->{size},
+	    $info->{'file-handle'},
+	    $info->{'bitmap-mode'},
+	    $info->{'next-dirty-region'},
+	    $bandwidth_limit,
+	);
+    } else {
+	die "internal error - unknown VM backup mechansim '$backup_mechanism'\n";
+    }
+}
+
+sub backup_vm {
+    my ($self, $vmid, $guest_config, $volumes, $info) = @_;
+
+    my $target = "$self->{$vmid}->{'backup-dir'}/guest.conf";
+    file_set_contents($target, $guest_config);
+
+    if (my $firewall_config = $info->{'firewall-config'}) {
+	$target = "$self->{$vmid}->{'backup-dir'}/firewall.conf";
+	file_set_contents($target, $firewall_config);
+    }
+
+    for my $device_name (sort keys $volumes->%*) {
+	backup_vm_volume(
+	    $self, $vmid, $device_name, $volumes->{$device_name}, $info->{'bandwidth-limit'});
+    }
+}
+
+my sub backup_directory_tar {
+    my ($self, $vmid, $directory, $exclude_patterns, $sources, $bandwidth_limit) = @_;
+
+    # essentially copied from PVE/VZDump/LXC.pm' archive()
+
+    # copied from PVE::Storage::Plugin::COMMON_TAR_FLAGS
+    my @tar_flags = qw(
+	--one-file-system
+	-p --sparse --numeric-owner --acls
+	--xattrs --xattrs-include=user.* --xattrs-include=security.capability
+	--warning=no-file-ignored --warning=no-xattr-write
+    );
+
+    my $tar = ['tar', 'cpf', '-', '--totals', @tar_flags];
+
+    push @$tar, "--directory=$directory";
+
+    my @exclude_no_anchored = ();
+    my @exclude_anchored = ();
+    for my $pattern ($exclude_patterns->@*) {
+	if ($pattern !~ m|^/|) {
+	    push @exclude_no_anchored, $pattern;
+	} else {
+	    push @exclude_anchored, $pattern;
+	}
+    }
+
+    push @$tar, '--no-anchored';
+    push @$tar, '--exclude=lost+found';
+    push @$tar, map { "--exclude=$_" } @exclude_no_anchored;
+
+    push @$tar, '--anchored';
+    push @$tar, map { "--exclude=.$_" } @exclude_anchored;
+
+    push @$tar, $sources->@*;
+
+    my $cmd = [ $tar ];
+
+    push @$cmd, [ 'cstream', '-t', $bandwidth_limit * 1024 ] if $bandwidth_limit;
+
+    my $target = "$self->{$vmid}->{'backup-dir'}/archive.tar";
+    push @{$cmd->[-1]}, \(">" . PVE::Tools::shellquote($target));
+
+    my $logfunc = sub {
+	my $line = shift;
+	log_info($self, "tar: $line");
+    };
+
+    PVE::Tools::run_command($cmd, logfunc => $logfunc);
+
+    return;
+};
+
+# NOTE This only serves as an example to illustrate the 'directory' restore mechanism. It is not
+# fleshed out properly, e.g. it was not checked whether exclusion is compatible with
+# proxmox-backup-client/rsync or whether xattrs/ACLs/etc. work as expected!
+my sub backup_directory_squashfs {
+    my ($self, $vmid, $directory, $exclude_patterns, $bandwidth_limit) = @_;
+
+    my $target = "$self->{$vmid}->{'backup-dir'}/archive.sqfs";
+
+    my $mksquashfs = ['mksquashfs', $directory, $target, '-quiet', '-no-progress'];
+
+    push $mksquashfs->@*, '-wildcards';
+
+    for my $pattern ($exclude_patterns->@*) {
+	if ($pattern !~ m|^/|) { # non-anchored
+	    push $mksquashfs->@*, '-e', "... $pattern";
+	} else { # anchored
+	    push $mksquashfs->@*, '-e', substr($pattern, 1); # need to strip leading slash
+	}
+    }
+
+    my $cmd = [ $mksquashfs ];
+
+    push @$cmd, [ 'cstream', '-t', $bandwidth_limit * 1024 ] if $bandwidth_limit;
+
+    my $logfunc = sub {
+	my $line = shift;
+	log_info($self, "mksquashfs: $line");
+    };
+
+    PVE::Tools::run_command($cmd, logfunc => $logfunc);
+
+    return;
+};
+
+sub backup_container {
+    my ($self, $vmid, $guest_config, $exclude_patterns, $info) = @_;
+
+    my $target = "$self->{$vmid}->{'backup-dir'}/guest.conf";
+    file_set_contents($target, $guest_config);
+
+    if (my $firewall_config = $info->{'firewall-config'}) {
+	$target = "$self->{$vmid}->{'backup-dir'}/firewall.conf";
+	file_set_contents($target, $firewall_config);
+    }
+
+    my $backup_mode = $self->{'storage-plugin'}->get_lxc_backup_mode($self->{scfg});
+    if ($backup_mode eq 'tar') {
+	backup_directory_tar(
+	    $self,
+	    $vmid,
+	    $info->{directory},
+	    $exclude_patterns,
+	    $info->{sources},
+	    $info->{'bandwidth-limit'},
+	);
+    } elsif ($backup_mode eq 'squashfs') {
+	backup_directory_squashfs(
+	    $self,
+	    $vmid,
+	    $info->{directory},
+	    $exclude_patterns,
+	    $info->{'bandwidth-limit'},
+	);
+    } else {
+	die "got unexpected backup mode '$backup_mode' from storage plugin\n";
+    }
+}
+
+# Restore API
+
+sub restore_get_mechanism {
+    my ($self, $volname) = @_;
+
+    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
+    my ($vmtype) = $relative_backup_dir =~ m!^\d+/([a-z]+)-!;
+
+    return ('qemu-img', $vmtype) if $vmtype eq 'qemu';
+
+    if ($vmtype eq 'lxc') {
+	if (-e "$self->{scfg}->{path}/${relative_backup_dir}/archive.tar") {
+	    $self->{'restore-mechanisms'}->{$volname} = 'tar';
+	    return ('tar', $vmtype);
+	}
+
+	if (-e "$self->{scfg}->{path}/${relative_backup_dir}/archive.sqfs") {
+	    $self->{'restore-mechanisms'}->{$volname} = 'directory';
+	    return ('directory', $vmtype);
+	}
+
+	die "unable to find archive '$volname'\n";
+    }
+
+    die "cannot restore unexpected guest type '$vmtype'\n";
+}
+
+sub archive_get_guest_config {
+    my ($self, $volname) = @_;
+
+    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
+    my $filename = "$self->{scfg}->{path}/${relative_backup_dir}/guest.conf";
+
+    return file_get_contents($filename);
+}
+
+sub archive_get_firewall_config {
+    my ($self, $volname) = @_;
+
+    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
+    my $filename = "$self->{scfg}->{path}/${relative_backup_dir}/firewall.conf";
+
+    return if !-e $filename;
+
+    return file_get_contents($filename);
+}
+
+sub restore_vm_init {
+    my ($self, $volname) = @_;
+
+    my $res = {};
+
+    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
+    my $backup_dir = "$self->{scfg}->{path}/${relative_backup_dir}";
+
+    my @backup_files = glob("$backup_dir/*");
+    for my $backup_file (@backup_files) {
+	next if $backup_file !~ m!^(.*/(.*)\.qcow2)$!;
+	$backup_file = $1; # untaint
+	$res->{$2}->{size} = PVE::Storage::Plugin::file_size_info($backup_file, undef, 'qcow2');
+    }
+
+    return $res;
+}
+
+sub restore_vm_cleanup {
+    my ($self, $volname) = @_;
+
+    return; # nothing to do
+}
+
+sub restore_vm_volume_init {
+    my ($self, $volname, $device_name, $info) = @_;
+
+    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
+    my $image = "$self->{scfg}->{path}/${relative_backup_dir}/${device_name}.qcow2";
+    # NOTE Backing files are not allowed by Proxmox VE when restoring. The reason is that an
+    # untrusted qcow2 image can specify an arbitrary backing file and thus leak data from the host.
+    # For the sake of the directory example plugin, an NBD export is created, but this side-steps
+    # the check and would allow the attack again. An actual implementation should check that the
+    # backing file (or rather, the whole backing chain) is safe first!
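+    # Such a check could, for instance, inspect the backing chain via
+    # 'qemu-img info --backing-chain --output=json' (illustrative suggestion, not implemented here).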
+    my $nbd_node = bind_next_free_dev_nbd_node([$image]);
+    $self->{"${volname}/${device_name}"}->{'nbd-node'} = $nbd_node;
+    return {
+	'qemu-img-path' => $nbd_node,
+    };
+}
+
+sub restore_vm_volume_cleanup {
+    my ($self, $volname, $device_name, $info) = @_;
+
+    if (my $nbd_node = delete($self->{"${volname}/${device_name}"}->{'nbd-node'})) {
+	PVE::Tools::run_command(['qemu-nbd', '-d', $nbd_node]);
+    }
+
+    return;
+}
+
+my sub restore_tar_init {
+    my ($self, $volname) = @_;
+
+    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
+    return { 'tar-path' => "$self->{scfg}->{path}/${relative_backup_dir}/archive.tar" };
+}
+
+my sub restore_directory_init {
+    my ($self, $volname) = @_;
+
+    my (undef, $relative_backup_dir, $vmid) = $self->{'storage-plugin'}->parse_volname($volname);
+    my $archive = "$self->{scfg}->{path}/${relative_backup_dir}/archive.sqfs";
+
+    my $mount_point = "/run/backup-provider-example/${vmid}.mount";
+    make_path($mount_point);
+    die "unable to create directory $mount_point\n" if !-d $mount_point;
+
+    run_command(['mount', '-o', 'ro', $archive, $mount_point]);
+
+    return { 'archive-directory' => $mount_point };
+}
+
+my sub restore_directory_cleanup {
+    my ($self, $volname) = @_;
+
+    my (undef, undef, $vmid) = $self->{'storage-plugin'}->parse_volname($volname);
+    my $mount_point = "/run/backup-provider-example/${vmid}.mount";
+
+    run_command(['umount', $mount_point]);
+
+    return;
+}
+
+sub restore_container_init {
+    my ($self, $volname, $info) = @_;
+
+    if ($self->{'restore-mechanisms'}->{$volname} eq 'tar') {
+	return restore_tar_init($self, $volname);
+    } elsif ($self->{'restore-mechanisms'}->{$volname} eq 'directory') {
+	return restore_directory_init($self, $volname);
+    } else {
+	die "no restore mechanism set for '$volname'\n";
+    }
+}
+
+sub restore_container_cleanup {
+    my ($self, $volname, $info) = @_;
+
+    if ($self->{'restore-mechanisms'}->{$volname} eq 'tar') {
+	return; # nothing to do
+    } elsif ($self->{'restore-mechanisms'}->{$volname} eq 'directory') {
+	return restore_directory_cleanup($self, $volname);
+    } else {
+	die "no restore mechanism set for '$volname'\n";
+    }
+}
+
+1;
diff --git a/src/PVE/BackupProvider/Plugin/Makefile b/src/PVE/BackupProvider/Plugin/Makefile
index bbd7431..bedc26e 100644
--- a/src/PVE/BackupProvider/Plugin/Makefile
+++ b/src/PVE/BackupProvider/Plugin/Makefile
@@ -1,4 +1,4 @@
-SOURCES = Base.pm
+SOURCES = Base.pm DirectoryExample.pm
 
 .PHONY: install
 install:
diff --git a/src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm b/src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm
new file mode 100644
index 0000000..d04d9d1
--- /dev/null
+++ b/src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm
@@ -0,0 +1,308 @@
+package PVE::Storage::Custom::BackupProviderDirExamplePlugin;
+
+use strict;
+use warnings;
+
+use File::Basename qw(basename);
+
+use PVE::BackupProvider::Plugin::DirectoryExample;
+use PVE::Tools;
+
+use base qw(PVE::Storage::Plugin);
+
+# Helpers
+
+sub get_vm_backup_mechanism {
+    my ($class, $scfg) = @_;
+
+    return $scfg->{'vm-backup-mechanism'} // properties()->{'vm-backup-mechanism'}->{'default'};
+}
+
+sub get_vm_backup_mode {
+    my ($class, $scfg) = @_;
+
+    return $scfg->{'vm-backup-mode'} // properties()->{'vm-backup-mode'}->{'default'};
+}
+
+sub get_lxc_backup_mode {
+    my ($class, $scfg) = @_;
+
+    return $scfg->{'lxc-backup-mode'} // properties()->{'lxc-backup-mode'}->{'default'};
+}
+
+# Configuration
+
+sub api {
+    return 11;
+}
+
+sub type {
+    return 'backup-provider-dir-example';
+}
+
+sub plugindata {
+    return {
+	content => [ { backup => 1, none => 1 }, { backup => 1 } ],
+	features => { 'backup-provider' => 1 },
+	'sensitive-properties' => {},
+    };
+}
+
+sub properties {
+    return {
+	'lxc-backup-mode' => {
+	    description => "How to create LXC backups. tar - create a tar archive."
+		." squashfs - create a squashfs image. Requires squashfs-tools to be installed.",
+	    type => 'string',
+	    enum => [qw(tar squashfs)],
+	    default => 'tar',
+	},
+	'vm-backup-mechanism' => {
+	    description => "Which mechanism to use for creating VM backups. nbd - access data via "
+		." NBD export. file-handle - access data via file handle.",
+	    type => 'string',
+	    enum => [qw(nbd file-handle)],
+	    default => 'file-handle',
+	},
+	'vm-backup-mode' => {
+	    description => "How to create VM backups. full - always create full backups."
+		." incremental - create incremental backups when possible, fallback to full when"
+		." necessary, e.g. VM disk's bitmap is invalid.",
+	    type => 'string',
+	    enum => [qw(full incremental)],
+	    default => 'full',
+	},
+    };
+}
+
+sub options {
+    return {
+	path => { fixed => 1 },
+	'lxc-backup-mode' => { optional => 1 },
+	'vm-backup-mechanism' => { optional => 1 },
+	'vm-backup-mode' => { optional => 1 },
+	disable => { optional => 1 },
+	nodes => { optional => 1 },
+	'prune-backups' => { optional => 1 },
+	'max-protected-backups' => { optional => 1 },
+    };
+}
+
+# Storage implementation
+
+# NOTE a proper backup storage should implement this
+sub prune_backups {
+    my ($class, $scfg, $storeid, $keep, $vmid, $type, $dryrun, $logfunc) = @_;
+
+    die "not implemented";
+}
+
+sub parse_volname {
+    my ($class, $volname) = @_;
+
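+    # An illustrative volname is 'backup/100/qemu-1743683400', which parses to
+    # ('backup', '100/qemu-1743683400', '100').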
+    if ($volname =~ m!^backup/((\d+)/[a-z]+-\d+)$!) {
+	my ($filename, $vmid) = ($1, $2);
+	return ('backup', $filename, $vmid);
+    }
+
+    die "unable to parse volume name '$volname'\n";
+}
+
+sub path {
+    my ($class, $scfg, $volname, $storeid, $snapname) = @_;
+
+    die "volume snapshot is not possible on backup-provider-dir-example volume" if $snapname;
+
+    my ($type, $filename, $vmid) = $class->parse_volname($volname);
+
+    return ("$scfg->{path}/${filename}", $vmid, $type);
+}
+
+sub create_base {
+    my ($class, $storeid, $scfg, $volname) = @_;
+
+    die "cannot create base image in backup-provider-dir-example storage\n";
+}
+
+sub clone_image {
+    my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
+
+    die "can't clone images in backup-provider-dir-example storage\n";
+}
+
+sub alloc_image {
+    my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
+
+    die "can't allocate space in backup-provider-dir-example storage\n";
+}
+
+# NOTE a proper backup storage should implement this
+sub free_image {
+    my ($class, $storeid, $scfg, $volname, $isBase) = @_;
+
+    # if it's a backing file, it would need to be merged into the upper image first.
+
+    die "not implemented";
+}
+
+sub list_images {
+    my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
+
+    my $res = [];
+
+    return $res;
+}
+
+sub list_volumes {
+    my ($class, $storeid, $scfg, $vmid, $content_types) = @_;
+
+    my $path = $scfg->{path};
+
+    my $res = [];
+    for my $type ($content_types->@*) {
+	next if $type ne 'backup';
+
+	my @guest_dirs = glob("$path/*");
+	for my $guest_dir (@guest_dirs) {
+	    next if !-d $guest_dir || $guest_dir !~ m!/(\d+)$!;
+
+	    my $backup_vmid = basename($guest_dir);
+
+	    next if defined($vmid) && $backup_vmid != $vmid;
+
+	    my @backup_dirs = glob("$guest_dir/*");
+	    for my $backup_dir (@backup_dirs) {
+		next if !-d $backup_dir || $backup_dir !~ m!/(lxc|qemu)-(\d+)$!;
+		my ($subtype, $backup_id) = ($1, $2);
+
+		my $size = 0;
+		my @backup_files = glob("$backup_dir/*");
+		$size += -s $_ for @backup_files;
+
+		push $res->@*, {
+		    volid => "$storeid:backup/${backup_vmid}/${subtype}-${backup_id}",
+		    vmid => $backup_vmid,
+		    format => "directory",
+		    ctime => $backup_id,
+		    size => $size,
+		    subtype => $subtype,
+		    content => $type,
+		    # TODO parent for incremental
+		};
+	    }
+	}
+    }
+
+    return $res;
+}
+
+sub activate_storage {
+    my ($class, $storeid, $scfg, $cache) = @_;
+
+    my $path = $scfg->{path};
+
+    my $timeout = 2;
+    if (!PVE::Tools::run_fork_with_timeout($timeout, sub {-d $path})) {
+	die "unable to activate storage '$storeid' - directory '$path' does not exist or is"
+	    ." unreachable\n";
+    }
+
+    return 1;
+}
+
+sub deactivate_storage {
+    my ($class, $storeid, $scfg, $cache) = @_;
+
+    return 1;
+}
+
+sub activate_volume {
+    my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
+
+    die "volume snapshot is not possible on backup-provider-dir-example volume" if $snapname;
+
+    return 1;
+}
+
+sub deactivate_volume {
+    my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
+
+    die "volume snapshot is not possible on backup-provider-dir-example volume" if $snapname;
+
+    return 1;
+}
+
+sub get_volume_attribute {
+    my ($class, $scfg, $storeid, $volname, $attribute) = @_;
+
+    return;
+}
+
+# NOTE a proper backup storage should implement this to support backup notes and
+# setting protected status.
+sub update_volume_attribute {
+    my ($class, $scfg, $storeid, $volname, $attribute, $value) = @_;
+
+    die "attribute '$attribute' is not supported on backup-provider-dir-example volume";
+}
+
+sub volume_size_info {
+    my ($class, $scfg, $storeid, $volname, $timeout) = @_;
+
+    my (undef, $relative_backup_dir) = $class->parse_volname($volname);
+    my ($ctime) = $relative_backup_dir =~ m/-(\d+)$/;
+    my $backup_dir = "$scfg->{path}/${relative_backup_dir}";
+
+    my $size = 0;
+    my @backup_files = glob("$backup_dir/*");
+    for my $backup_file (@backup_files) {
+	if ($backup_file =~ m!\.qcow2$!) {
+	    $size += PVE::Storage::Plugin::file_size_info($backup_file, undef, 'qcow2');
+	} else {
+	    $size += -s $backup_file;
+	}
+    }
+
+    my $parent; # TODO for incremental
+
+    return wantarray ? ($size, 'directory', $size, $parent, $ctime) : $size;
+}
+
+sub volume_resize {
+    my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
+
+    die "volume resize is not possible on backup-provider-dir-example volume";
+}
+
+sub volume_snapshot {
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
+
+    die "volume snapshot is not possible on backup-provider-dir-example volume";
+}
+
+sub volume_snapshot_rollback {
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
+
+    die "volume snapshot rollback is not possible on backup-provider-dir-example volume";
+}
+
+sub volume_snapshot_delete {
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
+
+    die "volume snapshot delete is not possible on backup-provider-dir-example volume";
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
+
+    return 0;
+}
+
+sub new_backup_provider {
+    my ($class, $scfg, $storeid, $log_function) = @_;
+
+    return PVE::BackupProvider::Plugin::DirectoryExample->new(
+	$class, $scfg, $storeid, $log_function);
+}
+
+1;
diff --git a/src/PVE/Storage/Custom/Makefile b/src/PVE/Storage/Custom/Makefile
new file mode 100644
index 0000000..c1e3eca
--- /dev/null
+++ b/src/PVE/Storage/Custom/Makefile
@@ -0,0 +1,5 @@
+SOURCES = BackupProviderDirExamplePlugin.pm
+
+.PHONY: install
+install:
+	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Storage/Custom/$$i; done
diff --git a/src/PVE/Storage/Makefile b/src/PVE/Storage/Makefile
index ce3fd68..fc0431f 100644
--- a/src/PVE/Storage/Makefile
+++ b/src/PVE/Storage/Makefile
@@ -21,4 +21,5 @@ SOURCES= \
 install:
 	make -C Common install
 	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Storage/$$i; done
+	make -C Custom install
 	make -C LunCmd install
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [POC v8 storage 8/8] Borg example plugin
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (16 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [POC v8 storage 7/8] add backup provider example Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu-server 01/11] backup: keep track of block-node size for fleecing Wolfgang Bumiller
                   ` (20 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Archive names start with the guest type and ID and then the same
timestamp format as PBS.

Container archives have the following structure:
guest.config
firewall.config
filesystem/ # containing the whole filesystem structure

VM archives have the following structure:
guest.config
firewall.config
volumes/ # containing a raw file for each device

A bind mount (respectively, symlinks) is used to achieve this
structure, because Borg doesn't seem to support renaming on-the-fly.
(Prefix stripping via the "slashdot hack" would have helped slightly,
but is only in Borg >= 1.4
https://github.com/borgbackup/borg/actions/runs/7967940995)

NOTE: Bandwidth limit is not yet honored and the task size is not
calculated yet. Discard for VM backups would also be nice to have, but
it's not entirely clear how (parsing progress and discarding according
to that is one idea). There is no dirty bitmap support, not sure if
that is feasible to add.
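
For illustration, with the archive_name() helper added below, a backup of a
(hypothetical) VM 100 started at epoch 1743683400 would be stored as the
Borg archive:

    pve-qemu-100-2025-04-03T12:30:00Z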

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 src/PVE/BackupProvider/Plugin/Borg.pm      | 466 ++++++++++++++
 src/PVE/BackupProvider/Plugin/Makefile     |   2 +-
 src/PVE/Storage/Custom/BorgBackupPlugin.pm | 689 +++++++++++++++++++++
 src/PVE/Storage/Custom/Makefile            |   3 +-
 4 files changed, 1158 insertions(+), 2 deletions(-)
 create mode 100644 src/PVE/BackupProvider/Plugin/Borg.pm
 create mode 100644 src/PVE/Storage/Custom/BorgBackupPlugin.pm

diff --git a/src/PVE/BackupProvider/Plugin/Borg.pm b/src/PVE/BackupProvider/Plugin/Borg.pm
new file mode 100644
index 0000000..decc78a
--- /dev/null
+++ b/src/PVE/BackupProvider/Plugin/Borg.pm
@@ -0,0 +1,466 @@
+package PVE::BackupProvider::Plugin::Borg;
+
+use strict;
+use warnings;
+
+use File::chdir;
+use File::Basename qw(basename);
+use File::Path qw(make_path remove_tree);
+use POSIX qw(strftime);
+
+use PVE::Tools;
+
+# ($vmtype, $vmid, $time_string)
+our $ARCHIVE_RE_3 = qr!^pve-(lxc|qemu)-([0-9]+)-([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z)$!;
+
+sub archive_name {
+    my ($vmtype, $vmid, $backup_time) = @_;
+
+    return "pve-${vmtype}-${vmid}-" . strftime("%FT%TZ", gmtime($backup_time));
+}
+
+# remove_tree can be very verbose by default, do explicit error handling and limit to one message
+my sub _remove_tree {
+    my ($path) = @_;
+
+    remove_tree($path, { error => \my $err });
+    if ($err && @$err) { # empty array if no error
+	for my $diag (@$err) {
+	    my ($file, $message) = %$diag;
+	    die "cannot remove_tree '$path': $message\n" if $file eq '';
+	    die "cannot remove_tree '$path': unlinking $file failed - $message\n";
+	}
+    }
+}
+
+my sub prepare_run_dir {
+    my ($storeid, $archive, $operation, $uid) = @_;
+
+    my $run_dir = "/run/pve-storage/borg-plugin/${storeid}.${archive}.${operation}.$$";
+    _remove_tree($run_dir);
+    make_path($run_dir) or die "unable to create directory $run_dir\n";
+    chmod(0700, $run_dir) or die "unable to chmod directory $run_dir - $!\n";
+    if ($uid) {
+	chown($uid, -1, $run_dir) or die "unable to change owner for $run_dir\n";
+    }
+
+    return $run_dir;
+}
+
+my sub log_info {
+    my ($self, $message) = @_;
+
+    $self->{'log-function'}->('info', $message);
+}
+
+my sub log_warning {
+    my ($self, $message) = @_;
+
+    $self->{'log-function'}->('warn', $message);
+}
+
+my sub log_error {
+    my ($self, $message) = @_;
+
+    $self->{'log-function'}->('err', $message);
+}
+
+my sub file_contents_from_archive {
+    my ($self, $archive, $file) = @_;
+
+    return $self->{'storage-plugin'}->borg_cmd_extract_contents(
+	$self->{scfg},
+	$self->{storeid},
+	$archive,
+	[$file],
+    );
+}
+
+# Plugin implementation
+
+sub new {
+    my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;
+
+    my $self = bless {
+	scfg => $scfg,
+	storeid => $storeid,
+	'storage-plugin' => $storage_plugin,
+	'log-function' => $log_function,
+    }, $class;
+
+    return $self;
+}
+
+sub provider_name {
+    my ($self) = @_;
+
+    return "Borg";
+}
+
+sub job_init {
+    my ($self, $start_time) = @_;
+
+    $self->{'job-id'} = $start_time;
+    $self->{password} = $self->{'storage-plugin'}->borg_get_password(
+	$self->{scfg}, $self->{storeid});
+    $self->{'ssh-key-fh'} = $self->{'storage-plugin'}->borg_open_ssh_key(
+	$self->{scfg}, $self->{storeid});
+}
+
+sub job_cleanup {
+    my ($self) = @_;
+
+    delete($self->{password});
+    close($self->{'ssh-key-fh'});
+
+    return;
+}
+
+sub backup_init {
+    my ($self, $vmid, $vmtype, $start_time) = @_;
+
+    $self->{$vmid}->{archive} = archive_name($vmtype, $vmid, $start_time);
+
+    return { 'archive-name' => $self->{$vmid}->{archive} };
+}
+
+sub backup_cleanup {
+    my ($self, $vmid, $vmtype, $success, $info) = @_;
+
+    if (defined($vmtype) && $vmtype eq 'lxc') {
+	if (my $run_dir = $self->{$vmid}->{'run-dir'}) {
+	    eval {
+		# tmpfs for temporary SSH files gets mounted there in backup_container()
+		eval { PVE::Tools::run_command(['umount', "${run_dir}/ssh"]); };
+		eval { PVE::Tools::run_command(['umount', '-R', "${run_dir}/backup/filesystem"]); };
+		_remove_tree($run_dir);
+	    };
+	    die "unable to clean up $run_dir - $@" if $@;
+	}
+    }
+    return { stats => { 'archive-size' => 0 } }; # TODO get size
+}
+
+sub backup_container_prepare {
+    my ($self, $vmid, $info) = @_;
+
+    my $archive = $self->{$vmid}->{archive};
+    my $run_dir = prepare_run_dir(
+	$self->{storeid}, $archive, "backup-container", $info->{'backup-user-id'});
+    $self->{$vmid}->{'run-dir'} = $run_dir;
+
+    my $create_dir = sub {
+	my $dir = shift;
+	make_path($dir) or die "unable to create directory $dir\n";
+	chmod(0700, $dir) or die "unable to chmod directory $dir\n";
+	chown($info->{'backup-user-id'}, -1, $dir)
+	    or die "unable to change owner for $dir\n";
+    };
+
+    $create_dir->("${run_dir}/backup/");
+    $create_dir->("${run_dir}/backup/filesystem");
+    $create_dir->("${run_dir}/ssh");
+    $create_dir->("${run_dir}/.config");
+    $create_dir->("${run_dir}/.cache");
+
+    for my $subdir ($info->{sources}->@*) {
+	PVE::Tools::run_command([
+	    'mount',
+	    '-o', 'bind,ro',
+	    "$info->{directory}/${subdir}",
+	    "${run_dir}/backup/filesystem/${subdir}",
+	]);
+    }
+}
+
+sub backup_get_mechanism {
+    my ($self, $vmid, $vmtype) = @_;
+
+    return 'file-handle' if $vmtype eq 'qemu';
+    return 'directory' if $vmtype eq 'lxc';
+
+    die "unsupported VM type '$vmtype'\n";
+}
+
+sub backup_handle_log_file {
+    my ($self, $vmid, $filename) = @_;
+
+    return; # don't upload, Proxmox VE keeps the task log too
+}
+
+sub backup_vm_query_incremental {
+    my ($self, $vmid, $volumes) = @_;
+
+    return; # no support currently
+}
+
+my sub backup_vm_setup_loopdev {
+    my ($file) = @_;
+
+    my $device;
+    my $parser = sub {
+	my $line = shift;
+	if ($line =~ m@^(/dev/loop\d+)$@) {
+	    $device = $1;
+	}
+    };
+    my $losetup_cmd = [
+	'losetup',
+	'--show',
+	'-r',
+	'-f',
+	$file,
+    ];
+    PVE::Tools::run_command($losetup_cmd, outfunc => $parser);
+    return $device;
+}
+
+sub backup_vm {
+    my ($self, $vmid, $guest_config, $volumes, $info) = @_;
+
+    # TODO honor bandwidth limit
+    # TODO discard?
+
+    my $archive = $self->{$vmid}->{archive};
+
+    my $run_dir = prepare_run_dir($self->{storeid}, $archive, "backup-vm");
+    my $volume_dir = "${run_dir}/volumes";
+    make_path($volume_dir) or die "unable to create directory $volume_dir\n";
+
+    PVE::Tools::file_set_contents("${run_dir}/guest.config", $guest_config);
+    my $paths = ['./guest.config'];
+
+    if (my $firewall_config = $info->{'firewall-config'}) {
+	PVE::Tools::file_set_contents("${run_dir}/firewall.config", $firewall_config);
+	push $paths->@*, './firewall.config';
+    }
+
+    my @blockdevs = ();
+
+    # TODO --stats for size?
+
+    eval {
+	for my $device_name (sort keys $volumes->%*) {
+	    # FIXME there is no option to follow symlinks except in combination with special files,
+	    # so loop devices are set up here for this purpose. Newer versions of Borg (since 1.4)
+	    # could use the slashdot hack instead:
+	    # https://github.com/borgbackup/borg/commit/e7bd18d7f38ddf9e58a4587ae4a2ad8a24d67374
+	    my $path = "/proc/$$/fd/" . fileno($volumes->{$device_name}->{'file-handle'});
+	    my $blockdev = backup_vm_setup_loopdev($path);
+	    push @blockdevs, $blockdev;
+
+	    my $link_name = "${volume_dir}/${device_name}.raw";
+	    symlink($blockdev, $link_name) or die "could not create symlink $link_name -> $blockdev\n";
+	    push $paths->@*, "./volumes/" . basename($link_name, ());
+	}
+
+	local $CWD = $run_dir;
+
+	$self->{'storage-plugin'}->borg_cmd_create(
+	    $self->{scfg},
+	    $self->{storeid},
+	    $self->{$vmid}->{archive},
+	    $paths,
+	    ['--read-special', '--progress'],
+	);
+    };
+    my $err = $@;
+    for my $blockdev (@blockdevs) {
+	eval { PVE::Tools::run_command(['losetup', '-d', $blockdev]); };
+	log_warning($self, "cannot cleanup loop device - $@") if $@;
+    }
+    eval { _remove_tree($run_dir) };
+    log_warning($self, $@) if $@;
+    die $err if $err;
+}
+
+sub backup_container {
+    my ($self, $vmid, $guest_config, $exclude_patterns, $info) = @_;
+
+    # TODO honor bandwidth limit
+
+    my $run_dir = $self->{$vmid}->{'run-dir'};
+    my $backup_dir = "${run_dir}/backup";
+
+    my $archive = $self->{$vmid}->{archive};
+
+    my $ssh_key;
+    if ($self->{'ssh-key-fh'}) {
+	$ssh_key =
+	    PVE::Tools::safe_read_from($self->{'ssh-key-fh'}, 1024 * 1024, 0, "SSH key file");
+    }
+
+    my (undef, $ssh_options) =
+	$self->{'storage-plugin'}->borg_setup_ssh_dir($self->{scfg}, "${run_dir}/ssh", $ssh_key);
+
+    PVE::Tools::file_set_contents("${backup_dir}/guest.config", $guest_config);
+    my $paths = ['./guest.config'];
+
+    if (my $firewall_config = $info->{'firewall-config'}) {
+	PVE::Tools::file_set_contents("${backup_dir}/firewall.config", $firewall_config);
+	push $paths->@*, './firewall.config';
+    }
+
+    push $paths->@*, "./filesystem";
+
+    my $opts = ['--numeric-ids', '--sparse', '--progress'];
+
+    for my $pattern ($exclude_patterns->@*) {
+	if ($pattern =~ m|^/|) {
+	    push $opts->@*, '-e', "filesystem${pattern}";
+	} else {
+	    push $opts->@*, '-e', "filesystem/**${pattern}";
+	}
+    }
+
+    push $opts->@*, '-e', "filesystem/**lost+found" if $info->{'backup-user-id'} != 0;
+
+    # TODO --stats for size?
+
+    # Don't make this assignment 'local': restoring the previous working directory on scope exit
+    # would fail with a permission error, because the method is executed in a user namespace.
+    $CWD = $backup_dir if $info->{'backup-user-id'} != 0;
+    {
+	local $CWD = $backup_dir;
+	local $ENV{BORG_BASE_DIR} = ${run_dir};
+	local $ENV{BORG_PASSPHRASE} = $self->{password};
+
+	local $ENV{BORG_RSH} = "ssh " . join(" ", $ssh_options->@*);
+
+	my $uri = $self->{'storage-plugin'}->borg_repository_uri($self->{scfg}, $self->{storeid});
+	my $archive = $self->{$vmid}->{archive};
+
+	my $cmd = ['borg', 'create', $opts->@*, "${uri}::${archive}", $paths->@*];
+
+	PVE::Tools::run_command($cmd, errmsg => "command @$cmd failed");
+    }
+}
+
+sub restore_get_mechanism {
+    my ($self, $volname) = @_;
+
+    my (undef, $archive) = $self->{'storage-plugin'}->parse_volname($volname);
+    my ($vmtype) = $archive =~ m!^pve-([^\s-]+)!
+	or die "cannot parse guest type from archive name '$archive'\n";
+
+    return ('qemu-img', $vmtype) if $vmtype eq 'qemu';
+    return ('directory', $vmtype) if $vmtype eq 'lxc';
+
+    die "unexpected guest type '$vmtype'\n";
+}
+
+sub archive_get_guest_config {
+    my ($self, $volname) = @_;
+
+    my (undef, $archive) = $self->{'storage-plugin'}->parse_volname($volname);
+    return file_contents_from_archive($self, $archive, 'guest.config');
+}
+
+sub archive_get_firewall_config {
+    my ($self, $volname) = @_;
+
+    my (undef, $archive) = $self->{'storage-plugin'}->parse_volname($volname);
+    my $config = eval {
+	file_contents_from_archive($self, $archive, 'firewall.config');
+    };
+    if (my $err = $@) {
+	return if $err =~ m!Include pattern 'firewall\.config' never matched\.!;
+	die $err;
+    }
+    return $config;
+}
+
+sub restore_vm_init {
+    my ($self, $volname) = @_;
+
+    my $res = {};
+
+    my (undef, $archive, $vmid) = $self->{'storage-plugin'}->parse_volname($volname);
+
+    my $run_dir = prepare_run_dir($self->{storeid}, $archive, "restore-vm");
+    $self->{$volname}->{'run-dir'} = $run_dir;
+
+    my $mount_point = "${run_dir}/mount";
+    make_path($mount_point) or die "unable to create directory $mount_point\n";
+    $self->{$volname}->{'mount-point'} = $mount_point;
+
+    $self->{'storage-plugin'}->borg_cmd_mount(
+	$self->{scfg},
+	$self->{storeid},
+	$archive,
+	$mount_point,
+    );
+
+    my @backup_files = glob("$mount_point/volumes/*");
+    for my $backup_file (@backup_files) {
+	next if $backup_file !~ m!^(.*/(.*)\.raw)$!; # untaint
+	($backup_file, my $device_name) = ($1, $2);
+	# TODO avoid dependency on base plugin?
+	$res->{$device_name}->{size} =
+	    PVE::Storage::Plugin::file_size_info($backup_file, undef, 'raw');
+    }
+
+    return $res;
+}
+
+sub restore_vm_cleanup {
+    my ($self, $volname) = @_;
+
+    my $run_dir = $self->{$volname}->{'run-dir'} or return;
+    my $mount_point = $self->{$volname}->{'mount-point'};
+
+    eval { PVE::Tools::run_command(['umount', $mount_point]) };
+    eval { _remove_tree($run_dir); };
+    die "unable to clean up $run_dir - $@" if $@;
+
+    return;
+}
+
+sub restore_vm_volume_init {
+    my ($self, $volname, $device_name, $info) = @_;
+
+    my $mount_point = $self->{$volname}->{'mount-point'}
+	or die "expected mount point for archive not present\n";
+
+    return { 'qemu-img-path' => "${mount_point}/volumes/${device_name}.raw" };
+}
+
+sub restore_vm_volume_cleanup {
+    my ($self, $volname, $device_name, $info) = @_;
+
+    return;
+}
+
+sub restore_container_init {
+    my ($self, $volname, $info) = @_;
+
+    my (undef, $archive, $vmid) = $self->{'storage-plugin'}->parse_volname($volname);
+    my $run_dir = prepare_run_dir($self->{storeid}, $archive, "restore-container");
+    $self->{$volname}->{'run-dir'} = $run_dir;
+
+    my $mount_point = "${run_dir}/mount";
+    make_path($mount_point) or die "unable to create directory $mount_point\n";
+    $self->{$volname}->{'mount-point'} = $mount_point;
+
+    $self->{'storage-plugin'}->borg_cmd_mount(
+	$self->{scfg},
+	$self->{storeid},
+	$archive,
+	$mount_point,
+    );
+
+    return { 'archive-directory' => "${mount_point}/filesystem" };
+}
+
+sub restore_container_cleanup {
+    my ($self, $volname, $info) = @_;
+
+    my $run_dir = $self->{$volname}->{'run-dir'} or return;
+    my $mount_point = $self->{$volname}->{'mount-point'};
+
+    eval { PVE::Tools::run_command(['umount', $mount_point]) };
+    eval { _remove_tree($run_dir); };
+    die "unable to clean up $run_dir - $@" if $@;
+}
+
+1;
diff --git a/src/PVE/BackupProvider/Plugin/Makefile b/src/PVE/BackupProvider/Plugin/Makefile
index bedc26e..db08c2d 100644
--- a/src/PVE/BackupProvider/Plugin/Makefile
+++ b/src/PVE/BackupProvider/Plugin/Makefile
@@ -1,4 +1,4 @@
-SOURCES = Base.pm DirectoryExample.pm
+SOURCES = Base.pm Borg.pm DirectoryExample.pm
 
 .PHONY: install
 install:
diff --git a/src/PVE/Storage/Custom/BorgBackupPlugin.pm b/src/PVE/Storage/Custom/BorgBackupPlugin.pm
new file mode 100644
index 0000000..84b12b9
--- /dev/null
+++ b/src/PVE/Storage/Custom/BorgBackupPlugin.pm
@@ -0,0 +1,689 @@
+package PVE::Storage::Custom::BorgBackupPlugin;
+
+use strict;
+use warnings;
+
+use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);
+use File::Path qw(make_path remove_tree);
+use JSON qw(from_json);
+use MIME::Base64 qw(decode_base64 encode_base64);
+use Net::IP;
+use POSIX;
+
+use PVE::BackupProvider::Plugin::Borg;
+use PVE::Tools;
+
+use base qw(PVE::Storage::Plugin);
+
+sub api {
+    return 11;
+}
+
+sub check_config {
+    my ($class, $sectionId, $config, $create, $skipSchemaCheck) = @_;
+
+    if (my $ssh_public_keys = $config->{'ssh-server-public-keys'}) {
+	if ($ssh_public_keys !~ m/^[A-Za-z0-9+\/]+={0,2}$/) {
+	    $config->{'ssh-server-public-keys'} = encode_base64($ssh_public_keys, '');
+	}
+    }
+
+    return $class->SUPER::check_config($sectionId, $config, $create, $skipSchemaCheck);
+}
+
+sub borg_repository_uri {
+    my ($class, $scfg, $storeid) = @_;
+
+    my $uri = '';
+    my $server = $scfg->{server} or die "no server configured for $storeid\n";
+    my $username = $scfg->{username} or die "no username configured for $storeid\n";
+    my $prefix = "ssh://$username@";
+    $server = "[$server]" if Net::IP::ip_is_ipv6($server);
+    if (my $port = $scfg->{port}) {
+	$uri = "${prefix}${server}:${port}";
+    } else {
+	$uri = "${prefix}${server}";
+    }
+    $uri .= $scfg->{'repository-path'};
+
+    return $uri;
+}
+
+my sub borg_password_file_name {
+    my ($scfg, $storeid) = @_;
+
+    return "/etc/pve/priv/storage/${storeid}.pw";
+}
+
+my sub borg_set_password {
+    my ($scfg, $storeid, $password) = @_;
+
+    my $pwfile = borg_password_file_name($scfg, $storeid);
+    mkdir "/etc/pve/priv/storage";
+
+    PVE::Tools::file_set_contents($pwfile, "$password\n");
+}
+
+my sub borg_delete_password {
+    my ($scfg, $storeid) = @_;
+
+    my $pwfile = borg_password_file_name($scfg, $storeid);
+
+    unlink $pwfile;
+}
+
+sub borg_get_password {
+    my ($class, $scfg, $storeid) = @_;
+
+    my $pwfile = borg_password_file_name($scfg, $storeid);
+
+    return PVE::Tools::file_read_firstline($pwfile);
+}
+
+sub borg_setup_ssh_dir {
+    my ($class, $scfg, $ssh_dir, $ssh_key) = @_;
+
+    my $dir_created;
+    my $ssh_opts = [];
+
+    my $ensure_dir_created = sub {
+	return if $dir_created;
+	# for container backup, the directory is created while still privileged, so it may already exist
+	if (!-d $ssh_dir) {
+	    make_path($ssh_dir) or die "unable to create directory $ssh_dir\n";
+	    chmod(0700, $ssh_dir) or die "unable to chmod directory $ssh_dir\n";
+	}
+	PVE::Tools::run_command(
+	    ['mount', '-t', 'tmpfs', '-o', 'size=1M,mode=0700', 'tmpfs', $ssh_dir]);
+	$dir_created = 1;
+    };
+
+    if ($ssh_key) {
+	$ensure_dir_created->();
+	PVE::Tools::file_set_contents("${ssh_dir}/ssh.key", $ssh_key, 0600);
+	push $ssh_opts->@*, '-i', "${ssh_dir}/ssh.key";
+    }
+
+    if ($scfg->{'ssh-server-public-keys'}) {
+	$ensure_dir_created->();
+	my $raw = decode_base64($scfg->{'ssh-server-public-keys'});
+	PVE::Tools::file_set_contents("${ssh_dir}/known_hosts", $raw, 0600);
+	push $ssh_opts->@*, '-o', "UserKnownHostsFile=${ssh_dir}/known_hosts";
+	push $ssh_opts->@*, '-o', "GlobalKnownHostsFile=none";
+    }
+
+    return ($dir_created, $ssh_opts);
+}
+
+sub borg_cmd_env {
+    my ($class, $scfg, $storeid, $sub) = @_;
+
+    my $ssh_dir = "/run/pve-storage/borg-plugin/${storeid}.ssh.$$";
+    my $ssh_key = borg_get_ssh_key($scfg, $storeid);
+    my ($uses_ssh_dir, $ssh_options) = $class->borg_setup_ssh_dir($scfg, $ssh_dir, $ssh_key);
+    local $ENV{BORG_RSH} = "ssh " . join(" ", $ssh_options->@*);
+
+    local $ENV{BORG_PASSPHRASE} = $class->borg_get_password($scfg, $storeid);
+
+    my $res = eval {
+	my $uri = $class->borg_repository_uri($scfg, $storeid);
+	return $sub->($uri);
+    };
+    my $err = $@;
+
+    if ($uses_ssh_dir) {
+	eval { PVE::Tools::run_command(['umount', "$ssh_dir"]); };
+	warn "unable to unmount directory $ssh_dir - $@" if $@;
+	eval { remove_tree($ssh_dir); };
+	warn "unable to cleanup directory $ssh_dir - $@" if $@;
+    }
+
+    die $err if $err;
+
+    return $res;
+}
+
+sub borg_cmd_list {
+    my ($class, $scfg, $storeid) = @_;
+
+    return $class->borg_cmd_env($scfg, $storeid, sub {
+	my ($uri) = @_;
+
+	my $json = '';
+	my $cmd = ['borg', 'list', '--json', $uri];
+
+	my $errfunc = sub { warn $_[0]; };
+	my $outfunc = sub { $json .= $_[0]; };
+
+	PVE::Tools::run_command(
+	    $cmd, errmsg => "command @$cmd failed", outfunc => $outfunc, errfunc => $errfunc);
+
+	my $res = eval { from_json($json) };
+	die "unable to parse 'borg list' output - $@\n" if $@;
+	return $res;
+    });
+}
+
+sub borg_cmd_create {
+    my ($class, $scfg, $storeid, $archive, $paths, $opts) = @_;
+
+    return $class->borg_cmd_env($scfg, $storeid, sub {
+	my ($uri) = @_;
+
+	my $cmd = ['borg', 'create', $opts->@*, "${uri}::${archive}", $paths->@*];
+
+	PVE::Tools::run_command($cmd, errmsg => "command @$cmd failed");
+
+	return;
+    });
+}
+
+sub borg_cmd_extract_contents {
+    my ($class, $scfg, $storeid, $archive, $paths) = @_;
+
+    return $class->borg_cmd_env($scfg, $storeid, sub {
+	my ($uri) = @_;
+
+	my $output = '';
+	my $outfunc = sub {
+	    $output .= "$_[0]\n";
+	};
+
+	my $cmd = ['borg', 'extract', '--stdout', "${uri}::${archive}", $paths->@*];
+
+	PVE::Tools::run_command($cmd, errmsg => "command @$cmd failed", outfunc => $outfunc);
+
+	return $output;
+    });
+}
+
+sub borg_cmd_delete {
+    my ($class, $scfg, $storeid, $archive) = @_;
+
+    return $class->borg_cmd_env($scfg, $storeid, sub {
+	my ($uri) = @_;
+
+	my $cmd = ['borg', 'delete', "${uri}::${archive}"];
+
+	PVE::Tools::run_command($cmd, errmsg => "command @$cmd failed");
+
+	return;
+    });
+}
+
+sub borg_cmd_info {
+    my ($class, $scfg, $storeid, $archive, $timeout) = @_;
+
+    return $class->borg_cmd_env($scfg, $storeid, sub {
+	my ($uri) = @_;
+
+	my $json = '';
+	my $cmd = ['borg', 'info', '--json', "${uri}::${archive}"];
+
+	my $errfunc = sub { warn $_[0]; };
+	my $outfunc = sub { $json .= $_[0]; };
+
+	PVE::Tools::run_command(
+	    $cmd,
+	    errmsg => "command @$cmd failed",
+	    timeout => $timeout,
+	    outfunc => $outfunc,
+	    errfunc => $errfunc,
+	);
+
+	my $res = eval { from_json($json) };
+	die "unable to parse 'borg info' output for archive '$archive' - $@\n" if $@;
+	return $res;
+    });
+}
+
+sub borg_cmd_mount {
+    my ($class, $scfg, $storeid, $archive, $mount_point) = @_;
+
+    return $class->borg_cmd_env($scfg, $storeid, sub {
+	my ($uri) = @_;
+
+	my $cmd = ['borg', 'mount', "${uri}::${archive}", $mount_point];
+
+	PVE::Tools::run_command($cmd, errmsg => "command @$cmd failed");
+
+	return;
+    });
+}
+
+my sub parse_backup_time {
+    my ($time_string) = @_;
+
+    my @tm = (POSIX::strptime($time_string, "%FT%TZ"));
+    # expect sec, min, hour, mday, mon, year
+    if (grep { !defined($_) } @tm[0..5]) {
+	warn "error parsing time from string '$time_string'\n";
+	return 0;
+    } else {
+	local $ENV{TZ} = 'UTC'; # time string is UTC
+
+	# Fill in isdst to avoid undef warning. No daylight saving time for UTC.
+	$tm[8] //= 0;
+
+	if (my $since_epoch = mktime(@tm)) {
+	    return int($since_epoch);
+	} else {
+	    warn "error parsing time from string '$time_string'\n";
+	    return 0;
+	}
+    }
+}
+
+# Helpers
+
+sub type {
+    return 'borg';
+}
+
+sub plugindata {
+    return {
+	content => [ { backup => 1, none => 1 }, { backup => 1 } ],
+	features => { 'backup-provider' => 1 },
+	'sensitive-properties' => {
+	    password => 1,
+	    'ssh-private-key' => 1,
+	},
+    };
+}
+
+sub properties {
+    return {
+	'repository-path' => {
+	    description => "Path to the backup repository",
+	    type => 'string',
+	},
+	'ssh-private-key' => {
+	    # Since 1 is written to the config when the key is present, the format is not checked.
+	    description => "SSH identity/private key for the client-side in PEM format.",
+	    type => 'string',
+	},
+	'ssh-server-public-keys' => {
+	    description => "SSH public key(s) for the server-side (one key per line, OpenSSH"
+		." format).",
+	    type => 'string',
+	},
+    };
+}
+
+sub options {
+    return {
+	'repository-path' => { fixed => 1 },
+	server => { fixed => 1 },
+	port => { optional => 1 },
+	username => { fixed => 1 },
+	'ssh-private-key' => { optional => 1 },
+	'ssh-server-public-keys' => { optional => 1 },
+	password => { optional => 1 },
+	disable => { optional => 1 },
+	nodes => { optional => 1 },
+	'prune-backups' => { optional => 1 },
+	'max-protected-backups' => { optional => 1 },
+    };
+}
+
+sub borg_ssh_key_file_name {
+    my ($scfg, $storeid) = @_;
+
+    return "/etc/pve/priv/storage/${storeid}.ssh.key";
+}
+
+sub borg_set_ssh_key {
+    my ($scfg, $storeid, $key) = @_;
+
+    my $keyfile = borg_ssh_key_file_name($scfg, $storeid);
+    mkdir "/etc/pve/priv/storage";
+
+    PVE::Tools::file_set_contents($keyfile, "$key\n");
+}
+
+sub borg_delete_ssh_key {
+    my ($scfg, $storeid) = @_;
+
+    my $keyfile = borg_ssh_key_file_name($scfg, $storeid);
+
+    if (!unlink $keyfile) {
+	return if $! == ENOENT;
+	die "failed to delete SSH key! $!\n";
+    }
+    delete $scfg->{'ssh-private-key'};
+}
+
+sub borg_get_ssh_key {
+    my ($scfg, $storeid) = @_;
+
+    my $keyfile = borg_ssh_key_file_name($scfg, $storeid);
+
+    return if !-f $keyfile;
+
+    return PVE::Tools::file_get_contents($keyfile);
+}
+
+# Returns a file handle with FD_CLOEXEC disabled if there is an SSH key, or `undef` if there is
+# none. Dies on error.
+sub borg_open_ssh_key {
+    my ($self, $scfg, $storeid) = @_;
+
+    my $ssh_key_file = borg_ssh_key_file_name($scfg, $storeid);
+
+    my $keyfd;
+    if (!open($keyfd, '<', $ssh_key_file)) {
+	if ($! == ENOENT) {
+	    die "SSH key configured but no key file found!\n" if $scfg->{'ssh-private-key'};
+	    return undef;
+	}
+	die "failed to open SSH key: $ssh_key_file: $!\n";
+    }
+    my $flags = fcntl($keyfd, F_GETFD, 0)
+	// die "failed to get file descriptor flags for SSH key FD: $!\n";
+    fcntl($keyfd, F_SETFD, $flags & ~FD_CLOEXEC)
+	or die "failed to remove FD_CLOEXEC from SSH key file descriptor\n";
+
+    return $keyfd;
+}
+
+# Storage implementation
+
+sub on_add_hook {
+    my ($class, $storeid, $scfg, %param) = @_;
+
+    if (defined(my $password = $param{password})) {
+	borg_set_password($scfg, $storeid, $password);
+    } else {
+	borg_delete_password($scfg, $storeid);
+    }
+
+    if (defined(my $ssh_key = delete $param{'ssh-private-key'})) {
+	borg_set_ssh_key($scfg, $storeid, $ssh_key);
+	$scfg->{'ssh-private-key'} = 1;
+    } else {
+	borg_delete_ssh_key($scfg, $storeid);
+    }
+
+    if ($scfg->{'ssh-server-public-keys'}) {
+	my $ssh_public_keys = decode_base64($scfg->{'ssh-server-public-keys'});
+	PVE::Tools::validate_ssh_public_keys($ssh_public_keys);
+    }
+
+    return;
+}
+
+sub on_update_hook {
+    my ($class, $storeid, $scfg, %param) = @_;
+
+    if (exists($param{password})) {
+	if (defined($param{password})) {
+	    borg_set_password($scfg, $storeid, $param{password});
+	} else {
+	    borg_delete_password($scfg, $storeid);
+	}
+    }
+
+    if (exists($param{'ssh-private-key'})) {
+	if (defined(my $ssh_key = delete($param{'ssh-private-key'}))) {
+	    borg_set_ssh_key($scfg, $storeid, $ssh_key);
+	    $scfg->{'ssh-private-key'} = 1;
+	} else {
+	    borg_delete_ssh_key($scfg, $storeid);
+	}
+    }
+
+    if ($scfg->{'ssh-server-public-keys'}) {
+	my $ssh_public_keys = decode_base64($scfg->{'ssh-server-public-keys'});
+	PVE::Tools::validate_ssh_public_keys($ssh_public_keys);
+    }
+
+    return;
+}
+
+sub on_delete_hook {
+    my ($class, $storeid, $scfg) = @_;
+
+    borg_delete_password($scfg, $storeid);
+    borg_delete_ssh_key($scfg, $storeid);
+
+    return;
+}
+
+sub prune_backups {
+    my ($class, $scfg, $storeid, $keep, $vmid, $type, $dryrun, $logfunc) = @_;
+
+    # FIXME - is 'borg prune' compatible with ours?
+    die "not implemented";
+}
+
+sub parse_volname {
+    my ($class, $volname) = @_;
+
+    if ($volname =~ m!^backup/(.*)$!) {
+	my $archive = $1;
+	if ($archive =~ $PVE::BackupProvider::Plugin::Borg::ARCHIVE_RE_3) {
+	    return ('backup', $archive, $2);
+	}
+    }
+
+    die "unable to parse Borg volume name '$volname'\n";
+}
+
+sub path {
+    my ($class, $scfg, $volname, $storeid, $snapname) = @_;
+
+    die "volume snapshot is not possible on Borg volume" if $snapname;
+
+    my $uri = $class->borg_repository_uri($scfg, $storeid);
+    my (undef, $archive) = $class->parse_volname($volname);
+
+    return "${uri}::${archive}";
+}
+
+sub create_base {
+    my ($class, $storeid, $scfg, $volname) = @_;
+
+    die "cannot create base image in Borg storage\n";
+}
+
+sub clone_image {
+    my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
+
+    die "can't clone images in Borg storage\n";
+}
+
+sub alloc_image {
+    my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
+
+    die "can't allocate space in Borg storage\n";
+}
+
+sub free_image {
+    my ($class, $storeid, $scfg, $volname, $isBase) = @_;
+
+    my (undef, $archive) = $class->parse_volname($volname);
+
+    borg_cmd_delete($class, $scfg, $storeid, $archive);
+
+    return;
+}
+
+sub list_images {
+    my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
+
+    return []; # guest images are not supported, only backups
+}
+
+sub list_volumes {
+    my ($class, $storeid, $scfg, $vmid, $content_types) = @_;
+
+    my $res = [];
+
+    return $res if !grep { $_ eq 'backup' } $content_types->@*;
+
+    my $archives = $class->borg_cmd_list($scfg, $storeid)->{archives}
+	or die "missing expected 'archives' key in 'borg list' JSON output\n";
+
+    for my $info ($archives->@*) {
+	my $archive = $info->{archive};
+	my ($vmtype, $backup_vmid, $time_string) =
+	    $archive =~ $PVE::BackupProvider::Plugin::Borg::ARCHIVE_RE_3 or next;
+
+	next if defined($vmid) && $vmid != $backup_vmid;
+
+	push $res->@*, {
+	    volid => "${storeid}:backup/${archive}",
+	    size => 0, # FIXME how to cheaply get?
+	    content => 'backup',
+	    ctime => parse_backup_time($time_string),
+	    vmid => $backup_vmid,
+	    format => "borg-archive",
+	    subtype => $vmtype,
+	}
+    }
+
+    return $res;
+}
+
+sub status {
+    my ($class, $storeid, $scfg, $cache) = @_;
+
+    my $uri = $class->borg_repository_uri($scfg, $storeid);
+
+    my $res;
+
+    if ($uri =~ m!^ssh://!) {
+	#FIXME ssh and df on target?
+	return;
+    } else { # $uri is a local path
+	my $timeout = 2;
+	$res = PVE::Tools::df($uri, $timeout);
+
+	return if !$res || !$res->{total};
+    }
+
+
+    return ($res->{total}, $res->{avail}, $res->{used}, 1);
+}
+
+sub activate_storage {
+    my ($class, $storeid, $scfg, $cache) = @_;
+
+    # TODO how to cheaply check? split ssh and non-ssh?
+
+    return 1;
+}
+
+sub deactivate_storage {
+    my ($class, $storeid, $scfg, $cache) = @_;
+
+    return 1;
+}
+
+sub activate_volume {
+    my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
+
+    die "volume snapshot is not possible on Borg volume" if $snapname;
+
+    return 1;
+}
+
+sub deactivate_volume {
+    my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
+
+    die "volume snapshot is not possible on Borg volume" if $snapname;
+
+    return 1;
+}
+
+sub get_volume_attribute {
+    my ($class, $scfg, $storeid, $volname, $attribute) = @_;
+
+    return;
+}
+
+sub update_volume_attribute {
+    my ($class, $scfg, $storeid, $volname, $attribute, $value) = @_;
+
+    # FIXME notes or protected possible?
+
+    die "attribute '$attribute' is not supported on Borg volume";
+}
+
+sub volume_size_info {
+    my ($class, $scfg, $storeid, $volname, $timeout) = @_;
+
+    my (undef, $archive) = $class->parse_volname($volname);
+    my (undef, undef, $time_string) =
+	$archive =~ $PVE::BackupProvider::Plugin::Borg::ARCHIVE_RE_3;
+
+    my $backup_time = 0;
+    if ($time_string) {
+	$backup_time = parse_backup_time($time_string)
+    } else {
+	warn "could not parse time from archive name '$archive'\n";
+    }
+
+    my $archives = borg_cmd_info($class, $scfg, $storeid, $archive, $timeout)->{archives}
+	or die "missing expected 'archives' key in 'borg info' JSON output\n";
+
+    my $stats = eval { $archives->[0]->{stats} }
+	or die "missing expected entry in 'borg info' JSON output\n";
+    my ($size, $used) = $stats->@{qw(original_size deduplicated_size)};
+
+    ($size) = ($size =~ /^(\d+)$/); # untaint
+    die "size '$size' not an integer\n" if !defined($size);
+    # coerce back from string
+    $size = int($size);
+    ($used) = ($used =~ /^(\d+)$/); # untaint
+    die "used '$used' not an integer\n" if !defined($used);
+    # coerce back from string
+    $used = int($used);
+
+    return wantarray ? ($size, 'borg-archive', $used, undef, $backup_time) : $size;
+}
+
+sub volume_resize {
+    my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
+
+    die "volume resize is not possible on Borg volume";
+}
+
+sub volume_snapshot {
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
+
+    die "volume snapshot is not possible on Borg volume";
+}
+
+sub volume_snapshot_rollback {
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
+
+    die "volume snapshot rollback is not possible on Borg volume";
+}
+
+sub volume_snapshot_delete {
+    my ($class, $scfg, $storeid, $volname, $snap) = @_;
+
+    die "volume snapshot delete is not possible on Borg volume";
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
+
+    return 0;
+}
+
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_vmid, $target_volname) = @_;
+
+    die "volume rename is not implemented in Borg storage plugin\n";
+}
+
+sub new_backup_provider {
+    my ($class, $scfg, $storeid, $bandwidth_limit, $log_function) = @_;
+
+    return PVE::BackupProvider::Plugin::Borg->new(
+	$class, $scfg, $storeid, $bandwidth_limit, $log_function);
+}
+
+1;
diff --git a/src/PVE/Storage/Custom/Makefile b/src/PVE/Storage/Custom/Makefile
index c1e3eca..886442d 100644
--- a/src/PVE/Storage/Custom/Makefile
+++ b/src/PVE/Storage/Custom/Makefile
@@ -1,4 +1,5 @@
-SOURCES = BackupProviderDirExamplePlugin.pm
+SOURCES = BackupProviderDirExamplePlugin.pm \
+          BorgBackupPlugin.pm
 
 .PHONY: install
 install:
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 01/11] backup: keep track of block-node size for fleecing
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (17 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [POC v8 storage 8/8] Borg example plugin Wolfgang Bumiller
@ 2025-04-03 12:30 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 02/11] backup: fleecing: use exact size when allocating non-raw fleecing images Wolfgang Bumiller
                   ` (19 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:30 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

For fleecing, the size needs to match exactly what QEMU sees. In
particular, EFI disks might be attached with a 'size=' option, meaning
that size can be different from the volume's size. Commit 36377acf
("backup: disk info: also keep track of size") introduced size
tracking and it was used for fleecing since then, but the accurate
size information needs to be queried via QMP.

Should also help with the following issue reported in the community
forum:
https://forum.proxmox.com/threads/152202
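
For illustration, a minimal sketch (not part of the patch; the drive
key 'drive-efidisk0' is a hypothetical example) of where the two sizes
come from:

    # what the storage layer reports for the volume
    my $volume_size = PVE::Storage::volume_size_info($storecfg, $volid, 5);
    # what QEMU actually sees for the attached block node
    my $block_info = mon_cmd($vmid, "query-block");
    my ($efi) = grep { $_->{device} eq 'drive-efidisk0' } $block_info->@*;
    my $qemu_size = $efi->{inserted}->{image}->{'virtual-size'};
    # with an explicit 'size=' drive option these can differ,
    # and fleecing must use $qemu_size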

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 PVE/VZDump/QemuServer.pm | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 3238e34..4a25889 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -558,7 +558,7 @@ my sub allocate_fleecing_images {
 		my $name = "vm-$vmid-fleece-$n";
 		$name .= ".$format" if $scfg->{path};
 
-		my $size = PVE::Tools::convert_size($di->{size}, 'b' => 'kb');
+		my $size = PVE::Tools::convert_size($di->{'block-node-size'}, 'b' => 'kb');
 
 		$di->{'fleece-volid'} = PVE::Storage::vdisk_alloc(
 		    $self->{storecfg}, $fleecing_storeid, $vmid, $format, $name, $size);
@@ -607,7 +607,7 @@ my sub attach_fleecing_images {
 	    my $drive = "file=$path,if=none,id=$devid,format=$format,discard=unmap";
 	    # Specify size explicitly, to make it work if storage backend rounded up size for
 	    # fleecing image when allocating.
-	    $drive .= ",size=$di->{size}" if $format eq 'raw';
+	    $drive .= ",size=$di->{'block-node-size'}" if $format eq 'raw';
 	    $drive =~ s/\\/\\\\/g;
 	    my $ret = PVE::QemuServer::Monitor::hmp_cmd($vmid, "drive_add auto \"$drive\"", 60);
 	    die "attaching fleecing image $volid failed - $ret\n" if $ret !~ m/OK/s;
@@ -633,6 +633,8 @@ my sub check_and_prepare_fleecing {
     }
 
     if ($use_fleecing) {
+	$self->query_block_node_sizes($vmid, $disks);
+
 	my ($default_format, $valid_formats) = PVE::Storage::storage_default_format(
 	    $self->{storecfg}, $fleecing_opts->{storage});
 	my $format = scalar(grep { $_ eq 'qcow2' } $valid_formats->@*) ? 'qcow2' : 'raw';
@@ -1038,6 +1040,31 @@ sub qga_fs_thaw {
     $self->logerr($@) if $@;
 }
 
+# The size for fleecing images needs to be exactly the size that QEMU sees. E.g. an EFI disk can be
+# attached with a smaller size than the underlying image on the storage.
+sub query_block_node_sizes {
+    my ($self, $vmid, $disks) = @_;
+
+    my $block_info = mon_cmd($vmid, "query-block");
+    $block_info = { map { $_->{device} => $_ } $block_info->@* };
+
+    for my $diskinfo ($disks->@*) {
+	my $drive_key = $diskinfo->{virtdev};
+	$drive_key .= "-backup" if $drive_key eq 'tpmstate0';
+	my $block_node_size =
+	    eval { $block_info->{"drive-$drive_key"}->{inserted}->{image}->{'virtual-size'}; };
+	if (!$block_node_size) {
+	    $self->loginfo(
+		"could not determine block node size of drive '$drive_key' - using fallback");
+	    $block_node_size = $diskinfo->{size}
+		or die "could not determine size of drive '$drive_key'\n";
+	}
+	$diskinfo->{'block-node-size'} = $block_node_size;
+    }
+
+    return;
+}
+
 # we need a running QEMU/KVM process for backup, starts a paused (prelaunch)
 # one if VM isn't already running
 sub enforce_vm_running_for_backup {
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 02/11] backup: fleecing: use exact size when allocating non-raw fleecing images
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (18 preceding siblings ...)
  2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu-server 01/11] backup: keep track of block-node size for fleecing Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 03/11] backup: allow adding fleecing images also for EFI and TPM Wolfgang Bumiller
                   ` (18 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

A non-1KiB aligned source image could cause issues when used with
qcow2 fleecing images, e.g. for an image with size 4.5 KiB:
> Size mismatch for 'drive-tpmstate0-backup-fleecing' - sector count 10 != 9

Raw images are attached to QEMU with an explicit 'size' argument, so
rounding up before allocation doesn't matter, but it does for qcow2.
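
As a worked example for the 4.5 KiB case:

    4.5 KiB = 4608 bytes = 9 sectors of 512 bytes   (source image, what QEMU expects)
    rounded up to 5 KiB  = 5120 bytes = 10 sectors  (allocated qcow2 fleecing image)

hence the "sector count 10 != 9" error quoted above.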

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 PVE/VZDump/QemuServer.pm | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 4a25889..6562aba 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -558,7 +558,15 @@ my sub allocate_fleecing_images {
 		my $name = "vm-$vmid-fleece-$n";
 		$name .= ".$format" if $scfg->{path};
 
-		my $size = PVE::Tools::convert_size($di->{'block-node-size'}, 'b' => 'kb');
+		my $size;
+		if ($format ne 'raw') {
+		    # Since non-raw images cannot be attached with an explicit 'size' parameter to
+		    # QEMU later, pass the exact size to the storage layer. This makes qcow2
+		    # fleecing images work for non-1KiB-aligned source images.
+		    $size = $di->{'block-node-size'}/1024;
+		} else {
+		    $size = PVE::Tools::convert_size($di->{'block-node-size'}, 'b' => 'kb');
+		}
 
 		$di->{'fleece-volid'} = PVE::Storage::vdisk_alloc(
 		    $self->{storecfg}, $fleecing_storeid, $vmid, $format, $name, $size);
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 03/11] backup: allow adding fleecing images also for EFI and TPM
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (19 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 02/11] backup: fleecing: use exact size when allocating non-raw fleecing images Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 04/11] backup: implement backup for external providers Wolfgang Bumiller
                   ` (17 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

For the external backup API, it will be necessary to add a fleecing
image even for small disks like EFI and TPM, because there is no other
place the old data could be copied to when a new guest write comes in.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 PVE/VZDump/QemuServer.pm | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 6562aba..65f0179 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -541,7 +541,7 @@ my sub cleanup_fleecing_images {
 }
 
 my sub allocate_fleecing_images {
-    my ($self, $disks, $vmid, $fleecing_storeid, $format) = @_;
+    my ($self, $disks, $vmid, $fleecing_storeid, $format, $all_images) = @_;
 
     die "internal error - no fleecing storage specified\n" if !$fleecing_storeid;
 
@@ -552,7 +552,8 @@ my sub allocate_fleecing_images {
 	my $n = 0; # counter for fleecing image names
 
 	for my $di ($disks->@*) {
-	    next if $di->{virtdev} =~ m/^(?:tpmstate|efidisk)\d$/; # too small to be worth it
+	    # EFI/TPM are usually too small to be worth it, but it's required for external providers
+	    next if !$all_images && $di->{virtdev} =~ m/^(?:tpmstate|efidisk)\d$/;
 	    if ($di->{type} eq 'block' || $di->{type} eq 'file') {
 		my $scfg = PVE::Storage::storage_config($self->{storecfg}, $fleecing_storeid);
 		my $name = "vm-$vmid-fleece-$n";
@@ -624,7 +625,7 @@ my sub attach_fleecing_images {
 }
 
 my sub check_and_prepare_fleecing {
-    my ($self, $vmid, $fleecing_opts, $disks, $is_template, $qemu_support) = @_;
+    my ($self, $vmid, $fleecing_opts, $disks, $is_template, $qemu_support, $all_images) = @_;
 
     # Even if the VM was started specifically for fleecing, it's possible that the VM is resumed and
     # then starts doing IO. For VMs that are not resumed the fleecing images will just stay empty,
@@ -647,7 +648,8 @@ my sub check_and_prepare_fleecing {
 	    $self->{storecfg}, $fleecing_opts->{storage});
 	my $format = scalar(grep { $_ eq 'qcow2' } $valid_formats->@*) ? 'qcow2' : 'raw';
 
-	allocate_fleecing_images($self, $disks, $vmid, $fleecing_opts->{storage}, $format);
+	allocate_fleecing_images(
+	    $self, $disks, $vmid, $fleecing_opts->{storage}, $format, $all_images);
 	attach_fleecing_images($self, $disks, $vmid, $format);
     }
 
@@ -738,7 +740,7 @@ sub archive_pbs {
 	my $is_template = PVE::QemuConfig->is_template($self->{vmlist}->{$vmid});
 
 	$task->{'use-fleecing'} = check_and_prepare_fleecing(
-	    $self, $vmid, $opts->{fleecing}, $task->{disks}, $is_template, $qemu_support);
+	    $self, $vmid, $opts->{fleecing}, $task->{disks}, $is_template, $qemu_support, 0);
 
 	my $fs_frozen = $self->qga_fs_freeze($task, $vmid);
 
@@ -911,7 +913,7 @@ sub archive_vma {
 	$attach_tpmstate_drive->($self, $task, $vmid);
 
 	$task->{'use-fleecing'} = check_and_prepare_fleecing(
-	    $self, $vmid, $opts->{fleecing}, $task->{disks}, $is_template, $qemu_support);
+	    $self, $vmid, $opts->{fleecing}, $task->{disks}, $is_template, $qemu_support, 0);
 
 	my $outfh;
 	if ($opts->{stdout}) {
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 04/11] backup: implement backup for external providers
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (20 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 03/11] backup: allow adding fleecing images also for EFI and TPM Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 05/11] test: qemu img convert: add test cases for snapshots Wolfgang Bumiller
                   ` (16 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

The state of the VM's disk images at the time the backup is started is
preserved via a snapshot-access block node. Old data is moved to the
fleecing image when new guest writes come in. The snapshot-access
block node, as well as the associated bitmap in case of incremental
backup, will be made available to the external provider. They are
exported via NBD: for the 'nbd' mechanism, the NBD socket path is
passed to the provider, while for the 'file-handle' mechanism, the NBD
export is made accessible via a file handle and the bitmap information
is made available via a $next_dirty_region->() function. For
'file-handle', the 'nbdinfo' and 'nbdfuse' binaries are required.

The provider can indicate that it wants to do an incremental backup by
returning the bitmap ID that was used for a previous backup; it will
then be told whether the bitmap was newly created (either first backup
or old bitmap was invalid) or whether the bitmap can be reused.

The provider then reads the parts of the NBD or virtual file it needs,
either the full disk for full backup, or the dirty parts according to
the bitmap for incremental backup. The bitmap has to be respected,
reads to other parts of the image will return an error. After backing
up each part of the disk, it should be discarded in the export to
avoid unnecessary space usage in the fleecing image (requires the
storage underlying the fleecing image to support discard too).
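
For illustration, a minimal provider-side sketch for the 'file-handle'
mechanism (error handling, discarding and O_DIRECT buffer alignment
omitted for brevity; the target-specific write is left as a
placeholder):

    # inside the provider's backup_vm($self, $vmid, $guest_config, $volumes, $param)
    for my $device_name (sort keys $volumes->%*) {
        my $fh = $volumes->{$device_name}->{'file-handle'};
        my $next_dirty_region = $volumes->{$device_name}->{'next-dirty-region'};
        while (my ($offset, $length) = $next_dirty_region->()) {
            sysseek($fh, $offset, 0) // die "sysseek failed - $!\n"; # 0 = SEEK_SET
            while ($length > 0) {
                my $to_read = $length < (4 << 20) ? $length : (4 << 20); # 4 MiB chunks
                my $read = sysread($fh, my $data, $to_read);
                die "sysread failed - $!\n" if !defined($read) || $read == 0;
                # ... write $data for [$offset, $offset + $read) to the backup target ...
                $offset += $read;
                $length -= $read;
            }
        }
    }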

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[WB: - instead of backup_vm_available_bitmaps call
       backup_vm_query_incremental, which provides a bitmap-mode
       instead, pass this along and use just the storage id as bitmap
       name]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
Changes in v8: described in the trailers above ^
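
For illustration, a minimal sketch of the new provider-side query (call
signature as used by archive_external() below; a plugin could simply
request a reusable bitmap for every device):

    sub backup_vm_query_incremental {
        my ($self, $vmid, $volumes) = @_;

        # request incremental backups via a persistent bitmap per device ('use' mode)
        return { map { $_ => 'use' } keys $volumes->%* };
    }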

 PVE/VZDump/QemuServer.pm | 418 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 416 insertions(+), 2 deletions(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 65f0179..a4a5627 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -3,12 +3,15 @@ package PVE::VZDump::QemuServer;
 use strict;
 use warnings;
 
+use Fcntl qw(:mode);
 use File::Basename;
-use File::Path;
+use File::Path qw(make_path remove_tree);
+use File::stat qw();
 use IO::File;
 use IPC::Open3;
 use JSON;
 use POSIX qw(EINTR EAGAIN);
+use Time::HiRes qw(usleep);
 
 use PVE::Cluster qw(cfs_read_file);
 use PVE::INotify;
@@ -20,7 +23,7 @@ use PVE::QMPClient;
 use PVE::Storage::Plugin;
 use PVE::Storage::PBSPlugin;
 use PVE::Storage;
-use PVE::Tools;
+use PVE::Tools qw(run_command);
 use PVE::VZDump;
 use PVE::Format qw(render_duration render_bytes);
 
@@ -30,6 +33,7 @@ use PVE::QemuServer::Drive qw(checked_volume_format);
 use PVE::QemuServer::Helpers;
 use PVE::QemuServer::Machine;
 use PVE::QemuServer::Monitor qw(mon_cmd);
+use PVE::QemuServer::QMPHelpers;
 
 use base qw (PVE::VZDump::Plugin);
 
@@ -284,6 +288,8 @@ sub archive {
 
     if ($self->{vzdump}->{opts}->{pbs}) {
 	$self->archive_pbs($task, $vmid);
+    } elsif ($self->{vzdump}->{'backup-provider'}) {
+	$self->archive_external($task, $vmid);
     } else {
 	$self->archive_vma($task, $vmid, $filename, $comp);
     }
@@ -1148,11 +1154,90 @@ sub snapshot {
     # nothing to do
 }
 
+my sub cleanup_file_handles {
+    my ($self, $file_handles) = @_;
+
+    for my $file_handle ($file_handles->@*) {
+	close($file_handle) or $self->log('warn', "unable to close file handle - $!");
+    }
+}
+
+my sub cleanup_nbd_mounts {
+    my ($self, $info) = @_;
+
+    for my $mount_point (keys $info->%*) {
+	my $pid_file = delete($info->{$mount_point}->{'pid-file'});
+	unlink($pid_file) or $self->log('warn', "unable to unlink '$pid_file' - $!");
+	# Do a lazy unmount, because the target might still be busy even if the file handle was
+	# already closed.
+	eval { run_command(['fusermount', '-z', '-u', $mount_point]); };
+	if (my $err = $@) {
+	    delete $info->{$mount_point};
+	    $self->log('warn', "unable to unmount NBD backup source '$mount_point' - $err");
+	}
+    }
+
+    # Wait for the unmount before cleaning up child PIDs to avoid 'nbdfuse' processes being
+    # interrupted by the signals issued there.
+    my $waited;
+    my $wait_limit = 50; # 5 seconds
+    for ($waited = 0; $waited < $wait_limit && scalar(keys $info->%*); $waited++) {
+	for my $mount_point (keys $info->%*) {
+	    delete($info->{$mount_point}) if !-e $info->{$mount_point}->{'virtual-file'};
+	    eval { remove_tree($mount_point); };
+	}
+	usleep(100_000);
+    }
+    # just informational, remaining child processes will be killed afterwards
+    $self->loginfo("unable to gracefully cleanup NBD fuse mounts") if scalar(keys $info->%*) != 0;
+}
+
+my sub cleanup_child_processes {
+    my ($self, $cpids) = @_;
+
+    my $waited;
+    my $wait_limit = 5;
+    for ($waited = 0; $waited < $wait_limit && scalar(keys $cpids->%*); $waited++) {
+	for my $cpid (keys $cpids->%*) {
+	    delete($cpids->{$cpid}) if waitpid($cpid, POSIX::WNOHANG) > 0;
+	}
+	if ($waited == 0) {
+	    kill 15, $_ for keys $cpids->%*;
+	}
+	sleep 1;
+    }
+    if ($waited == $wait_limit && scalar(keys $cpids->%*)) {
+	kill 9, $_ for keys $cpids->%*;
+	sleep 1;
+	for my $cpid (keys $cpids->%*) {
+	    delete($cpids->{$cpid}) if waitpid($cpid, POSIX::WNOHANG) > 0;
+	}
+	$self->log('warn', "unable to collect child process '$_'") for keys $cpids->%*;
+    }
+}
+
 sub cleanup {
     my ($self, $task, $vmid) = @_;
 
     # If VM was started only for backup, it is already stopped now.
     if (PVE::QemuServer::Helpers::vm_running_locally($vmid)) {
+	if ($task->{cleanup}->{'nbd-stop'}) {
+	    eval { PVE::QemuServer::QMPHelpers::nbd_stop($vmid); };
+	    $self->logerr($@) if $@;
+	}
+
+	if (my $info = $task->{cleanup}->{'backup-access-teardown'}) {
+	    my $params = {
+		'target-id' => $info->{'target-id'},
+		timeout => 60,
+		success => $info->{success} ? JSON::true : JSON::false,
+	    };
+
+	    $self->loginfo("tearing down backup-access");
+	    eval { mon_cmd($vmid, "backup-access-teardown", $params->%*) };
+	    $self->logerr($@) if $@;
+	}
+
 	$detach_tpmstate_drive->($task, $vmid);
 	detach_fleecing_images($task->{disks}, $vmid) if $task->{'use-fleecing'};
     }
@@ -1162,6 +1247,335 @@ sub cleanup {
     if ($self->{qmeventd_fh}) {
 	close($self->{qmeventd_fh});
     }
+
+    cleanup_file_handles($self, $task->{cleanup}->{'file-handles'})
+	if $task->{cleanup}->{'file-handles'};
+
+    cleanup_nbd_mounts($self, $task->{cleanup}->{'nbd-mounts'})
+	if $task->{cleanup}->{'nbd-mounts'};
+
+    cleanup_child_processes($self, $task->{cleanup}->{'child-pids'})
+	if $task->{cleanup}->{'child-pids'};
+
+    if (my $dir = $task->{'backup-access-root-dir'}) {
+	eval { remove_tree($dir) };
+	$self->log('warn', "unable to cleanup directory $dir - $@") if $@;
+    }
+}
+
+my sub virtual_file_backup_prepare {
+    my ($self, $vmid, $task, $device_name, $size, $nbd_path, $bitmap_name) = @_;
+
+    my $cleanup = $task->{cleanup};
+
+    my $nbd_uri = "nbd+unix:///${device_name}?socket=${nbd_path}";
+
+    my $error_fh;
+    my $next_dirty_region;
+
+    # If there is no dirty bitmap, it can be treated as if there's a full dirty one. The output of
+    # nbdinfo is a list of tuples with offset, length, type, description. The first bit of 'type' is
+    # set when the bitmap is dirty, see QEMU's docs/interop/nbd.txt
+    my $dirty_bitmap = [];
+    if ($bitmap_name) {
+	my $input = IO::File->new();
+	my $info = IO::File->new();
+	$error_fh = IO::File->new();
+	my $nbdinfo_cmd = ["nbdinfo", $nbd_uri, "--map=qemu:dirty-bitmap:${bitmap_name}"];
+	my $cpid = open3($input, $info, $error_fh, $nbdinfo_cmd->@*)
+	    or die "failed to spawn nbdinfo child - $!\n";
+	$cleanup->{'child-pids'}->{$cpid} = 1;
+
+	$next_dirty_region = sub {
+	    my ($offset, $length, $type);
+	    do {
+		my $line = <$info>;
+		return if !$line;
+		die "unexpected output from nbdinfo - $line\n"
+		    if $line !~ m/^\s*(\d+)\s*(\d+)\s*(\d+)/; # also untaints
+		($offset, $length, $type) = ($1, $2, $3);
+	    } while (($type & 0x1) == 0); # not dirty
+	    return ($offset, $length);
+	};
+    } else {
+	my $done = 0;
+	$next_dirty_region = sub {
+	    return if $done;
+	    $done = 1;
+	    return (0, $size);
+	};
+    }
+
+    my $mount_point = $task->{'backup-access-root-dir'}
+	."/${vmid}-nbd.backup-access.${device_name}.$$";
+    make_path($mount_point) or die "unable to create directory $mount_point\n";
+    $cleanup->{'nbd-mounts'}->{$mount_point} = {};
+
+    # Note that nbdfuse requires "$dir/$file". A single name would be treated as a dir and the file
+    # would then be named "$dir/nbd".
+    my $virtual_file = "${mount_point}/${device_name}";
+    $cleanup->{'nbd-mounts'}->{$mount_point}->{'virtual-file'} = $virtual_file;
+
+    my $pid_file = "${mount_point}.pid";
+    PVE::Tools::file_set_contents($pid_file, '', 0600);
+    $cleanup->{'nbd-mounts'}->{$mount_point}->{'pid-file'} = $pid_file;
+
+    my $cpid = fork() // die "fork failed: $!\n";
+    if (!$cpid) {
+	# By default, access will be restricted to the current user, because the allow_other fuse
+	# mount option is not used.
+	eval {
+	    run_command(
+		["nbdfuse", '--pidfile', $pid_file, $virtual_file, $nbd_uri],
+		logfunc => sub { $self->loginfo("nbdfuse '$virtual_file': $_[0]") },
+	    );
+	};
+	if (my $err = $@) {
+	    eval { $self->loginfo($err); };
+	    POSIX::_exit(1);
+	}
+	POSIX::_exit(0);
+    }
+    $cleanup->{'child-pids'}->{$cpid} = 1;
+
+    my ($virtual_file_ready, $waited) = (0, 0);
+    while (!$virtual_file_ready && $waited < 30) { # 3 seconds
+	my $pid = PVE::Tools::file_read_firstline($pid_file);
+	if ($pid) {
+	    $virtual_file_ready = 1;
+	} else {
+	    usleep(100_000);
+	    $waited++;
+	}
+    }
+    die "timeout setting up virtual file '$virtual_file'" if !$virtual_file_ready;
+
+    $self->loginfo("provided NBD export as a virtual file '$virtual_file'");
+
+    # NOTE O_DIRECT, because each block should be read exactly once and also because fuse will try
+    # to read ahead otherwise, which would produce warning messages if the next block is not
+    # mapped/allocated for the NBD export in case of incremental backup. Open as writable to support
+    # discard.
+    my $fh = IO::File->new($virtual_file, O_RDWR | O_DIRECT)
+	or die "unable to open backup source '$virtual_file' - $!\n";
+    push $cleanup->{'file-handles'}->@*, $fh;
+
+    return ($fh, $next_dirty_region);
+}
+
+my sub backup_access_to_volume_info {
+    my ($self, $vmid, $task, $backup_access_info, $mechanism, $nbd_path) = @_;
+
+    my $bitmap_action_to_status = {
+	'not-used' => 'none',
+	'not-used-removed' => 'none',
+	'new' => 'new',
+	'used' => 'reuse',
+	'invalid' => 'new',
+    };
+
+    my $volumes = {};
+
+    for my $info ($backup_access_info->@*) {
+	my $bitmap_status = 'none';
+	my $bitmap_name;
+	if (my $bitmap_action = $info->{'bitmap-action'}) {
+	    $bitmap_status = $bitmap_action_to_status->{$bitmap_action}
+		or die "got unexpected bitmap action '$bitmap_action'\n";
+
+	    $bitmap_name = $info->{'bitmap-name'} or die "bitmap-name is not present\n";
+	}
+
+	my ($device, $size) = $info->@{qw(device size)};
+
+	$volumes->{$device}->{'bitmap-mode'} = $bitmap_status;
+	$volumes->{$device}->{size} = $size;
+
+	if ($mechanism eq 'file-handle') {
+	    my ($fh, $next_dirty_region) = virtual_file_backup_prepare(
+		$self, $vmid, $task, $device, $size, $nbd_path, $bitmap_name);
+	    $volumes->{$device}->{'file-handle'} = $fh;
+	    $volumes->{$device}->{'next-dirty-region'} = $next_dirty_region;
+	} elsif ($mechanism eq 'nbd') {
+	    $volumes->{$device}->{'nbd-path'} = $nbd_path;
+	    $volumes->{$device}->{'bitmap-name'} = $bitmap_name;
+	} else {
+	    die "internal error - unknown mechanism '$mechanism'";
+	}
+    }
+
+    return $volumes;
+}
+
+sub archive_external {
+    my ($self, $task, $vmid) = @_;
+
+    $task->{'backup-access-root-dir'} = "/run/qemu-server/${vmid}.backup-access.$$/";
+    make_path($task->{'backup-access-root-dir'})
+	or die "unable to create directory $task->{'backup-access-root-dir'}\n";
+    chmod(0700, $task->{'backup-access-root-dir'})
+	or die "unable to chmod directory $task->{'backup-access-root-dir'}\n";
+
+    my $guest_config = PVE::Tools::file_get_contents("$task->{tmpdir}/qemu-server.conf");
+    my $firewall_file = "$task->{tmpdir}/qemu-server.fw";
+
+    my $opts = $self->{vzdump}->{opts};
+
+    my $backup_provider = $self->{vzdump}->{'backup-provider'};
+
+    $self->loginfo("starting external backup via " . $backup_provider->provider_name());
+
+    my $starttime = time();
+
+    my $devices = {};
+    for my $di ($task->{disks}->@*) {
+	my $device_name = $di->{qmdevice};
+	die "implement me (type '$di->{type}')" if $di->{type} ne 'block' && $di->{type} ne 'file';
+	$devices->{$device_name}->{size} = $di->{'block-node-size'};
+    }
+
+    $self->enforce_vm_running_for_backup($vmid);
+    $self->{qmeventd_fh} = PVE::QemuServer::register_qmeventd_handle($vmid);
+
+    eval {
+	$SIG{INT} = $SIG{TERM} = $SIG{QUIT} = $SIG{HUP} = $SIG{PIPE} = sub {
+	    die "interrupted by signal\n";
+	};
+
+	my $qemu_support = mon_cmd($vmid, "query-proxmox-support");
+
+	if (!$qemu_support->{'backup-access-api'}) {
+	    die "backup-access API required for external provider backup is not supported by"
+		." the running QEMU version. Please make sure you've installed the latest"
+		." version and the VM has been restarted.\n";
+	}
+
+	$attach_tpmstate_drive->($self, $task, $vmid);
+
+	my $is_template = PVE::QemuConfig->is_template($self->{vmlist}->{$vmid});
+
+	my $fleecing = check_and_prepare_fleecing(
+	    $self, $vmid, $opts->{fleecing}, $task->{disks}, $is_template, $qemu_support, 1);
+	die "cannot setup backup access without fleecing\n" if !$fleecing;
+
+	$task->{'use-fleecing'} = 1;
+
+	my $target_id = "snapshot-access:$opts->{storage}";
+
+	my $mechanism = $backup_provider->backup_get_mechanism($vmid, 'qemu');
+	die "mechanism '$mechanism' requested by backup provider is not supported for VMs\n"
+	    if $mechanism ne 'file-handle' && $mechanism ne 'nbd';
+
+	$self->loginfo("using backup mechanism '$mechanism'");
+
+	if ($mechanism eq 'file-handle') {
+	    # For mechanism 'file-handle', the nbdfuse binary is required. Also, the bitmap needs
+	    # to be passed to the provider. The bitmap cannot be dumped via QMP and doing it via
+	    # qemu-img is experimental, so use nbdinfo. Both are in libnbd-bin.
+	    die "need 'nbdfuse' binary from package libnbd-bin\n" if !-e "/usr/bin/nbdfuse";
+	}
+
+	my $incremental_info = $backup_provider->backup_vm_query_incremental($vmid, $devices);
+
+	my $qmp_devices = [];
+	for my $device (sort keys $devices->%*) {
+	    my $qmp_device = { device => $device };
+	    if (defined(my $mode = $incremental_info->{$device})) {
+		if ($mode eq 'new') {
+		    $qmp_device->{'bitmap-name'} = $opts->{storage};
+		    $qmp_device->{'bitmap-mode'} = 'new';
+		} elsif ($mode eq 'use') {
+		    $qmp_device->{'bitmap-name'} = $opts->{storage};
+		    $qmp_device->{'bitmap-mode'} = 'use';
+		} elsif ($mode eq 'none') {
+		    $qmp_device->{'bitmap-mode'} = 'none';
+		} else {
+		    die "invalid incremental mode '$mode' returned by backup provider plugin\n";
+		}
+	    }
+	    push($qmp_devices->@*, $qmp_device);
+	}
+
+	my $params = {
+	    'target-id' => $target_id,
+	    devices => $qmp_devices,
+	    timeout => 60,
+	};
+
+	my $fs_frozen = $self->qga_fs_freeze($task, $vmid);
+
+	$self->loginfo("setting up snapshot-access for backup");
+
+	$task->{cleanup}->{'backup-access-teardown'} = { 'target-id' => $target_id, success => 0 };
+
+	my $backup_access_info = eval { mon_cmd($vmid, "backup-access-setup", $params->%*) };
+	my $qmperr = $@;
+
+	if ($fs_frozen) {
+	    $self->qga_fs_thaw($vmid);
+	}
+
+	die $qmperr if $qmperr;
+
+	$self->resume_vm_after_job_start($task, $vmid);
+
+	my $bitmap_info = mon_cmd($vmid, 'query-pbs-bitmap-info');
+	for my $info (sort { $a->{drive} cmp $b->{drive} } $bitmap_info->@*) {
+	    my $text = $bitmap_action_to_human->($self, $info);
+	    my $drive = $info->{drive};
+	    $drive =~ s/^drive-//; # for consistency
+	    $self->loginfo("$drive: dirty-bitmap status: $text");
+	}
+
+	$self->loginfo("starting NBD server");
+
+	my $nbd_path = "$task->{'backup-access-root-dir'}/${vmid}-nbd.backup-access";
+	mon_cmd(
+	    $vmid, "nbd-server-start", addr => { type => 'unix', data => { path => $nbd_path } });
+	$task->{cleanup}->{'nbd-stop'} = 1;
+
+	for my $info ($backup_access_info->@*) {
+	    $self->loginfo("adding NBD export for $info->{device}");
+
+	    my $export_params = {
+		id => $info->{device},
+		'node-name' => $info->{'node-name'},
+		writable => JSON::true, # for discard
+		type => "nbd",
+		name => $info->{device}, # NBD export name
+	    };
+
+	    if ($info->{'bitmap-name'}) {
+		$export_params->{bitmaps} = [{
+		    node => $info->{'bitmap-node-name'},
+		    name => $info->{'bitmap-name'},
+		}];
+	    }
+
+	    mon_cmd($vmid, "block-export-add", $export_params->%*);
+	}
+
+	my $volumes = backup_access_to_volume_info(
+	    $self, $vmid, $task, $backup_access_info, $mechanism, $nbd_path);
+
+	my $param = {};
+	$param->{'bandwidth-limit'} = $opts->{bwlimit} * 1024 if $opts->{bwlimit};
+	$param->{'firewall-config'} = PVE::Tools::file_get_contents($firewall_file)
+	    if -e $firewall_file;
+
+	$backup_provider->backup_vm($vmid, $guest_config, $volumes, $param);
+    };
+    my $err = $@;
+
+    if ($err) {
+	$self->logerr($err);
+	$self->resume_vm_after_job_start($task, $vmid);
+    } else {
+	$task->{cleanup}->{'backup-access-teardown'}->{success} = 1;
+    }
+    $self->restore_vm_power_state($vmid);
+
+    die $err if $err;
 }
 
 1;
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 05/11] test: qemu img convert: add test cases for snapshots
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (21 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 04/11] backup: implement backup for external providers Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 06/11] image convert: collect options in hash argument Wolfgang Bumiller
                   ` (15 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 test/run_qemu_img_convert_tests.pl | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/test/run_qemu_img_convert_tests.pl b/test/run_qemu_img_convert_tests.pl
index 20ff387..29c188d 100755
--- a/test/run_qemu_img_convert_tests.pl
+++ b/test/run_qemu_img_convert_tests.pl
@@ -194,6 +194,24 @@ my $tests = [
 	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
 	]
     },
+    {
+	name => "lvmsnapshot",
+	parameters => [ "local-lvm:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, 'foo', 0, undef ],
+	expected => [
+	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "raw",
+	    "/dev/pve/snap_vm-$vmid-disk-0_foo",
+	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
+	]
+    },
+    {
+	name => "qcow2snapshot",
+	parameters => [ "local:$vmid/vm-$vmid-disk-0.qcow2", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, 'snap', 0, undef ],
+	expected => [
+	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-l", "snapshot.name=snap", "-f", "qcow2", "-O", "raw",
+	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.qcow2",
+	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
+	]
+    },
 ];
 
 my $command;
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 06/11] image convert: collect options in hash argument
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (22 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 05/11] test: qemu img convert: add test cases for snapshots Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 07/11] image convert: allow caller to specify the format of the source path Wolfgang Bumiller
                   ` (14 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

In preparation to add another option and to improve style for the
callers.

One of the test cases that specified $is_zero_initialized is for a
non-existent storage, so the option was not added there.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 PVE/QemuServer.pm                  | 19 +++++++---
 PVE/QemuServer/ImportDisk.pm       |  3 +-
 test/run_qemu_img_convert_tests.pl | 58 ++++++++++++++++++++----------
 3 files changed, 56 insertions(+), 24 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ffd5d56..5c6cb94 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7787,8 +7787,14 @@ sub convert_iscsi_path {
     die "cannot convert iscsi path '$path', unknown format\n";
 }
 
+# The possible options are:
+# bwlimit - The bandwidth limit in KiB/s.
+# is-zero-initialized - If the destination image is zero-initialized.
+# snapname - Use this snapshot of the source image.
 sub qemu_img_convert {
-    my ($src_volid, $dst_volid, $size, $snapname, $is_zero_initialized, $bwlimit) = @_;
+    my ($src_volid, $dst_volid, $size, $opts) = @_;
+
+    my ($bwlimit, $snapname) = $opts->@{qw(bwlimit snapname)};
 
     my $storecfg = PVE::Storage::config();
     my ($src_storeid) = PVE::Storage::parse_volume_id($src_volid, 1);
@@ -7846,7 +7852,7 @@ sub qemu_img_convert {
 
     push @$cmd, $src_path;
 
-    if (!$dst_is_iscsi && $is_zero_initialized) {
+    if (!$dst_is_iscsi && $opts->{'is-zero-initialized'}) {
 	push @$cmd, "zeroinit:$dst_path";
     } else {
 	push @$cmd, $dst_path;
@@ -8307,7 +8313,12 @@ sub clone_disk {
 		push $cmd->@*, "bs=$bs", "osize=$size", "if=$src_path", "of=$dst_path";
 		run_command($cmd);
 	    } else {
-		qemu_img_convert($drive->{file}, $newvolid, $size, $snapname, $sparseinit, $bwlimit);
+		my $opts = {
+		    bwlimit => $bwlimit,
+		    'is-zero-initialized' => $sparseinit,
+		    snapname => $snapname,
+		};
+		qemu_img_convert($drive->{file}, $newvolid, $size, $opts);
 	    }
 	}
     }
@@ -8391,7 +8402,7 @@ sub create_efidisk($$$$$$$) {
     my $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, $vars_size);
     PVE::Storage::activate_volumes($storecfg, [$volid]);
 
-    qemu_img_convert($ovmf_vars, $volid, $vars_size_b, undef, 0);
+    qemu_img_convert($ovmf_vars, $volid, $vars_size_b);
     my $size = PVE::Storage::volume_size_info($storecfg, $volid, 3);
 
     return ($volid, $size/1024);
diff --git a/PVE/QemuServer/ImportDisk.pm b/PVE/QemuServer/ImportDisk.pm
index 30d56ba..75b5e0d 100755
--- a/PVE/QemuServer/ImportDisk.pm
+++ b/PVE/QemuServer/ImportDisk.pm
@@ -70,7 +70,8 @@ sub do_import {
 	    local $SIG{PIPE} = sub { die "interrupted by signal $!\n"; };
 
 	PVE::Storage::activate_volumes($storecfg, [$dst_volid]);
-	PVE::QemuServer::qemu_img_convert($src_path, $dst_volid, $src_size, undef, $zeroinit);
+	PVE::QemuServer::qemu_img_convert(
+	    $src_path, $dst_volid, $src_size, { 'is-zero-initialized' => $zeroinit });
 	PVE::Storage::deactivate_volumes($storecfg, [$dst_volid]);
 	PVE::QemuConfig->lock_config($vmid, $create_drive) if !$params->{'skip-config-update'};
     };
diff --git a/test/run_qemu_img_convert_tests.pl b/test/run_qemu_img_convert_tests.pl
index 29c188d..652e61f 100755
--- a/test/run_qemu_img_convert_tests.pl
+++ b/test/run_qemu_img_convert_tests.pl
@@ -55,7 +55,7 @@ my $storage_config = {
 my $tests = [
     {
 	name => 'qcow2raw',
-	parameters => [ "local:$vmid/vm-$vmid-disk-0.qcow2", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 0, undef ],
+	parameters => [ "local:$vmid/vm-$vmid-disk-0.qcow2", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "qcow2", "-O", "raw",
 	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.qcow2", "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw"
@@ -63,7 +63,7 @@ my $tests = [
     },
     {
 	name => "raw2qcow2",
-	parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "local:$vmid/vm-$vmid-disk-0.qcow2", 1024*10, undef, 0, undef ],
+	parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "local:$vmid/vm-$vmid-disk-0.qcow2", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "qcow2",
 	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw", "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.qcow2"
@@ -71,7 +71,7 @@ my $tests = [
     },
     {
 	name => "local2rbd",
-	parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "rbd-store:vm-$vmid-disk-0", 1024*10, undef, 0, undef ],
+	parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "rbd-store:vm-$vmid-disk-0", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "raw",
 	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw", "rbd:cpool/vm-$vmid-disk-0:mon_host=127.0.0.42;127.0.0.21;[\\:\\:1]:auth_supported=none"
@@ -79,7 +79,7 @@ my $tests = [
     },
     {
 	name => "rbd2local",
-	parameters => [ "rbd-store:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 0, undef ],
+	parameters => [ "rbd-store:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "raw",
 	    "rbd:cpool/vm-$vmid-disk-0:mon_host=127.0.0.42;127.0.0.21;[\\:\\:1]:auth_supported=none", "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw"
@@ -87,7 +87,7 @@ my $tests = [
     },
     {
 	name => "local2zos",
-	parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "zfs-over-iscsi:vm-$vmid-disk-0", 1024*10, undef, 0, undef ],
+	parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "zfs-over-iscsi:vm-$vmid-disk-0", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "--target-image-opts",
 	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
@@ -96,7 +96,7 @@ my $tests = [
     },
     {
 	name => "zos2local",
-	parameters => [ "zfs-over-iscsi:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 0, undef ],
+	parameters => [ "zfs-over-iscsi:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "--image-opts", "-O", "raw",
 	    "file.driver=iscsi,file.transport=tcp,file.initiator-name=foobar,file.portal=127.0.0.1,file.target=iqn.2019-10.org.test:foobar,file.lun=1,driver=raw",
@@ -105,7 +105,7 @@ my $tests = [
     },
     {
 	name => "zos2rbd",
-	parameters => [ "zfs-over-iscsi:vm-$vmid-disk-0", "rbd-store:vm-$vmid-disk-0", 1024*10, undef, 0, undef ],
+	parameters => [ "zfs-over-iscsi:vm-$vmid-disk-0", "rbd-store:vm-$vmid-disk-0", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "--image-opts", "-O", "raw",
 	    "file.driver=iscsi,file.transport=tcp,file.initiator-name=foobar,file.portal=127.0.0.1,file.target=iqn.2019-10.org.test:foobar,file.lun=1,driver=raw",
@@ -114,7 +114,7 @@ my $tests = [
     },
     {
 	name => "rbd2zos",
-	parameters => [ "rbd-store:vm-$vmid-disk-0", "zfs-over-iscsi:vm-$vmid-disk-0", 1024*10, undef, 0, undef ],
+	parameters => [ "rbd-store:vm-$vmid-disk-0", "zfs-over-iscsi:vm-$vmid-disk-0", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "--target-image-opts",
 	    "rbd:cpool/vm-$vmid-disk-0:mon_host=127.0.0.42;127.0.0.21;[\\:\\:1]:auth_supported=none",
@@ -123,7 +123,7 @@ my $tests = [
     },
     {
 	name => "local2lvmthin",
-	parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "local-lvm:vm-$vmid-disk-0", 1024*10, undef, 0, undef ],
+	parameters => [ "local:$vmid/vm-$vmid-disk-0.raw", "local-lvm:vm-$vmid-disk-0", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "raw",
 	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.raw",
@@ -132,7 +132,7 @@ my $tests = [
     },
     {
 	name => "lvmthin2local",
-	parameters => [ "local-lvm:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 0, undef ],
+	parameters => [ "local-lvm:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "raw",
 	    "/dev/pve/vm-$vmid-disk-0",
@@ -141,7 +141,12 @@ my $tests = [
     },
     {
 	name => "zeroinit",
-	parameters => [ "local-lvm:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 1, undef ],
+	parameters => [
+	    "local-lvm:vm-$vmid-disk-0",
+	    "local:$vmid/vm-$vmid-disk-0.raw",
+	    1024*10,
+	    { 'is-zero-initialized' => 1 },
+	],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "raw",
 	    "/dev/pve/vm-$vmid-disk-0",
@@ -150,12 +155,12 @@ my $tests = [
     },
     {
 	name => "notexistingstorage",
-	parameters => [ "local-lvm:vm-$vmid-disk-0", "not-existing:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 1, undef ],
+	parameters => [ "local-lvm:vm-$vmid-disk-0", "not-existing:$vmid/vm-$vmid-disk-0.raw", 1024*10 ],
 	expected => "storage 'not-existing' does not exist\n",
     },
     {
 	name => "vmdkfile",
-	parameters => [ "./test.vmdk", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 0, undef ],
+	parameters => [ "./test.vmdk", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "vmdk", "-O", "raw",
 	    "./test.vmdk",
@@ -164,12 +169,12 @@ my $tests = [
     },
     {
 	name => "notexistingfile",
-	parameters => [ "/foo/bar", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 0, undef ],
+	parameters => [ "/foo/bar", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10 ],
 	expected => "source '/foo/bar' is not a valid volid nor path for qemu-img convert\n",
     },
     {
 	name => "efidisk",
-	parameters => [ "/usr/share/kvm/OVMF_VARS-pure-efi.fd", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 0, undef ],
+	parameters => [ "/usr/share/kvm/OVMF_VARS-pure-efi.fd", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-O", "raw",
 	    "/usr/share/kvm/OVMF_VARS-pure-efi.fd",
@@ -178,7 +183,7 @@ my $tests = [
     },
     {
 	name => "efi2zos",
-	parameters => [ "/usr/share/kvm/OVMF_VARS-pure-efi.fd", "zfs-over-iscsi:vm-$vmid-disk-0", 1024*10, undef, 0, undef ],
+	parameters => [ "/usr/share/kvm/OVMF_VARS-pure-efi.fd", "zfs-over-iscsi:vm-$vmid-disk-0", 1024*10 ],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "--target-image-opts",
 	    "/usr/share/kvm/OVMF_VARS-pure-efi.fd",
@@ -187,7 +192,12 @@ my $tests = [
     },
     {
 	name => "bwlimit",
-	parameters => [ "local-lvm:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, undef, 0, 1024 ],
+	parameters => [
+	    "local-lvm:vm-$vmid-disk-0",
+	    "local:$vmid/vm-$vmid-disk-0.raw",
+	    1024*10,
+	    { bwlimit => 1024 },
+	],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-r", "1024K", "-f", "raw", "-O", "raw",
 	    "/dev/pve/vm-$vmid-disk-0",
@@ -196,7 +206,12 @@ my $tests = [
     },
     {
 	name => "lvmsnapshot",
-	parameters => [ "local-lvm:vm-$vmid-disk-0", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, 'foo', 0, undef ],
+	parameters => [
+	    "local-lvm:vm-$vmid-disk-0",
+	    "local:$vmid/vm-$vmid-disk-0.raw",
+	    1024*10,
+	    { snapname => 'foo' },
+	],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-f", "raw", "-O", "raw",
 	    "/dev/pve/snap_vm-$vmid-disk-0_foo",
@@ -205,7 +220,12 @@ my $tests = [
     },
     {
 	name => "qcow2snapshot",
-	parameters => [ "local:$vmid/vm-$vmid-disk-0.qcow2", "local:$vmid/vm-$vmid-disk-0.raw", 1024*10, 'snap', 0, undef ],
+	parameters => [
+	    "local:$vmid/vm-$vmid-disk-0.qcow2",
+	    "local:$vmid/vm-$vmid-disk-0.raw",
+	    1024*10,
+	    { snapname => 'snap' },
+	],
 	expected => [
 	    "/usr/bin/qemu-img", "convert", "-p", "-n", "-l", "snapshot.name=snap", "-f", "qcow2", "-O", "raw",
 	    "/var/lib/vz/images/$vmid/vm-$vmid-disk-0.qcow2",
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 07/11] image convert: allow caller to specify the format of the source path
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (23 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 06/11] image convert: collect options in hash argument Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 08/11] backup: implement restore for external providers Wolfgang Bumiller
                   ` (13 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

In preparation for the restore API for backup providers, which doesn't
want detection based on the file extension, but always requires raw.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
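
For illustration, a caller restoring from a plain path without a usable
file extension could pass (sketch with a made-up path):

    qemu_img_convert('/run/backup-provider/disk-0', $dst_volid, $size, {
        'source-path-format' => 'raw',
    });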

 PVE/QemuServer.pm | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5c6cb94..93f985b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7791,6 +7791,8 @@ sub convert_iscsi_path {
 # bwlimit - The bandwidth limit in KiB/s.
 # is-zero-initialized - If the destination image is zero-initialized.
 # snapname - Use this snapshot of the source image.
+# source-path-format - Indicate the format of the source when the source is a path. For PVE-managed
+# volumes, the format from the storage layer is always used.
 sub qemu_img_convert {
     my ($src_volid, $dst_volid, $size, $opts) = @_;
 
@@ -7816,7 +7818,9 @@ sub qemu_img_convert {
 	$cachemode = 'none' if $src_scfg->{type} eq 'zfspool';
     } elsif (-f $src_volid || -b $src_volid) {
 	$src_path = $src_volid;
-	if ($src_path =~ m/\.($PVE::QemuServer::Drive::QEMU_FORMAT_RE)$/) {
+	if ($opts->{'source-path-format'}) {
+	    $src_format = $opts->{'source-path-format'};
+	} elsif ($src_path =~ m/\.($PVE::QemuServer::Drive::QEMU_FORMAT_RE)$/) {
 	    $src_format = $1;
 	}
     }
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 08/11] backup: implement restore for external providers
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (24 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 07/11] image convert: allow caller to specify the format of the source path Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 09/11] backup: future-proof checks for QEMU feature support Wolfgang Bumiller
                   ` (12 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

First, the provider is asked about what restore mechanism to use.
Currently, only 'qemu-img' is possible. Then the configuration files
are restored, the provider gives information about volumes contained
in the backup and finally the volumes are restored via
'qemu-img convert'.

The code for the restore_external_archive() function was copied and
adapted from the restore_proxmox_backup_archive() function. Together
with restore_vma_archive() it seems sensible to extract the common
parts and use a dedicated module for restore code.

The parse_restore_archive() helper was renamed, because it's not just
parsing.

While the format for the source can currently only be raw, do an
untrusted check on the source for future-proofing. It still serves as a
basic sanity check for now.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
[WB: fix 'bwlimit' typo]
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
Changes in v8: described in the trailers above ^
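
For orientation, the provider-side call sequence in the new
restore_external_archive() boils down to this (simplified sketch using
the names from the patch below):

    my ($mechanism, $vmtype) = $backup_provider->restore_get_mechanism($volname);
    my $devinfo = $backup_provider->restore_vm_init($volname);

    for my $d (values $virtdev_hash->%*) {
        my $info = $backup_provider->restore_vm_volume_init($volname, $d->{devname}, {});
        # qemu-img convert from $info->{'qemu-img-path'} into the allocated volume
        $backup_provider->restore_vm_volume_cleanup($volname, $d->{devname}, {});
    }

    $backup_provider->restore_vm_cleanup($volname);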


 PVE/API2/Qemu.pm  |  30 +++++++++-
 PVE/QemuServer.pm | 149 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 176 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 156b1c7..6c7c1d0 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -927,7 +927,7 @@ __PACKAGE__->register_method({
 	return $res;
     }});
 
-my $parse_restore_archive = sub {
+my $classify_restore_archive = sub {
     my ($storecfg, $archive) = @_;
 
     my ($archive_storeid, $archive_volname) = PVE::Storage::parse_volume_id($archive, 1);
@@ -941,6 +941,22 @@ my $parse_restore_archive = sub {
 	    $res->{type} = 'pbs';
 	    return $res;
 	}
+	if (PVE::Storage::storage_has_feature($storecfg, $archive_storeid, 'backup-provider')) {
+	    my $log_function = sub {
+		my ($log_level, $message) = @_;
+		my $prefix = $log_level eq 'err' ? 'ERROR' : uc($log_level);
+		print "$prefix: $message\n";
+	    };
+	    my $backup_provider = PVE::Storage::new_backup_provider(
+		$storecfg,
+		$archive_storeid,
+		$log_function,
+	    );
+
+	    $res->{type} = 'external';
+	    $res->{'backup-provider'} = $backup_provider;
+	    return $res;
+	}
     }
     my $path = PVE::Storage::abs_filesystem_path($storecfg, $archive);
     $res->{type} = 'file';
@@ -1101,7 +1117,7 @@ __PACKAGE__->register_method({
 		    'backup',
 		);
 
-		$archive = $parse_restore_archive->($storecfg, $archive);
+		$archive = $classify_restore_archive->($storecfg, $archive);
 	    }
 	}
 
@@ -1160,7 +1176,15 @@ __PACKAGE__->register_method({
 			PVE::QemuServer::check_restore_permissions($rpcenv, $authuser, $merged);
 		    }
 		}
-		if ($archive->{type} eq 'file' || $archive->{type} eq 'pipe') {
+		if (my $backup_provider = $archive->{'backup-provider'}) {
+		    PVE::QemuServer::restore_external_archive(
+			$backup_provider,
+			$archive->{volid},
+			$vmid,
+			$authuser,
+			$restore_options,
+		    );
+		} elsif ($archive->{type} eq 'file' || $archive->{type} eq 'pipe') {
 		    die "live-restore is only compatible with backup images from a Proxmox Backup Server\n"
 			if $live_restore;
 		    PVE::QemuServer::restore_file_archive($archive->{path} // '-', $vmid, $authuser, $restore_options);
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 93f985b..491da44 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7175,6 +7175,155 @@ sub restore_proxmox_backup_archive {
     }
 }
 
+sub restore_external_archive {
+    my ($backup_provider, $archive, $vmid, $user, $options) = @_;
+
+    die "live restore from backup provider is not implemented\n" if $options->{live};
+
+    my $storecfg = PVE::Storage::config();
+
+    my ($storeid, $volname) = PVE::Storage::parse_volume_id($archive);
+    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
+
+    my $tmpdir = "/run/qemu-server/vzdumptmp$$";
+    rmtree($tmpdir);
+    mkpath($tmpdir) or die "unable to create $tmpdir\n";
+
+    my $conffile = PVE::QemuConfig->config_file($vmid);
+    # disable interrupts (always do cleanups)
+    local $SIG{INT} =
+	local $SIG{TERM} =
+	local $SIG{QUIT} =
+	local $SIG{HUP} = sub { print STDERR "got interrupt - ignored\n"; };
+
+    # Note: $oldconf is undef if VM does not exist
+    my $cfs_path = PVE::QemuConfig->cfs_config_path($vmid);
+    my $oldconf = PVE::Cluster::cfs_read_file($cfs_path);
+    my $new_conf_raw = '';
+
+    my $rpcenv = PVE::RPCEnvironment::get();
+    my $devinfo = {}; # info about drives included in backup
+    my $virtdev_hash = {}; # info about allocated drives
+
+    eval {
+	# enable interrupts
+	local $SIG{INT} =
+	    local $SIG{TERM} =
+	    local $SIG{QUIT} =
+	    local $SIG{HUP} =
+	    local $SIG{PIPE} = sub { die "interrupted by signal\n"; };
+
+	my $cfgfn = "$tmpdir/qemu-server.conf";
+	my $firewall_config_fn = "$tmpdir/fw.conf";
+
+	my $cmd = "restore";
+
+	my ($mechanism, $vmtype) =
+	    $backup_provider->restore_get_mechanism($volname);
+	die "mechanism '$mechanism' requested by backup provider is not supported for VMs\n"
+	    if $mechanism ne 'qemu-img';
+	die "cannot restore non-VM guest of type '$vmtype'\n" if $vmtype ne 'qemu';
+
+	$devinfo = $backup_provider->restore_vm_init($volname);
+
+	my $data = $backup_provider->archive_get_guest_config($volname)
+	    or die "backup provider failed to extract guest configuration\n";
+	PVE::Tools::file_set_contents($cfgfn, $data);
+
+	if ($data = $backup_provider->archive_get_firewall_config($volname)) {
+	    PVE::Tools::file_set_contents($firewall_config_fn, $data);
+	    my $pve_firewall_dir = '/etc/pve/firewall';
+	    mkdir $pve_firewall_dir; # make sure the dir exists
+	    PVE::Tools::file_copy($firewall_config_fn, "${pve_firewall_dir}/$vmid.fw");
+	}
+
+	my $fh = IO::File->new($cfgfn, "r") or die "unable to read qemu-server.conf - $!\n";
+
+	$virtdev_hash = $parse_backup_hints->($rpcenv, $user, $storecfg, $fh, $devinfo, $options);
+
+	# create empty/temp config
+	PVE::Tools::file_set_contents($conffile, "memory: 128\nlock: create");
+
+	$restore_cleanup_oldconf->($storecfg, $vmid, $oldconf, $virtdev_hash) if $oldconf;
+
+	# allocate volumes
+	my $map = $restore_allocate_devices->($storecfg, $virtdev_hash, $vmid);
+
+	for my $virtdev (sort keys $virtdev_hash->%*) {
+	    my $d = $virtdev_hash->{$virtdev};
+	    next if $d->{is_cloudinit}; # no need to restore cloudinit
+
+	    my $sparseinit = PVE::Storage::volume_has_feature($storecfg, 'sparseinit', $d->{volid});
+	    my $source_format = 'raw';
+
+	    my $info = $backup_provider->restore_vm_volume_init($volname, $d->{devname}, {});
+	    my $source_path = $info->{'qemu-img-path'}
+		or die "did not get source image path from backup provider\n";
+
+	    print "importing drive '$d->{devname}' from '$source_path'\n";
+
+	    # safety check for untrusted source image
+	    PVE::Storage::file_size_info($source_path, undef, $source_format, 1);
+
+	    eval {
+		my $convert_opts = {
+		    bwlimit => $options->{bwlimit},
+		    'is-zero-initialized' => $sparseinit,
+		    'source-path-format' => $source_format,
+		};
+		qemu_img_convert($source_path, $d->{volid}, $d->{size}, $convert_opts);
+	    };
+	    my $err = $@;
+	    eval { $backup_provider->restore_vm_volume_cleanup($volname, $d->{devname}, {}); };
+	    if (my $cleanup_err = $@) {
+		die $cleanup_err if !$err;
+		warn $cleanup_err;
+	    }
+	    die $err if $err
+	}
+
+	$fh->seek(0, 0) || die "seek failed - $!\n";
+
+	my $cookie = { netcount => 0 };
+	while (defined(my $line = <$fh>)) {
+	    $new_conf_raw .= restore_update_config_line(
+		$cookie,
+		$map,
+		$line,
+		$options->{unique},
+	    );
+	}
+
+	$fh->close();
+    };
+    my $err = $@;
+
+    eval { $backup_provider->restore_vm_cleanup($volname); };
+    warn "backup provider cleanup after restore failed - $@" if $@;
+
+    if ($err) {
+	$restore_deactivate_volumes->($storecfg, $virtdev_hash);
+    }
+
+    rmtree($tmpdir);
+
+    if ($err) {
+	$restore_destroy_volumes->($storecfg, $virtdev_hash);
+	die $err;
+    }
+
+    my $new_conf = restore_merge_config($conffile, $new_conf_raw, $options->{override_conf});
+    check_restore_permissions($rpcenv, $user, $new_conf);
+    PVE::QemuConfig->write_config($vmid, $new_conf);
+
+    eval { rescan($vmid, 1); };
+    warn $@ if $@;
+
+    PVE::AccessControl::add_vm_to_pool($vmid, $options->{pool}) if $options->{pool};
+
+    return;
+}
+
 sub pbs_live_restore {
     my ($vmid, $conf, $storecfg, $restored_disks, $opts) = @_;
 
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 09/11] backup: future-proof checks for QEMU feature support
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (25 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 08/11] backup: implement restore for external providers Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 10/11] backup: support 'missing-recreated' bitmap action Wolfgang Bumiller
                   ` (11 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

The features returned by the 'query-proxmox-support' QMP command are
booleans, so just checking for definedness is not enough in principle.
In practice, a feature is currently always true if defined. Still, fix
the checks in case the need to disable support for a feature ever arises
in the future, and to avoid propagating the pattern further.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
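
A minimal illustration of why definedness is not enough for decoded QMP
booleans:

    use JSON::PP;

    my $qemu_support = { 'backup-fleecing' => JSON::PP::false };

    # defined, but false:
    print "defined\n" if defined($qemu_support->{'backup-fleecing'}); # prints
    print "enabled\n" if $qemu_support->{'backup-fleecing'};          # does not print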

 PVE/VZDump/QemuServer.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index a4a5627..676dad2 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -639,7 +639,7 @@ my sub check_and_prepare_fleecing {
 
     my $use_fleecing = $fleecing_opts && $fleecing_opts->{enabled} && !$is_template;
 
-    if ($use_fleecing && !defined($qemu_support->{'backup-fleecing'})) {
+    if ($use_fleecing && !$qemu_support->{'backup-fleecing'}) {
 	$self->log(
 	    'warn',
 	    "running QEMU version does not support backup fleecing - continuing without",
@@ -739,7 +739,7 @@ sub archive_pbs {
 
 	# pve-qemu supports it since 5.2.0-1 (PVE 6.4), so safe to die since PVE 8
 	die "master key configured but running QEMU version does not support master keys\n"
-	    if !defined($qemu_support->{'pbs-masterkey'}) && defined($master_keyfile);
+	    if !$qemu_support->{'pbs-masterkey'} && defined($master_keyfile);
 
 	$attach_tpmstate_drive->($self, $task, $vmid);
 
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 10/11] backup: support 'missing-recreated' bitmap action
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (26 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 09/11] backup: future-proof checks for QEMU feature support Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 11/11] backup: bitmap action to human: lie about TPM state Wolfgang Bumiller
                   ` (10 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

A new 'missing-recreated' action was added on the QEMU side.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 PVE/VZDump/QemuServer.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 676dad2..894e337 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -316,6 +316,8 @@ my $bitmap_action_to_human = sub {
 	}
     } elsif ($action eq "invalid") {
 	return "existing bitmap was invalid and has been cleared";
+    } elsif ($action eq "missing-recreated") {
+	return "expected bitmap was missing and has been recreated";
     } else {
 	return "unknown";
     }
@@ -1372,6 +1374,7 @@ my sub backup_access_to_volume_info {
 	'new' => 'new',
 	'used' => 'reuse',
 	'invalid' => 'new',
+	'missing-recreated' => 'new',
     };
 
     my $volumes = {};
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 qemu-server 11/11] backup: bitmap action to human: lie about TPM state
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (27 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 10/11] backup: support 'missing-recreated' bitmap action Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 1/7] add LXC::Namespaces module Wolfgang Bumiller
                   ` (9 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

The TPM state drive is newly attached each time, so it is fully
expected that a bitmap from last time would be missing.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 PVE/VZDump/QemuServer.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index 894e337..5cfe841 100644
--- a/PVE/VZDump/QemuServer.pm
+++ b/PVE/VZDump/QemuServer.pm
@@ -317,6 +317,8 @@ my $bitmap_action_to_human = sub {
     } elsif ($action eq "invalid") {
 	return "existing bitmap was invalid and has been cleared";
     } elsif ($action eq "missing-recreated") {
+	# Lie about the TPM state, because it is newly attached each time.
+	return "created new" if $info->{drive} eq 'drive-tpmstate0-backup';
 	return "expected bitmap was missing and has been recreated";
     } else {
 	return "unknown";
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 container 1/7] add LXC::Namespaces module
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (28 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 11/11] backup: bitmap action to human: lie about TPM state Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 2/7] backup: implement backup for external providers Wolfgang Bumiller
                   ` (8 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

The module includes a run_in_userns() helper to run a Perl subroutine
in a user namespace.

The first use case is running the container backup subroutine for
external providers inside a user namespace. That allows them to see
the filesystem to back up from the container's perspective and also
improves security because of the isolation.

Heavily adapted from code by Wolfgang from the pve-buildpkg
repository.

[FE: add $idmap parameter, drop $aux_groups parameter
     use different fork helper
     use newuidmap and newgidmap binaries]

Originally-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
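
Hypothetical usage, with an ID map in the format returned by
PVE::LXC::parse_id_maps():

    my $id_map = [
        ['u', 0, 100000, 65536], # map container uids 0.. to host uids 100000..
        ['g', 0, 100000, 65536],
    ];

    PVE::LXC::Namespaces::run_in_userns(sub {
        system('id'); # runs as (mapped) root inside the new user namespace
    }, $id_map);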

 src/PVE/LXC/Makefile      |  1 +
 src/PVE/LXC/Namespaces.pm | 57 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)
 create mode 100644 src/PVE/LXC/Namespaces.pm

diff --git a/src/PVE/LXC/Makefile b/src/PVE/LXC/Makefile
index a190260..4b05e98 100644
--- a/src/PVE/LXC/Makefile
+++ b/src/PVE/LXC/Makefile
@@ -5,6 +5,7 @@ SOURCES= \
 	Create.pm \
 	Migrate.pm \
 	Monitor.pm \
+	Namespaces.pm \
 	Setup.pm \
 	Tools.pm
 
diff --git a/src/PVE/LXC/Namespaces.pm b/src/PVE/LXC/Namespaces.pm
new file mode 100644
index 0000000..bd39c07
--- /dev/null
+++ b/src/PVE/LXC/Namespaces.pm
@@ -0,0 +1,57 @@
+package PVE::LXC::Namespaces;
+
+use strict;
+use warnings;
+
+use Fcntl qw(O_WRONLY);
+use Socket;
+
+use PVE::Tools qw(CLONE_NEWNS CLONE_NEWUSER);
+
+my sub set_id_map($$) {
+    my ($pid, $id_map) = @_;
+
+    my @gid_args = ();
+    my @uid_args = ();
+
+    for my $map ($id_map->@*) {
+	my ($type, $ct, $host, $length) = $map->@*;
+
+	push @gid_args, $ct, $host, $length if $type eq 'g';
+	push @uid_args, $ct, $host, $length if $type eq 'u';
+    }
+
+    PVE::Tools::run_command(['newgidmap', $pid, @gid_args]) if scalar(@gid_args);
+    PVE::Tools::run_command(['newuidmap', $pid, @uid_args]) if scalar(@uid_args);
+}
+
+sub run_in_userns(&;$) {
+    my ($code, $id_map) = @_;
+    socketpair(my $sp, my $sc, AF_UNIX, SOCK_STREAM, PF_UNSPEC)
+	or die "socketpair: $!\n";
+    my $child = sub {
+	close($sp);
+	PVE::Tools::unshare(CLONE_NEWUSER|CLONE_NEWNS) or die "unshare(NEWUSER|NEWNS): $!\n";
+	syswrite($sc, "1\n") == 2 or die "write: $!\n";
+	shutdown($sc, 1);
+	my $two = <$sc>;
+	die "failed to sync with parent process\n" if $two ne "2\n";
+	close($sc);
+	$! = undef;
+	($(, $)) = (0, 0); die "$!\n" if $!;
+	($<, $>) = (0, 0); die "$!\n" if $!;
+	return $code->();
+    };
+    my $parent = sub {
+	my ($pid) = @_;
+	close($sc);
+	my $one = <$sp>;
+	die "failed to sync with userprocess\n" if $one ne "1\n";
+	set_id_map($pid, $id_map);
+	syswrite($sp, "2\n") == 2 or die "write: $!\n";
+	close($sp);
+    };
+    PVE::Tools::run_fork($child, { afterfork => $parent });
+}
+
+1;
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 container 2/7] backup: implement backup for external providers
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (29 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 1/7] add LXC::Namespaces module Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 3/7] backup: implement restore " Wolfgang Bumiller
                   ` (7 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

The filesystem structure is made available as a directory in a
consistent manner (with details depending on the vzdump backup mode)
just like for regular backup via tar.

The backup_container() method of the backup provider is executed in
a user namespace with the container's ID mapping applied. This allows
the backup provider to see the container's filesystem from the
container's perspective.

The 'prepare' phase of the backup hook is executed right before that and
allows the backup provider to prepare for the (usually) unprivileged
execution context in the user namespace.

The backup provider needs to back up the guest and firewall
configuration and the filesystem structure of the container, honoring
file exclusions and the bandwidth limit.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
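
The $info hash handed to the provider hooks has roughly the following
shape (taken from the patch below; 'firewall-config' and
'bandwidth-limit' are only set when applicable):

    my $info = {
        directory => $snapdir,                 # consistent filesystem structure
        sources => [@sources],
        'backup-user-id' => $task->{root_uid},
        'firewall-config' => $firewall_config, # raw contents of pct.fw
        'bandwidth-limit' => $opts->{bwlimit} * 1024, # KiB/s -> bytes per second
    };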

 src/PVE/VZDump/LXC.pm | 40 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/src/PVE/VZDump/LXC.pm b/src/PVE/VZDump/LXC.pm
index 9029387..e68ba74 100644
--- a/src/PVE/VZDump/LXC.pm
+++ b/src/PVE/VZDump/LXC.pm
@@ -11,6 +11,7 @@ use PVE::Cluster qw(cfs_read_file);
 use PVE::GuestHelpers;
 use PVE::INotify;
 use PVE::LXC::Config;
+use PVE::LXC::Namespaces;
 use PVE::LXC;
 use PVE::Storage;
 use PVE::Tools;
@@ -124,6 +125,7 @@ sub prepare {
 
     my ($id_map, $root_uid, $root_gid) = PVE::LXC::parse_id_maps($conf);
     $task->{userns_cmd} = PVE::LXC::userns_command($id_map);
+    $task->{id_map} = $id_map;
     $task->{root_uid} = $root_uid;
     $task->{root_gid} = $root_gid;
 
@@ -373,7 +375,43 @@ sub archive {
     my $userns_cmd = $task->{userns_cmd};
     my $findexcl = $self->{vzdump}->{findexcl};
 
-    if ($self->{vzdump}->{opts}->{pbs}) {
+    if (my $backup_provider = $self->{vzdump}->{'backup-provider'}) {
+	$self->loginfo("starting external backup via " . $backup_provider->provider_name());
+
+	if (!scalar($task->{id_map}->@*) || $task->{root_uid} == 0 || $task->{root_gid} == 0) {
+	    $self->log("warn", "external backup of privileged container can only be restored as"
+		." unprivileged which might not work in all cases");
+	}
+
+	my $mechanism = $backup_provider->backup_get_mechanism($vmid, 'lxc');
+	die "mechanism '$mechanism' requested by backup provider is not supported for containers\n"
+	    if $mechanism ne 'directory';
+
+	$self->loginfo("using backup mechanism '$mechanism'");
+
+	my $guest_config = PVE::Tools::file_get_contents("$tmpdir/etc/vzdump/pct.conf");
+	my $firewall_file = "$tmpdir/etc/vzdump/pct.fw";
+
+	my $info = {
+	    directory => $snapdir,
+	    sources => [@sources],
+	    'backup-user-id' => $task->{root_uid},
+	};
+	$info->{'firewall-config'} = PVE::Tools::file_get_contents($firewall_file)
+	    if -e $firewall_file;
+	$info->{'bandwidth-limit'} = $opts->{bwlimit} * 1024 if $opts->{bwlimit};
+
+	$backup_provider->backup_container_prepare($vmid, $info);
+
+	if (scalar($task->{id_map}->@*)) {
+	    PVE::LXC::Namespaces::run_in_userns(
+		sub { $backup_provider->backup_container($vmid, $guest_config, $findexcl, $info); },
+		$task->{id_map},
+	    );
+	} else {
+	    $backup_provider->backup_container($vmid, $guest_config, $findexcl, $info);
+	}
+    } elsif ($self->{vzdump}->{opts}->{pbs}) {
 
 	my $param = [];
 	push @$param, "pct.conf:$tmpdir/etc/vzdump/pct.conf";
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 container 3/7] backup: implement restore for external providers
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (30 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 2/7] backup: implement backup for external providers Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 4/7] external restore: don't use 'one-file-system' tar flag when restoring from a directory Wolfgang Bumiller
                   ` (6 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

First, the provider is asked about what restore mechanism to use.
Currently, 'directory' and 'tar' are possible. The 'directory'
mechanism is for restoring from a directory containing the container's
full filesystem structure, which is restored by piping from a
privileged tar cf - to tar xf - in the associated user namespace. The
'tar' mechanism is for restoring a (potentially compressed) tar file
containing the container's full filesystem structure.

The new functions are copied and adapted from the existing ones for
PBS or tar, and it might be worth factoring out the common parts.

Restoring containers as privileged is prohibited, because archives
from an external provider are considered less trusted than those from
Proxmox VE storages. If this is ever allowed in the future, it would at
least be worth extracting the tar archive in a restricted context
(e.g. a user namespace with an ID-mapped mount or seccomp).

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
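
The 'directory' mechanism essentially becomes a pipe of two tar
processes (simplified from the patch below, common flags omitted):

    my $create_cmd  = ['tar', 'cpf', '-', "--directory=$directory", '.'];
    my $extract_cmd = [@$userns_cmd, 'tar', 'xpf', '-', '-C', $rootdir];

    # run_command() pipes the first command's stdout into the second
    PVE::Tools::run_command([$create_cmd, $extract_cmd]);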

 src/PVE/LXC/Create.pm | 149 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 149 insertions(+)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 8c8cb9a..c3c7640 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -7,6 +7,7 @@ use File::Path;
 use Fcntl;
 
 use PVE::RPCEnvironment;
+use PVE::RESTEnvironment qw(log_warn);
 use PVE::Storage::PBSPlugin;
 use PVE::Storage::Plugin;
 use PVE::Storage;
@@ -26,6 +27,24 @@ sub restore_archive {
 	if ($scfg->{type} eq 'pbs') {
 	    return restore_proxmox_backup_archive($storage_cfg, $archive, $rootdir, $conf, $no_unpack_error, $bwlimit);
 	}
+	if (PVE::Storage::storage_has_feature($storage_cfg, $storeid, 'backup-provider')) {
+	    my $log_function = sub {
+		my ($log_level, $message) = @_;
+		my $prefix = $log_level eq 'err' ? 'ERROR' : uc($log_level);
+		print "$prefix: $message\n";
+	    };
+	    my $backup_provider =
+		PVE::Storage::new_backup_provider($storage_cfg, $storeid, $log_function);
+	    return restore_external_archive(
+		$backup_provider,
+		$storeid,
+		$volname,
+		$rootdir,
+		$conf,
+		$no_unpack_error,
+		$bwlimit,
+	    );
+	}
     }
 
     $archive = PVE::Storage::abs_filesystem_path($storage_cfg, $archive) if $archive ne '-';
@@ -127,6 +146,62 @@ sub restore_tar_archive {
     die $err if $err && !$no_unpack_error;
 }
 
+sub restore_external_archive {
+    my ($backup_provider, $storeid, $volname, $rootdir, $conf, $no_unpack_error, $bwlimit) = @_;
+
+    die "refusing to restore privileged container backup from external source\n"
+	if !$conf->{unprivileged};
+
+    my ($mechanism, $vmtype) = $backup_provider->restore_get_mechanism($volname, $storeid);
+    die "cannot restore non-LXC guest of type '$vmtype'\n" if $vmtype ne 'lxc';
+
+    my $info = $backup_provider->restore_container_init($volname, $storeid, {});
+    eval {
+	if ($mechanism eq 'tar') {
+	    my $tar_path = $info->{'tar-path'}
+		or die "did not get path to tar file from backup provider\n";
+	    die "not a regular file '$tar_path'" if !-f $tar_path;
+	    restore_tar_archive($tar_path, $rootdir, $conf, $no_unpack_error, $bwlimit);
+	} elsif ($mechanism eq 'directory') {
+	    my $directory = $info->{'archive-directory'}
+		or die "did not get path to archive directory from backup provider\n";
+	    die "not a directory '$directory'" if !-d $directory;
+
+	    my $create_cmd = [
+		'tar',
+		'cpf',
+		'-',
+		@PVE::Storage::Plugin::COMMON_TAR_FLAGS,
+		"--directory=$directory",
+		'.',
+	    ];
+
+	    my $extract_cmd = restore_tar_archive_command($conf, undef, $rootdir, $bwlimit);
+
+	    my $cmd;
+	    # if there is a bandwidth limit, the command is already a nested array reference
+	    if (ref($extract_cmd) eq 'ARRAY' && ref($extract_cmd->[0]) eq 'ARRAY') {
+		$cmd = [$create_cmd, $extract_cmd->@*];
+	    } else {
+		$cmd = [$create_cmd, $extract_cmd];
+	    }
+
+	    eval { PVE::Tools::run_command($cmd); };
+	    die $@ if $@ && !$no_unpack_error;
+	} else {
+	    die "mechanism '$mechanism' requested by backup provider is not supported for LXCs\n";
+	}
+    };
+    my $err = $@;
+    eval { $backup_provider->restore_container_cleanup($volname, $storeid, {}); };
+    if (my $cleanup_err = $@) {
+	die $cleanup_err if !$err;
+	warn $cleanup_err;
+    }
+    die $err if $err;
+
+}
+
 sub recover_config {
     my ($storage_cfg, $volid, $vmid) = @_;
 
@@ -135,6 +210,8 @@ sub recover_config {
 	my $scfg = PVE::Storage::storage_check_enabled($storage_cfg, $storeid);
 	if ($scfg->{type} eq 'pbs') {
 	    return recover_config_from_proxmox_backup($storage_cfg, $volid, $vmid);
+	} elsif (PVE::Storage::storage_has_feature($storage_cfg, $storeid, 'backup-provider')) {
+	    return recover_config_from_external_backup($storage_cfg, $volid, $vmid);
 	}
     }
 
@@ -209,6 +286,26 @@ sub recover_config_from_tar {
     return wantarray ? ($conf, $mp_param) : $conf;
 }
 
+sub recover_config_from_external_backup {
+    my ($storage_cfg, $volid, $vmid) = @_;
+
+    $vmid //= 0;
+
+    my $raw = PVE::Storage::extract_vzdump_config($storage_cfg, $volid);
+
+    my $conf = PVE::LXC::Config::parse_pct_config("/lxc/${vmid}.conf" , $raw);
+
+    delete $conf->{snapshots};
+
+    my $mp_param = {};
+    PVE::LXC::Config->foreach_volume($conf, sub {
+	my ($ms, $mountpoint) = @_;
+	$mp_param->{$ms} = $conf->{$ms};
+    });
+
+    return wantarray ? ($conf, $mp_param) : $conf;
+}
+
 sub restore_configuration {
     my ($vmid, $storage_cfg, $archive, $rootdir, $conf, $restricted, $unique, $skip_fw) = @_;
 
@@ -218,6 +315,26 @@ sub restore_configuration {
 	if ($scfg->{type} eq 'pbs') {
 	    return restore_configuration_from_proxmox_backup($vmid, $storage_cfg, $archive, $rootdir, $conf, $restricted, $unique, $skip_fw);
 	}
+	if (PVE::Storage::storage_has_feature($storage_cfg, $storeid, 'backup-provider')) {
+	    my $log_function = sub {
+		my ($log_level, $message) = @_;
+		my $prefix = $log_level eq 'err' ? 'ERROR' : uc($log_level);
+		print "$prefix: $message\n";
+	    };
+	    my $backup_provider =
+		PVE::Storage::new_backup_provider($storage_cfg, $storeid, $log_function);
+	    return restore_configuration_from_external_backup(
+		$backup_provider,
+		$vmid,
+		$storage_cfg,
+		$archive,
+		$rootdir,
+		$conf,
+		$restricted,
+		$unique,
+		$skip_fw,
+	    );
+	}
     }
     restore_configuration_from_etc_vzdump($vmid, $rootdir, $conf, $restricted, $unique, $skip_fw);
 }
@@ -258,6 +375,38 @@ sub restore_configuration_from_proxmox_backup {
     }
 }
 
+sub restore_configuration_from_external_backup {
+    my ($backup_provider, $vmid, $storage_cfg, $archive, $rootdir, $conf, $restricted, $unique, $skip_fw) = @_;
+
+    my ($storeid, $volname) = PVE::Storage::parse_volume_id($archive);
+    my $scfg = PVE::Storage::storage_config($storage_cfg, $storeid);
+
+    my ($vtype, $name, undef, undef, undef, undef, $format) =
+	PVE::Storage::parse_volname($storage_cfg, $archive);
+
+    my $oldconf = recover_config_from_external_backup($storage_cfg, $archive, $vmid);
+
+    sanitize_and_merge_config($conf, $oldconf, $restricted, $unique);
+
+    my $firewall_config =
+	$backup_provider->archive_get_firewall_config($volname, $storeid);
+
+    if ($firewall_config) {
+	my $pve_firewall_dir = '/etc/pve/firewall';
+	my $pct_fwcfg_target = "${pve_firewall_dir}/${vmid}.fw";
+	if ($skip_fw) {
+	    warn "ignoring firewall config from backup archive, lacking API permission to modify firewall.\n";
+	    warn "old firewall configuration in '$pct_fwcfg_target' left in place!\n"
+		if -e $pct_fwcfg_target;
+	} else {
+	    mkdir $pve_firewall_dir; # make sure the directory exists
+	    PVE::Tools::file_set_contents($pct_fwcfg_target, $firewall_config);
+	}
+    }
+
+    return;
+}
+
 sub sanitize_and_merge_config {
     my ($conf, $oldconf, $restricted, $unique) = @_;
 
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 container 4/7] external restore: don't use 'one-file-system' tar flag when restoring from a directory
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (31 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 3/7] backup: implement restore " Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 5/7] create: factor out compression option helper Wolfgang Bumiller
                   ` (5 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

This gives backup providers more freedom, e.g. to mount backed-up mount
point volumes individually.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.

 src/PVE/LXC/Create.pm | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index c3c7640..d0cbb1e 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -167,11 +167,15 @@ sub restore_external_archive {
 		or die "did not get path to archive directory from backup provider\n";
 	    die "not a directory '$directory'" if !-d $directory;
 
+	    # Give backup provider more freedom, e.g. mount backed-up mount point volumes
+	    # individually.
+	    my @flags = grep { $_ ne '--one-file-system' } @PVE::Storage::Plugin::COMMON_TAR_FLAGS;
+
 	    my $create_cmd = [
 		'tar',
 		'cpf',
 		'-',
-		@PVE::Storage::Plugin::COMMON_TAR_FLAGS,
+		@flags,
 		"--directory=$directory",
 		'.',
 	    ];
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 container 5/7] create: factor out compression option helper
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (32 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 4/7] external restore: don't use 'one-file-system' tar flag when restoring from a directory Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 6/7] restore tar archive: check potentially untrusted archive Wolfgang Bumiller
                   ` (4 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

In preparation for reusing it to check potentially untrusted
archives.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
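
For example, the helper behaves like this (as implemented in the patch
below):

    tar_compression_option('ct.tar.zst'); # returns '--zstd'
    tar_compression_option('ct.tar');     # returns undef (uncompressed)
    tar_compression_option('ct.txt');     # dies: not a template archive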

 src/PVE/LXC/Create.pm | 51 +++++++++++++++++++++++++------------------
 1 file changed, 30 insertions(+), 21 deletions(-)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index d0cbb1e..43fc5fe 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -78,15 +78,38 @@ sub restore_proxmox_backup_archive {
 	$scfg, $storeid, $cmd, $param, userns_cmd => $userns_cmd);
 }
 
+my sub tar_compression_option {
+    my ($archive) = @_;
+
+    my %compression_map = (
+	'.gz'  => '-z',
+	'.bz2' => '-j',
+	'.xz'  => '-J',
+	'.lzo'  => '--lzop',
+	'.zst'  => '--zstd',
+    );
+    if ($archive =~ /\.tar(\.[^.]+)?$/) {
+	if (defined($1)) {
+	    die "unrecognized compression format: $1\n" if !defined($compression_map{$1});
+	    return $compression_map{$1};
+	}
+	return;
+    } else {
+	die "file does not look like a template archive: $archive\n";
+    }
+}
+
 my sub restore_tar_archive_command {
-    my ($conf, $opts, $rootdir, $bwlimit) = @_;
+    my ($conf, $compression_opt, $rootdir, $bwlimit) = @_;
 
     my ($id_map, $root_uid, $root_gid) = PVE::LXC::parse_id_maps($conf);
     my $userns_cmd = PVE::LXC::userns_command($id_map);
 
-    my $cmd = [@$userns_cmd, 'tar', 'xpf', '-', $opts->@*, '--totals',
-               @PVE::Storage::Plugin::COMMON_TAR_FLAGS,
-               '-C', $rootdir];
+    my $cmd = [@$userns_cmd, 'tar', 'xpf', '-'];
+    push $cmd->@*, $compression_opt if $compression_opt;
+    push $cmd->@*, '--totals';
+    push $cmd->@*, @PVE::Storage::Plugin::COMMON_TAR_FLAGS;
+    push $cmd->@*, '-C', $rootdir;
 
     # skip-old-files doesn't have anything to do with time (old/new), but is
     # simply -k (annoyingly also called --keep-old-files) without the 'treat
@@ -108,24 +131,10 @@ sub restore_tar_archive {
 
     my $archive_fh;
     my $tar_input = '<&STDIN';
-    my @compression_opt;
+    my $compression_opt;
     if ($archive ne '-') {
 	# GNU tar refuses to autodetect this... *sigh*
-	my %compression_map = (
-	    '.gz'  => '-z',
-	    '.bz2' => '-j',
-	    '.xz'  => '-J',
-	    '.lzo'  => '--lzop',
-	    '.zst'  => '--zstd',
-	);
-	if ($archive =~ /\.tar(\.[^.]+)?$/) {
-	    if (defined($1)) {
-		die "unrecognized compression format: $1\n" if !defined($compression_map{$1});
-		@compression_opt = $compression_map{$1};
-	    }
-	} else {
-	    die "file does not look like a template archive: $archive\n";
-	}
+	$compression_opt = tar_compression_option($archive);
 	sysopen($archive_fh, $archive, O_RDONLY)
 	    or die "failed to open '$archive': $!\n";
 	my $flags = $archive_fh->fcntl(Fcntl::F_GETFD(), 0);
@@ -133,7 +142,7 @@ sub restore_tar_archive {
 	$tar_input = '<&'.fileno($archive_fh);
     }
 
-    my $cmd = restore_tar_archive_command($conf, [@compression_opt], $rootdir, $bwlimit);
+    my $cmd = restore_tar_archive_command($conf, $compression_opt, $rootdir, $bwlimit);
 
     if ($archive eq '-') {
 	print "extracting archive from STDIN\n";
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 container 6/7] restore tar archive: check potentially untrusted archive
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (33 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 5/7] create: factor out compression option helper Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 7/7] api: add early check against restoring privileged container from external source Wolfgang Bumiller
                   ` (3 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

'tar' itself already protects against '..' in component names and
strips absolute member names when extracting (if not used with the
--absolute-names option) and in general seems sane for extracting.
Additionally, the extraction already happens in the user namespace
associated with the container. So for now, start out with some basic
sanity checks. The helper can still be extended with more checks.

Checks:

* list files in archive - will already catch many corrupted/bogus
  archives.

* check that there are at least 10 members - should also catch
  archives not actually containing a container root filesystem or
  structural issues early.

* check that /sbin directory or link exists in archive - ideally the
  check would be for /sbin/init, but this cannot be done efficiently
  before extraction, because it would require to keep track of the
  whole archive to be able to follow symlinks.

* abort if there is a multi-volume member in the archive - cheap and
  is never expected.

Checks that were considered, but not (yet) added:

* abort when a file has an unrealistically large size - while this could
  help to detect certain kinds of bogus archives, there can be valid
  use cases for extremely large sparse files, so it's not clear what
  a good limit would be (1 EiB maybe?). Also, an attacker could just
  adapt to such a limit by creating multiple files, and the actual
  extraction is already limited by the size of the allocated container
  volume.

* check that /sbin/init exists after extracting - cannot be done
  efficiently before extraction, because it would require to keep
  track of the whole archive to be able to follow symlinks. During
  setup there already is detection of /etc/os-release, so issues with
  the structure will already be noticed. Adding a hard fail for
  untrusted archives would require either passing that information to
  the setup phase or extracting the protected_call method from there
  into a helper.

* adding 'restrict' to the (common) tar flags - the tar manual (not
  the man page) documents: "Disable use of some potentially harmful
  'tar' options.  Currently this option disables shell invocation from
  multi-volume menu.". The flag was introduced in 2005 and this is
  still the only thing it is used for. Trying to restore a
  multi-volume archive already fails without giving multiple '--file'
  arguments and '--multi-volume', so don't bother adding the flag.

* check format of tar file - would require yet another invocation of
  the decompressor and there seems to be no built-in way to just
  display the format with 'tar'. The 'file' program could be used, but
  it seems to not make a distinction between old GNU and GNU and old
  POSIX and POSIX formats, with the old ones being candidates to
  prohibit. So that would leave just detecting the old 'v7' format.

Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
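
For reference, lines as produced by `tar -tvf ... --numeric-owner`,
which the parser in the patch below has to handle (illustrative
examples):

    drwxr-xr-x 0/0               0 2025-04-03 12:30 ./sbin/
    lrwxrwxrwx 0/0               0 2025-04-03 12:30 ./sbin -> usr/sbin
    crw-rw-rw- 0/0             1,3 2025-04-03 12:30 ./dev/null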

 src/PVE/LXC/Create.pm | 75 ++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 71 insertions(+), 4 deletions(-)

diff --git a/src/PVE/LXC/Create.pm b/src/PVE/LXC/Create.pm
index 43fc5fe..53c584b 100644
--- a/src/PVE/LXC/Create.pm
+++ b/src/PVE/LXC/Create.pm
@@ -99,12 +99,73 @@ my sub tar_compression_option {
     }
 }
 
+# Basic checks trying to detect issues with a potentially untrusted or bogus tar archive.
+# Just listing the files is already a good check against corruption.
+# 'tar' itself already protects against '..' in component names and strips absolute member names
+# when extracting, so no need to check for those here.
+my sub check_tar_archive {
+    my ($archive) = @_;
+
+    print "checking archive..\n";
+
+    # To resolve links to get to 'sbin/init' would mean keeping track of everything in the archive,
+    # because the target might be ordered first. Check only that 'sbin' exists here.
+    my $found_sbin;
+
+    # Just to detect bogus archives, any valid container filesystem should have more than this.
+    my $required_members = 10;
+    my $member_count = 0;
+
+    my $check_file_list = sub {
+	my ($line) = @_;
+
+	$member_count++;
+
+	# Not always just a single number, e.g. for character devices.
+	my $size_re = qr/\d+(?:,\d+)?/;
+
+	# The date is in ISO 8601 format. The last part contains the potentially quoted file name,
+	# potentially followed by some additional info (e.g. where a link points to).
+	my ($type, $perms, $uid, $gid, $size, $date, $time, $file_info) =
+	    $line =~ m!^([a-zA-Z\-])(\S+)\s+(\d+)/(\d+)\s+($size_re)\s+(\S+)\s+(\S+)\s+(.*)$!;
+	if (!defined($type)) {
+	    print "check tar: unable to parse line: $line\n";
+	    return;
+	}
+
+	die "found multi-volume member in archive\n" if $type eq 'M';
+
+	if (!$found_sbin && (
+	    ($file_info =~ m!^(?:\./)?sbin/$! && $type eq 'd')
+	    || ($file_info =~ m!^(?:\./)?sbin ->! && $type eq 'l')
+	    || ($file_info =~ m!^(?:\./)?sbin link to! && $type eq 'h')
+	)) {
+	    $found_sbin = 1;
+	}
+
+    };
+
+    my $compression_opt = tar_compression_option($archive);
+
+    my $cmd = ['tar', '-tvf', $archive];
+    push $cmd->@*, $compression_opt if $compression_opt;
+    push $cmd->@*, '--numeric-owner';
+
+    PVE::Tools::run_command($cmd, outfunc => $check_file_list);
+
+    die "no 'sbin' directory (or link) found in archive '$archive'\n" if !$found_sbin;
+    die "less than 10 members in archive '$archive'\n" if $member_count < $required_members;
+}
+
 my sub restore_tar_archive_command {
-    my ($conf, $compression_opt, $rootdir, $bwlimit) = @_;
+    my ($conf, $compression_opt, $rootdir, $bwlimit, $untrusted) = @_;
 
     my ($id_map, $root_uid, $root_gid) = PVE::LXC::parse_id_maps($conf);
     my $userns_cmd = PVE::LXC::userns_command($id_map);
 
+    die "refusing to restore privileged container backup from external source\n"
+	if $untrusted && ($root_uid == 0 || $root_gid == 0);
+
     my $cmd = [@$userns_cmd, 'tar', 'xpf', '-'];
     push $cmd->@*, $compression_opt if $compression_opt;
     push $cmd->@*, '--totals';
@@ -127,7 +188,7 @@ my sub restore_tar_archive_command {
 }
 
 sub restore_tar_archive {
-    my ($archive, $rootdir, $conf, $no_unpack_error, $bwlimit) = @_;
+    my ($archive, $rootdir, $conf, $no_unpack_error, $bwlimit, $untrusted) = @_;
 
     my $archive_fh;
     my $tar_input = '<&STDIN';
@@ -142,7 +203,12 @@ sub restore_tar_archive {
 	$tar_input = '<&'.fileno($archive_fh);
     }
 
-    my $cmd = restore_tar_archive_command($conf, $compression_opt, $rootdir, $bwlimit);
+    if ($untrusted) {
+	die "cannot verify untrusted archive on STDIN\n" if $archive eq '-';
+	check_tar_archive($archive);
+    }
+
+    my $cmd = restore_tar_archive_command($conf, $compression_opt, $rootdir, $bwlimit, $untrusted);
 
     if ($archive eq '-') {
 	print "extracting archive from STDIN\n";
@@ -170,7 +236,7 @@ sub restore_external_archive {
 	    my $tar_path = $info->{'tar-path'}
 		or die "did not get path to tar file from backup provider\n";
 	    die "not a regular file '$tar_path'" if !-f $tar_path;
-	    restore_tar_archive($tar_path, $rootdir, $conf, $no_unpack_error, $bwlimit);
+	    restore_tar_archive($tar_path, $rootdir, $conf, $no_unpack_error, $bwlimit, 1);
 	} elsif ($mechanism eq 'directory') {
 	    my $directory = $info->{'archive-directory'}
 		or die "did not get path to archive directory from backup provider\n";
@@ -189,6 +255,7 @@ sub restore_external_archive {
 		'.',
 	    ];
 
+	    # archive is trusted, we created it
 	    my $extract_cmd = restore_tar_archive_command($conf, undef, $rootdir, $bwlimit);
 
 	    my $cmd;
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 container 7/7] api: add early check against restoring privileged container from external source
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (34 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 6/7] restore tar archive: check potentially untrusted archive Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 manager 1/2] ui: backup: also check for backup subtype to classify archive Wolfgang Bumiller
                   ` (2 subsequent siblings)
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

While restore_external_archive() already has a check, that one only
happens after an existing container has already been destroyed.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
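
To illustrate the intended behavior of the new helper (storage name made up):

    # plain file path: parse_volume_id() returns undef, check passes
    assert_not_restore_from_external('/tmp/backup.tar.gz', $storage_cfg);

    # volume on a storage with the 'backup-provider' feature: dies
    assert_not_restore_from_external(
        'example-provider:backup/123/lxc-1743680000', $storage_cfg);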

 src/PVE/API2/LXC.pm | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 7cb5122..6cd771c 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -40,6 +40,17 @@ BEGIN {
     }
 }
 
+my sub assert_not_restore_from_external {
+    my ($archive, $storage_cfg) = @_;
+
+    my ($storeid, undef) = PVE::Storage::parse_volume_id($archive, 1);
+
+    return if !defined($storeid);
+    return if !PVE::Storage::storage_has_feature($storage_cfg, $storeid, 'backup-provider');
+
+    die "refusing to restore privileged container backup from external source\n";
+}
+
 my $check_storage_access_migrate = sub {
     my ($rpcenv, $authuser, $storecfg, $storage, $node) = @_;
 
@@ -409,6 +420,9 @@ __PACKAGE__->register_method({
 			$conf->{unprivileged} = $orig_conf->{unprivileged}
 			    if !defined($unprivileged) && defined($orig_conf->{unprivileged});
 
+			assert_not_restore_from_external($archive, $storage_cfg)
+			    if !$conf->{unprivileged};
+
 			# implicit privileged change is checked here
 			if ($old_conf->{unprivileged} && !$conf->{unprivileged}) {
 			    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Allocate']);
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 manager 1/2] ui: backup: also check for backup subtype to classify archive
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (35 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 7/7] api: add early check against restoring privileged container from external source Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 manager 2/2] backup: implement backup for external providers Wolfgang Bumiller
  2025-04-03 16:10 ` [pve-devel] partially-applied-series: [PATCH v8 storage 0/9] backup provider API Thomas Lamprecht
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

In anticipation of future storage plugins that might not have
PBS-specific formats or adhere to the vzdump naming scheme for
backups.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
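
For context, a backup volume listed by a backup-provider storage may carry
neither a PBS format nor a vzdump-style volid, so only the new 'subtype'
check can classify it. A hypothetical record (values made up):

    {
        "volid": "example-provider:backup/123/qemu-1743680000",
        "format": "directory",
        "subtype": "qemu",
        "content": "backup"
    }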

 www/manager6/Utils.js              | 10 ++++++----
 www/manager6/grid/BackupView.js    |  4 ++--
 www/manager6/storage/BackupView.js |  4 ++--
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index aa415759c..1ed3de5d5 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -694,12 +694,14 @@ Ext.define('PVE.Utils', {
 	'import': gettext('Import'),
     },
 
-    volume_is_qemu_backup: function(volid, format) {
-	return format === 'pbs-vm' || volid.match(':backup/vzdump-qemu-');
+    volume_is_qemu_backup: function(volume) {
+	return volume.format === 'pbs-vm' || volume.volid.match(':backup/vzdump-qemu-') ||
+	    volume.subtype === 'qemu';
     },
 
-    volume_is_lxc_backup: function(volid, format) {
-	return format === 'pbs-ct' || volid.match(':backup/vzdump-(lxc|openvz)-');
+    volume_is_lxc_backup: function(volume) {
+	return volume.format === 'pbs-ct' || volume.volid.match(':backup/vzdump-(lxc|openvz)-') ||
+	    volume.subtype === 'lxc';
     },
 
     authSchema: {
diff --git a/www/manager6/grid/BackupView.js b/www/manager6/grid/BackupView.js
index e71d1c88a..ef3649c68 100644
--- a/www/manager6/grid/BackupView.js
+++ b/www/manager6/grid/BackupView.js
@@ -29,11 +29,11 @@ Ext.define('PVE.grid.BackupView', {
 	var vmtypeFilter;
 	if (vmtype === 'lxc' || vmtype === 'openvz') {
 	    vmtypeFilter = function(item) {
-		return PVE.Utils.volume_is_lxc_backup(item.data.volid, item.data.format);
+		return PVE.Utils.volume_is_lxc_backup(item.data);
 	    };
 	} else if (vmtype === 'qemu') {
 	    vmtypeFilter = function(item) {
-		return PVE.Utils.volume_is_qemu_backup(item.data.volid, item.data.format);
+		return PVE.Utils.volume_is_qemu_backup(item.data);
 	    };
 	} else {
 	    throw "unsupported VM type '" + vmtype + "'";
diff --git a/www/manager6/storage/BackupView.js b/www/manager6/storage/BackupView.js
index db184def6..749c21360 100644
--- a/www/manager6/storage/BackupView.js
+++ b/www/manager6/storage/BackupView.js
@@ -86,9 +86,9 @@ Ext.define('PVE.storage.BackupView', {
 		disabled: true,
 		handler: function(b, e, rec) {
 		    let vmtype;
-		    if (PVE.Utils.volume_is_qemu_backup(rec.data.volid, rec.data.format)) {
+		    if (PVE.Utils.volume_is_qemu_backup(rec.data)) {
 			vmtype = 'qemu';
-		    } else if (PVE.Utils.volume_is_lxc_backup(rec.data.volid, rec.data.format)) {
+		    } else if (PVE.Utils.volume_is_lxc_backup(rec.data)) {
 			vmtype = 'lxc';
 		    } else {
 			return;
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] [PATCH v8 manager 2/2] backup: implement backup for external providers
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (36 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 manager 1/2] ui: backup: also check for backup subtype to classify archive Wolfgang Bumiller
@ 2025-04-03 12:31 ` Wolfgang Bumiller
  2025-04-03 16:10 ` [pve-devel] partially-applied-series: [PATCH v8 storage 0/9] backup provider API Thomas Lamprecht
  38 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-03 12:31 UTC (permalink / raw)
  To: pve-devel

From: Fiona Ebner <f.ebner@proxmox.com>

Call job_{init,cleanup}() and backup_{init,cleanup}() methods so that
backup providers can prepare and clean up for the whole backup job and
for individual guest backups.

It is necessary to adapt some log messages and special-case some
things, as is already done for PBS, e.g. log file handling.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
No changes to v7.
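
For context, a minimal sketch of a provider's backup_init() that satisfies
the validation added below (naming scheme borrowed from the directory
example plugin):

    sub backup_init {
        my ($self, $vmid, $vmtype, $backup_time) = @_;
        # may only contain safe characters plus '/' and ':'
        return { 'archive-name' => "${vmid}/${vmtype}-${backup_time}" };
    }

The returned name ends up in $task->{target} and later forms the volume
name part of the backup volid.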

 PVE/VZDump.pm           | 61 ++++++++++++++++++++++++++++++++++++-----
 test/vzdump_new_test.pl |  3 ++
 2 files changed, 57 insertions(+), 7 deletions(-)

diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index fd89945ee..2ccbe6462 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -217,7 +217,10 @@ sub storage_info {
     $info->{'prune-backups'} = PVE::JSONSchema::parse_property_string('prune-backups', $scfg->{'prune-backups'})
 	if defined($scfg->{'prune-backups'});
 
-    if ($type eq 'pbs') {
+    if (PVE::Storage::storage_has_feature($cfg, $storage, 'backup-provider')) {
+	$info->{'backup-provider'} =
+	    PVE::Storage::new_backup_provider($cfg, $storage, sub { debugmsg($_[0], $_[1]); });
+    } elsif ($type eq 'pbs') {
 	$info->{pbs} = 1;
     } else {
 	$info->{dumpdir} = PVE::Storage::get_backup_dir($cfg, $storage);
@@ -717,6 +720,7 @@ sub new {
 	    $opts->{scfg} = $info->{scfg};
 	    $opts->{pbs} = $info->{pbs};
 	    $opts->{'prune-backups'} //= $info->{'prune-backups'};
+	    $self->{'backup-provider'} = $info->{'backup-provider'} if $info->{'backup-provider'};
 	}
     } elsif ($opts->{dumpdir}) {
 	$add_error->("dumpdir '$opts->{dumpdir}' does not exist")
@@ -1001,7 +1005,7 @@ sub exec_backup_task {
 	    }
 	}
 
-	if (!$self->{opts}->{pbs}) {
+	if (!$self->{opts}->{pbs} && !$self->{'backup-provider'}) {
 	    $task->{logfile} = "$opts->{dumpdir}/$basename.log";
 	}
 
@@ -1011,7 +1015,10 @@ sub exec_backup_task {
 	    $ext .= ".${comp_ext}";
 	}
 
-	if ($self->{opts}->{pbs}) {
+	if ($self->{'backup-provider'}) {
+	    die "unable to pipe backup to stdout\n" if $opts->{stdout};
+	    # the archive name $task->{target} is returned by the start hook a bit later
+	} elsif ($self->{opts}->{pbs}) {
 	    die "unable to pipe backup to stdout\n" if $opts->{stdout};
 	    $task->{target} = $pbs_snapshot_name;
 	} else {
@@ -1029,7 +1036,7 @@ sub exec_backup_task {
 	my $pid = $$;
 	if ($opts->{tmpdir}) {
 	    $task->{tmpdir} = "$opts->{tmpdir}/vzdumptmp${pid}_$vmid/";
-	} elsif ($self->{opts}->{pbs}) {
+	} elsif ($self->{opts}->{pbs} || $self->{'backup-provider'}) {
 	    $task->{tmpdir} = "/var/tmp/vzdumptmp${pid}_$vmid";
 	} else {
 	    # dumpdir is posix? then use it as temporary dir
@@ -1098,9 +1105,23 @@ sub exec_backup_task {
 	debugmsg ('info', "bandwidth limit: $opts->{bwlimit} KiB/s", $logfd)  if $opts->{bwlimit};
 	debugmsg ('info', "ionice priority: $opts->{ionice}", $logfd);
 
+	my $backup_provider_init = sub {
+	    my $init_result =
+		$self->{'backup-provider'}->backup_init($vmid, $vmtype, $task->{backup_time});
+	    die "backup init failed: did not receive a valid result from the backup provider\n"
+		if !defined($init_result) || ref($init_result) ne 'HASH';
+	    my $archive_name = $init_result->{'archive-name'};
+	    die "backup init failed: did not receive an archive name from backup provider\n"
+		if !defined($archive_name) || length($archive_name) == 0;
+	    die "backup init failed: illegal characters in archive name '$archive_name'\n"
+		if $archive_name !~ m!^(${PVE::Storage::SAFE_CHAR_CLASS_RE}|/|:)+$!;
+	    $task->{target} = $archive_name;
+	};
+
 	if ($mode eq 'stop') {
 	    $plugin->prepare ($task, $vmid, $mode);
 
+	    $backup_provider_init->() if $self->{'backup-provider'};
 	    $self->run_hook_script ('backup-start', $task, $logfd);
 
 	    if ($running) {
@@ -1115,6 +1136,7 @@ sub exec_backup_task {
 	} elsif ($mode eq 'suspend') {
 	    $plugin->prepare ($task, $vmid, $mode);
 
+	    $backup_provider_init->() if $self->{'backup-provider'};
 	    $self->run_hook_script ('backup-start', $task, $logfd);
 
 	    if ($vmtype eq 'lxc') {
@@ -1141,6 +1163,7 @@ sub exec_backup_task {
 	    }
 
 	} elsif ($mode eq 'snapshot') {
+	    $backup_provider_init->() if $self->{'backup-provider'};
 	    $self->run_hook_script ('backup-start', $task, $logfd);
 
 	    my $snapshot_count = $task->{snapshot_count} || 0;
@@ -1183,11 +1206,13 @@ sub exec_backup_task {
 	    return;
 	}
 
-	my $archive_txt = $self->{opts}->{pbs} ? 'Proxmox Backup Server' : 'vzdump';
+	my $archive_txt = 'vzdump';
+	$archive_txt = 'Proxmox Backup Server' if $self->{opts}->{pbs};
+	$archive_txt = $self->{'backup-provider'}->provider_name() if $self->{'backup-provider'};
 	debugmsg('info', "creating $archive_txt archive '$task->{target}'", $logfd);
 	$plugin->archive($task, $vmid, $task->{tmptar}, $comp);
 
-	if ($self->{opts}->{pbs}) {
+	if ($self->{'backup-provider'} || $self->{opts}->{pbs}) {
 	    # size is added to task struct in guest vzdump plugins
 	} else {
 	    rename ($task->{tmptar}, $task->{target}) ||
@@ -1201,7 +1226,8 @@ sub exec_backup_task {
 
 	# Mark as protected before pruning.
 	if (my $storeid = $opts->{storage}) {
-	    my $volname = $opts->{pbs} ? $task->{target} : basename($task->{target});
+	    my $volname = $opts->{pbs} || $self->{'backup-provider'} ? $task->{target}
+	                                                             : basename($task->{target});
 	    my $volid = "${storeid}:backup/${volname}";
 
 	    if ($opts->{'notes-template'} && $opts->{'notes-template'} ne '') {
@@ -1254,6 +1280,10 @@ sub exec_backup_task {
 	    debugmsg ('info', "pruned $pruned backup(s)${log_pruned_extra}", $logfd);
 	}
 
+	if ($self->{'backup-provider'}) {
+	    my $cleanup_result = $self->{'backup-provider'}->backup_cleanup($vmid, $vmtype, 1, {});
+	    $task->{size} = $cleanup_result->{stats}->{'archive-size'};
+	}
 	$self->run_hook_script ('backup-end', $task, $logfd);
     };
     my $err = $@;
@@ -1313,6 +1343,14 @@ sub exec_backup_task {
 	debugmsg ('err', "Backup of VM $vmid failed - $err", $logfd, 1);
 	debugmsg ('info', "Failed at " . strftime("%F %H:%M:%S", localtime()));
 
+	if ($self->{'backup-provider'}) {
+	    eval {
+		$self->{'backup-provider'}->backup_cleanup(
+		    $vmid, $task->{vmtype}, 0, { error => $err });
+	    };
+	    debugmsg('warn', "backup cleanup for external provider failed - $@") if $@;
+	}
+
 	eval { $self->run_hook_script ('backup-abort', $task, $logfd); };
 	debugmsg('warn', $@) if $@; # message already contains command with phase name
 
@@ -1340,6 +1378,8 @@ sub exec_backup_task {
 		};
 		debugmsg('warn', "$@") if $@; # $@ contains already error prefix
 	    }
+	} elsif ($self->{'backup-provider'}) {
+	    $self->{'backup-provider'}->backup_handle_log_file($vmid, $task->{tmplog});
 	} elsif ($task->{logfile}) {
 	    system {'cp'} 'cp', $task->{tmplog}, $task->{logfile};
 	}
@@ -1398,6 +1438,7 @@ sub exec_backup {
     my $errcount = 0;
     eval {
 
+	$self->{'backup-provider'}->job_init($starttime) if $self->{'backup-provider'};
 	$self->run_hook_script ('job-start', undef, $job_start_fd);
 
 	foreach my $task (@$tasklist) {
@@ -1405,11 +1446,17 @@ sub exec_backup {
 	    $errcount += 1 if $task->{state} ne 'ok';
 	}
 
+	$self->{'backup-provider'}->job_cleanup() if $self->{'backup-provider'};
 	$self->run_hook_script ('job-end', undef, $job_end_fd);
     };
     my $err = $@;
 
     if ($err) {
+	if ($self->{'backup-provider'}) {
+	    eval { $self->{'backup-provider'}->job_cleanup(); };
+	    $err .= "job cleanup for external provider failed - $@" if $@;
+	}
+
 	eval { $self->run_hook_script ('job-abort', undef, $job_end_fd); };
 	$err .= $@ if $@;
 	debugmsg ('err', "Backup job failed - $err", undef, 1);
diff --git a/test/vzdump_new_test.pl b/test/vzdump_new_test.pl
index 8cd730758..01f2a6619 100755
--- a/test/vzdump_new_test.pl
+++ b/test/vzdump_new_test.pl
@@ -51,6 +51,9 @@ $pve_storage_module->mock(
     activate_storage => sub {
 	return;
     },
+    get_backup_provider => sub {
+	return;
+    },
 );
 
 my $pve_cluster_module = Test::MockModule->new('PVE::Cluster');
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [pve-devel] partially-applied-series: [PATCH v8 storage 0/9] backup provider API
  2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
                   ` (37 preceding siblings ...)
  2025-04-03 12:31 ` [pve-devel] [PATCH v8 manager 2/2] backup: implement backup for external providers Wolfgang Bumiller
@ 2025-04-03 16:10 ` Thomas Lamprecht
  38 siblings, 0 replies; 41+ messages in thread
From: Thomas Lamprecht @ 2025-04-03 16:10 UTC (permalink / raw)
  To: Proxmox VE development discussion, Wolfgang Bumiller

Am 03.04.25 um 14:30 schrieb Wolfgang Bumiller:
> v7: https://lore.proxmox.com/pve-devel/20250401173435.221892-1-f.ebner@proxmox.com/
> 

> Changes in v8:
> - storage: replace backup_vm_available_bitmaps() with backup_vm_query_incremental()
> - qemu: add an explicit optional 'bitmap-mode' parameter to
>   qmp_backup_access_setup()
> - qemu-server: adapt to the storage api change and pass the bitmap-mode
>   to the qmp-backup-access-setup command
> - storage: adapt directory example
> - storage: change the previous-info file to be a json hash mapping disks
>   to their last backups
> - qemu-server: fix 'bwlimit'/'bwlimiit' typo
> - qemu: fix a leak on error in `get_single_device_info()`
> 
I applied the QEMU part already to get it built for broader QA; that does
not mean the new interfaces are set in stone yet though - I have no issue
with adapting them until we actually use them, i.e., apply the rest of this
series.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [pve-devel] [POC v8 storage 7/8] add backup provider example
  2025-04-03 12:30 ` [pve-devel] [POC v8 storage 7/8] add backup provider example Wolfgang Bumiller
@ 2025-04-04  6:58   ` Wolfgang Bumiller
  0 siblings, 0 replies; 41+ messages in thread
From: Wolfgang Bumiller @ 2025-04-04  6:58 UTC (permalink / raw)
  To: pve-devel

On Thu, Apr 03, 2025 at 02:30:57PM +0200, Wolfgang Bumiller wrote:
> From: Fiona Ebner <f.ebner@proxmox.com>
> 
> The example uses a simple directory structure to save the backups,
> grouped by guest ID. VM backups are saved as configuration files and
> qcow2 images, with backing files when doing incremental backups.
> Container backups are saved as configuration files and a tar file or
> squashfs image (added to test the 'directory' restore mechanism).
> 
> Whether to use incremental VM backups and which backup mechanisms to
> use can be configured in the storage configuration.
> 
> The 'nbdinfo' binary from the 'libnbd-bin' package is required for
> backup mechanism 'nbd' for VM backups, the 'mksquashfs' binary from the
> 'squashfs-tools' package is required for backup mechanism 'squashfs' for
> containers.
> 
> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
> [WB: update from backup_vm_available_bitmaps() to
>      backup_vm_query_incremental(), the previous-info file is now a
>      json file mapping the individual volumes instead of a single
>      backup id to support toggling the backup=0|1 property on
>      individual drives between backups]
> Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
> ---
> Changes in v8: described in the trailers above ^
> 
>  .../BackupProvider/Plugin/DirectoryExample.pm | 809 ++++++++++++++++++
>  src/PVE/BackupProvider/Plugin/Makefile        |   2 +-
>  .../Custom/BackupProviderDirExamplePlugin.pm  | 308 +++++++
>  src/PVE/Storage/Custom/Makefile               |   5 +
>  src/PVE/Storage/Makefile                      |   1 +
>  5 files changed, 1124 insertions(+), 1 deletion(-)
>  create mode 100644 src/PVE/BackupProvider/Plugin/DirectoryExample.pm
>  create mode 100644 src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm
>  create mode 100644 src/PVE/Storage/Custom/Makefile
> 
> diff --git a/src/PVE/BackupProvider/Plugin/DirectoryExample.pm b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm
> new file mode 100644
> index 0000000..4c5c8f6
> --- /dev/null
> +++ b/src/PVE/BackupProvider/Plugin/DirectoryExample.pm
> @@ -0,0 +1,809 @@
> +package PVE::BackupProvider::Plugin::DirectoryExample;
> +
> +use strict;
> +use warnings;
> +
> +use Fcntl qw(SEEK_SET S_ISSOCK);
> +use File::Path qw(make_path remove_tree);
> +use File::stat ();
> +use IO::File;
> +use IPC::Open3;
> +use JSON qw(from_json to_json);
> +
> +use PVE::Storage::Common;
> +use PVE::Storage::Plugin;
> +use PVE::Tools qw(file_get_contents file_read_firstline file_set_contents run_command);
> +
> +use base qw(PVE::BackupProvider::Plugin::Base);
> +
> +# Private helpers
> +
> +my sub log_info {
> +    my ($self, $message) = @_;
> +
> +    $self->{'log-function'}->('info', $message);
> +}
> +
> +my sub log_warning {
> +    my ($self, $message) = @_;
> +
> +    $self->{'log-function'}->('warn', $message);
> +}
> +
> +my sub log_error {
> +    my ($self, $message) = @_;
> +
> +    $self->{'log-function'}->('err', $message);
> +}
> +
> +# NOTE: This is just for proof-of-concept testing! A backup provider plugin should either use the
> +# 'nbd' backup mechanism and use the NBD protocol or use the 'file-handle' mechanism. There should
> +# be no need to use /dev/nbdX nodes for proper plugins.
> +my sub bind_next_free_dev_nbd_node {
> +    my ($options) = @_;
> +
> +    # /dev/nbdX devices are reserved in a file. Those reservations expire after $expiretime.
> +    # This avoids race conditions between allocation and use.
> +
> +    die "file '/sys/module/nbd' does not exist - 'nbd' kernel module not loaded?"
> +	if !-e "/sys/module/nbd";
> +
> +    my $line = PVE::Tools::file_read_firstline("/sys/module/nbd/parameters/nbds_max")
> +	or die "could not read 'nbds_max' parameter file for 'nbd' kernel module\n";
> +    my ($nbds_max) = ($line =~ m/(\d+)/)
> +	or die "could not determine 'nbds_max' parameter for 'nbd' kernel module\n";
> +
> +    my $filename = "/run/qemu-server/reserved-dev-nbd-nodes";
> +
> +    my $code = sub {
> +	my $expiretime = 60;
> +	my $ctime = time();
> +
> +	my $used = {};
> +	my $latest = [0, 0];
> +
> +	if (my $fh = IO::File->new ($filename, "r")) {
> +	    while (my $line = <$fh>) {
> +		if ($line =~ m/^(\d+)\s(\d+)$/) {
> +		    my ($n, $timestamp) = ($1, $2);
> +
> +		    $latest = [$n, $timestamp] if $latest->[1] <= $timestamp;
> +
> +		    if (($timestamp + $expiretime) > $ctime) {
> +			$used->{$n} = $timestamp; # not expired
> +		    }
> +		}
> +	    }
> +	}
> +
> +	my $new_n;
> +	for (my $count = 0; $count < $nbds_max; $count++) {
> +	    my $n = ($latest->[0] + $count) % $nbds_max;
> +	    my $block_device = "/dev/nbd${n}";
> +	    next if $used->{$n}; # reserved
> +	    next if !-e $block_device;
> +
> +	    my $st = File::stat::stat("/run/lock/qemu-nbd-nbd${n}");
> +	    next if defined($st) && S_ISSOCK($st->mode) && $st->uid == 0; # in use
> +
> +	    # Used to avoid looping if there are other issues than the NBD node being in use
> +	    my $socket_error = 0;
> +	    eval {
> +		my $errfunc = sub {
> +		    my ($line) = @_;
> +		    $socket_error = 1 if $line =~ m/^qemu-nbd: Failed to set NBD socket$/;
> +		    log_warn($line);
> +		};
> +		run_command(["qemu-nbd", "-c", $block_device, $options->@*], errfunc => $errfunc);
> +	    };
> +	    if (my $err = $@) {
> +		die $err if !$socket_error;
> +		log_warn("unable to bind $block_device - trying next one");
> +		next;
> +	    }
> +	    $used->{$n} = $ctime;
> +	    $new_n = $n;
> +	    last;
> +	}
> +
> +	my $data = "";
> +	$data .= "$_ $used->{$_}\n" for keys $used->%*;
> +
> +	PVE::Tools::file_set_contents($filename, $data);
> +
> +	return defined($new_n) ? "/dev/nbd${new_n}" : undef;
> +    };
> +
> +    my $block_device =
> +	PVE::Tools::lock_file('/run/lock/qemu-server/reserved-dev-nbd-nodes.lock', 10, $code);
> +    die $@ if $@;
> +
> +    die "unable to find free /dev/nbdX block device node\n" if !$block_device;
> +
> +    return $block_device;
> +}
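
For illustration: the reservation file consists of '<index> <timestamp>'
lines, one per recently bound /dev/nbdX node, e.g. (timestamps made up):

    3 1743680000
    4 1743680012

Expired entries are dropped on the next allocation and scanning starts from
the most recently reserved index, so parallel backups tend not to race for
the same node.
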
> +
> +# Backup Provider API
> +
> +sub new {
> +    my ($class, $storage_plugin, $scfg, $storeid, $log_function) = @_;
> +
> +    my $self = bless {
> +	scfg => $scfg,
> +	storeid => $storeid,
> +	'storage-plugin' => $storage_plugin,
> +	'log-function' => $log_function,
> +    }, $class;
> +
> +    return $self;
> +}
> +
> +sub provider_name {
> +    my ($self) = @_;
> +
> +    return 'dir provider example';
> +}
> +
> +# Hooks
> +
> +sub job_init {
> +    my ($self, $start_time) = @_;
> +
> +    log_info($self, "job init called");
> +
> +    if (!-e '/sys/module/nbd/parameters') {
> +	die "required 'nbd' kernel module not loaded - use 'modprobe nbd nbds_max=128' to load it"
> +	    ." manually\n";
> +    }
> +
> +    log_info($self, "backup provider initialized successfully for new job $start_time");
> +
> +    return;
> +}
> +
> +sub job_cleanup {
> +    my ($self) = @_;
> +
> +    log_info($self, "job cleanup called");
> +
> +    return;
> +}
> +
> +sub backup_init {
> +    my ($self, $vmid, $vmtype, $backup_time) = @_;
> +
> +    my $archive_subdir = "${vmtype}-${backup_time}";
> +    my $archive = "${vmid}/${archive_subdir}";
> +
> +    log_info($self, "backup start hook called");
> +
> +    my $backup_dir = $self->{scfg}->{path} . "/" . $archive;
> +
> +    make_path($backup_dir);
> +    die "unable to create directory $backup_dir\n" if !-d $backup_dir;
> +
> +    $self->{$vmid}->{'backup-time'} = $backup_time;
> +    $self->{$vmid}->{'backup-dir'} = $backup_dir;
> +
> +    $self->{$vmid}->{'archive-subdir'} = $archive_subdir;
> +    $self->{$vmid}->{archive} = $archive;
> +    return { 'archive-name' => $archive };
> +}
> +
> +my sub get_previous_info_tainted {
> +    my ($self, $vmid) = @_;
> +
> +    my $previous_info_file = "$self->{scfg}->{path}/$vmid/previous-info";
> +
> +    return eval { from_json(file_get_contents($previous_info_file)) } // {};
> +}
> +
> +my sub update_previous_info {
> +    my ($self, $vmid) = @_;
> +
> +    my $previous_info_file = "$self->{scfg}->{path}/$vmid/previous-info";
> +
> +    if (defined(my $info = $self->{$vmid}->{previous})) {
> +	file_set_contents($previous_info_file, to_json($info));
> +    } else {
> +	unlink($previous_info_file);
> +    }
> +}
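
For illustration, the previous-info file written here is a JSON hash mapping
device names to the relative path of their last backup image, e.g. (device
name and timestamp made up):

    { "drive-scsi0": "qemu-1743680000/drive-scsi0.qcow2" }
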
> +
> +
> +sub backup_cleanup {
> +    my ($self, $vmid, $vmtype, $success, $info) = @_;
> +
> +    if ($success) {
> +	log_info($self, "backup cleanup called - success");
> +	eval {
> +	    update_previous_info($self, $vmid);
> +	};
> +	if (my $err = $@) {
> +	    log_error($self, "failed to update previous-info file: $err");
> +	}
> +	my $size = 0;
> +	my $backup_dir = $self->{$vmid}->{'backup-dir'};
> +	my @backup_files = glob("$backup_dir/*");
> +	$size += -s $_ for @backup_files;
> +	my $stats = { 'archive-size' => $size };
> +	return { 'stats' => $stats };
> +    } else {
> +	log_info($self, "backup cleanup called - failure");
> +
> +	$self->{$vmid}->{failed} = 1;
> +
> +	if (my $dir = $self->{$vmid}->{'backup-dir'}) {
> +	    eval { remove_tree($dir) };
> +	    log_warning($self, "unable to clean up $dir - $@") if $@;
> +	}
> +
> +	# Restore old previous-info so next attempt can re-use bitmap again
> +	if (my $info = $self->{$vmid}->{'old-previous-info'}) {
> +	    my $previous_info_dir = "$self->{scfg}->{path}/$vmid/";
> +	    my $previous_info_file = "$previous_info_dir/previous-info";
> +	    file_set_contents($previous_info_file, $info);
> +	}
> +    }
> +}
> +
> +sub backup_container_prepare {
> +    my ($self, $vmid, $info) = @_;
> +
> +    my $dir = $self->{$vmid}->{'backup-dir'};
> +    chown($info->{'backup-user-id'}, -1, $dir) or die "unable to change owner for $dir\n";
> +
> +    return;
> +}
> +
> +sub backup_vm_query_incremental {
> +    my ($self, $vmid, $volumes) = @_;
> +
> +    # Try to use the last backup's disks for incremental backup if the storage
> +    # is configured for incremental VM backup. Need to start fresh if there is
> +    # no previous backup or the associated backup doesn't exist.
> +
> +    return if $self->{'storage-plugin'}->get_vm_backup_mode($self->{scfg}) ne 'incremental';
> +
> +    my $vmtype = 'qemu';
> +
> +    my $out = {};
> +
> +    my $info = get_previous_info_tainted($self, $vmid);
> +    for my $device_name (keys $volumes->%*) {
> +	my $prev_file = $info->{$device_name};
> +	next if !defined $prev_file;
> +	# it's type-time/disk.qcow2
> +	next if $prev_file !~ m!^([^/]+/[^/]+\.qcow2)$!;
> +	$prev_file = $1; # untaint
> +
> +	my $full_path = "$self->{scfg}->{path}/$vmid/$prev_file";
> +
> +	if (-e $full_path) {
> +	    $self->{$vmid}->{previous}->{$device_name} = $prev_file;
> +	    $out->{$device_name} = 'use';
> +	} else {
> +	    $out->{$device_name} = 'new';
> +	}
> +    }
> +
> +    return $out;
> +}
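
So for a hypothetical two-disk VM where only one previous image still
exists, this would return:

    { 'drive-scsi0' => 'use', 'drive-scsi1' => 'new' }

i.e. reuse the existing bitmap for scsi0 (incremental backup) and recreate
a fresh bitmap plus full backup for scsi1.
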
> +
> +sub backup_get_mechanism {
> +    my ($self, $vmid, $vmtype) = @_;
> +
> +    return 'directory' if $vmtype eq 'lxc';
> +    return $self->{'storage-plugin'}->get_vm_backup_mechanism($self->{scfg}) if $vmtype eq 'qemu';
> +
> +    die "unsupported guest type '$vmtype'\n";
> +}
> +
> +sub backup_handle_log_file {
> +    my ($self, $vmid, $filename) = @_;
> +
> +    my $log_dir = $self->{$vmid}->{'backup-dir'};
> +    if ($self->{$vmid}->{failed}) {
> +	$log_dir .= ".failed";
> +    }
> +    make_path($log_dir);
> +    die "unable to create directory $log_dir\n" if !-d $log_dir;
> +
> +    my $data = file_get_contents($filename);
> +    my $target = "${log_dir}/backup.log";
> +    file_set_contents($target, $data);
> +}
> +
> +my sub backup_file {
> +    my ($self, $vmid, $device_name, $size, $in_fh, $bitmap_mode, $next_dirty_region, $bandwidth_limit) = @_;
> +
> +    # TODO honor bandwidth_limit
> +
> +    my $target = "$self->{$vmid}->{'backup-dir'}/${device_name}.qcow2";
> +
> +    my $create_cmd = ["qemu-img", "create", "-f", "qcow2", $target, $size];
> +    if (my $previous_file = $self->{$vmid}->{previous}->{$device_name}) {
> +	my $target_base = "../$previous_file";
> +	push $create_cmd->@*, "-b", $target_base, "-F", "qcow2";
> +    }
> +    run_command($create_cmd);
> +
> +    my $nbd_node;
> +    eval {
> +	# allows to easily write to qcow2 target
> +	$nbd_node = bind_next_free_dev_nbd_node([$target, '--format=qcow2']);
> +	# FIXME use nbdfuse like in qemu-server rather than qemu-nbd. Seems like there is a race and
> +	# sysseek() can fail with "Invalid argument" if done too early...
> +	sleep 1;
> +
> +	my $block_size = 4 * 1024 * 1024; # 4 MiB
> +
> +	my $out_fh = IO::File->new($nbd_node, "r+")
> +	    or die "unable to open NBD backup target - $!\n";
> +
> +	my $buffer = '';
> +	my $skip_discard;
> +
> +	while (scalar((my $region_offset, my $region_length) = $next_dirty_region->())) {
> +	    sysseek($in_fh, $region_offset, SEEK_SET)
> +		// die "unable to seek '$region_offset' in NBD backup source - $!\n";
> +	    sysseek($out_fh, $region_offset, SEEK_SET)
> +		// die "unable to seek '$region_offset' in NBD backup target - $!\n";
> +
> +	    my $local_offset = 0; # within the region
> +	    while ($local_offset < $region_length) {
> +		my $remaining = $region_length - $local_offset;
> +		my $request_size = $remaining < $block_size ? $remaining : $block_size;
> +		my $offset = $region_offset + $local_offset;
> +
> +		my $read = sysread($in_fh, $buffer, $request_size);
> +		die "failed to read from backup source - $!\n" if !defined($read);
> +		die "premature EOF while reading backup source\n" if $read == 0;
> +
> +		my $written = 0;
> +		while ($written < $read) {
> +		    my $res = syswrite($out_fh, $buffer, $request_size - $written, $written);
> +		    die "failed to write to backup target - $!\n" if !defined($res);
> +		    die "unable to progress writing to backup target\n" if $res == 0;
> +		    $written += $res;
> +		}
> +
> +		if (!$skip_discard) {
> +		    eval { PVE::Storage::Common::deallocate($in_fh, $offset, $request_size); };
> +		    if (my $err = $@) {
> +			# Just assume that if one request didn't work, others won't either.
> +			log_warning(
> +			    $self, "discard source failed (skipping further discards) - $err");
> +			$skip_discard = 1;
> +		     }
> +		 }
> +
> +		$local_offset += $request_size;
> +	    }
> +	}
> +	$out_fh->sync();
> +    };
> +    my $err = $@;
> +
> +    $self->{$vmid}->{previous}->{$device_name} = "$self->{$vmid}->{'archive-subdir'}/${device_name}.qcow2";
> +
> +    eval { run_command(['qemu-nbd', '-d', $nbd_node ]); };
> +    log_warning($self, "unable to disconnect NBD backup target - $@") if $@;
> +
> +    die $err if $err;
> +}
> +
> +my sub backup_nbd {
> +    my ($self, $vmid, $device_name, $size, $nbd_path, $bitmap_mode, $bitmap_name, $bandwidth_limit) = @_;
> +
> +    # TODO honor bandwidth_limit
> +
> +    die "need 'nbdinfo' binary from package libnbd-bin\n" if !-e "/usr/bin/nbdinfo";
> +
> +    my $nbd_info_uri = "nbd+unix:///${device_name}?socket=${nbd_path}";
> +    my $qemu_nbd_uri = "nbd:unix:${nbd_path}:exportname=${device_name}";
> +
> +    my $cpid;
> +    my $error_fh;
> +    my $next_dirty_region;
> +
> +    # If there is no dirty bitmap, it can be treated as if there's a fully dirty one. The output of
> +    # nbdinfo is a list of tuples with offset, length, type, description. The first bit of 'type' is
> +    # set when the bitmap is dirty, see QEMU's docs/interop/nbd.txt
> +    my $dirty_bitmap = [];
> +    if ($bitmap_mode ne 'none') {
> +	my $input = IO::File->new();
> +	my $info = IO::File->new();
> +	$error_fh = IO::File->new();
> +	my $nbdinfo_cmd = ["nbdinfo", $nbd_info_uri, "--map=qemu:dirty-bitmap:${bitmap_name}"];
> +	$cpid = open3($input, $info, $error_fh, $nbdinfo_cmd->@*)
> +	    or die "failed to spawn nbdinfo child - $!\n";
> +
> +	$next_dirty_region = sub {
> +	    my ($offset, $length, $type);
> +	    do {
> +		my $line = <$info>;
> +		return if !$line;
> +		die "unexpected output from nbdinfo - $line\n"
> +		    if $line !~ m/^\s*(\d+)\s*(\d+)\s*(\d+)/; # also untaints
> +		($offset, $length, $type) = ($1, $2, $3);
> +	    } while (($type & 0x1) == 0); # not dirty
> +	    return ($offset, $length);
> +	};
> +    } else {
> +	my $done = 0;
> +	$next_dirty_region = sub {
> +	    return if $done;
> +	    $done = 1;
> +	    return (0, $size);
> +	};
> +    }
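
For reference, hypothetical output of the nbdinfo map (offsets made up,
formatting approximate):

          0     1048576  0  clean
    1048576       65536  1  dirty

Only entries whose type has the low bit set pass the filter, so
$next_dirty_region would yield the single region (1048576, 65536) here.
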
> +
> +    my $nbd_node;
> +    eval {
> +	$nbd_node = bind_next_free_dev_nbd_node([$qemu_nbd_uri, "--format=raw", "--discard=on"]);
> +
> +	my $in_fh = IO::File->new($nbd_node, 'r+')
> +	    or die "unable to open NBD backup source '$nbd_node' - $!\n";
> +
> +	backup_file(
> +	    $self,
> +	    $vmid,
> +	    $device_name,
> +	    $size,
> +	    $in_fh,
> +	    $bitmap_mode,
> +	    $next_dirty_region,
> +	    $bandwidth_limit,
> +	);
> +    };
> +    my $err = $@;
> +
> +    eval { run_command(["qemu-nbd", "-d", $nbd_node ]); };
> +    log_warning($self, "unable to disconnect NBD backup source - $@") if $@;
> +
> +    if ($cpid) {
> +	my $waited;
> +	my $wait_limit = 5;
> +	for ($waited = 0; $waited < $wait_limit && waitpid($cpid, POSIX::WNOHANG) == 0; $waited++) {
> +	    kill 15, $cpid if $waited == 0;
> +	    sleep 1;
> +	}
> +	if ($waited == $wait_limit) {
> +	    kill 9, $cpid;
> +	    sleep 1;
> +	    log_warning($self, "unable to collect nbdinfo child process")
> +		if waitpid($cpid, POSIX::WNOHANG) == 0;
> +	}
> +    }
> +
> +    die $err if $err;
> +}
> +
> +my sub backup_vm_volume {
> +    my ($self, $vmid, $device_name, $info, $bandwidth_limit) = @_;
> +
> +    my $backup_mechanism = $self->{'storage-plugin'}->get_vm_backup_mechanism($self->{scfg});
> +
> +    if ($backup_mechanism eq 'nbd') {
> +	backup_nbd(
> +	    $self,
> +	    $vmid,
> +	    $device_name,
> +	    $info->{size},
> +	    $info->{'nbd-path'},
> +	    $info->{'bitmap-mode'},
> +	    $info->{'bitmap-name'},
> +	    $bandwidth_limit,
> +	);
> +    } elsif ($backup_mechanism eq 'file-handle') {
> +	backup_file(
> +	    $self,
> +	    $vmid,
> +	    $device_name,
> +	    $info->{size},
> +	    $info->{'file-handle'},
> +	    $info->{'bitmap-mode'},
> +	    $info->{'next-dirty-region'},
> +	    $bandwidth_limit,
> +	);
> +    } else {
> +	die "internal error - unknown VM backup mechanism '$backup_mechanism'\n";
> +    }
> +}
> +
> +sub backup_vm {
> +    my ($self, $vmid, $guest_config, $volumes, $info) = @_;
> +
> +    my $target = "$self->{$vmid}->{'backup-dir'}/guest.conf";
> +    file_set_contents($target, $guest_config);
> +
> +    if (my $firewall_config = $info->{'firewall-config'}) {
> +	$target = "$self->{$vmid}->{'backup-dir'}/firewall.conf";
> +	file_set_contents($target, $firewall_config);
> +    }
> +
> +    for my $device_name (sort keys $volumes->%*) {
> +	backup_vm_volume(
> +	    $self, $vmid, $device_name, $volumes->{$device_name}, $info->{'bandwidth-limit'});
> +    }
> +}
> +
> +my sub backup_directory_tar {
> +    my ($self, $vmid, $directory, $exclude_patterns, $sources, $bandwidth_limit) = @_;
> +
> +    # essentially copied from PVE/VZDump/LXC.pm' archive()
> +
> +    # copied from PVE::Storage::Plugin::COMMON_TAR_FLAGS
> +    my @tar_flags = qw(
> +	--one-file-system
> +	-p --sparse --numeric-owner --acls
> +	--xattrs --xattrs-include=user.* --xattrs-include=security.capability
> +	--warning=no-file-ignored --warning=no-xattr-write
> +    );
> +
> +    my $tar = ['tar', 'cpf', '-', '--totals', @tar_flags];
> +
> +    push @$tar, "--directory=$directory";
> +
> +    my @exclude_no_anchored = ();
> +    my @exclude_anchored = ();
> +    for my $pattern ($exclude_patterns->@*) {
> +	if ($pattern !~ m|^/|) {
> +	    push @exclude_no_anchored, $pattern;
> +	} else {
> +	    push @exclude_anchored, $pattern;
> +	}
> +    }
> +
> +    push @$tar, '--no-anchored';
> +    push @$tar, '--exclude=lost+found';
> +    push @$tar, map { "--exclude=$_" } @exclude_no_anchored;
> +
> +    push @$tar, '--anchored';
> +    push @$tar, map { "--exclude=.$_" } @exclude_anchored;
> +
> +    push @$tar, $sources->@*;
> +
> +    my $cmd = [ $tar ];
> +
> +    push @$cmd, [ 'cstream', '-t', $bandwidth_limit * 1024 ] if $bandwidth_limit;
> +
> +    my $target = "$self->{$vmid}->{'backup-dir'}/archive.tar";
> +    push @{$cmd->[-1]}, \(">" . PVE::Tools::shellquote($target));
> +
> +    my $logfunc = sub {
> +	my $line = shift;
> +	log_info($self, "tar: $line");
> +    };
> +
> +    PVE::Tools::run_command($cmd, logfunc => $logfunc);
> +
> +    return;
> +};
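
To illustrate the anchored/non-anchored split: for exclude patterns
'/var/log/journal' and 'tmp?' (patterns made up), the resulting flags would
be

    --no-anchored --exclude=lost+found --exclude=tmp?
    --anchored --exclude=./var/log/journal

since anchored patterns are matched relative to the archive root (hence the
prepended '.') while non-anchored ones match anywhere in the path.
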
> +
> +# NOTE This only serves as an example to illustrate the 'directory' restore mechanism. It is not
> +# fleshed out properly, e.g. I didn't check if exclusion is compatible with
> +# proxmox-backup-client/rsync or xattrs/ACL/etc. work as expected!
> +my sub backup_directory_squashfs {
> +    my ($self, $vmid, $directory, $exclude_patterns, $bandwidth_limit) = @_;
> +
> +    my $target = "$self->{$vmid}->{'backup-dir'}/archive.sqfs";
> +
> +    my $mksquashfs = ['mksquashfs', $directory, $target, '-quiet', '-no-progress'];
> +
> +    push $mksquashfs->@*, '-wildcards';
> +
> +    for my $pattern ($exclude_patterns->@*) {
> +	if ($pattern !~ m|^/|) { # non-anchored
> +	    push $mksquashfs->@*, '-e', "... $pattern";
> +	} else { # anchored
> +	    push $mksquashfs->@*, '-e', substr($pattern, 1); # need to strip leading slash
> +	}
> +    }
> +
> +    my $cmd = [ $mksquashfs ];
> +
> +    push @$cmd, [ 'cstream', '-t', $bandwidth_limit * 1024 ] if $bandwidth_limit;
> +
> +    my $logfunc = sub {
> +	my $line = shift;
> +	log_info($self, "mksquashfs: $line");
> +    };
> +
> +    PVE::Tools::run_command($cmd, logfunc => $logfunc);
> +
> +    return;
> +};
> +
> +sub backup_container {
> +    my ($self, $vmid, $guest_config, $exclude_patterns, $info) = @_;
> +
> +    my $target = "$self->{$vmid}->{'backup-dir'}/guest.conf";
> +    file_set_contents($target, $guest_config);
> +
> +    if (my $firewall_config = $info->{'firewall-config'}) {
> +	$target = "$self->{$vmid}->{'backup-dir'}/firewall.conf";
> +	file_set_contents($target, $firewall_config);
> +    }
> +
> +    my $backup_mode = $self->{'storage-plugin'}->get_lxc_backup_mode($self->{scfg});
> +    if ($backup_mode eq 'tar') {
> +	backup_directory_tar(
> +	    $self,
> +	    $vmid,
> +	    $info->{directory},
> +	    $exclude_patterns,
> +	    $info->{sources},
> +	    $info->{'bandwidth-limit'},
> +	);
> +    } elsif ($backup_mode eq 'squashfs') {
> +	backup_directory_squashfs(
> +	    $self,
> +	    $vmid,
> +	    $info->{directory},
> +	    $exclude_patterns,
> +	    $info->{'bandwidth-limit'},
> +	);
> +    } else {
> +	die "got unexpected backup mode '$backup_mode' from storage plugin\n";
> +    }
> +}
> +
> +# Restore API
> +
> +sub restore_get_mechanism {
> +    my ($self, $volname) = @_;
> +
> +    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> +    my ($vmtype) = $relative_backup_dir =~ m!^\d+/([a-z]+)-!;
> +
> +    return ('qemu-img', $vmtype) if $vmtype eq 'qemu';
> +
> +    if ($vmtype eq 'lxc') {
> +	my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> +
> +	if (-e "$self->{scfg}->{path}/${relative_backup_dir}/archive.tar") {
> +	    $self->{'restore-mechanisms'}->{$volname} = 'tar';
> +	    return ('tar', $vmtype);
> +	}
> +
> +	if (-e "$self->{scfg}->{path}/${relative_backup_dir}/archive.sqfs") {
> +	    $self->{'restore-mechanisms'}->{$volname} = 'directory';
> +	    return ('directory', $vmtype)
> +	}
> +
> +	die "unable to find archive '$volname'\n";
> +    }
> +
> +    die "cannot restore unexpected guest type '$vmtype'\n";
> +}
> +
> +sub archive_get_guest_config {
> +    my ($self, $volname) = @_;
> +
> +    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> +    my $filename = "$self->{scfg}->{path}/${relative_backup_dir}/guest.conf";
> +
> +    return file_get_contents($filename);
> +}
> +
> +sub archive_get_firewall_config {
> +    my ($self, $volname) = @_;
> +
> +    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> +    my $filename = "$self->{scfg}->{path}/${relative_backup_dir}/firewall.conf";
> +
> +    return if !-e $filename;
> +
> +    return file_get_contents($filename);
> +}
> +
> +sub restore_vm_init {
> +    my ($self, $volname) = @_;
> +
> +    my $res = {};
> +
> +    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> +    my $backup_dir = "$self->{scfg}->{path}/${relative_backup_dir}";
> +
> +    my @backup_files = glob("$backup_dir/*");
> +    for my $backup_file (@backup_files) {
> +	next if $backup_file !~ m!^(.*/(.*)\.qcow2)$!;
> +	$backup_file = $1; # untaint
> +	$res->{$2}->{size} = PVE::Storage::Plugin::file_size_info($backup_file, undef, 'qcow2');
> +    }
> +
> +    return $res;
> +}
> +
> +sub restore_vm_cleanup {
> +    my ($self, $volname) = @_;
> +
> +    return; # nothing to do
> +}
> +
> +sub restore_vm_volume_init {
> +    my ($self, $volname, $device_name, $info) = @_;
> +
> +    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> +    my $image = "$self->{scfg}->{path}/${relative_backup_dir}/${device_name}.qcow2";
> +    # NOTE Backing files are not allowed by Proxmox VE when restoring. The reason is that an
> +    # untrusted qcow2 image can specify an arbitrary backing file and thus leak data from the host.
> +    # For the sake of the directory example plugin, an NBD export is created, but this side-steps
> +    # the check and would allow the attack again. An actual implementation should check that the
> +    # backing file (or rather, the whole backing chain) is safe first!
> +    my $nbd_node = bind_next_free_dev_nbd_node([$image]);
> +    $self->{"${volname}/${device_name}"}->{'nbd-node'} = $nbd_node;
> +    return {
> +	'qemu-img-path' => $nbd_node,
> +    };
> +}
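
A minimal sketch of such a backing-file check, assuming the qcow2 metadata
is read via 'qemu-img info --output=json' (helper name hypothetical, and it
only checks one level instead of the whole chain):

    my sub assert_safe_backing_file {
        my ($image, $allowed_dir) = @_;

        my $json = '';
        run_command(
            ['qemu-img', 'info', '--output=json', $image],
            outfunc => sub { $json .= "$_[0]\n"; },
        );
        my $info = from_json($json);
        my $backing = $info->{'full-backing-filename'} // $info->{'backing-filename'};
        return if !defined($backing); # no backing file at all
        die "unexpected backing file '$backing'\n"
            if $backing !~ m!^\Q$allowed_dir\E/!;
    }
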
> +
> +sub restore_vm_volume_cleanup {
> +    my ($self, $volname, $device_name, $info) = @_;
> +
> +    if (my $nbd_node = delete($self->{"${volname}/${device_name}"}->{'nbd-node'})) {
> +	PVE::Tools::run_command(['qemu-nbd', '-d', $nbd_node]);
> +    }
> +
> +    return;
> +}
> +
> +my sub restore_tar_init {
> +    my ($self, $volname) = @_;
> +
> +    my (undef, $relative_backup_dir) = $self->{'storage-plugin'}->parse_volname($volname);
> +    return { 'tar-path' => "$self->{scfg}->{path}/${relative_backup_dir}/archive.tar" };
> +}
> +
> +my sub restore_directory_init {
> +    my ($self, $volname) = @_;
> +
> +    my (undef, $relative_backup_dir, $vmid) = $self->{'storage-plugin'}->parse_volname($volname);
> +    my $archive = "$self->{scfg}->{path}/${relative_backup_dir}/archive.sqfs";
> +
> +    my $mount_point = "/run/backup-provider-example/${vmid}.mount";
> +    make_path($mount_point);
> +    die "unable to create directory $mount_point\n" if !-d $mount_point;
> +
> +    run_command(['mount', '-o', 'ro', $archive, $mount_point]);
> +
> +    return { 'archive-directory' => $mount_point };
> +}
> +
> +my sub restore_directory_cleanup {
> +    my ($self, $volname) = @_;
> +
> +    my (undef, undef, $vmid) = $self->{'storage-plugin'}->parse_volname($volname);
> +    my $mount_point = "/run/backup-provider-example/${vmid}.mount";
> +
> +    run_command(['umount', $mount_point]);
> +
> +    return;
> +}
> +
> +sub restore_container_init {
> +    my ($self, $volname, $info) = @_;
> +
> +    if ($self->{'restore-mechanisms'}->{$volname} eq 'tar') {
> +	return restore_tar_init($self, $volname);
> +    } elsif ($self->{'restore-mechanisms'}->{$volname} eq 'directory') {
> +	return restore_directory_init($self, $volname);
> +    } else {
> +	die "no restore mechanism set for '$volname'\n";
> +    }
> +}
> +
> +sub restore_container_cleanup {
> +    my ($self, $volname, $info) = @_;
> +
> +    if ($self->{'restore-mechanisms'}->{$volname} eq 'tar') {
> +	return; # nothing to do
> +    } elsif ($self->{'restore-mechanisms'}->{$volname} eq 'directory') {
> +	return restore_directory_cleanup($self, $volname);
> +    } else {
> +	die "no restore mechanism set for '$volname'\n";
> +    }
> +}
> +
> +1;
> diff --git a/src/PVE/BackupProvider/Plugin/Makefile b/src/PVE/BackupProvider/Plugin/Makefile
> index bbd7431..bedc26e 100644
> --- a/src/PVE/BackupProvider/Plugin/Makefile
> +++ b/src/PVE/BackupProvider/Plugin/Makefile
> @@ -1,4 +1,4 @@
> -SOURCES = Base.pm
> +SOURCES = Base.pm DirectoryExample.pm
>  
>  .PHONY: install
>  install:
> diff --git a/src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm b/src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm
> new file mode 100644
> index 0000000..d04d9d1
> --- /dev/null
> +++ b/src/PVE/Storage/Custom/BackupProviderDirExamplePlugin.pm
> @@ -0,0 +1,308 @@
> +package PVE::Storage::Custom::BackupProviderDirExamplePlugin;
> +
> +use strict;
> +use warnings;
> +
> +use File::Basename qw(basename);
> +
> +use PVE::BackupProvider::Plugin::DirectoryExample;
> +use PVE::Tools;
> +
> +use base qw(PVE::Storage::Plugin);
> +
> +# Helpers
> +
> +sub get_vm_backup_mechanism {
> +    my ($class, $scfg) = @_;
> +
> +    return $scfg->{'vm-backup-mechanism'} // properties()->{'vm-backup-mechanism'}->{'default'};
> +}
> +
> +sub get_vm_backup_mode {
> +    my ($class, $scfg) = @_;
> +
> +    return $scfg->{'vm-backup-mode'} // properties()->{'vm-backup-mode'}->{'default'};
> +}
> +
> +sub get_lxc_backup_mode {
> +    my ($class, $scfg) = @_;
> +
> +    return $scfg->{'lxc-backup-mode'} // properties()->{'lxc-backup-mode'}->{'default'};
> +}
> +
> +# Configuration
> +
> +sub api {
> +    return 11;
> +}
> +
> +sub type {
> +    return 'backup-provider-dir-example';
> +}
> +
> +sub plugindata {
> +    return {
> +	content => [ { backup => 1, none => 1 }, { backup => 1 } ],
> +	features => { 'backup-provider' => 1 },
> +	'sensitive-properties' => {},
> +    };
> +}
> +
> +sub properties {
> +    return {
> +	'lxc-backup-mode' => {
> +	    description => "How to create LXC backups. tar - create a tar archive."
> +		." squashfs - create a squashfs image. Requires squashfs-tools to be installed.",
> +	    type => 'string',
> +	    enum => [qw(tar squashfs)],
> +	    default => 'tar',
> +	},
> +	'vm-backup-mechanism' => {
> +	    description => "Which mechanism to use for creating VM backups. nbd - access data via"
> +		." NBD export. file-handle - access data via file handle.",
> +	    type => 'string',
> +	    enum => [qw(nbd file-handle)],
> +	    default => 'file-handle',
> +	},
> +	'vm-backup-mode' => {
> +	    description => "How to create VM backups. full - always create full backups."
> +		." incremental - create incremental backups when possible, fallback to full when"
> +		." necessary, e.g. VM disk's bitmap is invalid.",
> +	    type => 'string',
> +	    enum => [qw(full incremental)],
> +	    default => 'full',
> +	},
> +    };
> +}
> +
> +sub options {
> +    return {
> +	path => { fixed => 1 },
> +	'lxc-backup-mode' => { optional => 1 },
> +	'vm-backup-mechanism' => { optional => 1 },
> +	'vm-backup-mode' => { optional => 1 },
> +	disable => { optional => 1 },
> +	nodes => { optional => 1 },
> +	'prune-backups' => { optional => 1 },
> +	'max-protected-backups' => { optional => 1 },
> +    };
> +}
> +
> +# Storage implementation
> +
> +# NOTE a proper backup storage should implement this
> +sub prune_backups {
> +    my ($class, $scfg, $storeid, $keep, $vmid, $type, $dryrun, $logfunc) = @_;
> +
> +    die "not implemented";
> +}
> +
> +sub parse_volname {
> +    my ($class, $volname) = @_;
> +
> +    if ($volname =~ m!^backup/((\d+)/[a-z]+-\d+)$!) {
> +	my ($filename, $vmid) = ($1, $2);
> +	return ('backup', $filename, $vmid);
> +    }
> +
> +    die "unable to parse volume name '$volname'\n";
> +}
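
For example, a hypothetical volume name 'backup/123/qemu-1743680000' parses
into type 'backup', filename '123/qemu-1743680000' and vmid '123'; anything
not matching this scheme is rejected.
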
> +
> +sub path {
> +    my ($class, $scfg, $volname, $storeid, $snapname) = @_;
> +
> +    die "volume snapshot is not possible on backup-provider-dir-example volume" if $snapname;
> +
> +    my ($type, $filename, $vmid) = $class->parse_volname($volname);
> +
> +    return ("$scfg->{path}/${filename}", $vmid, $type);
> +}
> +
> +sub create_base {
> +    my ($class, $storeid, $scfg, $volname) = @_;
> +
> +    die "cannot create base image in backup-provider-dir-example storage\n";
> +}
> +
> +sub clone_image {
> +    my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
> +
> +    die "can't clone images in backup-provider-dir-example storage\n";
> +}
> +
> +sub alloc_image {
> +    my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
> +
> +    die "can't allocate space in backup-provider-dir-example storage\n";
> +}
> +
> +# NOTE a proper backup storage should implement this
> +sub free_image {
> +    my ($class, $storeid, $scfg, $volname, $isBase) = @_;
> +
> +    # if it's a backing file, it would need to be merged into the upper image first.
> +
> +    die "not implemented";
> +}
> +
> +sub list_images {
> +    my ($class, $storeid, $scfg, $vmid, $vollist, $cache) = @_;
> +
> +    my $res = [];
> +
> +    return $res;
> +}
> +
> +sub list_volumes {
> +    my ($class, $storeid, $scfg, $vmid, $content_types) = @_;
> +
> +    my $path = $scfg->{path};
> +
> +    my $res = [];
> +    for my $type ($content_types->@*) {
> +	next if $type ne 'backup';
> +
> +	my @guest_dirs = glob("$path/*");
> +	for my $guest_dir (@guest_dirs) {
> +	    next if !-d $guest_dir || $guest_dir !~ m!/(\d+)$!;
> +
> +	    my $backup_vmid = basename($guest_dir);
> +
> +	    next if defined($vmid) && $backup_vmid != $vmid;
> +
> +	    my @backup_dirs = glob("$guest_dir/*");
> +	    for my $backup_dir (@backup_dirs) {
> +		next if !-d $backup_dir || $backup_dir !~ m!/(lxc|qemu)-(\d+)$!;
> +		my ($subtype, $backup_id) = ($1, $2);
> +
> +		my $size = 0;
> +		my @backup_files = glob("$backup_dir/*");
> +		$size += -s $_ for @backup_files;
> +
> +		push $res->@*, {
> +		    volid => "$storeid:backup/${backup_vmid}/${subtype}-${backup_id}",
> +		    vmid => $backup_vmid,
> +		    format => "directory",
> +		    ctime => $backup_id,
> +		    size => $size,
> +		    subtype => $subtype,
> +		    content => $type,
> +		    # TODO parent for incremental
> +		};
> +	    }
> +	}
> +    }
> +
> +    return $res;
> +}
> +
> +sub activate_storage {
> +    my ($class, $storeid, $scfg, $cache) = @_;
> +
> +    my $path = $scfg->{path};
> +
> +    my $timeout = 2;
> +    if (!PVE::Tools::run_fork_with_timeout($timeout, sub {-d $path})) {
> +	die "unable to activate storage '$storeid' - directory '$path' does not exist or is"
> +	    ." unreachable\n";
> +    }
> +
> +    return 1;
> +}
> +
> +sub deactivate_storage {
> +    my ($class, $storeid, $scfg, $cache) = @_;
> +
> +    return 1;
> +}
> +
> +sub activate_volume {
> +    my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
> +
> +    die "volume snapshot is not possible on backup-provider-dir-example volume" if $snapname;
> +
> +    return 1;
> +}
> +
> +sub deactivate_volume {
> +    my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;
> +
> +    die "volume snapshot is not possible on backup-provider-dir-example volume" if $snapname;
> +
> +    return 1;
> +}
> +
> +sub get_volume_attribute {
> +    my ($class, $scfg, $storeid, $volname, $attribute) = @_;
> +
> +    return;
> +}
> +
> +# NOTE a proper backup storage should implement this to support backup notes and
> +# setting protected status.
> +sub update_volume_attribute {
> +    my ($class, $scfg, $storeid, $volname, $attribute, $value) = @_;
> +
> +    die "attribute '$attribute' is not supported on backup-provider-dir-example volume";
> +}
> +
> +sub volume_size_info {
> +    my ($class, $scfg, $storeid, $volname, $timeout) = @_;
> +
> +    my (undef, $relative_backup_dir) = $class->parse_volname($volname);
> +    my ($ctime) = $relative_backup_dir =~ m/-(\d+)$/;
> +    my $backup_dir = "$scfg->{path}/${relative_backup_dir}";
> +
> +    my $size = 0;
> +    my @backup_files = glob("$backup_dir/*");
> +    for my $backup_file (@backup_files) {
> +	if ($backup_file =~ m!\.qcow2$!) {
> +	    $size += $class->file_size_info($backup_file, undef, 'qcow2');

Apparently I forgot to commit the fixup for this, found by Friedrich:

    - $size += $class->file_size_info($backup_file, undef, 'qcow2');
    + $size += PVE::Storage::Plugin::file_size_info($backup_file, undef, 'qcow2');

> +	} else {
> +	    $size += -s $backup_file;
> +	}
> +    }
> +
> +    my $parent; # TODO for incremental
> +
> +    return wantarray ? ($size, 'directory', $size, $parent, $ctime) : $size;
> +}
> +
> +sub volume_resize {
> +    my ($class, $scfg, $storeid, $volname, $size, $running) = @_;
> +
> +    die "volume resize is not possible on backup-provider-dir-example volume";
> +}
> +
> +sub volume_snapshot {
> +    my ($class, $scfg, $storeid, $volname, $snap) = @_;
> +
> +    die "volume snapshot is not possible on backup-provider-dir-example volumes\n";
> +}
> +
> +sub volume_snapshot_rollback {
> +    my ($class, $scfg, $storeid, $volname, $snap) = @_;
> +
> +    die "volume snapshot rollback is not possible on backup-provider-dir-example volumes\n";
> +}
> +
> +sub volume_snapshot_delete {
> +    my ($class, $scfg, $storeid, $volname, $snap) = @_;
> +
> +    die "volume snapshot delete is not possible on backup-provider-dir-example volumes\n";
> +}
> +
> +sub volume_has_feature {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;
> +
> +    return 0;
> +}
> +
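
Returning 0 unconditionally switches off every feature (clone, template,
rename, ...) for these volumes, which seems right for a pure backup
target. A plugin that does support something would typically use the
lookup-table pattern known from PVE::Storage::Plugin; rough sketch with a
made-up feature set:

    my $features = {
        copy => { current => 1 }, # hypothetical: allow copying backups
    };

    sub volume_has_feature {
        my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running) = @_;

        my $key = $snapname ? 'snap' : 'current';
        return 1 if $features->{$feature}->{$key};
        return undef;
    }
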
> +sub new_backup_provider {
> +    my ($class, $scfg, $storeid, $bandwidth_limit, $log_function) = @_;
> +
> +    return PVE::BackupProvider::Plugin::DirectoryExample->new(
> +	$class, $scfg, $storeid, $bandwidth_limit, $log_function);
> +}
> +
> +1;
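
This method is what marks the storage as an external backup provider to
the rest of the stack; the storage layer constructs the provider object
through it, conceptually something like (hypothetical call site,
$plugin_class and $log_func are stand-ins):

    my $provider = $plugin_class->new_backup_provider(
        $scfg, $storeid, $bandwidth_limit, $log_func);

and then drives backup and restore through the backup provider plugin API
on the returned object.
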
> diff --git a/src/PVE/Storage/Custom/Makefile b/src/PVE/Storage/Custom/Makefile
> new file mode 100644
> index 0000000..c1e3eca
> --- /dev/null
> +++ b/src/PVE/Storage/Custom/Makefile
> @@ -0,0 +1,5 @@
> +SOURCES = BackupProviderDirExamplePlugin.pm
> +
> +.PHONY: install
> +install:
> +	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Storage/Custom/$$i; done
> diff --git a/src/PVE/Storage/Makefile b/src/PVE/Storage/Makefile
> index ce3fd68..fc0431f 100644
> --- a/src/PVE/Storage/Makefile
> +++ b/src/PVE/Storage/Makefile
> @@ -21,4 +21,5 @@ SOURCES= \
>  install:
>  	make -C Common install
>  	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Storage/$$i; done
> +	make -C Custom install
>  	make -C LunCmd install
> -- 
> 2.39.5



Thread overview: 41+ messages
2025-04-03 12:30 [pve-devel] [PATCH v8 storage 0/9] backup provider API Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 01/10] PVE backup: clean up directly in setup_snapshot_access() when it fails Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 02/10] PVE backup: factor out helper to clear backup state's bitmap list Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 03/10] PVE backup: factor out helper to initialize backup state stat struct Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 04/10] PVE backup: add target ID in backup state Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 05/10] PVE backup: get device info: allow caller to specify filter for which devices use fleecing Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 06/10] PVE backup: implement backup access setup and teardown API for external providers Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 07/10] PVE backup: factor out get_single_device_info() helper Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 08/10] PVE backup: implement bitmap support for external backup access Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 09/10] PVE backup: backup-access api: indicate situation where a bitmap was recreated Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu 10/10] PVE backup: backup-access-api: explicit bitmap-mode parameter Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 1/8] add storage_has_feature() helper function Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 2/8] common: add deallocate " Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 3/8] plugin: introduce new_backup_provider() method Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 4/8] config api/plugins: let plugins define sensitive properties themselves Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 5/8] plugin api: bump api version and age Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 storage 6/8] extract backup config: delegate to backup provider for storages that support it Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [POC v8 storage 7/8] add backup provider example Wolfgang Bumiller
2025-04-04  6:58   ` Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [POC v8 storage 8/8] Borg example plugin Wolfgang Bumiller
2025-04-03 12:30 ` [pve-devel] [PATCH v8 qemu-server 01/11] backup: keep track of block-node size for fleecing Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 02/11] backup: fleecing: use exact size when allocating non-raw fleecing images Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 03/11] backup: allow adding fleecing images also for EFI and TPM Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 04/11] backup: implement backup for external providers Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 05/11] test: qemu img convert: add test cases for snapshots Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 06/11] image convert: collect options in hash argument Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 07/11] image convert: allow caller to specify the format of the source path Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 08/11] backup: implement restore for external providers Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 09/11] backup: future-proof checks for QEMU feature support Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 10/11] backup: support 'missing-recreated' bitmap action Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 qemu-server 11/11] backup: bitmap action to human: lie about TPM state Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 1/7] add LXC::Namespaces module Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 2/7] backup: implement backup for external providers Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 3/7] backup: implement restore " Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 4/7] external restore: don't use 'one-file-system' tar flag when restoring from a directory Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 5/7] create: factor out compression option helper Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 6/7] restore tar archive: check potentially untrusted archive Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 container 7/7] api: add early check against restoring privileged container from external source Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 manager 1/2] ui: backup: also check for backup subtype to classify archive Wolfgang Bumiller
2025-04-03 12:31 ` [pve-devel] [PATCH v8 manager 2/2] backup: implement backup for external providers Wolfgang Bumiller
2025-04-03 16:10 ` [pve-devel] partially-applied-series: [PATCH v8 storage 0/9] backup provider API Thomas Lamprecht
