public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore
@ 2022-04-21 11:26 Fabian Ebner
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 1/2] vma: restore: call blk_unref for all opened block devices Fabian Ebner
                   ` (12 more replies)
  0 siblings, 13 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Allows preserving disks and overriding VM settings upon restore.
For containers, overriding settings was already possible, but
supporting partial restore is more involved because of the nested
mount structure, etc.

Exposes the functionality in the UI, allowing users to set the
(host)name, cores (+ sockets), memory, and, for VMs, whether disks
should be preserved.

Also includes related improvements, like correctly cleaning up
snapshots on kept drives and the cloud-init disk, and, in the UI,
detecting if a storage needed by the restore is not available.


Changes from v1:
    * Use skip=<drivename> to communicate skipping restore to VMA
      rather than treating the path /dev/null in a special way.
    * Use an explicit preserve-drives parameter for the VM create API
      to avoid automagic behavior and conflicts with the existing
      syntax for LXC.
    * Add UI patches.


Necessary dependency bumps are pve-manager -> widget-toolkit
and pve-manager -> qemu-server -> qemu.


Still missing: documentation for the new restore functionality for
VMs and for the existing restore functionality for containers.


qemu:

Fabian Ebner (2):
  vma: restore: call blk_unref for all opened block devices
  vma: allow partial restore

 vma-reader.c |  67 ++++++++++++---------
 vma.c        | 163 ++++++++++++++++++++++++++++-----------------------
 vma.h        |   2 +-
 3 files changed, 131 insertions(+), 101 deletions(-)


qemu-server:

Fabian Ebner (7):
  restore: cleanup oldconf: also clean up snapshots from kept volumes
  restore destroy volumes: remove check for absolute path
  restore deactivate volumes: never die
  restore: also deactivate/destroy cloud-init disk upon error
  api: create: refactor parameter check logic
  api: create: allow overriding non-disk options during restore
  restore: allow preserving drives during restore

 PVE/API2/Qemu.pm  |  63 +++++++++++++++++++-------
 PVE/QemuServer.pm | 112 +++++++++++++++++++++++++++++++++-------------
 2 files changed, 126 insertions(+), 49 deletions(-)


widget-toolkit:

Fabian Ebner (1):
  css: add proxmox-good-row class

 src/css/ext6-pmx.css | 4 ++++
 1 file changed, 4 insertions(+)


manager:

Fabian Ebner (3):
  ui: restore: disallow empty storage selection if it wouldn't work
  ui: restore: allow override of some settings
  ui: restore: allow preserving disks

 www/manager6/Makefile                |   1 +
 www/manager6/grid/RestoreDiskGrid.js | 125 +++++++++++++++++++++++
 www/manager6/window/Restore.js       | 146 ++++++++++++++++++++++++++-
 3 files changed, 269 insertions(+), 3 deletions(-)
 create mode 100644 www/manager6/grid/RestoreDiskGrid.js

-- 
2.30.2





* [pve-devel] [PATCH v2 qemu 1/2] vma: restore: call blk_unref for all opened block devices
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-25  6:37   ` Wolfgang Bumiller
  2022-04-25 16:10   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 2/2] vma: allow partial restore Fabian Ebner
                   ` (11 subsequent siblings)
  12 siblings, 2 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

rather than just the last one. This is the only caller registering
block devices with the reader, so other callers are already fine.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

New in v2.

Best squashed into 0026-PVE-Backup-add-vma-backup-format-code.patch

 vma-reader.c | 3 +++
 vma.c        | 6 ++----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/vma-reader.c b/vma-reader.c
index 2b1d1cdab3..4f4ee2b47b 100644
--- a/vma-reader.c
+++ b/vma-reader.c
@@ -192,6 +192,9 @@ void vma_reader_destroy(VmaReader *vmar)
         if (vmar->rstate[i].bitmap) {
             g_free(vmar->rstate[i].bitmap);
         }
+        if (vmar->rstate[i].target) {
+            blk_unref(vmar->rstate[i].target);
+        }
     }
 
     if (vmar->md5csum) {
diff --git a/vma.c b/vma.c
index df542b7732..89440733b1 100644
--- a/vma.c
+++ b/vma.c
@@ -309,8 +309,6 @@ static int extract_content(int argc, char **argv)
     int vmstate_fd = -1;
     guint8 vmstate_stream = 0;
 
-    BlockBackend *blk = NULL;
-
     for (i = 1; i < 255; i++) {
         VmaDeviceInfo *di = vma_reader_get_device_info(vmar, i);
         if (di && (strcmp(di->devname, "vmstate") == 0)) {
@@ -331,6 +329,8 @@ static int extract_content(int argc, char **argv)
             int flags = BDRV_O_RDWR;
             bool write_zero = true;
 
+            BlockBackend *blk = NULL;
+
             if (readmap) {
                 RestoreMap *map;
                 map = (RestoreMap *)g_hash_table_lookup(devmap, di->devname);
@@ -443,8 +443,6 @@ static int extract_content(int argc, char **argv)
 
     vma_reader_destroy(vmar);
 
-    blk_unref(blk);
-
     bdrv_close_all();
 
     return ret;
-- 
2.30.2





* [pve-devel] [PATCH v2 qemu 2/2] vma: allow partial restore
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 1/2] vma: restore: call blk_unref for all opened block devices Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-25  6:40   ` Wolfgang Bumiller
  2022-04-25 16:10   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 1/7] restore: cleanup oldconf: also clean up snapshots from kept volumes Fabian Ebner
                   ` (10 subsequent siblings)
  12 siblings, 2 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Introduce a new map line for skipping a certain drive, of the form
skip=drive-scsi0

Since in PVE, most archives are compressed and piped to vma for
restore, it's not easily possible to skip reads.

For the reader, a new skip flag is added to VmaRestoreState, and the
target is allowed to be NULL if skip is specified when registering. If
the skip flag is set, no writes are made and the check for duplicate
clusters is skipped as well. Therefore, the flag is not set for
verify.
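
For illustration (not part of the patch itself), a restore map passed
via -r could then look roughly like this, where the drive names and
the path are made up, and the regular-line syntax is the one parsed
by extract_content() below:

    format=raw:1:drive-scsi0=/dev/zvol/rpool/data/vm-100-disk-0
    skip=drive-scsi1
    done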

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Better viewed with -w

Changes from v1:
    * Use a dedicated map line skip=<drivename> rather than a special
      path "/dev/null" to communicate that a drive should be skipped.

 vma-reader.c |  64 ++++++++++++---------
 vma.c        | 157 +++++++++++++++++++++++++++++----------------------
 vma.h        |   2 +-
 3 files changed, 126 insertions(+), 97 deletions(-)

diff --git a/vma-reader.c b/vma-reader.c
index 4f4ee2b47b..844d95a5ba 100644
--- a/vma-reader.c
+++ b/vma-reader.c
@@ -29,6 +29,7 @@ typedef struct VmaRestoreState {
     bool write_zeroes;
     unsigned long *bitmap;
     int bitmap_size;
+    bool skip;
 }  VmaRestoreState;
 
 struct VmaReader {
@@ -426,13 +427,14 @@ VmaDeviceInfo *vma_reader_get_device_info(VmaReader *vmar, guint8 dev_id)
 }
 
 static void allocate_rstate(VmaReader *vmar,  guint8 dev_id,
-                            BlockBackend *target, bool write_zeroes)
+                            BlockBackend *target, bool write_zeroes, bool skip)
 {
     assert(vmar);
     assert(dev_id);
 
     vmar->rstate[dev_id].target = target;
     vmar->rstate[dev_id].write_zeroes = write_zeroes;
+    vmar->rstate[dev_id].skip = skip;
 
     int64_t size = vmar->devinfo[dev_id].size;
 
@@ -447,28 +449,30 @@ static void allocate_rstate(VmaReader *vmar,  guint8 dev_id,
 }
 
 int vma_reader_register_bs(VmaReader *vmar, guint8 dev_id, BlockBackend *target,
-                           bool write_zeroes, Error **errp)
+                           bool write_zeroes, bool skip, Error **errp)
 {
     assert(vmar);
-    assert(target != NULL);
+    assert(target != NULL || skip);
     assert(dev_id);
-    assert(vmar->rstate[dev_id].target == NULL);
-
-    int64_t size = blk_getlength(target);
-    int64_t size_diff = size - vmar->devinfo[dev_id].size;
-
-    /* storage types can have different size restrictions, so it
-     * is not always possible to create an image with exact size.
-     * So we tolerate a size difference up to 4MB.
-     */
-    if ((size_diff < 0) || (size_diff > 4*1024*1024)) {
-        error_setg(errp, "vma_reader_register_bs for stream %s failed - "
-                   "unexpected size %zd != %zd", vmar->devinfo[dev_id].devname,
-                   size, vmar->devinfo[dev_id].size);
-        return -1;
+    assert(vmar->rstate[dev_id].target == NULL && !vmar->rstate[dev_id].skip);
+
+    if (target != NULL) {
+        int64_t size = blk_getlength(target);
+        int64_t size_diff = size - vmar->devinfo[dev_id].size;
+
+        /* storage types can have different size restrictions, so it
+         * is not always possible to create an image with exact size.
+         * So we tolerate a size difference up to 4MB.
+         */
+        if ((size_diff < 0) || (size_diff > 4*1024*1024)) {
+            error_setg(errp, "vma_reader_register_bs for stream %s failed - "
+                       "unexpected size %zd != %zd", vmar->devinfo[dev_id].devname,
+                       size, vmar->devinfo[dev_id].size);
+            return -1;
+        }
     }
 
-    allocate_rstate(vmar, dev_id, target, write_zeroes);
+    allocate_rstate(vmar, dev_id, target, write_zeroes, skip);
 
     return 0;
 }
@@ -561,19 +565,23 @@ static int restore_extent(VmaReader *vmar, unsigned char *buf,
         VmaRestoreState *rstate = &vmar->rstate[dev_id];
         BlockBackend *target = NULL;
 
+        bool skip = rstate->skip;
+
         if (dev_id != vmar->vmstate_stream) {
             target = rstate->target;
-            if (!verify && !target) {
+            if (!verify && !target && !skip) {
                 error_setg(errp, "got wrong dev id %d", dev_id);
                 return -1;
             }
 
-            if (vma_reader_get_bitmap(rstate, cluster_num)) {
-                error_setg(errp, "found duplicated cluster %zd for stream %s",
-                          cluster_num, vmar->devinfo[dev_id].devname);
-                return -1;
+            if (!skip) {
+                if (vma_reader_get_bitmap(rstate, cluster_num)) {
+                    error_setg(errp, "found duplicated cluster %zd for stream %s",
+                              cluster_num, vmar->devinfo[dev_id].devname);
+                    return -1;
+                }
+                vma_reader_set_bitmap(rstate, cluster_num, 1);
             }
-            vma_reader_set_bitmap(rstate, cluster_num, 1);
 
             max_sector = vmar->devinfo[dev_id].size/BDRV_SECTOR_SIZE;
         } else {
@@ -619,7 +627,7 @@ static int restore_extent(VmaReader *vmar, unsigned char *buf,
                 return -1;
             }
 
-            if (!verify) {
+            if (!verify && !skip) {
                 int nb_sectors = end_sector - sector_num;
                 if (restore_write_data(vmar, dev_id, target, vmstate_fd,
                                        buf + start, sector_num, nb_sectors,
@@ -655,7 +663,7 @@ static int restore_extent(VmaReader *vmar, unsigned char *buf,
                         return -1;
                     }
 
-                    if (!verify) {
+                    if (!verify && !skip) {
                         int nb_sectors = end_sector - sector_num;
                         if (restore_write_data(vmar, dev_id, target, vmstate_fd,
                                                buf + start, sector_num,
@@ -680,7 +688,7 @@ static int restore_extent(VmaReader *vmar, unsigned char *buf,
                             vmar->partial_zero_cluster_data += zero_size;
                         }
 
-                        if (rstate->write_zeroes && !verify) {
+                        if (rstate->write_zeroes && !verify && !skip) {
                             if (restore_write_data(vmar, dev_id, target, vmstate_fd,
                                                    zero_vma_block, sector_num,
                                                    nb_sectors, errp) < 0) {
@@ -851,7 +859,7 @@ int vma_reader_verify(VmaReader *vmar, bool verbose, Error **errp)
 
     for (dev_id = 1; dev_id < 255; dev_id++) {
         if (vma_reader_get_device_info(vmar, dev_id)) {
-            allocate_rstate(vmar, dev_id, NULL, false);
+            allocate_rstate(vmar, dev_id, NULL, false, false);
         }
     }
 
diff --git a/vma.c b/vma.c
index 89440733b1..21e765a469 100644
--- a/vma.c
+++ b/vma.c
@@ -138,6 +138,7 @@ typedef struct RestoreMap {
     char *throttling_group;
     char *cache;
     bool write_zero;
+    bool skip;
 } RestoreMap;
 
 static bool try_parse_option(char **line, const char *optname, char **out, const char *inbuf) {
@@ -245,47 +246,61 @@ static int extract_content(int argc, char **argv)
             char *bps = NULL;
             char *group = NULL;
             char *cache = NULL;
+            char *devname = NULL;
+            bool skip = false;
+            uint64_t bps_value = 0;
+            const char *path = NULL;
+            bool write_zero = true;
+
             if (!line || line[0] == '\0' || !strcmp(line, "done\n")) {
                 break;
             }
             int len = strlen(line);
             if (line[len - 1] == '\n') {
                 line[len - 1] = '\0';
-                if (len == 1) {
+                len = len - 1;
+                if (len == 0) {
                     break;
                 }
             }
 
-            while (1) {
-                if (!try_parse_option(&line, "format", &format, inbuf) &&
-                    !try_parse_option(&line, "throttling.bps", &bps, inbuf) &&
-                    !try_parse_option(&line, "throttling.group", &group, inbuf) &&
-                    !try_parse_option(&line, "cache", &cache, inbuf))
-                {
-                    break;
+            if (strncmp(line, "skip", 4) == 0) {
+                if (len < 6 || line[4] != '=') {
+                    g_error("read map failed - option 'skip' has no value ('%s')",
+                            inbuf);
+                } else {
+                    devname = line + 5;
+                    skip = true;
+                }
+            } else {
+                while (1) {
+                    if (!try_parse_option(&line, "format", &format, inbuf) &&
+                        !try_parse_option(&line, "throttling.bps", &bps, inbuf) &&
+                        !try_parse_option(&line, "throttling.group", &group, inbuf) &&
+                        !try_parse_option(&line, "cache", &cache, inbuf))
+                    {
+                        break;
+                    }
                 }
-            }
 
-            uint64_t bps_value = 0;
-            if (bps) {
-                bps_value = verify_u64(bps);
-                g_free(bps);
-            }
+                if (bps) {
+                    bps_value = verify_u64(bps);
+                    g_free(bps);
+                }
 
-            const char *path;
-            bool write_zero;
-            if (line[0] == '0' && line[1] == ':') {
-                path = line + 2;
-                write_zero = false;
-            } else if (line[0] == '1' && line[1] == ':') {
-                path = line + 2;
-                write_zero = true;
-            } else {
-                g_error("read map failed - parse error ('%s')", inbuf);
+                if (line[0] == '0' && line[1] == ':') {
+                    path = line + 2;
+                    write_zero = false;
+                } else if (line[0] == '1' && line[1] == ':') {
+                    path = line + 2;
+                    write_zero = true;
+                } else {
+                    g_error("read map failed - parse error ('%s')", inbuf);
+                }
+
+                path = extract_devname(path, &devname, -1);
             }
 
-            char *devname = NULL;
-            path = extract_devname(path, &devname, -1);
             if (!devname) {
                 g_error("read map failed - no dev name specified ('%s')",
                         inbuf);
@@ -299,6 +314,7 @@ static int extract_content(int argc, char **argv)
             map->throttling_group = group;
             map->cache = cache;
             map->write_zero = write_zero;
+            map->skip = skip;
 
             g_hash_table_insert(devmap, map->devname, map);
 
@@ -328,6 +344,7 @@ static int extract_content(int argc, char **argv)
             const char *cache = NULL;
             int flags = BDRV_O_RDWR;
             bool write_zero = true;
+            bool skip = false;
 
             BlockBackend *blk = NULL;
 
@@ -343,6 +360,7 @@ static int extract_content(int argc, char **argv)
                 throttling_group = map->throttling_group;
                 cache = map->cache;
                 write_zero = map->write_zero;
+                skip = map->skip;
             } else {
                 devfn = g_strdup_printf("%s/tmp-disk-%s.raw",
                                         dirname, di->devname);
@@ -361,57 +379,60 @@ static int extract_content(int argc, char **argv)
                 write_zero = false;
             }
 
-	    size_t devlen = strlen(devfn);
-	    QDict *options = NULL;
-            bool writethrough;
-            if (format) {
-                /* explicit format from commandline */
-                options = qdict_new();
-                qdict_put_str(options, "driver", format);
-            } else if ((devlen > 4 && strcmp(devfn+devlen-4, ".raw") == 0) ||
-	               strncmp(devfn, "/dev/", 5) == 0)
-	    {
-                /* This part is now deprecated for PVE as well (just as qemu
-                 * deprecated not specifying an explicit raw format, too.
-                 */
-		/* explicit raw format */
-		options = qdict_new();
-		qdict_put_str(options, "driver", "raw");
-	    }
-            if (cache && bdrv_parse_cache_mode(cache, &flags, &writethrough)) {
-                g_error("invalid cache option: %s\n", cache);
-            }
+            if (!skip) {
+                size_t devlen = strlen(devfn);
+                QDict *options = NULL;
+                bool writethrough;
+                if (format) {
+                    /* explicit format from commandline */
+                    options = qdict_new();
+                    qdict_put_str(options, "driver", format);
+                } else if ((devlen > 4 && strcmp(devfn+devlen-4, ".raw") == 0) ||
+                    strncmp(devfn, "/dev/", 5) == 0)
+                {
+                    /* This part is now deprecated for PVE as well (just as qemu
+                     * deprecated not specifying an explicit raw format, too.
+                     */
+                    /* explicit raw format */
+                    options = qdict_new();
+                    qdict_put_str(options, "driver", "raw");
+                }
 
-	    if (errp || !(blk = blk_new_open(devfn, NULL, options, flags, &errp))) {
-                g_error("can't open file %s - %s", devfn,
-                        error_get_pretty(errp));
-            }
+                if (cache && bdrv_parse_cache_mode(cache, &flags, &writethrough)) {
+                    g_error("invalid cache option: %s\n", cache);
+                }
 
-            if (cache) {
-                blk_set_enable_write_cache(blk, !writethrough);
-            }
+                if (errp || !(blk = blk_new_open(devfn, NULL, options, flags, &errp))) {
+                    g_error("can't open file %s - %s", devfn,
+                            error_get_pretty(errp));
+                }
 
-            if (throttling_group) {
-                blk_io_limits_enable(blk, throttling_group);
-            }
+                if (cache) {
+                    blk_set_enable_write_cache(blk, !writethrough);
+                }
 
-            if (throttling_bps) {
-                if (!throttling_group) {
-                    blk_io_limits_enable(blk, devfn);
+                if (throttling_group) {
+                    blk_io_limits_enable(blk, throttling_group);
                 }
 
-                ThrottleConfig cfg;
-                throttle_config_init(&cfg);
-                cfg.buckets[THROTTLE_BPS_WRITE].avg = throttling_bps;
-                Error *err = NULL;
-                if (!throttle_is_valid(&cfg, &err)) {
-                    error_report_err(err);
-                    g_error("failed to apply throttling");
+                if (throttling_bps) {
+                    if (!throttling_group) {
+                        blk_io_limits_enable(blk, devfn);
+                    }
+
+                    ThrottleConfig cfg;
+                    throttle_config_init(&cfg);
+                    cfg.buckets[THROTTLE_BPS_WRITE].avg = throttling_bps;
+                    Error *err = NULL;
+                    if (!throttle_is_valid(&cfg, &err)) {
+                        error_report_err(err);
+                        g_error("failed to apply throttling");
+                    }
+                    blk_set_io_limits(blk, &cfg);
                 }
-                blk_set_io_limits(blk, &cfg);
             }
 
-            if (vma_reader_register_bs(vmar, i, blk, write_zero, &errp) < 0) {
+            if (vma_reader_register_bs(vmar, i, blk, write_zero, skip, &errp) < 0) {
                 g_error("%s", error_get_pretty(errp));
             }
 
diff --git a/vma.h b/vma.h
index c895c97f6d..1b62859165 100644
--- a/vma.h
+++ b/vma.h
@@ -142,7 +142,7 @@ GList *vma_reader_get_config_data(VmaReader *vmar);
 VmaDeviceInfo *vma_reader_get_device_info(VmaReader *vmar, guint8 dev_id);
 int vma_reader_register_bs(VmaReader *vmar, guint8 dev_id,
                            BlockBackend *target, bool write_zeroes,
-                           Error **errp);
+                           bool skip, Error **errp);
 int vma_reader_restore(VmaReader *vmar, int vmstate_fd, bool verbose,
                        Error **errp);
 int vma_reader_verify(VmaReader *vmar, bool verbose, Error **errp);
-- 
2.30.2





* [pve-devel] [PATCH v2 qemu-server 1/7] restore: cleanup oldconf: also clean up snapshots from kept volumes
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 1/2] vma: restore: call blk_unref for all opened block devices Fabian Ebner
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 2/2] vma: allow partial restore Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-25 16:20   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 2/7] restore destroy volumes: remove check for absolute path Fabian Ebner
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

No changes from v1.

 PVE/QemuServer.pm | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 0be6be90..f221d93c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6228,6 +6228,8 @@ sub restore_file_archive {
 my $restore_cleanup_oldconf = sub {
     my ($storecfg, $vmid, $oldconf, $virtdev_hash) = @_;
 
+    my $kept_disks = {};
+
     PVE::QemuConfig->foreach_volume($oldconf, sub {
 	my ($ds, $drive) = @_;
 
@@ -6246,11 +6248,13 @@ my $restore_cleanup_oldconf = sub {
 	    if (my $err = $@) {
 		warn $err;
 	    }
+	} else {
+	    $kept_disks->{$volid} = 1;
 	}
     });
 
-    # delete vmstate files, after the restore we have no snapshots anymore
-    foreach my $snapname (keys %{$oldconf->{snapshots}}) {
+    # after the restore we have no snapshots anymore
+    for my $snapname (keys $oldconf->{snapshots}->%*) {
 	my $snap = $oldconf->{snapshots}->{$snapname};
 	if ($snap->{vmstate}) {
 	    eval { PVE::Storage::vdisk_free($storecfg, $snap->{vmstate}); };
@@ -6258,6 +6262,11 @@ my $restore_cleanup_oldconf = sub {
 		warn $err;
 	    }
 	}
+
+	for my $volid (keys $kept_disks->%*) {
+	    eval { PVE::Storage::volume_snapshot_delete($storecfg, $volid, $snapname); };
+	    warn $@ if $@;
+	}
     }
 };
 
-- 
2.30.2





* [pve-devel] [PATCH v2 qemu-server 2/7] restore destroy volumes: remove check for absolute path
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (2 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 1/7] restore: cleanup oldconf: also clean up snapshots from kept volumes Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-25 16:20   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 3/7] restore deactivate volumes: never die Fabian Ebner
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Only results from vdisk_alloc are ever assigned as volids here, and
those are never absolute paths.
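
Concretely, the cleanup thus only ever sees volids of the
storage-prefixed form, e.g. 'local-lvm:vm-100-disk-0' (made-up
example), never a plain path like '/dev/...', so the unlink branch
removed below is dead code.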

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

No changes from v1.

 PVE/QemuServer.pm | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f221d93c..be0694f9 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6470,11 +6470,7 @@ my $restore_destroy_volumes = sub {
 	my $volid = $devinfo->{$devname}->{volid};
 	next if !$volid;
 	eval {
-	    if ($volid =~ m|^/|) {
-		unlink $volid || die 'unlink failed\n';
-	    } else {
-		PVE::Storage::vdisk_free($storecfg, $volid);
-	    }
+	    PVE::Storage::vdisk_free($storecfg, $volid);
 	    print STDERR "temporary volume '$volid' successfully removed\n";
 	};
 	print STDERR "unable to cleanup '$volid' - $@" if $@;
-- 
2.30.2





* [pve-devel] [PATCH v2 qemu-server 3/7] restore deactivate volumes: never die
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (3 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 2/7] restore destroy volumes: remove check for absolute path Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-23  9:18   ` Thomas Lamprecht
  2022-04-25 16:21   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 4/7] restore: also deactivate/destroy cloud-init disk upon error Fabian Ebner
                   ` (7 subsequent siblings)
  12 siblings, 2 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Such an error shouldn't abort the whole operation.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

No changes from v1.

 PVE/QemuServer.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index be0694f9..9a16b617 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6460,7 +6460,8 @@ my $restore_deactivate_volumes = sub {
 	push @$vollist, $volid if $volid;
     }
 
-    PVE::Storage::deactivate_volumes($storecfg, $vollist);
+    eval { PVE::Storage::deactivate_volumes($storecfg, $vollist); };
+    print STDERR $@ if $@;
 };
 
 my $restore_destroy_volumes = sub {
-- 
2.30.2





* [pve-devel] [PATCH v2 qemu-server 4/7] restore: also deactivate/destroy cloud-init disk upon error
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (4 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 3/7] restore deactivate volumes: never die Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-25 16:21   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 5/7] api: create: refactor parameter check logic Fabian Ebner
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

by making the helpers doing the opposite re-use the same hash that's
used when allocating/activating the disks.

Also in preparation for allowing certain disks to be skipped upon
restore.
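
For reference, an entry of that shared hash looks roughly like this;
a sketch only, with field names inferred from the surrounding code
and made-up values:

    my $virtdev_hash = {
        scsi0 => {
            devname => 'drive-scsi0',             # drive name in the backup
            volid   => 'local-lvm:vm-100-disk-0', # volume used for restore
            storeid => 'local-lvm',
            format  => 'raw',
            is_cloudinit => 0,
        },
    };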

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

No changes from v1.

 PVE/QemuServer.pm | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9a16b617..741a5e89 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6452,12 +6452,11 @@ sub restore_update_config_line {
 }
 
 my $restore_deactivate_volumes = sub {
-    my ($storecfg, $devinfo) = @_;
+    my ($storecfg, $virtdev_hash) = @_;
 
     my $vollist = [];
-    foreach my $devname (keys %$devinfo) {
-	my $volid = $devinfo->{$devname}->{volid};
-	push @$vollist, $volid if $volid;
+    for my $dev (values $virtdev_hash->%*) {
+	push $vollist->@*, $dev->{volid} if $dev->{volid};
     }
 
     eval { PVE::Storage::deactivate_volumes($storecfg, $vollist); };
@@ -6465,11 +6464,10 @@ my $restore_deactivate_volumes = sub {
 };
 
 my $restore_destroy_volumes = sub {
-    my ($storecfg, $devinfo) = @_;
+    my ($storecfg, $virtdev_hash) = @_;
 
-    foreach my $devname (keys %$devinfo) {
-	my $volid = $devinfo->{$devname}->{volid};
-	next if !$volid;
+    for my $dev (values $virtdev_hash->%*) {
+	my $volid = $dev->{volid} or next;
 	eval {
 	    PVE::Storage::vdisk_free($storecfg, $volid);
 	    print STDERR "temporary volume '$volid' successfully removed\n";
@@ -6655,7 +6653,8 @@ sub restore_proxmox_backup_archive {
     my $new_conf_raw = '';
 
     my $rpcenv = PVE::RPCEnvironment::get();
-    my $devinfo = {};
+    my $devinfo = {}; # info about drives included in backup
+    my $virtdev_hash = {}; # info about allocated drives
 
     eval {
 	# enable interrupts
@@ -6710,7 +6709,7 @@ sub restore_proxmox_backup_archive {
 	my $fh = IO::File->new($cfgfn, "r") ||
 	    die "unable to read qemu-server.conf - $!\n";
 
-	my $virtdev_hash = $parse_backup_hints->($rpcenv, $user, $storecfg, $fh, $devinfo, $options);
+	$virtdev_hash = $parse_backup_hints->($rpcenv, $user, $storecfg, $fh, $devinfo, $options);
 
 	# fixme: rate limit?
 
@@ -6771,13 +6770,13 @@ sub restore_proxmox_backup_archive {
     my $err = $@;
 
     if ($err || !$options->{live}) {
-	$restore_deactivate_volumes->($storecfg, $devinfo);
+	$restore_deactivate_volumes->($storecfg, $virtdev_hash);
     }
 
     rmtree $tmpdir;
 
     if ($err) {
-	$restore_destroy_volumes->($storecfg, $devinfo);
+	$restore_destroy_volumes->($storecfg, $virtdev_hash);
 	die $err;
     }
 
@@ -6947,7 +6946,8 @@ sub restore_vma_archive {
     my $oldtimeout;
     my $timeout = 5;
 
-    my $devinfo = {};
+    my $devinfo = {}; # info about drives included in backup
+    my $virtdev_hash = {}; # info about allocated drives
 
     my $rpcenv = PVE::RPCEnvironment::get();
 
@@ -6974,7 +6974,7 @@ sub restore_vma_archive {
 	    PVE::Tools::file_copy($fwcfgfn, "${pve_firewall_dir}/$vmid.fw");
 	}
 
-	my $virtdev_hash = $parse_backup_hints->($rpcenv, $user, $cfg, $fh, $devinfo, $opts);
+	$virtdev_hash = $parse_backup_hints->($rpcenv, $user, $cfg, $fh, $devinfo, $opts);
 
 	foreach my $info (values %{$virtdev_hash}) {
 	    my $storeid = $info->{storeid};
@@ -7080,14 +7080,14 @@ sub restore_vma_archive {
 
     alarm($oldtimeout) if $oldtimeout;
 
-    $restore_deactivate_volumes->($cfg, $devinfo);
+    $restore_deactivate_volumes->($cfg, $virtdev_hash);
 
     close($fifofh) if $fifofh;
     unlink $mapfifo;
     rmtree $tmpdir;
 
     if ($err) {
-	$restore_destroy_volumes->($cfg, $devinfo);
+	$restore_destroy_volumes->($cfg, $virtdev_hash);
 	die $err;
     }
 
-- 
2.30.2





* [pve-devel] [PATCH v2 qemu-server 5/7] api: create: refactor parameter check logic
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (5 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 4/7] restore: also deactivate/destroy cloud-init disk upon error Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 6/7] api: create: allow overriding non-disk options during restore Fabian Ebner
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

In preparation for allowing certain parameters to be passed along
together with 'archive'. Move the parameter checks to after the
conflicts-with-'archive' check, to ensure that the more telling error
triggers first.

All check helpers should handle empty params fine, but check first
anyway, just to be sure and to avoid superfluous function calls.

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

No changes from v1.

 PVE/API2/Qemu.pm | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 3af21325..976d1bd6 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -822,22 +822,7 @@ __PACKAGE__->register_method({
 	    raise_perm_exc();
 	}
 
-	if (!$archive) {
-	    &$resolve_cdrom_alias($param);
-
-	    &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param, $storage);
-
-	    &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
-
-	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
-	    &$check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
-
-	    &$check_cpu_model_access($rpcenv, $authuser, $param);
-
-	    $check_drive_param->($param, $storecfg);
-
-	    PVE::QemuServer::add_random_macs($param);
-	} else {
+	if ($archive) {
 	    my $keystr = join(' ', keys %$param);
 	    raise_param_exc({ archive => "option conflicts with other options ($keystr)"}) if $keystr;
 
@@ -858,6 +843,23 @@ __PACKAGE__->register_method({
 	    }
 	}
 
+	if (scalar(keys $param->%*) > 0) {
+	    &$resolve_cdrom_alias($param);
+
+	    &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param, $storage);
+
+	    &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
+
+	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
+	    &$check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
+
+	    &$check_cpu_model_access($rpcenv, $authuser, $param);
+
+	    $check_drive_param->($param, $storecfg);
+
+	    PVE::QemuServer::add_random_macs($param);
+	}
+
 	my $emsg = $is_restore ? "unable to restore VM $vmid -" : "unable to create VM $vmid -";
 
 	eval { PVE::QemuConfig->create_and_lock_config($vmid, $force) };
-- 
2.30.2





* [pve-devel] [PATCH v2 qemu-server 6/7] api: create: allow overriding non-disk options during restore
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (6 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 5/7] api: create: refactor parameter check logic Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 7/7] restore: allow preserving drives " Fabian Ebner
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Changes from v1:
    * Split this part into its own patch.

 PVE/API2/Qemu.pm  |  8 ++++++--
 PVE/QemuServer.pm | 26 ++++++++++++++++++++------
 2 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 976d1bd6..8df2cc8d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -823,8 +823,11 @@ __PACKAGE__->register_method({
 	}
 
 	if ($archive) {
-	    my $keystr = join(' ', keys %$param);
-	    raise_param_exc({ archive => "option conflicts with other options ($keystr)"}) if $keystr;
+	    for my $opt (sort keys $param->%*) {
+		if (PVE::QemuServer::Drive::is_valid_drivename($opt)) {
+		    raise_param_exc({ $opt => "option conflicts with option 'archive'" });
+		}
+	    }
 
 	    if ($archive eq '-') {
 		die "pipe requires cli environment\n" if $rpcenv->{type} ne 'cli';
@@ -880,6 +883,7 @@ __PACKAGE__->register_method({
 		    unique => $unique,
 		    bwlimit => $bwlimit,
 		    live => $live_restore,
+		    override_conf => $param,
 		};
 		if ($archive->{type} eq 'file' || $archive->{type} eq 'pipe') {
 		    die "live-restore is only compatible with backup images from a Proxmox Backup Server\n"
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 741a5e89..e64a9c7a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6476,6 +6476,17 @@ my $restore_destroy_volumes = sub {
     }
 };
 
+my $restore_merge_config = sub {
+    my ($filename, $backup_conf_raw, $override_conf) = @_;
+
+    my $backup_conf = parse_vm_config($filename, $backup_conf_raw);
+    for my $key (keys $override_conf->%*) {
+	$backup_conf->{$key} = $override_conf->{$key};
+    }
+
+    return $backup_conf;
+};
+
 sub scan_volids {
     my ($cfg, $vmid) = @_;
 
@@ -6785,9 +6796,8 @@ sub restore_proxmox_backup_archive {
 	$new_conf_raw .= "\nlock: create";
     }
 
-    PVE::Tools::file_set_contents($conffile, $new_conf_raw);
-
-    PVE::Cluster::cfs_update(); # make sure we read new file
+    my $new_conf = $restore_merge_config->($conffile, $new_conf_raw, $options->{override_conf});
+    PVE::QemuConfig->write_config($vmid, $new_conf);
 
     eval { rescan($vmid, 1); };
     warn $@ if $@;
@@ -7091,9 +7101,8 @@ sub restore_vma_archive {
 	die $err;
     }
 
-    PVE::Tools::file_set_contents($conffile, $new_conf_raw);
-
-    PVE::Cluster::cfs_update(); # make sure we read new file
+    my $new_conf = $restore_merge_config->($conffile, $new_conf_raw, $opts->{override_conf});
+    PVE::QemuConfig->write_config($vmid, $new_conf);
 
     eval { rescan($vmid, 1); };
     warn $@ if $@;
@@ -7104,6 +7113,11 @@ sub restore_vma_archive {
 sub restore_tar_archive {
     my ($archive, $vmid, $user, $opts) = @_;
 
+    if (scalar(keys $opts->{override_conf}->%*) > 0) {
+	my $keystring = join(' ', keys $opts->{override_conf}->%*);
+	die "cannot pass along options ($keystring) when restoring from tar archive\n";
+    }
+
     if ($archive ne '-') {
 	my $firstfile = tar_archive_read_firstfile($archive);
 	die "ERROR: file '$archive' does not look like a QemuServer vzdump backup\n"
-- 
2.30.2





* [pve-devel] [PATCH v2 qemu-server 7/7] restore: allow preserving drives during restore
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (7 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 6/7] api: create: allow overriding non-disk options during restore Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 widget-toolkit 1/1] css: add proxmox-good-row class Fabian Ebner
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Preserving a drive means:
  * The disk it references will not be touched by the restore
    operation.
  * The drive will be left as is in the VM configuration.
  * If the drive is present in the backup, that disk will not be
    restored.

A drive that is not present in the backup was, and still is, re-added
as unused by default; but when it is preserved, it is kept configured
instead.

If a drive is not currently configured, but is passed as part of
'preserve-drives', its restore is skipped and its entry in the
restored config is removed.
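
A hypothetical invocation via the create API could then look like
this; node name, VM ID, archive volume and drive keys are all made
up:

    pvesh create /nodes/mynode/qemu --vmid 100 --force 1 \
        --archive 'local:backup/vzdump-qemu-100-2022_04_21-11_26_00.vma.zst' \
        --preserve-drives 'scsi0,efidisk0'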

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

Dependency bump for pve-qemu is needed.

Changes from v1:
  * Allow skipping the restore of a drive that's not currently
    configured (see below).
  * Adapt to skip=<devname> syntax when passing drive map to VMA.
  * When merging configs, remove key from the final config if value
    is explicitly undef.
  * Add a 'preserve-drives' parameter instead of treating the passing
    of an existing disk in a special way. While this makes it less
    flexible, the advantages are:
      * Can be used to skip restoring a disk that's not currently
        configured.
      * Less automagic behavior; it might not be intuitive what
        passing an existing disk would do.
      * No conflict with currently existing syntax for containers.
      * No need to specify the disk's options again. The disk will
        be preserved just as it currently is in the config.

 PVE/API2/Qemu.pm  | 25 ++++++++++++++++++++++++-
 PVE/QemuServer.pm | 34 +++++++++++++++++++++++++++++++---
 2 files changed, 55 insertions(+), 4 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8df2cc8d..86359cef 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -749,6 +749,14 @@ __PACKAGE__->register_method({
 		    description => "Start the VM immediately from the backup and restore in background. PBS only.",
 		    requires => 'archive',
 		},
+		'preserve-drives' => {
+		    optional => 1,
+		    type => 'string',
+		    format => 'pve-configid-list',
+		    description => "List of drives (e.g. scsi0) for which to preserve the current ".
+			"configuration and disk contents.",
+		    requires => 'archive',
+		},
 		pool => {
 		    optional => 1,
 		    type => 'string', format => 'pve-poolid',
@@ -793,6 +801,7 @@ __PACKAGE__->register_method({
 	my $storage = extract_param($param, 'storage');
 	my $unique = extract_param($param, 'unique');
 	my $live_restore = extract_param($param, 'live-restore');
+	my $preserve_drives = extract_param($param, 'preserve-drives');
 
 	if (defined(my $ssh_keys = $param->{sshkeys})) {
 		$ssh_keys = URI::Escape::uri_unescape($ssh_keys);
@@ -825,10 +834,18 @@ __PACKAGE__->register_method({
 	if ($archive) {
 	    for my $opt (sort keys $param->%*) {
 		if (PVE::QemuServer::Drive::is_valid_drivename($opt)) {
-		    raise_param_exc({ $opt => "option conflicts with option 'archive'" });
+		    raise_param_exc({
+			$opt => "option conflicts with option 'archive' (do you mean to use ".
+			    "'preserve-drives'?)",
+		    });
 		}
 	    }
 
+	    for my $opt (PVE::Tools::split_list($preserve_drives)) {
+		raise_param_exc({ 'preserve-drives' => "$opt - not a drive key" })
+		    if !PVE::QemuServer::Drive::is_valid_drivename($opt);
+	    }
+
 	    if ($archive eq '-') {
 		die "pipe requires cli environment\n" if $rpcenv->{type} ne 'cli';
 		$archive = { type => 'pipe' };
@@ -876,6 +893,12 @@ __PACKAGE__->register_method({
 
 	    die "$emsg vm is running\n" if PVE::QemuServer::check_running($vmid);
 
+	    for my $opt (PVE::Tools::split_list($preserve_drives)) {
+		die "internal error - expected drive key\n"
+		    if !PVE::QemuServer::Drive::is_valid_drivename($opt);
+		$param->{$opt} = $conf->{$opt};
+	    }
+
 	    my $realcmd = sub {
 		my $restore_options = {
 		    storage => $storage,
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index e64a9c7a..affdd0bb 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6206,6 +6206,15 @@ sub tar_restore_cleanup {
     }
 }
 
+# Possible options are
+# $format: archive format
+# $storage: target storage
+# $pool: add VM to this pool
+# $unique: make VM unique (mac addresses, vmgenid)
+# $bwlimit: limit restore speed
+# $live: live restore (PBS only)
+# $override_conf: Settings that will be overwritten. If a key is explicitly set to undef, it will be
+#     deleted from the final config. Drives whose keys appear in this config are not restored.
 sub restore_file_archive {
     my ($archive, $vmid, $user, $opts) = @_;
 
@@ -6309,6 +6318,8 @@ my $parse_backup_hints = sub {
 	    $devinfo->{$devname}->{format} = $format;
 	    $devinfo->{$devname}->{storeid} = $storeid;
 
+	    next if exists($options->{override_conf}->{$virtdev}); # not being restored
+
 	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
 	    $check_storage->($storeid, $scfg); # permission and content type check
 
@@ -6481,7 +6492,11 @@ my $restore_merge_config = sub {
 
     my $backup_conf = parse_vm_config($filename, $backup_conf_raw);
     for my $key (keys $override_conf->%*) {
-	$backup_conf->{$key} = $override_conf->{$key};
+	if (defined($override_conf->{$key})) {
+	    $backup_conf->{$key} = $override_conf->{$key};
+	} else {
+	    delete $backup_conf->{$key};
+	}
     }
 
     return $backup_conf;
@@ -6818,6 +6833,12 @@ sub restore_proxmox_backup_archive {
 	# these special drives are already restored before start
 	delete $devinfo->{'drive-efidisk0'};
 	delete $devinfo->{'drive-tpmstate0-backup'};
+
+	for my $key (keys $options->{override_conf}->%*) {
+	    next if !is_valid_drivename($key);
+	    delete $devinfo->{"drive-$key"};
+	}
+
 	pbs_live_restore($vmid, $conf, $storecfg, $devinfo, $repo, $keyfile, $pbs_backup_name);
 
 	PVE::QemuConfig->remove_lock($vmid, "create");
@@ -7010,8 +7031,15 @@ sub restore_vma_archive {
 	my $map = $restore_allocate_devices->($cfg, $virtdev_hash, $vmid);
 
 	# print restore information to $fifofh
-	foreach my $virtdev (sort keys %$virtdev_hash) {
-	    my $d = $virtdev_hash->{$virtdev};
+	for my $devname (sort keys $devinfo->%*) {
+	    my $d = $devinfo->{$devname};
+
+	    if (!$virtdev_hash->{$d->{virtdev}}) { # skipped
+		print $fifofh "skip=$d->{devname}\n";
+		print "not restoring '$d->{devname}', but keeping current disk\n";
+		next;
+	    }
+
 	    next if $d->{is_cloudinit}; # no need to restore cloudinit
 
 	    my $storeid = $d->{storeid};
-- 
2.30.2





* [pve-devel] [PATCH v2 widget-toolkit 1/1] css: add proxmox-good-row class
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (8 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 7/7] restore: allow preserving drives " Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-23  8:32   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 1/3] ui: restore: disallow empty storage selection if it wouldn't work Fabian Ebner
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

New in v2.
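
For context, the class is consumed by the RestoreDiskGrid introduced
in the manager series, in grid cell renderers along these lines
(sketch based on that patch):

    renderer: function(value, metaData, record) {
        if (record.data.preserve) {
            metaData.tdCls = 'proxmox-good-row'; // highlight the kept side
        }
        return value;
    },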

 src/css/ext6-pmx.css | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/src/css/ext6-pmx.css b/src/css/ext6-pmx.css
index c26b533..886e24a 100644
--- a/src/css/ext6-pmx.css
+++ b/src/css/ext6-pmx.css
@@ -23,6 +23,10 @@
     background-color: #f5e5d8;
 }
 
+.proxmox-good-row {
+    background-color: #21BF4B;
+}
+
 /* some icons have to be color manually */
 .black {
     color: #000;
-- 
2.30.2





* [pve-devel] [PATCH v2 manager 1/3] ui: restore: disallow empty storage selection if it wouldn't work
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (9 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 widget-toolkit 1/1] css: add proxmox-good-row class Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-23  9:38   ` Thomas Lamprecht
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 2/3] ui: restore: allow override of some settings Fabian Ebner
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 3/3] ui: restore: allow preserving disks Fabian Ebner
  12 siblings, 1 reply; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Namely, if there is a storage in the backup configuration that's not
available on the current node.
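
The check is based on the '#qmdump#map' lines in the extracted backup
configuration, which look like this (illustrative values):

    #qmdump#map:scsi0:drive-scsi0:local-lvm:raw:

The third colon-separated field after 'map' (here 'local-lvm') is the
storage the disk resided on when the backup was taken.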

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

New in v2.

 www/manager6/window/Restore.js | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/www/manager6/window/Restore.js b/www/manager6/window/Restore.js
index e7b3e945..25babf89 100644
--- a/www/manager6/window/Restore.js
+++ b/www/manager6/window/Restore.js
@@ -15,6 +15,40 @@ Ext.define('PVE.window.Restore', {
 		},
 	    },
 	},
+
+	afterRender: function() {
+	    let view = this.getView();
+
+	    Proxmox.Utils.API2Request({
+		url: `/nodes/${view.nodename}/vzdump/extractconfig`,
+		method: 'GET',
+		params: {
+		    volume: view.volid,
+		},
+		failure: (response) => Ext.Msg.alert('Error', response.htmlStatus),
+		success: function(response, options) {
+		    let storagesAvailable = true;
+
+		    response.result.data.split('\n').forEach(function(line) {
+			let match = line.match(
+			    /^#qmdump#map:(\S+):(\S+):(\S*):(\S*):$/,
+			);
+			if (match) {
+			    let currentAvailable = !!PVE.data.ResourceStore.getById(
+				`storage/${view.nodename}/${match[3]}`,
+			    );
+			    storagesAvailable = storagesAvailable && currentAvailable;
+			}
+		    });
+
+		    if (!storagesAvailable) {
+			let storagesel = view.down('pveStorageSelector[name=storage]');
+			storagesel.allowBlank = false;
+			storagesel.setEmptyText('');
+		    }
+		},
+	    });
+	},
     },
 
     initComponent: function() {
-- 
2.30.2





* [pve-devel] [PATCH v2 manager 2/3] ui: restore: allow override of some settings
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (10 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 1/3] ui: restore: disallow empty storage selection if it wouldn't work Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  2022-04-23 10:07   ` Thomas Lamprecht
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 3/3] ui: restore: allow preserving disks Fabian Ebner
  12 siblings, 1 reply; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

New in v2.

Dependency bump for qemu-server needed

 www/manager6/window/Restore.js | 77 +++++++++++++++++++++++++++++++++-
 1 file changed, 75 insertions(+), 2 deletions(-)

diff --git a/www/manager6/window/Restore.js b/www/manager6/window/Restore.js
index 25babf89..23c244f3 100644
--- a/www/manager6/window/Restore.js
+++ b/www/manager6/window/Restore.js
@@ -21,6 +21,7 @@ Ext.define('PVE.window.Restore', {
 
 	    Proxmox.Utils.API2Request({
 		url: `/nodes/${view.nodename}/vzdump/extractconfig`,
+		waitMsgTarget: view,
 		method: 'GET',
 		params: {
 		    volume: view.volid,
@@ -38,6 +39,28 @@ Ext.define('PVE.window.Restore', {
 				`storage/${view.nodename}/${match[3]}`,
 			    );
 			    storagesAvailable = storagesAvailable && currentAvailable;
+			} else {
+			    match = line.match(/^([^:]+):\s*(\S+)\s*$/);
+			    if (match) {
+				let [_, key, value] = match;
+				switch (key) {
+				    case 'name':
+				    case 'hostname':
+					view.lookupReference('nameField').setEmptyText(value);
+					break;
+				    case 'memory':
+					view.lookupReference('memoryField').setEmptyText(value);
+					break;
+				    case 'cores':
+					view.lookupReference('coresField').setEmptyText(value);
+					break;
+				    case 'sockets':
+					view.lookupReference('socketsField').setEmptyText(value);
+					break;
+				    default:
+					break;
+				}
+			    }
 			}
 		    });
 
@@ -207,6 +230,49 @@ Ext.define('PVE.window.Restore', {
 	    });
 	}
 
+	items.push(
+	    {
+		xtype: 'displayfield',
+		value: `${gettext('Override Settings')}:`,
+	    },
+	    {
+		xtype: 'textfield',
+		fieldLabel: gettext('Name'),
+		name: 'name',
+		reference: 'nameField',
+		allowBlank: true,
+	    },
+	    {
+		xtype: 'pveMemoryField',
+		fieldLabel: gettext('Memory'),
+		name: 'memory',
+		reference: 'memoryField',
+		value: '',
+		allowBlank: true,
+	    },
+	    {
+		xtype: 'proxmoxintegerfield',
+		fieldLabel: gettext('Cores'),
+		name: 'cores',
+		reference: 'coresField',
+		minValue: 1,
+		maxValue: 128,
+		allowBlank: true,
+	    },
+	);
+
+	if (me.vmtype === 'qemu') {
+	    items.push({
+		xtype: 'proxmoxintegerfield',
+		fieldLabel: gettext('Sockets'),
+		name: 'sockets',
+		reference: 'socketsField',
+		minValue: 1,
+		maxValue: 4,
+		allowBlank: true,
+	    });
+	}
+
 	me.formPanel = Ext.create('Ext.form.Panel', {
 	    bodyPadding: 10,
 	    border: false,
@@ -254,8 +320,15 @@ Ext.define('PVE.window.Restore', {
 		if (values['live-restore']) { params['live-restore'] = 1; }
 		if (values.storage) { params.storage = values.storage; }
 
-		if (values.bwlimit !== undefined) {
-		    params.bwlimit = values.bwlimit;
+		['bwlimit', 'cores', 'name', 'memory', 'sockets'].forEach(function(opt) {
+		    if (values[opt] !== undefined && values[opt] !== null && values[opt] !== '') {
+			params[opt] = values[opt];
+		    }
+		});
+
+		if (params.name && me.vmtype === 'lxc') {
+		    params.hostname = params.name;
+		    delete params.name;
 		}
 
 		var url;
-- 
2.30.2





* [pve-devel] [PATCH v2 manager 3/3] ui: restore: allow preserving disks
  2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
                   ` (11 preceding siblings ...)
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 2/3] ui: restore: allow override of some settings Fabian Ebner
@ 2022-04-21 11:26 ` Fabian Ebner
  12 siblings, 0 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-21 11:26 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---

New in v2.

Dependency bump for proxmox-widget-toolkit needed

 www/manager6/Makefile                |   1 +
 www/manager6/grid/RestoreDiskGrid.js | 125 +++++++++++++++++++++++++++
 www/manager6/window/Restore.js       |  35 +++++++-
 3 files changed, 160 insertions(+), 1 deletion(-)
 create mode 100644 www/manager6/grid/RestoreDiskGrid.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 2c7b1e70..93f9f9c6 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -80,6 +80,7 @@ JSSRC= 							\
 	grid/PoolMembers.js				\
 	grid/Replication.js				\
 	grid/ResourceGrid.js				\
+	grid/RestoreDiskGrid.js				\
 	panel/ConfigPanel.js				\
 	panel/BackupJobPrune.js				\
 	panel/HealthWidget.js				\
diff --git a/www/manager6/grid/RestoreDiskGrid.js b/www/manager6/grid/RestoreDiskGrid.js
new file mode 100644
index 00000000..5384a5e6
--- /dev/null
+++ b/www/manager6/grid/RestoreDiskGrid.js
@@ -0,0 +1,125 @@
+Ext.define('pve-restore-disk', {
+    extend: 'Ext.data.Model',
+    fields: [
+	{ name: 'preserve', type: 'bool' },
+	{ name: 'drivename', type: 'string' },
+	{ name: 'vm', type: 'string' },
+	{ name: 'backup', type: 'string' },
+    ],
+});
+
+Ext.define('PVE.grid.RestoreDiskGrid', {
+    extend: 'Ext.grid.Panel',
+    alias: ['widget.pveRestoreDiskGrid'],
+
+    vmConfig: undefined,
+    backupConfig: undefined,
+
+    setVMConfig: function(config) {
+	let me = this;
+
+	if (me.vmConfig) {
+	    throw "vm config already set";
+	}
+	me.vmConfig = config;
+
+	if (me.vmConfig && me.backupConfig) {
+	    me.updateData();
+	}
+    },
+
+    setBackupConfig: function(config) {
+	let me = this;
+
+	if (me.backupConfig) {
+	    throw "backup config already set";
+	}
+	me.backupConfig = config;
+
+	if (me.vmConfig && me.backupConfig) {
+	    me.updateData();
+	}
+    },
+
+    updateData: function() {
+	let me = this;
+
+	let data = [];
+
+	Object.keys(me.vmConfig).concat(Object.keys(me.backupConfig)).forEach(function(key) {
+	    if (!key.match(PVE.Utils.bus_match) && !key.match(/^(efidisk|tpmstate)\d+$/)) {
+		return;
+	    }
+
+	    if (data.find(item => item.drivename === key)) {
+		return;
+	    }
+
+	    data.push({
+		drivename: key,
+		preserve: false,
+		vm: me.vmConfig[key],
+		backup: me.backupConfig[key],
+	    });
+	});
+
+	me.getStore().setData(data);
+    },
+
+    preserveDrives: function() {
+	let drives = [];
+
+	this.getStore().getData().each(function(item) {
+	    if (item.data.preserve) {
+		drives.push(item.data.drivename);
+	    }
+	});
+
+	return drives;
+    },
+
+
+    store: {
+	model: 'pve-restore-disk',
+	sorters: 'drivename',
+    },
+
+    columns: [
+	{
+	    xtype: 'checkcolumn',
+	    header: gettext('Preserve'),
+	    dataIndex: 'preserve',
+	    width: 80,
+	},
+	{
+	    header: gettext('Drive'),
+	    dataIndex: 'drivename',
+	    width: 80,
+	},
+	{
+	    header: gettext('Current'),
+	    dataIndex: 'vm',
+	    flex: 1,
+	    renderer: function(drive, metaData, record) {
+		if (record.data.preserve) {
+		    metaData.tdCls = 'proxmox-good-row';
+		} else if (!record.data.backup) {
+		    metaData.tdCls = 'proxmox-good-row';
+		    return `(${gettext('as unused')}) ${drive}`;
+		}
+		return drive;
+	    },
+	},
+	{
+	    header: gettext('Backup'),
+	    dataIndex: 'backup',
+	    flex: 1,
+	    renderer: function(drive, metaData, record) {
+		if (!record.data.preserve && drive) {
+		    metaData.tdCls = 'proxmox-good-row';
+		}
+		return drive;
+	    },
+	},
+    ],
+});
diff --git a/www/manager6/window/Restore.js b/www/manager6/window/Restore.js
index 23c244f3..8b0134b8 100644
--- a/www/manager6/window/Restore.js
+++ b/www/manager6/window/Restore.js
@@ -18,6 +18,7 @@ Ext.define('PVE.window.Restore', {
 
 	afterRender: function() {
 	    let view = this.getView();
+	    let diskGrid = view.lookupReference('restoreDiskGrid');
 
 	    Proxmox.Utils.API2Request({
 		url: `/nodes/${view.nodename}/vzdump/extractconfig`,
@@ -29,6 +30,7 @@ Ext.define('PVE.window.Restore', {
 		failure: (response) => Ext.Msg.alert('Error', response.htmlStatus),
 		success: function(response, options) {
 		    let storagesAvailable = true;
+		    let config = {};
 
 		    response.result.data.split('\n').forEach(function(line) {
 			let match = line.match(
@@ -43,6 +45,8 @@ Ext.define('PVE.window.Restore', {
 			    match = line.match(/^([^:]+):\s*(\S+)\s*$/);
 			    if (match) {
 				let [_, key, value] = match;
+				config[key] = value;
+
 				switch (key) {
 				    case 'name':
 				    case 'hostname':
@@ -69,8 +73,22 @@ Ext.define('PVE.window.Restore', {
 			storagesel.allowBlank = false;
 			storagesel.setEmptyText('');
 		    }
+
+		    diskGrid?.setBackupConfig(config);
 		},
 	    });
+
+	    if (!diskGrid) {
+		return;
+	    }
+
+	    Proxmox.Utils.API2Request({
+		url: `/nodes/${view.nodename}/qemu/${view.vmid}/config`,
+		waitMsgTarget: diskGrid,
+		method: 'GET',
+		failure: (response) => Ext.Msg.alert(gettext('Error'), response.htmlStatus),
+		success: (response) => diskGrid.setVMConfig(response.result.data),
+	    });
 	},
     },
 
@@ -271,6 +289,13 @@ Ext.define('PVE.window.Restore', {
 		maxValue: 4,
 		allowBlank: true,
 	    });
+
+	    if (me.vmid) {
+		items.push({
+		    xtype: 'pveRestoreDiskGrid',
+		    reference: 'restoreDiskGrid',
+		});
+	    }
 	}
 
 	me.formPanel = Ext.create('Ext.form.Panel', {
@@ -331,6 +356,14 @@ Ext.define('PVE.window.Restore', {
 		    delete params.name;
 		}
 
+		let diskGrid = me.lookupReference('restoreDiskGrid');
+		if (diskGrid) {
+		    let preserveDrives = diskGrid.preserveDrives();
+		    if (preserveDrives.length > 0) {
+			params['preserve-drives'] = preserveDrives.join(',');
+		    }
+		}
+
 		var url;
 		var msg;
 		if (me.vmtype === 'lxc') {
@@ -376,7 +409,7 @@ Ext.define('PVE.window.Restore', {
 
 	Ext.apply(me, {
 	    title: title,
-	    width: 500,
+	    width: 700,
 	    modal: true,
 	    layout: 'auto',
 	    border: false,
-- 
2.30.2





^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] applied: [PATCH v2 widget-toolkit 1/1] css: add proxmox-good-row class
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 widget-toolkit 1/1] css: add proxmox-good-row class Fabian Ebner
@ 2022-04-23  8:32   ` Thomas Lamprecht
  0 siblings, 0 replies; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-23  8:32 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> New in v2.
> 
>  src/css/ext6-pmx.css | 4 ++++
>  1 file changed, 4 insertions(+)
> 
>

applied, thanks!




^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH v2 qemu-server 3/7] restore deactivate volumes: never die
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 3/7] restore deactivate volumes: never die Fabian Ebner
@ 2022-04-23  9:18   ` Thomas Lamprecht
  2022-04-25  6:45     ` Fabian Ebner
  2022-04-25 16:21   ` [pve-devel] applied: " Thomas Lamprecht
  1 sibling, 1 reply; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-23  9:18 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> Such an error shouldn't abort the whole operation.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> No changes from v1.
> 
>  PVE/QemuServer.pm | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index be0694f9..9a16b617 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -6460,7 +6460,8 @@ my $restore_deactivate_volumes = sub {
>  	push @$vollist, $volid if $volid;
>      }
>  
> -    PVE::Storage::deactivate_volumes($storecfg, $vollist);
> +    eval { PVE::Storage::deactivate_volumes($storecfg, $vollist); };
> +    print STDERR $@ if $@;

why not use warn "$@" which normally also gets printed on stderr?

>  };
>  
>  my $restore_destroy_volumes = sub {
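
Both variants end up on stderr; a minimal standalone sketch of the practical difference
(`warn` additionally passes through any installed `$SIG{__WARN__}` handler):

```
#!/usr/bin/perl
use strict;
use warnings;

eval { die "deactivation failed\n"; };
print STDERR $@ if $@;   # writes directly to STDERR

eval { die "deactivation failed\n"; };
warn $@ if $@;           # also lands on STDERR by default...

$SIG{__WARN__} = sub { print "intercepted: $_[0]" };
eval { die "deactivation failed\n"; };
warn $@ if $@;           # ...but can be intercepted by a __WARN__ handler
```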





^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH v2 manager 1/3] ui: restore: disallow empty storage selection if it wouldn't work
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 1/3] ui: restore: disallow empty storage selection if it wouldn't work Fabian Ebner
@ 2022-04-23  9:38   ` Thomas Lamprecht
  2022-04-25  7:28     ` Fabian Ebner
  0 siblings, 1 reply; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-23  9:38 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> Namely, if there is a storage in the backup configuration that's not
> available on the current node.

Better than the status quo, but in the long run all the "force all volumes to a single storage"
on restore and also migrate isn't ideal for the case where one or more storages do not exist on
the target node. A per-volume override would be nicer, but may require some GUI adaptations to
present that in a sensible way with good UX.

> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> New in v2.
> 
>  www/manager6/window/Restore.js | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/www/manager6/window/Restore.js b/www/manager6/window/Restore.js
> index e7b3e945..25babf89 100644
> --- a/www/manager6/window/Restore.js
> +++ b/www/manager6/window/Restore.js
> @@ -15,6 +15,40 @@ Ext.define('PVE.window.Restore', {
>  		},
>  	    },
>  	},
> +
> +	afterRender: function() {
> +	    let view = this.getView();
> +
> +	    Proxmox.Utils.API2Request({
> +		url: `/nodes/${view.nodename}/vzdump/extractconfig`,
> +		method: 'GET',
> +		params: {
> +		    volume: view.volid,
> +		},
> +		failure: (response) => Ext.Msg.alert('Error', response.htmlStatus),

nit: no parentheses required for single-parameter arrow fns

> +		success: function(response, options) {
> +		    let storagesAvailable = true;
> +
> +		    response.result.data.split('\n').forEach(function(line) {

style nit: makes no big difference but for new code we prefer arrow fns for inline closures.

> +			let match = line.match(
> +			    /^#qmdump#map:(\S+):(\S+):(\S*):(\S*):$/,
> +			);

fits in 100 cc, so I would rather keep it in a single line

> +			if (match) {
> +			    let currentAvailable = !!PVE.data.ResourceStore.getById(
> +				`storage/${view.nodename}/${match[3]}`,
> +			    );
> +			    storagesAvailable = storagesAvailable && currentAvailable;



JS has a &&= operator, so could be:

storagesAvailable &&= !!PVE.data.ResourceStore.getById(
    `storage/${view.nodename}/${match[3]}`);

but as we only want to do one thing if any storage is not available we could do:

```
let allStoragesAvailable = response.result.data.split('\n').every(line => {
    let match = line.match(/^#qmdump#map:(\S+):(\S+):(\S*):(\S*):$/);
    if (match) {
        return !!PVE.data.ResourceStore.getById(`storage/${view.nodename}/${match[3]}`);
    }
    return true;
});

if (!allStoragesAvailable) {
    // ...
}
```

IMO a bit easier to follow.

> +			}
> +		    });
> +
> +		    if (!storagesAvailable) {
> +			let storagesel = view.down('pveStorageSelector[name=storage]');
> +			storagesel.allowBlank = false;
> +			storagesel.setEmptyText('');
> +		    }
> +		},
> +	    });
> +	},
>      },
>  
>      initComponent: function() {





^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH v2 manager 2/3] ui: restore: allow override of some settings
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 2/3] ui: restore: allow override of some settings Fabian Ebner
@ 2022-04-23 10:07   ` Thomas Lamprecht
  0 siblings, 0 replies; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-23 10:07 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> New in v2.
> 
> Dependency bump for qemu-server needed
> 
>  www/manager6/window/Restore.js | 77 +++++++++++++++++++++++++++++++++-
>  1 file changed, 75 insertions(+), 2 deletions(-)
> 
> diff --git a/www/manager6/window/Restore.js b/www/manager6/window/Restore.js
> index 25babf89..23c244f3 100644
> --- a/www/manager6/window/Restore.js
> +++ b/www/manager6/window/Restore.js
> @@ -21,6 +21,7 @@ Ext.define('PVE.window.Restore', {
>  
>  	    Proxmox.Utils.API2Request({
>  		url: `/nodes/${view.nodename}/vzdump/extractconfig`,
> +		waitMsgTarget: view,
>  		method: 'GET',
>  		params: {
>  		    volume: view.volid,
> @@ -38,6 +39,28 @@ Ext.define('PVE.window.Restore', {
>  				`storage/${view.nodename}/${match[3]}`,
>  			    );
>  			    storagesAvailable = storagesAvailable && currentAvailable;
> +			} else {

hmm, OK, you reuse the forEach, so the `every` from the previous patch review may not work, but I'd
still do some changes, see below.

> +			    match = line.match(/^([^:]+):\s*(\S+)\s*$/);

can be: 

let [_, key, value] = line.match(/^([^:]+):\s*(\S+)\s*$/) ?? [];

if (!key) {
    return;
}

...

And then this could be moved out of this if and handled like:

if (key === '#qmdump#map') {
     // ...
} else if (key === 'name' || key === 'hostname') {
     view.lookupReference('nameField').setEmptyText(value);
} else if (key === 'memory') {
   // ...
} ...

Alternatively, a single else and an object mapping from key to xField would also be an option, but
the switch is almost never too verbose, while such a map is often, and that's probably just my
opinion, slightly confusing to parse when skimming over code quickly.
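
For illustration, that map-object alternative could look roughly like this (untested sketch,
using the field reference names from the patch):

```
const keyToField = {
    name: 'nameField',
    hostname: 'nameField',
    memory: 'memoryField',
    cores: 'coresField',
    sockets: 'socketsField',
};
// replaces the switch: look up the target field for the parsed key, if any
let fieldRef = keyToField[key];
if (fieldRef) {
    view.lookupReference(fieldRef).setEmptyText(value);
}
```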

> +			    if (match) {
> +				let [_, key, value] = match;
> +				switch (key) {
> +				    case 'name':
> +				    case 'hostname':
> +					view.lookupReference('nameField').setEmptyText(value);
> +					break;
> +				    case 'memory':
> +					view.lookupReference('memoryField').setEmptyText(value);
> +					break;
> +				    case 'cores':
> +					view.lookupReference('coresField').setEmptyText(value);
> +					break;
> +				    case 'sockets':
> +					view.lookupReference('socketsField').setEmptyText(value);
> +					break;
> +				    default:
> +					break;
> +				}
> +			    }
>  			}
>  		    });
>  
> @@ -207,6 +230,49 @@ Ext.define('PVE.window.Restore', {
>  	    });
>  	}
>  
> +	items.push(
> +	    {
> +		xtype: 'displayfield',
> +		value: `${gettext('Override Settings')}:`,

I'd maybe even use a 'fieldset' widget here for grouping this nicely.

> +	    },
> +	    {
> +		xtype: 'textfield',
> +		fieldLabel: gettext('Name'),
> +		name: 'name',
> +		reference: 'nameField',
> +		allowBlank: true,
> +	    },
> +	    {
> +		xtype: 'pveMemoryField',
> +		fieldLabel: gettext('Memory'),
> +		name: 'memory',
> +		reference: 'memoryField',
> +		value: '',
> +		allowBlank: true,
> +	    },
> +	    {
> +		xtype: 'proxmoxintegerfield',
> +		fieldLabel: gettext('Cores'),
> +		name: 'cores',
> +		reference: 'coresField',
> +		minValue: 1,
> +		maxValue: 128,
> +		allowBlank: true,
> +	    },
> +	);
> +
> +	if (me.vmtype === 'qemu') {
> +	    items.push({
> +		xtype: 'proxmoxintegerfield',
> +		fieldLabel: gettext('Sockets'),
> +		name: 'sockets',
> +		reference: 'socketsField',
> +		minValue: 1,
> +		maxValue: 4,
> +		allowBlank: true,
> +	    });
> +	}
> +
>  	me.formPanel = Ext.create('Ext.form.Panel', {
>  	    bodyPadding: 10,
>  	    border: false,
> @@ -254,8 +320,15 @@ Ext.define('PVE.window.Restore', {
>  		if (values['live-restore']) { params['live-restore'] = 1; }
>  		if (values.storage) { params.storage = values.storage; }
>  
> -		if (values.bwlimit !== undefined) {
> -		    params.bwlimit = values.bwlimit;
> +		['bwlimit', 'cores', 'name', 'memory', 'sockets'].forEach(function(opt) {
> +		    if (values[opt] !== undefined && values[opt] !== null && values[opt] !== '') {

should be collapsible via:

if ((values[opt] ?? '') !== '') 

at which point we could move it into a filter combinator, like:

['bwlimit', 'cores', 'name', 'memory', 'sockets']
    .filter(opt => (values[opt] ?? '') !== '')
    .forEach(opt => params[opt] = values[opt]);

> +			params[opt] = values[opt];
> +		    }
> +		});
> +
> +		if (params.name && me.vmtype === 'lxc') {
> +		    params.hostname = params.name;
> +		    delete params.name;
>  		}
>  
>  		var url;





^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH v2 qemu 1/2] vma: restore: call blk_unref for all opened block devices
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 1/2] vma: restore: call blk_unref for all opened block devices Fabian Ebner
@ 2022-04-25  6:37   ` Wolfgang Bumiller
  2022-04-25 16:10   ` [pve-devel] applied: " Thomas Lamprecht
  1 sibling, 0 replies; 30+ messages in thread
From: Wolfgang Bumiller @ 2022-04-25  6:37 UTC (permalink / raw)
  To: Fabian Ebner; +Cc: pve-devel

lgtm

While looking at the code again, I do find the error handling in
`extract_content` generally a bit lacking, though, e.g. the `blk` var
moved down below is only set if `errp` is not set, but we still go ahead
with everything else, making me wonder if we shouldn't bail out then
instead...

But that's out of scope of this patch, this definitely fixes what
appears to be buggy cleanup code.
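
The bail-out hinted at could look roughly like this (illustrative sketch only; identifier
names are assumed from the surrounding vma.c context, not a tested patch):

```
if (errp) {
    /* report and abort instead of continuing with a NULL blk */
    g_error("opening restore target failed - %s", error_get_pretty(errp));
}
```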

On Thu, Apr 21, 2022 at 01:26:47PM +0200, Fabian Ebner wrote:
> rather than just the last one. This is the only caller registering
> block devices with the reader, so other callers are already fine.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> New in v2.
> 
> Best squashed into 0026-PVE-Backup-add-vma-backup-format-code.patch
> 
>  vma-reader.c | 3 +++
>  vma.c        | 6 ++----
>  2 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/vma-reader.c b/vma-reader.c
> index 2b1d1cdab3..4f4ee2b47b 100644
> --- a/vma-reader.c
> +++ b/vma-reader.c
> @@ -192,6 +192,9 @@ void vma_reader_destroy(VmaReader *vmar)
>          if (vmar->rstate[i].bitmap) {
>              g_free(vmar->rstate[i].bitmap);
>          }
> +        if (vmar->rstate[i].target) {
> +            blk_unref(vmar->rstate[i].target);
> +        }
>      }
>  
>      if (vmar->md5csum) {
> diff --git a/vma.c b/vma.c
> index df542b7732..89440733b1 100644
> --- a/vma.c
> +++ b/vma.c
> @@ -309,8 +309,6 @@ static int extract_content(int argc, char **argv)
>      int vmstate_fd = -1;
>      guint8 vmstate_stream = 0;
>  
> -    BlockBackend *blk = NULL;
> -
>      for (i = 1; i < 255; i++) {
>          VmaDeviceInfo *di = vma_reader_get_device_info(vmar, i);
>          if (di && (strcmp(di->devname, "vmstate") == 0)) {
> @@ -331,6 +329,8 @@ static int extract_content(int argc, char **argv)
>              int flags = BDRV_O_RDWR;
>              bool write_zero = true;
>  
> +            BlockBackend *blk = NULL;
> +
>              if (readmap) {
>                  RestoreMap *map;
>                  map = (RestoreMap *)g_hash_table_lookup(devmap, di->devname);
> @@ -443,8 +443,6 @@ static int extract_content(int argc, char **argv)
>  
>      vma_reader_destroy(vmar);
>  
> -    blk_unref(blk);
> -
>      bdrv_close_all();
>  
>      return ret;
> -- 
> 2.30.2




^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH v2 qemu 2/2] vma: allow partial restore
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 2/2] vma: allow partial restore Fabian Ebner
@ 2022-04-25  6:40   ` Wolfgang Bumiller
  2022-04-25 16:10   ` [pve-devel] applied: " Thomas Lamprecht
  1 sibling, 0 replies; 30+ messages in thread
From: Wolfgang Bumiller @ 2022-04-25  6:40 UTC (permalink / raw)
  To: Fabian Ebner; +Cc: pve-devel

On Thu, Apr 21, 2022 at 01:26:48PM +0200, Fabian Ebner wrote:
> Introduce a new map line for skipping a certain drive, of the form
> skip=drive-scsi0
> 
> Since in PVE, most archives are compressed and piped to vma for
> restore, it's not easily possible to skip reads.
> 
> For the reader, a new skip flag for VmaRestoreState is added and the
> target is allowed to be NULL if skip is specified when registering. If
> the skip flag is set, no writes will be made as well as no check for
> duplicate clusters. Therefore, the flag is not set for verify.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>

Acked-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
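
For illustration, a restore map using the new line type might look like the following
(the shape of the first, non-skip line is only sketched here; see the readmap parsing in
vma.c for the exact syntax):

```
drive-scsi0=/path/to/restore/target.raw
skip=drive-scsi1
```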




^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH v2 qemu-server 3/7] restore deactivate volumes: never die
  2022-04-23  9:18   ` Thomas Lamprecht
@ 2022-04-25  6:45     ` Fabian Ebner
  0 siblings, 0 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-25  6:45 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 23.04.22 at 11:18, Thomas Lamprecht wrote:
> On 21.04.22 13:26, Fabian Ebner wrote:
>> Such an error shouldn't abort the whole operation.
>>
>> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
>> ---
>>
>> No changes from v1.
>>
>>  PVE/QemuServer.pm | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
>> index be0694f9..9a16b617 100644
>> --- a/PVE/QemuServer.pm
>> +++ b/PVE/QemuServer.pm
>> @@ -6460,7 +6460,8 @@ my $restore_deactivate_volumes = sub {
>>  	push @$vollist, $volid if $volid;
>>      }
>>  
>> -    PVE::Storage::deactivate_volumes($storecfg, $vollist);
>> +    eval { PVE::Storage::deactivate_volumes($storecfg, $vollist); };
>> +    print STDERR $@ if $@;
> 
> why not use warn "$@" which normally also gets printed on stderr?
>

I kept it in line with how restore_destroy_volumes does it, but I can
change it in v3.

>>  };
>>  
>>  my $restore_destroy_volumes = sub {
> 




^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH v2 manager 1/3] ui: restore: disallow empty storage selection if it wouldn't work
  2022-04-23  9:38   ` Thomas Lamprecht
@ 2022-04-25  7:28     ` Fabian Ebner
  2022-04-27  7:05       ` Thomas Lamprecht
  0 siblings, 1 reply; 30+ messages in thread
From: Fabian Ebner @ 2022-04-25  7:28 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 23.04.22 at 11:38, Thomas Lamprecht wrote:
> On 21.04.22 13:26, Fabian Ebner wrote:
>> Namely, if there is a storage in the backup configuration that's not
>> available on the current node.
> 
> Better than the status quo, but in the long run all the "force all volumes to a single storage"
> on restore and also migrate isn't ideal for the case where one or more storages do not exist on
> the target node. A per-volume override would be nicer, but may require some GUI adaptations to
> present that in a sensible way with good UX.
> 

In the UI, it could simply be part of the disk grid (proposed in patch
manager 3/3), only showing up for drives selected from the backup?

In the back-end for migration, we have a storage-storage map, but here
we'd need a drive-storage map. It'd be possible to extend the 'storage'
parameter for the create/restore API call to be such a map, but I wonder
if going for a 'restore-drives' parameter being such a map (and
replacing the proposed 'preserve-drives' parameter) would be better?

The downside is, we'd have to choose between
A) preserve disk and config
B) preserve disk as unused
for the drives that are not present in the backup. A) would be more
convenient in the partial restore context, but B) is the current
default. Thus we need to keep B) if 'restore-drives' is not specified at
all for backwards-compatibility, but can choose A) if 'restore-drives'
is specified. But doing so seems a little inconsistent regarding user
expectation.




^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] applied: [PATCH v2 qemu 1/2] vma: restore: call blk_unref for all opened block devices
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 1/2] vma: restore: call blk_unref for all opened block devices Fabian Ebner
  2022-04-25  6:37   ` Wolfgang Bumiller
@ 2022-04-25 16:10   ` Thomas Lamprecht
  1 sibling, 0 replies; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-25 16:10 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> rather than just the last one. This is the only caller registering
> block devices with the reader, so other callers are already fine.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> New in v2.
> 
> Best squashed into 0026-PVE-Backup-add-vma-backup-format-code.patch
> 

done

>  vma-reader.c | 3 +++
>  vma.c        | 6 ++----
>  2 files changed, 5 insertions(+), 4 deletions(-)
> 
>

applied, thanks!




^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] applied: [PATCH v2 qemu 2/2] vma: allow partial restore
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 2/2] vma: allow partial restore Fabian Ebner
  2022-04-25  6:40   ` Wolfgang Bumiller
@ 2022-04-25 16:10   ` Thomas Lamprecht
  1 sibling, 0 replies; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-25 16:10 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> Introduce a new map line for skipping a certain drive, of the form
> skip=drive-scsi0
> 
> Since in PVE, most archives are compressed and piped to vma for
> restore, it's not easily possible to skip reads.
> 
> For the reader, a new skip flag for VmaRestoreState is added and the
> target is allowed to be NULL if skip is specified when registering. If
> the skip flag is set, no writes will be made as well as no check for
> duplicate clusters. Therefore, the flag is not set for verify.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> Better viewed with -w
> 
> Changes from v1:
>     * Use dedicated map line skip=<drivename> rather than a special path
>       "/dev/null" to communicate that a drive should be skipped.
> 
>  vma-reader.c |  64 ++++++++++++---------
>  vma.c        | 157 +++++++++++++++++++++++++++++----------------------
>  vma.h        |   2 +-
>  3 files changed, 126 insertions(+), 97 deletions(-)
> 
>

applied, with Wolfgang's Ack, thanks!




^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] applied: [PATCH v2 qemu-server 1/7] restore: cleanup oldconf: also clean up snapshots from kept volumes
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 1/7] restore: cleanup oldconf: also clean up snapshots from kept volumes Fabian Ebner
@ 2022-04-25 16:20   ` Thomas Lamprecht
  0 siblings, 0 replies; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-25 16:20 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> No changes from v1.
> 
>  PVE/QemuServer.pm | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
>

applied, thanks!




^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] applied: [PATCH v2 qemu-server 2/7] restore destroy volumes: remove check for absolute path
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 2/7] restore destroy volumes: remove check for absolute path Fabian Ebner
@ 2022-04-25 16:20   ` Thomas Lamprecht
  0 siblings, 0 replies; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-25 16:20 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> Only a result from vdisk_alloc is assigned as a volid and that's never
> an absolute path.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> No changes from v1.
> 
>  PVE/QemuServer.pm | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
>

applied, thanks!




^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] applied: [PATCH v2 qemu-server 3/7] restore deactivate volumes: never die
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 3/7] restore deactivate volumes: never die Fabian Ebner
  2022-04-23  9:18   ` Thomas Lamprecht
@ 2022-04-25 16:21   ` Thomas Lamprecht
  1 sibling, 0 replies; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-25 16:21 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> Such an error shouldn't abort the whole operation.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> No changes from v1.
> 
>  PVE/QemuServer.pm | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
>

applied, thanks!




^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] applied: [PATCH v2 qemu-server 4/7] restore: also deactivate/destroy cloud-init disk upon error
  2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 4/7] restore: also deactivate/destroy cloud-init disk upon error Fabian Ebner
@ 2022-04-25 16:21   ` Thomas Lamprecht
  0 siblings, 0 replies; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-25 16:21 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner

On 21.04.22 13:26, Fabian Ebner wrote:
> by re-using the same hash that's used when allocating/activating the
> disks in the helpers doing the opposite.
> 
> Also in preparation to allow skipping certain disks upon restore.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
> 
> No changes from v1.
> 
>  PVE/QemuServer.pm | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
>

applied, thanks!




^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH v2 manager 1/3] ui: restore: disallow empty storage selection if it wouldn't work
  2022-04-25  7:28     ` Fabian Ebner
@ 2022-04-27  7:05       ` Thomas Lamprecht
  2022-04-27  8:21         ` Fabian Ebner
  0 siblings, 1 reply; 30+ messages in thread
From: Thomas Lamprecht @ 2022-04-27  7:05 UTC (permalink / raw)
  To: Fabian Ebner, Proxmox VE development discussion

On 25.04.22 09:28, Fabian Ebner wrote:
> On 23.04.22 at 11:38, Thomas Lamprecht wrote:
>> On 21.04.22 13:26, Fabian Ebner wrote:
>>> Namely, if there is a storage in the backup configuration that's not
>>> available on the current node.
>>
>> Better than the status quo, but in the long run all the "force all volumes to a single storage"
>> on restore and also migrate isn't ideal for the case where one or more storages do not exist on
>> the target node. A per-volume override would be nicer, but may require some GUI adaptations to
>> present that in a sensible way with good UX.
>>
> 
> In the UI, it could simply be part of the disk grid (proposed in patch
> manager 3/3), only showing up for drives selected from the backup?

exactly what I thought too.

> 
> In the back-end for migration, we have a storage-storage map, but here
> we'd need a drive-storage map. It'd be possible to extend the 'storage'
> parameter for the create/restore API call to be such a map, but I wonder
> if going for a 'restore-drives' parameter being such a map (and
> replacing the proposed 'preserve-drives' parameter) would be better?

hmm, possibly

> 
> The downside is, we'd have to choose between
> A) preserve disk and config
> B) preserve disk as unused
> for the drives that are not present in the backup. A) would be more
> convenient in the partial restore context, but B) is the current
> default. Thus we need to keep B) if 'restore-drives' is not specified at
> all for backwards-compatibility, but can choose A) if 'restore-drives'
> is specified. But doing so seems a little inconsistent regarding user
> expectation.

would a more general src:dest map help? For example (just to give the very
rough direction here):

not present or `scsi1:backup` <- would be restored as in the backup (config)
`scsi1:store=foo` <- as in config put on another storage
`scsi1:preserve` <- preserve from existing installation being overwritten

The actual left/right hand sides would need to get fleshed out to fit our
use cases best, but 
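
Picking up that rough direction, a hypothetical back-end sketch for parsing such a per-drive
map (the parameter name, the exact value syntax, and the $param request-parameter hash are
assumptions here, nothing settled):

```
# hypothetical: parse "scsi0:backup,scsi1:store=foo,scsi2:preserve"
my $drive_actions = {};
for my $entry (split(/,/, $param->{'restore-drives'} // '')) {
    my ($drive, $action) = split(/:/, $entry, 2);
    die "invalid restore-drives entry '$entry'\n"
	if !defined($action) || $action !~ /^(?:backup|store=\S+|preserve)$/;
    $drive_actions->{$drive} = $action;
}
```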




^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH v2 manager 1/3] ui: restore: disallow empty storage selection if it wouldn't work
  2022-04-27  7:05       ` Thomas Lamprecht
@ 2022-04-27  8:21         ` Fabian Ebner
  0 siblings, 0 replies; 30+ messages in thread
From: Fabian Ebner @ 2022-04-27  8:21 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 27.04.22 at 09:05, Thomas Lamprecht wrote:
> On 25.04.22 09:28, Fabian Ebner wrote:
>> On 23.04.22 at 11:38, Thomas Lamprecht wrote:
>>> On 21.04.22 13:26, Fabian Ebner wrote:
>>>> Namely, if there is a storage in the backup configuration that's not
>>>> available on the current node.
>>>
>>> Better than the status quo, but in the long run all the "force all volumes to a single storage"
>>> on restore and also migrate isn't ideal for the case where one or more storages do not exist on
>>> the target node. A per-volume override would be nicer, but may require some GUI adaptations to
>>> present that in a sensible way with good UX.
>>>
>>
>> In the UI, it could simply be part of the disk grid (proposed in patch
>> manager 3/3), only showing up for drives selected from the backup?
> 
> exactly what I thought too.

My first attempt using a widgetcolumn with a storage selector failed, as
it would issue an API call for each disk... I'll try to come up with some
way of sharing/fixing the store to make it work. In v3, I forgot to
group the override settings in a fieldset; I will do so in v4.

> 
>>
>> In the back-end for migration, we have a storage-storage map, but here
>> we'd need a drive-storage map. It'd be possible to extend the 'storage'
>> parameter for the create/restore API call to be such a map, but I wonder
>> if going for a 'restore-drives' parameter being such a map (and
>> replacing the proposed 'preserve-drives' parameter) would be better?
> 
> hmm, possibly
> 
>>
>> The downside is, we'd have to choose between
>> A) preserve disk and config
>> B) preserve disk as unused
>> for the drives that are not present in the backup. A) would be more
>> convenient in the partial restore context, but B) is the current
>> default. Thus we need to keep B) if 'restore-drives' is not specified at
>> all for backwards-compatibility, but can choose A) if 'restore-drives'
>> is specified. But doing so seems a little inconsistent regarding user
>> expectation.
> 
> would a more general src:dest map help, for example (just to give the very
> rough direction meaning here):
> 
> not present or `scsi1:backup` <- would be restored as in the backup (config)
> `scsi1:store=foo` <- as in config put on another storage
> `scsi1:preserve` <- preserve from existing installation being overwritten
> 
> The actual left/right hand sides would need to get fleshed out to fit our
> use cases best, but 

I sent a v3 yesterday with something similar.




^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread

Thread overview: 30+ messages
2022-04-21 11:26 [pve-devel] [PATCH-SERIES v2 qemu/qemu-server/widget-toolkit/manager] more flexible restore Fabian Ebner
2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 1/2] vma: restore: call blk_unref for all opened block devices Fabian Ebner
2022-04-25  6:37   ` Wolfgang Bumiller
2022-04-25 16:10   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu 2/2] vma: allow partial restore Fabian Ebner
2022-04-25  6:40   ` Wolfgang Bumiller
2022-04-25 16:10   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 1/7] restore: cleanup oldconf: also clean up snapshots from kept volumes Fabian Ebner
2022-04-25 16:20   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 2/7] restore destroy volumes: remove check for absolute path Fabian Ebner
2022-04-25 16:20   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 3/7] restore deactivate volumes: never die Fabian Ebner
2022-04-23  9:18   ` Thomas Lamprecht
2022-04-25  6:45     ` Fabian Ebner
2022-04-25 16:21   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 4/7] restore: also deactivate/destroy cloud-init disk upon error Fabian Ebner
2022-04-25 16:21   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 5/7] api: create: refactor parameter check logic Fabian Ebner
2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 6/7] api: create: allow overriding non-disk options during restore Fabian Ebner
2022-04-21 11:26 ` [pve-devel] [PATCH v2 qemu-server 7/7] restore: allow preserving drives " Fabian Ebner
2022-04-21 11:26 ` [pve-devel] [PATCH v2 widget-toolkit 1/1] css: add proxmox-good-row class Fabian Ebner
2022-04-23  8:32   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 1/3] ui: restore: disallow empty storage selection if it wouldn't work Fabian Ebner
2022-04-23  9:38   ` Thomas Lamprecht
2022-04-25  7:28     ` Fabian Ebner
2022-04-27  7:05       ` Thomas Lamprecht
2022-04-27  8:21         ` Fabian Ebner
2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 2/3] ui: restore: allow override of some settings Fabian Ebner
2022-04-23 10:07   ` Thomas Lamprecht
2022-04-21 11:26 ` [pve-devel] [PATCH v2 manager 3/3] ui: restore: allow preserving disks Fabian Ebner
