public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests
@ 2021-08-06 13:46 Aaron Lauterer
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 storage 1/9] storage: add new find_free_volname Aaron Lauterer
                   ` (9 more replies)
  0 siblings, 10 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

This is the continuation of 'disk-reassign', but instead of a separate
API endpoint we now make it part of the 'move-disk' and 'move-volume'
endpoints for VMs and containers.

The main idea is to make it easy to move a disk/volume to another guest.
Currently this is a manual and error-prone process that requires
knowledge of how PVE handles disks/volumes and of which guest they
belong to.

With this, 'qm move-disk' and 'pct move-volume' are changed so that the
storage parameter becomes optional, as are the new target-vmid and
target-{disk,volume} parameters. Old calls that move the disk/volume to
another storage keep working. To move to another guest, the storage
parameter needs to be omitted.
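
A minimal CLI sketch for illustration (the VMIDs, CT IDs, disk/volume
keys and storage ID below are made up):

    # move a disk to another VM (no storage given, new parameters used)
    qm move-disk 100 scsi1 --target-vmid 101 --target-disk scsi2

    # move a mount point to another container
    pct move-volume 200 mp0 --target-vmid 201 --target-volume mp1

    # old behaviour, moving to another storage, keeps working
    qm move-disk 100 scsi1 other-storage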

Major changes since the last iteration as a dedicated API endpoint [0]
are that the storage layer now only implements the renaming itself. The
layer above (qemu-server and pve-container) defines the name of the new
volume/disk. Therefore it was necessary to expose the new
'find_free_volname' function. The rename function on the storage layer
handles possible template references and the creation of the new volid,
as that is highly dependent on the actual storage.

The following storage types are implemented at the moment:
* dir based ones
* ZFS
* (thin) LVM
* Ceph RBD


Most of the disk-reassign code has been taken over and moved into the
'move_disk' and 'move_volume' endpoints, with conditional checks that
decide, depending on the given parameters, whether the reassign code or
the move-to-another-storage code runs.

Changes since v1 [2] (thx @ Fabian_E for the reviews!):
* drop exposed 'find_free_diskname' method
* drop 'wants_fmt_suffix' method (not needed anymore)
* introduce 'find_free_volname' which decides if only the diskname is
  needed or the longer path for directory based storages
* use $source_volname instead of $source_volid -> avoids some extra
  calls to get to $source_volname again
* make --target-{disk,volume} optional and fall back to source key
* smaller fixes in code quality and using existing functions like
  'parse_volname' instead of a custom regex (possible with the new
  changes)


Changes since the RFC [1]:
* added check if target guest is replicated and fail if storage does not
  support replication
* only pass minimum of needed parameters to the storage layer and infer
  other needed information from that
* lock storage and check if the volume already exists (handling a
  possible race condition between calling find_free_disk and the actual
  renaming)
* use a helper method to determine if the plugin needs the fmt suffix
  in the volume name
* getting format of the source and pass it to find_free_disk
* style fixes (long lines, multiline post-if, ...)

[0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047481.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2021-June/048400.html
[2] https://lists.proxmox.com/pipermail/pve-devel/2021-July/049445.html

storage: Aaron Lauterer (2):
  storage: add new find_free_volname
  add disk rename feature

 PVE/Storage.pm               | 29 +++++++++++++++++--
 PVE/Storage/LVMPlugin.pm     | 24 ++++++++++++++++
 PVE/Storage/LvmThinPlugin.pm |  1 +
 PVE/Storage/Plugin.pm        | 54 ++++++++++++++++++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm     | 28 +++++++++++++++++++
 PVE/Storage/ZFSPoolPlugin.pm | 24 ++++++++++++++++
 6 files changed, 158 insertions(+), 2 deletions(-)


qemu-server: Aaron Lauterer (4):
  cli: qm: change move_disk to move-disk
  Drive: add valid_drive_names_with_unused
  api: move-disk: add move to other VM
  api: move-disk: cleanup very long lines

 PVE/API2/Qemu.pm        | 263 ++++++++++++++++++++++++++++++++++++++--
 PVE/CLI/qm.pm           |   3 +-
 PVE/QemuServer/Drive.pm |   4 +
 3 files changed, 259 insertions(+), 11 deletions(-)

container: Aaron Lauterer (3):
  cli: pct: change move_volume to move-volume
  api: move-volume: add move to another container
  api: move-volume: cleanup very long lines

 src/PVE/API2/LXC.pm | 301 ++++++++++++++++++++++++++++++++++++++++----
 src/PVE/CLI/pct.pm  |   3 +-
 2 files changed, 276 insertions(+), 28 deletions(-)

-- 
2.30.2






* [pve-devel] [PATCH v2 storage 1/9] storage: add new find_free_volname
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
@ 2021-08-06 13:46 ` Aaron Lauterer
  2021-08-12 12:50   ` Fabian Ebner
  2021-08-06 13:46 ` [pve-devel] [PATCH storage 2/9] add disk rename feature Aaron Lauterer
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

This new method exposes the functionality to request a new, not yet
used, volname for a storage.

The default implementation will return the result from
'find_free_diskname' prefixed with "<VMID>/" if $scfg->{path} exists.
Otherwise it will only return the result from 'find_free_diskname'.

Whether the format suffix is added also depends on the existence of
$scfg->{path}.

$scfg->{path} will be present for directory based storage types.

Should a storage need to return a different volname, it needs to override
the 'find_free_volname' method.
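
As a rough illustration (the storage IDs, VMID and resulting disk name
below are made up; they assume 'vm-101-disk-2' is the next free name):

    my $volname = PVE::Storage::find_free_volname($cfg, 'local', 101, 'qcow2');
    # dir based storage ($scfg->{path} set):  "101/vm-101-disk-2.qcow2"
    # e.g. ZFS, LVM, RBD (no $scfg->{path}):  "vm-101-disk-2"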

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---

v1 -> v2:
* drop exposed 'find_free_diskname' in favor of 'find_free_volname'
* dropped 'wants_fmt_suffix' as 'find_free_volname' now decides that itself

rfc -> v1:
dropped $add_fmt_suffix parameter and added the "wants_fmt_suffix"
helper method in each plugin.

 PVE/Storage.pm        |  9 +++++++++
 PVE/Storage/Plugin.pm | 11 +++++++++++
 2 files changed, 20 insertions(+)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index c04b5a2..c38fe7b 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -203,6 +203,15 @@ sub storage_can_replicate {
     return $plugin->storage_can_replicate($scfg, $storeid, $format);
 }
 
+sub find_free_volname {
+    my ($cfg, $storeid, $vmid, $fmt) = @_;
+
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+    return $plugin->find_free_volname($storeid, $scfg, $vmid, $fmt);
+}
+
 sub storage_ids {
     my ($cfg) = @_;
 
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index b1865cb..e043329 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -685,6 +685,17 @@ sub find_free_diskname {
     return get_next_vm_diskname($disk_list, $storeid, $vmid, $fmt, $scfg, $add_fmt_suffix);
 }
 
+sub find_free_volname {
+    my ($class, $storeid, $scfg, $vmid, $fmt) = @_;
+
+    my $diskname = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt, exists($scfg->{path}));
+
+    if (exists($scfg->{path})) {
+	return "${vmid}/$diskname";
+    }
+    return $diskname;
+}
+
 sub clone_image {
     my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
 
-- 
2.30.2






* [pve-devel] [PATCH storage 2/9] add disk rename feature
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 storage 1/9] storage: add new find_free_volname Aaron Lauterer
@ 2021-08-06 13:46 ` Aaron Lauterer
  2021-08-12 12:51   ` Fabian Ebner
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 3/9] cli: qm: change move_disk to move-disk Aaron Lauterer
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

Functionality has been added for the following storage types:

* directory ones, based on the default implementation:
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph

A new feature `rename` has been introduced to mark which storage
plugin supports the feature.

Version API and AGE have been bumped.

The storage gets locked and each plugin checks if the target volume
already exists prior to renaming. This is done because there could be a
race condition between the time the external caller requests a new free
disk name and the time the volume is actually renamed.
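
A rough sketch of the intended caller pattern (the storage ID, VMID and
volid are made up; the actual callers live in qemu-server/pve-container):

    my $target_volname = PVE::Storage::find_free_volname($cfg, 'local-lvm', 101, 'raw');
    # rename_volume takes the storage lock and the plugin dies if
    # $target_volname was taken in the meantime
    my $new_volid = PVE::Storage::rename_volume($cfg, 'local-lvm:vm-100-disk-0', $target_volname);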

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v1 -> v2:
* many small fixes and improvements
* rename_volume now accepts $source_volname instead of $source_volid,
    which helps us avoid parsing the volid a second time

rfc -> v1:
* reduced number of parameters to minimum needed, plugins infer needed
  information themselves
* added storage locking and checking if volume already exists
* parse target_volname prior to renaming to check if valid

old dedicated API endpoint -> rfc:
only do the rename now, but the rename function handles templates and
returns the new volid, as this can be handled differently on some
storages.


 PVE/Storage.pm               | 20 +++++++++++++++--
 PVE/Storage/LVMPlugin.pm     | 24 ++++++++++++++++++++
 PVE/Storage/LvmThinPlugin.pm |  1 +
 PVE/Storage/Plugin.pm        | 43 ++++++++++++++++++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm     | 28 +++++++++++++++++++++++
 PVE/Storage/ZFSPoolPlugin.pm | 24 ++++++++++++++++++++
 6 files changed, 138 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index c38fe7b..2430991 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -41,11 +41,11 @@ use PVE::Storage::PBSPlugin;
 use PVE::Storage::BTRFSPlugin;
 
 # Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 9;
+use constant APIVER => 10;
 # Age is the number of versions we're backward compatible with.
 # This is like having 'current=APIVER' and age='APIAGE' in libtool,
 # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 0;
+use constant APIAGE => 1;
 
 # load standard plugins
 PVE::Storage::DirPlugin->register();
@@ -358,6 +358,7 @@ sub volume_snapshot_needs_fsfreeze {
 #            snapshot - taking a snapshot is possible
 #            sparseinit - volume is sparsely initialized
 #            template - conversion to base image is possible
+#            rename - renaming volumes is possible
 # $snap - check if the feature is supported for a given snapshot
 # $running - if the guest owning the volume is running
 # $opts - hash with further options:
@@ -1866,6 +1867,21 @@ sub complete_volume {
     return $res;
 }
 
+sub rename_volume {
+    my ($cfg, $source_volid, $target_volname) = @_;
+
+    my ($storeid, $source_volname) = parse_volume_id($source_volid);
+
+    activate_storage($cfg, $storeid);
+
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+    return $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+	return $plugin->rename_volume($scfg, $storeid, $source_volname, $target_volname);
+    });
+}
+
 # Various io-heavy operations require io/bandwidth limits which can be
 # configured on multiple levels: The global defaults in datacenter.cfg, and
 # per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 139d391..d28a94c 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -339,6 +339,15 @@ sub lvcreate {
     run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
 }
 
+sub lvrename {
+    my ($vg, $oldname, $newname) = @_;
+
+    run_command(
+	['/sbin/lvrename', $vg, $oldname, $newname],
+	errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error"
+    );
+}
+
 sub alloc_image {
     my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
 
@@ -584,6 +593,7 @@ sub volume_has_feature {
 
     my $features = {
 	copy => { base => 1, current => 1},
+	rename => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -692,4 +702,18 @@ sub volume_import_write {
 	input => '<&'.fileno($input_fh));
 }
 
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname) = @_;
+
+    $class->parse_volname($target_volname);
+
+    my $vg = $scfg->{vgname};
+    my $lvs = lvm_list_volumes($vg);
+    die "target volume '${target_volname}' already exists\n"
+	if ($lvs->{$vg}->{$target_volname});
+
+    lvrename($vg, $source_volname, $target_volname);
+    return "${storeid}:${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index 4ba6f90..c24af22 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -355,6 +355,7 @@ sub volume_has_feature {
 	template => { current => 1},
 	copy => { base => 1, current => 1, snap => 1},
 	sparseinit => { base => 1, current => 1},
+	rename => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index e043329..7b75252 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -971,6 +971,7 @@ sub volume_has_feature {
 		  snap => {qcow2 => 1} },
 	sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
 			current => {qcow2 => 1, raw => 1, vmdk => 1} },
+	rename => { current => {qcow2 => 1, raw => 1, vmdk => 1} },
     };
 
     # clone_image creates a qcow2 volume
@@ -978,6 +979,14 @@ sub volume_has_feature {
 		defined($opts->{valid_target_formats}) &&
 		!(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
 
+    if (
+	$feature eq 'rename'
+	&& $class->can('api')
+	&& $class->api() < 10
+    ) {
+	return 0;
+    }
+
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
 	$class->parse_volname($volname);
 
@@ -1495,4 +1504,38 @@ sub volume_import_formats {
     return ();
 }
 
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname) = @_;
+    die "no path found\n" if !exists($scfg->{path});
+    die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 10;
+
+    my (undef, undef, $target_vmid) = $class->parse_volname($target_volname);
+
+    my (
+	undef,
+	$source_name,
+	$source_vmid,
+	$base_name,
+	$base_vmid,
+	undef,
+	$format
+    ) = $class->parse_volname($source_volname);
+
+    my $basedir = $class->get_subdir($scfg, 'images');
+
+    mkpath "${basedir}/${target_vmid}";
+
+    my $old_path = "${basedir}/${source_volname}";
+    my $new_path = "${basedir}/${target_volname}";
+
+    die "target volume '${target_volname}' already exists\n" if -e $new_path;
+
+    my $base = $base_name ? "${base_vmid}/${base_name}/" : '';
+
+    rename($old_path, $new_path) ||
+	die "rename '$old_path' to '$new_path' failed - $!\n";
+
+    return "${storeid}:${base}${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index a8d1243..88350d2 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -728,6 +728,7 @@ sub volume_has_feature {
 	template => { current => 1},
 	copy => { base => 1, current => 1, snap => 1},
 	sparseinit => { base => 1, current => 1},
+	rename => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
@@ -743,4 +744,31 @@ sub volume_has_feature {
     return undef;
 }
 
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname) = @_;
+
+    $class->parse_volname($target_volname);
+
+    my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);
+
+    eval {
+	my $cmd = $rbd_cmd->($scfg, $storeid, 'info', $target_volname);
+	run_rbd_command($cmd, errmsg => "exist check",  quiet => 1, noerr => 1);
+    };
+    my $err = $@;
+    die "target volume '${target_volname}' already exists\n"
+	if !$err;
+
+    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
+
+    run_rbd_command(
+	$cmd,
+	errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
+    );
+
+    $base_name = $base_name ? "${base_name}/" : '';
+
+    return "${storeid}:${base_name}${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index c4be70f..543eef9 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -687,6 +687,7 @@ sub volume_has_feature {
 	copy => { base => 1, current => 1},
 	sparseinit => { base => 1, current => 1},
 	replicate => { base => 1, current => 1},
+	rename => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -789,4 +790,27 @@ sub volume_import_formats {
     return $class->volume_export_formats($scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots);
 }
 
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname) = @_;
+
+    print "parsing volname: $target_volname\n";
+    $class->parse_volname($target_volname);
+
+    my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);
+
+    my $pool = $scfg->{pool};
+    my $source_zfspath = "${pool}/${source_volname}";
+    my $target_zfspath = "${pool}/${target_volname}";
+
+    my $exists = 0 == run_command(['zfs', 'get', '-H', 'name', $target_zfspath],
+				  noerr => 1, quiet => 1);
+    die "target volume '${target_volname}' already exists\n" if $exists;
+
+    $class->zfs_request($scfg, 5, 'rename', ${source_zfspath}, ${target_zfspath});
+
+    $base_name = $base_name ? "${base_name}/" : '';
+
+    return "${storeid}:${base_name}${target_volname}";
+}
+
 1;
-- 
2.30.2






* [pve-devel] [PATCH v2 qemu-server 3/9] cli: qm: change move_disk to move-disk
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 storage 1/9] storage: add new find_free_volname Aaron Lauterer
  2021-08-06 13:46 ` [pve-devel] [PATCH storage 2/9] add disk rename feature Aaron Lauterer
@ 2021-08-06 13:46 ` Aaron Lauterer
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 4/9] Drive: add valid_drive_names_with_unused Aaron Lauterer
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

also add alias to keep move_disk working.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/CLI/qm.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 8307dc1..ef99b6d 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -910,7 +910,8 @@ our $cmddef = {
 
     resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
 
-    move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+    move_disk => { alias => 'move-disk' },
 
     unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
 
-- 
2.30.2






* [pve-devel] [PATCH v2 qemu-server 4/9] Drive: add valid_drive_names_with_unused
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
                   ` (2 preceding siblings ...)
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 3/9] cli: qm: change move_disk to move-disk Aaron Lauterer
@ 2021-08-06 13:46 ` Aaron Lauterer
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 5/9] api: move-disk: add move to other VM Aaron Lauterer
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v1 -> v2: fixed spacing between - and 1

 PVE/QemuServer/Drive.pm | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 5110190..8828aad 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -393,6 +393,10 @@ sub valid_drive_names {
             'efidisk0');
 }
 
+sub valid_drive_names_with_unused {
+    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS - 1)));
+}
+
 sub is_valid_drivename {
     my $dev = shift;
 
-- 
2.30.2






* [pve-devel] [PATCH v2 qemu-server 5/9] api: move-disk: add move to other VM
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
                   ` (3 preceding siblings ...)
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 4/9] Drive: add valid_drive_names_with_unused Aaron Lauterer
@ 2021-08-06 13:46 ` Aaron Lauterer
  2021-08-13  7:41   ` Fabian Ebner
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 6/9] api: move-disk: cleanup very long lines Aaron Lauterer
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

The goal of this is to expand the move-disk API endpoint to make it
possible to move a disk to another VM. Previously this was only possible
with manual intervention, either by renaming the VM disk or by manually
adding the disk's volid to the config of the other VM.
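
A rough usage sketch over the API (the node name, VMIDs and disk keys
are made up; this assumes the existing 'move_disk' API path):

    pvesh create /nodes/node1/qemu/100/move_disk --disk scsi1 \
        --target-vmid 170 --target-disk scsi1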

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v1 -> v2:
* make --target-disk optional and use source disk key as fallback
* use parse_volname instead of custom regex
* adapt to find_free_volname
* smaller (style) fixes

rfc -> v1:
* add check if target guest is replicated and fail if the moved volume
  does not support it
* check if source volume has a format suffix and pass it to
  'find_free_disk'
* fixed some style nits

old dedicated api endpoint -> rfc:
There are some big changes here. The old [0] dedicated API endpoint is
gone and most of its code is now part of move_disk. Error messages have
been changed accordingly and sometimes enhanced by adding disk keys and
VMIDs where appropriate.

Since a move to another guest should also be possible for unused disks,
we need to check before doing a move to another storage that we are not
handling an unused disk.

 PVE/API2/Qemu.pm | 238 ++++++++++++++++++++++++++++++++++++++++++++++-
 PVE/CLI/qm.pm    |   2 +-
 2 files changed, 234 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index ef0d877..30e222a 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
 use PVE::VZDump::Plugin;
 use PVE::DataCenterConfig;
 use PVE::SSHInfo;
+use PVE::Replication;
 
 BEGIN {
     if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3274,9 +3275,11 @@ __PACKAGE__->register_method({
     method => 'POST',
     protected => 1,
     proxyto => 'node',
-    description => "Move volume to different storage.",
+    description => "Move volume to different storage or to a different VM.",
     permissions => {
-	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
+	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
+	    "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
+	    "a disk to another VM, you need the permissions on the target VM as well.",
 	check => [ 'and',
 		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
 		   ['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
@@ -3287,14 +3290,19 @@ __PACKAGE__->register_method({
 	properties => {
 	    node => get_standard_option('pve-node'),
 	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    'target-vmid' => get_standard_option('pve-vmid', {
+		completion => \&PVE::QemuServer::complete_vmid,
+		optional => 1,
+	    }),
 	    disk => {
 	        type => 'string',
 		description => "The disk you want to move.",
-		enum => [PVE::QemuServer::Drive::valid_drive_names()],
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
 	    },
             storage => get_standard_option('pve-storage-id', {
 		description => "Target storage.",
 		completion => \&PVE::QemuServer::complete_storage,
+		optional => 1,
             }),
             'format' => {
                 type => 'string',
@@ -3321,6 +3329,20 @@ __PACKAGE__->register_method({
 		minimum => '0',
 		default => 'move limit from datacenter or storage config',
 	    },
+	    'target-disk' => {
+	        type => 'string',
+		description => "The config key the disk will be moved to on the target VM " .
+		    "(for example, ide0 or scsi1).",
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+		optional => 1,
+	    },
+	    'target-digest' => {
+		type => 'string',
+		description => "Prevent changes if current configuration file of the target VM has " .
+		    "a different SHA1 digest. This can be used to prevent concurrent modifications.",
+		maxLength => 40,
+		optional => 1,
+	    },
 	},
     },
     returns => {
@@ -3335,14 +3357,22 @@ __PACKAGE__->register_method({
 
 	my $node = extract_param($param, 'node');
 	my $vmid = extract_param($param, 'vmid');
+	my $target_vmid = extract_param($param, 'target-vmid');
 	my $digest = extract_param($param, 'digest');
+	my $target_digest = extract_param($param, 'target-digest');
 	my $disk = extract_param($param, 'disk');
+	my $target_disk = extract_param($param, 'target-disk') // $disk;
 	my $storeid = extract_param($param, 'storage');
 	my $format = extract_param($param, 'format');
 
+	die "either set storage or target-vmid, but not both\n"
+	    if $storeid && $target_vmid;
+
+
 	my $storecfg = PVE::Storage::config();
+	my $source_volid;
 
-	my $updatefn =  sub {
+	my $move_updatefn =  sub {
 	    my $conf = PVE::QemuConfig->load_config($vmid);
 	    PVE::QemuConfig->check_lock($conf);
 
@@ -3452,7 +3482,205 @@ __PACKAGE__->register_method({
             return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
 	};
 
-	return PVE::QemuConfig->lock_config($vmid, $updatefn);
+	my $load_and_check_reassign_configs = sub {
+	    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+
+	    if ($vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node}) {
+		die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) ".
+		    "but target VM is on $vmlist->{$target_vmid}->{node}.\n";
+	    }
+
+	    my $source_conf = PVE::QemuConfig->load_config($vmid);
+	    PVE::QemuConfig->check_lock($source_conf);
+	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+	    PVE::QemuConfig->check_lock($target_conf);
+
+	    die "Can't move disks from or to template VMs\n"
+		if ($source_conf->{template} || $target_conf->{template});
+
+	    if ($digest) {
+		eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${vmid}: ${err}";
+		}
+	    }
+
+	    if ($target_digest) {
+		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${target_vmid}: ${err}";
+		}
+	    }
+
+	    die "Disk '${disk}' for VM '$vmid' does not exist\n"
+		if !defined($source_conf->{$disk});
+
+	    die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
+		if exists($target_conf->{$target_disk});
+
+	    my $drive = PVE::QemuServer::parse_drive(
+		$disk,
+		$source_conf->{$disk},
+	    );
+	    $source_volid = $drive->{file};
+
+	    die "disk '${disk}' has no associated volume\n"
+		if !$source_volid;
+	    die "CD drive contents can't be moved to another VM\n"
+		if PVE::QemuServer::drive_is_cdrom($drive, 1);
+	    die "Can't move physical disk to another VM\n" if $source_volid =~ m|^/dev/|;
+	    if (PVE::QemuServer::Drive::is_volume_in_use(
+		    $storecfg,
+		    $source_conf,
+		    $disk,
+		    $source_volid,
+		)) {
+		die "Can't move disk used by a snapshot to another VM\n"
+	    }
+
+	    if (!PVE::Storage::volume_has_feature(
+		    $storecfg,
+		    'rename',
+		    $source_volid,
+		)) {
+		die "Storage does not support moving of this disk to another VM\n"
+	    }
+
+	    die "Cannot move disk to another VM while the source VM is running\n"
+		if PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/;
+
+	    if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
+		my $interface = $1;
+		my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
+		eval {
+		    PVE::JSONSchema::parse_property_string(
+			$desc->{format},
+			$source_conf->{$disk},
+		    )
+		};
+		if (my $err = $@) {
+		    die "Cannot move disk to another VM: ${err}";
+		}
+	    }
+
+	    my $repl_conf = PVE::ReplicationConfig->new();
+	    my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
+	    my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
+	    my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+	    if ($is_replicated && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
+		die "Cannot move disk to a replicated VM. Storage does not support replication!\n";
+	    }
+
+	    return ($source_conf, $target_conf);
+	};
+
+	my $logfunc = sub {
+	    my ($msg) = @_;
+	    print STDERR "$msg\n";
+	};
+
+	my $disk_reassignfn = sub {
+	    return PVE::QemuConfig->lock_config($vmid, sub {
+		return PVE::QemuConfig->lock_config($target_vmid, sub {
+		    my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
+
+		    my $drive_param = PVE::QemuServer::parse_drive(
+			$target_disk,
+			$source_conf->{$disk},
+		    );
+
+		    print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
+		    my ($storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
+
+		    my (
+			undef,
+			undef,
+			undef,
+			undef,
+			undef,
+			undef,
+			$fmt
+		    ) = PVE::Storage::parse_volname($storecfg, $source_volid);
+
+		    my $target_volname = PVE::Storage::find_free_volname(
+			$storecfg,
+			$storeid,
+			$target_vmid,
+			$fmt
+		    );
+
+		    my $new_volid = PVE::Storage::rename_volume(
+			$storecfg,
+			$source_volid,
+			$target_volname,
+		    );
+
+		    $drive_param->{file} = $new_volid;
+
+		    delete $source_conf->{$disk};
+		    print "removing disk '${disk}' from VM '${vmid}' config\n";
+		    PVE::QemuConfig->write_config($vmid, $source_conf);
+
+		    my $drive_string = PVE::QemuServer::print_drive($drive_param);
+		    &$update_vm_api(
+			{
+			    node => $node,
+			    vmid => $target_vmid,
+			    digest => $target_digest,
+			    $target_disk => $drive_string,
+			},
+			1,
+		    );
+
+		    # remove possible replication snapshots
+		    if (PVE::Storage::volume_has_feature(
+			    $storecfg,
+			    'replicate',
+			    $source_volid),
+		    ) {
+			eval {
+			    PVE::Replication::prepare(
+				$storecfg,
+				[$new_volid],
+				undef,
+				1,
+				undef,
+				$logfunc,
+			    )
+			};
+			if (my $err = $@) {
+			    print "Failed to remove replication snapshots on moved disk " .
+				"'$target_disk'. Manual cleanup could be necessary.\n";
+			}
+		    }
+		});
+	    });
+	};
+
+	if ($target_vmid) {
+	    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+		if $authuser ne 'root@pam';
+
+	    die "Moving a disk to the same VM is not possible. Did you mean to ".
+		"move the disk to a different storage?\n"
+		if $vmid eq $target_vmid;
+
+	    &$load_and_check_reassign_configs();
+	    return $rpcenv->fork_worker(
+		'qmmove',
+		"${vmid}-${disk}>${target_vmid}-${target_disk}",
+		$authuser,
+		$disk_reassignfn
+	    );
+	} elsif ($storeid) {
+	    die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
+		if $disk =~ m/^unused\d+$/;
+	    return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
+	} else {
+	    die "Provide either a 'storage' to move the disk to a different " .
+		"storage or 'target-vmid' and 'target-disk' to move the disk " .
+		"to another VM\n";
+	}
     }});
 
 my $check_vm_disks_local = sub {
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index ef99b6d..a92d301 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -910,7 +910,7 @@ our $cmddef = {
 
     resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
 
-    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
     move_disk => { alias => 'move-disk' },
 
     unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
-- 
2.30.2






* [pve-devel] [PATCH v2 qemu-server 6/9] api: move-disk: cleanup very long lines
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
                   ` (4 preceding siblings ...)
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 5/9] api: move-disk: add move to other VM Aaron Lauterer
@ 2021-08-06 13:46 ` Aaron Lauterer
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 7/9] cli: pct: change move_volume to move-volume Aaron Lauterer
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/API2/Qemu.pm | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 30e222a..1b3ec90 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3312,13 +3312,15 @@ __PACKAGE__->register_method({
             },
 	    delete => {
 		type => 'boolean',
-		description => "Delete the original disk after successful copy. By default the original disk is kept as unused disk.",
+		description => "Delete the original disk after successful copy. By default the " .
+		    "original disk is kept as unused disk.",
 		optional => 1,
 		default => 0,
 	    },
 	    digest => {
 		type => 'string',
-		description => 'Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.',
+		description => "Prevent changes if current configuration file has different SHA1 " .
+		    "digest. This can be used to prevent concurrent modifications.",
 		maxLength => 40,
 		optional => 1,
 	    },
@@ -3397,11 +3399,20 @@ __PACKAGE__->register_method({
                 (!$format || !$oldfmt || $oldfmt eq $format);
 
 	    # this only checks snapshots because $disk is passed!
-	    my $snapshotted = PVE::QemuServer::Drive::is_volume_in_use($storecfg, $conf, $disk, $old_volid);
+	    my $snapshotted = PVE::QemuServer::Drive::is_volume_in_use(
+		$storecfg,
+		$conf,
+		$disk,
+		$old_volid
+	    );
 	    die "you can't move a disk with snapshots and delete the source\n"
 		if $snapshotted && $param->{delete};
 
-	    PVE::Cluster::log_msg('info', $authuser, "move disk VM $vmid: move --disk $disk --storage $storeid");
+	    PVE::Cluster::log_msg(
+		'info',
+		$authuser,
+		"move disk VM $vmid: move --disk $disk --storage $storeid"
+	    );
 
 	    my $running = PVE::QemuServer::check_running($vmid);
 
@@ -3420,7 +3431,11 @@ __PACKAGE__->register_method({
 			if $snapshotted;
 
 		    my $bwlimit = extract_param($param, 'bwlimit');
-		    my $movelimit = PVE::Storage::get_bandwidth_limit('move', [$oldstoreid, $storeid], $bwlimit);
+		    my $movelimit = PVE::Storage::get_bandwidth_limit(
+			'move',
+			[$oldstoreid, $storeid],
+			$bwlimit
+		    );
 
 		    my $newdrive = PVE::QemuServer::clone_disk(
 			$storecfg,
-- 
2.30.2






* [pve-devel] [PATCH v2 container 7/9] cli: pct: change move_volume to move-volume
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
                   ` (5 preceding siblings ...)
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 6/9] api: move-disk: cleanup very long lines Aaron Lauterer
@ 2021-08-06 13:46 ` Aaron Lauterer
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 8/9] api: move-volume: add move to another container Aaron Lauterer
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

also add alias to keep move_volume working

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v1 -> v2: fix alias to actually point to move-volume

 src/PVE/CLI/pct.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 254b3b3..462917b 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -849,7 +849,8 @@ our $cmddef = {
 
     clone => [ "PVE::API2::LXC", 'clone_vm', ['vmid', 'newid'], { node => $nodename }, $upid_exit ],
     migrate => [ "PVE::API2::LXC", 'migrate_vm', ['vmid', 'target'], { node => $nodename }, $upid_exit],
-    move_volume => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
+    'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage', 'target-vmid', 'target-volume'], { node => $nodename }, $upid_exit ],
+    move_volume => { alias => 'move-volume' },
 
     snapshot => [ "PVE::API2::LXC::Snapshot", 'snapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
     delsnapshot => [ "PVE::API2::LXC::Snapshot", 'delsnapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
-- 
2.30.2






* [pve-devel] [PATCH v2 container 8/9] api: move-volume: add move to another container
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
                   ` (6 preceding siblings ...)
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 7/9] cli: pct: change move_volume to move-volume Aaron Lauterer
@ 2021-08-06 13:46 ` Aaron Lauterer
  2021-08-13  8:21   ` Fabian Ebner
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 9/9] api: move-volume: cleanup very long lines Aaron Lauterer
  2021-08-13  8:29 ` [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Fabian Ebner
  9 siblings, 1 reply; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

The goal of this is to expand the move-volume API endpoint to make it
possible to move a container volume / mount point to another container.

Currently it works for regular mount points, though it would be nice to
be able to do it for unused mount points as well.
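
A minimal usage sketch (the CT IDs and mount point keys are made up):

    # --target-volume is optional and falls back to the source key (mp0)
    pct move-volume 201 mp0 --target-vmid 202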

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
This is mostly the code from qemu-server with some adaptations, mainly
error messages and some checks.

Previous checks have been moved to '$move_to_storage_checks'.

v1 -> v2:
* change --target-mp to --target-volume
* make --target-volume optional and fallback to source mount point
* use parse_volname instead of custom regex
* adapt to find_free_volname
* print warnings from update_pct_config
* move running check back in both lock_config sections
* fixed a few style issues

rfc -> v1:
* add check if target guest is replicated and fail if the moved volume
  does not support it
* check if source volume has a format suffix and pass it to
  'find_free_disk' or if the prefix is vm/subvol as those also have
  their own meaning, see the comment in the code
* fixed some style nits
 src/PVE/API2/LXC.pm | 268 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 247 insertions(+), 21 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index afef7ec..89f9de8 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1784,10 +1784,12 @@ __PACKAGE__->register_method({
     method => 'POST',
     protected => 1,
     proxyto => 'node',
-    description => "Move a rootfs-/mp-volume to a different storage",
+    description => "Move a rootfs-/mp-volume to a different storage or to a different container.",
     permissions => {
 	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
-	    "and 'Datastore.AllocateSpace' permissions on the storage.",
+	    "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
+	    "a volume to another container, you need the permissions on the ".
+	    "target container as well.",
 	check =>
 	[ 'and',
 	  ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
@@ -1799,14 +1801,20 @@ __PACKAGE__->register_method({
 	properties => {
 	    node => get_standard_option('pve-node'),
 	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::LXC::complete_ctid }),
+	    'target-vmid' => get_standard_option('pve-vmid', {
+		completion => \&PVE::LXC::complete_ctid,
+		optional => 1,
+	    }),
 	    volume => {
 		type => 'string',
+		#TODO: check how to handle unused mount points as the mp parameter is not configured
 		enum => [ PVE::LXC::Config->valid_volume_keys() ],
 		description => "Volume which will be moved.",
 	    },
 	    storage => get_standard_option('pve-storage-id', {
 		description => "Target Storage.",
 		completion => \&PVE::Storage::complete_storage_enabled,
+		optional => 1,
 	    }),
 	    delete => {
 		type => 'boolean',
@@ -1827,6 +1835,20 @@ __PACKAGE__->register_method({
 		minimum => '0',
 		default => 'clone limit from datacenter or storage config',
 	    },
+	    'target-volume' => {
+	        type => 'string',
+		description => "The config key the volume will be moved to.",
+		enum => [PVE::LXC::Config->valid_volume_keys()],
+		optional => 1,
+	    },
+	    'target-digest' => {
+		type => 'string',
+		description => "Prevent changes if current configuration file of the target " .
+		    "container has a different SHA1 digest. This can be used to prevent " .
+		    "concurrent modifications.",
+		maxLength => 40,
+		optional => 1,
+	    },
 	},
     },
     returns => {
@@ -1841,32 +1863,48 @@ __PACKAGE__->register_method({
 
 	my $vmid = extract_param($param, 'vmid');
 
+	my $target_vmid = extract_param($param, 'target-vmid');
+
 	my $storage = extract_param($param, 'storage');
 
 	my $mpkey = extract_param($param, 'volume');
 
+	my $target_mpkey = extract_param($param, 'target-volume') // $mpkey;
+
+	my $digest = extract_param($param, 'digest');
+
+	my $target_digest = extract_param($param, 'target-digest');
+
 	my $lockname = 'disk';
 
 	my ($mpdata, $old_volid);
 
-	PVE::LXC::Config->lock_config($vmid, sub {
-	    my $conf = PVE::LXC::Config->load_config($vmid);
-	    PVE::LXC::Config->check_lock($conf);
+	die "either set storage or target-vmid, but not both\n"
+	    if $storage && $target_vmid;
 
-	    die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
+	my $storecfg = PVE::Storage::config();
+	my $source_volid;
 
-	    $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
-	    $old_volid = $mpdata->{volume};
+	my $move_to_storage_checks = sub {
+	    PVE::LXC::Config->lock_config($vmid, sub {
+		my $conf = PVE::LXC::Config->load_config($vmid);
+		PVE::LXC::Config->check_lock($conf);
 
-	    die "you can't move a volume with snapshots and delete the source\n"
-		if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
+		die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
 
-	    PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
+		$mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
+		$old_volid = $mpdata->{volume};
 
-	    PVE::LXC::Config->set_lock($vmid, $lockname);
-	});
+		die "you can't move a volume with snapshots and delete the source\n"
+		    if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
 
-	my $realcmd = sub {
+		PVE::Tools::assert_if_modified($digest, $conf->{digest});
+
+		PVE::LXC::Config->set_lock($vmid, $lockname);
+	    });
+	};
+
+	my $storage_realcmd = sub {
 	    eval {
 		PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
 
@@ -1936,15 +1974,203 @@ __PACKAGE__->register_method({
 	    warn $@ if $@;
 	    die $err if $err;
 	};
-	my $task = eval {
-	    $rpcenv->fork_worker('move_volume', $vmid, $authuser, $realcmd);
+
+	my $load_and_check_reassign_configs = sub {
+	    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+	    die "Both containers need to be on the same node ($vmlist->{$vmid}->{node}) ".
+		"but target container is on $vmlist->{$target_vmid}->{node}.\n"
+		if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+	    my $source_conf = PVE::LXC::Config->load_config($vmid);
+	    PVE::LXC::Config->check_lock($source_conf);
+	    my $target_conf = PVE::LXC::Config->load_config($target_vmid);
+	    PVE::LXC::Config->check_lock($target_conf);
+
+	    die "Can't move volumes from or to template VMs\n"
+		if ($source_conf->{template} || $target_conf->{template});
+
+	    if ($digest) {
+		eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
+		if (my $err = $@) {
+		    die "Container ${vmid}: ${err}";
+		}
+	    }
+
+	    if ($target_digest) {
+		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+		if (my $err = $@) {
+		    die "Container ${target_vmid}: ${err}";
+		}
+	    }
+
+	    die "volume '${mpkey}' for container '$vmid' does not exist\n"
+		if !defined($source_conf->{$mpkey});
+
+	    die "Target volume key '${target_mpkey}' is already in use for container '$target_vmid'\n"
+		if exists $target_conf->{$target_mpkey};
+
+	    my $drive = PVE::LXC::Config->parse_volume(
+		$mpkey,
+		$source_conf->{$mpkey},
+	    );
+
+	    $source_volid = $drive->{volume};
+
+	    die "disk '${mpkey}' has no associated volume\n"
+		if !$source_volid;
+
+	    die "Storage does not support moving of this disk to another container\n"
+		if !PVE::Storage::volume_has_feature(
+		    $storecfg,
+		    'rename',
+		    $source_volid,
+		);
+
+	    die "Cannot move a bind mount or device mount to another container\n"
+		if $drive->{type} ne "volume";
+	    die "Cannot move volume to another container while the source container is running\n"
+		if PVE::LXC::check_running($vmid) && $mpkey !~ m/^unused\d+$/;
+
+	    my $repl_conf = PVE::ReplicationConfig->new();
+	    my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
+	    my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
+	    my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+	    if (
+		$is_replicated
+		&& !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)
+	    ) {
+		die "Cannot move volume to a replicated container. Storage " .
+		    "does not support replication!\n";
+	    }
+	    return ($source_conf, $target_conf);
+	};
+
+	my $logfunc = sub {
+	    my ($msg) = @_;
+	    print STDERR "$msg\n";
 	};
-	if (my $err = $@) {
-	    eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
-	    warn $@ if $@;
-	    die $err;
+
+	my $volume_reassignfn = sub {
+	    return PVE::LXC::Config->lock_config($vmid, sub {
+		return PVE::LXC::Config->lock_config($target_vmid, sub {
+		    my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
+		    die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
+
+		    my $drive_param = PVE::LXC::Config->parse_volume(
+			$target_mpkey,
+			$source_conf->{$mpkey},
+		    );
+
+		    print "moving volume '$mpkey' from container '$vmid' to '$target_vmid'\n";
+		    my ($storage, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
+
+		    my (
+			undef,
+			undef,
+			undef,
+			undef,
+			undef,
+			undef,
+			$fmt
+		    ) = PVE::Storage::parse_volname($storecfg, $source_volid);
+
+		    my $target_volname = PVE::Storage::find_free_volname(
+			$storecfg,
+			$storage,
+			$target_vmid,
+			$fmt
+		    );
+
+		    my $new_volid = PVE::Storage::rename_volume(
+			$storecfg,
+			$source_volid,
+			$target_volname,
+		    );
+
+		    $drive_param->{volume} = $new_volid;
+
+		    delete $source_conf->{$mpkey};
+		    print "removing volume '${mpkey}' from container '${vmid}' config\n";
+		    PVE::LXC::Config->write_config($vmid, $source_conf);
+
+		    my $drive_string = PVE::LXC::Config->print_volume($target_mpkey, $drive_param);
+		    my $running = PVE::LXC::check_running($target_vmid);
+		    my $param = { $target_mpkey => $drive_string };
+
+		    my $errors = PVE::LXC::Config->update_pct_config(
+			    $target_vmid,
+			    $target_conf,
+			    $running,
+			    $param
+			);
+
+		    my $rpcenv = PVE::RPCEnvironment::get();
+		    foreach my $key (keys %$errors) {
+			$rpcenv->warn( $errors->{$key});
+		    }
+
+		    PVE::LXC::Config->write_config($target_vmid, $target_conf);
+		    $target_conf = PVE::LXC::Config->load_config($target_vmid);
+
+		    PVE::LXC::update_lxc_config($target_vmid, $target_conf);
+		    print "target container '$target_vmid' updated with '$target_mpkey'\n";
+
+		    # remove possible replication snapshots
+		    if (PVE::Storage::volume_has_feature(
+			    $storecfg,
+			    'replicate',
+			    $source_volid),
+		    ) {
+			eval {
+			    PVE::Replication::prepare(
+				$storecfg,
+				[$new_volid],
+				undef,
+				1,
+				undef,
+				$logfunc,
+			    )
+			};
+			if (my $err = $@) {
+			    print "Failed to remove replication snapshots on volume ".
+				"'$target_mpkey'. Manual cleanup could be necessary.\n";
+			}
+		    }
+		});
+	    });
+	};
+
+	if ($target_vmid) {
+	    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+		if $authuser ne 'root@pam';
+
+	    die "Moving a volume to the same container is not possible. Did you ".
+		"mean to move the volume to a different storage?\n"
+		if $vmid eq $target_vmid;
+
+	    &$load_and_check_reassign_configs();
+	    return $rpcenv->fork_worker(
+		'move_volume',
+		"${vmid}-${mpkey}>${target_vmid}-${target_mpkey}",
+		$authuser,
+		$volume_reassignfn
+	    );
+	} elsif ($storage) {
+	    &$move_to_storage_checks();
+	    my $task = eval {
+		$rpcenv->fork_worker('move_volume', $vmid, $authuser, $storage_realcmd);
+	    };
+	    if (my $err = $@) {
+		eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
+		warn $@ if $@;
+		die $err;
+	    }
+	    return $task;
+	} else {
+	    die "Provide either a 'storage' to move the mount point to a ".
+		"different storage or 'target-vmid' and 'target-volume' to move ".
+		"the mount point to another container\n";
 	}
-	return $task;
   }});
 
 __PACKAGE__->register_method({
-- 
2.30.2






* [pve-devel] [PATCH v2 container 9/9] api: move-volume: cleanup very long lines
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Aaron Lauterer
                   ` (7 preceding siblings ...)
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 8/9] api: move-volume: add move to another container Aaron Lauterer
@ 2021-08-06 13:46 ` Aaron Lauterer
  2021-08-13  8:29 ` [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guests Fabian Ebner
  9 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-06 13:46 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 src/PVE/API2/LXC.pm | 33 +++++++++++++++++++++++++++------
 1 file changed, 27 insertions(+), 6 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 89f9de8..88388b5 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1818,13 +1818,15 @@ __PACKAGE__->register_method({
 	    }),
 	    delete => {
 		type => 'boolean',
-		description => "Delete the original volume after successful copy. By default the original is kept as an unused volume entry.",
+		description => "Delete the original volume after successful copy. By default the " .
+		    "original is kept as an unused volume entry.",
 		optional => 1,
 		default => 0,
 	    },
 	    digest => {
 		type => 'string',
-		description => 'Prevent changes if current configuration file has different SHA1 digest. This can be used to prevent concurrent modifications.',
+		description => "Prevent changes if current configuration file has different SHA1 " .
+		    "digest. This can be used to prevent concurrent modifications.",
 		maxLength => 40,
 		optional => 1,
 	    },
@@ -1906,7 +1908,11 @@ __PACKAGE__->register_method({
 
 	my $storage_realcmd = sub {
 	    eval {
-		PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
+		PVE::Cluster::log_msg(
+		    'info',
+		    $authuser,
+		    "move volume CT $vmid: move --volume $mpkey --storage $storage"
+		);
 
 		my $conf = PVE::LXC::Config->load_config($vmid);
 		my $storage_cfg = PVE::Storage::config();
@@ -1917,8 +1923,20 @@ __PACKAGE__->register_method({
 		    PVE::Storage::activate_volumes($storage_cfg, [ $old_volid ]);
 		    my $bwlimit = extract_param($param, 'bwlimit');
 		    my $source_storage = PVE::Storage::parse_volume_id($old_volid);
-		    my $movelimit = PVE::Storage::get_bandwidth_limit('move', [$source_storage, $storage], $bwlimit);
-		    $new_volid = PVE::LXC::copy_volume($mpdata, $vmid, $storage, $storage_cfg, $conf, undef, $movelimit);
+		    my $movelimit = PVE::Storage::get_bandwidth_limit(
+			'move',
+			[$source_storage, $storage],
+			$bwlimit
+		    );
+		    $new_volid = PVE::LXC::copy_volume(
+			$mpdata,
+			$vmid,
+			$storage,
+			$storage_cfg,
+			$conf,
+			undef,
+			$movelimit
+		    );
 		    if (PVE::LXC::Config->is_template($conf)) {
 			PVE::Storage::activate_volumes($storage_cfg, [ $new_volid ]);
 			my $template_volid = PVE::Storage::vdisk_create_base($storage_cfg, $new_volid);
@@ -1932,7 +1950,10 @@ __PACKAGE__->register_method({
 			$conf = PVE::LXC::Config->load_config($vmid);
 			PVE::Tools::assert_if_modified($digest, $conf->{digest});
 
-			$conf->{$mpkey} = PVE::LXC::Config->print_ct_mountpoint($mpdata, $mpkey eq 'rootfs');
+			$conf->{$mpkey} = PVE::LXC::Config->print_ct_mountpoint(
+			    $mpdata,
+			    $mpkey eq 'rootfs'
+			);
 
 			PVE::LXC::Config->add_unused_volume($conf, $old_volid) if !$param->{delete};
 
-- 
2.30.2






* Re: [pve-devel] [PATCH v2 storage 1/9] storage: add new find_free_volname
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 storage 1/9] storage: add new find_free_volname Aaron Lauterer
@ 2021-08-12 12:50   ` Fabian Ebner
  2021-08-13 12:46     ` Aaron Lauterer
  0 siblings, 1 reply; 18+ messages in thread
From: Fabian Ebner @ 2021-08-12 12:50 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

Am 06.08.21 um 15:46 schrieb Aaron Lauterer:
> This new method exposes the functionality to request a new, not yet
> used, volname for a storage.
> 
> The default implementation will return the result from
> 'find_free_diskname' prefixed with "<VMID>/" if $scfg->{path} exists.
> Otherwise it will only return the result from 'find_free_diskname'.
> 
> If the format suffix is added also depends on the existence of
> $scfg->{path}.
> 
> $scfg->{path} will be present for directory based storage types.
> 
> Should a storage need to return a different volname, it needs to override
> the 'find_free_volname' method.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> 
> v1 -> v2:
> * drop exposed 'find_free_diskname' in favor of 'find_free_volname'
> * dropped 'wants_fmt_suffix' as 'find_free_volname' now decides that itself
> 
> rfc -> v1:
> dropped $add_fmt_suffix parameter and added the "wants_fmt_suffix"
> helper method in each plugin.
> 
>   PVE/Storage.pm        |  9 +++++++++
>   PVE/Storage/Plugin.pm | 11 +++++++++++
>   2 files changed, 20 insertions(+)
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index c04b5a2..c38fe7b 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -203,6 +203,15 @@ sub storage_can_replicate {
>       return $plugin->storage_can_replicate($scfg, $storeid, $format);
>   }
>   
> +sub find_free_volname {
> +    my ($cfg, $storeid, $vmid, $fmt) = @_;
> +
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +    return $plugin->find_free_volname($storeid, $scfg, $vmid, $fmt);
> +}
> +
>   sub storage_ids {
>       my ($cfg) = @_;
>   
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index b1865cb..e043329 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -685,6 +685,17 @@ sub find_free_diskname {
>       return get_next_vm_diskname($disk_list, $storeid, $vmid, $fmt, $scfg, $add_fmt_suffix);
>   }
>   
> +sub find_free_volname {
> +    my ($class, $storeid, $scfg, $vmid, $fmt) = @_;
> +
> +    my $diskname = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt, exists($scfg->{path}));
> +
> +    if (exists($scfg->{path})) {

More of a nit, but I don't like the usage of exists here, for two reasons:

1. The main reason: it's inconsistent with all the other checks for 
$scfg->{path} that are used throughout the file.

2. It's more likely to break. Consider some code I've made up, but 
similar code might exist somewhere in our code base:
     ...
     my $path = get_path_from_config($cfg, $storeid);
     my $scfg = {
	path => $path,
	content => $content,
	...
     };
or
     my $start = $scfg->{path}[0];
     # auto-vivification means the 'path' key now always exists (with 
undef value if it was undef before)
     if (defined($start) && $start eq '/') {
	...
     }

Using exists here makes it necessary to check that such code is not used 
for $scfg, currently and in the future.

> +	return "${vmid}/$diskname";
> +    }
> +    return $diskname;
> +}
> +
>   sub clone_image {
>       my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
>   
> 




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH storage 2/9] add disk rename feature
  2021-08-06 13:46 ` [pve-devel] [PATCH storage 2/9] add disk rename feature Aaron Lauterer
@ 2021-08-12 12:51   ` Fabian Ebner
  0 siblings, 0 replies; 18+ messages in thread
From: Fabian Ebner @ 2021-08-12 12:51 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

Am 06.08.21 um 15:46 schrieb Aaron Lauterer:
> Functionality has been added for the following storage types:
> 
> * directory ones, based on the default implementation:
>      * directory
>      * NFS
>      * CIFS
>      * gluster
> * ZFS
> * (thin) LVM
> * Ceph
> 
> A new feature `rename` has been introduced to mark which storage
> plugin supports the feature.
> 
> Version API and AGE have been bumped.
> 
> The storage gets locked and each plugin checks if the target volume
> already exists prior renaming.
> This is done because there could be a race condition from the time the
> external caller requests a new free disk name to the time the volume is
> actually renamed.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v1 -> v2:
> * many small fixes and improvements
> * rename_volume now accepts $source_volname instead of $source_volid
>      helps us to avoid to parse the volid a second time
> 
> rfc -> v1:
> * reduced number of parameters to minimum needed, plugins infer needed
>    information themselves
> * added storage locking and checking if volume already exists
> * parse target_volname prior to renaming to check if valid
> 
> old dedicated API endpoint -> rfc:
> only do rename now but the rename function handles templates and returns
> the new volid as this can be differently handled on some storages.
> 
> 
>   PVE/Storage.pm               | 20 +++++++++++++++--
>   PVE/Storage/LVMPlugin.pm     | 24 ++++++++++++++++++++
>   PVE/Storage/LvmThinPlugin.pm |  1 +
>   PVE/Storage/Plugin.pm        | 43 ++++++++++++++++++++++++++++++++++++
>   PVE/Storage/RBDPlugin.pm     | 28 +++++++++++++++++++++++
>   PVE/Storage/ZFSPoolPlugin.pm | 24 ++++++++++++++++++++
>   6 files changed, 138 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index c38fe7b..2430991 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -41,11 +41,11 @@ use PVE::Storage::PBSPlugin;
>   use PVE::Storage::BTRFSPlugin;
>   
>   # Storage API version. Increment it on changes in storage API interface.
> -use constant APIVER => 9;
> +use constant APIVER => 10;
>   # Age is the number of versions we're backward compatible with.
>   # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>   # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
> -use constant APIAGE => 0;
> +use constant APIAGE => 1;
>   
>   # load standard plugins
>   PVE::Storage::DirPlugin->register();
> @@ -358,6 +358,7 @@ sub volume_snapshot_needs_fsfreeze {
>   #            snapshot - taking a snapshot is possible
>   #            sparseinit - volume is sparsely initialized
>   #            template - conversion to base image is possible
> +#            rename - renaming volumes is possible
>   # $snap - check if the feature is supported for a given snapshot
>   # $running - if the guest owning the volume is running
>   # $opts - hash with further options:
> @@ -1866,6 +1867,21 @@ sub complete_volume {
>       return $res;
>   }
>   
> +sub rename_volume {
> +    my ($cfg, $source_volid, $target_volname) = @_;
> +
> +    my ($storeid, $source_volname) = parse_volume_id($source_volid);
> +
> +    activate_storage($cfg, $storeid);
> +
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +    return $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	return $plugin->rename_volume($scfg, $storeid, $source_volname, $target_volname);
> +    });
> +}
> +
>   # Various io-heavy operations require io/bandwidth limits which can be
>   # configured on multiple levels: The global defaults in datacenter.cfg, and
>   # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
> index 139d391..d28a94c 100644
> --- a/PVE/Storage/LVMPlugin.pm
> +++ b/PVE/Storage/LVMPlugin.pm
> @@ -339,6 +339,15 @@ sub lvcreate {
>       run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
>   }
>   
> +sub lvrename {
> +    my ($vg, $oldname, $newname) = @_;
> +
> +    run_command(
> +	['/sbin/lvrename', $vg, $oldname, $newname],
> +	errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error"

style nit: missing trailing comma

> +    );
> +}
> +
>   sub alloc_image {
>       my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>   
> @@ -584,6 +593,7 @@ sub volume_has_feature {
>   
>       my $features = {
>   	copy => { base => 1, current => 1},
> +	rename => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -692,4 +702,18 @@ sub volume_import_write {
>   	input => '<&'.fileno($input_fh));
>   }
>   
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname) = @_;
> +
> +    $class->parse_volname($target_volname);
> +
> +    my $vg = $scfg->{vgname};
> +    my $lvs = lvm_list_volumes($vg);
> +    die "target volume '${target_volname}' already exists\n"
> +	if ($lvs->{$vg}->{$target_volname});
> +
> +    lvrename($vg, $source_volname, $target_volname);
> +    return "${storeid}:${target_volname}";
> +}
> +
>   1;
> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
> index 4ba6f90..c24af22 100644
> --- a/PVE/Storage/LvmThinPlugin.pm
> +++ b/PVE/Storage/LvmThinPlugin.pm
> @@ -355,6 +355,7 @@ sub volume_has_feature {
>   	template => { current => 1},
>   	copy => { base => 1, current => 1, snap => 1},
>   	sparseinit => { base => 1, current => 1},
> +	rename => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index e043329..7b75252 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -971,6 +971,7 @@ sub volume_has_feature {
>   		  snap => {qcow2 => 1} },
>   	sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
>   			current => {qcow2 => 1, raw => 1, vmdk => 1} },
> +	rename => { current => {qcow2 => 1, raw => 1, vmdk => 1} },
>       };
>   
>       # clone_image creates a qcow2 volume
> @@ -978,6 +979,14 @@ sub volume_has_feature {
>   		defined($opts->{valid_target_formats}) &&
>   		!(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
>   
> +    if (
> +	$feature eq 'rename'
> +	&& $class->can('api')
> +	&& $class->api() < 10
> +    ) {
> +	return 0;
> +    }
> +
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
>   	$class->parse_volname($volname);
>   
> @@ -1495,4 +1504,38 @@ sub volume_import_formats {
>       return ();
>   }
>   
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname) = @_;
> +    die "no path found\n" if !exists($scfg->{path});

Same nit as for the previous patch: using exists is not consistent with 
existing checks for $scfg->{path}.

> +    die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 10;
> +
> +    my (undef, undef, $target_vmid) = $class->parse_volname($target_volname);
> +
> +    my (
> +	undef,
> +	$source_name,
> +	$source_vmid,
> +	$base_name,
> +	$base_vmid,
> +	undef,
> +	$format
> +    ) = $class->parse_volname($source_volname);
> +
> +    my $basedir = $class->get_subdir($scfg, 'images');
> +
> +    mkpath "${basedir}/${target_vmid}";;

style nit: double semicolon

> +
> +    my $old_path = "${basedir}/${source_volname}";
> +    my $new_path = "${basedir}/${target_volname}";
> +
> +    die "target volume '${target_volname}' already exists\n" if -e $new_path;
> +
> +    my $base = $base_name ? "${base_vmid}/${base_name}/" : '';
> +
> +    rename($old_path, $new_path) ||
> +	die "rename '$old_path' to '$new_path' failed - $!\n";
> +

There is an issue when the source is a linked clone:
     # qm move-disk 123 scsi7 --target-vmid 122
     moving disk 'scsi7' from VM '123' to '122'
     rename 
'/mnt/pve/mycifs/images/116/base-116-disk-0.qcow2/123/vm-123-disk-0.qcow2' 
to '/mnt/pve/mycifs/images/122/vm-122-disk-0.qcow2' failed - Not a directory
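One possible direction, just as an untested sketch: build the source path from the components that parse_volname already returns instead of the raw volname, so the base prefix of a linked clone does not end up in the filesystem path:

     # $source_vmid/$source_name come from the parse_volname call above;
     # for a linked clone this strips the 'base-.../' prefix
     my $old_path = "${basedir}/${source_vmid}/${source_name}";
     my $new_path = "${basedir}/${target_volname}";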

> +    return "${storeid}:${base}${target_volname}";
> +}
> +
>   1;
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index a8d1243..88350d2 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -728,6 +728,7 @@ sub volume_has_feature {
>   	template => { current => 1},
>   	copy => { base => 1, current => 1, snap => 1},
>   	sparseinit => { base => 1, current => 1},
> +	rename => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
> @@ -743,4 +744,31 @@ sub volume_has_feature {
>       return undef;
>   }
>   
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname) = @_;
> +
> +    $class->parse_volname($target_volname);
> +
> +    my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);
> +
> +    eval {
> +	my $cmd = $rbd_cmd->($scfg, $storeid, 'info', $target_volname);
> +	run_rbd_command($cmd, errmsg => "exist check",  quiet => 1, noerr => 1);
> +    };
> +    my $err = $@;
> +    die "target volume '${target_volname}' already exists\n"
> +	if !$err;
> +
> +    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
> +
> +    run_rbd_command(
> +	$cmd,
> +	errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
> +    );

There is an issue when the source is a linked clone:
     # qm move-disk 123 scsi6 --target-vmid 122
     moving disk 'scsi6' from VM '123' to '122'
     could not rename image 'base-116-disk-0/vm-123-disk-0' to 
'vm-122-disk-0': rbd: error opening pool 'base-116-disk-0': (2) No such 
file or directory
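Presumably the fix is to rename by the plain image name that parse_volname already gives us (untested sketch):

     my (undef, $source_name) = $class->parse_volname($source_volname);
     # an RBD clone is its own image, so rename it by its plain name
     my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_name, $target_volname);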


> +
> +    $base_name = $base_name ? "${base_name}/" : '';
> +
> +    return "${storeid}:${base_name}${target_volname}";
> +}
> +
>   1;
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index c4be70f..543eef9 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -687,6 +687,7 @@ sub volume_has_feature {
>   	copy => { base => 1, current => 1},
>   	sparseinit => { base => 1, current => 1},
>   	replicate => { base => 1, current => 1},
> +	rename => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -789,4 +790,27 @@ sub volume_import_formats {
>       return $class->volume_export_formats($scfg, $storeid, $volname, $snapshot, $base_snapshot, $with_snapshots);
>   }
>   
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname) = @_;
> +
> +    print "parsing volname: $target_volname\n";

debug print?

> +    $class->parse_volname($target_volname);
> +
> +    my (undef, undef, $source_vmid, $base_name) = $class->parse_volname($source_volname);

Also has an issue when the source is a linked clone:
     # qm move-disk 123 scsi5 --target-vmid 122
     moving disk 'scsi5' from VM '123' to '122'
     parsing volname: vm-122-disk-0
     zfs error: cannot open 'myzpool/base-116-disk-0/vm-123-disk-0': 
dataset does not exist
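Same story here, presumably (untested sketch): the clone's dataset sits directly below the pool, so use the parsed name rather than the full volname:

     my (undef, $source_name, $source_vmid, $base_name) = $class->parse_volname($source_volname);
     # a ZFS clone lives directly below the pool, not below its base dataset
     my $source_zfspath = "${pool}/${source_name}";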


> +
> +    my $pool = $scfg->{pool};
> +    my $source_zfspath = "${pool}/${source_volname}";
> +    my $target_zfspath = "${pool}/${target_volname}";
> +
> +    my $exists = 0 == run_command(['zfs', 'get', '-H', 'name', $target_zfspath],
> +				  noerr => 1, quiet => 1);
> +    die "target volume '${target_volname}' already exists\n" if $exists;
> +
> +    $class->zfs_request($scfg, 5, 'rename', ${source_zfspath}, ${target_zfspath});
> +
> +    $base_name = $base_name ? "${base_name}/" : '';
> +
> +    return "${storeid}:${base_name}${target_volname}";
> +}
> +
>   1;
> 




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v2 qemu-server 5/9] api: move-disk: add move to other VM
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 5/9] api: move-disk: add move to other VM Aaron Lauterer
@ 2021-08-13  7:41   ` Fabian Ebner
  2021-08-13 15:35     ` Aaron Lauterer
  0 siblings, 1 reply; 18+ messages in thread
From: Fabian Ebner @ 2021-08-13  7:41 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

Am 06.08.21 um 15:46 schrieb Aaron Lauterer:
> The goal of this is to expand the move-disk API endpoint to make it
> possible to move a disk to another VM. Previously this was only possible
> with manual intervention either by renaming the VM disk or by manually
> adding the disks volid to the config of the other VM.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v1 -> v2:
> * make --target-disk optional and use source disk key as fallback
> * use parse_volname instead of custom regex
> * adapt to find_free_volname
> * smaller (style) fixes
> 
> rfc -> v1:
> * add check if target guest is replicated and fail if the moved volume
>    does not support it
> * check if source volume has a format suffix and pass it to
>    'find_free_disk'
> * fixed some style nits
> 
> old dedicated api endpoint -> rfc:
> There are some big changes here. The old [0] dedicated API endpoint is
> gone and most of its code is now part of move_disk. Error messages have
> been changed accordingly and sometimes enhanced by adding disk keys and
> VMIDs where appropriate.
> 
> Since a move to other guests should be possible for unused disks, we
> need to check before doing a move to storage to make sure to not
> handle unused disks.
> 
>   PVE/API2/Qemu.pm | 238 ++++++++++++++++++++++++++++++++++++++++++++++-
>   PVE/CLI/qm.pm    |   2 +-
>   2 files changed, 234 insertions(+), 6 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index ef0d877..30e222a 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>   use PVE::VZDump::Plugin;
>   use PVE::DataCenterConfig;
>   use PVE::SSHInfo;
> +use PVE::Replication;
>   
>   BEGIN {
>       if (!$ENV{PVE_GENERATING_DOCS}) {
> @@ -3274,9 +3275,11 @@ __PACKAGE__->register_method({
>       method => 'POST',
>       protected => 1,
>       proxyto => 'node',
> -    description => "Move volume to different storage.",
> +    description => "Move volume to different storage or to a different VM.",
>       permissions => {
> -	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
> +	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
> +	    "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
> +	    "a disk to another VM, you need the permissions on the target VM as well.",
>   	check => [ 'and',
>   		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>   		   ['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
> @@ -3287,14 +3290,19 @@ __PACKAGE__->register_method({
>   	properties => {
>   	    node => get_standard_option('pve-node'),
>   	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    'target-vmid' => get_standard_option('pve-vmid', {
> +		completion => \&PVE::QemuServer::complete_vmid,
> +		optional => 1,
> +	    }),
>   	    disk => {
>   	        type => 'string',
>   		description => "The disk you want to move.",
> -		enum => [PVE::QemuServer::Drive::valid_drive_names()],
> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>   	    },
>               storage => get_standard_option('pve-storage-id', {
>   		description => "Target storage.",
>   		completion => \&PVE::QemuServer::complete_storage,
> +		optional => 1,
>               }),
>               'format' => {
>                   type => 'string',
> @@ -3321,6 +3329,20 @@ __PACKAGE__->register_method({
>   		minimum => '0',
>   		default => 'move limit from datacenter or storage config',
>   	    },
> +	    'target-disk' => {
> +	        type => 'string',
> +		description => "The config key the disk will be moved to on the target VM " .
> +		    "(for example, ide0 or scsi1).",
> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> +		optional => 1,

The default could be mentioned here.
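Something along these lines maybe, similar to how the bwlimit default is described (wording just a suggestion):

     'target-disk' => {
	 # ... existing description/enum ...
	 optional => 1,
	 default => 'the source disk key',
     },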

> +	    },
> +	    'target-digest' => {
> +		type => 'string',
> +		description => 'Prevent changes if current configuration file of the target VM has " .
> +		    "a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
>   	},
>       },
>       returns => {
> @@ -3335,14 +3357,22 @@ __PACKAGE__->register_method({
>   
>   	my $node = extract_param($param, 'node');
>   	my $vmid = extract_param($param, 'vmid');
> +	my $target_vmid = extract_param($param, 'target-vmid');
>   	my $digest = extract_param($param, 'digest');
> +	my $target_digest = extract_param($param, 'target-digest');
>   	my $disk = extract_param($param, 'disk');
> +	my $target_disk = extract_param($param, 'target-disk') // $disk;
>   	my $storeid = extract_param($param, 'storage');
>   	my $format = extract_param($param, 'format');
>   
> +	die "either set storage or target-vmid, but not both\n"
> +	    if $storeid && $target_vmid;
> +
> +
>   	my $storecfg = PVE::Storage::config();
> +	my $source_volid;
>   
> -	my $updatefn =  sub {
> +	my $move_updatefn =  sub {
>   	    my $conf = PVE::QemuConfig->load_config($vmid);
>   	    PVE::QemuConfig->check_lock($conf);
>   
> @@ -3452,7 +3482,205 @@ __PACKAGE__->register_method({
>               return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
>   	};
>   
> -	return PVE::QemuConfig->lock_config($vmid, $updatefn);
> +	my $load_and_check_reassign_configs = sub {
> +	    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
> +
> +	    if ($vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node}) {
> +		die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) ".
> +		    "but target VM is on $vmlist->{$target_vmid}->{node}.\n";
> +	    }
> +
> +	    my $source_conf = PVE::QemuConfig->load_config($vmid);
> +	    PVE::QemuConfig->check_lock($source_conf);
> +	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> +	    PVE::QemuConfig->check_lock($target_conf);
> +
> +	    die "Can't move disks from or to template VMs\n"
> +		if ($source_conf->{template} || $target_conf->{template});
> +
> +	    if ($digest) {
> +		eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "VM ${vmid}: ${err}";
> +		}
> +	    }
> +
> +	    if ($target_digest) {
> +		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "VM ${target_vmid}: ${err}";
> +		}
> +	    }
> +
> +	    die "Disk '${disk}' for VM '$vmid' does not exist\n"
> +		if !defined($source_conf->{$disk});
> +
> +	    die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
> +		if exists($target_conf->{$target_disk});
> +
> +	    my $drive = PVE::QemuServer::parse_drive(
> +		$disk,
> +		$source_conf->{$disk},
> +	    );
> +	    $source_volid = $drive->{file};
> +
> +	    die "disk '${disk}' has no associated volume\n"
> +		if !$source_volid;
> +	    die "CD drive contents can't be moved to another VM\n"
> +		if PVE::QemuServer::drive_is_cdrom($drive, 1);
> +	    die "Can't move  physical disk to another VM\n" if $source_volid =~ m|^/dev/|;
> +	    if (PVE::QemuServer::Drive::is_volume_in_use(
> +		    $storecfg,
> +		    $source_conf,
> +		    $disk,
> +		    $source_volid,
> +		)) {
> +		die "Can't move disk used by a snapshot to another VM\n"
> +	    }

This looks weird to me style-wise. Also missing semicolon after die.

> +
> +	    if (!PVE::Storage::volume_has_feature(
> +		    $storecfg,
> +		    'rename',
> +		    $source_volid,
> +		)) {
> +		die "Storage does not support moving of this disk to another VM\n"
> +	    }

Same here, but this time the if-condition could fit on one line within 
the 100 character limit ;) Again, missing semicolon.
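I.e. something like this (just to illustrate the style, not a functional change):

     if (!PVE::Storage::volume_has_feature($storecfg, 'rename', $source_volid)) {
	 die "Storage does not support moving of this disk to another VM\n";
     }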

> +
> +	    die "Cannot move disk to another VM while the source VM is running\n"
> +		if PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/;
> +
> +	    if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
> +		my $interface = $1;

Nit: Isn't the interface already present in the result from parse_drive?

> +		my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
> +		eval {
> +		    PVE::JSONSchema::parse_property_string(
> +			$desc->{format},
> +			$source_conf->{$disk},
> +		    )
> +		};
> +		if (my $err = $@) {
> +		    die "Cannot move disk to another VM: ${err}";
> +		}
> +	    }
> +
> +	    my $repl_conf = PVE::ReplicationConfig->new();
> +	    my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
> +	    my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
> +	    my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
> +	    if ($is_replicated && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
> +		die "Cannot move disk to a replicated VM. Storage does not support replication!\n";
> +	    }
> +
> +	    return ($source_conf, $target_conf);
> +	};
> +
> +	my $logfunc = sub {
> +	    my ($msg) = @_;
> +	    print STDERR "$msg\n";
> +	};
> +
> +	my $disk_reassignfn = sub {
> +	    return PVE::QemuConfig->lock_config($vmid, sub {
> +		return PVE::QemuConfig->lock_config($target_vmid, sub {
> +		    my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
> +
> +		    my $drive_param = PVE::QemuServer::parse_drive(
> +			$target_disk,
> +			$source_conf->{$disk},
> +		    );
> +
> +		    print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
> +		    my ($storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
> +
> +		    my (
> +			undef,
> +			undef,
> +			undef,
> +			undef,
> +			undef,
> +			undef,
> +			$fmt
> +		    ) = PVE::Storage::parse_volname($storecfg, $source_volid);

Nit: using
     my $fmt = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
like above is shorter.

> +
> +		    my $target_volname = PVE::Storage::find_free_volname(
> +			$storecfg,
> +			$storeid,
> +			$target_vmid,
> +			$fmt
> +		    );
> +
> +		    my $new_volid = PVE::Storage::rename_volume(
> +			$storecfg,
> +			$source_volid,
> +			$target_volname,
> +		    );
> +
> +		    $drive_param->{file} = $new_volid;
> +
> +		    delete $source_conf->{$disk};
> +		    print "removing disk '${disk}' from VM '${vmid}' config\n";
> +		    PVE::QemuConfig->write_config($vmid, $source_conf);
> +
> +		    my $drive_string = PVE::QemuServer::print_drive($drive_param);
> +		    &$update_vm_api(
> +			{
> +			    node => $node,
> +			    vmid => $target_vmid,
> +			    digest => $target_digest,
> +			    $target_disk => $drive_string,
> +			},
> +			1,
> +		    );
> +
> +		    # remove possible replication snapshots
> +		    if (PVE::Storage::volume_has_feature(
> +			    $storecfg,
> +			    'replicate',
> +			    $source_volid),
> +		    ) {
> +			eval {
> +			    PVE::Replication::prepare(
> +				$storecfg,
> +				[$new_volid],
> +				undef,
> +				1,
> +				undef,
> +				$logfunc,
> +			    )
> +			};
> +			if (my $err = $@) {
> +			    print "Failed to remove replication snapshots on moved disk " .
> +				"'$target_disk'. Manual cleanup could be necessary.\n";
> +			}
> +		    }
> +		});
> +	    });
> +	};
> +
> +	if ($target_vmid) {
> +	    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
> +		if $authuser ne 'root@pam';
> +
> +	    die "Moving a disk to the same VM is not possible. Did you mean to ".
> +		"move the disk to a different storage?\n"
> +		if $vmid eq $target_vmid;
> +
> +	    &$load_and_check_reassign_configs();
> +	    return $rpcenv->fork_worker(
> +		'qmmove',
> +		"${vmid}-${disk}>${target_vmid}-${target_disk}",
> +		$authuser,
> +		$disk_reassignfn
> +	    );
> +	} elsif ($storeid) {
> +	    die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
> +		if $disk =~ m/^unused\d+$/;
> +	    return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
> +	} else {
> +	    die "Provide either a 'storage' to move the disk to a different " .
> +		"storage or 'target-vmid' and 'target-disk' to move the disk " .
> +		"to another VM\n";
> +	}
>       }});
>   
>   my $check_vm_disks_local = sub {
> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index ef99b6d..a92d301 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -910,7 +910,7 @@ our $cmddef = {
>   
>       resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
>   
> -    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
> +    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
>       move_disk => { alias => 'move-disk' },
>   
>       unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
> 




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v2 container 8/9] api: move-volume: add move to another container
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 8/9] api: move-volume: add move to another container Aaron Lauterer
@ 2021-08-13  8:21   ` Fabian Ebner
  0 siblings, 0 replies; 18+ messages in thread
From: Fabian Ebner @ 2021-08-13  8:21 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

Am 06.08.21 um 15:46 schrieb Aaron Lauterer:
> The goal of this is to expand the move-volume API endpoint to make it
> possible to move a container volume / mountpoint to another container.
> 
> Currently it works for regular mountpoints though it would be nice to be
> able to do it for unused mountpoints as well.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> This is mostly the code from qemu-server with some adaptions. Mainly
> error messages and some checks.
> 
> Previous checks have been moved to '$move_to_storage_checks'.
> 
> v1 -> v2:
> * change --target-mp to --target-volume
> * make --target-volume optional and fallback to source mount point
> * use parse_volname instead of custom regex
> * adapt to find_free_volname
> * print warnings from update_pct_config
> * move running check back in both lock_config sections
> * fixed a few style issues
> 
> rfc -> v1:
> * add check if target guest is replicated and fail if the moved volume
>    does not support it
> * check if source volume has a format suffix and pass it to
>    'find_free_disk' or if the prefix is vm/subvol as those also have
>    their own meaning, see the comment in the code
> * fixed some style nits
>   src/PVE/API2/LXC.pm | 268 ++++++++++++++++++++++++++++++++++++++++----
>   1 file changed, 247 insertions(+), 21 deletions(-)
> 
> diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
> index afef7ec..89f9de8 100644
> --- a/src/PVE/API2/LXC.pm
> +++ b/src/PVE/API2/LXC.pm
> @@ -1784,10 +1784,12 @@ __PACKAGE__->register_method({
>       method => 'POST',
>       protected => 1,
>       proxyto => 'node',
> -    description => "Move a rootfs-/mp-volume to a different storage",
> +    description => "Move a rootfs-/mp-volume to a different storage or to a different container.",
>       permissions => {
>   	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
> -	    "and 'Datastore.AllocateSpace' permissions on the storage.",
> +	    "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
> +	    "a volume to another container, you need the permissions on the ".
> +	    "target container as well.",
>   	check =>
>   	[ 'and',
>   	  ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> @@ -1799,14 +1801,20 @@ __PACKAGE__->register_method({
>   	properties => {
>   	    node => get_standard_option('pve-node'),
>   	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::LXC::complete_ctid }),
> +	    'target-vmid' => get_standard_option('pve-vmid', {
> +		completion => \&PVE::LXC::complete_ctid,
> +		optional => 1,
> +	    }),
>   	    volume => {
>   		type => 'string',
> +		#TODO: check how to handle unused mount points as the mp parameter is not configured
>   		enum => [ PVE::LXC::Config->valid_volume_keys() ],
>   		description => "Volume which will be moved.",
>   	    },
>   	    storage => get_standard_option('pve-storage-id', {
>   		description => "Target Storage.",
>   		completion => \&PVE::Storage::complete_storage_enabled,
> +		optional => 1,
>   	    }),
>   	    delete => {
>   		type => 'boolean',
> @@ -1827,6 +1835,20 @@ __PACKAGE__->register_method({
>   		minimum => '0',
>   		default => 'clone limit from datacenter or storage config',
>   	    },
> +	    'target-volume' => {
> +	        type => 'string',
> +		description => "The config key the volume will be moved to.",
> +		enum => [PVE::LXC::Config->valid_volume_keys()],
> +		optional => 1,

The default could be mentioned here.

> +	    },
> +	    'target-digest' => {
> +		type => 'string',
> +		description => 'Prevent changes if current configuration file of the target " .
> +		    "container has a different SHA1 digest. This can be used to prevent " .
> +		    "concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
>   	},
>       },
>       returns => {
> @@ -1841,32 +1863,48 @@ __PACKAGE__->register_method({
>   
>   	my $vmid = extract_param($param, 'vmid');
>   
> +	my $target_vmid = extract_param($param, 'target-vmid');
> +
>   	my $storage = extract_param($param, 'storage');
>   
>   	my $mpkey = extract_param($param, 'volume');
>   
> +	my $target_mpkey = extract_param($param, 'target-volume') // $mpkey;
> +
> +	my $digest = extract_param($param, 'digest');
> +
> +	my $target_digest = extract_param($param, 'target-digest');
> +
>   	my $lockname = 'disk';
>   
>   	my ($mpdata, $old_volid);
>   
> -	PVE::LXC::Config->lock_config($vmid, sub {
> -	    my $conf = PVE::LXC::Config->load_config($vmid);
> -	    PVE::LXC::Config->check_lock($conf);
> +	die "either set storage or target-vmid, but not both\n"
> +	    if $storage && $target_vmid;
>   
> -	    die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
> +	my $storecfg = PVE::Storage::config();
> +	my $source_volid;
>   
> -	    $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
> -	    $old_volid = $mpdata->{volume};
> +	my $move_to_storage_checks = sub {
> +	    PVE::LXC::Config->lock_config($vmid, sub {
> +		my $conf = PVE::LXC::Config->load_config($vmid);
> +		PVE::LXC::Config->check_lock($conf);
>   
> -	    die "you can't move a volume with snapshots and delete the source\n"
> -		if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
> +		die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
>   
> -	    PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
> +		$mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
> +		$old_volid = $mpdata->{volume};
>   
> -	    PVE::LXC::Config->set_lock($vmid, $lockname);
> -	});
> +		die "you can't move a volume with snapshots and delete the source\n"
> +		    if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
>   
> -	my $realcmd = sub {
> +		PVE::Tools::assert_if_modified($digest, $conf->{digest});
> +
> +		PVE::LXC::Config->set_lock($vmid, $lockname);
> +	    });
> +	};
> +
> +	my $storage_realcmd = sub {
>   	    eval {
>   		PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
>   
> @@ -1936,15 +1974,203 @@ __PACKAGE__->register_method({
>   	    warn $@ if $@;
>   	    die $err if $err;
>   	};
> -	my $task = eval {
> -	    $rpcenv->fork_worker('move_volume', $vmid, $authuser, $realcmd);
> +
> +	my $load_and_check_reassign_configs = sub {
> +	    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
> +	    die "Both containers need to be on the same node ($vmlist->{$vmid}->{node}) ".
> +		"but target container is on $vmlist->{$target_vmid}->{node}.\n"
> +		if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
> +
> +	    my $source_conf = PVE::LXC::Config->load_config($vmid);
> +	    PVE::LXC::Config->check_lock($source_conf);
> +	    my $target_conf = PVE::LXC::Config->load_config($target_vmid);
> +	    PVE::LXC::Config->check_lock($target_conf);
> +
> +	    die "Can't move volumes from or to template VMs\n"
> +		if ($source_conf->{template} || $target_conf->{template});
> +
> +	    if ($digest) {
> +		eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "Container ${vmid}: ${err}";
> +		}
> +	    }
> +
> +	    if ($target_digest) {
> +		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "Container ${target_vmid}: ${err}";
> +		}
> +	    }
> +
> +	    die "volume '${mpkey}' for container '$vmid' does not exist\n"
> +		if !defined($source_conf->{$mpkey});
> +
> +	    die "Target volume key '${target_mpkey}' is already in use for container '$target_vmid'\n"
> +		if exists $target_conf->{$target_mpkey};
> +
> +	    my $drive = PVE::LXC::Config->parse_volume(
> +		$mpkey,
> +		$source_conf->{$mpkey},
> +	    );
> +
> +	    $source_volid = $drive->{volume};
> +
> +	    die "disk '${mpkey}' has no associated volume\n"
> +		if !$source_volid;
> +
> +	    die "Storage does not support moving of this disk to another container\n"
> +		if !PVE::Storage::volume_has_feature(
> +		    $storecfg,
> +		    'rename',
> +		    $source_volid,
> +		);

style nit: post-if condition should be one line
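E.g. (style sketch only):

     die "Storage does not support moving of this disk to another container\n"
	 if !PVE::Storage::volume_has_feature($storecfg, 'rename', $source_volid);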

> +
> +	    die "Cannot move a bind mount or device mount to another container\n"
> +		if $drive->{type} ne "volume";
> +	    die "Cannot move volume to another container while the source container is running\n"
> +		if PVE::LXC::check_running($vmid) && $mpkey !~ m/^unused\d+$/;
> +
> +	    my $repl_conf = PVE::ReplicationConfig->new();
> +	    my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
> +	    my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
> +	    my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
> +	    if (
> +		$is_replicated
> +		&& !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)
> +	    ) {
> +		die "Cannot move volume to a replicated container. Storage " .
> +		    "does not support replication!\n";
> +	    }
> +	    return ($source_conf, $target_conf);
> +	};
> +
> +	my $logfunc = sub {
> +	    my ($msg) = @_;
> +	    print STDERR "$msg\n";
>   	};
> -	if (my $err = $@) {
> -	    eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
> -	    warn $@ if $@;
> -	    die $err;
> +
> +	my $volume_reassignfn = sub {
> +	    return PVE::LXC::Config->lock_config($vmid, sub {
> +		return PVE::LXC::Config->lock_config($target_vmid, sub {
> +		    my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
> +		    die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
> +
> +		    my $drive_param = PVE::LXC::Config->parse_volume(
> +			$target_mpkey,
> +			$source_conf->{$mpkey},
> +		    );
> +
> +		    print "moving volume '$mpkey' from container '$vmid' to '$target_vmid'\n";
> +		    my ($storage, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
> +
> +		    my (
> +			undef,
> +			undef,
> +			undef,
> +			undef,
> +			undef,
> +			undef,
> +			$fmt
> +		    ) = PVE::Storage::parse_volname($storecfg, $source_volid);

Can be shorter like above.

> +
> +		    my $target_volname = PVE::Storage::find_free_volname(
> +			$storecfg,
> +			$storage,
> +			$target_vmid,
> +			$fmt
> +		    );
> +
> +		    my $new_volid = PVE::Storage::rename_volume(
> +			$storecfg,
> +			$source_volid,
> +			$target_volname,
> +		    );
> +
> +		    $drive_param->{volume} = $new_volid;
> +
> +		    delete $source_conf->{$mpkey};
> +		    print "removing volume '${mpkey}' from container '${vmid}' config\n";
> +		    PVE::LXC::Config->write_config($vmid, $source_conf);
> +
> +		    my $drive_string = PVE::LXC::Config->print_volume($target_mpkey, $drive_param);
> +		    my $running = PVE::LXC::check_running($target_vmid);
> +		    my $param = { $target_mpkey => $drive_string };
> +
> +		    my $errors = PVE::LXC::Config->update_pct_config(
> +			    $target_vmid,
> +			    $target_conf,
> +			    $running,
> +			    $param
> +			);
> +
> +		    my $rpcenv = PVE::RPCEnvironment::get();

The $rpcenv variable already exists (from the beginning of the call).

> +		    foreach my $key (keys %$errors) {
> +			$rpcenv->warn( $errors->{$key});

style nit: extra space

> +		    }
> +
> +		    PVE::LXC::Config->write_config($target_vmid, $target_conf);
> +		    $target_conf = PVE::LXC::Config->load_config($target_vmid);
> +
> +		    PVE::LXC::update_lxc_config($target_vmid, $target_conf);
> +		    print "target container '$target_vmid' updated with '$target_mpkey'\n";
> +
> +		    # remove possible replication snapshots
> +		    if (PVE::Storage::volume_has_feature(
> +			    $storecfg,
> +			    'replicate',
> +			    $source_volid),

style nit: parenthesis from function call should be on its own line, 
comma shouldn't be after parenthesis
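I.e. roughly (style sketch only):

     if (PVE::Storage::volume_has_feature(
	 $storecfg,
	 'replicate',
	 $source_volid,
     )) {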

> +		    ) {
> +			eval {
> +			    PVE::Replication::prepare(
> +				$storecfg,
> +				[$new_volid],
> +				undef,
> +				1,
> +				undef,
> +				$logfunc,
> +			    )
> +			};
> +			if (my $err = $@) {
> +			    print "Failed to remove replication snapshots on volume ".
> +				"'$target_mpkey'. Manual cleanup could be necessary.\n";

Could use $rpcenv->warn here as well. Maybe include original error?
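For example (sketch only, assuming $err is still in scope at this point):

     $rpcenv->warn(
	 "Failed to remove replication snapshots on volume '$target_mpkey' - $err. "
	 . "Manual cleanup could be necessary."
     );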

> +			}
> +		    }
> +		});
> +	    });
> +	};
> +
> +	if ($target_vmid) {
> +	    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
> +		if $authuser ne 'root@pam';
> +
> +	    die "Moving a volume to the same container is not possible. Did you ".
> +		"mean to move the volume to a different storage?\n"
> +		if $vmid eq $target_vmid;
> +
> +	    &$load_and_check_reassign_configs();
> +	    return $rpcenv->fork_worker(
> +		'move_volume',
> +		"${vmid}-${mpkey}>${target_vmid}-${target_mpkey}",
> +		$authuser,
> +		$volume_reassignfn
> +	    );
> +	} elsif ($storage) {
> +	    &$move_to_storage_checks();
> +	    my $task = eval {
> +		$rpcenv->fork_worker('move_volume', $vmid, $authuser, $storage_realcmd);
> +	    };
> +	    if (my $err = $@) {
> +		eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
> +		warn $@ if $@;
> +		die $err;
> +	    }
> +	    return $task;
> +	} else {
> +	    die "Provide either a 'storage' to move the mount point to a ".
> +		"different storage or 'target-vmid' and 'target-volume' to move ".
> +		"the mount point to another container\n";
>   	}
> -	return $task;
>     }});
>   
>   __PACKAGE__->register_method({
> 




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guets
  2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guets Aaron Lauterer
                   ` (8 preceding siblings ...)
  2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 9/9] api: move-volume: cleanup very long lines Aaron Lauterer
@ 2021-08-13  8:29 ` Fabian Ebner
  9 siblings, 0 replies; 18+ messages in thread
From: Fabian Ebner @ 2021-08-13  8:29 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

Looking very good to me. Except for the issue(s) with the source being a 
linked clone, I only had very minor complaints. Everything else worked 
as advertised, and the find_free_volname keeps the interface tidy.

Am 06.08.21 um 15:46 schrieb Aaron Lauterer:
> This is the continuation of 'disk-reassign' but instead of a separate
> API endpoint we now follow the approach to make it part of the
> 'move-disk' and 'move-volume' endpoints for VMs and containers.
> 
> The main idea is to make it easy to move a disk/volume to another guest.
> Currently this is a manual and error prone process that requires
> knowledge of how PVE handles disks/volumes and the mapping which guest
> they belong to.
> 
> With this, the 'qm move-disk' and 'pct move-volume' are changed in the
> way that the storage parameter is optional as well as the new
> target-vmid and target-{disk,volume}. This will keep old calls to move the
> disk/volume to another storage working. To move to another guest, the
> storage needs to be omitted.
> 
> Major changes since the last iteration as dedicated API endpoint [0] are
> that the storage layer only implements the renaming itself. The layer
> above (qemu-server and pve-container) define the name of the new
> volume/disk.  Therefore it was necessary to expose the new
> 'find_free_volname' function.  The rename function on the storage layer
> handles possible template references and the creation of the new volid
> as that is highly dependent on the actual storage.
> 
> The following storage types are implemented at the moment:
> * dir based ones
> * ZFS
> * (thin) LVM
> * Ceph RBD
> 
> 
> Most parts of the disk-reassign code has been taken and moved into the
> 'move_disk' and 'move_volume' endpoints with conditional checking if the
> reassign code or the move to other storage code is meant to run
> depending on the given parameters.
> 
> Changes since v1 [2] (thx @ Fabian_E for the reviews!):
> * drop exposed 'find_free_diskname' method
> * drop 'wants_fmt_suffix' method (not needed anymore)
> * introduce 'find_free_volname' which decides if only the diskname is
>    needed or the longer path for directory based storages
> * use $source_volname instead of $source_volid -> avoids some extra
>    calls to get to $source_volname again
> * make --target-{disk,volume} optional and fall back to source key
> * smaller fixes in code quality and using existing functions like
>    'parse_volname' instead of a custom regex (possible with the new
>    changes)
> 
> 
> Changes since the RFC [1]:
> * added check if target guest is replicated and fail if storage does not
>    support replication
> * only pass minimum of needed parameters to the storage layer and infer
>    other needed information from that
> * lock storage and check if the volume aready exists (handling a
>    possible race condition between calling find_free_disk and the actual
>    renaming)
> * use a helper method to determine if the plugin needs the fmt suffix
>    in the volume name
> * getting format of the source and pass it to find_free_disk
> * style fixes (long lines, multiline post-if, ...)
> 
> [0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047481.html
> [1] https://lists.proxmox.com/pipermail/pve-devel/2021-June/048400.html
> [2] https://lists.proxmox.com/pipermail/pve-devel/2021-July/049445.html
> 
> storage: Aaron Lauterer (2):
>    storage: add new find_free_volname
>    add disk rename feature
> 
>   PVE/Storage.pm               | 29 +++++++++++++++++--
>   PVE/Storage/LVMPlugin.pm     | 24 ++++++++++++++++
>   PVE/Storage/LvmThinPlugin.pm |  1 +
>   PVE/Storage/Plugin.pm        | 54 ++++++++++++++++++++++++++++++++++++
>   PVE/Storage/RBDPlugin.pm     | 28 +++++++++++++++++++
>   PVE/Storage/ZFSPoolPlugin.pm | 24 ++++++++++++++++
>   6 files changed, 158 insertions(+), 2 deletions(-)
> 
> 
> qemu-server: Aaron Lauterer (4):
>    cli: qm: change move_disk to move-disk
>    Drive: add valid_drive_names_with_unused
>    api: move-disk: add move to other VM
>    api: move-disk: cleanup very long lines
> 
>   PVE/API2/Qemu.pm        | 263 ++++++++++++++++++++++++++++++++++++++--
>   PVE/CLI/qm.pm           |   3 +-
>   PVE/QemuServer/Drive.pm |   4 +
>   3 files changed, 259 insertions(+), 11 deletions(-)
> 
> container: Aaron Lauterer (3):
>    cli: pct: change move_volume to move-volume
>    api: move-volume: add move to another container
>    api: move-volume: cleanup very long lines
> 
>   src/PVE/API2/LXC.pm | 301 ++++++++++++++++++++++++++++++++++++++++----
>   src/PVE/CLI/pct.pm  |   3 +-
>   2 files changed, 276 insertions(+), 28 deletions(-)
> 




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v2 storage 1/9] storage: add new find_free_volname
  2021-08-12 12:50   ` Fabian Ebner
@ 2021-08-13 12:46     ` Aaron Lauterer
  0 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-13 12:46 UTC (permalink / raw)
  To: Fabian Ebner, pve-devel



On 8/12/21 2:50 PM, Fabian Ebner wrote:
> Am 06.08.21 um 15:46 schrieb Aaron Lauterer:
>> This new method exposes the functionality to request a new, not yet
>> used, volname for a storage.
>>
>> The default implementation will return the result from
>> 'find_free_diskname' prefixed with "<VMID>/" if $scfg->{path} exists.
>> Otherwise it will only return the result from 'find_free_diskname'.
>>
>> If the format suffix is added also depends on the existence of
>> $scfg->{path}.
>>
>> $scfg->{path} will be present for directory based storage types.
>>
>> Should a storage need to return a different volname, it needs to override
>> the 'find_free_volname' method.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>>
>> v1 -> v2:
>> * drop exposed 'find_free_diskname' in favor of 'find_free_volname'
>> * dropped 'wants_fmt_suffix' as 'find_free_volname' now decides that itself
>>
>> rfc -> v1:
>> dropped $add_fmt_suffix parameter and added the "wants_fmt_suffix"
>> helper method in each plugin.
>>
>>   PVE/Storage.pm        |  9 +++++++++
>>   PVE/Storage/Plugin.pm | 11 +++++++++++
>>   2 files changed, 20 insertions(+)
>>
>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>> index c04b5a2..c38fe7b 100755
>> --- a/PVE/Storage.pm
>> +++ b/PVE/Storage.pm
>> @@ -203,6 +203,15 @@ sub storage_can_replicate {
>>       return $plugin->storage_can_replicate($scfg, $storeid, $format);
>>   }
>> +sub find_free_volname {
>> +    my ($cfg, $storeid, $vmid, $fmt) = @_;
>> +
>> +    my $scfg = storage_config($cfg, $storeid);
>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +
>> +    return $plugin->find_free_volname($storeid, $scfg, $vmid, $fmt);
>> +}
>> +
>>   sub storage_ids {
>>       my ($cfg) = @_;
>> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
>> index b1865cb..e043329 100644
>> --- a/PVE/Storage/Plugin.pm
>> +++ b/PVE/Storage/Plugin.pm
>> @@ -685,6 +685,17 @@ sub find_free_diskname {
>>       return get_next_vm_diskname($disk_list, $storeid, $vmid, $fmt, $scfg, $add_fmt_suffix);
>>   }
>> +sub find_free_volname {
>> +    my ($class, $storeid, $scfg, $vmid, $fmt) = @_;
>> +
>> +    my $diskname = $class->find_free_diskname($storeid, $scfg, $vmid, $fmt, exists($scfg->{path}));
>> +
>> +    if (exists($scfg->{path})) {
> 
> More of a nit, but I don't like the usage of exists here, for two reasons:
> 
> 1. The main reason: it's inconsistent with all the other checks for $scfg->{path} that are used throughout the file.
> 
> 2. It's more likely to break. Consider some code I've made up, but similar code might exist somewhere in our code base:
>      ...
>      my $path = get_path_from_config($cfg, $storeid);
>      my $scfg = {
>      path => $path,
>      content => $content,
>      ...
>      };
> or
>      my $start = $scfg->{path}[0];
>      # auto-vivification means the 'path' key now always exists (with undef value if it was undef before)
>      if (defined($start) && $start eq '/') {
>      ...
>      }
> 
> Using exists here makes it necessary to check that such code is not used for $scfg, currently and in the future.

Okay I see your point. So following the regular approach throughout the pve-storage repo should be okay. That is, evaluating $scfg->{path} directly.

> 
>> +    return "${vmid}/$diskname";
>> +    }
>> +    return $diskname;
>> +}
>> +
>>   sub clone_image {
>>       my ($class, $scfg, $storeid, $volname, $vmid, $snap) = @_;
>>




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v2 qemu-server 5/9] api: move-disk: add move to other VM
  2021-08-13  7:41   ` Fabian Ebner
@ 2021-08-13 15:35     ` Aaron Lauterer
  2021-09-01  9:48       ` Fabian Ebner
  0 siblings, 1 reply; 18+ messages in thread
From: Aaron Lauterer @ 2021-08-13 15:35 UTC (permalink / raw)
  To: Fabian Ebner, pve-devel



On 8/13/21 9:41 AM, Fabian Ebner wrote:
> Am 06.08.21 um 15:46 schrieb Aaron Lauterer:
>> The goal of this is to expand the move-disk API endpoint to make it
>> possible to move a disk to another VM. Previously this was only possible
>> with manual intervention either by renaming the VM disk or by manually
>> adding the disks volid to the config of the other VM.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> v1 -> v2:
>> * make --target-disk optional and use source disk key as fallback
>> * use parse_volname instead of custom regex
>> * adapt to find_free_volname
>> * smaller (style) fixes
>>
>> rfc -> v1:
>> * add check if target guest is replicated and fail if the moved volume
>>    does not support it
>> * check if source volume has a format suffix and pass it to
>>    'find_free_disk'
>> * fixed some style nits
>>
>> old dedicated api endpoint -> rfc:
>> There are some big changes here. The old [0] dedicated API endpoint is
>> gone and most of its code is now part of move_disk. Error messages have
>> been changed accordingly and sometimes enhanced by adding disk keys and
>> VMIDs where appropriate.
>>
>> Since a move to other guests should be possible for unused disks, we
>> need to check before doing a move to storage to make sure to not
>> handle unused disks.
>>
>>   PVE/API2/Qemu.pm | 238 ++++++++++++++++++++++++++++++++++++++++++++++-
>>   PVE/CLI/qm.pm    |   2 +-
>>   2 files changed, 234 insertions(+), 6 deletions(-)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index ef0d877..30e222a 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>>   use PVE::VZDump::Plugin;
>>   use PVE::DataCenterConfig;
>>   use PVE::SSHInfo;
>> +use PVE::Replication;
>>   BEGIN {
>>       if (!$ENV{PVE_GENERATING_DOCS}) {
>> @@ -3274,9 +3275,11 @@ __PACKAGE__->register_method({
>>       method => 'POST',
>>       protected => 1,
>>       proxyto => 'node',
>> -    description => "Move volume to different storage.",
>> +    description => "Move volume to different storage or to a different VM.",
>>       permissions => {
>> -    description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
>> +    description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
>> +        "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
>> +        "a disk to another VM, you need the permissions on the target VM as well.",
>>       check => [ 'and',
>>              ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>>              ['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
>> @@ -3287,14 +3290,19 @@ __PACKAGE__->register_method({
>>       properties => {
>>           node => get_standard_option('pve-node'),
>>           vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
>> +        'target-vmid' => get_standard_option('pve-vmid', {
>> +        completion => \&PVE::QemuServer::complete_vmid,
>> +        optional => 1,
>> +        }),
>>           disk => {
>>               type => 'string',
>>           description => "The disk you want to move.",
>> -        enum => [PVE::QemuServer::Drive::valid_drive_names()],
>> +        enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>>           },
>>               storage => get_standard_option('pve-storage-id', {
>>           description => "Target storage.",
>>           completion => \&PVE::QemuServer::complete_storage,
>> +        optional => 1,
>>               }),
>>               'format' => {
>>                   type => 'string',
>> @@ -3321,6 +3329,20 @@ __PACKAGE__->register_method({
>>           minimum => '0',
>>           default => 'move limit from datacenter or storage config',
>>           },
>> +        'target-disk' => {
>> +            type => 'string',
>> +        description => "The config key the disk will be moved to on the target VM " .
>> +            "(for example, ide0 or scsi1).",
>> +        enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>> +        optional => 1,
> 
> The default could be mentioned here.

Good point.

> 
>> +        },
>> +        'target-digest' => {
>> +        type => 'string',
>> +        description => 'Prevent changes if current configuration file of the target VM has " .
>> +            "a different SHA1 digest. This can be used to prevent concurrent modifications.',
>> +        maxLength => 40,
>> +        optional => 1,
>> +        },
>>       },
>>       },
>>       returns => {
>> @@ -3335,14 +3357,22 @@ __PACKAGE__->register_method({
>>       my $node = extract_param($param, 'node');
>>       my $vmid = extract_param($param, 'vmid');
>> +    my $target_vmid = extract_param($param, 'target-vmid');
>>       my $digest = extract_param($param, 'digest');
>> +    my $target_digest = extract_param($param, 'target-digest');
>>       my $disk = extract_param($param, 'disk');
>> +    my $target_disk = extract_param($param, 'target-disk') // $disk;
>>       my $storeid = extract_param($param, 'storage');
>>       my $format = extract_param($param, 'format');
>> +    die "either set storage or target-vmid, but not both\n"
>> +        if $storeid && $target_vmid;
>> +
>> +
>>       my $storecfg = PVE::Storage::config();
>> +    my $source_volid;
>> -    my $updatefn =  sub {
>> +    my $move_updatefn =  sub {
>>           my $conf = PVE::QemuConfig->load_config($vmid);
>>           PVE::QemuConfig->check_lock($conf);
>> @@ -3452,7 +3482,205 @@ __PACKAGE__->register_method({
>>               return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
>>       };
>> -    return PVE::QemuConfig->lock_config($vmid, $updatefn);
>> +    my $load_and_check_reassign_configs = sub {
>> +        my $vmlist = PVE::Cluster::get_vmlist()->{ids};
>> +
>> +        if ($vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node}) {
>> +        die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) ".
>> +            "but target VM is on $vmlist->{$target_vmid}->{node}.\n";
>> +        }
>> +
>> +        my $source_conf = PVE::QemuConfig->load_config($vmid);
>> +        PVE::QemuConfig->check_lock($source_conf);
>> +        my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>> +        PVE::QemuConfig->check_lock($target_conf);
>> +
>> +        die "Can't move disks from or to template VMs\n"
>> +        if ($source_conf->{template} || $target_conf->{template});
>> +
>> +        if ($digest) {
>> +        eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
>> +        if (my $err = $@) {
>> +            die "VM ${vmid}: ${err}";
>> +        }
>> +        }
>> +
>> +        if ($target_digest) {
>> +        eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
>> +        if (my $err = $@) {
>> +            die "VM ${target_vmid}: ${err}";
>> +        }
>> +        }
>> +
>> +        die "Disk '${disk}' for VM '$vmid' does not exist\n"
>> +        if !defined($source_conf->{$disk});
>> +
>> +        die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
>> +        if exists($target_conf->{$target_disk});
>> +
>> +        my $drive = PVE::QemuServer::parse_drive(
>> +        $disk,
>> +        $source_conf->{$disk},
>> +        );
>> +        $source_volid = $drive->{file};
>> +
>> +        die "disk '${disk}' has no associated volume\n"
>> +        if !$source_volid;
>> +        die "CD drive contents can't be moved to another VM\n"
>> +        if PVE::QemuServer::drive_is_cdrom($drive, 1);
>> +        die "Can't move  physical disk to another VM\n" if $source_volid =~ m|^/dev/|;
>> +        if (PVE::QemuServer::Drive::is_volume_in_use(
>> +            $storecfg,
>> +            $source_conf,
>> +            $disk,
>> +            $source_volid,
>> +        )) {
>> +        die "Can't move disk used by a snapshot to another VM\n"
>> +        }
> 
> This looks weird to me style-wise. Also missing semicolon after die.

Yeah, no matter how I wrap it, it either looks weird or ends up a few characters over the 100-column limit...
For the sake of readability I think I'll opt for the slightly too long post-if variant.
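
For illustration, a rough sketch of that post-if variant (reusing the
is_volume_in_use() call from the hunk above; the condition line lands
right around the 100-column limit):

    die "Can't move disk used by a snapshot to another VM\n"
        if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $disk, $source_volid);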

> 
>> +
>> +        if (!PVE::Storage::volume_has_feature(
>> +            $storecfg,
>> +            'rename',
>> +            $source_volid,
>> +        )) {
>> +        die "Storage does not support moving of this disk to another VM\n"
>> +        }
> 
> Same here, but this time the if-condition could fit on one line within the 100 character limit ;) Again, missing semicolon.

You are right, switching this to a post-if.

> 
>> +
>> +        die "Cannot move disk to another VM while the source VM is running\n"
>> +        if PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/;
>> +
>> +        if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
>> +        my $interface = $1;
> 
> Nit: Isn't the interface already present in the result from parse_drive?

The previous parse_drive is done on the source disk. Here we are checking against the target disk, which can use a different config key / interface. We also cannot use parse_drive for the target disk with the source_conf data, because it just returns undef when parse_property_string fails. That failure is exactly what we want to catch here: the check tells us whether the target disk key supports all the config options and, if not, lets us present the errors to the user so they have an idea why it does not work.
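
Roughly, the difference looks like this (hypothetical snippet, not the actual patch):

    # parse_drive() swallows the reason for a mismatch:
    my $drive = PVE::QemuServer::parse_drive($target_disk, $source_conf->{$disk});
    # $drive is simply undef if the property string does not fit the target interface.

    # Calling parse_property_string() directly surfaces the schema error via $@,
    # which can then be shown to the user. ($interface is the target key's
    # interface, e.g. 'ide' from 'ide0', taken from the regex in the patch.)
    my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
    eval { PVE::JSONSchema::parse_property_string($desc->{format}, $source_conf->{$disk}) };
    die "Cannot move disk to another VM: $@" if $@;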

> 
>> +        my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
>> +        eval {
>> +            PVE::JSONSchema::parse_property_string(
>> +            $desc->{format},
>> +            $source_conf->{$disk},
>> +            )
>> +        };
>> +        if (my $err = $@) {
>> +            die "Cannot move disk to another VM: ${err}";
>> +        }
>> +        }
>> +
>> +        my $repl_conf = PVE::ReplicationConfig->new();
>> +        my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
>> +        my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
>> +        my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
>> +        if ($is_replicated && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
>> +        die "Cannot move disk to a replicated VM. Storage does not support replication!\n";
>> +        }
>> +
>> +        return ($source_conf, $target_conf);
>> +    };
>> +
>> +    my $logfunc = sub {
>> +        my ($msg) = @_;
>> +        print STDERR "$msg\n";
>> +    };
>> +
>> +    my $disk_reassignfn = sub {
>> +        return PVE::QemuConfig->lock_config($vmid, sub {
>> +        return PVE::QemuConfig->lock_config($target_vmid, sub {
>> +            my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
>> +
>> +            my $drive_param = PVE::QemuServer::parse_drive(
>> +            $target_disk,
>> +            $source_conf->{$disk},
>> +            );
>> +
>> +            print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
>> +            my ($storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
>> +
>> +            my (
>> +            undef,
>> +            undef,
>> +            undef,
>> +            undef,
>> +            undef,
>> +            undef,
>> +            $fmt
>> +            ) = PVE::Storage::parse_volname($storecfg, $source_volid);
> 
> Nit: using
>      my $fmt = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
> like above is shorter.

thx!

> 
>> +
>> +            my $target_volname = PVE::Storage::find_free_volname(
>> +            $storecfg,
>> +            $storeid,
>> +            $target_vmid,
>> +            $fmt
>> +            );
>> +
>> +            my $new_volid = PVE::Storage::rename_volume(
>> +            $storecfg,
>> +            $source_volid,
>> +            $target_volname,
>> +            );
>> +
>> +            $drive_param->{file} = $new_volid;
>> +
>> +            delete $source_conf->{$disk};
>> +            print "removing disk '${disk}' from VM '${vmid}' config\n";
>> +            PVE::QemuConfig->write_config($vmid, $source_conf);
>> +
>> +            my $drive_string = PVE::QemuServer::print_drive($drive_param);
>> +            &$update_vm_api(
>> +            {
>> +                node => $node,
>> +                vmid => $target_vmid,
>> +                digest => $target_digest,
>> +                $target_disk => $drive_string,
>> +            },
>> +            1,
>> +            );
>> +
>> +            # remove possible replication snapshots
>> +            if (PVE::Storage::volume_has_feature(
>> +                $storecfg,
>> +                'replicate',
>> +                $source_volid),
>> +            ) {
>> +            eval {
>> +                PVE::Replication::prepare(
>> +                $storecfg,
>> +                [$new_volid],
>> +                undef,
>> +                1,
>> +                undef,
>> +                $logfunc,
>> +                )
>> +            };
>> +            if (my $err = $@) {
>> +                print "Failed to remove replication snapshots on moved disk " .
>> +                "'$target_disk'. Manual cleanup could be necessary.\n";
>> +            }
>> +            }
>> +        });
>> +        });
>> +    };
>> +
>> +    if ($target_vmid) {
>> +        $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
>> +        if $authuser ne 'root@pam';
>> +
>> +        die "Moving a disk to the same VM is not possible. Did you mean to ".
>> +        "move the disk to a different storage?\n"
>> +        if $vmid eq $target_vmid;
>> +
>> +        &$load_and_check_reassign_configs();
>> +        return $rpcenv->fork_worker(
>> +        'qmmove',
>> +        "${vmid}-${disk}>${target_vmid}-${target_disk}",
>> +        $authuser,
>> +        $disk_reassignfn
>> +        );
>> +    } elsif ($storeid) {
>> +        die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
>> +        if $disk =~ m/^unused\d+$/;
>> +        return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
>> +    } else {
>> +        die "Provide either a 'storage' to move the disk to a different " .
>> +        "storage or 'target-vmid' and 'target-disk' to move the disk " .
>> +        "to another VM\n";
>> +    }
>>       }});
>>   my $check_vm_disks_local = sub {
>> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
>> index ef99b6d..a92d301 100755
>> --- a/PVE/CLI/qm.pm
>> +++ b/PVE/CLI/qm.pm
>> @@ -910,7 +910,7 @@ our $cmddef = {
>>       resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
>> -    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
>> +    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
>>       move_disk => { alias => 'move-disk' },
>>       unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
>>





* Re: [pve-devel] [PATCH v2 qemu-server 5/9] api: move-disk: add move to other VM
  2021-08-13 15:35     ` Aaron Lauterer
@ 2021-09-01  9:48       ` Fabian Ebner
  0 siblings, 0 replies; 18+ messages in thread
From: Fabian Ebner @ 2021-09-01  9:48 UTC (permalink / raw)
  To: Aaron Lauterer, pve-devel

Am 13.08.21 um 17:35 schrieb Aaron Lauterer:
> 
> 
> On 8/13/21 9:41 AM, Fabian Ebner wrote:
>> Am 06.08.21 um 15:46 schrieb Aaron Lauterer:
>>> The goal of this is to expand the move-disk API endpoint to make it
>>> possible to move a disk to another VM. Previously this was only possible
>>> with manual intervention either by renaming the VM disk or by manually
>>> adding the disks volid to the config of the other VM.
>>>
>>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>>> ---
>>> v1 -> v2:
>>> * make --target-disk optional and use source disk key as fallback
>>> * use parse_volname instead of custom regex
>>> * adapt to find_free_volname
>>> * smaller (style) fixes
>>>
>>> rfc -> v1:
>>> * add check if target guest is replicated and fail if the moved volume
>>>    does not support it
>>> * check if source volume has a format suffix and pass it to
>>>    'find_free_disk'
>>> * fixed some style nits
>>>
>>> old dedicated api endpoint -> rfc:
>>> There are some big changes here. The old [0] dedicated API endpoint is
>>> gone and most of its code is now part of move_disk. Error messages have
>>> been changed accordingly and sometimes enhanced by adding disk keys and
>>> VMIDs where appropriate.
>>>
>>> Since a move to other guests should be possible for unused disks, we
>>> need to check before doing a move to storage to make sure to not
>>> handle unused disks.
>>>
>>>   PVE/API2/Qemu.pm | 238 ++++++++++++++++++++++++++++++++++++++++++++++-
>>>   PVE/CLI/qm.pm    |   2 +-
>>>   2 files changed, 234 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>>> index ef0d877..30e222a 100644
>>> --- a/PVE/API2/Qemu.pm
>>> +++ b/PVE/API2/Qemu.pm
>>> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>>>   use PVE::VZDump::Plugin;
>>>   use PVE::DataCenterConfig;
>>>   use PVE::SSHInfo;
>>> +use PVE::Replication;
>>>   BEGIN {
>>>       if (!$ENV{PVE_GENERATING_DOCS}) {
>>> @@ -3274,9 +3275,11 @@ __PACKAGE__->register_method({
>>>       method => 'POST',
>>>       protected => 1,
>>>       proxyto => 'node',
>>> -    description => "Move volume to different storage.",
>>> +    description => "Move volume to different storage or to a 
>>> different VM.",
>>>       permissions => {
>>> -    description => "You need 'VM.Config.Disk' permissions on 
>>> /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
>>> +    description => "You need 'VM.Config.Disk' permissions on 
>>> /vms/{vmid}, " .
>>> +        "and 'Datastore.AllocateSpace' permissions on the storage. 
>>> To move ".
>>> +        "a disk to another VM, you need the permissions on the 
>>> target VM as well.",
>>>       check => [ 'and',
>>>              ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>>>              ['perm', '/storage/{storage}', [ 
>>> 'Datastore.AllocateSpace' ]],
>>> @@ -3287,14 +3290,19 @@ __PACKAGE__->register_method({
>>>       properties => {
>>>           node => get_standard_option('pve-node'),
>>>           vmid => get_standard_option('pve-vmid', { completion => 
>>> \&PVE::QemuServer::complete_vmid }),
>>> +        'target-vmid' => get_standard_option('pve-vmid', {
>>> +        completion => \&PVE::QemuServer::complete_vmid,
>>> +        optional => 1,
>>> +        }),
>>>           disk => {
>>>               type => 'string',
>>>           description => "The disk you want to move.",
>>> -        enum => [PVE::QemuServer::Drive::valid_drive_names()],
>>> +        enum => 
>>> [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>>>           },
>>>               storage => get_standard_option('pve-storage-id', {
>>>           description => "Target storage.",
>>>           completion => \&PVE::QemuServer::complete_storage,
>>> +        optional => 1,
>>>               }),
>>>               'format' => {
>>>                   type => 'string',
>>> @@ -3321,6 +3329,20 @@ __PACKAGE__->register_method({
>>>           minimum => '0',
>>>           default => 'move limit from datacenter or storage config',
>>>           },
>>> +        'target-disk' => {
>>> +            type => 'string',
>>> +        description => "The config key the disk will be moved to on 
>>> the target VM " .
>>> +            "(for example, ide0 or scsi1).",
>>> +        enum => 
>>> [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>>> +        optional => 1,
>>
>> The default could be mentioned here.
> 
> Good point.
> 
>>
>>> +        },
>>> +        'target-digest' => {
>>> +        type => 'string',
>>> +        description => 'Prevent changes if current configuration 
>>> file of the target VM has " .
>>> +            "a different SHA1 digest. This can be used to prevent 
>>> concurrent modifications.',
>>> +        maxLength => 40,
>>> +        optional => 1,
>>> +        },
>>>       },
>>>       },
>>>       returns => {
>>> @@ -3335,14 +3357,22 @@ __PACKAGE__->register_method({
>>>       my $node = extract_param($param, 'node');
>>>       my $vmid = extract_param($param, 'vmid');
>>> +    my $target_vmid = extract_param($param, 'target-vmid');
>>>       my $digest = extract_param($param, 'digest');
>>> +    my $target_digest = extract_param($param, 'target-digest');
>>>       my $disk = extract_param($param, 'disk');
>>> +    my $target_disk = extract_param($param, 'target-disk') // $disk;
>>>       my $storeid = extract_param($param, 'storage');
>>>       my $format = extract_param($param, 'format');
>>> +    die "either set storage or target-vmid, but not both\n"
>>> +        if $storeid && $target_vmid;
>>> +
>>> +
>>>       my $storecfg = PVE::Storage::config();
>>> +    my $source_volid;
>>> -    my $updatefn =  sub {
>>> +    my $move_updatefn =  sub {
>>>           my $conf = PVE::QemuConfig->load_config($vmid);
>>>           PVE::QemuConfig->check_lock($conf);
>>> @@ -3452,7 +3482,205 @@ __PACKAGE__->register_method({
>>>               return $rpcenv->fork_worker('qmmove', $vmid, $authuser, 
>>> $realcmd);
>>>       };
>>> -    return PVE::QemuConfig->lock_config($vmid, $updatefn);
>>> +    my $load_and_check_reassign_configs = sub {
>>> +        my $vmlist = PVE::Cluster::get_vmlist()->{ids};
>>> +
>>> +        if ($vmlist->{$vmid}->{node} ne 
>>> $vmlist->{$target_vmid}->{node}) {
>>> +        die "Both VMs need to be on the same node 
>>> $vmlist->{$vmid}->{node}) ".
>>> +            "but target VM is on $vmlist->{$target_vmid}->{node}.\n";
>>> +        }
>>> +
>>> +        my $source_conf = PVE::QemuConfig->load_config($vmid);
>>> +        PVE::QemuConfig->check_lock($source_conf);
>>> +        my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>>> +        PVE::QemuConfig->check_lock($target_conf);
>>> +
>>> +        die "Can't move disks from or to template VMs\n"
>>> +        if ($source_conf->{template} || $target_conf->{template});
>>> +
>>> +        if ($digest) {
>>> +        eval { PVE::Tools::assert_if_modified($digest, 
>>> $source_conf->{digest}) };
>>> +        if (my $err = $@) {
>>> +            die "VM ${vmid}: ${err}";
>>> +        }
>>> +        }
>>> +
>>> +        if ($target_digest) {
>>> +        eval { PVE::Tools::assert_if_modified($target_digest, 
>>> $target_conf->{digest}) };
>>> +        if (my $err = $@) {
>>> +            die "VM ${target_vmid}: ${err}";
>>> +        }
>>> +        }
>>> +
>>> +        die "Disk '${disk}' for VM '$vmid' does not exist\n"
>>> +        if !defined($source_conf->{$disk});
>>> +
>>> +        die "Target disk key '${target_disk}' is already in use for 
>>> VM '$target_vmid'\n"
>>> +        if exists($target_conf->{$target_disk});
>>> +
>>> +        my $drive = PVE::QemuServer::parse_drive(
>>> +        $disk,
>>> +        $source_conf->{$disk},
>>> +        );
>>> +        $source_volid = $drive->{file};
>>> +
>>> +        die "disk '${disk}' has no associated volume\n"
>>> +        if !$source_volid;
>>> +        die "CD drive contents can't be moved to another VM\n"
>>> +        if PVE::QemuServer::drive_is_cdrom($drive, 1);
>>> +        die "Can't move  physical disk to another VM\n" if 
>>> $source_volid =~ m|^/dev/|;
>>> +        if (PVE::QemuServer::Drive::is_volume_in_use(
>>> +            $storecfg,
>>> +            $source_conf,
>>> +            $disk,
>>> +            $source_volid,
>>> +        )) {
>>> +        die "Can't move disk used by a snapshot to another VM\n"
>>> +        }
>>
>> This looks weird to me style-wise. Also missing semicolon after die.
> 
> Yeah, no matter how, it either looks weird or will be a few characters 
> over the 100 limit...
> For the sake of readability I think I'll opt for the slightly too long 
> post if variant.
> 

The style guide doesn't have an explicit example of a long if with a
function call, but the two examples for long if conditions (at the end
of the "Wrapping post-if" section) [0] look different.

To match one of those, you could either:
1. move the '{' to its own line.
2. use
if (
     function call
) {

But maybe the current style is also acceptable?

[0]: https://pve.proxmox.com/wiki/Perl_Style_Guide#Wrapping_Post-If
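
Spelled out with the is_volume_in_use() check from the hunk above (just a
sketch, not a requirement), the two variants would be:

    # 1. opening brace on its own line
    if (PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $disk, $source_volid))
    {
        die "Can't move disk used by a snapshot to another VM\n";
    }

    # 2. condition wrapped inside the parentheses
    if (
        PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $disk, $source_volid)
    ) {
        die "Can't move disk used by a snapshot to another VM\n";
    }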

>>
>>> +
>>> +        if (!PVE::Storage::volume_has_feature(
>>> +            $storecfg,
>>> +            'rename',
>>> +            $source_volid,
>>> +        )) {
>>> +        die "Storage does not support moving of this disk to another 
>>> VM\n"
>>> +        }
>>
>> Same here, but this time the if-condition could fit on one line within 
>> the 100 character limit ;) Again, missing semicolon.
> 
> You are right, switching this to a post-if.
>

Could also be a normal if with the condition on one line, but both are fine.
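
I.e. something along these lines (sketch; the call fits comfortably under
100 characters):

    if (!PVE::Storage::volume_has_feature($storecfg, 'rename', $source_volid)) {
        die "Storage does not support moving of this disk to another VM\n";
    }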

>>
>>> +
>>> +        die "Cannot move disk to another VM while the source VM is 
>>> running\n"
>>> +        if PVE::QemuServer::check_running($vmid) && $disk !~ 
>>> m/^unused\d+$/;
>>> +
>>> +        if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ 
>>> m/^([^\d]+)\d+$/) {
>>> +        my $interface = $1;
>>
>> Nit: Isn't the interface already present in the result from parse_drive?
> 
> The previous parse_drive is done on the source disk. Here we are 
> checking against the target disk which can use a different config key / 
> interface. Also we cannot use parse_drive for the target disk using the 
> source_conf for the data as it will just return undef in case the 
> parse_property_string fails, which is exactly what we are trying to set 
> up here so that we can check if the target disk key supports all config 
> options, and if not, present them to the user, so they have an idea why 
> it does not work.
> 

Right, there are potentially two different interfaces. Please just 
ignore my wrong suggestion.

>>
>>> +        my $desc = 
>>> PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
>>> +        eval {
>>> +            PVE::JSONSchema::parse_property_string(
>>> +            $desc->{format},
>>> +            $source_conf->{$disk},
>>> +            )
>>> +        };
>>> +        if (my $err = $@) {
>>> +            die "Cannot move disk to another VM: ${err}";
>>> +        }
>>> +        }
>>> +
>>> +        my $repl_conf = PVE::ReplicationConfig->new();
>>> +        my $is_replicated = 
>>> $repl_conf->check_for_existing_jobs($target_vmid, 1);
>>> +        my ($storeid, undef) = 
>>> PVE::Storage::parse_volume_id($source_volid);
>>> +        my $format = (PVE::Storage::parse_volname($storecfg, 
>>> $source_volid))[6];
>>> +        if ($is_replicated && 
>>> !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
>>> +        die "Cannot move disk to a replicated VM. Storage does not 
>>> support replication!\n";
>>> +        }
>>> +
>>> +        return ($source_conf, $target_conf);
>>> +    };
>>> +
>>> +    my $logfunc = sub {
>>> +        my ($msg) = @_;
>>> +        print STDERR "$msg\n";
>>> +    };
>>> +
>>> +    my $disk_reassignfn = sub {
>>> +        return PVE::QemuConfig->lock_config($vmid, sub {
>>> +        return PVE::QemuConfig->lock_config($target_vmid, sub {
>>> +            my ($source_conf, $target_conf) = 
>>> &$load_and_check_reassign_configs();
>>> +
>>> +            my $drive_param = PVE::QemuServer::parse_drive(
>>> +            $target_disk,
>>> +            $source_conf->{$disk},
>>> +            );
>>> +
>>> +            print "moving disk '$disk' from VM '$vmid' to 
>>> '$target_vmid'\n";
>>> +            my ($storeid, $source_volname) = 
>>> PVE::Storage::parse_volume_id($source_volid);
>>> +
>>> +            my (
>>> +            undef,
>>> +            undef,
>>> +            undef,
>>> +            undef,
>>> +            undef,
>>> +            undef,
>>> +            $fmt
>>> +            ) = PVE::Storage::parse_volname($storecfg, $source_volid);
>>
>> Nit: using
>>      my $fmt = (PVE::Storage::parse_volname($storecfg, 
>> $source_volid))[6];
>> like above is shorter.
> 
> thx!
> 
>>
>>> +
>>> +            my $target_volname = PVE::Storage::find_free_volname(
>>> +            $storecfg,
>>> +            $storeid,
>>> +            $target_vmid,
>>> +            $fmt
>>> +            );
>>> +
>>> +            my $new_volid = PVE::Storage::rename_volume(
>>> +            $storecfg,
>>> +            $source_volid,
>>> +            $target_volname,
>>> +            );
>>> +
>>> +            $drive_param->{file} = $new_volid;
>>> +
>>> +            delete $source_conf->{$disk};
>>> +            print "removing disk '${disk}' from VM '${vmid}' config\n";
>>> +            PVE::QemuConfig->write_config($vmid, $source_conf);
>>> +
>>> +            my $drive_string = 
>>> PVE::QemuServer::print_drive($drive_param);
>>> +            &$update_vm_api(
>>> +            {
>>> +                node => $node,
>>> +                vmid => $target_vmid,
>>> +                digest => $target_digest,
>>> +                $target_disk => $drive_string,
>>> +            },
>>> +            1,
>>> +            );
>>> +
>>> +            # remove possible replication snapshots
>>> +            if (PVE::Storage::volume_has_feature(
>>> +                $storecfg,
>>> +                'replicate',
>>> +                $source_volid),
>>> +            ) {
>>> +            eval {
>>> +                PVE::Replication::prepare(
>>> +                $storecfg,
>>> +                [$new_volid],
>>> +                undef,
>>> +                1,
>>> +                undef,
>>> +                $logfunc,
>>> +                )
>>> +            };
>>> +            if (my $err = $@) {
>>> +                print "Failed to remove replication snapshots on 
>>> moved disk " .
>>> +                "'$target_disk'. Manual cleanup could be necessary.\n";
>>> +            }
>>> +            }
>>> +        });
>>> +        });
>>> +    };
>>> +
>>> +    if ($target_vmid) {
>>> +        $rpcenv->check_vm_perm($authuser, $target_vmid, undef, 
>>> ['VM.Config.Disk'])
>>> +        if $authuser ne 'root@pam';
>>> +
>>> +        die "Moving a disk to the same VM is not possible. Did you 
>>> mean to ".
>>> +        "move the disk to a different storage?\n"
>>> +        if $vmid eq $target_vmid;
>>> +
>>> +        &$load_and_check_reassign_configs();
>>> +        return $rpcenv->fork_worker(
>>> +        'qmmove',
>>> +        "${vmid}-${disk}>${target_vmid}-${target_disk}",
>>> +        $authuser,
>>> +        $disk_reassignfn
>>> +        );
>>> +    } elsif ($storeid) {
>>> +        die "cannot move disk '$disk', only configured disks can be 
>>> moved to another storage\n"
>>> +        if $disk =~ m/^unused\d+$/;
>>> +        return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
>>> +    } else {
>>> +        die "Provide either a 'storage' to move the disk to a 
>>> different " .
>>> +        "storage or 'target-vmid' and 'target-disk' to move the disk 
>>> " .
>>> +        "to another VM\n";
>>> +    }
>>>       }});
>>>   my $check_vm_disks_local = sub {
>>> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
>>> index ef99b6d..a92d301 100755
>>> --- a/PVE/CLI/qm.pm
>>> +++ b/PVE/CLI/qm.pm
>>> @@ -910,7 +910,7 @@ our $cmddef = {
>>>       resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 
>>> 'size'], { node => $nodename } ],
>>> -    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 
>>> 'disk', 'storage'], { node => $nodename }, $upid_exit ],
>>> +    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 
>>> 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename 
>>> }, $upid_exit ],
>>>       move_disk => { alias => 'move-disk' },
>>>       unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => 
>>> $nodename } ],
>>>





end of thread

Thread overview: 18+ messages
2021-08-06 13:46 [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guets Aaron Lauterer
2021-08-06 13:46 ` [pve-devel] [PATCH v2 storage 1/9] storage: add new find_free_volname Aaron Lauterer
2021-08-12 12:50   ` Fabian Ebner
2021-08-13 12:46     ` Aaron Lauterer
2021-08-06 13:46 ` [pve-devel] [PATCH storage 2/9] add disk rename feature Aaron Lauterer
2021-08-12 12:51   ` Fabian Ebner
2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 3/9] cli: qm: change move_disk to move-disk Aaron Lauterer
2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 4/9] Drive: add valid_drive_names_with_unused Aaron Lauterer
2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 5/9] api: move-disk: add move to other VM Aaron Lauterer
2021-08-13  7:41   ` Fabian Ebner
2021-08-13 15:35     ` Aaron Lauterer
2021-09-01  9:48       ` Fabian Ebner
2021-08-06 13:46 ` [pve-devel] [PATCH v2 qemu-server 6/9] api: move-disk: cleanup very long lines Aaron Lauterer
2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 7/9] cli: pct: change move_volume to move-volume Aaron Lauterer
2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 8/9] api: move-volume: add move to another container Aaron Lauterer
2021-08-13  8:21   ` Fabian Ebner
2021-08-06 13:46 ` [pve-devel] [PATCH v2 container 9/9] api: move-volume: cleanup very long lines Aaron Lauterer
2021-08-13  8:29 ` [pve-devel] [PATCH v2 storage qemu-server container 0/9] move disk or volume to other guets Fabian Ebner
