public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [RFC series 0/7] move disk or volume to other guests
@ 2021-06-01 16:10 Aaron Lauterer
  2021-06-01 16:10 ` [pve-devel] [RFC storage 1/7] storage: expose find_free_diskname Aaron Lauterer
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Aaron Lauterer @ 2021-06-01 16:10 UTC (permalink / raw)
  To: pve-devel

This is the continuation of 'disk-reassign' but instead of a separate
API endpoint we now follow the approach to make it part of the
'move-disk' and 'move-volume' endpoints for VMs and containers.

The main idea is to make it easy to move a disk/volume to another guest.
Currently this is a manual and error-prone process that requires
knowledge of how PVE handles disks/volumes and of the mapping between
them and the guests they belong to.

With this, 'qm move-disk' and 'pct move-volume' are changed so that the
storage parameter becomes optional and the new target-vmid and
target-{disk,mp} parameters are added. Old calls that move the
disk/volume to another storage keep working. To move to another guest,
the storage parameter needs to be omitted.
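
For illustration, calls could then look something like this (the exact
CLI syntax is not final and may still change; the VMIDs are just
placeholders):

    # move a disk to another storage (existing behavior, still works)
    qm move-disk 100 scsi1 <target-storage>

    # move a disk to another VM (new; the storage parameter is omitted)
    qm move-disk 100 scsi1 --target-vmid 101 --target-disk scsi2

    # container counterpart
    pct move-volume 200 mp0 --target-vmid 201 --target-mp mp1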

The major changes since the last patches and discussions [0] are that
the storage layer now only implements the renaming itself, while the
layer above (qemu-server and pve-container) defines the name of the new
volume/disk. Therefore it was necessary to expose the
'find_free_diskname' function. The rename function on the storage layer
handles possible template references and the creation of the new volid,
as that is highly dependent on the actual storage.
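
Roughly, callers now follow this pattern (simplified sketch based on the
qemu-server and pve-container patches later in this series):

    # the guest layer (qemu-server/pve-container) picks the new name ...
    my ($storeid) = PVE::Storage::parse_volume_id($source_volid);
    my $target_volname = PVE::Storage::find_free_diskname(
        $storecfg, $storeid, $target_vmid, $format);

    # ... while the storage layer only renames and returns the new volid
    my $new_volid = PVE::Storage::rename_volume(
        $storecfg, $source_volid, $vmid, $target_volname, $target_vmid);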

The following storage types are implemented at the moment:
* dir based ones
* ZFS
* (thin) LVM
* Ceph RBD


Most of the disk-reassign code has been taken over and moved into the
'move_disk' and 'move_volume' endpoints, which now decide, based on the
given parameters, whether the reassign code or the code for moving to
another storage should run.

As this is the first iteration, I am happy about feedback on whether
this is the right direction and on anything I might have missed.

[0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047481.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047481.html

storage: Aaron Lauterer (2):
  storage: expose find_free_diskname
  add disk rename feature

 PVE/Storage.pm               | 27 +++++++++++++++++++++++++--
 PVE/Storage/LVMPlugin.pm     | 18 ++++++++++++++++++
 PVE/Storage/LvmThinPlugin.pm |  1 +
 PVE/Storage/Plugin.pm        | 29 +++++++++++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm     | 16 ++++++++++++++++
 PVE/Storage/ZFSPoolPlugin.pm | 12 ++++++++++++
 6 files changed, 101 insertions(+), 2 deletions(-)


qemu-server: Aaron Lauterer (3):
  cli: qm: change move_disk to move-disk
  Drive: add valid_drive_names_with_unused
  api: move-disk: add move to other VM

 PVE/API2/Qemu.pm        | 204 +++++++++++++++++++++++++++++++++++++++-
 PVE/CLI/qm.pm           |   3 +-
 PVE/QemuServer/Drive.pm |   4 +
 3 files changed, 205 insertions(+), 6 deletions(-)



container: Aaron Lauterer (2):
  cli: pct: change move_volume to move-volume
  api: move-volume: add move to another container

 src/PVE/API2/LXC.pm | 229 ++++++++++++++++++++++++++++++++++++++++----
 src/PVE/CLI/pct.pm  |   3 +-
 2 files changed, 210 insertions(+), 22 deletions(-)





-- 
2.20.1






* [pve-devel] [RFC storage 1/7] storage: expose find_free_diskname
  2021-06-01 16:10 [pve-devel] [RFC series 0/7] move disk or volume to other guests Aaron Lauterer
@ 2021-06-01 16:10 ` Aaron Lauterer
  2021-06-02  8:29   ` Fabian Ebner
  2021-06-01 16:10 ` [pve-devel] [RFC storage 2/7] add disk rename feature Aaron Lauterer
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2021-06-01 16:10 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/Storage.pm | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index aa36bad..93d09ce 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -201,6 +201,14 @@ sub storage_can_replicate {
     return $plugin->storage_can_replicate($scfg, $storeid, $format);
 }
 
+sub find_free_diskname {
+    my ($cfg, $storeid, $vmid, $fmt, $add_fmt_suffix) = @_;
+
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+    return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt, $add_fmt_suffix);
+}
+
 sub storage_ids {
     my ($cfg) = @_;
 
-- 
2.20.1






* [pve-devel] [RFC storage 2/7] add disk rename feature
  2021-06-01 16:10 [pve-devel] [RFC series 0/7] move disk or volume to other guests Aaron Lauterer
  2021-06-01 16:10 ` [pve-devel] [RFC storage 1/7] storage: expose find_free_diskname Aaron Lauterer
@ 2021-06-01 16:10 ` Aaron Lauterer
  2021-06-02  8:36   ` Fabian Ebner
  2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 3/7] cli: qm: change move_disk to move-disk Aaron Lauterer
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2021-06-01 16:10 UTC (permalink / raw)
  To: pve-devel

Functionality has been added for the following storage types:

* dir based ones
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph

A new feature `rename` has been introduced to mark which storage
plugin supports the feature.

Version API and AGE have been bumped.
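
Callers can gate the new code path on this feature, for example (sketch
mirroring the checks done in the qemu-server and pve-container patches):

    die "storage does not support moving/renaming this volume\n"
        if !PVE::Storage::volume_has_feature($storecfg, 'rename', $source_volid);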

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes:
only do the rename now; the rename function handles templates and returns
the new volid, as this can be handled differently on some storages.

 PVE/Storage.pm               | 19 +++++++++++++++++--
 PVE/Storage/LVMPlugin.pm     | 18 ++++++++++++++++++
 PVE/Storage/LvmThinPlugin.pm |  1 +
 PVE/Storage/Plugin.pm        | 29 +++++++++++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm     | 16 ++++++++++++++++
 PVE/Storage/ZFSPoolPlugin.pm | 12 ++++++++++++
 6 files changed, 93 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 93d09ce..6936abd 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -40,11 +40,11 @@ use PVE::Storage::ZFSPlugin;
 use PVE::Storage::PBSPlugin;
 
 # Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 8;
+use constant APIVER => 9;
 # Age is the number of versions we're backward compatible with.
 # This is like having 'current=APIVER' and age='APIAGE' in libtool,
 # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 7;
+use constant APIAGE => 8;
 
 # load standard plugins
 PVE::Storage::DirPlugin->register();
@@ -355,6 +355,7 @@ sub volume_snapshot_needs_fsfreeze {
 #            snapshot - taking a snapshot is possible
 #            sparseinit - volume is sparsely initialized
 #            template - conversion to base image is possible
+#            rename - renaming volumes is possible
 # $snap - check if the feature is supported for a given snapshot
 # $running - if the guest owning the volume is running
 # $opts - hash with further options:
@@ -1849,6 +1850,20 @@ sub complete_volume {
     return $res;
 }
 
+sub rename_volume {
+    my ($cfg, $source_volid, $source_vmid, $target_volname, $target_vmid) = @_;
+
+    my ($storeid) = parse_volume_id($source_volid);
+    my (undef, $source_volname, undef, $base_name, $base_vmid, $isBase, $format) = parse_volname($cfg, $source_volid);
+
+    activate_storage($cfg, $storeid);
+
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+    return $plugin->rename_volume($scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid);
+}
+
 # Various io-heavy operations require io/bandwidth limits which can be
 # configured on multiple levels: The global defaults in datacenter.cfg, and
 # per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index df49b76..977e6a4 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -339,6 +339,16 @@ sub lvcreate {
     run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
 }
 
+sub lvrename {
+    my ($vg, $oldname, $newname) = @_;
+
+    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
+    run_command(
+	['/sbin/lvrename', $vg, $oldname, $newname],
+	errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error"
+    );
+}
+
 sub alloc_image {
     my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
 
@@ -584,6 +594,7 @@ sub volume_has_feature {
 
     my $features = {
 	copy => { base => 1, current => 1},
+	rename => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -684,4 +695,11 @@ sub volume_import_write {
 	input => '<&'.fileno($input_fh));
 }
 
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
+
+    lvrename($scfg->{vgname}, $source_volname, $target_volname);
+    return "${storeid}:${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index c9e127c..45ad0ad 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -355,6 +355,7 @@ sub volume_has_feature {
 	template => { current => 1},
 	copy => { base => 1, current => 1, snap => 1},
 	sparseinit => { base => 1, current => 1},
+	rename => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 4a10a1f..4e6e288 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -939,6 +939,7 @@ sub volume_has_feature {
 		  snap => {qcow2 => 1} },
 	sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
 			current => {qcow2 => 1, raw => 1, vmdk => 1} },
+	rename => { current =>{qcow2 => 1, raw => 1, vmdk => 1} },
     };
 
     # clone_image creates a qcow2 volume
@@ -946,6 +947,10 @@ sub volume_has_feature {
 		defined($opts->{valid_target_formats}) &&
 		!(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
 
+    return 0 if $feature eq 'rename'
+	&& $class->can('api')
+	&& $class->api() < 9;
+
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
 	$class->parse_volname($volname);
 
@@ -1463,4 +1468,28 @@ sub volume_import_formats {
     return ();
 }
 
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
+    die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 9;
+
+    my $basedir = $class->get_subdir($scfg, 'images');
+    my $sourcedir = "${basedir}/${source_vmid}";
+    my $targetdir = "${basedir}/${target_vmid}";
+
+    mkpath $targetdir;
+
+    my (undef, $format, undef) = parse_name_dir($source_volname);
+    $target_volname = "${target_volname}.${format}";
+
+    my $old_path = "${sourcedir}/${source_volname}";
+    my $new_path = "${targetdir}/${target_volname}";
+
+    my $base = $base_name ? "${base_vmid}/${base_name}/" : '';
+
+    rename($old_path, $new_path) ||
+	die "rename '$old_path' to '$new_path' failed - $!\n";
+
+    return "${storeid}:${base}${target_vmid}/${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index a8d1243..6e9bf8f 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -728,6 +728,7 @@ sub volume_has_feature {
 	template => { current => 1},
 	copy => { base => 1, current => 1, snap => 1},
 	sparseinit => { base => 1, current => 1},
+	rename => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
@@ -743,4 +744,19 @@ sub volume_has_feature {
     return undef;
 }
 
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
+
+    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
+
+    run_rbd_command(
+	$cmd,
+	errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
+    );
+
+    $base_name = $base_name ? "${base_name}/" : '';
+
+    return "${storeid}:${base_name}${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 2e2abe3..f04f443 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -687,6 +687,7 @@ sub volume_has_feature {
 	copy => { base => 1, current => 1},
 	sparseinit => { base => 1, current => 1},
 	replicate => { base => 1, current => 1},
+	rename => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -789,4 +790,15 @@ sub volume_import_formats {
     return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
 }
 
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
+
+    my $pool = $scfg->{pool};
+    $class->zfs_request($scfg, 5, 'rename', "${pool}/$source_volname", "${pool}/$target_volname");
+
+    $base_name = $base_name ? "${base_name}/" : '';
+
+    return "${storeid}:${base_name}${target_volname}";
+}
+
 1;
-- 
2.20.1






* [pve-devel] [RFC qemu-server 3/7] cli: qm: change move_disk to move-disk
  2021-06-01 16:10 [pve-devel] [RFC series 0/7] move disk or volume to other guests Aaron Lauterer
  2021-06-01 16:10 ` [pve-devel] [RFC storage 1/7] storage: expose find_free_diskname Aaron Lauterer
  2021-06-01 16:10 ` [pve-devel] [RFC storage 2/7] add disk rename feature Aaron Lauterer
@ 2021-06-01 16:10 ` Aaron Lauterer
  2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 4/7] Drive: add valid_drive_names_with_unused Aaron Lauterer
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2021-06-01 16:10 UTC (permalink / raw)
  To: pve-devel

Also add an alias to keep 'move_disk' working.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/CLI/qm.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 1c199b6..60360c8 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -910,7 +910,8 @@ our $cmddef = {
 
     resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
 
-    move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+    move_disk => { alias => 'move-disk' },
 
     unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
 
-- 
2.20.1






* [pve-devel] [RFC qemu-server 4/7] Drive: add valid_drive_names_with_unused
  2021-06-01 16:10 [pve-devel] [RFC series 0/7] move disk or volume to other guests Aaron Lauterer
                   ` (2 preceding siblings ...)
  2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 3/7] cli: qm: change move_disk to move-disk Aaron Lauterer
@ 2021-06-01 16:10 ` Aaron Lauterer
  2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 5/7] api: move-disk: add move to other VM Aaron Lauterer
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2021-06-01 16:10 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/QemuServer/Drive.pm | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 146a4ab..e6606b0 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -392,6 +392,10 @@ sub valid_drive_names {
             'efidisk0');
 }
 
+sub valid_drive_names_with_unused {
+    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
+}
+
 sub is_valid_drivename {
     my $dev = shift;
 
-- 
2.20.1






* [pve-devel] [RFC qemu-server 5/7] api: move-disk: add move to other VM
  2021-06-01 16:10 [pve-devel] [RFC series 0/7] move disk or volume to other guests Aaron Lauterer
                   ` (3 preceding siblings ...)
  2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 4/7] Drive: add valid_drive_names_with_unused Aaron Lauterer
@ 2021-06-01 16:10 ` Aaron Lauterer
  2021-06-02  8:52   ` Fabian Ebner
  2021-06-01 16:10 ` [pve-devel] [PATCH container 6/7] cli: pct: change move_volume to move-volume Aaron Lauterer
  2021-06-01 16:10 ` [pve-devel] [PATCH container 7/7] api: move-volume: add move to another container Aaron Lauterer
  6 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2021-06-01 16:10 UTC (permalink / raw)
  To: pve-devel

The goal of this is to expand the move-disk API endpoint to make it
possible to move a disk to another VM. Previously this was only possible
with manual intervention, either by renaming the VM disk or by manually
adding the disk's volid to the config of the other VM.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---

There are some big changes here. The old dedicated API endpoint [0] is
gone and most of its code is now part of move_disk. Error messages have
been changed accordingly and sometimes enhanced by adding disk keys and
VMIDs where appropriate.

Since a move to other guests should also be possible for unused disks,
we need an additional check in the move-to-storage path to make sure it
does not handle unused disks.

[0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047738.html
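
The resulting dispatch at the end of the endpoint roughly looks like
this (simplified sketch of the code in this patch):

    if ($target_vmid) {
        # reassign path: rename the volume and attach it to the target VM
    } elsif ($storeid) {
        die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
            if $disk =~ m/^unused\d+$/;
        # existing move-to-another-storage path
    } else {
        die "Provide either a 'storage' or 'target-vmid' and 'target-disk'\n";
    }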

 PVE/API2/Qemu.pm | 204 +++++++++++++++++++++++++++++++++++++++++++++--
 PVE/CLI/qm.pm    |   2 +-
 2 files changed, 200 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 24dba86..f1aee8d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
 use PVE::VZDump::Plugin;
 use PVE::DataCenterConfig;
 use PVE::SSHInfo;
+use PVE::Replication;
 
 BEGIN {
     if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3258,9 +3259,11 @@ __PACKAGE__->register_method({
     method => 'POST',
     protected => 1,
     proxyto => 'node',
-    description => "Move volume to different storage.",
+    description => "Move volume to different storage or to a different VM.",
     permissions => {
-	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
+	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
+	    "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
+	    "a disk to another VM, you need the permissions on the target VM as well.",
 	check => [ 'and',
 		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
 		   ['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
@@ -3271,14 +3274,19 @@ __PACKAGE__->register_method({
 	properties => {
 	    node => get_standard_option('pve-node'),
 	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    'target-vmid' => get_standard_option('pve-vmid', {
+		completion => \&PVE::QemuServer::complete_vmid,
+		optional => 1,
+	    }),
 	    disk => {
 	        type => 'string',
 		description => "The disk you want to move.",
-		enum => [PVE::QemuServer::Drive::valid_drive_names()],
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
 	    },
             storage => get_standard_option('pve-storage-id', {
 		description => "Target storage.",
 		completion => \&PVE::QemuServer::complete_storage,
+		optional => 1,
             }),
             'format' => {
                 type => 'string',
@@ -3305,6 +3313,18 @@ __PACKAGE__->register_method({
 		minimum => '0',
 		default => 'move limit from datacenter or storage config',
 	    },
+	    'target-disk' => {
+	        type => 'string',
+		description => "The config key the disk will be moved to on the target VM (for example, ide0 or scsi1).",
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+		optional => 1,
+	    },
+	    'target-digest' => {
+		type => 'string',
+		description => 'Prevent changes if current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
 	},
     },
     returns => {
@@ -3319,14 +3339,22 @@ __PACKAGE__->register_method({
 
 	my $node = extract_param($param, 'node');
 	my $vmid = extract_param($param, 'vmid');
+	my $target_vmid = extract_param($param, 'target-vmid');
 	my $digest = extract_param($param, 'digest');
+	my $target_digest = extract_param($param, 'target-digest');
 	my $disk = extract_param($param, 'disk');
+	my $target_disk = extract_param($param, 'target-disk');
 	my $storeid = extract_param($param, 'storage');
 	my $format = extract_param($param, 'format');
 
+	die "either set storage or target-vmid, but not both\n"
+	    if $storeid && $target_vmid;
+
+
 	my $storecfg = PVE::Storage::config();
+	my $source_volid;
 
-	my $updatefn =  sub {
+	my $move_updatefn =  sub {
 	    my $conf = PVE::QemuConfig->load_config($vmid);
 	    PVE::QemuConfig->check_lock($conf);
 
@@ -3436,7 +3464,173 @@ __PACKAGE__->register_method({
             return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
 	};
 
-	return PVE::QemuConfig->lock_config($vmid, $updatefn);
+	my $load_and_check_reassign_configs = sub {
+	    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+	    die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
+		if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+	    my $source_conf = PVE::QemuConfig->load_config($vmid);
+	    PVE::QemuConfig->check_lock($source_conf);
+	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+	    PVE::QemuConfig->check_lock($target_conf);
+
+	    die "Can't move disks from or to template VMs\n"
+		if ($source_conf->{template} || $target_conf->{template});
+
+	    if ($digest) {
+		eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${vmid}: ${err}";
+		}
+	    }
+
+	    if ($target_digest) {
+		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${target_vmid}: ${err}";
+		}
+	    }
+
+	    die "Disk '${disk}' for VM '$vmid' does not exist\n"
+		if !defined($source_conf->{$disk});
+
+	    die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
+		if exists $target_conf->{$target_disk};
+
+	    my $drive = PVE::QemuServer::parse_drive(
+		$disk,
+		$source_conf->{$disk},
+	    );
+	    $source_volid = $drive->{file};
+
+	    die "disk '${disk}' has no associated volume\n"
+		if !$source_volid;
+	    die "CD drive contents can't be moved to another VM\n"
+		if PVE::QemuServer::drive_is_cdrom($drive, 1);
+	    die "Can't move physical disk to another VM\n" if $drive->{file} =~ m|^/dev/|;
+	    die "Can't move disk used by a snapshot to another VM\n"
+		if PVE::QemuServer::Drive::is_volume_in_use(
+		    $storecfg,
+		    $source_conf,
+		    $disk,
+		    $source_volid,
+		);
+
+	    die "Storage does not support moving of this disk to another VM\n"
+		if !PVE::Storage::volume_has_feature(
+		    $storecfg,
+		    'rename',
+		    $source_volid,
+		);
+
+	    die "Cannot move disk to another VM while the source VM is running\n"
+		if PVE::QemuServer::check_running($vmid)
+		    && $disk !~ m/^unused\d+$/;
+
+	    if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
+		my $interface = $1;
+		my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
+		eval {
+		    PVE::JSONSchema::parse_property_string(
+			$desc->{format},
+			$source_conf->{$disk},
+		    )
+		};
+		if (my $err = $@) {
+		    die "Cannot move disk to another VM: ${err}";
+		}
+	    }
+
+	    return ($source_conf, $target_conf);
+	};
+
+	my $logfunc = sub {
+	    my ($msg) = @_;
+	    print STDERR "$msg\n";
+	};
+
+	my $disk_reassignfn = sub {
+	    return PVE::QemuConfig->lock_config($vmid, sub {
+		return PVE::QemuConfig->lock_config($target_vmid, sub {
+		    my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
+
+		    my $drive_param = PVE::QemuServer::parse_drive(
+			$target_disk,
+			$source_conf->{$disk},
+		    );
+
+		    print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
+		    my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
+
+		    my $target_volname = PVE::Storage::find_free_diskname($storecfg, $storeid, $target_vmid, $format);
+
+		    my $new_volid = PVE::Storage::rename_volume(
+			$storecfg,
+			$source_volid,
+			$vmid,
+			$target_volname,
+			$target_vmid,
+		    );
+
+		    $drive_param->{file} = $new_volid;
+
+		    delete $source_conf->{$disk};
+		    print "removing disk '${disk}' from VM '${vmid}' config\n";
+		    PVE::QemuConfig->write_config($vmid, $source_conf);
+
+		    my $drive_string = PVE::QemuServer::print_drive($drive_param);
+		    &$update_vm_api(
+			{
+			    node => $node,
+			    vmid => $target_vmid,
+			    digest => $target_digest,
+			    $target_disk => $drive_string,
+			},
+			1,
+		    );
+
+		    # remove possible replication snapshots
+		    if (PVE::Storage::volume_has_feature(
+			    $storecfg,
+			    'replicate',
+			    $source_volid),
+		    ) {
+			eval {
+			    PVE::Replication::prepare(
+				$storecfg,
+				[$new_volid],
+				undef,
+				1,
+				undef,
+				$logfunc,
+			    )
+			};
+			if (my $err = $@) {
+			    print "Failed to remove replication snapshots on moved disk '$target_disk'. Manual cleanup could be necessary.\n";
+			}
+		    }
+		});
+	    });
+	};
+
+	if ($target_vmid) {
+	    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+		if $authuser ne 'root@pam';
+
+	    die "Moving a disk to the same VM is not possible. Did you mean to move the disk to a different storage?\n"
+		if $vmid eq $target_vmid;
+
+	    &$load_and_check_reassign_configs();
+	    return $rpcenv->fork_worker('qmmove', "${vmid}-${disk}>${target_vmid}-${target_disk}", $authuser, $disk_reassignfn);
+	} elsif ($storeid) {
+	    die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
+		if $disk =~ m/^unused\d+$/;
+	    return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
+	} else {
+	    die "Provide either a 'storage' to move the disk to a different " .
+		"storage or 'target-vmid' and 'target-disk' to move the disk " .
+		"to another VM\n";
+	}
     }});
 
 my $check_vm_disks_local = sub {
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 60360c8..4b475f8 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -910,7 +910,7 @@ our $cmddef = {
 
     resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
 
-    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
     move_disk => { alias => 'move-disk' },
 
     unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
-- 
2.20.1






* [pve-devel] [PATCH container 6/7] cli: pct: change move_volume to move-volume
  2021-06-01 16:10 [pve-devel] [RFC series 0/7] move disk or volume to other guests Aaron Lauterer
                   ` (4 preceding siblings ...)
  2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 5/7] api: move-disk: add move to other VM Aaron Lauterer
@ 2021-06-01 16:10 ` Aaron Lauterer
  2021-06-01 16:10 ` [pve-devel] [PATCH container 7/7] api: move-volume: add move to another container Aaron Lauterer
  6 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2021-06-01 16:10 UTC (permalink / raw)
  To: pve-devel

Also add an alias to keep 'move_volume' working.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 src/PVE/CLI/pct.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 6b63915..0db23a4 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -854,7 +854,8 @@ our $cmddef = {
 
     clone => [ "PVE::API2::LXC", 'clone_vm', ['vmid', 'newid'], { node => $nodename }, $upid_exit ],
     migrate => [ "PVE::API2::LXC", 'migrate_vm', ['vmid', 'target'], { node => $nodename }, $upid_exit],
-    move_volume => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
+    'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
+    move_volume => { alias => 'move-volume' },
 
     snapshot => [ "PVE::API2::LXC::Snapshot", 'snapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
     delsnapshot => [ "PVE::API2::LXC::Snapshot", 'delsnapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
-- 
2.20.1






* [pve-devel] [PATCH container 7/7] api: move-volume: add move to another container
  2021-06-01 16:10 [pve-devel] [RFC series 0/7] move disk or volume to other guests Aaron Lauterer
                   ` (5 preceding siblings ...)
  2021-06-01 16:10 ` [pve-devel] [PATCH container 6/7] cli: pct: change move_volume to move-volume Aaron Lauterer
@ 2021-06-01 16:10 ` Aaron Lauterer
  6 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2021-06-01 16:10 UTC (permalink / raw)
  To: pve-devel

The goal of this is to expand the move-volume API endpoint to make it
possible to move a container volume / mount point to another container.

Currently it works for regular mount points, though it would be nice to
be able to do it for unused mount points as well.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
This is mostly the code from qemu-server with some adaptations, mainly
to error messages and some checks.

Previous checks have been moved to '$move_to_storage_checks'.
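
The reassign path roughly does the following (simplified sketch of the
code in this patch):

    # pick a free name on the same storage and rename the volume
    my $target_volname = PVE::Storage::find_free_diskname(
        $storecfg, $storage, $target_vmid, $format);
    my $new_volid = PVE::Storage::rename_volume(
        $storecfg, $source_volid, $vmid, $target_volname, $target_vmid);

    # then attach it to the target container and regenerate its config
    my $drive_string = PVE::LXC::Config->print_volume($target_mp, $drive_param);
    PVE::LXC::Config->update_pct_config(
        $target_vmid, $target_conf, $running, { $target_mp => $drive_string });
    PVE::LXC::Config->write_config($target_vmid, $target_conf);
    PVE::LXC::update_lxc_config($target_vmid, $target_conf);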


 src/PVE/API2/LXC.pm | 229 ++++++++++++++++++++++++++++++++++++++++----
 src/PVE/CLI/pct.pm  |   2 +-
 2 files changed, 209 insertions(+), 22 deletions(-)

diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 4965f5d..373dff1 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -27,6 +27,8 @@ use PVE::API2::LXC::Snapshot;
 use PVE::JSONSchema qw(get_standard_option);
 use base qw(PVE::RESTHandler);
 
 BEGIN {
     if (!$ENV{PVE_GENERATING_DOCS}) {
 	require PVE::HA::Env::PVE2;
@@ -1750,10 +1752,12 @@ __PACKAGE__->register_method({
     method => 'POST',
     protected => 1,
     proxyto => 'node',
-    description => "Move a rootfs-/mp-volume to a different storage",
+    description => "Move a rootfs-/mp-volume to a different storage or to a different container.",
     permissions => {
 	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
-	    "and 'Datastore.AllocateSpace' permissions on the storage.",
+	    "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
+	    "a volume to another container, you need the permissions on the ".
+	    "target container as well.",
 	check =>
 	[ 'and',
 	  ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
@@ -1765,14 +1769,20 @@ __PACKAGE__->register_method({
 	properties => {
 	    node => get_standard_option('pve-node'),
 	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::LXC::complete_ctid }),
+	    'target-vmid' => get_standard_option('pve-vmid', {
+		completion => \&PVE::LXC::complete_ctid,
+		optional => 1,
+	    }),
 	    volume => {
 		type => 'string',
+		#TODO: check how to handle unused mount points as the mp parameter is not configured
 		enum => [ PVE::LXC::Config->valid_volume_keys() ],
 		description => "Volume which will be moved.",
 	    },
 	    storage => get_standard_option('pve-storage-id', {
 		description => "Target Storage.",
 		completion => \&PVE::Storage::complete_storage_enabled,
+		optional => 1,
 	    }),
 	    delete => {
 		type => 'boolean',
@@ -1793,6 +1803,18 @@ __PACKAGE__->register_method({
 		minimum => '0',
 		default => 'clone limit from datacenter or storage config',
 	    },
+	    'target-mp' => {
+	        type => 'string',
+		description => "The config key the mp will be moved to.",
+		enum => [PVE::LXC::Config->valid_volume_keys()],
+		optional => 1,
+	    },
+	    'target-digest' => {
+		type => 'string',
+		description => 'Prevent changes if current configuration file of the target container has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
 	},
     },
     returns => {
@@ -1807,32 +1829,49 @@ __PACKAGE__->register_method({
 
 	my $vmid = extract_param($param, 'vmid');
 
+	my $target_vmid = extract_param($param, 'target-vmid');
+
 	my $storage = extract_param($param, 'storage');
 
 	my $mpkey = extract_param($param, 'volume');
 
+	my $target_mp = extract_param($param, 'target-mp');
+
+	my $digest = extract_param($param, 'digest');
+
+	my $target_digest = extract_param($param, 'target-digest');
+
 	my $lockname = 'disk';
 
 	my ($mpdata, $old_volid);
 
-	PVE::LXC::Config->lock_config($vmid, sub {
-	    my $conf = PVE::LXC::Config->load_config($vmid);
-	    PVE::LXC::Config->check_lock($conf);
+	die "either set storage or target-vmid, but not both\n"
+	    if $storage && $target_vmid;
 
-	    die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
+	die "cannot move volumes of a running container\n" if PVE::LXC::check_running($vmid);
 
-	    $mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
-	    $old_volid = $mpdata->{volume};
+	my $storecfg = PVE::Storage::config();
+	my $source_volid;
 
-	    die "you can't move a volume with snapshots and delete the source\n"
-		if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
+	my $move_to_storage_checks = sub {
+	    PVE::LXC::Config->lock_config($vmid, sub {
+		my $conf = PVE::LXC::Config->load_config($vmid);
+		PVE::LXC::Config->check_lock($conf);
 
-	    PVE::Tools::assert_if_modified($param->{digest}, $conf->{digest});
 
-	    PVE::LXC::Config->set_lock($vmid, $lockname);
-	});
+		$mpdata = PVE::LXC::Config->parse_volume($mpkey, $conf->{$mpkey});
+		$old_volid = $mpdata->{volume};
 
-	my $realcmd = sub {
+		die "you can't move a volume with snapshots and delete the source\n"
+		    if $param->{delete} && PVE::LXC::Config->is_volume_in_use_by_snapshots($conf, $old_volid);
+
+		PVE::Tools::assert_if_modified($digest, $conf->{digest});
+
+		PVE::LXC::Config->set_lock($vmid, $lockname);
+	    });
+	};
+
+	my $storage_realcmd = sub {
 	    eval {
 		PVE::Cluster::log_msg('info', $authuser, "move volume CT $vmid: move --volume $mpkey --storage $storage");
 
@@ -1902,15 +1941,163 @@ __PACKAGE__->register_method({
 	    warn $@ if $@;
 	    die $err if $err;
 	};
-	my $task = eval {
-	    $rpcenv->fork_worker('move_volume', $vmid, $authuser, $realcmd);
+
+	my $load_and_check_reassign_configs = sub {
+	    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+	    die "Both containers need to be on the same node ($vmlist->{$vmid}->{node}) but target container is on $vmlist->{$target_vmid}->{node}.\n"
+		if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+	    my $source_conf = PVE::LXC::Config->load_config($vmid);
+	    PVE::LXC::Config->check_lock($source_conf);
+	    my $target_conf = PVE::LXC::Config->load_config($target_vmid);
+	    PVE::LXC::Config->check_lock($target_conf);
+
+	    die "Can't move volumes from or to template containers\n"
+		if ($source_conf->{template} || $target_conf->{template});
+
+	    if ($digest) {
+		eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
+		if (my $err = $@) {
+		    die "Container ${vmid}: ${err}";
+		}
+	    }
+
+	    if ($target_digest) {
+		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+		if (my $err = $@) {
+		    die "Container ${target_vmid}: ${err}";
+		}
+	    }
+
+	    die "volume '${mpkey}' for container '$vmid' does not exist\n"
+		if !defined($source_conf->{$mpkey});
+
+	    die "Target volume key '${target_mp}' is already in use for container '$target_vmid'\n"
+		if exists $target_conf->{$target_mp};
+
+	    my $drive = PVE::LXC::Config->parse_volume(
+		$mpkey,
+		$source_conf->{$mpkey},
+	    );
+
+	    $source_volid = $drive->{volume};
+
+	    die "disk '${mpkey}' has no associated volume\n"
+		if !$source_volid;
+
+	    die "Storage does not support moving of this disk to another container\n"
+		if !PVE::Storage::volume_has_feature(
+		    $storecfg,
+		    'rename',
+		    $source_volid,
+		);
+
+	    die "Cannot move a bind mount or device mount to another container\n"
+		if PVE::LXC::Config->classify_mountpoint($source_volid) ne "volume";
+	    die "Cannot move volume to another container while the source container is running\n"
+		if PVE::LXC::check_running($vmid)
+		    && $mpkey !~ m/^unused\d+$/;
+
+	    return ($source_conf, $target_conf);
+	};
+
+	my $logfunc = sub {
+	    my ($msg) = @_;
+	    print STDERR "$msg\n";
 	};
-	if (my $err = $@) {
-	    eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
-	    warn $@ if $@;
-	    die $err;
+
+	my $volume_reassignfn = sub {
+	    return PVE::LXC::Config->lock_config($vmid, sub {
+		return PVE::LXC::Config->lock_config($target_vmid, sub {
+		    my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
+
+		    my $drive_param = PVE::LXC::Config->parse_volume(
+			$target_mp,
+			$source_conf->{$mpkey},
+		    );
+
+		    print "moving volume '$mpkey' from container '$vmid' to '$target_vmid'\n";
+		    my ($storage, undef) = PVE::Storage::parse_volume_id($source_volid);
+
+		    my $format = PVE::Storage::storage_default_format($storecfg, $storage);
+
+		    my $target_volname = PVE::Storage::find_free_diskname($storecfg, $storage, $target_vmid, $format);
+
+		    my $new_volid = PVE::Storage::rename_volume(
+			$storecfg,
+			$source_volid,
+			$vmid,
+			$target_volname,
+			$target_vmid,
+		    );
+
+		    $drive_param->{volume} = $new_volid;
+
+		    delete $source_conf->{$mpkey};
+		    print "removing volume '${mpkey}' from container '${vmid}' config\n";
+		    print "$new_volid\n";
+		    PVE::LXC::Config->write_config($vmid, $source_conf);
+
+		    my $drive_string = PVE::LXC::Config->print_volume($target_mp, $drive_param);
+		    my $running = PVE::LXC::check_running($target_vmid);
+		    my $param = { $target_mp => $drive_string };
+
+		    my $err = Dumper(PVE::LXC::Config->update_pct_config($target_vmid, $target_conf, $running, $param));
+
+		    PVE::LXC::Config->write_config($target_vmid, $target_conf);
+		    $target_conf = PVE::LXC::Config->load_config($target_vmid);
+
+		    PVE::LXC::update_lxc_config($target_vmid, $target_conf);
+		    print "target container '$target_vmid' updated with '$target_mp'\n";
+
+		    # remove possible replication snapshots
+		    if (PVE::Storage::volume_has_feature(
+			    $storecfg,
+			    'replicate',
+			    $source_volid),
+		    ) {
+			eval {
+			    PVE::Replication::prepare(
+				$storecfg,
+				[$new_volid],
+				undef,
+				1,
+				undef,
+				$logfunc,
+			    )
+			};
+			if (my $err = $@) {
+			    print "Failed to remove replication snapshots on volume '$target_mp'. Manual cleanup could be necessary.\n";
+			}
+		    }
+		});
+	    });
+	};
+
+	if ($target_vmid) {
+	    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+		if $authuser ne 'root@pam';
+
+	    die "Moving a volume to the same container is not possible. Did you mean to move the volume to a different storage?\n"
+		if $vmid eq $target_vmid;
+	    &$load_and_check_reassign_configs();
+	    return $rpcenv->fork_worker('move_volume', "${vmid}-${mpkey}>${target_vmid}-${target_mp}", $authuser, $volume_reassignfn);
+	} elsif ($storage) {
+	    &$move_to_storage_checks();
+	    my $task = eval {
+		$rpcenv->fork_worker('move_volume', $vmid, $authuser, $storage_realcmd);
+	    };
+	    if (my $err = $@) {
+		eval { PVE::LXC::Config->remove_lock($vmid, $lockname) };
+		warn $@ if $@;
+		die $err;
+	    }
+	    return $task;
+	} else {
+	    die "Provide either a 'storage' to move the mount point to a ".
+		"different storage or 'target-vmid' and 'target-mp' to move ".
+		"the mount point to another container\n";
 	}
-	return $task;
   }});
 
 __PACKAGE__->register_method({
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 0db23a4..de2d4e2 100755
--- a/src/PVE/CLI/pct.pm
+++ b/src/PVE/CLI/pct.pm
@@ -854,7 +854,7 @@ our $cmddef = {
 
     clone => [ "PVE::API2::LXC", 'clone_vm', ['vmid', 'newid'], { node => $nodename }, $upid_exit ],
     migrate => [ "PVE::API2::LXC", 'migrate_vm', ['vmid', 'target'], { node => $nodename }, $upid_exit],
-    'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage'], { node => $nodename }, $upid_exit ],
+    'move-volume' => [ "PVE::API2::LXC", 'move_volume', ['vmid', 'volume', 'storage', 'target-vmid', 'target-mp'], { node => $nodename }, $upid_exit ],
     move_volume => { alias => 'move-volume' },
 
     snapshot => [ "PVE::API2::LXC::Snapshot", 'snapshot', ['vmid', 'snapname'], { node => $nodename } , $upid_exit ],
-- 
2.20.1






* Re: [pve-devel] [RFC storage 1/7] storage: expose find_free_diskname
  2021-06-01 16:10 ` [pve-devel] [RFC storage 1/7] storage: expose find_free_diskname Aaron Lauterer
@ 2021-06-02  8:29   ` Fabian Ebner
  2021-07-02 13:38     ` Aaron Lauterer
  0 siblings, 1 reply; 14+ messages in thread
From: Fabian Ebner @ 2021-06-02  8:29 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

On 01.06.21 at 18:10, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>   PVE/Storage.pm | 8 ++++++++
>   1 file changed, 8 insertions(+)
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index aa36bad..93d09ce 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -201,6 +201,14 @@ sub storage_can_replicate {
>       return $plugin->storage_can_replicate($scfg, $storeid, $format);
>   }
>   
> +sub find_free_diskname {
> +    my ($cfg, $storeid, $vmid, $fmt, $add_fmt_suffix) = @_;

Nit: Ideally, the $add_fmt_suffix should be decided by the plugin, as an 
outside caller cannot know what a plugin wants/expects. Don't know if 
that's easy to do the way things are though.
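
One possible reading of this (just a sketch, not part of this series;
'wants_fmt_suffix' is a hypothetical plugin method used only for
illustration):

    sub find_free_diskname {
        my ($cfg, $storeid, $vmid, $fmt) = @_;

        my $scfg = storage_config($cfg, $storeid);
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
        # hypothetical: the plugin decides whether it wants the suffix
        my $add_fmt_suffix = $plugin->wants_fmt_suffix($scfg);
        return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt, $add_fmt_suffix);
    }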

> +
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +    return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt, $add_fmt_suffix);
> +}
> +
>   sub storage_ids {
>       my ($cfg) = @_;
>   
> 





* Re: [pve-devel] [RFC storage 2/7] add disk rename feature
  2021-06-01 16:10 ` [pve-devel] [RFC storage 2/7] add disk rename feature Aaron Lauterer
@ 2021-06-02  8:36   ` Fabian Ebner
  2021-06-09 14:20     ` Aaron Lauterer
  0 siblings, 1 reply; 14+ messages in thread
From: Fabian Ebner @ 2021-06-02  8:36 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

On 01.06.21 at 18:10, Aaron Lauterer wrote:
> Functionality has been added for the following storage types:
> 
> * dir based ones
>      * directory
>      * NFS
>      * CIFS
>      * gluster
> * ZFS
> * (thin) LVM
> * Ceph
> 
> A new feature `rename` has been introduced to mark which storage
> plugin supports the feature.
> 
> Version API and AGE have been bumped.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> changes:
> only do the rename now; the rename function handles templates and returns
> the new volid, as this can be handled differently on some storages.
> 
>   PVE/Storage.pm               | 19 +++++++++++++++++--
>   PVE/Storage/LVMPlugin.pm     | 18 ++++++++++++++++++
>   PVE/Storage/LvmThinPlugin.pm |  1 +
>   PVE/Storage/Plugin.pm        | 29 +++++++++++++++++++++++++++++
>   PVE/Storage/RBDPlugin.pm     | 16 ++++++++++++++++
>   PVE/Storage/ZFSPoolPlugin.pm | 12 ++++++++++++
>   6 files changed, 93 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index 93d09ce..6936abd 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -40,11 +40,11 @@ use PVE::Storage::ZFSPlugin;
>   use PVE::Storage::PBSPlugin;
>   
>   # Storage API version. Increment it on changes in storage API interface.
> -use constant APIVER => 8;
> +use constant APIVER => 9;
>   # Age is the number of versions we're backward compatible with.
>   # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>   # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
> -use constant APIAGE => 7;
> +use constant APIAGE => 8;
>   
>   # load standard plugins
>   PVE::Storage::DirPlugin->register();
> @@ -355,6 +355,7 @@ sub volume_snapshot_needs_fsfreeze {
>   #            snapshot - taking a snapshot is possible
>   #            sparseinit - volume is sparsely initialized
>   #            template - conversion to base image is possible
> +#            rename - renaming volumes is possible
>   # $snap - check if the feature is supported for a given snapshot
>   # $running - if the guest owning the volume is running
>   # $opts - hash with further options:
> @@ -1849,6 +1850,20 @@ sub complete_volume {
>       return $res;
>   }
>   
> +sub rename_volume {
> +    my ($cfg, $source_volid, $source_vmid, $target_volname, $target_vmid) = @_;

Can't the vmid arguments be dropped and extracted directly from the 
volid/volname instead? Would prevent potential mismatch for careless 
callers.
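
A slimmer signature along those lines could look like this (sketch only,
not part of this series):

    sub rename_volume {
        my ($cfg, $source_volid, $target_vmid, $target_volname) = @_;

        my ($storeid) = parse_volume_id($source_volid);
        # the source vmid and base information come from the volid itself
        my (undef, $source_volname, $source_vmid, $base_name, $base_vmid) =
            parse_volname($cfg, $source_volid);
        # ... rest as in the current implementation
    }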

> +
> +    my ($storeid) = parse_volume_id($source_volid);
> +    my (undef, $source_volname, undef, $base_name, $base_vmid, $isBase, $format) = parse_volname($cfg, $source_volid);
> +
> +    activate_storage($cfg, $storeid);
> +
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +

Check that target_volname is valid (by parsing) either here or within 
each plugin's implementation?

> +    return $plugin->rename_volume($scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid);

Similar here, not all plugins need all of those parameters. Isn't taking
$scfg, $storeid, $source_volid, $target_volname enough, letting the
plugin itself extract additional information if needed?

> +}
> +
>   # Various io-heavy operations require io/bandwidth limits which can be
>   # configured on multiple levels: The global defaults in datacenter.cfg, and
>   # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
> index df49b76..977e6a4 100644
> --- a/PVE/Storage/LVMPlugin.pm
> +++ b/PVE/Storage/LVMPlugin.pm
> @@ -339,6 +339,16 @@ sub lvcreate {
>       run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
>   }
>   
> +sub lvrename {
> +    my ($vg, $oldname, $newname) = @_;
> +
> +    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
> +    run_command(
> +	['/sbin/lvrename', $vg, $oldname, $newname],
> +	errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error"
> +    );
> +}
> +
>   sub alloc_image {
>       my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>   
> @@ -584,6 +594,7 @@ sub volume_has_feature {
>   
>       my $features = {
>   	copy => { base => 1, current => 1},
> +	rename => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -684,4 +695,11 @@ sub volume_import_write {
>   	input => '<&'.fileno($input_fh));
>   }
>   
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
> +
> +    lvrename($scfg->{vgname}, $source_volname, $target_volname);
> +    return "${storeid}:${target_volname}";
> +}
> +
>   1;
> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
> index c9e127c..45ad0ad 100644
> --- a/PVE/Storage/LvmThinPlugin.pm
> +++ b/PVE/Storage/LvmThinPlugin.pm
> @@ -355,6 +355,7 @@ sub volume_has_feature {
>   	template => { current => 1},
>   	copy => { base => 1, current => 1, snap => 1},
>   	sparseinit => { base => 1, current => 1},
> +	rename => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index 4a10a1f..4e6e288 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -939,6 +939,7 @@ sub volume_has_feature {
>   		  snap => {qcow2 => 1} },
>   	sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
>   			current => {qcow2 => 1, raw => 1, vmdk => 1} },
> +	rename => { current =>{qcow2 => 1, raw => 1, vmdk => 1} },
>       };
>   
>       # clone_image creates a qcow2 volume
> @@ -946,6 +947,10 @@ sub volume_has_feature {
>   		defined($opts->{valid_target_formats}) &&
>   		!(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
>   
> +    return 0 if $feature eq 'rename'
> +	&& $class->can('api')
> +	&& $class->api() < 9;

Style-nit: multiline post-if

> +
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
>   	$class->parse_volname($volname);
>   
> @@ -1463,4 +1468,28 @@ sub volume_import_formats {
>       return ();
>   }
>   
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
> +    die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 9;
> +
> +    my $basedir = $class->get_subdir($scfg, 'images');
> +    my $sourcedir = "${basedir}/${source_vmid}";
> +    my $targetdir = "${basedir}/${target_vmid}";
> +
> +    mkpath $targetdir;
> +
> +    my (undef, $format, undef) = parse_name_dir($source_volname);
> +    $target_volname = "${target_volname}.${format}";

Ideally, the $target_volname already contains the format suffix at this 
point. Otherwise, passing a $target_volname with a format suffix results 
in a second suffix.

> +
> +    my $old_path = "${sourcedir}/${source_volname}";
> +    my $new_path = "${targetdir}/${target_volname}";
> +
> +    my $base = $base_name ? "${base_vmid}/${base_name}/" : '';
> +
> +    rename($old_path, $new_path) ||
> +	die "rename '$old_path' to '$new_path' failed - $!\n";
> +
> +    return "${storeid}:${base}${target_vmid}/${target_volname}";
> +}
> +
>   1;
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index a8d1243..6e9bf8f 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -728,6 +728,7 @@ sub volume_has_feature {
>   	template => { current => 1},
>   	copy => { base => 1, current => 1, snap => 1},
>   	sparseinit => { base => 1, current => 1},
> +	rename => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
> @@ -743,4 +744,19 @@ sub volume_has_feature {
>       return undef;
>   }
>   
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
> +
> +    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
> +
> +    run_rbd_command(
> +	$cmd,
> +	errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
> +    );
> +
> +    $base_name = $base_name ? "${base_name}/" : '';
> +
> +    return "${storeid}:${base_name}${target_volname}";
> +}
> +
>   1;
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index 2e2abe3..f04f443 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -687,6 +687,7 @@ sub volume_has_feature {
>   	copy => { base => 1, current => 1},
>   	sparseinit => { base => 1, current => 1},
>   	replicate => { base => 1, current => 1},
> +	rename => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -789,4 +790,15 @@ sub volume_import_formats {
>       return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
>   }
>   
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
> +
> +    my $pool = $scfg->{pool};
> +    $class->zfs_request($scfg, 5, 'rename', "${pool}/$source_volname", "${pool}/$target_volname");
> +
> +    $base_name = $base_name ? "${base_name}/" : '';
> +
> +    return "${storeid}:${base_name}${target_volname}";
> +}
> +
>   1;
> 





* Re: [pve-devel] [RFC qemu-server 5/7] api: move-disk: add move to other VM
  2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 5/7] api: move-disk: add move to other VM Aaron Lauterer
@ 2021-06-02  8:52   ` Fabian Ebner
  0 siblings, 0 replies; 14+ messages in thread
From: Fabian Ebner @ 2021-06-02  8:52 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

On 01.06.21 at 18:10, Aaron Lauterer wrote:
> The goal of this is to expand the move-disk API endpoint to make it
> possible to move a disk to another VM. Previously this was only possible
> with manual intervention, either by renaming the VM disk or by manually
> adding the disk's volid to the config of the other VM.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> 
> There are some big changes here. The old dedicated API endpoint [0] is
> gone and most of its code is now part of move_disk. Error messages have
> been changed accordingly and sometimes enhanced by adding disk keys and
> VMIDs where appropriate.
> 
> Since a move to other guests should also be possible for unused disks,
> we need an additional check in the move-to-storage path to make sure it
> does not handle unused disks.
> 
> [0] https://lists.proxmox.com/pipermail/pve-devel/2021-April/047738.html
> 
>   PVE/API2/Qemu.pm | 204 +++++++++++++++++++++++++++++++++++++++++++++--
>   PVE/CLI/qm.pm    |   2 +-
>   2 files changed, 200 insertions(+), 6 deletions(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 24dba86..f1aee8d 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>   use PVE::VZDump::Plugin;
>   use PVE::DataCenterConfig;
>   use PVE::SSHInfo;
> +use PVE::Replication;
>   
>   BEGIN {
>       if (!$ENV{PVE_GENERATING_DOCS}) {
> @@ -3258,9 +3259,11 @@ __PACKAGE__->register_method({
>       method => 'POST',
>       protected => 1,
>       proxyto => 'node',
> -    description => "Move volume to different storage.",
> +    description => "Move volume to different storage or to a different VM.",
>       permissions => {
> -	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
> +	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
> +	    "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
> +	    "a disk to another VM, you need the permissions on the target VM as well.",
>   	check => [ 'and',
>   		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>   		   ['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
> @@ -3271,14 +3274,19 @@ __PACKAGE__->register_method({
>   	properties => {
>   	    node => get_standard_option('pve-node'),
>   	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    'target-vmid' => get_standard_option('pve-vmid', {
> +		completion => \&PVE::QemuServer::complete_vmid,
> +		optional => 1,
> +	    }),
>   	    disk => {
>   	        type => 'string',
>   		description => "The disk you want to move.",
> -		enum => [PVE::QemuServer::Drive::valid_drive_names()],
> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>   	    },
>               storage => get_standard_option('pve-storage-id', {
>   		description => "Target storage.",
>   		completion => \&PVE::QemuServer::complete_storage,
> +		optional => 1,
>               }),
>               'format' => {
>                   type => 'string',
> @@ -3305,6 +3313,18 @@ __PACKAGE__->register_method({
>   		minimum => '0',
>   		default => 'move limit from datacenter or storage config',
>   	    },
> +	    'target-disk' => {
> +	        type => 'string',
> +		description => "The config key the disk will be moved to on the target VM (for example, ide0 or scsi1).",
> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> +		optional => 1,
> +	    },
> +	    'target-digest' => {
> +		type => 'string',
> +		description => 'Prevent changes if current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
>   	},
>       },
>       returns => {
> @@ -3319,14 +3339,22 @@ __PACKAGE__->register_method({
>   
>   	my $node = extract_param($param, 'node');
>   	my $vmid = extract_param($param, 'vmid');
> +	my $target_vmid = extract_param($param, 'target-vmid');
>   	my $digest = extract_param($param, 'digest');
> +	my $target_digest = extract_param($param, 'target-digest');
>   	my $disk = extract_param($param, 'disk');
> +	my $target_disk = extract_param($param, 'target-disk');
>   	my $storeid = extract_param($param, 'storage');
>   	my $format = extract_param($param, 'format');
>   
> +	die "either set storage or target-vmid, but not both\n"
> +	    if $storeid && $target_vmid;
> +
> +
>   	my $storecfg = PVE::Storage::config();
> +	my $source_volid;
>   
> -	my $updatefn =  sub {
> +	my $move_updatefn =  sub {
>   	    my $conf = PVE::QemuConfig->load_config($vmid);
>   	    PVE::QemuConfig->check_lock($conf);
>   
> @@ -3436,7 +3464,173 @@ __PACKAGE__->register_method({
>               return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
>   	};
>   
> -	return PVE::QemuConfig->lock_config($vmid, $updatefn);
> +	my $load_and_check_reassign_configs = sub {
> +	    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
> +	    die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
> +		if $vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node};
> +

Style nit: long line
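
For reference, something along these lines would keep it short (untested
sketch, just to illustrate one possible wrapping):

    my $source_node = $vmlist->{$vmid}->{node};
    my $target_node = $vmlist->{$target_vmid}->{node};
    if ($source_node ne $target_node) {
        die "Both VMs need to be on the same node ($source_node), "
            . "but target VM is on $target_node.\n";
    }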

> +	    my $source_conf = PVE::QemuConfig->load_config($vmid);
> +	    PVE::QemuConfig->check_lock($source_conf);
> +	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> +	    PVE::QemuConfig->check_lock($target_conf);
> +
> +	    die "Can't move disks from or to template VMs\n"
> +		if ($source_conf->{template} || $target_conf->{template});
> +
> +	    if ($digest) {
> +		eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "VM ${vmid}: ${err}";
> +		}
> +	    }
> +
> +	    if ($target_digest) {
> +		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "VM ${target_vmid}: ${err}";
> +		}
> +	    }
> +
> +	    die "Disk '${disk}' for VM '$vmid' does not exist\n"
> +		if !defined($source_conf->{$disk});
> +
> +	    die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
> +		if exists $target_conf->{$target_disk};
> +
> +	    my $drive = PVE::QemuServer::parse_drive(
> +		$disk,
> +		$source_conf->{$disk},
> +	    );
> +	    $source_volid = $drive->{file};
> +
> +	    die "disk '${disk}' has no associated volume\n"
> +		if !$source_volid;
> +	    die "CD drive contents can't be moved to another VM\n"
> +		if PVE::QemuServer::drive_is_cdrom($drive, 1);
> +	    die "Can't move  physical disk to another VM\n" if $drive->{file} =~ m|^/dev/|;
> +	    die "Can't move disk used by a snapshot to another VM\n"
> +		if PVE::QemuServer::Drive::is_volume_in_use(
> +		    $storecfg,
> +		    $source_conf,
> +		    $disk,
> +		    $source_volid,
> +		);
> +
> +	    die "Storage does not support moving of this disk to another VM\n"
> +		if !PVE::Storage::volume_has_feature(
> +		    $storecfg,
> +		    'rename',
> +		    $source_volid,
> +		);
> +
> +	    die "Cannot move disk to another VM while the source VM is running\n"
> +		if PVE::QemuServer::check_running($vmid)
> +		    && $disk !~ m/^unused\d+$/;
> +

Style nit: multiline post-ifs
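
I.e. rather a plain block, roughly (sketch only):

    if (PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/) {
        die "Cannot move disk to another VM while the source VM is running\n";
    }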

> +	    if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
> +		my $interface = $1;
> +		my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
> +		eval {
> +		    PVE::JSONSchema::parse_property_string(
> +			$desc->{format},
> +			$source_conf->{$disk},
> +		    )
> +		};
> +		if (my $err = $@) {
> +		    die "Cannot move disk to another VM: ${err}";
> +		}
> +	    }
> +
> +	    return ($source_conf, $target_conf);
> +	};
> +
> +	my $logfunc = sub {
> +	    my ($msg) = @_;
> +	    print STDERR "$msg\n";
> +	};
> +
> +	my $disk_reassignfn = sub {
> +	    return PVE::QemuConfig->lock_config($vmid, sub {
> +		return PVE::QemuConfig->lock_config($target_vmid, sub {
> +		    my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
> +
> +		    my $drive_param = PVE::QemuServer::parse_drive(
> +			$target_disk,
> +			$source_conf->{$disk},
> +		    );
> +
> +		    print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
> +		    my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
> +
> +		    my $target_volname = PVE::Storage::find_free_diskname($storecfg, $storeid, $target_vmid, $format);

The disk name obtained here might not be free anymore by the time the
rename happens. Possible fix: have a storage lock within
PVE::Storage::rename_volume and have each plugin's implementation check
that the name is actually free to use. This is similar to volume_import,
but there no lock is needed, as we can check that the allocated name
matches the expected one (not possible here, so a lock is needed AFAICT).
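
A rough, untested sketch of what that could look like for the wrapper in
PVE/Storage.pm, assuming the cluster_lock_storage() helper from the plugin
base class can be reused here (the actual free-name check would then live
in each plugin's rename_volume):

    sub rename_volume {
        my ($cfg, $source_volid, $source_vmid, $target_volname, $target_vmid) = @_;

        my ($storeid) = parse_volume_id($source_volid);
        my (undef, $source_volname, undef, $base_name, $base_vmid, $isBase, $format) =
            parse_volname($cfg, $source_volid);

        activate_storage($cfg, $storeid);

        my $scfg = storage_config($cfg, $storeid);
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});

        # hold the per-storage lock so the name picked by find_free_diskname
        # cannot be taken by a concurrent allocation in the meantime; each
        # plugin's rename_volume would additionally verify that
        # $target_volname is still unused
        return $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
            return $plugin->rename_volume($scfg, $storeid, $source_volname, $source_vmid,
                $target_volname, $target_vmid, $base_name, $base_vmid);
        });
    }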

> +
> +		    my $new_volid = PVE::Storage::rename_volume(
> +			$storecfg,
> +			$source_volid,
> +			$vmid,
> +			$target_volname,
> +			$target_vmid,
> +		    );
> +
> +		    $drive_param->{file} = $new_volid;
> +
> +		    delete $source_conf->{$disk};
> +		    print "removing disk '${disk}' from VM '${vmid}' config\n";
> +		    PVE::QemuConfig->write_config($vmid, $source_conf);
> +
> +		    my $drive_string = PVE::QemuServer::print_drive($drive_param);
> +		    &$update_vm_api(
> +			{
> +			    node => $node,
> +			    vmid => $target_vmid,
> +			    digest => $target_digest,
> +			    $target_disk => $drive_string,
> +			},
> +			1,
> +		    );
> +
> +		    # remove possible replication snapshots
> +		    if (PVE::Storage::volume_has_feature(
> +			    $storecfg,
> +			    'replicate',
> +			    $source_volid),
> +		    ) {
> +			eval {
> +			    PVE::Replication::prepare(
> +				$storecfg,
> +				[$new_volid],
> +				undef,
> +				1,
> +				undef,
> +				$logfunc,
> +			    )
> +			};
> +			if (my $err = $@) {
> +			    print "Failed to remove replication snapshots on moved disk '$target_disk'. Manual cleanup could be necessary.\n";
> +			}
> +		    }
> +		});
> +	    });
> +	};
> +
> +	if ($target_vmid) {
> +	    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
> +		if $authuser ne 'root@pam';
> +
> +	    die "Moving a disk to the same VM is not possible. Did you mean to move the disk to a different storage?\n"
> +		if $vmid eq $target_vmid;
> +
> +	    &$load_and_check_reassign_configs();
> +	    return $rpcenv->fork_worker('qmmove', "${vmid}-${disk}>${target_vmid}-${target_disk}", $authuser, $disk_reassignfn);
> +	} elsif ($storeid) {
> +	    die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
> +		if $disk =~ m/^unused\d+$/;
> +	    return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
> +	} else {
> +	    die "Provide either a 'storage' to move the disk to a different " .
> +		"storage or 'target-vmid' and 'target-disk' to move the disk " .
> +		"to another VM\n";
> +	}
>       }});
>   
>   my $check_vm_disks_local = sub {
> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index 60360c8..4b475f8 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -910,7 +910,7 @@ our $cmddef = {
>   
>       resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
>   
> -    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
> +    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
>       move_disk => { alias => 'move-disk' },
>   
>       unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
> 





* Re: [pve-devel] [RFC storage 2/7] add disk rename feature
  2021-06-02  8:36   ` Fabian Ebner
@ 2021-06-09 14:20     ` Aaron Lauterer
  0 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2021-06-09 14:20 UTC (permalink / raw)
  To: Fabian Ebner, pve-devel

Thanks for the review :)

On 6/2/21 10:36 AM, Fabian Ebner wrote:
> On 6/1/21 6:10 PM, Aaron Lauterer wrote:
>> Functionality has been added for the following storage types:
>>
>>
>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>> index 93d09ce..6936abd 100755
>> --- a/PVE/Storage.pm
>> +++ b/PVE/Storage.pm
>> @@ -40,11 +40,11 @@ use PVE::Storage::ZFSPlugin;
>>   use PVE::Storage::PBSPlugin;
>>   # Storage API version. Increment it on changes in storage API interface.
>> -use constant APIVER => 8;
>> +use constant APIVER => 9;
>>   # Age is the number of versions we're backward compatible with.
>>   # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>>   # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
>> -use constant APIAGE => 7;
>> +use constant APIAGE => 8;
>>   # load standard plugins
>>   PVE::Storage::DirPlugin->register();
>> @@ -355,6 +355,7 @@ sub volume_snapshot_needs_fsfreeze {
>>   #            snapshot - taking a snapshot is possible
>>   #            sparseinit - volume is sparsely initialized
>>   #            template - conversion to base image is possible
>> +#            rename - renaming volumes is possible
>>   # $snap - check if the feature is supported for a given snapshot
>>   # $running - if the guest owning the volume is running
>>   # $opts - hash with further options:
>> @@ -1849,6 +1850,20 @@ sub complete_volume {
>>       return $res;
>>   }
>> +sub rename_volume {
>> +    my ($cfg, $source_volid, $source_vmid, $target_volname, $target_vmid) = @_;
> 
> Can't the vmid arguments be dropped and extracted directly from the volid/volname instead? Would prevent potential mismatch for careless callers.

Good point
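
E.g. roughly (untested sketch, based on the parse_volname return values
used in the hunk above; it assumes the plugin's parse_volname can also be
applied to the target volname):

    my (undef, undef, $source_vmid) = parse_volname($cfg, $source_volid);
    my (undef, undef, $target_vmid) = $plugin->parse_volname($target_volname);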

> 
>> +
>> +    my ($storeid) = parse_volume_id($source_volid);
>> +    my (undef, $source_volname, undef, $base_name, $base_vmid, $isBase, $format) = parse_volname($cfg, $source_volid);
>> +
>> +    activate_storage($cfg, $storeid);
>> +
>> +    my $scfg = storage_config($cfg, $storeid);
>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +
> 
> Check that target_volname is valid (by parsing) either here or within each plugin's implementation?

Will do

> 
>> +    return $plugin->rename_volume($scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid);
> 
> Similar here: not all plugins need all of those parameters. Isn't taking $scfg, $storeid, $source_volid, $target_volname enough, letting the plugin itself extract additional information if needed?

I'll look into it

> 
>> +}
>> +

>> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
>> index 4a10a1f..4e6e288 100644
>> --- a/PVE/Storage/Plugin.pm
>> +++ b/PVE/Storage/Plugin.pm
>> @@ -939,6 +939,7 @@ sub volume_has_feature {
>>             snap => {qcow2 => 1} },
>>       sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
>>               current => {qcow2 => 1, raw => 1, vmdk => 1} },
>> +    rename => { current =>{qcow2 => 1, raw => 1, vmdk => 1} },
>>       };
>>       # clone_image creates a qcow2 volume
>> @@ -946,6 +947,10 @@ sub volume_has_feature {
>>           defined($opts->{valid_target_formats}) &&
>>           !(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
>> +    return 0 if $feature eq 'rename'
>> +    && $class->can('api')
>> +    && $class->api() < 9;
> 
> Style-nit: multiline post-if
> 
>> +
>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
>>       $class->parse_volname($volname);
>> @@ -1463,4 +1468,28 @@ sub volume_import_formats {
>>       return ();
>>   }
>> +sub rename_volume {
>> +    my ($class, $scfg, $storeid, $source_volname, $source_vmid, $target_volname, $target_vmid, $base_name, $base_vmid) = @_;
>> +    die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 9;
>> +
>> +    my $basedir = $class->get_subdir($scfg, 'images');
>> +    my $sourcedir = "${basedir}/${source_vmid}";
>> +    my $targetdir = "${basedir}/${target_vmid}";
>> +
>> +    mkpath $targetdir;
>> +
>> +    my (undef, $format, undef) = parse_name_dir($source_volname);
>> +    $target_volname = "${target_volname}.${format}";
> 
> Ideally, the $target_volname already contains the format suffix at this point. Otherwise, passing a $target_volname with a format suffix results in a second suffix.
> 

I need to look into that as well and do a few tests.
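
One possible way to guard against the double suffix in the dir-based
implementation would be to strip a passed-in suffix and always re-append
the source format, roughly (untested sketch, format list only for
illustration):

    my (undef, $format, undef) = parse_name_dir($source_volname);

    # strip a possibly passed-in format suffix and always use the source
    # format, since the format should not change on a rename anyway
    $target_volname =~ s/\.(raw|qcow2|vmdk)$//;
    $target_volname = "${target_volname}.${format}";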





* Re: [pve-devel] [RFC storage 1/7] storage: expose find_free_diskname
  2021-06-02  8:29   ` Fabian Ebner
@ 2021-07-02 13:38     ` Aaron Lauterer
  2021-07-05  7:58       ` Fabian Ebner
  0 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2021-07-02 13:38 UTC (permalink / raw)
  To: Fabian Ebner, pve-devel



On 6/2/21 10:29 AM, Fabian Ebner wrote:
> On 6/1/21 6:10 PM, Aaron Lauterer wrote:
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>>   PVE/Storage.pm | 8 ++++++++
>>   1 file changed, 8 insertions(+)
>>
>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>> index aa36bad..93d09ce 100755
>> --- a/PVE/Storage.pm
>> +++ b/PVE/Storage.pm
>> @@ -201,6 +201,14 @@ sub storage_can_replicate {
>>       return $plugin->storage_can_replicate($scfg, $storeid, $format);
>>   }
>> +sub find_free_diskname {
>> +    my ($cfg, $storeid, $vmid, $fmt, $add_fmt_suffix) = @_;
> 
> Nit: Ideally, the $add_fmt_suffix should be decided by the plugin, as an outside caller cannot know what a plugin wants/expects. Don't know if that's easy to do the way things are though.

Yeah, I am not sure how to handle that... most of our storage plugins don't use it at all (ZFS, RBD, LVM). That leaves the Plugin.pm (storage based) and potential 3rd party plugins that have their own implementation.
Depending on where you would use the now public `find_free_diskname`, it could be possible to know the format and if the suffix should already be added. For the move-disk to other guest (disk reassign) stuff I cannot do that and leave it undefined.

In the Plugin.pm implementation of `rename_volume` I added a check whether a suffix is present and, if not, take the one from the source volume.
Thinking about it, I might even strip any potential suffix and always use the one from the source, since the file format should not change with a rename.

> 
>> +
>> +    my $scfg = storage_config($cfg, $storeid);
>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +    return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt, $add_fmt_suffix);
>> +}
>> +
>>   sub storage_ids {
>>       my ($cfg) = @_;
>>





* Re: [pve-devel] [RFC storage 1/7] storage: expose find_free_diskname
  2021-07-02 13:38     ` Aaron Lauterer
@ 2021-07-05  7:58       ` Fabian Ebner
  0 siblings, 0 replies; 14+ messages in thread
From: Fabian Ebner @ 2021-07-05  7:58 UTC (permalink / raw)
  To: Aaron Lauterer, pve-devel

On 7/2/21 3:38 PM, Aaron Lauterer wrote:
> 
> 
> On 6/2/21 10:29 AM, Fabian Ebner wrote:
>> On 6/1/21 6:10 PM, Aaron Lauterer wrote:
>>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>>> ---
>>>   PVE/Storage.pm | 8 ++++++++
>>>   1 file changed, 8 insertions(+)
>>>
>>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>>> index aa36bad..93d09ce 100755
>>> --- a/PVE/Storage.pm
>>> +++ b/PVE/Storage.pm
>>> @@ -201,6 +201,14 @@ sub storage_can_replicate {
>>>       return $plugin->storage_can_replicate($scfg, $storeid, $format);
>>>   }
>>> +sub find_free_diskname {
>>> +    my ($cfg, $storeid, $vmid, $fmt, $add_fmt_suffix) = @_;
>>
>> Nit: Ideally, the $add_fmt_suffix should be decided by the plugin, as 
>> an outside caller cannot know what a plugin wants/expects. Don't know 
>> if that's easy to do the way things are though.
> 
> Yeah, I am not sure how to handle that... most of our storage plugins 
> don't use it at all (ZFS, RBD, LVM). That leaves the Plugin.pm (storage 
> based) and potential 3rd party plugins that have their own implementation.
> Depending on where you would use the now public `find_free_diskname`, it 
> could be possible to know the format and if the suffix should already be 
> added. For the move-disk to other guest (disk reassign) stuff I cannot 
> do that and leave it undefined.
> 

The problem is that half of the plugins will quietly ignore the 
parameter and the other half (dir based plugins) will return a disk name 
they cannot even handle when add_fmt_suffix=0. I'd rather avoid such a 
surprising interface.

The same criticism applies to the plugin's interface, but at least there 
the calls come from the plugins themselves, which know what they need/want.

Two potential solutions:

1) Remove the add_fmt_suffix parameter from the plugin's 
find_free_diskname and either:
a) add custom implementations to the non-dir-based plugins, never adding 
the suffix.
b) adapt the generic implementation to add the suffix depending on 
whether $scfg->{path} is present or not.

2) Add a new function to the plugin interface, returning whether the 
plugin expects a suffix to be added or not, and use that in this patch's 
proposed find_free_diskname.

IMHO 1) is much cleaner, but also a breaking change to the plugin API. 
But currently APIAGE is 0, so it wouldn't be as bad.
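
For 1b), the wrapper proposed in this patch could then drop the parameter
entirely, roughly (untested sketch; the actual suffix decision would live
in the generic Plugin.pm implementation, keyed off $scfg->{path}):

    sub find_free_diskname {
        my ($cfg, $storeid, $vmid, $fmt) = @_;

        my $scfg = storage_config($cfg, $storeid);
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});

        # the plugin decides on its own whether a format suffix is needed,
        # e.g. the generic implementation based on $scfg->{path}
        return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt);
    }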

> In the Plugin.pm implementation of `rename_volume` I added a check 
> whether a suffix is present and, if not, take the one from the source 
> volume. Thinking about it, I might even strip any potential suffix and 
> always use the one from the source, since the file format should not 
> change with a rename.
> 
>>
>>> +
>>> +    my $scfg = storage_config($cfg, $storeid);
>>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>>> +    return $plugin->find_free_diskname($storeid, $scfg, $vmid, $fmt, 
>>> $add_fmt_suffix);
>>> +}
>>> +
>>>   sub storage_ids {
>>>       my ($cfg) = @_;
>>>





end of thread

Thread overview: 14+ messages
2021-06-01 16:10 [pve-devel] [RFC series 0/7] move disk or volume to other guests Aaron Lauterer
2021-06-01 16:10 ` [pve-devel] [RFC storage 1/7] storage: expose find_free_diskname Aaron Lauterer
2021-06-02  8:29   ` Fabian Ebner
2021-07-02 13:38     ` Aaron Lauterer
2021-07-05  7:58       ` Fabian Ebner
2021-06-01 16:10 ` [pve-devel] [RFC storage 2/7] add disk rename feature Aaron Lauterer
2021-06-02  8:36   ` Fabian Ebner
2021-06-09 14:20     ` Aaron Lauterer
2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 3/7] cli: qm: change move_disk to move-disk Aaron Lauterer
2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 4/7] Drive: add valid_drive_names_with_unused Aaron Lauterer
2021-06-01 16:10 ` [pve-devel] [RFC qemu-server 5/7] api: move-disk: add move to other VM Aaron Lauterer
2021-06-02  8:52   ` Fabian Ebner
2021-06-01 16:10 ` [pve-devel] [PATCH container 6/7] cli: pct: change move_volume to move-volume Aaron Lauterer
2021-06-01 16:10 ` [pve-devel] [PATCH container 7/7] api: move-volume: add move to another container Aaron Lauterer
