public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH v6 series 0/5] disk reassign: add new feature
@ 2021-04-02 10:19 Aaron Lauterer
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature Aaron Lauterer
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-04-02 10:19 UTC (permalink / raw)
  To: pve-devel

This series implements a new feature which allows users to easily
reassign disks between VMs. Currently this is only possible with one of
the following manual steps:

* rename the disk image/file and do a `qm rescan`
* configure the disk manually and use the old image name, having an
    image for VM A assigned to VM B

The latter can cause unexpected behavior because PVE expects that the
VMID in a disk name always corresponds to the VM it is assigned to.
Thus, when a disk originally from VM A is manually configured as a disk
for VM B, deleting VM A will delete that disk as well, because it still
has the VMID of VM A in its name.
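To illustrate that naming convention, here is a toy Python sketch (not PVE code; the real storage layer allocates a fresh disk number via `find_free_diskname` rather than reusing the old one, and the regex here is an illustrative assumption): reassigning a directory-based volume essentially means rewriting the VMID embedded in its name.

```python
import re

def reassign_volname(volname: str, target_vmid: int) -> str:
    # toy sketch of the 'vm-<owner VMID>-disk-<n>.<fmt>' naming rule;
    # pattern and helper name are assumptions, not PVE internals
    m = re.match(r"vm-(\d+)-disk-(\d+)\.(raw|qcow2|vmdk)$", volname)
    if not m:
        raise ValueError(f"unsupported volume name: {volname}")
    _source_vmid, disk_nr, fmt = m.groups()
    # deleting the source VM would also delete this image as long as its
    # name still carries the source VMID - hence the rename on reassign
    return f"vm-{target_vmid}-disk-{disk_nr}.{fmt}"

print(reassign_volname("vm-100-disk-0.qcow2", 101))  # vm-101-disk-0.qcow2
```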

To issue a reassign from the CLI run:

qm reassign_disk <source VMID> <target VMID> <drive name>

where <drive name> is the config key of the disk, e.g. ide0, scsi1 and so
on.
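The set of accepted <drive name> values corresponds to `valid_drive_names_with_unused()` added in patch 2 of this series: the regular bus/device keys plus the `unusedN` slots. A rough Python sketch of that list (the per-bus limits and `MAX_UNUSED_DISKS` value are assumptions based on common QemuServer defaults, not taken from this series):

```python
MAX_UNUSED_DISKS = 256  # assumed value of $MAX_UNUSED_DISKS in Drive.pm

def valid_drive_names_with_unused():
    # sketch of PVE::QemuServer::Drive::valid_drive_names_with_unused():
    # all regular drive keys followed by unused0..unused<N-1>
    names = [f"ide{i}" for i in range(4)]
    names += [f"scsi{i}" for i in range(31)]
    names += [f"virtio{i}" for i in range(16)]
    names += [f"sata{i}" for i in range(6)]
    names.append("efidisk0")
    names += [f"unused{i}" for i in range(MAX_UNUSED_DISKS)]
    return names
```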

The following storage types are implemented at the moment:
* dir based ones
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph RBD

v5 -> v6:
* guard Replication snapshot cleanup
* add permission check for target vmid
* changed regex to match unused keys better
* refactor dir based feature check to reduce code repetition

v4 -> v5:
* rebase on current master
* reorder patches
* rename `drive_key` to `drive_name`
    thanks @Dominic for pointing out that there already are a lot of
    different names in use for this [0] and for suggesting not to
    invent yet another one ;)
* implemented suggested changes from Fabian [1][2]. More directly in the
    patches themselves

v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
  Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
  in directory based plugins to check against the new `reassign` feature
  (not too happy about the repetition for each plugin)
* task description mapping has been moved from widget-toolkit to
  pve-manager/utils


v2 -> v3:
* change locking approach
* add more checks
* add intermediate storage plugin for directory based plugins
* use feature flags
* split up the reassign method to have a dedicated method for the
    renaming itself
* handle linked clones
* clean up if disk used to be replicated

I hope I didn't forget anything major.

v1 -> v2:
print info about the new disk volid and key at the end of the job so it
shows up in the CLI output and task log

Changes from RFC -> V1:
* support to reassign unused disks
* digest for target vm config
* reorder the checks a bit
* add another check to verify that the given disk key actually exists
  in the config
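The digest checks follow PVE's usual optimistic concurrency pattern: the caller passes the SHA1 it last saw, and the operation aborts if the on-disk config has changed since. A minimal sketch of the idea (assuming, purely for illustration, a digest over the raw config text; how PVE actually derives the stored digest is an implementation detail):

```python
import hashlib

def assert_if_modified(expected_digest: str, raw_config: bytes) -> None:
    # sketch of the PVE::Tools::assert_if_modified idea (assumption:
    # SHA1 over the raw config content)
    actual = hashlib.sha1(raw_config).hexdigest()
    if expected_digest and expected_digest != actual:
        raise RuntimeError("detected modified configuration - "
                           "file changed by other user? Try again.")
```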

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/045986.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
[2] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html

storage: Aaron Lauterer (1):
  add disk reassign feature

 PVE/Storage.pm                 | 15 +++++++--
 PVE/Storage/CIFSPlugin.pm      | 13 ++++++++
 PVE/Storage/DirPlugin.pm       | 13 ++++++++
 PVE/Storage/GlusterfsPlugin.pm | 13 ++++++++
 PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++
 PVE/Storage/LvmThinPlugin.pm   |  1 +
 PVE/Storage/NFSPlugin.pm       | 13 ++++++++
 PVE/Storage/Plugin.pm          | 60 ++++++++++++++++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm       | 31 ++++++++++++++++++
 PVE/Storage/ZFSPoolPlugin.pm   | 26 +++++++++++++++
 10 files changed, 207 insertions(+), 2 deletions(-)


qemu-server: Aaron Lauterer (2):
  disk reassign: add API endpoint
  cli: disk reassign: add reassign_disk to qm command

 PVE/API2/Qemu.pm        | 172 ++++++++++++++++++++++++++++++++++++++++
 PVE/CLI/qm.pm           |   2 +
 PVE/QemuServer/Drive.pm |   4 +
 3 files changed, 178 insertions(+)

guest-common: Aaron Lauterer (1):
  Replication: mention disk reassign in comment of possible reasons

 PVE/Replication.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

manager: Aaron Lauterer (1):
  ui: tasks: add qmreassign task description

 www/manager6/Utils.js | 1 +
 1 file changed, 1 insertion(+)


-- 
2.20.1





^ permalink raw reply	[flat|nested] 18+ messages in thread

* [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature
  2021-04-02 10:19 [pve-devel] [PATCH v6 series 0/5] disk reassign: add new feature Aaron Lauterer
@ 2021-04-02 10:19 ` Aaron Lauterer
  2021-04-13  7:53   ` Dominic Jäger
  2021-04-15 11:07   ` Fabian Ebner
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint Aaron Lauterer
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-04-02 10:19 UTC (permalink / raw)
  To: pve-devel

Functionality has been added for the following storage types:

* dir based ones
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph

A new `reassign` feature flag has been introduced to mark which storage
plugins support the feature.

The storage API version (APIVER) and age (APIAGE) have been bumped.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v5 -> v6:
* refactor dir based feature check to reduce code repetition by
  introducing the file_can_reassign_volume sub that does the actual check

v4 -> v5:
* rebased on master
* bumped api ver and api age
* rephrased "not implemented" message as suggested [0].

v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
  Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
  in directory based plugins to check against the new `reassign` feature
  (not too happy about the repetition for each plugin)

v2 -> v3:
* added feature flags instead of dummy "not implemented" methods in
  plugins which do not support it as that would break compatibility with
  3rd party plugins.
  Had to make $features available outside the `has_features` method in
  Plugins.pm in order to be able to individually add features in the
  `BaseDirPlugin.pm`.
* added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
  this is explained in the commit message
* moved the actual renaming from the `reassign_volume` to a dedicated
  `rename_volume` method to make this functionality available to other
  possible uses in the future.
* added support for linked clones ($base)


rfc -> v1 -> v2: nothing changed

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html


 PVE/Storage.pm                 | 15 +++++++--
 PVE/Storage/CIFSPlugin.pm      | 13 ++++++++
 PVE/Storage/DirPlugin.pm       | 13 ++++++++
 PVE/Storage/GlusterfsPlugin.pm | 13 ++++++++
 PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++
 PVE/Storage/LvmThinPlugin.pm   |  1 +
 PVE/Storage/NFSPlugin.pm       | 13 ++++++++
 PVE/Storage/Plugin.pm          | 60 ++++++++++++++++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm       | 31 ++++++++++++++++++
 PVE/Storage/ZFSPoolPlugin.pm   | 26 +++++++++++++++
 10 files changed, 207 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 122c3e9..b950c82 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
 use PVE::Storage::PBSPlugin;
 
 # Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 8;
+use constant APIVER => 9;
 # Age is the number of versions we're backward compatible with.
 # This is like having 'current=APIVER' and age='APIAGE' in libtool,
 # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 7;
+use constant APIAGE => 8;
 
 # load standard plugins
 PVE::Storage::DirPlugin->register();
@@ -349,6 +349,7 @@ sub volume_snapshot_needs_fsfreeze {
 #            snapshot - taking a snapshot is possible
 #            sparseinit - volume is sparsely initialized
 #            template - conversion to base image is possible
+#            reassign - reassigning disks to other guests is possible
 # $snap - check if the feature is supported for a given snapshot
 # $running - if the guest owning the volume is running
 # $opts - hash with further options:
@@ -1843,6 +1844,16 @@ sub complete_volume {
     return $res;
 }
 
+sub reassign_volume {
+    my ($cfg, $volid, $target_vmid) = @_;
+
+    my ($storeid, $volname) = parse_volume_id($volid);
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+    return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
 # Various io-heavy operations require io/bandwidth limits which can be
 # configured on multiple levels: The global defaults in datacenter.cfg, and
 # per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
index be06cc7..3affa5d 100644
--- a/PVE/Storage/CIFSPlugin.pm
+++ b/PVE/Storage/CIFSPlugin.pm
@@ -293,4 +293,17 @@ sub update_volume_notes {
     PVE::Storage::DirPlugin::update_volume_notes($class, @_);
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+    shift @_;
+    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
+    return $class->SUPER::volume_has_feature(@_);
+}
+
 1;
diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
index 2267f11..cfdb575 100644
--- a/PVE/Storage/DirPlugin.pm
+++ b/PVE/Storage/DirPlugin.pm
@@ -156,4 +156,17 @@ sub check_config {
     return $opts;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+    shift @_;
+    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
+    return $class->SUPER::volume_has_feature(@_);
+}
+
 1;
diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index 2dd414d..8132a3e 100644
--- a/PVE/Storage/GlusterfsPlugin.pm
+++ b/PVE/Storage/GlusterfsPlugin.pm
@@ -348,4 +348,17 @@ sub check_connection {
     return defined($server) ? 1 : 0;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+    shift @_;
+    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
+    return $class->SUPER::volume_has_feature(@_);
+}
+
 1;
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index df49b76..0d9a691 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -339,6 +339,13 @@ sub lvcreate {
     run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
 }
 
+sub lvrename {
+    my ($vg, $oldname, $newname) = @_;
+
+    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
+    run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
+}
+
 sub alloc_image {
     my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
 
@@ -584,6 +591,7 @@ sub volume_has_feature {
 
     my $features = {
 	copy => { base => 1, current => 1},
+	reassign => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -684,4 +692,20 @@ sub volume_import_write {
 	input => '<&'.fileno($input_fh));
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid);
+    });
+}
+
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid) = @_;
+
+    lvrename($scfg->{vgname}, $source_volname, $target_volname);
+    return "${storeid}:${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index c9e127c..895af8b 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -355,6 +355,7 @@ sub volume_has_feature {
 	template => { current => 1},
 	copy => { base => 1, current => 1, snap => 1},
 	sparseinit => { base => 1, current => 1},
+	reassign => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
index 39bf15a..7217ca2 100644
--- a/PVE/Storage/NFSPlugin.pm
+++ b/PVE/Storage/NFSPlugin.pm
@@ -197,4 +197,17 @@ sub update_volume_notes {
     PVE::Storage::DirPlugin::update_volume_notes($class, @_);
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+    shift @_;
+    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
+    return $class->SUPER::volume_has_feature(@_);
+}
+
 1;
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index d2d8184..c1ea41d 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -927,6 +927,7 @@ sub storage_can_replicate {
     return 0;
 }
 
+
 sub volume_has_feature {
     my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
 
@@ -1456,4 +1457,63 @@ sub volume_import_formats {
     return ();
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    die "not implemented in storage plugin '$class'\n";
+}
+
+# general reassignment method for file/directory based storages
+sub file_reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $base;
+    my $base_vmid;
+    my $format;
+    my $source_vmid;
+
+    (undef, $volname, $source_vmid, $base, $base_vmid, undef, $format) = $class->parse_volname($volname);
+
+    # parse_volname strips the directory from volname
+    my $source_volname = "${source_vmid}/${volname}";
+
+    if ($base) {
+	$base = "${base_vmid}/${base}/";
+    } else {
+	$base = '';
+    }
+
+    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
+
+	return $class->rename_volume($scfg, $storeid, $source_volname, $target_volname, $target_vmid, $base);
+    });
+}
+
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
+
+    my $basedir = $class->get_subdir($scfg, 'images');
+    my $imagedir = "${basedir}/${vmid}";
+
+    mkpath $imagedir;
+
+    my $old_path = "${basedir}/${source_volname}";
+    my $new_path = "${imagedir}/${target_volname}";
+
+    rename($old_path, $new_path) ||
+	die "rename '$old_path' to '$new_path' failed - $!\n";
+
+    return "${storeid}:${base}${vmid}/${target_volname}";
+}
+
+sub file_can_reassign_volume {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+	$class->parse_volname($volname);
+
+    return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
+    return undef;
+}
+
 1;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index fab6d57..86a9a0d 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -712,6 +712,7 @@ sub volume_has_feature {
 	template => { current => 1},
 	copy => { base => 1, current => 1, snap => 1},
 	sparseinit => { base => 1, current => 1},
+	reassign => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -728,4 +729,34 @@ sub volume_has_feature {
     return undef;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $base;
+    (undef, $volname, undef, $base) = $class->parse_volname($volname);
+
+    if ($base) {
+	$base = $base . '/';
+    } else {
+	$base = '';
+    }
+
+    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
+    });
+}
+
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
+
+    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
+
+    run_rbd_command(
+	$cmd,
+	errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
+    );
+    return "${storeid}:${base}${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index fe65ae4..c63a94b 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -686,6 +686,7 @@ sub volume_has_feature {
 	copy => { base => 1, current => 1},
 	sparseinit => { base => 1, current => 1},
 	replicate => { base => 1, current => 1},
+	reassign => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -788,4 +789,29 @@ sub volume_import_formats {
     return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $base;
+    (undef, $volname, undef, $base) = $class->parse_volname($volname);
+
+    if ($base) {
+	$base = $base . '/';
+    } else {
+	$base = '';
+    }
+
+    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
+    });
+}
+
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
+
+    $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
+    return "${storeid}:${base}${target_volname}";
+}
+
 1;
-- 
2.20.1






* [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint
  2021-04-02 10:19 [pve-devel] [PATCH v6 series 0/5] disk reassign: add new feature Aaron Lauterer
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature Aaron Lauterer
@ 2021-04-02 10:19 ` Aaron Lauterer
  2021-04-15 11:52   ` Fabian Ebner
  2021-04-18 15:24   ` Thomas Lamprecht
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-04-02 10:19 UTC (permalink / raw)
  To: pve-devel

The goal of this new API endpoint is to provide an easy way to move a
disk between VMs as this was only possible with manual intervention
until now. Either by renaming the VM disk or by manually adding the
disks volid to the config of the other VM.

The latter can easily cause unexpected behavior, such as a disk
attached to VM B being deleted when VM A is removed, because the disk
used to belong to VM A. This happens because PVE assumes that the VMID
in the volname always matches the VM the disk is attached to and would
therefore remove any disk with VMID A when VM A is deleted.

The term `reassign` was chosen as it is not yet used
for VM disks.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v5 -> v6:
* guard Replication snapshot cleanup
    in addition to the eval, that code now only runs if the volume is
    on a storage with the 'replicate' feature
* add permission check for target vmid
* changed regex to match unused keys better

thx @Fabian for these suggestions/catching problems

v4 -> v5:
* implemented suggestions from Fabian [1]
    * logging before action
    * improving description
    * improving error messages
    * using Replication::prepare to remove replication snapshots
    * check if disk is physical disk using /dev/...

v3 -> v4: nothing

v2 -> v3:
* reordered the locking as discussed with fabian [0] to
run checks
    fork worker
	lock source config
	    lock target config
		run checks
		...

* added more checks
    * will not reassign to or from templates
    * will not reassign if VM has snapshots present
* cleanup if disk used to be replicated
* made task log slightly more verbose
* integrated general recommendations regarding code
* renamed `disk` to `drive_key`
* prepended some vars with `source_` for easier distinction

v1 -> v2: print config key and volid info at the end of the job so it
shows up on the CLI and task log

rfc -> v1:
* add support to reassign unused disks
* add support to provide a config digest for the target vm
* add additional check if disk key is present in config
* reorder checks a bit

In order to support unused disk I had to extend
PVE::QemuServer::Drive::valid_drive_names for the API parameter
validation.

Checks are ordered so that cheap tests are run at the first chance to
fail early.

The check if both VMs are present on the node is a bit redundant because
locking the config files will fail if the VM is not present. But with
the additional check we can provide a useful error message to the user
instead of a "Configuration file xyz does not exist" error.
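The resulting check/lock ordering — cheap checks first, then re-validate everything under both config locks inside the worker — can be sketched like this (Python stand-ins for lock_config and the checks; names are illustrative and the fork_worker step is omitted):

```python
from contextlib import contextmanager

@contextmanager
def lock_config(vmid, held):
    # stand-in for PVE::QemuConfig->lock_config; only tracks nesting here
    held.append(vmid)
    try:
        yield
    finally:
        held.pop()

def reassign_disk(source_vmid, target_vmid, run_checks, do_reassign):
    held = []
    run_checks()  # early, cheap: fail before a worker task is forked
    with lock_config(source_vmid, held), lock_config(target_vmid, held):
        # either config may have changed between the early check and
        # acquiring the locks, so all checks run again under both locks
        run_checks()
        return do_reassign()
```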

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html

 PVE/API2/Qemu.pm        | 172 ++++++++++++++++++++++++++++++++++++++++
 PVE/QemuServer/Drive.pm |   4 +
 2 files changed, 176 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index e95ab13..9642b9b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
 use PVE::VZDump::Plugin;
 use PVE::DataCenterConfig;
 use PVE::SSHInfo;
+use PVE::Replication;
 
 BEGIN {
     if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -4377,4 +4378,175 @@ __PACKAGE__->register_method({
 	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
     }});
 
+__PACKAGE__->register_method({
+    name => 'reassign_vm_disk',
+    path => '{vmid}/reassign_disk',
+    method => 'POST',
+    protected => 1,
+    proxyto => 'node',
+    description => "Reassign a disk to another VM",
+    permissions => {
+	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
+	check => [ 'and',
+		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
+		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
+	    ],
+    },
+    parameters => {
+        additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    drive_name => {
+	        type => 'string',
+		description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+	    },
+	    digest => {
+		type => 'string',
+		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
+	    target_digest => {
+		type => 'string',
+		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
+	},
+    },
+    returns => {
+	type => 'string',
+	description => "the task ID.",
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $node = extract_param($param, 'node');
+	my $source_vmid = extract_param($param, 'vmid');
+	my $target_vmid = extract_param($param, 'target_vmid');
+	my $source_digest = extract_param($param, 'digest');
+	my $target_digest = extract_param($param, 'target_digest');
+	my $drive_name = extract_param($param, 'drive_name');
+
+	my $storecfg = PVE::Storage::config();
+	my $vmlist;
+	my $drive;
+	my $source_volid;
+
+	$rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+	    if $authuser ne 'root@pam';
+
+	die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
+	    if $source_vmid eq $target_vmid;
+
+	my $load_and_check_configs = sub {
+	    $vmlist = PVE::Cluster::get_vmlist()->{ids};
+	    die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
+		if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+	    my $source_conf = PVE::QemuConfig->load_config($source_vmid);
+	    PVE::QemuConfig->check_lock($source_conf);
+	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+	    PVE::QemuConfig->check_lock($target_conf);
+
+	    die "Can't reassign disks from or to templates\n"
+		if ($source_conf->{template} || $target_conf->{template});
+
+	    if ($source_digest) {
+		eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${source_vmid}: ${err}";
+		}
+	    }
+
+	    if ($target_digest) {
+		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${target_vmid}: ${err}";
+		}
+	    }
+
+	    die "Disk '${drive_name}' does not exist\n"
+		if !defined($source_conf->{$drive_name});
+
+	    $drive = PVE::QemuServer::parse_drive($drive_name, $source_conf->{$drive_name});
+	    $source_volid = $drive->{file};
+	    die "disk '${drive_name}' has no associated volume\n" if !$source_volid;
+	    die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
+	    die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
+	    die "Can't reassign disk used by a snapshot\n"
+		if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_name, $source_volid);
+
+	    die "Storage does not support the reassignment of this disk\n"
+		if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
+
+	    die "Cannot reassign disk while the source VM is running\n"
+		if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/^unused\d+$/;
+
+	    return ($source_conf, $target_conf);
+	};
+
+	my $logfunc = sub {
+	    my ($msg) = @_;
+	    print STDERR "$msg\n";
+	};
+
+	my $reassign_func = sub {
+	    return PVE::QemuConfig->lock_config($source_vmid, sub {
+		return PVE::QemuConfig->lock_config($target_vmid, sub {
+		    my ($source_conf, $target_conf) = &$load_and_check_configs();
+
+		    PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_name} --target_vmid $target_vmid");
+
+		    my $new_volid = PVE::Storage::reassign_volume(
+			$storecfg,
+			$source_volid,
+			$target_vmid,
+		    );
+
+		    delete $source_conf->{$drive_name};
+		    print "removing disk '${drive_name}' from VM '${source_vmid}'\n";
+		    PVE::QemuConfig->write_config($source_vmid, $source_conf);
+
+		    # remove possible replication snapshots
+		    if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $source_volid)) {
+			eval {
+			    PVE::Replication::prepare(
+				$storecfg,
+				[$new_volid],
+				undef,
+				undef,
+				undef,
+				$logfunc,
+			    )
+			};
+			if (my $err = $@) {
+			    print "Failed to remove replication snapshots on reassigned disk. Manual cleanup could be necessary.\n";
+			}
+		    }
+
+		    my $key;
+		    eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
+		    if (my $err = $@) {
+			print "failed to add reassigned disk '${new_volid}' to VM '${target_vmid}'. Try to free an 'unused' disk slot and run 'qm rescan ${target_vmid}'.\n";
+			return 0;
+		    }
+
+		    print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
+		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
+		});
+	    });
+	};
+
+	&$load_and_check_configs();
+
+	return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
+    }});
+
 1;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 01ea8d7..e938b9b 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -392,6 +392,10 @@ sub valid_drive_names {
             'efidisk0');
 }
 
+sub valid_drive_names_with_unused {
+    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
+}
+
 sub is_valid_drivename {
     my $dev = shift;
 
-- 
2.20.1






* [pve-devel] [PATCH v6 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command
  2021-04-02 10:19 [pve-devel] [PATCH v6 series 0/5] disk reassign: add new feature Aaron Lauterer
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature Aaron Lauterer
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint Aaron Lauterer
@ 2021-04-02 10:19 ` Aaron Lauterer
  2021-04-18 14:50   ` Thomas Lamprecht
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 guest-common 4/5] Replication: mention disk reassign in comment of possible reasons Aaron Lauterer
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 manager 5/5] ui: tasks: add qmreassign task description Aaron Lauterer
  4 siblings, 1 reply; 18+ messages in thread
From: Aaron Lauterer @ 2021-04-02 10:19 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v5 -> v6: nothing
v4 -> v5: renamed `drive_key` to `drive_name`
v3 ->v4: nothing
v2 -> v3: renamed parameter `disk` to `drive_key`
rfc -> v1 -> v2: nothing changed
 PVE/CLI/qm.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index f8972bd..36ae8e7 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -914,6 +914,8 @@ our $cmddef = {
 
     move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
 
+    reassign_disk => [ "PVE::API2::Qemu", 'reassign_vm_disk', ['vmid', 'target_vmid', 'drive_name'], { node => $nodename } ],
+
     unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
 
     config => [ "PVE::API2::Qemu", 'vm_config', ['vmid'],
-- 
2.20.1






* [pve-devel] [PATCH v6 guest-common 4/5] Replication: mention disk reassign in comment of possible reasons
  2021-04-02 10:19 [pve-devel] [PATCH v6 series 0/5] disk reassign: add new feature Aaron Lauterer
                   ` (2 preceding siblings ...)
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
@ 2021-04-02 10:19 ` Aaron Lauterer
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 manager 5/5] ui: tasks: add qmreassign task description Aaron Lauterer
  4 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-04-02 10:19 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v6: nothing
v5: newly added

 PVE/Replication.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Replication.pm b/PVE/Replication.pm
index ae0f145..47f5434 100644
--- a/PVE/Replication.pm
+++ b/PVE/Replication.pm
@@ -169,7 +169,7 @@ sub prepare {
 			# logfunc will written in replication log.
 			$logfunc->("delete stale replication snapshot error: $err");
 		    }		
-		# Last_sync=0 and a replication snapshot only occur, if the VM was stolen
+		# Last_sync=0 and a replication snapshot only occur, if the VM was stolen or the disk reassigned
 		} else {
 		    $last_snapshots->{$volid}->{$snap} = 1;
 		}
-- 
2.20.1





^ permalink raw reply	[flat|nested] 18+ messages in thread

* [pve-devel] [PATCH v6 manager 5/5] ui: tasks: add qmreassign task description
  2021-04-02 10:19 [pve-devel] [PATCH v6 series 0/5] disk reassign: add new feature Aaron Lauterer
                   ` (3 preceding siblings ...)
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 guest-common 4/5] Replication: mention disk reassign in comment of possible reasons Aaron Lauterer
@ 2021-04-02 10:19 ` Aaron Lauterer
  4 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-04-02 10:19 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v4 -> v5 -> v6: rebased

 www/manager6/Utils.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index f502950f..51942938 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1801,6 +1801,7 @@ Ext.define('PVE.Utils', {
 	    qmigrate: ['VM', gettext('Migrate')],
 	    qmmove: ['VM', gettext('Move disk')],
 	    qmpause: ['VM', gettext('Pause')],
+	    qmreassign: ['VM', gettext('Reassign disk')],
 	    qmreboot: ['VM', gettext('Reboot')],
 	    qmreset: ['VM', gettext('Reset')],
 	    qmrestore: ['VM', gettext('Restore')],
-- 
2.20.1





^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature Aaron Lauterer
@ 2021-04-13  7:53   ` Dominic Jäger
  2021-04-15 11:07   ` Fabian Ebner
  1 sibling, 0 replies; 18+ messages in thread
From: Dominic Jäger @ 2021-04-13  7:53 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

Needs a rebase since the rbd patches were applied

On Fri, Apr 02, 2021 at 12:19:19PM +0200, Aaron Lauterer wrote:
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $base;;
> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
> +
> +    if ($base) {
> +	$base = $base . '/';
> +    } else {
> +	$base = '';
> +    }
> +
> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
> +    });
> +}
> +
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> +    $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
> +    return "${storeid}:${base}${target_volname}";
> +}
1. imo a ternary would not decrease readability much here, and it is only one line:
> $base = $base ? "$base/" : "";
2. We could maybe move the if to rename_volume. As far as I see, it is not used
   in reassign_volume and we could avoid silently guaranteeing properties of $base
   across subs.

And there is a double ;; somewhere




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature Aaron Lauterer
  2021-04-13  7:53   ` Dominic Jäger
@ 2021-04-15 11:07   ` Fabian Ebner
  2021-04-15 11:31     ` Fabian Ebner
  1 sibling, 1 reply; 18+ messages in thread
From: Fabian Ebner @ 2021-04-15 11:07 UTC (permalink / raw)
  To: pve-devel, a.lauterer

We often have a generic implementation in Plugin.pm for filesystem-based 
storages (maybe erroring out if there is no $scfg->{path}). Is there a 
reason not to use such an approach, i.e. making file_reassign_volume the 
default reassign_volume implementation in Plugin.pm? This would avoid 
some fragmentation/duplication. Of course, the plugins that cannot use 
the generic implementation need to provide their own or signal that they 
do not have the feature.
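
A default implementation along those lines could look roughly like this in Plugin.pm (an untested sketch; the $scfg->{path} guard and the exact error wording are assumptions, not part of the patch):

```perl
# Sketch only: fall back to the file/dir based implementation when the
# storage is path based, otherwise signal that the feature is unsupported.
sub reassign_volume {
    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;

    die "storage plugin '$class' does not support reassigning volumes\n"
	if !$scfg->{path};

    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
}
```

Plugins like ZFS, LVM and RBD would then still override this with their own implementation, as in the patch.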

More comments inline:

Am 02.04.21 um 12:19 schrieb Aaron Lauterer:
> Functionality has been added for the following storage types:
> 
> * dir based ones
>      * directory
>      * NFS
>      * CIFS
>      * gluster

What about CephFS? Can it use the generic implementation or not? If it 
can and you choose the approach I suggested at the beginning, you don't 
even need to add code to it ;)

> * ZFS
> * (thin) LVM
> * Ceph
> 
> A new feature `reassign` has been introduced to mark which storage
> plugin supports the feature.
> 
> Version API and AGE have been bumped.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v5 -> v6:
> * refactor dir based feature check to reduce code repetition by
>    introducing the file_can_reassign_volume sub that does the actual check
> 
> v4 -> v5:
> * rebased on master
> * bumped api ver and api age
> * rephrased "not implemented" message as suggested [0].
> 
> v3 -> v4:
> * revert intermediate storage plugin for directory based plugins
> * add a `die "not supported"` method in Plugin.pm
> * dir based plugins now call the file_reassign_volume method in
>    Plugin.pm as the generic file/directory based method
> * restored old `volume_has_feature` method in Plugin.pm and override it
>    in directory based plugins to check against the new `reassign` feature
>    (not too happy about the repetition for each plugin)
> 
> v2 -> v3:
> * added feature flags instead of dummy "not implemented" methods in
>    plugins which do not support it as that would break compatibility with
>    3rd party plugins.
>    Had to make $features available outside the `has_features` method in
>    Plugins.pm in order to be able to individually add features in the
>    `BaseDirPlugin.pm`.
> * added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
>    this is explained in the commit message
> * moved the actual renaming from the `reassign_volume` to a dedicated
>    `rename_volume` method to make this functionality available to other
>    possible uses in the future.
> * added support for linked clones ($base)
> 
> 
> rfc -> v1 -> v2: nothing changed
> 
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
> 
> 
>   PVE/Storage.pm                 | 15 +++++++--
>   PVE/Storage/CIFSPlugin.pm      | 13 ++++++++
>   PVE/Storage/DirPlugin.pm       | 13 ++++++++
>   PVE/Storage/GlusterfsPlugin.pm | 13 ++++++++
>   PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++
>   PVE/Storage/LvmThinPlugin.pm   |  1 +
>   PVE/Storage/NFSPlugin.pm       | 13 ++++++++
>   PVE/Storage/Plugin.pm          | 60 ++++++++++++++++++++++++++++++++++
>   PVE/Storage/RBDPlugin.pm       | 31 ++++++++++++++++++
>   PVE/Storage/ZFSPoolPlugin.pm   | 26 +++++++++++++++
>   10 files changed, 207 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index 122c3e9..b950c82 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
>   use PVE::Storage::PBSPlugin;
>   
>   # Storage API version. Increment it on changes in storage API interface.
> -use constant APIVER => 8;
> +use constant APIVER => 9;
>   # Age is the number of versions we're backward compatible with.
>   # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>   # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
> -use constant APIAGE => 7;
> +use constant APIAGE => 8;
>   
>   # load standard plugins
>   PVE::Storage::DirPlugin->register();
> @@ -349,6 +349,7 @@ sub volume_snapshot_needs_fsfreeze {
>   #            snapshot - taking a snapshot is possible
>   #            sparseinit - volume is sparsely initialized
>   #            template - conversion to base image is possible
> +#            reassign - reassigning disks to other guest is possible
>   # $snap - check if the feature is supported for a given snapshot
>   # $running - if the guest owning the volume is running
>   # $opts - hash with further options:
> @@ -1843,6 +1844,16 @@ sub complete_volume {
>       return $res;
>   }
>   
> +sub reassign_volume {
> +    my ($cfg, $volid, $target_vmid) = @_;
> +
> +    my ($storeid, $volname) = parse_volume_id($volid);
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +    return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
>   # Various io-heavy operations require io/bandwidth limits which can be
>   # configured on multiple levels: The global defaults in datacenter.cfg, and
>   # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
> index be06cc7..3affa5d 100644
> --- a/PVE/Storage/CIFSPlugin.pm
> +++ b/PVE/Storage/CIFSPlugin.pm
> @@ -293,4 +293,17 @@ sub update_volume_notes {
>       PVE::Storage::DirPlugin::update_volume_notes($class, @_);
>   }
>   
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> +    shift @_;
> +    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
> +    return $class->SUPER::volume_has_feature(@_);
> +}
> +
>   1;
> diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
> index 2267f11..cfdb575 100644
> --- a/PVE/Storage/DirPlugin.pm
> +++ b/PVE/Storage/DirPlugin.pm
> @@ -156,4 +156,17 @@ sub check_config {
>       return $opts;
>   }
>   
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> +    shift @_;
> +    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
> +    return $class->SUPER::volume_has_feature(@_);
> +}
> +
>   1;
> diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
> index 2dd414d..8132a3e 100644
> --- a/PVE/Storage/GlusterfsPlugin.pm
> +++ b/PVE/Storage/GlusterfsPlugin.pm
> @@ -348,4 +348,17 @@ sub check_connection {
>       return defined($server) ? 1 : 0;
>   }
>   
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> +    shift @_;
> +    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
> +    return $class->SUPER::volume_has_feature(@_);
> +}
> +
>   1;
> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
> index df49b76..0d9a691 100644
> --- a/PVE/Storage/LVMPlugin.pm
> +++ b/PVE/Storage/LVMPlugin.pm
> @@ -339,6 +339,13 @@ sub lvcreate {
>       run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
>   }
>   
> +sub lvrename {
> +    my ($vg, $oldname, $newname) = @_;
> +
> +    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
> +    run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
> +}
> +
>   sub alloc_image {
>       my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>   
> @@ -584,6 +591,7 @@ sub volume_has_feature {
>   
>       my $features = {
>   	copy => { base => 1, current => 1},
> +	reassign => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -684,4 +692,20 @@ sub volume_import_write {
>   	input => '<&'.fileno($input_fh));
>   }
>   
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid);
> +    });
> +}
> +
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid) = @_;

Nit: The signature is different from the corresponding one in Plugin.pm. 
I know you're not using $base here, but maybe add it for consistency.

That said, why not assemble the new volume ID in the reassign function 
itself and have the rename function only be concerned with renaming? 
Then you could drop the $base from the signature altogether.
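
Concretely, that split might look like this (a rough sketch based on the LVM variant above, not tested):

```perl
# Sketch: rename_volume is only concerned with renaming; the resulting
# volume ID is assembled in reassign_volume itself.
sub reassign_volume {
    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;

    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
	$class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid);
	return "${storeid}:${target_volname}";
    });
}
```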

> +
> +    lvrename($scfg->{vgname}, $source_volname, $target_volname);
> +    return "${storeid}:${target_volname}";
> +}
> +
>   1;
> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
> index c9e127c..895af8b 100644
> --- a/PVE/Storage/LvmThinPlugin.pm
> +++ b/PVE/Storage/LvmThinPlugin.pm
> @@ -355,6 +355,7 @@ sub volume_has_feature {
>   	template => { current => 1},
>   	copy => { base => 1, current => 1, snap => 1},
>   	sparseinit => { base => 1, current => 1},
> +	reassign => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
> index 39bf15a..7217ca2 100644
> --- a/PVE/Storage/NFSPlugin.pm
> +++ b/PVE/Storage/NFSPlugin.pm
> @@ -197,4 +197,17 @@ sub update_volume_notes {
>       PVE::Storage::DirPlugin::update_volume_notes($class, @_);
>   }
>   
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> +    shift @_;
> +    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
> +    return $class->SUPER::volume_has_feature(@_);
> +}
> +
>   1;
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index d2d8184..c1ea41d 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -927,6 +927,7 @@ sub storage_can_replicate {
>       return 0;
>   }
>   
> +

Nit: accidental newline

>   sub volume_has_feature {
>       my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>   
> @@ -1456,4 +1457,63 @@ sub volume_import_formats {
>       return ();
>   }
>   
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    die "not implemented in storage plugin '$class'\n";
> +}
> +
> +# general reassignment method for file/directory based storages
> +sub file_reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $base;
> +    my $base_vmid;
> +    my $format;
> +    my $source_vmid;
> +
> +    (undef, $volname, $source_vmid, $base, $base_vmid, undef, $format) = $class->parse_volname($volname);
> +
> +    # parse_volname strips the directory from volname
> +    my $source_volname = "${source_vmid}/${volname}";
> +
> +    if ($base) {
> +	$base = "${base_vmid}/${base}/";
> +    } else {
> +	$base = '';
> +    }
> +
> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
> +
> +	return $class->rename_volume($scfg, $storeid, $source_volname, $target_volname, $target_vmid, $base);
> +    });
> +}
> +
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> +    my $basedir = $class->get_subdir($scfg, 'images');
> +    my $imagedir = "${basedir}/${vmid}";
> +
> +    mkpath $imagedir;
> +
> +    my $old_path = "${basedir}/${source_volname}";
> +    my $new_path = "${imagedir}/${target_volname}";
> +
> +    rename($old_path, $new_path) ||
> +	die "rename '$old_path' to '$new_path' failed - $!\n";
> +
> +    return "${storeid}:${base}${vmid}/${target_volname}";
> +}
> +
> +sub file_can_reassign_volume {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> +	$class->parse_volname($volname);
> +
> +    return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> +    return undef;
> +}
> +
>   1;
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index fab6d57..86a9a0d 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -712,6 +712,7 @@ sub volume_has_feature {
>   	template => { current => 1},
>   	copy => { base => 1, current => 1, snap => 1},
>   	sparseinit => { base => 1, current => 1},
> +	reassign => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -728,4 +729,34 @@ sub volume_has_feature {
>       return undef;
>   }
>   
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $base;;
> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
> +
> +    if ($base) {
> +	$base = $base . '/';
> +    } else {
> +	$base = '';
> +    }
> +
> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
> +    });
> +}
> +
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> +    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
> +
> +    run_rbd_command(
> +	$cmd,
> +	errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
> +    );
> +    return "${storeid}:${base}${target_volname}";
> +}
> +
>   1;
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index fe65ae4..c63a94b 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -686,6 +686,7 @@ sub volume_has_feature {
>   	copy => { base => 1, current => 1},
>   	sparseinit => { base => 1, current => 1},
>   	replicate => { base => 1, current => 1},
> +	reassign => {current => 1},
>       };
>   
>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -788,4 +789,29 @@ sub volume_import_formats {
>       return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
>   }
>   
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $base;;
> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
> +
> +    if ($base) {
> +	$base = $base . '/';
> +    } else {
> +	$base = '';
> +    }
> +
> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
> +    });
> +}
> +
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> +    $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
> +    return "${storeid}:${base}${target_volname}";
> +}
> +
>   1;
> 




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature
  2021-04-15 11:07   ` Fabian Ebner
@ 2021-04-15 11:31     ` Fabian Ebner
  2021-04-15 11:53       ` Aaron Lauterer
  0 siblings, 1 reply; 18+ messages in thread
From: Fabian Ebner @ 2021-04-15 11:31 UTC (permalink / raw)
  To: pve-devel, a.lauterer

Am 15.04.21 um 13:07 schrieb Fabian Ebner:
> We often have a generic implementation in Plugin.pm for filesystem-based 
> storages (maybe erroring out if there is no $scfg->{path}). Is there a 
> reason not to use such an approach, i.e. making file_reassign_volume the 
> default reassign_volume implementation in Plugin.pm? This would avoid 

I mean, together with adapting the default implementation of 
volume_has_feature too.

One more thing I noticed below:

> some fragmentation/duplication. Of course, the plugins that cannot use 
> the generic implementation need to provide their own or signal that they 
> do not have the feature.
> 
> More comments inline:
> 
> Am 02.04.21 um 12:19 schrieb Aaron Lauterer:
>> Functionality has been added for the following storage types:
>>
>> * dir based ones
>>      * directory
>>      * NFS
>>      * CIFS
>>      * gluster
> 
> What about CephFS? Can it use the generic implementation or not? If it 
> can and you choose the approach I suggested at the beginning, you don't 
> even need to add code to it ;)
> 
>> * ZFS
>> * (thin) LVM
>> * Ceph
>>
>> A new feature `reassign` has been introduced to mark which storage
>> plugin supports the feature.
>>
>> Version API and AGE have been bumped.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> v5 -> v6:
>> * refactor dir based feature check to reduce code repetition by
>>    introducing the file_can_reassign_volume sub that does the actual 
>> check
>>
>> v4 -> v5:
>> * rebased on master
>> * bumped api ver and api age
>> * rephrased "not implemented" message as suggested [0].
>>
>> v3 -> v4:
>> * revert intermediate storage plugin for directory based plugins
>> * add a `die "not supported"` method in Plugin.pm
>> * dir based plugins now call the file_reassign_volume method in
>>    Plugin.pm as the generic file/directory based method
>> * restored old `volume_has_feature` method in Plugin.pm and override it
>>    in directory based plugins to check against the new `reassign` feature
>>    (not too happy about the repetition for each plugin)
>>
>> v2 -> v3:
>> * added feature flags instead of dummy "not implemented" methods in
>>    plugins which do not support it as that would break compatibility with
>>    3rd party plugins.
>>    Had to make $features available outside the `has_features` method in
>>    Plugins.pm in order to be able to individually add features in the
>>    `BaseDirPlugin.pm`.
>> * added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
>>    this is explained in the commit message
>> * moved the actual renaming from the `reassign_volume` to a dedicated
>>    `rename_volume` method to make this functionality available to other
>>    possible uses in the future.
>> * added support for linked clones ($base)
>>
>>
>> rfc -> v1 -> v2: nothing changed
>>
>> [0] 
>> https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
>>
>>
>>   PVE/Storage.pm                 | 15 +++++++--
>>   PVE/Storage/CIFSPlugin.pm      | 13 ++++++++
>>   PVE/Storage/DirPlugin.pm       | 13 ++++++++
>>   PVE/Storage/GlusterfsPlugin.pm | 13 ++++++++
>>   PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++
>>   PVE/Storage/LvmThinPlugin.pm   |  1 +
>>   PVE/Storage/NFSPlugin.pm       | 13 ++++++++
>>   PVE/Storage/Plugin.pm          | 60 ++++++++++++++++++++++++++++++++++
>>   PVE/Storage/RBDPlugin.pm       | 31 ++++++++++++++++++
>>   PVE/Storage/ZFSPoolPlugin.pm   | 26 +++++++++++++++
>>   10 files changed, 207 insertions(+), 2 deletions(-)
>>
>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>> index 122c3e9..b950c82 100755
>> --- a/PVE/Storage.pm
>> +++ b/PVE/Storage.pm
>> @@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
>>   use PVE::Storage::PBSPlugin;
>>   # Storage API version. Increment it on changes in storage API 
>> interface.
>> -use constant APIVER => 8;
>> +use constant APIVER => 9;
>>   # Age is the number of versions we're backward compatible with.
>>   # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>>   # see 
>> https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html 
>>
>> -use constant APIAGE => 7;
>> +use constant APIAGE => 8;
>>   # load standard plugins
>>   PVE::Storage::DirPlugin->register();
>> @@ -349,6 +349,7 @@ sub volume_snapshot_needs_fsfreeze {
>>   #            snapshot - taking a snapshot is possible
>>   #            sparseinit - volume is sparsely initialized
>>   #            template - conversion to base image is possible
>> +#            reassign - reassigning disks to other guest is possible
>>   # $snap - check if the feature is supported for a given snapshot
>>   # $running - if the guest owning the volume is running
>>   # $opts - hash with further options:
>> @@ -1843,6 +1844,16 @@ sub complete_volume {
>>       return $res;
>>   }
>> +sub reassign_volume {
>> +    my ($cfg, $volid, $target_vmid) = @_;
>> +
>> +    my ($storeid, $volname) = parse_volume_id($volid);
>> +    my $scfg = storage_config($cfg, $storeid);
>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>> +

Check if the storage is enabled and activate it?
And is it necessary to activate the volume for certain storages, e.g. 
LVM (might not be, but did you check)?
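
The checks asked about here could be added before the plugin call, along these lines (a sketch only; whether per-volume activation is also needed for storages like LVM is exactly the open question):

```perl
sub reassign_volume {
    my ($cfg, $volid, $target_vmid) = @_;

    my ($storeid, $volname) = parse_volume_id($volid);
    # refuse disabled storages and make sure the storage is active
    my $scfg = storage_check_enabled($cfg, $storeid);
    activate_storage($cfg, $storeid);

    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
    return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
}
```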

>> +    return $plugin->reassign_volume($scfg, $storeid, $volname, 
>> $target_vmid);
>> +}
>> +
>>   # Various io-heavy operations require io/bandwidth limits which can be
>>   # configured on multiple levels: The global defaults in 
>> datacenter.cfg, and
>>   # per-storage overrides. When we want to do a restore from storage A 
>> to storage
>> diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
>> index be06cc7..3affa5d 100644
>> --- a/PVE/Storage/CIFSPlugin.pm
>> +++ b/PVE/Storage/CIFSPlugin.pm
>> @@ -293,4 +293,17 @@ sub update_volume_notes {
>>       PVE::Storage::DirPlugin::update_volume_notes($class, @_);
>>   }
>> +sub reassign_volume {
>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, 
>> $target_vmid);
>> +}
>> +
>> +sub volume_has_feature {
>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>> $running, $opts) = @_;
>> +
>> +    shift @_;
>> +    return $class->file_can_reassign_volume(@_) if $feature eq 
>> 'reassign';
>> +    return $class->SUPER::volume_has_feature(@_);
>> +}
>> +
>>   1;
>> diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
>> index 2267f11..cfdb575 100644
>> --- a/PVE/Storage/DirPlugin.pm
>> +++ b/PVE/Storage/DirPlugin.pm
>> @@ -156,4 +156,17 @@ sub check_config {
>>       return $opts;
>>   }
>> +sub reassign_volume {
>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, 
>> $target_vmid);
>> +}
>> +
>> +sub volume_has_feature {
>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>> $running, $opts) = @_;
>> +
>> +    shift @_;
>> +    return $class->file_can_reassign_volume(@_) if $feature eq 
>> 'reassign';
>> +    return $class->SUPER::volume_has_feature(@_);
>> +}
>> +
>>   1;
>> diff --git a/PVE/Storage/GlusterfsPlugin.pm 
>> b/PVE/Storage/GlusterfsPlugin.pm
>> index 2dd414d..8132a3e 100644
>> --- a/PVE/Storage/GlusterfsPlugin.pm
>> +++ b/PVE/Storage/GlusterfsPlugin.pm
>> @@ -348,4 +348,17 @@ sub check_connection {
>>       return defined($server) ? 1 : 0;
>>   }
>> +sub reassign_volume {
>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, 
>> $target_vmid);
>> +}
>> +
>> +sub volume_has_feature {
>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>> $running, $opts) = @_;
>> +
>> +    shift @_;
>> +    return $class->file_can_reassign_volume(@_) if $feature eq 
>> 'reassign';
>> +    return $class->SUPER::volume_has_feature(@_);
>> +}
>> +
>>   1;
>> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
>> index df49b76..0d9a691 100644
>> --- a/PVE/Storage/LVMPlugin.pm
>> +++ b/PVE/Storage/LVMPlugin.pm
>> @@ -339,6 +339,13 @@ sub lvcreate {
>>       run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
>>   }
>> +sub lvrename {
>> +    my ($vg, $oldname, $newname) = @_;
>> +
>> +    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
>> +    run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to 
>> '${newname}' error");
>> +}
>> +
>>   sub alloc_image {
>>       my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>> @@ -584,6 +591,7 @@ sub volume_has_feature {
>>       my $features = {
>>       copy => { base => 1, current => 1},
>> +    reassign => {current => 1},
>>       };
>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>> @@ -684,4 +692,20 @@ sub volume_import_write {
>>       input => '<&'.fileno($input_fh));
>>   }
>> +sub reassign_volume {
>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>> +
>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
>> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, 
>> $target_vmid);
>> +    return $class->rename_volume($scfg, $storeid, $volname, 
>> $target_volname, $target_vmid);
>> +    });
>> +}
>> +
>> +sub rename_volume {
>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, 
>> $vmid) = @_;
> 
> Nit: The signature is different from the corresponding one in Plugin.pm. 
> I know you're not using $base here, but maybe add it for consistency.
> 
> That said, why not assemble the new volume ID in the reassign function 
> itself and have the rename function only be concerned with renaming? 
> Then you could drop the $base from the signature altogether.
> 
>> +
>> +    lvrename($scfg->{vgname}, $source_volname, $target_volname);
>> +    return "${storeid}:${target_volname}";
>> +}
>> +
>>   1;
>> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
>> index c9e127c..895af8b 100644
>> --- a/PVE/Storage/LvmThinPlugin.pm
>> +++ b/PVE/Storage/LvmThinPlugin.pm
>> @@ -355,6 +355,7 @@ sub volume_has_feature {
>>       template => { current => 1},
>>       copy => { base => 1, current => 1, snap => 1},
>>       sparseinit => { base => 1, current => 1},
>> +    reassign => {current => 1},
>>       };
>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>> diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
>> index 39bf15a..7217ca2 100644
>> --- a/PVE/Storage/NFSPlugin.pm
>> +++ b/PVE/Storage/NFSPlugin.pm
>> @@ -197,4 +197,17 @@ sub update_volume_notes {
>>       PVE::Storage::DirPlugin::update_volume_notes($class, @_);
>>   }
>> +sub reassign_volume {
>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, 
>> $target_vmid);
>> +}
>> +
>> +sub volume_has_feature {
>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>> $running, $opts) = @_;
>> +
>> +    shift @_;
>> +    return $class->file_can_reassign_volume(@_) if $feature eq 
>> 'reassign';
>> +    return $class->SUPER::volume_has_feature(@_);
>> +}
>> +
>>   1;
>> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
>> index d2d8184..c1ea41d 100644
>> --- a/PVE/Storage/Plugin.pm
>> +++ b/PVE/Storage/Plugin.pm
>> @@ -927,6 +927,7 @@ sub storage_can_replicate {
>>       return 0;
>>   }
>> +
> 
> Nit: accidental newline
> 
>>   sub volume_has_feature {
>>       my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>> @@ -1456,4 +1457,63 @@ sub volume_import_formats {
>>       return ();
>>   }
>> +sub reassign_volume {
>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>> +    die "not implemented in storage plugin '$class'\n";
>> +}
>> +
>> +# general reassignment method for file/directory based storages
>> +sub file_reassign_volume {
>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>> +
>> +    my $base;
>> +    my $base_vmid;
>> +    my $format;
>> +    my $source_vmid;
>> +
>> +    (undef, $volname, $source_vmid, $base, $base_vmid, undef, $format) = $class->parse_volname($volname);
>> +
>> +    # parse_volname strips the directory from volname
>> +    my $source_volname = "${source_vmid}/${volname}";
>> +
>> +    if ($base) {
>> +    $base = "${base_vmid}/${base}/";
>> +    } else {
>> +    $base = '';
>> +    }
>> +
>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
>> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
>> +
>> +    return $class->rename_volume($scfg, $storeid, $source_volname, $target_volname, $target_vmid, $base);
>> +    });
>> +}
>> +
>> +sub rename_volume {
>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
>> +
>> +    my $basedir = $class->get_subdir($scfg, 'images');
>> +    my $imagedir = "${basedir}/${vmid}";
>> +
>> +    mkpath $imagedir;
>> +
>> +    my $old_path = "${basedir}/${source_volname}";
>> +    my $new_path = "${imagedir}/${target_volname}";
>> +
>> +    rename($old_path, $new_path) ||
>> +    die "rename '$old_path' to '$new_path' failed - $!\n";
>> +
>> +    return "${storeid}:${base}${vmid}/${target_volname}";
>> +}
>> +
>> +sub file_can_reassign_volume {
>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>> +
>> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
>> +    $class->parse_volname($volname);
>> +
>> +    return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
>> +    return undef;
>> +}
>> +
>>   1;
>> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
>> index fab6d57..86a9a0d 100644
>> --- a/PVE/Storage/RBDPlugin.pm
>> +++ b/PVE/Storage/RBDPlugin.pm
>> @@ -712,6 +712,7 @@ sub volume_has_feature {
>>       template => { current => 1},
>>       copy => { base => 1, current => 1, snap => 1},
>>       sparseinit => { base => 1, current => 1},
>> +    reassign => {current => 1},
>>       };
>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>> @@ -728,4 +729,34 @@ sub volume_has_feature {
>>       return undef;
>>   }
>> +sub reassign_volume {
>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>> +
>> +    my $base;;
>> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
>> +
>> +    if ($base) {
>> +    $base = $base . '/';
>> +    } else {
>> +    $base = '';
>> +    }
>> +
>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
>> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
>> +    return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
>> +    });
>> +}
>> +
>> +sub rename_volume {
>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
>> +
>> +    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
>> +
>> +    run_rbd_command(
>> +    $cmd,
>> +    errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
>> +    );
>> +    return "${storeid}:${base}${target_volname}";
>> +}
>> +
>>   1;
>> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
>> index fe65ae4..c63a94b 100644
>> --- a/PVE/Storage/ZFSPoolPlugin.pm
>> +++ b/PVE/Storage/ZFSPoolPlugin.pm
>> @@ -686,6 +686,7 @@ sub volume_has_feature {
>>       copy => { base => 1, current => 1},
>>       sparseinit => { base => 1, current => 1},
>>       replicate => { base => 1, current => 1},
>> +    reassign => {current => 1},
>>       };
>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>> @@ -788,4 +789,29 @@ sub volume_import_formats {
>>       return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
>>   }
>> +sub reassign_volume {
>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>> +
>> +    my $base;;
>> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
>> +
>> +    if ($base) {
>> +    $base = $base . '/';
>> +    } else {
>> +    $base = '';
>> +    }
>> +
>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
>> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
>> +    return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
>> +    });
>> +}
>> +
>> +sub rename_volume {
>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
>> +
>> +    $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
>> +    return "${storeid}:${base}${target_volname}";
>> +}
>> +
>>   1;
>>
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint Aaron Lauterer
@ 2021-04-15 11:52   ` Fabian Ebner
  2021-04-19  9:26     ` Aaron Lauterer
  2021-04-18 15:24   ` Thomas Lamprecht
  1 sibling, 1 reply; 18+ messages in thread
From: Fabian Ebner @ 2021-04-15 11:52 UTC (permalink / raw)
  To: pve-devel, a.lauterer

One nit and one comment inline.

Am 02.04.21 um 12:19 schrieb Aaron Lauterer:
> The goal of this new API endpoint is to provide an easy way to move a
> disk between VMs as this was only possible with manual intervention
> until now. Either by renaming the VM disk or by manually adding the
> disks volid to the config of the other VM.
> 
> The latter can easily cause unexpected behavior, such as disks attached
> to VM B being deleted if they used to belong to VM A. This happens
> because PVE assumes that the VMID in the volname always matches the VM
> the disk is attached to and would thus remove any disk with VMID A
> when VM A was deleted.
> 
> The term `reassign` was chosen as it is not yet used
> for VM disks.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v5 -> v6:
> * guard Replication snapshot cleanup
>      additionally to the eval, that code is now only run if the volume is
>      on a storage with the 'replicate' feature
> * add permission check for target vmid
> * changed regex to match unused keys better
> 
> thx @Fabian for these suggestions/catching problems
> 
> v4 -> v5:
> * implemented suggestions from Fabian [1]
>      * logging before action
>      * improving description
>      * improving error messages
>      * using Replication::prepare to remove replication snapshots
>      * check if disk is physical disk using /dev/...
> 
> v3 -> v4: nothing
> 
> v2 -> v3:
> * reordered the locking as discussed with fabian [0] to
> run checks
>      fork worker
> 	lock source config
> 	    lock target config
> 		run checks
> 		...
> 
> * added more checks
>      * will not reassign to or from templates
>      * will not reassign if VM has snapshots present
> * cleanup if disk used to be replicated
> * made task log slightly more verbose
> * integrated general recommendations regarding code
> * renamed `disk` to `drive_key`
> * prepended some vars with `source_` for easier distinction
> 
> v1 -> v2: print config key and volid info at the end of the job so it
> shows up on the CLI and task log
> 
> rfc -> v1:
> * add support to reassign unused disks
> * add support to provide a config digest for the target vm
> * add additional check if disk key is present in config
> * reorder checks a bit
> 
> In order to support unused disk I had to extend
> PVE::QemuServer::Drive::valid_drive_names for the API parameter
> validation.
> 
> Checks are ordered so that cheap tests are run at the first chance to
> fail early.
> 
> The check if both VMs are present on the node is a bit redundant because
> locking the config files will fail if the VM is not present. But with
> the additional check we can provide a useful error message to the user
> instead of a "Configuration file xyz does not exist" error.
> 
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
> [1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
> 
>   PVE/API2/Qemu.pm        | 172 ++++++++++++++++++++++++++++++++++++++++
>   PVE/QemuServer/Drive.pm |   4 +
>   2 files changed, 176 insertions(+)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index e95ab13..9642b9b 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>   use PVE::VZDump::Plugin;
>   use PVE::DataCenterConfig;
>   use PVE::SSHInfo;
> +use PVE::Replication;
>   
>   BEGIN {
>       if (!$ENV{PVE_GENERATING_DOCS}) {
> @@ -4377,4 +4378,175 @@ __PACKAGE__->register_method({
>   	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>       }});
>   
> +__PACKAGE__->register_method({
> +    name => 'reassign_vm_disk',
> +    path => '{vmid}/reassign_disk',
> +    method => 'POST',
> +    protected => 1,
> +    proxyto => 'node',
> +    description => "Reassign a disk to another VM",
> +    permissions => {
> +	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
> +	check => [ 'and',
> +		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> +		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
> +	    ],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    drive_name => {
> +	        type => 'string',
> +		description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> +	    },
> +	    digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	    target_digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	},
> +    },
> +    returns => {
> +	type => 'string',
> +	description => "the task ID.",
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $authuser = $rpcenv->get_user();
> +
> +	my $node = extract_param($param, 'node');
> +	my $source_vmid = extract_param($param, 'vmid');
> +	my $target_vmid = extract_param($param, 'target_vmid');
> +	my $source_digest = extract_param($param, 'digest');
> +	my $target_digest = extract_param($param, 'target_digest');
> +	my $drive_name = extract_param($param, 'drive_name');
> +
> +	my $storecfg = PVE::Storage::config();
> +	my $vmlist;
> +	my $drive;
> +	my $source_volid;
Nit: $vmlist and $drive are only ever used within the 
load_and_check_configs closure, so they can be declared there

> +
> +	$rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
> +	    if $authuser ne 'root@pam';
> +
> +	die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
> +	    if $source_vmid eq $target_vmid;
> +
> +	my $load_and_check_configs = sub {
> +	    $vmlist = PVE::Cluster::get_vmlist()->{ids};
> +	    die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
> +		if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
> +
> +	    my $source_conf = PVE::QemuConfig->load_config($source_vmid);
> +	    PVE::QemuConfig->check_lock($source_conf);
> +	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> +	    PVE::QemuConfig->check_lock($target_conf);
> +
> +	    die "Can't reassign disks from or to templates\n"
> +		if ($source_conf->{template} || $target_conf->{template});
> +
> +	    if ($source_digest) {
> +		eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "VM ${source_vmid}: ${err}";
> +		}
> +	    }
> +
> +	    if ($target_digest) {
> +		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "VM ${target_vmid}: ${err}";
> +		}
> +	    }
> +
> +	    die "Disk '${drive_name}' does not exist\n"
> +		if !defined($source_conf->{$drive_name});
> +
> +	    $drive = PVE::QemuServer::parse_drive($drive_name, $source_conf->{$drive_name});
> +	    $source_volid = $drive->{file};
> +	    die "disk '${drive_name}' has no associated volume\n" if !$source_volid;
> +	    die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
> +	    die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
> +	    die "Can't reassign disk used by a snapshot\n"
> +		if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_name, $source_volid);
> +
> +	    die "Storage does not support the reassignment of this disk\n"
> +		if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
> +
> +	    die "Cannot reassign disk while the source VM is running\n"
> +		if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/^unused\d+$/;
> +
> +	    return ($source_conf, $target_conf);
> +	};
> +
> +	my $logfunc = sub {
> +	    my ($msg) = @_;
> +	    print STDERR "$msg\n";
> +	};
> +
> +	my $reassign_func = sub {
> +	    return PVE::QemuConfig->lock_config($source_vmid, sub {
> +		return PVE::QemuConfig->lock_config($target_vmid, sub {
> +		    my ($source_conf, $target_conf) = &$load_and_check_configs();
> +
> +		    PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_name} --target_vmid $target_vmid");
> +
> +		    my $new_volid = PVE::Storage::reassign_volume(
> +			$storecfg,
> +			$source_volid,
> +			$target_vmid,
> +		    );
> +
> +		    delete $source_conf->{$drive_name};
> +		    print "removing disk '${drive_name}' from VM '${source_vmid}'\n";
> +		    PVE::QemuConfig->write_config($source_vmid, $source_conf);
> +
> +		    # remove possible replication snapshots
> +		    if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $source_volid)) {
> +			eval {
> +			    PVE::Replication::prepare(
> +				$storecfg,
> +				[$new_volid],
> +				undef,
> +				undef,

To actually remove the replication snapshots, you need to use 1 for 
last_sync. undef defaults to 0 and does not remove the replication 
snapshots. 0 happens when a VM was stolen, but replication snapshots for 
stolen VMs are still valid!

The good news is that patch 4 isn't needed ;)
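For readers following the thread, the special-casing described above can be modeled roughly as follows. This is a simplified illustration in Python, not the actual Perl code in PVE::Replication::prepare; real last_sync values are timestamps and the full handling is more involved.

```python
def removes_replication_snapshots(last_sync):
    """Simplified model of the last_sync special cases described above."""
    if last_sync is None or last_sync == 0:
        # undef defaults to 0: the "VM was stolen" case, where
        # replication snapshots are still valid and must be kept
        return False
    if last_sync == 1:
        # 1 requests a real cleanup of the replication snapshots,
        # which is what the reassign code wants here
        return True
    # any other value is a real sync timestamp; that handling is
    # deliberately not modeled in this sketch
    raise NotImplementedError("timestamp handling not modeled")
```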

> +				undef,
> +				$logfunc,
> +			    )
> +			};
> +			if (my $err = $@) {
> +			    print "Failed to remove replication snapshots on reassigned disk. Manual cleanup could be necessary.\n";
> +			}
> +		    }
> +
> +		    my $key;
> +		    eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
> +		    if (my $err = $@) {
> +			print "failed to add reassigned disk '${new_volid}' to VM '${target_vmid}'. Try to free an 'unused' disk slot and run 'qm rescan ${target_vmid}'.\n";
> +			return 0;
> +		    }
> +
> +		    print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
> +		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
> +		});
> +	    });
> +	};
> +
> +	&$load_and_check_configs();
> +
> +	return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
> +    }});
> +
>   1;
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index 01ea8d7..e938b9b 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -392,6 +392,10 @@ sub valid_drive_names {
>               'efidisk0');
>   }
>   
> +sub valid_drive_names_with_unused {
> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
> +}
> +
>   sub is_valid_drivename {
>       my $dev = shift;
>   
> 





* Re: [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature
  2021-04-15 11:31     ` Fabian Ebner
@ 2021-04-15 11:53       ` Aaron Lauterer
  2021-04-15 12:09         ` Fabian Ebner
  2021-04-15 12:20         ` Thomas Lamprecht
  0 siblings, 2 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-04-15 11:53 UTC (permalink / raw)
  To: Fabian Ebner, pve-devel



On 4/15/21 1:31 PM, Fabian Ebner wrote:
> Am 15.04.21 um 13:07 schrieb Fabian Ebner:
>> We often have a generic implementation in Plugin.pm for filesystem-based storages (maybe erroring out if there is no $scfg->{path}). Is there a reason not to use such an approach, i.e. making file_reassign_volume the default reassign_volume implementation in Plugin.pm? This would avoid 

Just adding the functionality on the top level Plugin.pm could have potentially ugly side effects for 3rd-party plugins that do not yet handle that call themselves. So to be on the safe side, by default we rather fail right there (this was discussed a few versions ago).
IMHO it would be nice though to change the structure of the storage plugins a bit. E.g. instead of assuming dir/file storages for Plugin.pm, having a basic abstraction specifically for any directory/file based storage which handles all the common tasks and further down the hierarchy the specific implementations regarding mounting and such. But that would mean a hard break of the current approach, especially for 3rd party plugins.
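For illustration, the restructuring idea might look like the following sketch (Python used for brevity; the class and method names are hypothetical and no such hierarchy exists in pve-storage today):

```python
class Plugin:
    """Top-level base: unsupported by default, so third-party plugins
    that predate the feature fail loudly instead of misbehaving."""
    def reassign_volume(self, storeid, volname, target_vmid):
        raise NotImplementedError(
            f"not implemented in storage plugin '{type(self).__name__}'")

class DirBasePlugin(Plugin):
    """Hypothetical shared base for directory/file based storages,
    holding the common rename-on-the-filesystem implementation."""
    def reassign_volume(self, storeid, volname, target_vmid):
        # placeholder for find_free_diskname() + rename under a storage lock
        return f"{storeid}:{target_vmid}/{volname}"

class NFSPlugin(DirBasePlugin):
    pass  # mounting specifics would live here, reassign is inherited

class CIFSPlugin(DirBasePlugin):
    pass
```

The point of the sketch is that the common file-based code would live in one intermediate base class instead of being re-dispatched from every directory plugin, at the cost of a hard break for existing 3rd-party plugins.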


> 
> I mean together with adapting the default implementation for volume_has_feature too.
> 
> One more thing I noticed below:
> 
>> some fragmentation/duplication. Of course, the plugins that cannot use the generic implementation need to provide their own or signal that they do not have the feature.
>>
>> More comments inline:
>>
>> Am 02.04.21 um 12:19 schrieb Aaron Lauterer:
>>> Functionality has been added for the following storage types:
>>>
>>> * dir based ones
>>>      * directory
>>>      * NFS
>>>      * CIFS
>>>      * gluster
>>
>> What about CephFS? Can it use the generic implementation or not? If it can and you choose the approach I suggested at the beginning, you don't even need to add code to it ;)

We do not store any guest disks on CephFS. If people do that (not a good idea as it can be unresponsive for some time if an MDS takes over and needs to do some replay), they usually configure a directory storage on top of /mnt/pve/cephfs/.

>>
>>> * ZFS
>>> * (thin) LVM
>>> * Ceph
>>>
>>> A new feature `reassign` has been introduced to mark which storage
>>> plugin supports the feature.
>>>
>>> Version API and AGE have been bumped.
>>>
>>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>>> ---
>>> v5 -> v6:
>>> * refactor dir based feature check to reduce code repetition by
>>>    introducing the file_can_reassign_volume sub that does the actual check
>>>
>>> v4 -> v5:
>>> * rebased on master
>>> * bumped api ver and api age
>>> * rephrased "not implemented" message as suggested [0].
>>>
>>> v3 -> v4:
>>> * revert intermediate storage plugin for directory based plugins
>>> * add a `die "not supported"` method in Plugin.pm
>>> * dir based plugins now call the file_reassign_volume method in
>>>    Plugin.pm as the generic file/directory based method
>>> * restored old `volume_has_feature` method in Plugin.pm and override it
>>>    in directory based plugins to check against the new `reassign` feature
>>>    (not too happy about the repetition for each plugin)
>>>
>>> v2 -> v3:
>>> * added feature flags instead of dummy "not implemented" methods in
>>>    plugins which do not support it as that would break compatibility with
>>>    3rd party plugins.
>>>    Had to make $features available outside the `has_features` method in
>>>    Plugins.pm in order to be able to individually add features in the
>>>    `BaseDirPlugin.pm`.
>>> * added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
>>>    this is explained in the commit message
>>> * moved the actual renaming from the `reassign_volume` to a dedicated
>>>    `rename_volume` method to make this functionality available to other
>>>    possible uses in the future.
>>> * added support for linked clones ($base)
>>>
>>>
>>> rfc -> v1 -> v2: nothing changed
>>>
>>> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
>>>
>>>
>>>   PVE/Storage.pm                 | 15 +++++++--
>>>   PVE/Storage/CIFSPlugin.pm      | 13 ++++++++
>>>   PVE/Storage/DirPlugin.pm       | 13 ++++++++
>>>   PVE/Storage/GlusterfsPlugin.pm | 13 ++++++++
>>>   PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++
>>>   PVE/Storage/LvmThinPlugin.pm   |  1 +
>>>   PVE/Storage/NFSPlugin.pm       | 13 ++++++++
>>>   PVE/Storage/Plugin.pm          | 60 ++++++++++++++++++++++++++++++++++
>>>   PVE/Storage/RBDPlugin.pm       | 31 ++++++++++++++++++
>>>   PVE/Storage/ZFSPoolPlugin.pm   | 26 +++++++++++++++
>>>   10 files changed, 207 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>>> index 122c3e9..b950c82 100755
>>> --- a/PVE/Storage.pm
>>> +++ b/PVE/Storage.pm
>>> @@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
>>>   use PVE::Storage::PBSPlugin;
>>>   # Storage API version. Increment it on changes in storage API interface.
>>> -use constant APIVER => 8;
>>> +use constant APIVER => 9;
>>>   # Age is the number of versions we're backward compatible with.
>>>   # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>>>   # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
>>> -use constant APIAGE => 7;
>>> +use constant APIAGE => 8;
>>>   # load standard plugins
>>>   PVE::Storage::DirPlugin->register();
>>> @@ -349,6 +349,7 @@ sub volume_snapshot_needs_fsfreeze {
>>>   #            snapshot - taking a snapshot is possible
>>>   #            sparseinit - volume is sparsely initialized
>>>   #            template - conversion to base image is possible
>>> +#            reassign - reassigning disks to other guest is possible
>>>   # $snap - check if the feature is supported for a given snapshot
>>>   # $running - if the guest owning the volume is running
>>>   # $opts - hash with further options:
>>> @@ -1843,6 +1844,16 @@ sub complete_volume {
>>>       return $res;
>>>   }
>>> +sub reassign_volume {
>>> +    my ($cfg, $volid, $target_vmid) = @_;
>>> +
>>> +    my ($storeid, $volname) = parse_volume_id($volid);
>>> +    my $scfg = storage_config($cfg, $storeid);
>>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>>> +
> 
> Check if the storage is enabled and activate it?
> And is it necessary to activate the volume for certain storages, e.g. LVM (might not be, but did you check)?

Good point about activating the storage. I don't think I did check on that. I'll do some tests.
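One possible shape of that flow, sketched in Python for illustration (all helper names are hypothetical stand-ins for the Perl equivalents; the real dispatcher lives in PVE/Storage.pm):

```python
def parse_volume_id(volid):
    # stand-in for PVE::Storage::parse_volume_id
    storeid, volname = volid.split(":", 1)
    return storeid, volname

def reassign_volume(storage_cfg, volid, target_vmid):
    """Check that the storage is enabled and 'activate' it before
    dispatching to the plugin, as suggested in the review."""
    storeid, volname = parse_volume_id(volid)
    scfg = storage_cfg[storeid]  # storage_config() dies on unknown storage
    if scfg.get("disable"):
        raise RuntimeError(f"storage '{storeid}' is disabled")
    scfg["active"] = True  # stand-in for activate_storage()
    # plugin dispatch would happen here; return a dummy new volid
    return f"{storeid}:{target_vmid}/{volname.split('/')[-1]}"

cfg = {"local-lvm": {"type": "lvmthin"}}
print(reassign_volume(cfg, "local-lvm:vm-100-disk-0", 101))
# -> local-lvm:101/vm-100-disk-0
```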

> 
>>> +    return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
>>> +}
>>> +
>>>   # Various io-heavy operations require io/bandwidth limits which can be
>>>   # configured on multiple levels: The global defaults in datacenter.cfg, and
>>>   # per-storage overrides. When we want to do a restore from storage A to storage
>>> diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
>>> index be06cc7..3affa5d 100644
>>> --- a/PVE/Storage/CIFSPlugin.pm
>>> +++ b/PVE/Storage/CIFSPlugin.pm
>>> @@ -293,4 +293,17 @@ sub update_volume_notes {
>>>       PVE::Storage::DirPlugin::update_volume_notes($class, @_);
>>>   }
>>> +sub reassign_volume {
>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
>>> +}
>>> +
>>> +sub volume_has_feature {
>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>>> +
>>> +    shift @_;
>>> +    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
>>> +    return $class->SUPER::volume_has_feature(@_);
>>> +}
>>> +
>>>   1;
>>> diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
>>> index 2267f11..cfdb575 100644
>>> --- a/PVE/Storage/DirPlugin.pm
>>> +++ b/PVE/Storage/DirPlugin.pm
>>> @@ -156,4 +156,17 @@ sub check_config {
>>>       return $opts;
>>>   }
>>> +sub reassign_volume {
>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
>>> +}
>>> +
>>> +sub volume_has_feature {
>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>>> +
>>> +    shift @_;
>>> +    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
>>> +    return $class->SUPER::volume_has_feature(@_);
>>> +}
>>> +
>>>   1;
>>> diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
>>> index 2dd414d..8132a3e 100644
>>> --- a/PVE/Storage/GlusterfsPlugin.pm
>>> +++ b/PVE/Storage/GlusterfsPlugin.pm
>>> @@ -348,4 +348,17 @@ sub check_connection {
>>>       return defined($server) ? 1 : 0;
>>>   }
>>> +sub reassign_volume {
>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
>>> +}
>>> +
>>> +sub volume_has_feature {
>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>>> +
>>> +    shift @_;
>>> +    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
>>> +    return $class->SUPER::volume_has_feature(@_);
>>> +}
>>> +
>>>   1;
>>> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
>>> index df49b76..0d9a691 100644
>>> --- a/PVE/Storage/LVMPlugin.pm
>>> +++ b/PVE/Storage/LVMPlugin.pm
>>> @@ -339,6 +339,13 @@ sub lvcreate {
>>>       run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
>>>   }
>>> +sub lvrename {
>>> +    my ($vg, $oldname, $newname) = @_;
>>> +
>>> +    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
>>> +    run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
>>> +}
>>> +
>>>   sub alloc_image {
>>>       my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>>> @@ -584,6 +591,7 @@ sub volume_has_feature {
>>>       my $features = {
>>>       copy => { base => 1, current => 1},
>>> +    reassign => {current => 1},
>>>       };
>>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>>> @@ -684,4 +692,20 @@ sub volume_import_write {
>>>       input => '<&'.fileno($input_fh));
>>>   }
>>> +sub reassign_volume {
>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>> +
>>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
>>> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
>>> +    return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid);
>>> +    });
>>> +}
>>> +
>>> +sub rename_volume {
>>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid) = @_;
>>
>> Nit: The signature is different than the corresponding one in Plugin.pm. I know you're not using $base here, but maybe add it for consistency.
>>
>> That said, why not assemble the new volume ID in the reassign function itself and have the rename function only be concerned with renaming? Then you could drop the $base from the signature altogether.

Yeah, Dominic also spotted a possible change/improvement in how to handle the base. I'll take a closer look :)
>>
>>> +
>>> +    lvrename($scfg->{vgname}, $source_volname, $target_volname);
>>> +    return "${storeid}:${target_volname}";
>>> +}
>>> +
>>>   1;
>>> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
>>> index c9e127c..895af8b 100644
>>> --- a/PVE/Storage/LvmThinPlugin.pm
>>> +++ b/PVE/Storage/LvmThinPlugin.pm
>>> @@ -355,6 +355,7 @@ sub volume_has_feature {
>>>       template => { current => 1},
>>>       copy => { base => 1, current => 1, snap => 1},
>>>       sparseinit => { base => 1, current => 1},
>>> +    reassign => {current => 1},
>>>       };
>>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>>> diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
>>> index 39bf15a..7217ca2 100644
>>> --- a/PVE/Storage/NFSPlugin.pm
>>> +++ b/PVE/Storage/NFSPlugin.pm
>>> @@ -197,4 +197,17 @@ sub update_volume_notes {
>>>       PVE::Storage::DirPlugin::update_volume_notes($class, @_);
>>>   }
>>> +sub reassign_volume {
>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
>>> +}
>>> +
>>> +sub volume_has_feature {
>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>>> +
>>> +    shift @_;
>>> +    return $class->file_can_reassign_volume(@_) if $feature eq 'reassign';
>>> +    return $class->SUPER::volume_has_feature(@_);
>>> +}
>>> +
>>>   1;
>>> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
>>> index d2d8184..c1ea41d 100644
>>> --- a/PVE/Storage/Plugin.pm
>>> +++ b/PVE/Storage/Plugin.pm
>>> @@ -927,6 +927,7 @@ sub storage_can_replicate {
>>>       return 0;
>>>   }
>>> +
>>
>> Nit: accidental newline
>>
>>>   sub volume_has_feature {
>>>       my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>>> @@ -1456,4 +1457,63 @@ sub volume_import_formats {
>>>       return ();
>>>   }
>>> +sub reassign_volume {
>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>> +    die "not implemented in storage plugin '$class'\n";
>>> +}
>>> +
>>> +# general reassignment method for file/directory based storages
>>> +sub file_reassign_volume {
>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>> +
>>> +    my $base;
>>> +    my $base_vmid;
>>> +    my $format;
>>> +    my $source_vmid;
>>> +
>>> +    (undef, $volname, $source_vmid, $base, $base_vmid, undef, $format) = $class->parse_volname($volname);
>>> +
>>> +    # parse_volname strips the directory from volname
>>> +    my $source_volname = "${source_vmid}/${volname}";
>>> +
>>> +    if ($base) {
>>> +    $base = "${base_vmid}/${base}/";
>>> +    } else {
>>> +    $base = '';
>>> +    }
>>> +
>>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
>>> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
>>> +
>>> +    return $class->rename_volume($scfg, $storeid, $source_volname, $target_volname, $target_vmid, $base);
>>> +    });
>>> +}
>>> +
>>> +sub rename_volume {
>>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
>>> +
>>> +    my $basedir = $class->get_subdir($scfg, 'images');
>>> +    my $imagedir = "${basedir}/${vmid}";
>>> +
>>> +    mkpath $imagedir;
>>> +
>>> +    my $old_path = "${basedir}/${source_volname}";
>>> +    my $new_path = "${imagedir}/${target_volname}";
>>> +
>>> +    rename($old_path, $new_path) ||
>>> +    die "rename '$old_path' to '$new_path' failed - $!\n";
>>> +
>>> +    return "${storeid}:${base}${vmid}/${target_volname}";
>>> +}
>>> +
>>> +sub file_can_reassign_volume {
>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>>> +
>>> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
>>> +    $class->parse_volname($volname);
>>> +
>>> +    return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
>>> +    return undef;
>>> +}
>>> +
>>>   1;
>>> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
>>> index fab6d57..86a9a0d 100644
>>> --- a/PVE/Storage/RBDPlugin.pm
>>> +++ b/PVE/Storage/RBDPlugin.pm
>>> @@ -712,6 +712,7 @@ sub volume_has_feature {
>>>       template => { current => 1},
>>>       copy => { base => 1, current => 1, snap => 1},
>>>       sparseinit => { base => 1, current => 1},
>>> +    reassign => {current => 1},
>>>       };
>>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>>> @@ -728,4 +729,34 @@ sub volume_has_feature {
>>>       return undef;
>>>   }
>>> +sub reassign_volume {
>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>> +
>>> +    my $base;
>>> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
>>> +
>>> +    if ($base) {
>>> +    $base = $base . '/';
>>> +    } else {
>>> +    $base = '';
>>> +    }
>>> +
>>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
>>> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
>>> +    return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
>>> +    });
>>> +}
>>> +
>>> +sub rename_volume {
>>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
>>> +
>>> +    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
>>> +
>>> +    run_rbd_command(
>>> +    $cmd,
>>> +    errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
>>> +    );
>>> +    return "${storeid}:${base}${target_volname}";
>>> +}
>>> +
>>>   1;
>>> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
>>> index fe65ae4..c63a94b 100644
>>> --- a/PVE/Storage/ZFSPoolPlugin.pm
>>> +++ b/PVE/Storage/ZFSPoolPlugin.pm
>>> @@ -686,6 +686,7 @@ sub volume_has_feature {
>>>       copy => { base => 1, current => 1},
>>>       sparseinit => { base => 1, current => 1},
>>>       replicate => { base => 1, current => 1},
>>> +    reassign => {current => 1},
>>>       };
>>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>>> @@ -788,4 +789,29 @@ sub volume_import_formats {
>>>       return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
>>>   }
>>> +sub reassign_volume {
>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>> +
>>> +    my $base;
>>> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
>>> +
>>> +    if ($base) {
>>> +    $base = $base . '/';
>>> +    } else {
>>> +    $base = '';
>>> +    }
>>> +
>>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
>>> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
>>> +    return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
>>> +    });
>>> +}
>>> +
>>> +sub rename_volume {
>>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
>>> +
>>> +    $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
>>> +    return "${storeid}:${base}${target_volname}";
>>> +}
>>> +
>>>   1;
>>>
>>
>>
>> _______________________________________________
>> pve-devel mailing list
>> pve-devel@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>>
>>




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature
  2021-04-15 11:53       ` Aaron Lauterer
@ 2021-04-15 12:09         ` Fabian Ebner
  2021-04-15 12:21           ` Thomas Lamprecht
  2021-04-15 12:20         ` Thomas Lamprecht
  1 sibling, 1 reply; 18+ messages in thread
From: Fabian Ebner @ 2021-04-15 12:09 UTC (permalink / raw)
  To: Aaron Lauterer, pve-devel; +Cc: Thomas Lamprecht

Am 15.04.21 um 13:53 schrieb Aaron Lauterer:
> 
> 
> On 4/15/21 1:31 PM, Fabian Ebner wrote:
>> Am 15.04.21 um 13:07 schrieb Fabian Ebner:
>>> We often have a generic implementation in Plugin.pm for 
>>> filesystem-based storages (maybe erroring out if there is no 
>>> $scfg->{path}). Is there a reason not to use such an approach, i.e. 
>>> making file_reassign_volume the default reassign_volume 
>>> implementation in Plugin.pm? This would avoid 
> 
> Just adding the functionality on the top level Plugin.pm could have some 
> potential ugly side effects for 3rd party plugins that do not yet handle 
> that call themselves. So to be on the safe side, by default we rather 
> fail right there (this was discussed a few versions ago).

Sorry, I didn't read up on all the history. But why not fail only for 
those without a path, like we do for other stuff:

     # this only works for file based storage types
     die "storage definition has no path\n" if !$scfg->{path};
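
For illustration, such a path-guarded default in Plugin.pm might look
roughly like the following. This is only a sketch of the idea, reusing the
patch's file_reassign_volume as the fallback; it is not a tested
implementation:

    # hypothetical default in Plugin.pm: fail early for non-path-based
    # storages, otherwise fall through to the generic file/dir variant
    sub reassign_volume {
	my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;

	# this only works for file based storage types
	die "storage definition has no path\n" if !$scfg->{path};

	return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
    }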

> IMHO it would be nice though to change the structure of the storage 
> plugins a bit. E.g. instead of assuming dir/file storages for Plugin.pm, 
> having a basic abstraction specifically for any directory/file based 
> storage which handles all the common tasks and further down the 
> hierarchy the specific implementations regarding mounting and such. But 
> that would mean a hard break of the current approach, especially for 3rd 
> party plugins.
> 

I was under the impression that this is one of the functions of 
APIVERSION. External plugins can look at the changes since the last 
version they supported, decide if they're happy with the default 
implementation and implement their own otherwise.

@Thomas: Is this wrong?

I agree that a generic dir base plugin would be nicer, but for now 
Plugin.pm provides many of the generic dir-based implementations.

> 
>>
>> I mean together with adapting the default implementation for 
>> volume_has_feature too.
>>
>> One more thing I noticed below:
>>
>>> some fragmentation/duplication. Of course, the plugins that cannot 
>>> use the generic implementation need to provide their own or signal 
>>> that they do not have the feature.
>>>
>>> More comments inline:
>>>
>>> Am 02.04.21 um 12:19 schrieb Aaron Lauterer:
>>>> Functionality has been added for the following storage types:
>>>>
>>>> * dir based ones
>>>>      * directory
>>>>      * NFS
>>>>      * CIFS
>>>>      * gluster
>>>
>>> What about CephFS? Can it use the generic implementation or not? If 
>>> it can and you choose the approach I suggested at the beginning, you 
>>> don't even need to add code to it ;)
> 
> We do not store any guest disks on CephFS. If people do that (not a good 
> idea as it can be unresponsive for some time if an MDS takes over and 
> needs to do some replay), they usually configure a directory storage on 
> top of /mnt/pve/cephfs/.
> 

Ok, right.

>>>
>>>> * ZFS
>>>> * (thin) LVM
>>>> * Ceph
>>>>
>>>> A new feature `reassign` has been introduced to mark which storage
>>>> plugin supports the feature.
>>>>
>>>> Version API and AGE have been bumped.
>>>>
>>>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>>>> ---
>>>> v5 -> v6:
>>>> * refactor dir based feature check to reduce code repetition by
>>>>    introducing the file_can_reassign_volume sub that does the actual 
>>>> check
>>>>
>>>> v4 -> v5:
>>>> * rebased on master
>>>> * bumped api ver and api age
>>>> * rephrased "not implemented" message as suggested [0].
>>>>
>>>> v3 -> v4:
>>>> * revert intermediate storage plugin for directory based plugins
>>>> * add a `die "not supported"` method in Plugin.pm
>>>> * dir based plugins now call the file_reassign_volume method in
>>>>    Plugin.pm as the generic file/directory based method
>>>> * restored old `volume_has_feature` method in Plugin.pm and override it
>>>>    in directory based plugins to check against the new `reassign` 
>>>> feature
>>>>    (not too happy about the repetition for each plugin)
>>>>
>>>> v2 -> v3:
>>>> * added feature flags instead of dummy "not implemented" methods in
>>>>    plugins which do not support it as that would break compatibility 
>>>> with
>>>>    3rd party plugins.
>>>>    Had to make $features available outside the `has_features` method in
>>>>    Plugins.pm in order to be able to individually add features in the
>>>>    `BaseDirPlugin.pm`.
>>>> * added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
>>>>    this is explained in the commit message
>>>> * moved the actual renaming from the `reassign_volume` to a dedicated
>>>>    `rename_volume` method to make this functionality available to other
>>>>    possible uses in the future.
>>>> * added support for linked clones ($base)
>>>>
>>>>
>>>> rfc -> v1 -> v2: nothing changed
>>>>
>>>> [0] 
>>>> https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
>>>>
>>>>
>>>>   PVE/Storage.pm                 | 15 +++++++--
>>>>   PVE/Storage/CIFSPlugin.pm      | 13 ++++++++
>>>>   PVE/Storage/DirPlugin.pm       | 13 ++++++++
>>>>   PVE/Storage/GlusterfsPlugin.pm | 13 ++++++++
>>>>   PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++
>>>>   PVE/Storage/LvmThinPlugin.pm   |  1 +
>>>>   PVE/Storage/NFSPlugin.pm       | 13 ++++++++
>>>>   PVE/Storage/Plugin.pm          | 60 
>>>> ++++++++++++++++++++++++++++++++++
>>>>   PVE/Storage/RBDPlugin.pm       | 31 ++++++++++++++++++
>>>>   PVE/Storage/ZFSPoolPlugin.pm   | 26 +++++++++++++++
>>>>   10 files changed, 207 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>>>> index 122c3e9..b950c82 100755
>>>> --- a/PVE/Storage.pm
>>>> +++ b/PVE/Storage.pm
>>>> @@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
>>>>   use PVE::Storage::PBSPlugin;
>>>>   # Storage API version. Increment it on changes in storage API 
>>>> interface.
>>>> -use constant APIVER => 8;
>>>> +use constant APIVER => 9;
>>>>   # Age is the number of versions we're backward compatible with.
>>>>   # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>>>>   # see 
>>>> https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html 
>>>>
>>>> -use constant APIAGE => 7;
>>>> +use constant APIAGE => 8;
>>>>   # load standard plugins
>>>>   PVE::Storage::DirPlugin->register();
>>>> @@ -349,6 +349,7 @@ sub volume_snapshot_needs_fsfreeze {
>>>>   #            snapshot - taking a snapshot is possible
>>>>   #            sparseinit - volume is sparsely initialized
>>>>   #            template - conversion to base image is possible
>>>> +#            reassign - reassigning disks to another guest is possible
>>>>   # $snap - check if the feature is supported for a given snapshot
>>>>   # $running - if the guest owning the volume is running
>>>>   # $opts - hash with further options:
>>>> @@ -1843,6 +1844,16 @@ sub complete_volume {
>>>>       return $res;
>>>>   }
>>>> +sub reassign_volume {
>>>> +    my ($cfg, $volid, $target_vmid) = @_;
>>>> +
>>>> +    my ($storeid, $volname) = parse_volume_id($volid);
>>>> +    my $scfg = storage_config($cfg, $storeid);
>>>> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
>>>> +
>>
>> Check if the storage is enabled and activate it?
>> And is it necessary to activate the volume for certain storages, e.g. 
>> LVM (might not be, but did you check)?
> 
> Good point about activating the storage. I don't think I did check on 
> that. I'll do some tests.
> 
>>
>>>> +    return $plugin->reassign_volume($scfg, $storeid, $volname, 
>>>> $target_vmid);
>>>> +}
>>>> +
>>>>   # Various io-heavy operations require io/bandwidth limits which 
>>>> can be
>>>>   # configured on multiple levels: The global defaults in 
>>>> datacenter.cfg, and
>>>>   # per-storage overrides. When we want to do a restore from storage 
>>>> A to storage
>>>> diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
>>>> index be06cc7..3affa5d 100644
>>>> --- a/PVE/Storage/CIFSPlugin.pm
>>>> +++ b/PVE/Storage/CIFSPlugin.pm
>>>> @@ -293,4 +293,17 @@ sub update_volume_notes {
>>>>       PVE::Storage::DirPlugin::update_volume_notes($class, @_);
>>>>   }
>>>> +sub reassign_volume {
>>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, 
>>>> $target_vmid);
>>>> +}
>>>> +
>>>> +sub volume_has_feature {
>>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>>>> $running, $opts) = @_;
>>>> +
>>>> +    shift @_;
>>>> +    return $class->file_can_reassign_volume(@_) if $feature eq 
>>>> 'reassign';
>>>> +    return $class->SUPER::volume_has_feature(@_);
>>>> +}
>>>> +
>>>>   1;
>>>> diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
>>>> index 2267f11..cfdb575 100644
>>>> --- a/PVE/Storage/DirPlugin.pm
>>>> +++ b/PVE/Storage/DirPlugin.pm
>>>> @@ -156,4 +156,17 @@ sub check_config {
>>>>       return $opts;
>>>>   }
>>>> +sub reassign_volume {
>>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, 
>>>> $target_vmid);
>>>> +}
>>>> +
>>>> +sub volume_has_feature {
>>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>>>> $running, $opts) = @_;
>>>> +
>>>> +    shift @_;
>>>> +    return $class->file_can_reassign_volume(@_) if $feature eq 
>>>> 'reassign';
>>>> +    return $class->SUPER::volume_has_feature(@_);
>>>> +}
>>>> +
>>>>   1;
>>>> diff --git a/PVE/Storage/GlusterfsPlugin.pm 
>>>> b/PVE/Storage/GlusterfsPlugin.pm
>>>> index 2dd414d..8132a3e 100644
>>>> --- a/PVE/Storage/GlusterfsPlugin.pm
>>>> +++ b/PVE/Storage/GlusterfsPlugin.pm
>>>> @@ -348,4 +348,17 @@ sub check_connection {
>>>>       return defined($server) ? 1 : 0;
>>>>   }
>>>> +sub reassign_volume {
>>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, 
>>>> $target_vmid);
>>>> +}
>>>> +
>>>> +sub volume_has_feature {
>>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>>>> $running, $opts) = @_;
>>>> +
>>>> +    shift @_;
>>>> +    return $class->file_can_reassign_volume(@_) if $feature eq 
>>>> 'reassign';
>>>> +    return $class->SUPER::volume_has_feature(@_);
>>>> +}
>>>> +
>>>>   1;
>>>> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
>>>> index df49b76..0d9a691 100644
>>>> --- a/PVE/Storage/LVMPlugin.pm
>>>> +++ b/PVE/Storage/LVMPlugin.pm
>>>> @@ -339,6 +339,13 @@ sub lvcreate {
>>>>       run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
>>>>   }
>>>> +sub lvrename {
>>>> +    my ($vg, $oldname, $newname) = @_;
>>>> +
>>>> +    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
>>>> +    run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to 
>>>> '${newname}' error");
>>>> +}
>>>> +
>>>>   sub alloc_image {
>>>>       my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>>>> @@ -584,6 +591,7 @@ sub volume_has_feature {
>>>>       my $features = {
>>>>       copy => { base => 1, current => 1},
>>>> +    reassign => {current => 1},
>>>>       };
>>>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>>>> @@ -684,4 +692,20 @@ sub volume_import_write {
>>>>       input => '<&'.fileno($input_fh));
>>>>   }
>>>> +sub reassign_volume {
>>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>>> +
>>>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, 
>>>> sub {
>>>> +    my $target_volname = $class->find_free_diskname($storeid, 
>>>> $scfg, $target_vmid);
>>>> +    return $class->rename_volume($scfg, $storeid, $volname, 
>>>> $target_volname, $target_vmid);
>>>> +    });
>>>> +}
>>>> +
>>>> +sub rename_volume {
>>>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, 
>>>> $vmid) = @_;
>>>
>>> Nit: The signature is different than the corresponding one in 
>>> Plugin.pm. I know you're not using $base here, but maybe add it for 
>>> consistency.
>>>
>>> That said, why not assemble the new volume ID in the reassign 
>>> function itself and have the rename function only be concerned with 
>>> renaming? Then you could drop the $base form the signature altogether.
> 
> Yeah, Dominic also pointed out a possible change/improvement in how to
> handle the base. I'll take a closer look :)
>>>
>>>> +
>>>> +    lvrename($scfg->{vgname}, $source_volname, $target_volname);
>>>> +    return "${storeid}:${target_volname}";
>>>> +}
>>>> +
>>>>   1;
>>>> diff --git a/PVE/Storage/LvmThinPlugin.pm 
>>>> b/PVE/Storage/LvmThinPlugin.pm
>>>> index c9e127c..895af8b 100644
>>>> --- a/PVE/Storage/LvmThinPlugin.pm
>>>> +++ b/PVE/Storage/LvmThinPlugin.pm
>>>> @@ -355,6 +355,7 @@ sub volume_has_feature {
>>>>       template => { current => 1},
>>>>       copy => { base => 1, current => 1, snap => 1},
>>>>       sparseinit => { base => 1, current => 1},
>>>> +    reassign => {current => 1},
>>>>       };
>>>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>>>> diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
>>>> index 39bf15a..7217ca2 100644
>>>> --- a/PVE/Storage/NFSPlugin.pm
>>>> +++ b/PVE/Storage/NFSPlugin.pm
>>>> @@ -197,4 +197,17 @@ sub update_volume_notes {
>>>>       PVE::Storage::DirPlugin::update_volume_notes($class, @_);
>>>>   }
>>>> +sub reassign_volume {
>>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>>> +    return $class->file_reassign_volume($scfg, $storeid, $volname, 
>>>> $target_vmid);
>>>> +}
>>>> +
>>>> +sub volume_has_feature {
>>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>>>> $running, $opts) = @_;
>>>> +
>>>> +    shift @_;
>>>> +    return $class->file_can_reassign_volume(@_) if $feature eq 
>>>> 'reassign';
>>>> +    return $class->SUPER::volume_has_feature(@_);
>>>> +}
>>>> +
>>>>   1;
>>>> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
>>>> index d2d8184..c1ea41d 100644
>>>> --- a/PVE/Storage/Plugin.pm
>>>> +++ b/PVE/Storage/Plugin.pm
>>>> @@ -927,6 +927,7 @@ sub storage_can_replicate {
>>>>       return 0;
>>>>   }
>>>> +
>>>
>>> Nit: accidental newline
>>>
>>>>   sub volume_has_feature {
>>>>       my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>>>> $running, $opts) = @_;
>>>> @@ -1456,4 +1457,63 @@ sub volume_import_formats {
>>>>       return ();
>>>>   }
>>>> +sub reassign_volume {
>>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>>> +    die "not implemented in storage plugin '$class'\n";
>>>> +}
>>>> +
>>>> +# general reassignment method for file/directory based storages
>>>> +sub file_reassign_volume {
>>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>>> +
>>>> +    my $base;
>>>> +    my $base_vmid;
>>>> +    my $format;
>>>> +    my $source_vmid;
>>>> +
>>>> +    (undef, $volname, $source_vmid, $base, $base_vmid, undef, 
>>>> $format) = $class->parse_volname($volname);
>>>> +
>>>> +    # parse_volname strips the directory from volname
>>>> +    my $source_volname = "${source_vmid}/${volname}";
>>>> +
>>>> +    if ($base) {
>>>> +    $base = "${base_vmid}/${base}/";
>>>> +    } else {
>>>> +    $base = '';
>>>> +    }
>>>> +
>>>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, 
>>>> sub {
>>>> +    my $target_volname = $class->find_free_diskname($storeid, 
>>>> $scfg, $target_vmid, $format, 1);
>>>> +
>>>> +    return $class->rename_volume($scfg, $storeid, $source_volname, 
>>>> $target_volname, $target_vmid, $base);
>>>> +    });
>>>> +}
>>>> +
>>>> +sub rename_volume {
>>>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, 
>>>> $vmid, $base) = @_;
>>>> +
>>>> +    my $basedir = $class->get_subdir($scfg, 'images');
>>>> +    my $imagedir = "${basedir}/${vmid}";
>>>> +
>>>> +    mkpath $imagedir;
>>>> +
>>>> +    my $old_path = "${basedir}/${source_volname}";
>>>> +    my $new_path = "${imagedir}/${target_volname}";
>>>> +
>>>> +    rename($old_path, $new_path) ||
>>>> +    die "rename '$old_path' to '$new_path' failed - $!\n";
>>>> +
>>>> +    return "${storeid}:${base}${vmid}/${target_volname}";
>>>> +}
>>>> +
>>>> +sub file_can_reassign_volume {
>>>> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, 
>>>> $running, $opts) = @_;
>>>> +
>>>> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, 
>>>> $format) =
>>>> +    $class->parse_volname($volname);
>>>> +
>>>> +    return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
>>>> +    return undef;
>>>> +}
>>>> +
>>>>   1;
>>>> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
>>>> index fab6d57..86a9a0d 100644
>>>> --- a/PVE/Storage/RBDPlugin.pm
>>>> +++ b/PVE/Storage/RBDPlugin.pm
>>>> @@ -712,6 +712,7 @@ sub volume_has_feature {
>>>>       template => { current => 1},
>>>>       copy => { base => 1, current => 1, snap => 1},
>>>>       sparseinit => { base => 1, current => 1},
>>>> +    reassign => {current => 1},
>>>>       };
>>>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>>>> @@ -728,4 +729,34 @@ sub volume_has_feature {
>>>>       return undef;
>>>>   }
>>>> +sub reassign_volume {
>>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>>> +
>>>> +    my $base;
>>>> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
>>>> +
>>>> +    if ($base) {
>>>> +    $base = $base . '/';
>>>> +    } else {
>>>> +    $base = '';
>>>> +    }
>>>> +
>>>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, 
>>>> sub {
>>>> +    my $target_volname = $class->find_free_diskname($storeid, 
>>>> $scfg, $target_vmid);
>>>> +    return $class->rename_volume($scfg, $storeid, $volname, 
>>>> $target_volname, $target_vmid, $base);
>>>> +    });
>>>> +}
>>>> +
>>>> +sub rename_volume {
>>>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, 
>>>> $vmid, $base) = @_;
>>>> +
>>>> +    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', 
>>>> $source_volname, $target_volname);
>>>> +
>>>> +    run_rbd_command(
>>>> +    $cmd,
>>>> +    errmsg => "could not rename image '${source_volname}' to 
>>>> '${target_volname}'",
>>>> +    );
>>>> +    return "${storeid}:${base}${target_volname}";
>>>> +}
>>>> +
>>>>   1;
>>>> diff --git a/PVE/Storage/ZFSPoolPlugin.pm 
>>>> b/PVE/Storage/ZFSPoolPlugin.pm
>>>> index fe65ae4..c63a94b 100644
>>>> --- a/PVE/Storage/ZFSPoolPlugin.pm
>>>> +++ b/PVE/Storage/ZFSPoolPlugin.pm
>>>> @@ -686,6 +686,7 @@ sub volume_has_feature {
>>>>       copy => { base => 1, current => 1},
>>>>       sparseinit => { base => 1, current => 1},
>>>>       replicate => { base => 1, current => 1},
>>>> +    reassign => {current => 1},
>>>>       };
>>>>       my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
>>>> @@ -788,4 +789,29 @@ sub volume_import_formats {
>>>>       return $class->volume_export_formats($scfg, $storeid, 
>>>> $volname, undef, $base_snapshot, $with_snapshots);
>>>>   }
>>>> +sub reassign_volume {
>>>> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
>>>> +
>>>> +    my $base;
>>>> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
>>>> +
>>>> +    if ($base) {
>>>> +    $base = $base . '/';
>>>> +    } else {
>>>> +    $base = '';
>>>> +    }
>>>> +
>>>> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, 
>>>> sub {
>>>> +    my $target_volname = $class->find_free_diskname($storeid, 
>>>> $scfg, $target_vmid);
>>>> +    return $class->rename_volume($scfg, $storeid, $volname, 
>>>> $target_volname, $target_vmid, $base);
>>>> +    });
>>>> +}
>>>> +
>>>> +sub rename_volume {
>>>> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, 
>>>> $vmid, $base) = @_;
>>>> +
>>>> +    $class->zfs_request($scfg, 5, 'rename', 
>>>> "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
>>>> +    return "${storeid}:${base}${target_volname}";
>>>> +}
>>>> +
>>>>   1;
>>>>
>>>
>>>
>>>
>>>




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature
  2021-04-15 11:53       ` Aaron Lauterer
  2021-04-15 12:09         ` Fabian Ebner
@ 2021-04-15 12:20         ` Thomas Lamprecht
  1 sibling, 0 replies; 18+ messages in thread
From: Thomas Lamprecht @ 2021-04-15 12:20 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer, Fabian Ebner

On 15.04.21 13:53, Aaron Lauterer wrote:
> Just adding the functionality on the top level Plugin.pm could have some
> potential ugly side effects for 3rd party plugins that do not yet handle that
> call themselves. So to be on the safe side, by default we rather fail right
> there (this was discussed a few versions ago).

I may have forgotten the old discussion, but I do not think that this is a problem.

External plugins can detect if they require it and implement it, and actually,
we could just check the ABI version a plugin provides on calling the base method
and error out if it's less than the one where we introduced this method.
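
For example (hypothetical sketch; assumes plugins report the API version
they were built against via their api() method, and that 'reassign' is
introduced with APIVER 9 as in this series):

    # in the base reassign_volume of Plugin.pm
    sub reassign_volume {
	my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;

	# refuse to run the generic implementation for plugins built
	# against a storage API that predates this method
	die "plugin '$class' is too old for the 'reassign' feature\n"
	    if $class->api() < 9;

	# ... generic implementation ...
    }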

> IMHO it would be nice though to change the structure of the storage plugins a
> bit. E.g. instead of assuming dir/file storages for Plugin.pm, having a basic
> abstraction specifically for any directory/file based storage which handles
> all the common tasks and further down the hierarchy the specific
> implementations regarding mounting and such. But that would mean a hard break
> of the current approach, especially for 3rd party plugins.

That sounds actually quite like what we already have, rather the base plugin
module should just provide the set of methods with a `die "implement me"`, and
probably only that, i.e., be a plain abstract interface.

But that's quite some change involved and requires an ABI version break as all
plugins would need to adapt to that one, and the benefit is meh, at least for
our internal ones; and after all those are the ones we actually support.
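
As a rough sketch of what such a plain abstract interface could look like
(module and method names are made up for illustration only):

    package PVE::Storage::AbstractPlugin;    # hypothetical module name

    # every method is abstract; concrete plugins must override them
    sub reassign_volume { die "implement me\n"; }
    sub rename_volume { die "implement me\n"; }
    # ... same for alloc_image, free_image, and the rest of the interface

    1;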




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature
  2021-04-15 12:09         ` Fabian Ebner
@ 2021-04-15 12:21           ` Thomas Lamprecht
  0 siblings, 0 replies; 18+ messages in thread
From: Thomas Lamprecht @ 2021-04-15 12:21 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner, Aaron Lauterer

On 15.04.21 14:09, Fabian Ebner wrote:
>> IMHO it would be nice though to change the structure of the storage plugins a bit. E.g. instead of assuming dir/file storages for Plugin.pm, having a basic abstraction specifically for any directory/file based storage which handles all the common tasks and further down the hierarchy the specific implementations regarding mounting and such. But that would mean a hard break of the current approach, especially for 3rd party plugins.
>>
> 
> I was under the impression that this is one of the functions of APIVERSION.
> External plugins can look at the changes since the last version they
> supported, decide if they're happy with the default implementation and
> implement their own otherwise.
> 
> @Thomas: Is this wrong?

No that's right.




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
@ 2021-04-18 14:50   ` Thomas Lamprecht
  0 siblings, 0 replies; 18+ messages in thread
From: Thomas Lamprecht @ 2021-04-18 14:50 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

On 02.04.21 12:19, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v5 -> v6: nothing
> v4 -> v5: renamed `drive_key` to `drive_name`
> v3 -> v4: nothing
> v2 -> v3: renamed parameter `disk` to `drive_key`
> rfc -> v1 -> v2: nothing changed
>  PVE/CLI/qm.pm | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index f8972bd..36ae8e7 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -914,6 +914,8 @@ our $cmddef = {
>  
>      move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
>  
> +    reassign_disk => [ "PVE::API2::Qemu", 'reassign_vm_disk', ['vmid', 'target_vmid', 'drive_name'], { node => $nodename } ],

no new commands/params with underscore as separator please.

We can actually do s/move_disk/move-disk/ for above in a separate patch and
add an alias for the old one:

move_disk => { alias => 'move-disk' },
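
Taken together, the cmddef entries could then look something like this
(sketch only, combining the kebab-case names with the backward-compatible
alias; untested):

    'reassign-disk' => [ "PVE::API2::Qemu", 'reassign_vm_disk',
	['vmid', 'target_vmid', 'drive_name'], { node => $nodename } ],

    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk',
	['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
    # keep the old spelling working as an alias
    move_disk => { alias => 'move-disk' },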

> +
>      unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
>  
>      config => [ "PVE::API2::Qemu", 'vm_config', ['vmid'],
> 





^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint
  2021-04-02 10:19 ` [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint Aaron Lauterer
  2021-04-15 11:52   ` Fabian Ebner
@ 2021-04-18 15:24   ` Thomas Lamprecht
  2021-04-19  9:25     ` Aaron Lauterer
  1 sibling, 1 reply; 18+ messages in thread
From: Thomas Lamprecht @ 2021-04-18 15:24 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

On 02.04.21 12:19, Aaron Lauterer wrote:
> The goal of this new API endpoint is to provide an easy way to move a
> disk between VMs as this was only possible with manual intervention
> until now. Either by renaming the VM disk or by manually adding the
> disks volid to the config of the other VM.
> 
> The latter can easily cause unexpected behavior, such as a disk attached
> to VM B being deleted because it used to be a disk of VM A. This happens
> because PVE assumes that the VMID in the volname always matches the VM
> the disk is attached to and thus, would remove any disk with VMID A
> when VM A was deleted.
> 
> The term `reassign` was chosen as it is not yet used
> for VM disks.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v5 -> v6:
> * guard Replication snapshot cleanup
>     additionally to the eval, that code is now only run if the volume is
>     on a storage with the 'replicate' feature
> * add permission check for target vmid
> * changed regex to match unused keys better
> 
> thx @Fabian for these suggestions/catching problems
> 
> v4 -> v5:
> * implemented suggestions from Fabian [1]
>     * logging before action
>     * improving description
>     * improving error messages
>     * using Replication::prepare to remove replication snapshots
>     * check if disk is physical disk using /dev/...
> 
> v3 -> v4: nothing
> 
> v2 -> v3:
> * reordered the locking as discussed with fabian [0] to
> run checks
>     fork worker
> 	lock source config
> 	    lock target config
> 		run checks
> 		...
> 
> * added more checks
>     * will not reassign to or from templates
>     * will not reassign if VM has snapshots present
> * cleanup if disk used to be replicated
> * made task log slightly more verbose
> * integrated general recommendations regarding code
> * renamed `disk` to `drive_key`
> * prepended some vars with `source_` for easier distinction
> 
> v1 -> v2: print config key and volid info at the end of the job so it
> shows up on the CLI and task log
> 
> rfc -> v1:
> * add support to reassign unused disks
> * add support to provide a config digest for the target vm
> * add additional check if disk key is present in config
> * reorder checks a bit
> 
> In order to support unused disk I had to extend
> PVE::QemuServer::Drive::valid_drive_names for the API parameter
> validation.
> 
> Checks are ordered so that cheap tests are run at the first chance to
> fail early.
> 
> The check if both VMs are present on the node is a bit redundant because
> locking the config files will fail if the VM is not present. But with
> the additional check we can provide a useful error message to the user
> instead of a "Configuration file xyz does not exist" error.
> 
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
> [1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
> 
>  PVE/API2/Qemu.pm        | 172 ++++++++++++++++++++++++++++++++++++++++
>  PVE/QemuServer/Drive.pm |   4 +
>  2 files changed, 176 insertions(+)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index e95ab13..9642b9b 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>  use PVE::VZDump::Plugin;
>  use PVE::DataCenterConfig;
>  use PVE::SSHInfo;
> +use PVE::Replication;
>  
>  BEGIN {
>      if (!$ENV{PVE_GENERATING_DOCS}) {
> @@ -4377,4 +4378,175 @@ __PACKAGE__->register_method({
>  	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>      }});
>  
> +__PACKAGE__->register_method({
> +    name => 'reassign_vm_disk',
> +    path => '{vmid}/reassign_disk',
> +    method => 'POST',
> +    protected => 1,
> +    proxyto => 'node',
> +    description => "Reassign a disk to another VM",
> +    permissions => {
> +	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
> +	check => [ 'and',
> +		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> +		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
> +	    ],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),

s/target_vmid/'target-vmid'/

> +	    drive_name => {

A bit late in the review cycle, but why not take both source and target drive parameters?
IMO it's not a good idea to just pick a free disk name at the target; we already let the
user control the digest, so we can also let them choose the target drive. That way they
actually know at which drive the new disk ends up, otherwise they cannot really know.

For consistency and easier grasping of the API the following parameters could then be
used:

source-vmid
target-vmid
source-drive
target-drive
source-digest
target-digest
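
To make the suggestion concrete, a rough, purely illustrative sketch of the validation
for such a parameter set (in Python for brevity - the real code is Perl, and these
parameter names are only the proposal above, not an existing API):

```python
# Illustrative only: checks a parameter dict against the suggested
# source-*/target-* naming scheme. None of these names exist in PVE today.
REQUIRED = {"source-vmid", "target-vmid", "source-drive", "target-drive"}
OPTIONAL = {"source-digest", "target-digest"}

def check_reassign_params(params: dict) -> None:
    unknown = set(params) - REQUIRED - OPTIONAL
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    missing = REQUIRED - set(params)
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    # mirrors the "same VM" check already present in the patch
    if params["source-vmid"] == params["target-vmid"]:
        raise ValueError("reassigning a disk to the same VM is not possible")
```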


> +	        type => 'string',
> +		description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> +	    },
> +	    digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	    target_digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	},
> +    },
> +    returns => {
> +	type => 'string',
> +	description => "the task ID.",
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $authuser = $rpcenv->get_user();
> +
> +	my $node = extract_param($param, 'node');
> +	my $source_vmid = extract_param($param, 'vmid');
> +	my $target_vmid = extract_param($param, 'target_vmid');
> +	my $source_digest = extract_param($param, 'digest');
> +	my $target_digest = extract_param($param, 'target_digest');
> +	my $drive_name = extract_param($param, 'drive_name');
> +
> +	my $storecfg = PVE::Storage::config();
> +	my $vmlist;
> +	my $drive;
> +	my $source_volid;
> +
> +	$rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
> +	    if $authuser ne 'root@pam';
> +
> +	die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
> +	    if $source_vmid eq $target_vmid;
> +
> +	my $load_and_check_configs = sub {
> +	    $vmlist = PVE::Cluster::get_vmlist()->{ids};
> +	    die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
> +		if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
> +
> +	    my $source_conf = PVE::QemuConfig->load_config($source_vmid);
> +	    PVE::QemuConfig->check_lock($source_conf);
> +	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> +	    PVE::QemuConfig->check_lock($target_conf);
> +
> +	    die "Can't reassign disks from or to templates\n"
> +		if ($source_conf->{template} || $target_conf->{template});
> +
> +	    if ($source_digest) {
> +		eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "VM ${source_vmid}: ${err}";
> +		}
> +	    }
> +
> +	    if ($target_digest) {
> +		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "VM ${target_vmid}: ${err}";
> +		}
> +	    }
> +
> +	    die "Disk '${drive_name}' does not exist\n"
> +		if !defined($source_conf->{$drive_name});
> +
> +	    $drive = PVE::QemuServer::parse_drive($drive_name, $source_conf->{$drive_name});
> +	    $source_volid = $drive->{file};
> +	    die "disk '${drive_name}' has no associated volume\n" if !$source_volid;
> +	    die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
> +	    die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
> +	    die "Can't reassign disk used by a snapshot\n"
> +		if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_name, $source_volid);
> +
> +	    die "Storage does not support the reassignment of this disk\n"
> +		if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
> +
> +	    die "Cannot reassign disk while the source VM is running\n"
> +		if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/^unused\d+$/;
> +
> +	    return ($source_conf, $target_conf);
> +	};
> +
> +	my $logfunc = sub {
> +	    my ($msg) = @_;
> +	    print STDERR "$msg\n";
> +	};
> +
> +	my $reassign_func = sub {
> +	    return PVE::QemuConfig->lock_config($source_vmid, sub {
> +		return PVE::QemuConfig->lock_config($target_vmid, sub {
> +		    my ($source_conf, $target_conf) = &$load_and_check_configs();
> +
> +		    PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_name} --target_vmid $target_vmid");

why a cluster log? This is not a cluster-wide action, and it already runs in a task log?

> +
> +		    my $new_volid = PVE::Storage::reassign_volume(
> +			$storecfg,
> +			$source_volid,
> +			$target_vmid,
> +		    );
> +
> +		    delete $source_conf->{$drive_name};
> +		    print "removing disk '${drive_name}' from VM '${source_vmid}'\n";
> +		    PVE::QemuConfig->write_config($source_vmid, $source_conf);
> +

if the task/node/... dies here the disk is lost in limbo?

> +		    # remove possible replication snapshots
> +		    if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $source_volid)) {
> +			eval {
> +			    PVE::Replication::prepare(
> +				$storecfg,
> +				[$new_volid],
> +				undef,
> +				undef,
> +				undef,
> +				$logfunc,
> +			    )
> +			};
> +			if (my $err = $@) {
> +			    print "Failed to remove replication snapshots on reassigned disk. Manual cleanup could be necessary.\n";
> +			}
> +		    }
> +
> +		    my $key;
> +		    eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
> +		    if (my $err = $@) {
> +			print "failed to add reassigned disk '${new_volid}' to VM '${target_vmid}'. Try to free an 'unused' disk slot and run 'qm rescan ${target_vmid}'.\n";
> +			return 0;
> +		    }
> +
> +		    print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
> +		    PVE::QemuConfig->write_config($target_vmid, $target_conf);

This can violate the underlying assumption that a VMID is owned by its node, and
thus only that node will modify its configuration...

We could restrict this to the case where both VMs (hmm, where's the CT implementation?)
are on the same node, which would be the safe and easy way for now. For cross-node we
would probably need to make it two-phase and proxy the add over.

But after re-checking the load_config($target_vmid) usage above, this whole thing can
currently only work if both VMIDs are on the same node - I do not see that limitation
described anywhere. It would be good to have that prominently in the API description and
also to improve UX by checking explicitly whether $target_vmid is located on this node.

Or were you not aware of that?


> +		});
> +	    });
> +	};
> +
> +	&$load_and_check_configs();
> +
> +	return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);

the worker ID could be something like: "$source_vmid:$source_drive -> $target_vmid:$target_drive".

> +    }});
> +
>  1;
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index 01ea8d7..e938b9b 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -392,6 +392,10 @@ sub valid_drive_names {
>              'efidisk0');
>  }
>  
> +sub valid_drive_names_with_unused {
> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
> +}
> +
>  sub is_valid_drivename {
>      my $dev = shift;
>  
> 





^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint
  2021-04-18 15:24   ` Thomas Lamprecht
@ 2021-04-19  9:25     ` Aaron Lauterer
  0 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-04-19  9:25 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion



On 4/18/21 5:24 PM, Thomas Lamprecht wrote:
> On 02.04.21 12:19, Aaron Lauterer wrote:
>> The goal of this new API endpoint is to provide an easy way to move a
>> disk between VMs as this was only possible with manual intervention
>> until now. Either by renaming the VM disk or by manually adding the
>> disk's volid to the config of the other VM.
>>
>> The latter can easily cause unexpected behavior such as disks attached
>> to VM B would be deleted if it used to be a disk of VM A. This happens
>> because PVE assumes that the VMID in the volname always matches the VM
>> the disk is attached to and thus, would remove any disk with VMID A
>> when VM A was deleted.
>>
>> The term `reassign` was chosen as it is not yet used
>> for VM disks.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> v5 -> v6:
>> * guard Replication snapshot cleanup
>>      additionally to the eval, that code is now only run if the volume is
>>      on a storage with the 'replicate' feature
>> * add permission check for target vmid
>> * changed regex to match unused keys better
>>
>> thx @Fabian for these suggestions/catching problems
>>
>> v4 -> v5:
>> * implemented suggestions from Fabian [1]
>>      * logging before action
>>      * improving description
>>      * improving error messages
>>      * using Replication::prepare to remove replication snapshots
>>      * check if disk is physical disk using /dev/...
>>
>> v3 -> v4: nothing
>>
>> v2 -> v3:
>> * reordered the locking as discussed with fabian [0] to
>> run checks
>>      fork worker
>> 	lock source config
>> 	    lock target config
>> 		run checks
>> 		...
>>
>> * added more checks
>>      * will not reassign to or from templates
>>      * will not reassign if VM has snapshots present
>> * cleanup if disk used to be replicated
>> * made task log slightly more verbose
>> * integrated general recommendations regarding code
>> * renamed `disk` to `drive_key`
>> * prepended some vars with `source_` for easier distinction
>>
>> v1 -> v2: print config key and volid info at the end of the job so it
>> shows up on the CLI and task log
>>
>> rfc -> v1:
>> * add support to reassign unused disks
>> * add support to provide a config digest for the target vm
>> * add additional check if disk key is present in config
>> * reorder checks a bit
>>
>> In order to support unused disk I had to extend
>> PVE::QemuServer::Drive::valid_drive_names for the API parameter
>> validation.
>>
>> Checks are ordered so that cheap tests are run at the first chance to
>> fail early.
>>
>> The check if both VMs are present on the node is a bit redundant because
>> locking the config files will fail if the VM is not present. But with
>> the additional check we can provide a useful error message to the user
>> instead of a "Configuration file xyz does not exist" error.
>>
>> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
>> [1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
>>
>>   PVE/API2/Qemu.pm        | 172 ++++++++++++++++++++++++++++++++++++++++
>>   PVE/QemuServer/Drive.pm |   4 +
>>   2 files changed, 176 insertions(+)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index e95ab13..9642b9b 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>>   use PVE::VZDump::Plugin;
>>   use PVE::DataCenterConfig;
>>   use PVE::SSHInfo;
>> +use PVE::Replication;
>>   
>>   BEGIN {
>>       if (!$ENV{PVE_GENERATING_DOCS}) {
>> @@ -4377,4 +4378,175 @@ __PACKAGE__->register_method({
>>   	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>>       }});
>>   
>> +__PACKAGE__->register_method({
>> +    name => 'reassign_vm_disk',
>> +    path => '{vmid}/reassign_disk',
>> +    method => 'POST',
>> +    protected => 1,
>> +    proxyto => 'node',
>> +    description => "Reassign a disk to another VM",
>> +    permissions => {
>> +	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
>> +	check => [ 'and',
>> +		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>> +		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
>> +	    ],
>> +    },
>> +    parameters => {
>> +        additionalProperties => 0,
>> +	properties => {
>> +	    node => get_standard_option('pve-node'),
>> +	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
>> +	    target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> 
> s/target_vmid/'target-vmid'/
> 
>> +	    drive_name => {
> 
> A bit late in the review cycle, but why not take both source and target drive parameters?
> IMO it's not a good idea to just pick a free disk name at the target; we already let the
> user control the digest, so we can also let them choose the target drive. That way they
> actually know at which drive the new disk ends up, otherwise they cannot really know.

Sounds like a good idea, and it will also help further below by reducing the chance that the disk ends up in limbo.

> 
> For consistency and easier grasping of the API the following parameters could then be
> used:
> 
> source-vmid
> target-vmid
> source-drive
> target-drive
> source-digest
> target-digest
> 
> 
>> +	        type => 'string',
>> +		description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
>> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>> +	    },
>> +	    digest => {
>> +		type => 'string',
>> +		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
>> +		maxLength => 40,
>> +		optional => 1,
>> +	    },
>> +	    target_digest => {
>> +		type => 'string',
>> +		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
>> +		maxLength => 40,
>> +		optional => 1,
>> +	    },
>> +	},
>> +    },
>> +    returns => {
>> +	type => 'string',
>> +	description => "the task ID.",
>> +    },
>> +    code => sub {
>> +	my ($param) = @_;
>> +
>> +	my $rpcenv = PVE::RPCEnvironment::get();
>> +	my $authuser = $rpcenv->get_user();
>> +
>> +	my $node = extract_param($param, 'node');
>> +	my $source_vmid = extract_param($param, 'vmid');
>> +	my $target_vmid = extract_param($param, 'target_vmid');
>> +	my $source_digest = extract_param($param, 'digest');
>> +	my $target_digest = extract_param($param, 'target_digest');
>> +	my $drive_name = extract_param($param, 'drive_name');
>> +
>> +	my $storecfg = PVE::Storage::config();
>> +	my $vmlist;
>> +	my $drive;
>> +	my $source_volid;
>> +
>> +	$rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
>> +	    if $authuser ne 'root@pam';
>> +
>> +	die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
>> +	    if $source_vmid eq $target_vmid;
>> +
>> +	my $load_and_check_configs = sub {
>> +	    $vmlist = PVE::Cluster::get_vmlist()->{ids};
>> +	    die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
>> +		if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
>> +
>> +	    my $source_conf = PVE::QemuConfig->load_config($source_vmid);
>> +	    PVE::QemuConfig->check_lock($source_conf);
>> +	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>> +	    PVE::QemuConfig->check_lock($target_conf);
>> +
>> +	    die "Can't reassign disks from or to templates\n"
>> +		if ($source_conf->{template} || $target_conf->{template});
>> +
>> +	    if ($source_digest) {
>> +		eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
>> +		if (my $err = $@) {
>> +		    die "VM ${source_vmid}: ${err}";
>> +		}
>> +	    }
>> +
>> +	    if ($target_digest) {
>> +		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
>> +		if (my $err = $@) {
>> +		    die "VM ${target_vmid}: ${err}";
>> +		}
>> +	    }
>> +
>> +	    die "Disk '${drive_name}' does not exist\n"
>> +		if !defined($source_conf->{$drive_name});
>> +
>> +	    $drive = PVE::QemuServer::parse_drive($drive_name, $source_conf->{$drive_name});
>> +	    $source_volid = $drive->{file};
>> +	    die "disk '${drive_name}' has no associated volume\n" if !$source_volid;
>> +	    die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
>> +	    die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
>> +	    die "Can't reassign disk used by a snapshot\n"
>> +		if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_name, $source_volid);
>> +
>> +	    die "Storage does not support the reassignment of this disk\n"
>> +		if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
>> +
>> +	    die "Cannot reassign disk while the source VM is running\n"
>> +		if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/^unused\d+$/;
>> +
>> +	    return ($source_conf, $target_conf);
>> +	};
>> +
>> +	my $logfunc = sub {
>> +	    my ($msg) = @_;
>> +	    print STDERR "$msg\n";
>> +	};
>> +
>> +	my $reassign_func = sub {
>> +	    return PVE::QemuConfig->lock_config($source_vmid, sub {
>> +		return PVE::QemuConfig->lock_config($target_vmid, sub {
>> +		    my ($source_conf, $target_conf) = &$load_and_check_configs();
>> +
>> +		    PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_name} --target_vmid $target_vmid");
> 
> why a cluster log? This is not a cluster-wide action, and it already runs in a task log?
> 
>> +
>> +		    my $new_volid = PVE::Storage::reassign_volume(
>> +			$storecfg,
>> +			$source_volid,
>> +			$target_vmid,
>> +		    );
>> +
>> +		    delete $source_conf->{$drive_name};
>> +		    print "removing disk '${drive_name}' from VM '${source_vmid}'\n";
>> +		    PVE::QemuConfig->write_config($source_vmid, $source_conf);
>> +
> 
> if the task/node/... dies here the disk is lost in limbo?

hmm, I guess if we also make a target drive key a mandatory parameter, we could immediately write it to the new VM's config and do the snapshot cleanup afterwards. The possible error a few lines further down, if no unused slot is available, would also be avoided. If the snapshot cleanup fails, it needs to be handled manually, but the disk is assigned to the right VM.
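
As a rough, hypothetical sketch of that reordering (Python with plain dicts standing in
for the VM configs; the volid rename and all names here are stand-ins, not the real
storage code):

```python
# Sketch of the reordering discussed above: add the volume to the target
# config immediately after removing it from the source, and only then run
# best-effort cleanup. A failure during cleanup then leaves the disk owned
# by the correct VM instead of in limbo.
def reassign(source_conf, target_conf, source_drive, target_drive,
             source_vmid, target_vmid):
    volid = source_conf[source_drive]
    # stand-in for the storage-level rename done by reassign_volume()
    new_volid = volid.replace(f"vm-{source_vmid}-", f"vm-{target_vmid}-")
    del source_conf[source_drive]           # 1. drop from the source config
    target_conf[target_drive] = new_volid   # 2. write to the target config right away
    # 3. best-effort cleanup (e.g. replication snapshots) would happen here;
    #    if it fails, manual cleanup is needed, but ownership is already correct.
    return new_volid
```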

> 
>> +		    # remove possible replication snapshots
>> +		    if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $source_volid)) {
>> +			eval {
>> +			    PVE::Replication::prepare(
>> +				$storecfg,
>> +				[$new_volid],
>> +				undef,
>> +				undef,
>> +				undef,
>> +				$logfunc,
>> +			    )
>> +			};
>> +			if (my $err = $@) {
>> +			    print "Failed to remove replication snapshots on reassigned disk. Manual cleanup could be necessary.\n";
>> +			}
>> +		    }
>> +
>> +		    my $key;
>> +		    eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
>> +		    if (my $err = $@) {
>> +			print "failed to add reassigned disk '${new_volid}' to VM '${target_vmid}'. Try to free an 'unused' disk slot and run 'qm rescan ${target_vmid}'.\n";
>> +			return 0;
>> +		    }
>> +
>> +		    print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
>> +		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
> 
> This can violate the underlying assumption that a VMID is owned by its node, and
> thus only that node will modify its configuration...
> 
> We could restrict this to the case where both VMs (hmm, where's the CT implementation?)

CTs are something that can be done in the future as well ;).

> are on the same node, which would be the safe and easy way for now. For cross-node we
> would probably need to make it two-phase and proxy the add over.
> 
> But after re-checking the load_config($target_vmid) usage above, this whole thing can
> currently only work if both VMIDs are on the same node - I do not see that limitation
> described anywhere. It would be good to have that prominently in the API description and
> also to improve UX by checking explicitly whether $target_vmid is located on this node.
> 
> Or were you not aware of that?

The check whether both VMs are located on the same node is one of the first checks at the beginning of the API code. I forgot to mention this in the description; I'll rephrase it a bit to include that fact.

> 
> 
>> +		});
>> +	    });
>> +	};
>> +
>> +	&$load_and_check_configs();
>> +
>> +	return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
> 
> the worker ID could be something like: "$source_vmid:$source_drive -> $target_vmid:$target_drive".
> 
>> +    }});
>> +
>>   1;
>> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
>> index 01ea8d7..e938b9b 100644
>> --- a/PVE/QemuServer/Drive.pm
>> +++ b/PVE/QemuServer/Drive.pm
>> @@ -392,6 +392,10 @@ sub valid_drive_names {
>>               'efidisk0');
>>   }
>>   
>> +sub valid_drive_names_with_unused {
>> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
>> +}
>> +
>>   sub is_valid_drivename {
>>       my $dev = shift;
>>   
>>
> 





* Re: [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint
  2021-04-15 11:52   ` Fabian Ebner
@ 2021-04-19  9:26     ` Aaron Lauterer
  0 siblings, 0 replies; 18+ messages in thread
From: Aaron Lauterer @ 2021-04-19  9:26 UTC (permalink / raw)
  To: Fabian Ebner, pve-devel



On 4/15/21 1:52 PM, Fabian Ebner wrote:
> One nit and one comment inline.
> 
> Am 02.04.21 um 12:19 schrieb Aaron Lauterer:
>> The goal of this new API endpoint is to provide an easy way to move a
>> disk between VMs as this was only possible with manual intervention
>> until now. Either by renaming the VM disk or by manually adding the
>> disk's volid to the config of the other VM.
>>
>> The latter can easily cause unexpected behavior such as disks attached
>> to VM B would be deleted if it used to be a disk of VM A. This happens
>> because PVE assumes that the VMID in the volname always matches the VM
>> the disk is attached to and thus, would remove any disk with VMID A
>> when VM A was deleted.
>>
>> The term `reassign` was chosen as it is not yet used
>> for VM disks.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> v5 -> v6:
>> * guard Replication snapshot cleanup
>>      additionally to the eval, that code is now only run if the volume is
>>      on a storage with the 'replicate' feature
>> * add permission check for target vmid
>> * changed regex to match unused keys better
>>
>> thx @Fabian for these suggestions/catching problems
>>
>> v4 -> v5:
>> * implemented suggestions from Fabian [1]
>>      * logging before action
>>      * improving description
>>      * improving error messages
>>      * using Replication::prepare to remove replication snapshots
>>      * check if disk is physical disk using /dev/...
>>
>> v3 -> v4: nothing
>>
>> v2 -> v3:
>> * reordered the locking as discussed with fabian [0] to
>> run checks
>>      fork worker
>>     lock source config
>>         lock target config
>>         run checks
>>         ...
>>
>> * added more checks
>>      * will not reassign to or from templates
>>      * will not reassign if VM has snapshots present
>> * cleanup if disk used to be replicated
>> * made task log slightly more verbose
>> * integrated general recommendations regarding code
>> * renamed `disk` to `drive_key`
>> * prepended some vars with `source_` for easier distinction
>>
>> v1 -> v2: print config key and volid info at the end of the job so it
>> shows up on the CLI and task log
>>
>> rfc -> v1:
>> * add support to reassign unused disks
>> * add support to provide a config digest for the target vm
>> * add additional check if disk key is present in config
>> * reorder checks a bit
>>
>> In order to support unused disk I had to extend
>> PVE::QemuServer::Drive::valid_drive_names for the API parameter
>> validation.
>>
>> Checks are ordered so that cheap tests are run at the first chance to
>> fail early.
>>
>> The check if both VMs are present on the node is a bit redundant because
>> locking the config files will fail if the VM is not present. But with
>> the additional check we can provide a useful error message to the user
>> instead of a "Configuration file xyz does not exist" error.
>>
>> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
>> [1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
>>
>>   PVE/API2/Qemu.pm        | 172 ++++++++++++++++++++++++++++++++++++++++
>>   PVE/QemuServer/Drive.pm |   4 +
>>   2 files changed, 176 insertions(+)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index e95ab13..9642b9b 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>>   use PVE::VZDump::Plugin;
>>   use PVE::DataCenterConfig;
>>   use PVE::SSHInfo;
>> +use PVE::Replication;
>>   BEGIN {
>>       if (!$ENV{PVE_GENERATING_DOCS}) {
>> @@ -4377,4 +4378,175 @@ __PACKAGE__->register_method({
>>       return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>>       }});
>> +__PACKAGE__->register_method({
>> +    name => 'reassign_vm_disk',
>> +    path => '{vmid}/reassign_disk',
>> +    method => 'POST',
>> +    protected => 1,
>> +    proxyto => 'node',
>> +    description => "Reassign a disk to another VM",
>> +    permissions => {
>> +    description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
>> +    check => [ 'and',
>> +           ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>> +           ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
>> +        ],
>> +    },
>> +    parameters => {
>> +        additionalProperties => 0,
>> +    properties => {
>> +        node => get_standard_option('pve-node'),
>> +        vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
>> +        target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
>> +        drive_name => {
>> +            type => 'string',
>> +        description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
>> +        enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>> +        },
>> +        digest => {
>> +        type => 'string',
>> +        description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
>> +        maxLength => 40,
>> +        optional => 1,
>> +        },
>> +        target_digest => {
>> +        type => 'string',
>> +        description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
>> +        maxLength => 40,
>> +        optional => 1,
>> +        },
>> +    },
>> +    },
>> +    returns => {
>> +    type => 'string',
>> +    description => "the task ID.",
>> +    },
>> +    code => sub {
>> +    my ($param) = @_;
>> +
>> +    my $rpcenv = PVE::RPCEnvironment::get();
>> +    my $authuser = $rpcenv->get_user();
>> +
>> +    my $node = extract_param($param, 'node');
>> +    my $source_vmid = extract_param($param, 'vmid');
>> +    my $target_vmid = extract_param($param, 'target_vmid');
>> +    my $source_digest = extract_param($param, 'digest');
>> +    my $target_digest = extract_param($param, 'target_digest');
>> +    my $drive_name = extract_param($param, 'drive_name');
>> +
>> +    my $storecfg = PVE::Storage::config();
>> +    my $vmlist;
>> +    my $drive;
>> +    my $source_volid;
> Nit: $vmlist and $drive are only ever used within the load_and_check_configs closure, so they can be declared there
> 
>> +
>> +    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
>> +        if $authuser ne 'root@pam';
>> +
>> +    die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
>> +        if $source_vmid eq $target_vmid;
>> +
>> +    my $load_and_check_configs = sub {
>> +        $vmlist = PVE::Cluster::get_vmlist()->{ids};
>> +        die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
>> +        if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
>> +
>> +        my $source_conf = PVE::QemuConfig->load_config($source_vmid);
>> +        PVE::QemuConfig->check_lock($source_conf);
>> +        my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>> +        PVE::QemuConfig->check_lock($target_conf);
>> +
>> +        die "Can't reassign disks from or to templates\n"
>> +        if ($source_conf->{template} || $target_conf->{template});
>> +
>> +        if ($source_digest) {
>> +        eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
>> +        if (my $err = $@) {
>> +            die "VM ${source_vmid}: ${err}";
>> +        }
>> +        }
>> +
>> +        if ($target_digest) {
>> +        eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
>> +        if (my $err = $@) {
>> +            die "VM ${target_vmid}: ${err}";
>> +        }
>> +        }
>> +
>> +        die "Disk '${drive_name}' does not exist\n"
>> +        if !defined($source_conf->{$drive_name});
>> +
>> +        $drive = PVE::QemuServer::parse_drive($drive_name, $source_conf->{$drive_name});
>> +        $source_volid = $drive->{file};
>> +        die "disk '${drive_name}' has no associated volume\n" if !$source_volid;
>> +        die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
>> +        die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
>> +        die "Can't reassign disk used by a snapshot\n"
>> +        if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_name, $source_volid);
>> +
>> +        die "Storage does not support the reassignment of this disk\n"
>> +        if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
>> +
>> +        die "Cannot reassign disk while the source VM is running\n"
>> +        if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/^unused\d+$/;
>> +
>> +        return ($source_conf, $target_conf);
>> +    };
>> +
>> +    my $logfunc = sub {
>> +        my ($msg) = @_;
>> +        print STDERR "$msg\n";
>> +    };
>> +
>> +    my $reassign_func = sub {
>> +        return PVE::QemuConfig->lock_config($source_vmid, sub {
>> +        return PVE::QemuConfig->lock_config($target_vmid, sub {
>> +            my ($source_conf, $target_conf) = &$load_and_check_configs();
>> +
>> +            PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_name} --target_vmid $target_vmid");
>> +
>> +            my $new_volid = PVE::Storage::reassign_volume(
>> +            $storecfg,
>> +            $source_volid,
>> +            $target_vmid,
>> +            );
>> +
>> +            delete $source_conf->{$drive_name};
>> +            print "removing disk '${drive_name}' from VM '${source_vmid}'\n";
>> +            PVE::QemuConfig->write_config($source_vmid, $source_conf);
>> +
>> +            # remove possible replication snapshots
>> +            if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $source_volid)) {
>> +            eval {
>> +                PVE::Replication::prepare(
>> +                $storecfg,
>> +                [$new_volid],
>> +                undef,
>> +                undef,
> 
> To actually remove the replication snapshots, you need to pass 1 for last_sync. undef defaults to 0, which does not remove them: a last_sync of 0 means the VM was stolen, and replication snapshots for stolen VMs are still valid!
> 
> The good news is that patch 4 isn't needed ;)

Thanks a lot for that hint :)
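
So for v7 the cleanup call would become something like the sketch below (not the final patch; the positional parameter names in the comments are my reading of PVE::Replication::prepare's signature, matching the argument order of the call quoted above):

```perl
# Sketch only: pass 1 as last_sync so prepare() treats every
# replication snapshot as stale and removes it, instead of the
# undef/0 "stolen VM" semantics that keeps them.
eval {
    PVE::Replication::prepare(
	$storecfg,
	[$new_volid],
	undef,	# jobid: match snapshots of any replication job
	1,	# last_sync: 1 => remove all replication snapshots
	undef,	# parent_snapname
	$logfunc,
    );
};
if (my $err = $@) {
    print "Failed to remove replication snapshots on reassigned disk. " .
	"Manual cleanup could be necessary.\n";
}
```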

> 
>> +                undef,
>> +                $logfunc,
>> +                )
>> +            };
>> +            if (my $err = $@) {
>> +                print "Failed to remove replication snapshots on reassigned disk. Manual cleanup could be necessary.\n";
>> +            }
>> +            }
>> +
>> +            my $key;
>> +            eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
>> +            if (my $err = $@) {
>> +            print "failed to add reassigned disk '${new_volid}' to VM '${target_vmid}'. Try to free an 'unused' disk slot and run 'qm rescan ${target_vmid}'.\n";
>> +            return 0;
>> +            }
>> +
>> +            print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
>> +            PVE::QemuConfig->write_config($target_vmid, $target_conf);
>> +        });
>> +        });
>> +    };
>> +
>> +    &$load_and_check_configs();
>> +
>> +    return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
>> +    }});
>> +
>>   1;
>> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
>> index 01ea8d7..e938b9b 100644
>> --- a/PVE/QemuServer/Drive.pm
>> +++ b/PVE/QemuServer/Drive.pm
>> @@ -392,6 +392,10 @@ sub valid_drive_names {
>>               'efidisk0');
>>   }
>> +sub valid_drive_names_with_unused {
>> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
>> +}
>> +
>>   sub is_valid_drivename {
>>       my $dev = shift;
>>
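
As a side note, the new helper's map expression is easy to sanity-check in isolation; here is a standalone sketch with $MAX_UNUSED_DISKS hard-coded (256 is its current value in Drive.pm, stated here as an assumption, since the hunk above doesn't show it):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Mirror the map from valid_drive_names_with_unused() above,
# with the package variable replaced by a local assumption.
my $MAX_UNUSED_DISKS = 256;

my @unused = map { "unused$_" } (0 .. $MAX_UNUSED_DISKS - 1);
print scalar(@unused), "\n";    # 256 keys: unused0 .. unused255
print "$unused[0] $unused[-1]\n";
```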





end of thread, other threads:[~2021-04-19  9:26 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-02 10:19 [pve-devel] [PATCH v6 series 0/5] disk reassign: add new feature Aaron Lauterer
2021-04-02 10:19 ` [pve-devel] [PATCH v6 storage 1/1] add disk reassign feature Aaron Lauterer
2021-04-13  7:53   ` Dominic Jäger
2021-04-15 11:07   ` Fabian Ebner
2021-04-15 11:31     ` Fabian Ebner
2021-04-15 11:53       ` Aaron Lauterer
2021-04-15 12:09         ` Fabian Ebner
2021-04-15 12:21           ` Thomas Lamprecht
2021-04-15 12:20         ` Thomas Lamprecht
2021-04-02 10:19 ` [pve-devel] [PATCH v6 qemu-server 2/5] disk reassign: add API endpoint Aaron Lauterer
2021-04-15 11:52   ` Fabian Ebner
2021-04-19  9:26     ` Aaron Lauterer
2021-04-18 15:24   ` Thomas Lamprecht
2021-04-19  9:25     ` Aaron Lauterer
2021-04-02 10:19 ` [pve-devel] [PATCH v6 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
2021-04-18 14:50   ` Thomas Lamprecht
2021-04-02 10:19 ` [pve-devel] [PATCH v6 guest-common 4/5] Replication: mention disk reassign in comment of possible reasons Aaron Lauterer
2021-04-02 10:19 ` [pve-devel] [PATCH v6 manager 5/5] ui: tasks: add qmreassign task description Aaron Lauterer
