* [pve-devel] [PATCH v5 series 0/5] disk reassign: add new feature
From: Aaron Lauterer @ 2020-12-15 12:48 UTC
To: pve-devel
This series implements a new feature which allows users to easily
reassign disks between VMs. Currently this is only possible with one of
the following manual steps:
* rename the disk image/file and do a `qm rescan`
* configure the disk manually and use the old image name, having an
image for VM A assigned to VM B
The latter can cause unexpected behavior because PVE expects that the
VMID in a disk name always corresponds to the VM it is assigned to. Thus,
when a disk originally belonging to VM A is manually configured as a disk
for VM B, deleting VM A will delete that disk as well because it still
has the VMID of VM A in its name.
To issue a reassign from the CLI run:
qm reassign_disk <source VMID> <target VMID> <drive name>
where <drive name> is the config key of the disk, e.g. ide0, scsi1 and so
on.
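For example, to reassign the scsi1 disk of VM 100 to VM 101 (VMIDs picked
just for illustration):
qm reassign_disk 100 101 scsi1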
The following storage types are implemented at the moment:
* dir based ones
* directory
* NFS
* CIFS
* gluster
* ZFS
* (thin) LVM
* Ceph RBD
v4 -> v5:
* rebase on current master
* reorder patches
* rename `drive_key` to `drive_name`
thanks @Dominic for pointing out that there are already a lot of
different names in use for this [0], so no need to invent yet another one ;)
* implemented suggested changes from Fabian [1][2]. More directly in the
patches themselves
v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
in directory based plugins to check against the new `reassign` feature
(not too happy about the repetition for each plugin)
* task description mapping has been moved from widget-toolkit to
pve-manager/utils
v2 -> v3:
* change locking approach
* add more checks
* add intermediate storage plugin for directory based plugins
* use feature flags
* split up the reassign method to have a dedicated method for the
renaming itself
* handle linked clones
* clean up if disk used to be replicated
I hope I didn't forget anything major.
v1 -> v2:
print info about the new disk volid and key at the end of the job so it
shows up in the CLI output and task log
Changes from RFC -> V1:
* support to reassign unused disks
* digest for target vm config
* reorder the checks a bit
* adding another one to check if the given key for the disk even exists
in the config.
[0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/045986.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
[2] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
storage: Aaron Lauterer (1):
add disk reassign feature
PVE/Storage.pm | 15 ++++++++--
PVE/Storage/CIFSPlugin.pm | 19 +++++++++++++
PVE/Storage/DirPlugin.pm | 19 +++++++++++++
PVE/Storage/GlusterfsPlugin.pm | 19 +++++++++++++
PVE/Storage/LVMPlugin.pm | 24 ++++++++++++++++
PVE/Storage/LvmThinPlugin.pm | 1 +
PVE/Storage/NFSPlugin.pm | 19 +++++++++++++
PVE/Storage/Plugin.pm | 51 ++++++++++++++++++++++++++++++++++
PVE/Storage/RBDPlugin.pm | 31 +++++++++++++++++++++
PVE/Storage/ZFSPoolPlugin.pm | 28 +++++++++++++++++++
10 files changed, 224 insertions(+), 2 deletions(-)
qemu-server: Aaron Lauterer (2):
disk reassign: add API endpoint
cli: disk reassign: add reassign_disk to qm command
PVE/API2/Qemu.pm | 151 ++++++++++++++++++++++++++++++++++++++++
PVE/CLI/qm.pm | 2 +
PVE/QemuServer/Drive.pm | 4 ++
3 files changed, 157 insertions(+)
guest-common: Aaron Lauterer (1):
Replication: mention disk reassign in comment of possible reasons
PVE/Replication.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
manager: Aaron Lauterer (1):
ui: tasks: add qmreassign task description
www/manager6/Utils.js | 1 +
1 file changed, 1 insertion(+)
--
2.20.1
* [pve-devel] [PATCH v5 storage 1/5] add disk reassign feature
From: Aaron Lauterer @ 2020-12-15 12:48 UTC
To: pve-devel
Functionality has been added for the following storage types:
* dir based ones
* directory
* NFS
* CIFS
* gluster
* ZFS
* (thin) LVM
* Ceph
A new feature `reassign` has been introduced to mark which storage
plugins support the feature.
The storage API version and age have been bumped.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v4 -> v5:
* rebased on master
* bumped api ver and api age
* rephrased "not implemented" message as suggested [0].
v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
in directory based plugins to check against the new `reassign` feature
(not too happy about the repetition for each plugin)
v2 -> v3:
* added feature flags instead of dummy "not implemented" methods in
plugins which do not support it as that would break compatibility with
3rd party plugins.
Had to make $features available outside the `has_features` method in
Plugin.pm in order to be able to individually add features in the
`BaseDirPlugin.pm`.
* added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
this is explained in the commit message
* moved the actual renaming from the `reassign_volume` to a dedicated
`rename_volume` method to make this functionality available to other
possible uses in the future.
* added support for linked clones ($base)
rfc -> v1 -> v2: nothing changed
[0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
PVE/Storage.pm | 15 ++++++++--
PVE/Storage/CIFSPlugin.pm | 19 +++++++++++++
PVE/Storage/DirPlugin.pm | 19 +++++++++++++
PVE/Storage/GlusterfsPlugin.pm | 19 +++++++++++++
PVE/Storage/LVMPlugin.pm | 24 ++++++++++++++++
PVE/Storage/LvmThinPlugin.pm | 1 +
PVE/Storage/NFSPlugin.pm | 19 +++++++++++++
PVE/Storage/Plugin.pm | 51 ++++++++++++++++++++++++++++++++++
PVE/Storage/RBDPlugin.pm | 31 +++++++++++++++++++++
PVE/Storage/ZFSPoolPlugin.pm | 26 +++++++++++++++++
10 files changed, 222 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index aded60e..ca3b7bd 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
use PVE::Storage::PBSPlugin;
# Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 8;
+use constant APIVER => 9;
# Age is the number of versions we're backward compatible with.
# This is like having 'current=APIVER' and age='APIAGE' in libtool,
# see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 7;
+use constant APIAGE => 8;
# load standard plugins
PVE::Storage::DirPlugin->register();
@@ -351,6 +351,7 @@ sub volume_snapshot_needs_fsfreeze {
# snapshot - taking a snapshot is possible
# sparseinit - volume is sparsely initialized
# template - conversion to base image is possible
+# reassign - reassigning disks to another guest is possible
# $snap - check if the feature is supported for a given snapshot
# $running - if the guest owning the volume is running
# $opts - hash with further options:
@@ -1843,6 +1844,16 @@ sub complete_volume {
return $res;
}
+sub reassign_volume {
+ my ($cfg, $volid, $target_vmid) = @_;
+
+ my ($storeid, $volname) = parse_volume_id($volid);
+ my $scfg = storage_config($cfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+ return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
# Various io-heavy operations require io/bandwidth limits which can be
# configured on multiple levels: The global defaults in datacenter.cfg, and
# per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
index be06cc7..f2ff5b4 100644
--- a/PVE/Storage/CIFSPlugin.pm
+++ b/PVE/Storage/CIFSPlugin.pm
@@ -293,4 +293,23 @@ sub update_volume_notes {
PVE::Storage::DirPlugin::update_volume_notes($class, @_);
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+ return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+ my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ if ($feature eq 'reassign') {
+ return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
+ return undef;
+ } else {
+ return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
+ }
+}
+
1;
diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
index 2267f11..f4cb4bb 100644
--- a/PVE/Storage/DirPlugin.pm
+++ b/PVE/Storage/DirPlugin.pm
@@ -156,4 +156,23 @@ sub check_config {
return $opts;
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+ return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+ my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ if ($feature eq 'reassign') {
+ return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
+ return undef;
+ } else {
+ return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
+ }
+}
+
1;
diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index 2dd414d..5c9fd78 100644
--- a/PVE/Storage/GlusterfsPlugin.pm
+++ b/PVE/Storage/GlusterfsPlugin.pm
@@ -348,4 +348,23 @@ sub check_connection {
return defined($server) ? 1 : 0;
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+ return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+ my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ if ($feature eq 'reassign') {
+ return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
+ return undef;
+ } else {
+ return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
+ }
+}
+
1;
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 73e8e48..19898fa 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -339,6 +339,13 @@ sub lvcreate {
run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
}
+sub lvrename {
+ my ($vg, $oldname, $newname) = @_;
+
+ my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
+ run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
+}
+
sub alloc_image {
my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
@@ -583,6 +590,7 @@ sub volume_has_feature {
my $features = {
copy => { base => 1, current => 1},
+ reassign => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -683,4 +691,20 @@ sub volume_import_write {
input => '<&'.fileno($input_fh));
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+ $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+ return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid);
+ });
+}
+
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid) = @_;
+
+ lvrename($scfg->{vgname}, $source_volname, $target_volname);
+ return "${storeid}:${target_volname}";
+}
+
1;
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index c9e127c..895af8b 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -355,6 +355,7 @@ sub volume_has_feature {
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
+ reassign => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
index e8e27c0..b5a8f75 100644
--- a/PVE/Storage/NFSPlugin.pm
+++ b/PVE/Storage/NFSPlugin.pm
@@ -180,4 +180,23 @@ sub update_volume_notes {
PVE::Storage::DirPlugin::update_volume_notes($class, @_);
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+ return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+ my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+ my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+ $class->parse_volname($volname);
+
+ if ($feature eq 'reassign') {
+ return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
+ return undef;
+ } else {
+ return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
+ }
+}
+
1;
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 57c58a9..aa496e2 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -925,6 +925,7 @@ sub storage_can_replicate {
return 0;
}
+
sub volume_has_feature {
my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
@@ -1454,4 +1455,54 @@ sub volume_import_formats {
return ();
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+ die "not implemented in storage plugin '$class'\n";
+}
+
+# general reassignment method for file/directory based storages
+sub file_reassign_volume {
+ my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+ my $base;
+ my $base_vmid;
+ my $format;
+ my $source_vmid;
+
+ (undef, $volname, $source_vmid, $base, $base_vmid, undef, $format) = $class->parse_volname($volname);
+
+ # parse_volname strips the directory from volname
+ my $source_volname = "${source_vmid}/${volname}";
+
+ if ($base) {
+ $base = "${base_vmid}/${base}/";
+ } else {
+ $base = '';
+ }
+
+ $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
+
+ return $class->rename_volume($scfg, $storeid, $source_volname, $target_volname, $target_vmid, $base);
+ });
+}
+
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
+
+ my $basedir = $class->get_subdir($scfg, 'images');
+ my $imagedir = "${basedir}/${vmid}";
+
+ mkpath $imagedir;
+
+ my $old_path = "${basedir}/${source_volname}";
+ my $new_path = "${imagedir}/${target_volname}";
+
+ rename($old_path, $new_path) ||
+ die "rename '$old_path' to '$new_path' failed - $!\n";
+
+ return "${storeid}:${base}${vmid}/${target_volname}";
+}
+
+
1;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index fab6d57..86a9a0d 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -712,6 +712,7 @@ sub volume_has_feature {
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
+ reassign => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -728,4 +729,34 @@ sub volume_has_feature {
return undef;
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+ my $base;
+ (undef, $volname, undef, $base) = $class->parse_volname($volname);
+
+ if ($base) {
+ $base = $base . '/';
+ } else {
+ $base = '';
+ }
+
+ $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+ return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
+ });
+}
+
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
+
+ my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
+
+ run_rbd_command(
+ $cmd,
+ errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
+ );
+ return "${storeid}:${base}${target_volname}";
+}
+
1;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 3054331..adccddf 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -659,6 +659,7 @@ sub volume_has_feature {
copy => { base => 1, current => 1},
sparseinit => { base => 1, current => 1},
replicate => { base => 1, current => 1},
+ reassign => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -761,4 +762,29 @@ sub volume_import_formats {
return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+ my $base;
+ (undef, $volname, undef, $base) = $class->parse_volname($volname);
+
+ if ($base) {
+ $base = $base . '/';
+ } else {
+ $base = '';
+ }
+
+ $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+ return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
+ });
+}
+
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
+
+ $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
+ return "${storeid}:${base}${target_volname}";
+}
+
1;
--
2.20.1
* [pve-devel] [PATCH v5 qemu-server 2/5] disk reassign: add API endpoint
From: Aaron Lauterer @ 2020-12-15 12:48 UTC
To: pve-devel
The goal of this new API endpoint is to provide an easy way to move a
disk between VMs as this was only possible with manual intervention
until now. Either by renaming the VM disk or by manually adding the
disks volid to the config of the other VM.
The latter can easily cause unexpected behavior, such as a disk attached
to VM B being deleted if it used to be a disk of VM A. This happens
because PVE assumes that the VMID in the volname always matches the VM
the disk is attached to and thus would remove any disk with VMID A when
VM A is deleted.
The term `reassign` was chosen as it is not yet used
for VM disks.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v4 -> v5:
* implemented suggestions from Fabian [1]
* logging before action
* improving description
* improving error messages
* using Replication::prepare to remove replication snapshots
* check if disk is physical disk using /dev/...
v3 -> v4: nothing
v2 -> v3:
* reordered the locking as discussed with fabian [0] to
run checks
fork worker
lock source config
lock target config
run checks
...
* added more checks
* will not reassign to or from templates
* will not reassign if VM has snapshots present
* cleanup if disk used to be replicated
* made task log slightly more verbose
* integrated general recommendations regarding code
* renamed `disk` to `drive_key`
* prepended some vars with `source_` for easier distinction
v1 -> v2: print config key and volid info at the end of the job so it
shows up on the CLI and task log
rfc -> v1:
* add support to reassign unused disks
* add support to provide a config digest for the target vm
* add additional check if disk key is present in config
* reorder checks a bit
In order to support unused disk I had to extend
PVE::QemuServer::Drive::valid_drive_names for the API parameter
validation.
Checks are ordered so that cheap tests are run at the first chance to
fail early.
The check if both VMs are present on the node is a bit redundant because
locking the config files will fail if the VM is not present. But with
the additional check we can provide a useful error message to the user
instead of a "Configuration file xyz does not exist" error.
[0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
PVE/API2/Qemu.pm | 151 ++++++++++++++++++++++++++++++++++++++++
PVE/QemuServer/Drive.pm | 4 ++
2 files changed, 155 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 00d98ab..614f068 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
use PVE::VZDump::Plugin;
use PVE::DataCenterConfig;
use PVE::SSHInfo;
+use PVE::Replication;
BEGIN {
if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -4319,4 +4320,154 @@ __PACKAGE__->register_method({
return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
}});
+__PACKAGE__->register_method({
+ name => 'reassign_vm_disk',
+ path => '{vmid}/reassign_disk',
+ method => 'POST',
+ protected => 1,
+ proxyto => 'node',
+ description => "Reassign a disk to another VM",
+ permissions => {
+ description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
+ check => [ 'and',
+ ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
+ ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
+ ],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+ target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+ drive_name => {
+ type => 'string',
+ description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
+ enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+ },
+ digest => {
+ type => 'string',
+ description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+ maxLength => 40,
+ optional => 1,
+ },
+ target_digest => {
+ type => 'string',
+ description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+ maxLength => 40,
+ optional => 1,
+ },
+ },
+ },
+ returns => {
+ type => 'string',
+ description => "the task ID.",
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+
+ my $node = extract_param($param, 'node');
+ my $source_vmid = extract_param($param, 'vmid');
+ my $target_vmid = extract_param($param, 'target_vmid');
+ my $source_digest = extract_param($param, 'digest');
+ my $target_digest = extract_param($param, 'target_digest');
+ my $drive_name = extract_param($param, 'drive_name');
+
+ my $storecfg = PVE::Storage::config();
+ my $vmlist;
+ my $drive;
+ my $source_volid;
+
+ die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
+ if $source_vmid eq $target_vmid;
+
+ my $load_and_check_configs = sub {
+ $vmlist = PVE::Cluster::get_vmlist()->{ids};
+ die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
+ if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+ my $source_conf = PVE::QemuConfig->load_config($source_vmid);
+ PVE::QemuConfig->check_lock($source_conf);
+ my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+ PVE::QemuConfig->check_lock($target_conf);
+
+ die "Can't reassign disks from or to templates\n"
+ if ($source_conf->{template} || $target_conf->{template});
+
+ if ($source_digest) {
+ eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
+ if (my $err = $@) {
+ die "VM ${source_vmid}: ${err}";
+ }
+ }
+
+ if ($target_digest) {
+ eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+ if (my $err = $@) {
+ die "VM ${target_vmid}: ${err}";
+ }
+ }
+
+ die "Disk '${drive_name}' does not exist\n"
+ if !defined($source_conf->{$drive_name});
+
+ $drive = PVE::QemuServer::parse_drive($drive_name, $source_conf->{$drive_name});
+ $source_volid = $drive->{file};
+ die "disk '${drive_name}' has no associated volume\n" if !$source_volid;
+ die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
+ die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
+ die "Can't reassign disk used by a snapshot\n"
+ if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_name, $source_volid);
+
+ die "Storage does not support the reassignment of this disk\n"
+ if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
+
+ die "Cannot reassign disk while the source VM is running\n"
+ if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/unused[0-9]/;
+
+ return ($source_conf, $target_conf);
+ };
+
+ my $logfunc = sub {
+ my ($msg) = @_;
+ print STDERR "$msg\n";
+ };
+
+ my $reassign_func = sub {
+ return PVE::QemuConfig->lock_config($source_vmid, sub {
+ return PVE::QemuConfig->lock_config($target_vmid, sub {
+ my ($source_conf, $target_conf) = &$load_and_check_configs();
+
+ PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_name} --target_vmid $target_vmid");
+
+ my $new_volid = PVE::Storage::reassign_volume($storecfg, $source_volid, $target_vmid);
+
+ delete $source_conf->{$drive_name};
+ print "removing disk '${drive_name}' from VM '${source_vmid}'\n";
+ PVE::QemuConfig->write_config($source_vmid, $source_conf);
+
+ # remove possible replication snapshots
+ PVE::Replication::prepare($storecfg, [$new_volid], undef, undef, undef, $logfunc);
+
+ my $key;
+ eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
+ if (my $err = $@) {
+ print "failed to add reassigned disk '${new_volid}' to VM '${target_vmid}'. Try to free an 'unused' disk slot and run 'qm rescan ${target_vmid}'.\n";
+ return 0;
+ }
+
+ print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
+ PVE::QemuConfig->write_config($target_vmid, $target_conf);
+ });
+ });
+ };
+
+ &$load_and_check_configs();
+
+ return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
+ }});
+
1;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index d560937..95c0538 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -383,6 +383,10 @@ sub valid_drive_names {
'efidisk0');
}
+sub valid_drive_names_with_unused {
+ return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
+}
+
sub is_valid_drivename {
my $dev = shift;
--
2.20.1
* [pve-devel] [PATCH v5 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command
From: Aaron Lauterer @ 2020-12-15 12:48 UTC
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v4 -> v5: renamed `drive_key` to `drive_name`
v3 -> v4: nothing
v2 -> v3: renamed parameter `disk` to `drive_key`
rfc -> v1 -> v2: nothing changed
PVE/CLI/qm.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index b9b6051..ba552eb 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -908,6 +908,8 @@ our $cmddef = {
move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ reassign_disk => [ "PVE::API2::Qemu", 'reassign_vm_disk', ['vmid', 'target_vmid', 'drive_name'], { node => $nodename } ],
+
unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
config => [ "PVE::API2::Qemu", 'vm_config', ['vmid'],
--
2.20.1
* [pve-devel] [PATCH v5 guest-common 4/5] Replication: mention disk reassign in comment of possible reasons
From: Aaron Lauterer @ 2020-12-15 12:48 UTC
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v5: newly added
PVE/Replication.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Replication.pm b/PVE/Replication.pm
index ae0f145..47f5434 100644
--- a/PVE/Replication.pm
+++ b/PVE/Replication.pm
@@ -169,7 +169,7 @@ sub prepare {
# logfunc will written in replication log.
$logfunc->("delete stale replication snapshot error: $err");
}
- # Last_sync=0 and a replication snapshot only occur, if the VM was stolen
+ # Last_sync=0 and a replication snapshot only occur, if the VM was stolen or the disk reassigned
} else {
$last_snapshots->{$volid}->{$snap} = 1;
}
--
2.20.1
* [pve-devel] [PATCH v5 manager 5/5] ui: tasks: add qmreassign task description
From: Aaron Lauterer @ 2020-12-15 12:48 UTC
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v4->v5: rebased
www/manager6/Utils.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 5b6a0221..f70e901f 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1707,6 +1707,7 @@ Ext.define('PVE.Utils', { utilities: {
qmigrate: ['VM', gettext('Migrate')],
qmmove: ['VM', gettext('Move disk')],
qmpause: ['VM', gettext('Pause')],
+ qmreassign: ['VM', gettext('Reassign disk')],
qmreboot: ['VM', gettext('Reboot')],
qmreset: ['VM', gettext('Reset')],
qmrestore: ['VM', gettext('Restore')],
--
2.20.1
* Re: [pve-devel] [PATCH v5 series 0/5] disk reassign: add new feature
From: Aaron Lauterer @ 2021-02-15 14:59 UTC
To: pve-devel
Bump in case this has been missed. :)
The patches should still apply fine. (just tested)
On 12/15/20 1:48 PM, Aaron Lauterer wrote:
> This series implements a new feature which allows users to easily
> reassign disks between VMs. Currently this is only possible with one of
> the following manual steps:
>
> * rename the disk image/file and do a `qm rescan`
> * configure the disk manually and use the old image name, having an
> image for VM A assigned to VM B
>
> The latter can cause unexpected behavior because PVE expects that the
> VMID in a disk name always corresponds to the VM it is assigned to. Thus,
> when a disk originally belonging to VM A is manually configured as a disk
> for VM B, deleting VM A will delete that disk as well because it still
> has the VMID of VM A in its name.
>
> To issue a reassign from the CLI run:
>
> qm reassign_disk <source VMID> <target VMID> <drive name>
>
> where <drive name> is the config key of the disk, e.g. ide0, scsi1 and so
> on.
>
> The following storage types are implemented at the moment:
> * dir based ones
> * directory
> * NFS
> * CIFS
> * gluster
> * ZFS
> * (thin) LVM
> * Ceph RBD
>
>
> v4 -> v5:
> * rebase on current master
> * reorder patches
> * rename `drive_key` to `drive_name`
> thanks @Dominic for pointing out that there are already a lot of
> different names in use for this [0], so no need to invent yet another one ;)
> * implemented suggested changes from Fabian [1][2]. More directly in the
> patches themselves
>
> v3 -> v4:
> * revert intermediate storage plugin for directory based plugins
> * add a `die "not supported"` method in Plugin.pm
> * dir based plugins now call the file_reassign_volume method in
> Plugin.pm as the generic file/directory based method
> * restored old `volume_has_feature` method in Plugin.pm and override it
> in directory based plugins to check against the new `reassign` feature
> (not too happy about the repetition for each plugin)
> * task description mapping has been moved from widget-toolkit to
> pve-manager/utils
>
>
> v2 -> v3:
> * change locking approach
> * add more checks
> * add intermediate storage plugin for directory based plugins
> * use feature flags
> * split up the reassign method to have a dedicated method for the
> renaming itself
> * handle linked clones
> * clean up if disk used to be replicated
>
> I hope I didn't forget anything major.
>
> v1 -> v2:
> print info about the new disk volid and key at the end of the job so it
> shows up in the CLI output and task log
>
> Changes from RFC -> V1:
> * support to reassign unused disks
> * digest for target vm config
> * reorder the checks a bit
> * adding another one to check if the given key for the disk even exists
> in the config.
>
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/045986.html
> [1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
> [2] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
>
> storage: Aaron Lauterer (1):
> add disk reassign feature
>
> PVE/Storage.pm | 15 ++++++++--
> PVE/Storage/CIFSPlugin.pm | 19 +++++++++++++
> PVE/Storage/DirPlugin.pm | 19 +++++++++++++
> PVE/Storage/GlusterfsPlugin.pm | 19 +++++++++++++
> PVE/Storage/LVMPlugin.pm | 24 ++++++++++++++++
> PVE/Storage/LvmThinPlugin.pm | 1 +
> PVE/Storage/NFSPlugin.pm | 19 +++++++++++++
> PVE/Storage/Plugin.pm | 51 ++++++++++++++++++++++++++++++++++
> PVE/Storage/RBDPlugin.pm | 31 +++++++++++++++++++++
> PVE/Storage/ZFSPoolPlugin.pm | 28 +++++++++++++++++++
> 10 files changed, 224 insertions(+), 2 deletions(-)
>
>
> qemu-server: Aaron Lauterer (2):
> disk reassign: add API endpoint
> cli: disk reassign: add reassign_disk to qm command
>
> PVE/API2/Qemu.pm | 151 ++++++++++++++++++++++++++++++++++++++++
> PVE/CLI/qm.pm | 2 +
> PVE/QemuServer/Drive.pm | 4 ++
> 3 files changed, 157 insertions(+)
>
>
> guest-common: Aaron Lauterer (1):
> Replication: mention disk reassign in comment of possible reasons
>
> PVE/Replication.pm | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
> manager: Aaron Lauterer (1):
> ui: tasks: add qmreassign task description
>
> www/manager6/Utils.js | 1 +
> 1 file changed, 1 insertion(+)
>
>
* Re: [pve-devel] [PATCH v5 qemu-server 2/5] disk reassign: add API endpoint
From: Fabian Grünbichler @ 2021-03-31 9:23 UTC
To: Proxmox VE development discussion
sorry for the long pause - haven't checked the GUI part, but that should
be unaffected by my comments in-line.
On December 15, 2020 1:48 pm, Aaron Lauterer wrote:
> The goal of this new API endpoint is to provide an easy way to move a
> disk between VMs as this was only possible with manual intervention
> until now. Either by renaming the VM disk or by manually adding the
> disks volid to the config of the other VM.
>
> The latter can easily cause unexpected behavior, such as a disk attached
> to VM B being deleted if it used to be a disk of VM A. This happens
> because PVE assumes that the VMID in the volname always matches the VM
> the disk is attached to and thus would remove any disk with VMID A when
> VM A is deleted.
>
> The term `reassign` was chosen as it is not yet used
> for VM disks.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v4 -> v5:
> * implemented suggestions from Fabian [1]
> * logging before action
> * improving description
> * improving error messages
> * using Replication::prepare to remove replication snapshots
> * check if disk is physical disk using /dev/...
>
> v3 -> v4: nothing
>
> v2 -> v3:
> * reordered the locking as discussed with fabian [0] to
> run checks
> fork worker
> lock source config
> lock target config
> run checks
> ...
>
> * added more checks
> * will not reassign to or from templates
> * will not reassign if VM has snapshots present
> * cleanup if disk used to be replicated
> * made task log slightly more verbose
> * integrated general recommendations regarding code
> * renamed `disk` to `drive_key`
> * prepended some vars with `source_` for easier distinction
>
> v1 -> v2: print config key and volid info at the end of the job so it
> shows up on the CLI and task log
>
> rfc -> v1:
> * add support to reassign unused disks
> * add support to provide a config digest for the target vm
> * add additional check if disk key is present in config
> * reorder checks a bit
>
> In order to support unused disk I had to extend
> PVE::QemuServer::Drive::valid_drive_names for the API parameter
> validation.
>
> Checks are ordered so that cheap tests are run at the first chance to
> fail early.
>
> The check if both VMs are present on the node is a bit redundant because
> locking the config files will fail if the VM is not present. But with
> the additional check we can provide a useful error message to the user
> instead of a "Configuration file xyz does not exist" error.
>
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
> [1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
>
> PVE/API2/Qemu.pm | 151 ++++++++++++++++++++++++++++++++++++++++
> PVE/QemuServer/Drive.pm | 4 ++
> 2 files changed, 155 insertions(+)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 00d98ab..614f068 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
> use PVE::VZDump::Plugin;
> use PVE::DataCenterConfig;
> use PVE::SSHInfo;
> +use PVE::Replication;
>
> BEGIN {
> if (!$ENV{PVE_GENERATING_DOCS}) {
> @@ -4319,4 +4320,154 @@ __PACKAGE__->register_method({
> return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
> }});
>
> +__PACKAGE__->register_method({
> + name => 'reassign_vm_disk',
> + path => '{vmid}/reassign_disk',
> + method => 'POST',
> + protected => 1,
> + proxyto => 'node',
> + description => "Reassign a disk to another VM",
> + permissions => {
> + description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
> + check => [ 'and',
> + ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> + ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
> + ],
did you actually check this works? you can check the source vmid here,
but the rest as described in the schema needs to be manually checked
down below..
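i.e. something like the following (untested sketch), placed in the checks
below once $source_volid has been resolved, since the schema-based check
can only cover the source vmid:
    # hypothetical manual checks for the target VM and the disk's storage
    $rpcenv->check($authuser, "/vms/${target_vmid}", ['VM.Config.Disk']);
    my ($storeid) = PVE::Storage::parse_volume_id($source_volid);
    $rpcenv->check($authuser, "/storage/${storeid}", ['Datastore.Allocate']);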
> + },
> + parameters => {
> + additionalProperties => 0,
> + properties => {
> + node => get_standard_option('pve-node'),
> + vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> + target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> + drive_name => {
> + type => 'string',
> + description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
> + enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> + },
> + digest => {
> + type => 'string',
> + description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> + maxLength => 40,
> + optional => 1,
> + },
> + target_digest => {
> + type => 'string',
> + description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> + maxLength => 40,
> + optional => 1,
> + },
> + },
> + },
> + returns => {
> + type => 'string',
> + description => "the task ID.",
> + },
> + code => sub {
> + my ($param) = @_;
> +
> + my $rpcenv = PVE::RPCEnvironment::get();
> + my $authuser = $rpcenv->get_user();
> +
> + my $node = extract_param($param, 'node');
> + my $source_vmid = extract_param($param, 'vmid');
> + my $target_vmid = extract_param($param, 'target_vmid');
> + my $source_digest = extract_param($param, 'digest');
> + my $target_digest = extract_param($param, 'target_digest');
> + my $drive_name = extract_param($param, 'drive_name');
> +
> + my $storecfg = PVE::Storage::config();
> + my $vmlist;
> + my $drive;
> + my $source_volid;
> +
> + die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
> + if $source_vmid eq $target_vmid;
> +
> + my $load_and_check_configs = sub {
> + $vmlist = PVE::Cluster::get_vmlist()->{ids};
> + die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
> + if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
> +
> + my $source_conf = PVE::QemuConfig->load_config($source_vmid);
> + PVE::QemuConfig->check_lock($source_conf);
> + my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> + PVE::QemuConfig->check_lock($target_conf);
> +
> + die "Can't reassign disks from or to templates\n"
> + if ($source_conf->{template} || $target_conf->{template});
> +
> + if ($source_digest) {
> + eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
> + if (my $err = $@) {
> + die "VM ${source_vmid}: ${err}";
> + }
> + }
> +
> + if ($target_digest) {
> + eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> + if (my $err = $@) {
> + die "VM ${target_vmid}: ${err}";
> + }
> + }
> +
> + die "Disk '${drive_name}' does not exist\n"
> + if !defined($source_conf->{$drive_name});
> +
> + $drive = PVE::QemuServer::parse_drive($drive_name, $source_conf->{$drive_name});
> + $source_volid = $drive->{file};
> + die "disk '${drive_name}' has no associated volume\n" if !$source_volid;
> + die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
> + die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
> + die "Can't reassign disk used by a snapshot\n"
> + if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_name, $source_volid);
> +
> + die "Storage does not support the reassignment of this disk\n"
> + if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
> +
> + die "Cannot reassign disk while the source VM is running\n"
> + if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/unused[0-9]/;
we use /^unused\d+$/ in other places for this, which is a bit less
error-prone in case we ever add other "unused" keys..
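e.g. the check could then read (untested):
    die "Cannot reassign disk while the source VM is running\n"
	if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/^unused\d+$/;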
> +
> + return ($source_conf, $target_conf);
> + };
> +
> + my $logfunc = sub {
> + my ($msg) = @_;
> + print STDERR "$msg\n";
> + };
> +
> + my $reassign_func = sub {
> + return PVE::QemuConfig->lock_config($source_vmid, sub {
> + return PVE::QemuConfig->lock_config($target_vmid, sub {
> + my ($source_conf, $target_conf) = &$load_and_check_configs();
> +
> + PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_name} --target_vmid $target_vmid");
> +
> + my $new_volid = PVE::Storage::reassign_volume($storecfg, $source_volid, $target_vmid);
> +
> + delete $source_conf->{$drive_name};
> + print "removing disk '${drive_name}' from VM '${source_vmid}'\n";
> + PVE::QemuConfig->write_config($source_vmid, $source_conf);
> +
> + # remove possible replication snapshots
> + PVE::Replication::prepare($storecfg, [$new_volid], undef, undef, undef, $logfunc);
might make sense to wrap this in an eval as well, and print an
appropriate warning that manual cleanup might be required here as well..
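something along these lines (just a sketch, exact wording up to you):
    # remove possible replication snapshots, but don't abort on failure
    eval { PVE::Replication::prepare($storecfg, [$new_volid], undef, undef, undef, $logfunc) };
    if (my $err = $@) {
	print "failed to remove replication snapshots of '${new_volid}' - manual cleanup might be required: ${err}\n";
    }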
> +
> + my $key;
> + eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
> + if (my $err = $@) {
> + print "failed to add reassigned disk '${new_volid}' to VM '${target_vmid}'. Try to free an 'unused' disk slot and run 'qm rescan ${target_vmid}'.\n";
> + return 0;
> + }
> +
> + print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
> + PVE::QemuConfig->write_config($target_vmid, $target_conf);
> + });
> + });
> + };
> +
> + &$load_and_check_configs();
> +
> + return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
> + }});
> +
> 1;
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index d560937..95c0538 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -383,6 +383,10 @@ sub valid_drive_names {
> 'efidisk0');
> }
>
> +sub valid_drive_names_with_unused {
> + return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
> +}
> +
> sub is_valid_drivename {
> my $dev = shift;
>
> --
> 2.20.1
>
>
>
* Re: [pve-devel] [PATCH v5 storage 1/5] add disk reassign feature
From: Fabian Grünbichler @ 2021-03-31 9:34 UTC
To: Proxmox VE development discussion
maybe I am missing something, but AFAICT the volume_has_feature check
for the dir based plugins could also be moved to Plugin.pm, since it is
always identical:
> +sub volume_has_feature {
> + my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> +
> + if ($feature eq 'reassign') {
> + return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> + return undef;
> + } else {
> + return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
> + }
> +}
could then become
sub volume_has_feature {
    my $class = shift;
    my ($scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;

    return $class->file_can_reassign_volume($scfg, $storeid, $volname)
	if $feature eq 'reassign';

    return $class->SUPER::volume_has_feature(@_);
}
with file_can_reassign_volume calling parse_volname (which is shared for
all dir based plugins!).
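a minimal sketch of that helper, reusing the format whitelist from the
per-plugin copies above (name and exact signature are just a suggestion):
    sub file_can_reassign_volume {
	my ($class, $scfg, $storeid, $volname) = @_;

	my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
	    $class->parse_volname($volname);

	return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
	return undef;
    }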
I guess theoretically we could also support renaming subvols (it's just
a dir rename?), but it's a pretty legacy use case anyway, so we can leave
it out until someone complains.
On December 15, 2020 1:48 pm, Aaron Lauterer wrote:
> Functionality has been added for the following storage types:
>
> * dir based ones
> * directory
> * NFS
> * CIFS
> * gluster
> * ZFS
> * (thin) LVM
> * Ceph
>
> A new feature `reassign` has been introduced to mark which storage
> plugins support the feature.
>
> The storage API version and age have been bumped.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v4 -> v5:
> * rebased on master
> * bumped api ver and api age
> * rephrased "not implemented" message as suggested [0].
>
> v3 -> v4:
> * revert intermediate storage plugin for directory based plugins
> * add a `die "not supported"` method in Plugin.pm
> * dir based plugins now call the file_reassign_volume method in
> Plugin.pm as the generic file/directory based method
> * restored old `volume_has_feature` method in Plugin.pm and override it
> in directory based plugins to check against the new `reassign` feature
> (not too happy about the repetition for each plugin)
>
> v2 -> v3:
> * added feature flags instead of dummy "not implemented" methods in
> plugins which do not support it as that would break compatibility with
> 3rd party plugins.
> Had to make $features available outside the `has_features` method in
> Plugin.pm in order to be able to individually add features in the
> `BaseDirPlugin.pm`.
> * added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
> this is explained in the commit message
> * moved the actual renaming from the `reassign_volume` to a dedicated
> `rename_volume` method to make this functionality available to other
> possible uses in the future.
> * added support for linked clones ($base)
>
>
> rfc -> v1 -> v2: nothing changed
>
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
>
> PVE/Storage.pm | 15 ++++++++--
> PVE/Storage/CIFSPlugin.pm | 19 +++++++++++++
> PVE/Storage/DirPlugin.pm | 19 +++++++++++++
> PVE/Storage/GlusterfsPlugin.pm | 19 +++++++++++++
> PVE/Storage/LVMPlugin.pm | 24 ++++++++++++++++
> PVE/Storage/LvmThinPlugin.pm | 1 +
> PVE/Storage/NFSPlugin.pm | 19 +++++++++++++
> PVE/Storage/Plugin.pm | 51 ++++++++++++++++++++++++++++++++++
> PVE/Storage/RBDPlugin.pm | 31 +++++++++++++++++++++
> PVE/Storage/ZFSPoolPlugin.pm | 26 +++++++++++++++++
> 10 files changed, 222 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index aded60e..ca3b7bd 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
> use PVE::Storage::PBSPlugin;
>
> # Storage API version. Increment it on changes in storage API interface.
> -use constant APIVER => 8;
> +use constant APIVER => 9;
> # Age is the number of versions we're backward compatible with.
> # This is like having 'current=APIVER' and age='APIAGE' in libtool,
> # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
> -use constant APIAGE => 7;
> +use constant APIAGE => 8;
>
> # load standard plugins
> PVE::Storage::DirPlugin->register();
> @@ -351,6 +351,7 @@ sub volume_snapshot_needs_fsfreeze {
> # snapshot - taking a snapshot is possible
> # sparseinit - volume is sparsely initialized
> # template - conversion to base image is possible
> +# reassign - reassigning disks to another guest is possible
> # $snap - check if the feature is supported for a given snapshot
> # $running - if the guest owning the volume is running
> # $opts - hash with further options:
> @@ -1843,6 +1844,16 @@ sub complete_volume {
> return $res;
> }
>
> +sub reassign_volume {
> + my ($cfg, $volid, $target_vmid) = @_;
> +
> + my ($storeid, $volname) = parse_volume_id($volid);
> + my $scfg = storage_config($cfg, $storeid);
> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> + return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> # Various io-heavy operations require io/bandwidth limits which can be
> # configured on multiple levels: The global defaults in datacenter.cfg, and
> # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
> index be06cc7..f2ff5b4 100644
> --- a/PVE/Storage/CIFSPlugin.pm
> +++ b/PVE/Storage/CIFSPlugin.pm
> @@ -293,4 +293,23 @@ sub update_volume_notes {
> PVE::Storage::DirPlugin::update_volume_notes($class, @_);
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> + return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> + my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> +
> + if ($feature eq 'reassign') {
> + return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> + return undef;
> + } else {
> + return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
> + }
> +}
> +
> 1;
> diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
> index 2267f11..f4cb4bb 100644
> --- a/PVE/Storage/DirPlugin.pm
> +++ b/PVE/Storage/DirPlugin.pm
> @@ -156,4 +156,23 @@ sub check_config {
> return $opts;
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> + return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> + my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> +
> + if ($feature eq 'reassign') {
> + return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> + return undef;
> + } else {
> + return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
> + }
> +}
> +
> 1;
> diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
> index 2dd414d..5c9fd78 100644
> --- a/PVE/Storage/GlusterfsPlugin.pm
> +++ b/PVE/Storage/GlusterfsPlugin.pm
> @@ -348,4 +348,23 @@ sub check_connection {
> return defined($server) ? 1 : 0;
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> + return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> + my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> +
> + if ($feature eq 'reassign') {
> + return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> + return undef;
> + } else {
> + return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
> + }
> +}
> +
> 1;
> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
> index 73e8e48..19898fa 100644
> --- a/PVE/Storage/LVMPlugin.pm
> +++ b/PVE/Storage/LVMPlugin.pm
> @@ -339,6 +339,13 @@ sub lvcreate {
> run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
> }
>
> +sub lvrename {
> + my ($vg, $oldname, $newname) = @_;
> +
> + my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
> + run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
> +}
> +
> sub alloc_image {
> my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>
> @@ -583,6 +590,7 @@ sub volume_has_feature {
>
> my $features = {
> copy => { base => 1, current => 1},
> + reassign => {current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -683,4 +691,20 @@ sub volume_import_write {
> input => '<&'.fileno($input_fh));
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> + $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> + my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> + return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid);
> + });
> +}
> +
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid) = @_;
> +
> + lvrename($scfg->{vgname}, $source_volname, $target_volname);
> + return "${storeid}:${target_volname}";
> +}
> +
> 1;
> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
> index c9e127c..895af8b 100644
> --- a/PVE/Storage/LvmThinPlugin.pm
> +++ b/PVE/Storage/LvmThinPlugin.pm
> @@ -355,6 +355,7 @@ sub volume_has_feature {
> template => { current => 1},
> copy => { base => 1, current => 1, snap => 1},
> sparseinit => { base => 1, current => 1},
> +	reassign => { current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
> index e8e27c0..b5a8f75 100644
> --- a/PVE/Storage/NFSPlugin.pm
> +++ b/PVE/Storage/NFSPlugin.pm
> @@ -180,4 +180,23 @@ sub update_volume_notes {
> PVE::Storage::DirPlugin::update_volume_notes($class, @_);
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> + return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> + my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> + my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> + $class->parse_volname($volname);
> +
> + if ($feature eq 'reassign') {
> + return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> + return undef;
> + } else {
> + return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
> + }
> +}
> +
> 1;
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index 57c58a9..aa496e2 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -925,6 +925,7 @@ sub storage_can_replicate {
> return 0;
> }
>
> +
> sub volume_has_feature {
> my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>
> @@ -1454,4 +1455,54 @@ sub volume_import_formats {
> return ();
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> + die "not implemented in storage plugin '$class'\n";
> +}
> +
> +# general reassignment method for file/directory based storages
> +sub file_reassign_volume {
> + my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> + my $base;
> + my $base_vmid;
> + my $format;
> + my $source_vmid;
> +
> + (undef, $volname, $source_vmid, $base, $base_vmid, undef, $format) = $class->parse_volname($volname);
> +
> + # parse_volname strips the directory from volname
> + my $source_volname = "${source_vmid}/${volname}";
> +
> + if ($base) {
> + $base = "${base_vmid}/${base}/";
> + } else {
> + $base = '';
> + }
> +
> + $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> + my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
> +
> + return $class->rename_volume($scfg, $storeid, $source_volname, $target_volname, $target_vmid, $base);
> + });
> +}
> +
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> + my $basedir = $class->get_subdir($scfg, 'images');
> + my $imagedir = "${basedir}/${vmid}";
> +
> + mkpath $imagedir;
> +
> + my $old_path = "${basedir}/${source_volname}";
> + my $new_path = "${imagedir}/${target_volname}";
> +
> + rename($old_path, $new_path) ||
> + die "rename '$old_path' to '$new_path' failed - $!\n";
> +
> + return "${storeid}:${base}${vmid}/${target_volname}";
> +}
> +
> +
> 1;
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index fab6d57..86a9a0d 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -712,6 +712,7 @@ sub volume_has_feature {
> template => { current => 1},
> copy => { base => 1, current => 1, snap => 1},
> sparseinit => { base => 1, current => 1},
> +	reassign => { current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -728,4 +729,34 @@ sub volume_has_feature {
> return undef;
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $base;
> + (undef, $volname, undef, $base) = $class->parse_volname($volname);
> +
> + if ($base) {
> + $base = $base . '/';
> + } else {
> + $base = '';
> + }
> +
> + $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> + my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> + return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
> + });
> +}
> +
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> + my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
> +
> + run_rbd_command(
> + $cmd,
> + errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
> + );
> + return "${storeid}:${base}${target_volname}";
> +}
> +
> 1;
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index 3054331..adccddf 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -659,6 +659,7 @@ sub volume_has_feature {
> copy => { base => 1, current => 1},
> sparseinit => { base => 1, current => 1},
> replicate => { base => 1, current => 1},
> +	reassign => { current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -761,4 +762,29 @@ sub volume_import_formats {
> return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $base;
> + (undef, $volname, undef, $base) = $class->parse_volname($volname);
> +
> + if ($base) {
> + $base = $base . '/';
> + } else {
> + $base = '';
> + }
> +
> + $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> + my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> + return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
> + });
> +}
> +
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> + $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
> + return "${storeid}:${base}${target_volname}";
> +}
> +
> 1;
> --
> 2.20.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [pve-devel] [PATCH v5 qemu-server 2/5] disk reassign: add API endpoint
2021-03-31 9:23 ` Fabian Grünbichler
@ 2021-04-01 14:24 ` Aaron Lauterer
0 siblings, 0 replies; 10+ messages in thread
From: Aaron Lauterer @ 2021-04-01 14:24 UTC (permalink / raw)
To: Proxmox VE development discussion, Fabian Grünbichler
On 3/31/21 11:23 AM, Fabian Grünbichler wrote:
> Sorry for the long pause. I haven't checked the GUI part, but that
> should be unaffected by my inline comments.
Thanks, and yes, the GUI should be oblivious to any changes here.
>
> On December 15, 2020 1:48 pm, Aaron Lauterer wrote:
>> The goal of this new API endpoint is to provide an easy way to move a
>> disk between VMs as this was only possible with manual intervention
>> until now. Either by renaming the VM disk or by manually adding the
>> disks volid to the config of the other VM.
>>
>> The latter can easily cause unexpected behavior, such as a disk
>> attached to VM B being deleted because it used to be a disk of VM A.
>> This happens because PVE assumes that the VMID in the volname always
>> matches the VM the disk is attached to and would thus remove any disk
>> with VMID A when VM A is deleted.
>>
>> The term `reassign` was chosen as it is not yet used
>> for VM disks.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> v4 -> v5:
>> * implemented suggestions from Fabian [1]
>> * logging before action
>> * improving description
>> * improving error messages
>> * using Replication::prepare to remove replication snapshots
>> * check if disk is physical disk using /dev/...
>>
>> v3 -> v4: nothing
>>
>> v2 -> v3:
>> * reordered the locking as discussed with Fabian [0] to:
>> run checks
>> fork worker
>> lock source config
>> lock target config
>> run checks
>> ...
>>
>> * added more checks
>> * will not reassign to or from templates
>> * will not reassign if VM has snapshots present
>> * cleanup if disk used to be replicated
>> * made task log slightly more verbose
>> * integrated general recommendations regarding code
>> * renamed `disk` to `drive_key`
>> * prepended some vars with `source_` for easier distinction
>>
>> v1 -> v2: print config key and volid info at the end of the job so it
>> shows up on the CLI and task log
>>
>> rfc -> v1:
>> * add support to reassign unused disks
>> * add support to provide a config digest for the target vm
>> * add additional check if disk key is present in config
>> * reorder checks a bit
>>
>> In order to support unused disks, I had to extend
>> PVE::QemuServer::Drive::valid_drive_names for the API parameter
>> validation.
>>
>> Checks are ordered so that cheap tests run first, so we can fail early.
>>
>> The check that both VMs are present on the node is a bit redundant,
>> because locking the config files will fail if a VM is not present. But
>> with the explicit check we can provide a useful error message to the
>> user instead of a "Configuration file xyz does not exist" error.
>>
>> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
>> [1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
>>
>> PVE/API2/Qemu.pm | 151 ++++++++++++++++++++++++++++++++++++++++
>> PVE/QemuServer/Drive.pm | 4 ++
>> 2 files changed, 155 insertions(+)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index 00d98ab..614f068 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>> use PVE::VZDump::Plugin;
>> use PVE::DataCenterConfig;
>> use PVE::SSHInfo;
>> +use PVE::Replication;
>>
>> BEGIN {
>> if (!$ENV{PVE_GENERATING_DOCS}) {
>> @@ -4319,4 +4320,154 @@ __PACKAGE__->register_method({
>> return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>> }});
>>
>> +__PACKAGE__->register_method({
>> + name => 'reassign_vm_disk',
>> + path => '{vmid}/reassign_disk',
>> + method => 'POST',
>> + protected => 1,
>> + proxyto => 'node',
>> +    description => "Reassign a disk to another VM.",
>> + permissions => {
>> + description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
>> + check => [ 'and',
>> + ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>> + ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
>> + ],
>
> Did you actually check that this works? You can check the source vmid
> here, but the rest, as described in the schema, needs to be checked
> manually down below.
Thanks for catching this.
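Something like this could work as manual checks inside
$load_and_check_configs once $source_volid is known (untested sketch):

    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk']);

    my ($storeid) = PVE::Storage::parse_volume_id($source_volid);
    $rpcenv->check($authuser, "/storage/${storeid}", ['Datastore.Allocate']);

Will rework the permission handling accordingly for the next version.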
>
>> + },
>> + parameters => {
>> + additionalProperties => 0,
>> + properties => {
>> + node => get_standard_option('pve-node'),
>> + vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
>> + target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
>> + drive_name => {
>> + type => 'string',
>> + description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
>> + enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>> + },
>> + digest => {
>> + type => 'string',
>> +		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
>> + maxLength => 40,
>> + optional => 1,
>> + },
>> + target_digest => {
>> + type => 'string',
>> +		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
>> + maxLength => 40,
>> + optional => 1,
>> + },
>> + },
>> + },
>> + returns => {
>> + type => 'string',
>> + description => "the task ID.",
>> + },
>> + code => sub {
>> + my ($param) = @_;
>> +
>> + my $rpcenv = PVE::RPCEnvironment::get();
>> + my $authuser = $rpcenv->get_user();
>> +
>> + my $node = extract_param($param, 'node');
>> + my $source_vmid = extract_param($param, 'vmid');
>> + my $target_vmid = extract_param($param, 'target_vmid');
>> + my $source_digest = extract_param($param, 'digest');
>> + my $target_digest = extract_param($param, 'target_digest');
>> + my $drive_name = extract_param($param, 'drive_name');
>> +
>> + my $storecfg = PVE::Storage::config();
>> + my $vmlist;
>> + my $drive;
>> + my $source_volid;
>> +
>> + die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
>> + if $source_vmid eq $target_vmid;
>> +
>> + my $load_and_check_configs = sub {
>> + $vmlist = PVE::Cluster::get_vmlist()->{ids};
>> + die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
>> + if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
>> +
>> + my $source_conf = PVE::QemuConfig->load_config($source_vmid);
>> + PVE::QemuConfig->check_lock($source_conf);
>> + my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>> + PVE::QemuConfig->check_lock($target_conf);
>> +
>> + die "Can't reassign disks from or to templates\n"
>> + if ($source_conf->{template} || $target_conf->{template});
>> +
>> + if ($source_digest) {
>> + eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
>> + if (my $err = $@) {
>> + die "VM ${source_vmid}: ${err}";
>> + }
>> + }
>> +
>> + if ($target_digest) {
>> + eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
>> + if (my $err = $@) {
>> + die "VM ${target_vmid}: ${err}";
>> + }
>> + }
>> +
>> + die "Disk '${drive_name}' does not exist\n"
>> + if !defined($source_conf->{$drive_name});
>> +
>> + $drive = PVE::QemuServer::parse_drive($drive_name, $source_conf->{$drive_name});
>> + $source_volid = $drive->{file};
>> + die "disk '${drive_name}' has no associated volume\n" if !$source_volid;
>> + die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
>> + die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
>> + die "Can't reassign disk used by a snapshot\n"
>> + if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_name, $source_volid);
>> +
>> + die "Storage does not support the reassignment of this disk\n"
>> + if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
>> +
>> + die "Cannot reassign disk while the source VM is running\n"
>> + if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/unused[0-9]/;
>
> We use /^unused\d+$/ in other places for this, which is a bit less
> error-prone in case we ever add other "unused" keys.
Good to know :)
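Will switch to the anchored variant then, something like:

    die "Cannot reassign disk while the source VM is running\n"
        if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/^unused\d+$/;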
>
>> +
>> + return ($source_conf, $target_conf);
>> + };
>> +
>> + my $logfunc = sub {
>> + my ($msg) = @_;
>> + print STDERR "$msg\n";
>> + };
>> +
>> + my $reassign_func = sub {
>> + return PVE::QemuConfig->lock_config($source_vmid, sub {
>> + return PVE::QemuConfig->lock_config($target_vmid, sub {
>> + my ($source_conf, $target_conf) = &$load_and_check_configs();
>> +
>> + PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_name} --target_vmid $target_vmid");
>> +
>> + my $new_volid = PVE::Storage::reassign_volume($storecfg, $source_volid, $target_vmid);
>> +
>> + delete $source_conf->{$drive_name};
>> + print "removing disk '${drive_name}' from VM '${source_vmid}'\n";
>> + PVE::QemuConfig->write_config($source_vmid, $source_conf);
>> +
>> + # remove possible replication snapshots
>> + PVE::Replication::prepare($storecfg, [$new_volid], undef, undef, undef, $logfunc);
>
> It might make sense to wrap this in an eval too, and print an
> appropriate warning that manual cleanup might be required here as well.
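Good point, will do. Probably something along these lines (sketch):

    eval { PVE::Replication::prepare($storecfg, [$new_volid], undef, undef, undef, $logfunc) };
    if (my $err = $@) {
        print "Failed to remove replication snapshots of '${new_volid}'. Manual cleanup might be needed: ${err}\n";
    }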
>
>> +
>> + my $key;
>> + eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
>> + if (my $err = $@) {
>> + print "failed to add reassigned disk '${new_volid}' to VM '${target_vmid}'. Try to free an 'unused' disk slot and run 'qm rescan ${target_vmid}'.\n";
>> + return 0;
>> + }
>> +
>> + print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
>> + PVE::QemuConfig->write_config($target_vmid, $target_conf);
>> + });
>> + });
>> + };
>> +
>> + &$load_and_check_configs();
>> +
>> + return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
>> + }});
>> +
>> 1;
>> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
>> index d560937..95c0538 100644
>> --- a/PVE/QemuServer/Drive.pm
>> +++ b/PVE/QemuServer/Drive.pm
>> @@ -383,6 +383,10 @@ sub valid_drive_names {
>> 'efidisk0');
>> }
>>
>> +sub valid_drive_names_with_unused {
>> + return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
>> +}
>> +
>> sub is_valid_drivename {
>> my $dev = shift;
>>
>> --
>> 2.20.1
^ permalink raw reply [flat|nested] 10+ messages in thread
end of thread, other threads: [~2021-04-01 14:25 UTC | newest]
Thread overview: 10+ messages
2020-12-15 12:48 [pve-devel] [PATCH v5 series 0/5] disk reassign: add new feature Aaron Lauterer
2020-12-15 12:48 ` [pve-devel] [PATCH v5 storage 1/5] add disk reassign feature Aaron Lauterer
2021-03-31 9:34 ` Fabian Grünbichler
2020-12-15 12:48 ` [pve-devel] [PATCH v5 qemu-server 2/5] disk reassign: add API endpoint Aaron Lauterer
2021-03-31 9:23 ` Fabian Grünbichler
2021-04-01 14:24 ` Aaron Lauterer
2020-12-15 12:48 ` [pve-devel] [PATCH v5 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
2020-12-15 12:48 ` [pve-devel] [PATCH v5 guest-common 4/5] Replication: mention disk reassign in comment of possible reasons Aaron Lauterer
2020-12-15 12:48 ` [pve-devel] [PATCH v5 manager 5/5] ui: tasks: add qmreassign task description Aaron Lauterer
2021-02-15 14:59 ` [pve-devel] [PATCH v5 series 0/5] disk reassign: add new feature Aaron Lauterer