* [pve-devel] [PATCH v7 series 0/5] disk reassign: add new feature
@ 2021-04-20 16:34 Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 storage 1/5] add disk reassign feature Aaron Lauterer
` (4 more replies)
0 siblings, 5 replies; 7+ messages in thread
From: Aaron Lauterer @ 2021-04-20 16:34 UTC (permalink / raw)
To: pve-devel
This series implements a new feature which allows users to easily
reassign disks between VMs. Currently this is only possible with one of
the following manual steps:
* rename the disk image/file and do a `qm rescan`
* configure the disk manually and use the old image name, having an
image for VM A assigned to VM B
The latter can cause unexpected behavior because PVE expects that the
VMID in a disk name always corresponds to the VM it is assigned to. Thus,
when a disk originally belonging to VM A is manually configured as a disk
for VM B, deleting VM A will delete that disk as well, because it still
has the VMID of VM A in its name.
To issue a reassign from the CLI run:
qm reassign-disk <source VMID> <target VMID> <source disk> <target disk>
where <source/target disk> is the config key of the disk, e.g. ide0,
scsi1 and so on.
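As a concrete sketch (the VMIDs and disk keys below are made-up examples, not taken from the patches), reassigning `scsi1` of VM 100 to VM 101 as `scsi2` would look like this. Since `qm` only exists on a Proxmox VE node, the snippet just assembles and prints the command line:

```shell
# Hypothetical example: reassign disk scsi1 of VM 100 to VM 101 as scsi2.
# 'qm' is only present on a PVE node, so we only build the command string here.
cmd="qm reassign-disk 100 101 scsi1 scsi2"
echo "$cmd"
```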
The following storage types are implemented at the moment:
* dir based ones
* ZFS
* (thin) LVM
* Ceph RBD
v6 -> v7:
This was a rather large change to the previous version. I hope I
incorporated all suggestions and hints and did not miss anything.
More details are usually found in the direct patches.
* removed original patch 4 as it is not needed (thx @febner for the hint)
* added another (optional) patch to align move_disk to use a dash
instead of an underscore
* make sure storage is activated
* restructure storage plugins so that dir based ones are handled
directly in plugin.pm with API version checks for external plugins
* add target disk key
* use update_vm_api to add the disk to the new VM (hotplug if possible)
* removed cluster log
* reordered reassign procedure
* changed worker ID to show source and target better
v5 -> v6:
* guard Replication snapshot cleanup
* add permission check for target vmid
* changed regex to match unused keys better
* refactor dir based feature check to reduce code repetition
v4 -> v5:
* rebase on current master
* reorder patches
* rename `drive_key` to `drive_name`
thanks @Dominic for pointing out that there already are a lot of
different names in use for this [0] and that we should not invent yet another one
* implemented suggested changes from Fabian [1][2]. More directly in the
patches themselves
v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
in directory based plugins to check against the new `reassign` feature
(not too happy about the repetition for each plugin)
* task description mapping has been moved from widget-toolkit to
pve-manager/utils
v2 -> v3:
* change locking approach
* add more checks
* add intermediate storage plugin for directory based plugins
* use feature flags
* split up the reassign method to have a dedicated method for the
renaming itself
* handle linked clones
* clean up if disk used to be replicated
I hope I didn't forget anything major.
v1 -> v2:
print info about the new disk volid and key at the end of the job so it
shows up in the CLI output and task log
Changes from RFC -> V1:
* support to reassign unused disks
* digest for target vm config
* reorder the checks a bit
* adding another one to check if the given key for the disk even exists
in the config.
[0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/045986.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
[2] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
storage: Aaron Lauterer (1):
add disk reassign feature
PVE/Storage.pm | 19 +++++++++++--
PVE/Storage/LVMPlugin.pm | 34 +++++++++++++++++++++++
PVE/Storage/LvmThinPlugin.pm | 1 +
PVE/Storage/Plugin.pm | 52 ++++++++++++++++++++++++++++++++++++
PVE/Storage/RBDPlugin.pm | 37 +++++++++++++++++++++++++
PVE/Storage/ZFSPoolPlugin.pm | 38 ++++++++++++++++++++++++++
6 files changed, 179 insertions(+), 2 deletions(-)
qemu-server: Aaron Lauterer (3):
disk reassign: add API endpoint
cli: disk reassign: add reassign_disk to qm command
cli: qm: change move_disk parameter to move-disk
PVE/API2/Qemu.pm | 220 ++++++++++++++++++++++++++++++++++++++++
PVE/CLI/qm.pm | 5 +-
PVE/QemuServer/Drive.pm | 4 +
3 files changed, 228 insertions(+), 1 deletion(-)
manager: Aaron Lauterer (1):
ui: tasks: add qmreassign task description
www/manager6/Utils.js | 1 +
1 file changed, 1 insertion(+)
--
2.20.1
^ permalink raw reply [flat|nested] 7+ messages in thread
* [pve-devel] [PATCH v7 storage 1/5] add disk reassign feature
2021-04-20 16:34 [pve-devel] [PATCH v7 series 0/5] disk reassign: add new feature Aaron Lauterer
@ 2021-04-20 16:34 ` Aaron Lauterer
2021-04-23 13:49 ` Thomas Lamprecht
2021-04-20 16:34 ` [pve-devel] [PATCH v7 qemu-server 2/5] disk reassign: add API endpoint Aaron Lauterer
` (3 subsequent siblings)
4 siblings, 1 reply; 7+ messages in thread
From: Aaron Lauterer @ 2021-04-20 16:34 UTC (permalink / raw)
To: pve-devel
Functionality has been added for the following storage types:
* dir based ones
* ZFS
* (thin) LVM
* Ceph
A new feature `reassign` has been introduced to mark which storage
plugin supports the feature.
Version API and AGE have been bumped.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v6 -> v7:
We now place everything for disk based plugins in Plugin.pm and check
against the supported API version to avoid running the code on external
plugins that do not yet officially support the reassign feature.
* activate storage before doing anything else
* checks if storage is enabled as well
* code cleanup
* change long function calls to multiline
* base parameter is not passed to rename function anymore but
handled in the reassign function
* prefixed vars with source_ / target_ to make them easier to
distinguish
v5 -> v6:
* refactor dir based feature check to reduce code repetition by
introducing the file_can_reassign_volume sub that does the actual check
v4 -> v5:
* rebased on master
* bumped api ver and api age
* rephrased "not implemented" message as suggested [0].
v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
in directory based plugins to check against the new `reassign` feature
(not too happy about the repetition for each plugin)
v2 -> v3:
* added feature flags instead of dummy "not implemented" methods in
plugins which do not support it as that would break compatibility with
3rd party plugins.
Had to make $features available outside the `has_features` method in
Plugins.pm in order to be able to individually add features in the
`BaseDirPlugin.pm`.
* added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
this is explained in the commit message
* moved the actual renaming from the `reassign_volume` to a dedicated
`rename_volume` method to make this functionality available to other
possible uses in the future.
* added support for linked clones ($base)
rfc -> v1 -> v2: nothing changed
[0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
PVE/Storage.pm | 19 +++++++++++--
PVE/Storage/LVMPlugin.pm | 34 +++++++++++++++++++++++
PVE/Storage/LvmThinPlugin.pm | 1 +
PVE/Storage/Plugin.pm | 52 ++++++++++++++++++++++++++++++++++++
PVE/Storage/RBDPlugin.pm | 37 +++++++++++++++++++++++++
PVE/Storage/ZFSPoolPlugin.pm | 38 ++++++++++++++++++++++++++
6 files changed, 179 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 122c3e9..ea782cc 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
use PVE::Storage::PBSPlugin;
# Storage API version. Increment it on changes in storage API interface.
-use constant APIVER => 8;
+use constant APIVER => 9;
# Age is the number of versions we're backward compatible with.
# This is like having 'current=APIVER' and age='APIAGE' in libtool,
# see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 7;
+use constant APIAGE => 8;
# load standard plugins
PVE::Storage::DirPlugin->register();
@@ -349,6 +349,7 @@ sub volume_snapshot_needs_fsfreeze {
# snapshot - taking a snapshot is possible
# sparseinit - volume is sparsely initialized
# template - conversion to base image is possible
+# reassign - reassigning disks to other guest is possible
# $snap - check if the feature is supported for a given snapshot
# $running - if the guest owning the volume is running
# $opts - hash with further options:
@@ -1843,6 +1844,20 @@ sub complete_volume {
return $res;
}
+sub reassign_volume {
+ my ($cfg, $volid, $target_vmid) = @_;
+
+ my ($storeid, $volname) = parse_volume_id($volid);
+
+ activate_storage($cfg, $storeid);
+
+ my $scfg = storage_config($cfg, $storeid);
+ my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+
+ return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
# Various io-heavy operations require io/bandwidth limits which can be
# configured on multiple levels: The global defaults in datacenter.cfg, and
# per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index df49b76..ff169f6 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -339,6 +339,13 @@ sub lvcreate {
run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
}
+sub lvrename {
+ my ($vg, $oldname, $newname) = @_;
+
+ my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
+ run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
+}
+
sub alloc_image {
my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
@@ -584,6 +591,7 @@ sub volume_has_feature {
my $features = {
copy => { base => 1, current => 1},
+ reassign => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -684,4 +692,30 @@ sub volume_import_write {
input => '<&'.fileno($input_fh));
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_vmid) = @_;
+
+ $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ my $target_volname = $class->find_free_diskname(
+ $storeid,
+ $scfg,
+ $target_vmid,
+ );
+ $class->rename_volume(
+ $scfg,
+ $storeid,
+ $source_volname,
+ $target_volname,
+ $target_vmid,
+ );
+ return "${storeid}:${target_volname}";
+ });
+}
+
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_volname, $target_vmid) = @_;
+
+ lvrename($scfg->{vgname}, $source_volname, $target_volname);
+}
+
1;
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index c9e127c..895af8b 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -355,6 +355,7 @@ sub volume_has_feature {
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
+ reassign => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index d7136a1..f14279e 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -939,6 +939,7 @@ sub volume_has_feature {
snap => {qcow2 => 1} },
sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
current => {qcow2 => 1, raw => 1, vmdk => 1} },
+ reassign => { current =>{qcow2 => 1, raw => 1, vmdk => 1} },
};
# clone_image creates a qcow2 volume
@@ -946,6 +947,10 @@ sub volume_has_feature {
defined($opts->{valid_target_formats}) &&
!(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
+ return 0 if $feature eq 'reassign'
+ && $class->can('api')
+ && $class->api() < 9;
+
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
$class->parse_volname($volname);
@@ -1463,4 +1468,51 @@ sub volume_import_formats {
return ();
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_vmid) = @_;
+ die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 9;
+
+ my $base;
+ my $format;
+ my $source_vmid;
+
+ (undef, undef, $source_vmid, $base, undef, undef, $format) = $class->parse_volname($source_volname);
+
+ $base = $base ? "${base}/" : '';
+
+ $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ my $target_volname = $class->find_free_diskname(
+ $storeid,
+ $scfg,
+ $target_vmid,
+ $format,
+ 1,
+ );
+ $class->rename_volume(
+ $scfg,
+ $storeid,
+ $source_volname,
+ $target_volname,
+ $target_vmid,
+ $base,
+ );
+ return "${storeid}:${base}${target_vmid}/${target_volname}";
+ });
+}
+
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_volname, $target_vmid) = @_;
+
+ my $basedir = $class->get_subdir($scfg, 'images');
+ my $imagedir = "${basedir}/${target_vmid}";
+
+ mkpath $imagedir;
+
+ my $old_path = "${basedir}/${source_volname}";
+ my $new_path = "${imagedir}/${target_volname}";
+
+ rename($old_path, $new_path) ||
+ die "rename '$old_path' to '$new_path' failed - $!\n";
+}
+
1;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 42641e2..39db2cf 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -728,6 +728,7 @@ sub volume_has_feature {
template => { current => 1},
copy => { base => 1, current => 1, snap => 1},
sparseinit => { base => 1, current => 1},
+ reassign => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
@@ -743,4 +744,40 @@ sub volume_has_feature {
return undef;
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_vmid) = @_;
+
+ my $base;
+ (undef, $source_volname, undef, $base) = $class->parse_volname($source_volname);
+
+ $base = $base ? "${base}/" : '';
+
+ $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ my $target_volname = $class->find_free_diskname(
+ $storeid,
+ $scfg,
+ $target_vmid,
+ );
+ $class->rename_volume(
+ $scfg,
+ $storeid,
+ $source_volname,
+ $target_volname,
+ $target_vmid,
+ );
+ return "${storeid}:${base}${target_volname}";
+ });
+}
+
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_volname, $target_vmid) = @_;
+
+ my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
+
+ run_rbd_command(
+ $cmd,
+ errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
+ );
+}
+
1;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 2e2abe3..755e117 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -687,6 +687,7 @@ sub volume_has_feature {
copy => { base => 1, current => 1},
sparseinit => { base => 1, current => 1},
replicate => { base => 1, current => 1},
+ reassign => {current => 1},
};
my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -789,4 +790,41 @@ sub volume_import_formats {
return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
}
+sub reassign_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_vmid) = @_;
+
+ my $base;
+ (undef, $source_volname, undef, $base) = $class->parse_volname($source_volname);
+
+ $base = $base ? "${base}/" : '';
+
+ $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+ my $target_volname = $class->find_free_diskname(
+ $storeid,
+ $scfg,
+ $target_vmid,
+ );
+ $class->rename_volume(
+ $scfg,
+ $storeid,
+ $source_volname,
+ $target_volname,
+ $target_vmid,
+ );
+ return "${storeid}:${base}${target_volname}";
+ });
+}
+
+sub rename_volume {
+ my ($class, $scfg, $storeid, $source_volname, $target_volname, $target_vmid) = @_;
+
+ $class->zfs_request(
+ $scfg,
+ 5,
+ 'rename',
+ "$scfg->{pool}/$source_volname",
+ "$scfg->{pool}/$target_volname",
+ );
+}
+
1;
--
2.20.1
* [pve-devel] [PATCH v7 qemu-server 2/5] disk reassign: add API endpoint
2021-04-20 16:34 [pve-devel] [PATCH v7 series 0/5] disk reassign: add new feature Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 storage 1/5] add disk reassign feature Aaron Lauterer
@ 2021-04-20 16:34 ` Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Aaron Lauterer @ 2021-04-20 16:34 UTC (permalink / raw)
To: pve-devel
The goal of this new API endpoint is to provide an easy way to move a
disk between VMs, as until now this was only possible with manual
intervention: either by renaming the VM disk or by manually adding the
disk's volid to the config of the other VM.
The latter can easily cause unexpected behavior, such as a disk attached
to VM B being deleted because it used to be a disk of VM A. This happens
because PVE assumes that the VMID in the volname always matches the VM
the disk is attached to and thus would remove any disk with VMID A
when VM A is deleted.
The term `reassign` was chosen as it is not yet used
for VM disks.
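Assuming the endpoint is registered as described, a call could be issued via `pvesh` roughly as follows. The node name, VMIDs and drive keys are placeholders I made up, and `pvesh` only exists on a PVE node, so the snippet just assembles and prints the call:

```shell
# Hypothetical pvesh invocation of the new reassign_disk endpoint.
# All identifiers below are placeholders, not taken from the patch.
cmd="pvesh create /nodes/pve1/qemu/100/reassign_disk --source-vmid 100 --target-vmid 101 --source-drive scsi1 --target-drive scsi2"
echo "$cmd"
```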
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v6 -> v7:
this was a rather large change:
* added new parameter to specify target disk config key
* add check if free
* use $update_vm_api to add disk to new VM (hotplug if possible)
* renamed parameters and vars to clearly distinguish between source and
target VMs / disk config keys
* expand description to mention that a rename works only between VMs on
the same node
* check if target drive type supports all config parameters of the disk
* removed cluster log. was there to emulate the behavior of move_disk
but even there it seems to log a very outdated syntax...
* reordered the reassignment procedure
1. reassign/rename volume
2. remove from source vm config
3. update target vm
4. remove potential old replication snapshots
This should help reduce the chance that a disk ends up in
limbo. If the rename/reassign on the storage level fails, we haven't
changed any VM config yet. If the replication snapshot removal
fails, nothing happens to the VMs, it needs to be cleaned up
manually though.
* fixed parameter for replication snapshot removal (thx @febner for the
hint)
* change worker ID to show which vm & disk is reassigned to which.
tried to find a way that does not interfere with the UPID parser.
AFAICT this one works okayish now. The GUI has a bit of a glitch
where it replaces - with / in the title of the tasks detail view.
v5 -> v6:
* guard Replication snapshot cleanup
additionally to the eval, that code is now only run if the volume is
on a storage with the 'replicate' feature
* add permission check for target vmid
* changed regex to match unused keys better
thx @Fabian for these suggestions/catching problems
v4 -> v5:
* implemented suggestions from Fabian [1]
* logging before action
* improving description
* improving error messages
* using Replication::prepare to remove replication snapshots
* check if disk is physical disk using /dev/...
v3 -> v4: nothing
v2 -> v3:
* reordered the locking as discussed with fabian [0] to
run checks
fork worker
lock source config
lock target config
run checks
...
* added more checks
* will not reassign to or from templates
* will not reassign if VM has snapshots present
* cleanup if disk used to be replicated
* made task log slightly more verbose
* integrated general recommendations regarding code
* renamed `disk` to `drive_key`
* prepended some vars with `source_` for easier distinction
v1 -> v2: print config key and volid info at the end of the job so it
shows up on the CLI and task log
rfc -> v1:
* add support to reassign unused disks
* add support to provide a config digest for the target vm
* add additional check if disk key is present in config
* reorder checks a bit
In order to support unused disks I had to extend
PVE::QemuServer::Drive::valid_drive_names for the API parameter
validation.
Checks are ordered so that cheap tests are run at the first chance to
fail early.
The check if both VMs are present on the node is a bit redundant because
locking the config files will fail if the VM is not present. But with
the additional check we can provide a useful error message to the user
instead of a "Configuration file xyz does not exist" error.
[0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
PVE/API2/Qemu.pm | 220 ++++++++++++++++++++++++++++++++++++++++
PVE/QemuServer/Drive.pm | 4 +
2 files changed, 224 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c56b609..b90a83b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
use PVE::VZDump::Plugin;
use PVE::DataCenterConfig;
use PVE::SSHInfo;
+use PVE::Replication;
BEGIN {
if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -4395,4 +4396,223 @@ __PACKAGE__->register_method({
return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
}});
+__PACKAGE__->register_method({
+ name => 'reassign_vm_disk',
+ path => '{vmid}/reassign_disk',
+ method => 'POST',
+ protected => 1,
+ proxyto => 'node',
+ description => "Reassign a disk to another VM on the same node",
+ permissions => {
+ description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
+ check => [ 'and',
+ ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
+ ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
+ ],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ 'source-vmid' => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+ 'target-vmid' => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+ 'source-drive' => {
+ type => 'string',
+ description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
+ enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+ },
+ 'target-drive' => {
+ type => 'string',
+ description => "The config key the disk will be reassigned to (for example, ide0 or scsi1).",
+ enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+ },
+ 'source-digest' => {
+ type => 'string',
+ description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+ maxLength => 40,
+ optional => 1,
+ },
+ 'target-digest' => {
+ type => 'string',
+ description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+ maxLength => 40,
+ optional => 1,
+ },
+ },
+ },
+ returns => {
+ type => 'string',
+ description => "the task ID.",
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+
+ my $node = extract_param($param, 'node');
+ my $source_vmid = extract_param($param, 'source-vmid');
+ my $target_vmid = extract_param($param, 'target-vmid');
+ my $source_digest = extract_param($param, 'source-digest');
+ my $target_digest = extract_param($param, 'target-digest');
+ my $source_drive = extract_param($param, 'source-drive');
+ my $target_drive = extract_param($param, 'target-drive');
+
+ my $storecfg = PVE::Storage::config();
+ my $source_volid;
+
+ $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+ if $authuser ne 'root@pam';
+
+ die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
+ if $source_vmid eq $target_vmid;
+
+ my $load_and_check_configs = sub {
+ my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+ die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
+ if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+ my $source_conf = PVE::QemuConfig->load_config($source_vmid);
+ PVE::QemuConfig->check_lock($source_conf);
+ my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+ PVE::QemuConfig->check_lock($target_conf);
+
+ die "Can't reassign disks from or to templates\n"
+ if ($source_conf->{template} || $target_conf->{template});
+
+ if ($source_digest) {
+ eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
+ if (my $err = $@) {
+ die "VM ${source_vmid}: ${err}";
+ }
+ }
+
+ if ($target_digest) {
+ eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+ if (my $err = $@) {
+ die "VM ${target_vmid}: ${err}";
+ }
+ }
+
+ die "Disk '${source_drive}' does not exist\n"
+ if !defined($source_conf->{$source_drive});
+
+ die "Target disk key '${target_drive}' is already in use\n"
+ if exists $target_conf->{$target_drive};
+
+ my $drive = PVE::QemuServer::parse_drive(
+ $source_drive,
+ $source_conf->{$source_drive},
+ );
+ $source_volid = $drive->{file};
+
+ die "disk '${source_drive}' has no associated volume\n"
+ if !$source_volid;
+ die "CD drive contents can't be reassigned\n"
+ if PVE::QemuServer::drive_is_cdrom($drive, 1);
+ die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
+ die "Can't reassign disk used by a snapshot\n"
+ if PVE::QemuServer::Drive::is_volume_in_use(
+ $storecfg,
+ $source_conf,
+ $source_drive,
+ $source_volid,
+ );
+
+ die "Storage does not support the reassignment of this disk\n"
+ if !PVE::Storage::volume_has_feature(
+ $storecfg,
+ 'reassign',
+ $source_volid,
+ );
+
+ die "Cannot reassign disk while the source VM is running\n"
+ if PVE::QemuServer::check_running($source_vmid)
+ && $source_drive !~ m/^unused\d+$/;
+
+ if ($target_drive !~ m/^unused\d+$/ && $target_drive =~ m/^([^\d]+)\d+$/) {
+ my $interface = $1;
+ my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
+ eval {
+ PVE::JSONSchema::parse_property_string(
+ $desc->{format},
+ $source_conf->{$source_drive},
+ )
+ };
+ if (my $err = $@) {
+ die "Cannot reassign disk: ${err}";
+ }
+ }
+
+ return ($source_conf, $target_conf);
+ };
+
+ my $logfunc = sub {
+ my ($msg) = @_;
+ print STDERR "$msg\n";
+ };
+
+ my $reassign_func = sub {
+ return PVE::QemuConfig->lock_config($source_vmid, sub {
+ return PVE::QemuConfig->lock_config($target_vmid, sub {
+ my ($source_conf, $target_conf) = &$load_and_check_configs();
+
+ my $drive_param = PVE::QemuServer::parse_drive(
+ $target_drive,
+ $source_conf->{$source_drive},
+ );
+
+ print "reassigning disk '$source_drive'\n";
+ my $new_volid = PVE::Storage::reassign_volume(
+ $storecfg,
+ $source_volid,
+ $target_vmid,
+ );
+
+ $drive_param->{file} = $new_volid;
+
+ delete $source_conf->{$source_drive};
+ print "removing disk '${source_drive}' from VM '${source_vmid}' config\n";
+ PVE::QemuConfig->write_config($source_vmid, $source_conf);
+
+ my $drive_string = PVE::QemuServer::print_drive($drive_param);
+ &$update_vm_api(
+ {
+ node => $node,
+ vmid => $target_vmid,
+ digest => $target_digest,
+ $target_drive => $drive_string,
+ },
+ 1,
+ );
+
+ # remove possible replication snapshots
+ if (PVE::Storage::volume_has_feature(
+ $storecfg,
+ 'replicate',
+ $source_volid),
+ ) {
+ eval {
+ PVE::Replication::prepare(
+ $storecfg,
+ [$new_volid],
+ undef,
+ 1,
+ undef,
+ $logfunc,
+ )
+ };
+ if (my $err = $@) {
+ print "Failed to remove replication snapshots on reassigned disk. Manual cleanup could be necessary.\n";
+ }
+ }
+ });
+ });
+ };
+
+ &$load_and_check_configs();
+
+ return $rpcenv->fork_worker('qmreassign', "${source_vmid}-${source_drive}>${target_vmid}-${target_drive}", $authuser, $reassign_func);
+ }});
+
1;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 9016a43..db0f3c9 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -392,6 +392,10 @@ sub valid_drive_names {
'efidisk0');
}
+sub valid_drive_names_with_unused {
+ return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
+}
+
sub is_valid_drivename {
my $dev = shift;
--
2.20.1
* [pve-devel] [PATCH v7 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command
2021-04-20 16:34 [pve-devel] [PATCH v7 series 0/5] disk reassign: add new feature Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 storage 1/5] add disk reassign feature Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 qemu-server 2/5] disk reassign: add API endpoint Aaron Lauterer
@ 2021-04-20 16:34 ` Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 manager 4/5] ui: tasks: add qmreassign task description Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 qemu-server 5/5] cli: qm: change move_disk parameter to move-disk Aaron Lauterer
4 siblings, 0 replies; 7+ messages in thread
From: Aaron Lauterer @ 2021-04-20 16:34 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v6 -> v7:
* added target drive parameter
* renamed parameters to include source/target
* use - instead of _ in command name
v5 -> v6: nothing
v4 -> v5: renamed `drive_key` to `drive_name`
v3 ->v4: nothing
v2 -> v3: renamed parameter `disk` to `drive_key`
rfc -> v1 -> v2: nothing changed
PVE/CLI/qm.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index f8972bd..6d78600 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -914,6 +914,8 @@ our $cmddef = {
move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ 'reassign-disk' => [ "PVE::API2::Qemu", 'reassign_vm_disk', ['source-vmid', 'target-vmid', 'source-drive', 'target-drive'], { node => $nodename } ],
+
unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
config => [ "PVE::API2::Qemu", 'vm_config', ['vmid'],
--
2.20.1
* [pve-devel] [PATCH v7 manager 4/5] ui: tasks: add qmreassign task description
2021-04-20 16:34 [pve-devel] [PATCH v7 series 0/5] disk reassign: add new feature Aaron Lauterer
` (2 preceding siblings ...)
2021-04-20 16:34 ` [pve-devel] [PATCH v7 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
@ 2021-04-20 16:34 ` Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 qemu-server 5/5] cli: qm: change move_disk parameter to move-disk Aaron Lauterer
4 siblings, 0 replies; 7+ messages in thread
From: Aaron Lauterer @ 2021-04-20 16:34 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v4->v7: rebased
www/manager6/Utils.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index f502950f..51942938 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1801,6 +1801,7 @@ Ext.define('PVE.Utils', {
qmigrate: ['VM', gettext('Migrate')],
qmmove: ['VM', gettext('Move disk')],
qmpause: ['VM', gettext('Pause')],
+ qmreassign: ['VM', gettext('Reassign disk')],
qmreboot: ['VM', gettext('Reboot')],
qmreset: ['VM', gettext('Reset')],
qmrestore: ['VM', gettext('Restore')],
--
2.20.1
* [pve-devel] [PATCH v7 qemu-server 5/5] cli: qm: change move_disk parameter to move-disk
2021-04-20 16:34 [pve-devel] [PATCH v7 series 0/5] disk reassign: add new feature Aaron Lauterer
` (3 preceding siblings ...)
2021-04-20 16:34 ` [pve-devel] [PATCH v7 manager 4/5] ui: tasks: add qmreassign task description Aaron Lauterer
@ 2021-04-20 16:34 ` Aaron Lauterer
4 siblings, 0 replies; 7+ messages in thread
From: Aaron Lauterer @ 2021-04-20 16:34 UTC (permalink / raw)
To: pve-devel
also add alias to keep move_disk working.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
this one is optional but would align the use of - instead of _ in the
command names
PVE/CLI/qm.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 6d78600..b629e8f 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -912,7 +912,8 @@ our $cmddef = {
resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
- move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ 'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+ move_disk => { alias => 'move-disk' },
'reassign-disk' => [ "PVE::API2::Qemu", 'reassign_vm_disk', ['source-vmid', 'target-vmid', 'source-drive', 'target-drive'], { node => $nodename } ],
--
2.20.1
* Re: [pve-devel] [PATCH v7 storage 1/5] add disk reassign feature
2021-04-20 16:34 ` [pve-devel] [PATCH v7 storage 1/5] add disk reassign feature Aaron Lauterer
@ 2021-04-23 13:49 ` Thomas Lamprecht
0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2021-04-23 13:49 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
On 20.04.21 18:34, Aaron Lauterer wrote:
> Functionality has been added for the following storage types:
>
> * dir based ones
> * ZFS
> * (thin) LVM
> * Ceph
>
> A new feature `reassign` has been introduced to mark which storage
> plugin supports the feature.
>
> Version API and AGE have been bumped.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v6 -> v7:
> We now place everything for dir based plugins in Plugin.pm and check
> against the supported API version to avoid running the code on external
> plugins that do not yet officially support the reassign feature.
>
> * activate storage before doing anything else
> * checks if storage is enabled as well
> * code cleanup
> * change long function calls to multiline
> * base parameter is not passed to rename function anymore but
> handled in the reassign function
> * prefixed vars with source_ / target_ to make them easier to
> distinguish
>
> v5 -> v6:
> * refactor dir based feature check to reduce code repetition by
> introducing the file_can_reassign_volume sub that does the actual check
>
> v4 -> v5:
> * rebased on master
> * bumped api ver and api age
> * rephrased "not implemented" message as suggested [0].
>
> v3 -> v4:
> * revert intermediate storage plugin for directory based plugins
> * add a `die "not supported"` method in Plugin.pm
> * dir based plugins now call the file_reassign_volume method in
> Plugin.pm as the generic file/directory based method
> * restored old `volume_has_feature` method in Plugin.pm and override it
> in directory based plugins to check against the new `reassign` feature
> (not too happy about the repetition for each plugin)
>
> v2 -> v3:
> * added feature flags instead of dummy "not implemented" methods in
> plugins which do not support it as that would break compatibility with
> 3rd party plugins.
> Had to make $features available outside the `has_features` method in
> Plugins.pm in order to be able to individually add features in the
> `BaseDirPlugin.pm`.
> * added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
> this is explained in the commit message
> * moved the actual renaming from the `reassign_volume` to a dedicated
> `rename_volume` method to make this functionality available to other
> possible uses in the future.
> * added support for linked clones ($base)
>
>
> rfc -> v1 -> v2: nothing changed
>
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046031.html
>
>
>
>
> PVE/Storage.pm | 19 +++++++++++--
> PVE/Storage/LVMPlugin.pm | 34 +++++++++++++++++++++++
> PVE/Storage/LvmThinPlugin.pm | 1 +
> PVE/Storage/Plugin.pm | 52 ++++++++++++++++++++++++++++++++++++
> PVE/Storage/RBDPlugin.pm | 37 +++++++++++++++++++++++++
> PVE/Storage/ZFSPoolPlugin.pm | 38 ++++++++++++++++++++++++++
> 6 files changed, 179 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index 122c3e9..ea782cc 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -41,11 +41,11 @@ use PVE::Storage::DRBDPlugin;
> use PVE::Storage::PBSPlugin;
>
> # Storage API version. Increment it on changes in storage API interface.
> -use constant APIVER => 8;
> +use constant APIVER => 9;
> # Age is the number of versions we're backward compatible with.
> # This is like having 'current=APIVER' and age='APIAGE' in libtool,
> # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
> -use constant APIAGE => 7;
> +use constant APIAGE => 8;
>
> # load standard plugins
> PVE::Storage::DirPlugin->register();
> @@ -349,6 +349,7 @@ sub volume_snapshot_needs_fsfreeze {
> # snapshot - taking a snapshot is possible
> # sparseinit - volume is sparsely initialized
> # template - conversion to base image is possible
> +# reassign - reassigning disks to another guest is possible
> # $snap - check if the feature is supported for a given snapshot
> # $running - if the guest owning the volume is running
> # $opts - hash with further options:
> @@ -1843,6 +1844,20 @@ sub complete_volume {
> return $res;
> }
>
> +sub reassign_volume {
> + my ($cfg, $volid, $target_vmid) = @_;
> +
> + my ($storeid, $volname) = parse_volume_id($volid);
> +
> + activate_storage($cfg, $storeid);
> +
> + my $scfg = storage_config($cfg, $storeid);
> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +
> + return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> # Various io-heavy operations require io/bandwidth limits which can be
> # configured on multiple levels: The global defaults in datacenter.cfg, and
> # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
> index df49b76..ff169f6 100644
> --- a/PVE/Storage/LVMPlugin.pm
> +++ b/PVE/Storage/LVMPlugin.pm
> @@ -339,6 +339,13 @@ sub lvcreate {
> run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
> }
>
> +sub lvrename {
> + my ($vg, $oldname, $newname) = @_;
> +
> + my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
> + run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
I'd rather inline this in rename_volume and drop the intermediate $cmd;
there's no use for it here:
run_command(
['lvrename', $vg, $oldname, $newname],
errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error",
);
> +}
> +
> sub alloc_image {
> my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>
> @@ -584,6 +591,7 @@ sub volume_has_feature {
>
> my $features = {
> copy => { base => 1, current => 1},
> + reassign => {current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -684,4 +692,30 @@ sub volume_import_write {
> input => '<&'.fileno($input_fh));
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_vmid) = @_;
> +
> + $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> + my $target_volname = $class->find_free_diskname(
> + $storeid,
> + $scfg,
> + $target_vmid,
> + );
as mentioned, I do not want this to automatically find a free disk name;
that's something the caller must provide to make this behave in a cleaner
and more predictable manner.
At that point we would actually only need rename_volume support, with the
rest not happening in the storage layer; adding only one very concise
method to the storage plugin layer seems favorable, IMO.
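To illustrate the suggested split (the caller picks the target volname, the
plugin only performs the rename), here is a standalone toy sketch; the
hash-based volume store and the simplified find_free_diskname are purely
illustrative, not the actual PVE code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Caller side: pick the first unused "vm-<vmid>-disk-<n>" name.
sub find_free_diskname {
    my ($existing, $vmid) = @_;    # $existing: hashref of taken names
    my $n = 0;
    $n++ while $existing->{"vm-$vmid-disk-$n"};
    return "vm-$vmid-disk-$n";
}

# Plugin side: rename only, no naming policy.
sub rename_volume {
    my ($volumes, $source, $target) = @_;
    die "source '$source' does not exist\n" unless exists $volumes->{$source};
    die "target '$target' already exists\n"  if exists $volumes->{$target};
    $volumes->{$target} = delete $volumes->{$source};
}

my %vols = ('vm-100-disk-0' => 'data');
my $target = find_free_diskname(\%vols, 200);    # caller decides the name
rename_volume(\%vols, 'vm-100-disk-0', $target);
print "$target\n";    # vm-200-disk-0
```

With this split, a locked reassign at the API layer can call
find_free_diskname and then hand the resulting name down to the plugin's
rename_volume, which stays a single concise method.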
> + $class->rename_volume(
> + $scfg,
> + $storeid,
> + $source_volname,
> + $target_volname,
> + $target_vmid,
> + );
> + return "${storeid}:${target_volname}";
> + });
> +}
> +
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_volname, $target_vmid) = @_;
> +
> + lvrename($scfg->{vgname}, $source_volname, $target_volname);
> +}
> +
> 1;
> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
> index c9e127c..895af8b 100644
> --- a/PVE/Storage/LvmThinPlugin.pm
> +++ b/PVE/Storage/LvmThinPlugin.pm
> @@ -355,6 +355,7 @@ sub volume_has_feature {
> template => { current => 1},
> copy => { base => 1, current => 1, snap => 1},
> sparseinit => { base => 1, current => 1},
> + reassign => {current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index d7136a1..f14279e 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -939,6 +939,7 @@ sub volume_has_feature {
> snap => {qcow2 => 1} },
> sparseinit => { base => {qcow2 => 1, raw => 1, vmdk => 1},
> current => {qcow2 => 1, raw => 1, vmdk => 1} },
> + reassign => { current =>{qcow2 => 1, raw => 1, vmdk => 1} },
> };
>
> # clone_image creates a qcow2 volume
> @@ -946,6 +947,10 @@ sub volume_has_feature {
> defined($opts->{valid_target_formats}) &&
> !(grep { $_ eq 'qcow2' } @{$opts->{valid_target_formats}});
>
> + return 0 if $feature eq 'reassign'
> + && $class->can('api')
> + && $class->api() < 9;
> +
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> $class->parse_volname($volname);
>
> @@ -1463,4 +1468,51 @@ sub volume_import_formats {
> return ();
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_vmid) = @_;
> + die "not implemented in storage plugin '$class'\n" if $class->can('api') && $class->api() < 9;
> +
> + my $base;
> + my $format;
> + my $source_vmid;
> +
> + (undef, undef, $source_vmid, $base, undef, undef, $format) = $class->parse_volname($source_volname);
> +
> + $base = $base ? "${base}/" : '';
> +
> + $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> + my $target_volname = $class->find_free_diskname(
same here, $target_volname needs to be an argument.
> + $storeid,
> + $scfg,
> + $target_vmid,
> + $format,
> + 1,
> + );
> + $class->rename_volume(
> + $scfg,
> + $storeid,
> + $source_volname,
> + $target_volname,
> + $target_vmid,
> + $base,
> + );
> + return "${storeid}:${base}${target_vmid}/${target_volname}";
> + });
> +}
> +
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_volname, $target_vmid) = @_;
> +
> + my $basedir = $class->get_subdir($scfg, 'images');
> + my $imagedir = "${basedir}/${target_vmid}";
> +
> + mkpath $imagedir;
> +
> + my $old_path = "${basedir}/${source_volname}";
> + my $new_path = "${imagedir}/${target_volname}";
> +
> + rename($old_path, $new_path) ||
> + die "rename '$old_path' to '$new_path' failed - $!\n";
> +}
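For reference, the dir-based rename above boils down to plain file
operations: create the target VM's image directory, then rename() the image
across it. A standalone toy sketch using a temp dir (the file names are
illustrative, this is not the PVE code path):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Path qw(mkpath);
use File::Temp qw(tempdir);

my $basedir = tempdir(CLEANUP => 1);

# fake an existing image for VM 100
mkpath "$basedir/100";
open my $fh, '>', "$basedir/100/vm-100-disk-0.qcow2" or die "open failed - $!\n";
close $fh;

# same steps as the patch: make the target dir, then rename() into it
my $imagedir = "$basedir/200";
mkpath $imagedir;
rename "$basedir/100/vm-100-disk-0.qcow2", "$imagedir/vm-200-disk-0.qcow2"
    or die "rename failed - $!\n";

print -e "$imagedir/vm-200-disk-0.qcow2" ? "ok\n" : "missing\n";
```

Note that rename() only works within one filesystem, which holds here since
both paths live under the same storage's base directory.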
> +
> 1;
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index 42641e2..39db2cf 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -728,6 +728,7 @@ sub volume_has_feature {
> template => { current => 1},
> copy => { base => 1, current => 1, snap => 1},
> sparseinit => { base => 1, current => 1},
> + reassign => {current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) = $class->parse_volname($volname);
> @@ -743,4 +744,40 @@ sub volume_has_feature {
> return undef;
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_vmid) = @_;
> +
> + my $base;
> + (undef, $source_volname, undef, $base) = $class->parse_volname($source_volname);
> +
> + $base = $base ? "${base}/" : '';
> +
> + $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> + my $target_volname = $class->find_free_diskname(
same here, $target_volname needs to be an argument.
> + $storeid,
> + $scfg,
> + $target_vmid,
> + );
> + $class->rename_volume(
> + $scfg,
> + $storeid,
> + $source_volname,
> + $target_volname,
> + $target_vmid,
> + );
> + return "${storeid}:${base}${target_volname}";
> + });
> +}
> +
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_volname, $target_vmid) = @_;
> +
> + my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
> +
> + run_rbd_command(
> + $cmd,
> + errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
> + );
> +}
> +
> 1;
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index 2e2abe3..755e117 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -687,6 +687,7 @@ sub volume_has_feature {
> copy => { base => 1, current => 1},
> sparseinit => { base => 1, current => 1},
> replicate => { base => 1, current => 1},
> + reassign => {current => 1},
> };
>
> my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -789,4 +790,41 @@ sub volume_import_formats {
> return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
> }
>
> +sub reassign_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_vmid) = @_;
> +
> + my $base;
> + (undef, $source_volname, undef, $base) = $class->parse_volname($source_volname);
> +
> + $base = $base ? "${base}/" : '';
> +
> + $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> + my $target_volname = $class->find_free_diskname(
same here, $target_volname needs to be an argument.
> + $storeid,
> + $scfg,
> + $target_vmid,
> + );
> + $class->rename_volume(
> + $scfg,
> + $storeid,
> + $source_volname,
> + $target_volname,
> + $target_vmid,
> + );
> + return "${storeid}:${base}${target_volname}";
> + });
> +}
> +
> +sub rename_volume {
> + my ($class, $scfg, $storeid, $source_volname, $target_volname, $target_vmid) = @_;
> +
> + $class->zfs_request(
> + $scfg,
> + 5,
> + 'rename',
> + "$scfg->{pool}/$source_volname",
> + "$scfg->{pool}/$target_volname",
> + );
nit: pulling out $pool here would help readability a bit, as then we could
use a single line:
my $pool = $scfg->{pool};
$class->zfs_request($scfg, 5, 'rename', "$pool/$source_volname", "$pool/$target_volname");
> +}
> +
> 1;
>
Thread overview: 7+ messages
2021-04-20 16:34 [pve-devel] [PATCH v7 series 0/5] disk reassign: add new feature Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 storage 1/5] add disk reassign feature Aaron Lauterer
2021-04-23 13:49 ` Thomas Lamprecht
2021-04-20 16:34 ` [pve-devel] [PATCH v7 qemu-server 2/5] disk reassign: add API endpoint Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 qemu-server 3/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 manager 4/5] ui: tasks: add qmreassign task description Aaron Lauterer
2021-04-20 16:34 ` [pve-devel] [PATCH v7 qemu-server 5/5] cli: qm: change move_disk parameter to move-disk Aaron Lauterer