public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature
@ 2020-10-02  8:23 Aaron Lauterer
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 qemu-server 1/4] disk reassign: add API endpoint Aaron Lauterer
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Aaron Lauterer @ 2020-10-02  8:23 UTC (permalink / raw)
  To: pve-devel

This series implements a new feature which allows users to easily
reassign disks between VMs. Currently this is only possible with one of
the following manual steps:

* rename the disk image/file and do a `qm rescan`
* configure the disk manually and use the old image name, having an
    image for VM A assigned to VM B

The latter can cause unexpected behavior because PVE expects that the
VMID in a disk name always corresponds to the VM it is assigned to. Thus,
when a disk originally from VM A is manually configured as a disk for
VM B, deleting VM A will also delete that disk because it still has the
VMID of VM A in its name.

To issue a reassign from the CLI run:

qm reassign_disk <source VMID> <target VMID> <drive key>

where <drive key> is the config key of the disk, e.g. ide0, scsi1 and so
on.
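
For example, to move the disk configured as scsi1 on VM 100 over to VM
200 (the VMIDs and drive key here are just placeholders):

qm reassign_disk 100 200 scsi1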

The following storage types are implemented at the moment:
* dir based ones
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph RBD

v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
  Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
  in directory based plugins to check against the new `reassign` feature
  (not too happy about the repetition for each plugin)
* task description mapping has been moved from widget-toolkit to
  pve-manager/utils


v2 -> v3:
* change locking approach
* add more checks
* add intermediate storage plugin for directory based plugins
* use feature flags
* split up the reassign method to have a dedicated method for the
    renaming itself
* handle linked clones
* clean up if disk used to be replicated

I hope I didn't forget anything major.

v1 -> v2:
print info about the new disk volid and key at the end of the job so it
shows up in the CLI output and task log

Changes from RFC -> V1:
* support to reassign unused disks
* digest for target vm config
* reorder the checks a bit
* adding another one to check if the given key for the disk even exists
  in the config.

qemu-server: Aaron Lauterer (2):
  disk reassign: add API endpoint
  cli: disk reassign: add reassign_disk to qm command

 PVE/API2/Qemu.pm        | 156 ++++++++++++++++++++++++++++++++++++++++
 PVE/CLI/qm.pm           |   2 +
 PVE/QemuServer/Drive.pm |   4 ++
 3 files changed, 162 insertions(+)



storage: Aaron Lauterer (1):
  add disk reassign feature

 PVE/Storage.pm                 | 15 ++++++++--
 PVE/Storage/CIFSPlugin.pm      | 19 +++++++++++++
 PVE/Storage/DirPlugin.pm       | 19 +++++++++++++
 PVE/Storage/GlusterfsPlugin.pm | 19 +++++++++++++
 PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++++
 PVE/Storage/LvmThinPlugin.pm   |  1 +
 PVE/Storage/NFSPlugin.pm       | 19 +++++++++++++
 PVE/Storage/Plugin.pm          | 51 ++++++++++++++++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm       | 31 +++++++++++++++++++++
 PVE/Storage/ZFSPoolPlugin.pm   | 28 +++++++++++++++++++
 10 files changed, 224 insertions(+), 2 deletions(-)

manager: Aaron Lauterer (1):
  ui: tasks: add qmreassign task description

 www/manager6/Utils.js | 3 +++
 1 file changed, 3 insertions(+)


-- 
2.20.1





^ permalink raw reply	[flat|nested] 9+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 1/4] disk reassign: add API endpoint
  2020-10-02  8:23 [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature Aaron Lauterer
@ 2020-10-02  8:23 ` Aaron Lauterer
  2020-11-20 16:17   ` Fabian Grünbichler
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 qemu-server 2/4] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Aaron Lauterer @ 2020-10-02  8:23 UTC (permalink / raw)
  To: pve-devel

The goal of this new API endpoint is to provide an easy way to move a
disk between VMs, as until now this was only possible with manual
intervention: either by renaming the VM disk or by manually adding the
disk's volid to the config of the other VM.

The latter can easily cause unexpected behavior, such as a disk attached
to VM B being deleted if it used to be a disk of VM A. This happens
because PVE assumes that the VMID in the volname always matches the VM
the disk is attached to and would therefore remove any disk with VMID A
when VM A is deleted.

The term `reassign` was chosen as it is not yet used
for VM disks.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v3 -> v4: nothing

v2 -> v3:
* reordered the locking as discussed with fabian [0] to
run checks
    fork worker
	lock source config
	    lock target config
		run checks
		...

* added more checks
    * will not reassign to or from templates
    * will not reassign if VM has snapshots present
* cleanup if disk used to be replicated
* made task log slightly more verbose
* integrated general recommendations regarding code
* renamed `disk` to `drive_key`
* prepended some vars with `source_` for easier distinction

v1 -> v2: print config key and volid info at the end of the job so it
shows up on the CLI and task log

rfc -> v1:
* add support to reassign unused disks
* add support to provide a config digest for the target vm
* add additional check if disk key is present in config
* reorder checks a bit

In order to support unused disks I had to extend
PVE::QemuServer::Drive::valid_drive_names for the API parameter
validation.

Checks are ordered so that cheap tests are run at the first chance to
fail early.

The check if both VMs are present on the node is a bit redundant because
locking the config files will fail if the VM is not present. But with
the additional check we can provide a useful error message to the user
instead of a "Configuration file xyz does not exist" error.

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
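
For reference, once merged as proposed, the endpoint should also be
reachable directly via pvesh, roughly like this (untested sketch, same
placeholders as in the cover letter):

pvesh create /nodes/<node>/qemu/<source VMID>/reassign_disk \
    --target_vmid <target VMID> --drive_key <drive key>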


 PVE/API2/Qemu.pm        | 156 ++++++++++++++++++++++++++++++++++++++++
 PVE/QemuServer/Drive.pm |   4 ++
 2 files changed, 160 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8da616a..613b257 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4265,4 +4265,160 @@ __PACKAGE__->register_method({
 	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
     }});
 
+__PACKAGE__->register_method({
+    name => 'reassign_vm_disk',
+    path => '{vmid}/reassign_disk',
+    method => 'POST',
+    protected => 1,
+    proxyto => 'node',
+    description => "Reassign a disk to another VM",
+    permissions => {
+	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.Allocate' permissions on the storage.",
+	check => [ 'and',
+		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
+		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
+	    ],
+    },
+    parameters => {
+        additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    drive_key => {
+	        type => 'string',
+		description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+	    },
+	    digest => {
+		type => 'string',
+		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
+	    target_digest => {
+		type => 'string',
+		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
+	},
+    },
+    returns => {
+	type => 'string',
+	description => "the task ID.",
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $node = extract_param($param, 'node');
+	my $source_vmid = extract_param($param, 'vmid');
+	my $target_vmid = extract_param($param, 'target_vmid');
+	my $source_digest = extract_param($param, 'digest');
+	my $target_digest = extract_param($param, 'target_digest');
+	my $drive_key = extract_param($param, 'drive_key');
+
+	my $storecfg = PVE::Storage::config();
+	my $vmlist;
+	my $drive;
+	my $source_volid;
+
+	die "You cannot reassign a disk to the same VM\n"
+	    if $source_vmid eq $target_vmid;
+
+	my $load_and_check_configs = sub {
+	    $vmlist = PVE::QemuServer::vzlist();
+	    die "Both VMs need to be on the same node\n"
+		if !$vmlist->{$source_vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};
+
+	    my $source_conf = PVE::QemuConfig->load_config($source_vmid);
+	    PVE::QemuConfig->check_lock($source_conf);
+	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+	    PVE::QemuConfig->check_lock($target_conf);
+
+	    die "Can't reassign disks with templates\n"
+		if ($source_conf->{template} || $target_conf->{template});
+
+	    if ($source_digest) {
+		eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
+		if (my $err = $@) {
+		    die "Verification of source VM digest failed: ${err}";
+		}
+	    }
+
+	    if ($target_digest) {
+		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+		if (my $err = $@) {
+		    die "Verification of target VM digest failed: ${err}";
+		}
+	    }
+
+	    die "Disk '${drive_key}' does not exist\n"
+		if !defined($source_conf->{$drive_key});
+
+	    $drive = PVE::QemuServer::parse_drive($drive_key, $source_conf->{$drive_key});
+	    $source_volid = $drive->{file};
+	    die "disk '${drive_key}' has no associated volume\n" if !$source_volid;
+	    die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
+
+	    die "Can't reassign disk used by a snapshot\n"
+		if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_key, $source_volid);
+
+	    my $hasfeature = PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
+	    die "Storage does not support the reassignment of this disk\n" if !$hasfeature;
+
+	    die "Cannot reassign disk while the source VM is running\n"
+		if PVE::QemuServer::check_running($source_vmid) && $drive_key !~ m/unused[0-9]/;
+
+	    return ($source_conf, $target_conf);
+	};
+
+	my $reassign_func = sub {
+	    return PVE::QemuConfig->lock_config($source_vmid, sub {
+		return PVE::QemuConfig->lock_config($target_vmid, sub {
+		    my ($source_conf, $target_conf) = &$load_and_check_configs();
+
+		    PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_key} --target_vmid $target_vmid");
+
+		    my $new_volid = PVE::Storage::reassign_volume($storecfg, $source_volid, $target_vmid);
+
+		    delete $source_conf->{$drive_key};
+		    PVE::QemuConfig->write_config($source_vmid, $source_conf);
+		    print "removing disk '${drive_key}' from VM '${source_vmid}'\n";
+
+		    # remove possible replication snapshots
+		    my $had_snapshots = 0;
+		    if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $new_volid)) {
+			my $snapshots = PVE::Storage::volume_snapshot_list($storecfg, $new_volid);
+			for my $snap (@$snapshots) {
+			    next if (substr($snap, 0, 12) ne '__replicate_');
+
+			    $had_snapshots = 1;
+			    PVE::Storage::volume_snapshot_delete($storecfg, $new_volid, $snap);
+			}
+			print "Disk '${drive_key}:${source_volid}' was replicated. On the next replication run it will be cleaned up on the replication target.\n"
+			    if $had_snapshots;;
+		    }
+
+		    my $key;
+		    eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
+		    if (my $err = $@) {
+			print "adding moved disk '${new_volid}' to VM '${target_vmid}' config failed.\n";
+			return 0;
+		    }
+
+		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
+		    print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
+		});
+	    });
+	};
+
+	&$load_and_check_configs();
+
+	return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
+    }});
+
 1;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 91c33f8..d2f59cd 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -383,6 +383,10 @@ sub valid_drive_names {
             'efidisk0');
 }
 
+sub valid_drive_names_with_unused {
+    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
+}
+
 sub is_valid_drivename {
     my $dev = shift;
 
-- 
2.20.1





^ permalink raw reply	[flat|nested] 9+ messages in thread

* [pve-devel] [PATCH v4 qemu-server 2/4] cli: disk reassign: add reassign_disk to qm command
  2020-10-02  8:23 [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature Aaron Lauterer
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 qemu-server 1/4] disk reassign: add API endpoint Aaron Lauterer
@ 2020-10-02  8:23 ` Aaron Lauterer
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 storage 3/4] add disk reassign feature Aaron Lauterer
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Aaron Lauterer @ 2020-10-02  8:23 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v3 -> v4: nothing
v2 -> v3: renamed parameter `disk` to `drive_key`
rfc -> v1 -> v2: nothing changed

 PVE/CLI/qm.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 282fa86..7fe25c6 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -913,6 +913,8 @@ our $cmddef = {
 
     move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
 
+    reassign_disk => [ "PVE::API2::Qemu", 'reassign_vm_disk', ['vmid', 'target_vmid', 'drive_key'], { node => $nodename } ],
+
     unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
 
     config => [ "PVE::API2::Qemu", 'vm_config', ['vmid'],
-- 
2.20.1





^ permalink raw reply	[flat|nested] 9+ messages in thread

* [pve-devel] [PATCH v4 storage 3/4] add disk reassign feature
  2020-10-02  8:23 [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature Aaron Lauterer
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 qemu-server 1/4] disk reassign: add API endpoint Aaron Lauterer
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 qemu-server 2/4] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
@ 2020-10-02  8:23 ` Aaron Lauterer
  2020-11-20 16:17   ` Fabian Grünbichler
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 manager 4/4] ui: tasks: add qmreassign task description Aaron Lauterer
  2020-10-30 10:42 ` [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature Aaron Lauterer
  4 siblings, 1 reply; 9+ messages in thread
From: Aaron Lauterer @ 2020-10-02  8:23 UTC (permalink / raw)
  To: pve-devel

Functionality has been added for the following storage types:

* dir based ones
    * directory
    * NFS
    * CIFS
    * gluster
* ZFS
* (thin) LVM
* Ceph

A new feature `reassign` has been introduced to mark which storage
plugin supports the feature.

API version (APIVER) and age (APIAGE) have been bumped.
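
From a caller's perspective the new pieces fit together roughly like
this (sketch only, mirroring how the qemu-server patch in this series
uses them; $storecfg, $volid and $target_vmid are placeholders):

    # only try the reassign when the plugin advertises the new feature flag
    if (PVE::Storage::volume_has_feature($storecfg, 'reassign', $volid)) {
        my $new_volid = PVE::Storage::reassign_volume($storecfg, $volid, $target_vmid);
        # $new_volid now belongs to $target_vmid and can be added to its config
    }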

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v3 -> v4:
* revert intermediate storage plugin for directory based plugins
* add a `die "not supported"` method in Plugin.pm
* dir based plugins now call the file_reassign_volume method in
  Plugin.pm as the generic file/directory based method
* restored old `volume_has_feature` method in Plugin.pm and override it
  in directory based plugins to check against the new `reassign` feature
  (not too happy about the repetition for each plugin)

v2 -> v3:
* added feature flags instead of dummy "not implemented" methods in
  plugins which do not support it as that would break compatibility with
  3rd party plugins.
  Had to make $features available outside the `has_features` method in
  Plugins.pm in order to be able to individually add features in the
  `BaseDirPlugin.pm`.
* added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
  this is explained in the commit message
* moved the actual renaming from the `reassign_volume` to a dedicated
  `rename_volume` method to make this functionality available to other
  possible uses in the future.
* added support for linked clones ($base)


rfc -> v1 -> v2: nothing changed


 PVE/Storage.pm                 | 15 ++++++++--
 PVE/Storage/CIFSPlugin.pm      | 19 +++++++++++++
 PVE/Storage/DirPlugin.pm       | 19 +++++++++++++
 PVE/Storage/GlusterfsPlugin.pm | 19 +++++++++++++
 PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++++
 PVE/Storage/LvmThinPlugin.pm   |  1 +
 PVE/Storage/NFSPlugin.pm       | 19 +++++++++++++
 PVE/Storage/Plugin.pm          | 51 ++++++++++++++++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm       | 31 +++++++++++++++++++++
 PVE/Storage/ZFSPoolPlugin.pm   | 28 +++++++++++++++++++
 10 files changed, 224 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 4a60615..874457b 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -40,11 +40,11 @@ use PVE::Storage::DRBDPlugin;
 use PVE::Storage::PBSPlugin;
 
 # Storage API version. Icrement it on changes in storage API interface.
-use constant APIVER => 6;
+use constant APIVER => 7;
 # Age is the number of versions we're backward compatible with.
 # This is like having 'current=APIVER' and age='APIAGE' in libtool,
 # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
-use constant APIAGE => 5;
+use constant APIAGE => 6;
 
 # load standard plugins
 PVE::Storage::DirPlugin->register();
@@ -294,6 +294,7 @@ sub volume_snapshot_delete {
 #            snapshot - taking a snapshot is possible
 #            sparseinit - volume is sparsely initialized
 #            template - conversion to base image is possible
+#            reassign - reassigning disks to another guest is possible
 # $snap - check if the feature is supported for a given snapshot
 # $running - if the guest owning the volume is running
 # $opts - hash with further options:
@@ -1759,6 +1760,16 @@ sub complete_volume {
     return $res;
 }
 
+sub reassign_volume {
+    my ($cfg, $volid, $target_vmid) = @_;
+
+    my ($storeid, $volname) = parse_volume_id($volid);
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+    return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
 # Various io-heavy operations require io/bandwidth limits which can be
 # configured on multiple levels: The global defaults in datacenter.cfg, and
 # per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
index 6edbc9d..13c81cc 100644
--- a/PVE/Storage/CIFSPlugin.pm
+++ b/PVE/Storage/CIFSPlugin.pm
@@ -280,4 +280,23 @@ sub check_connection {
     return 1;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+	$class->parse_volname($volname);
+
+    if ($feature eq 'reassign') {
+	return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
+	return undef;
+    } else {
+	return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
+    }
+}
+
 1;
diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
index 3c81d24..fa61339 100644
--- a/PVE/Storage/DirPlugin.pm
+++ b/PVE/Storage/DirPlugin.pm
@@ -128,4 +128,23 @@ sub check_config {
     return $opts;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+	$class->parse_volname($volname);
+
+    if ($feature eq 'reassign') {
+	return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
+	return undef;
+    } else {
+	return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
+    }
+}
+
 1;
diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index 2dd414d..5c9fd78 100644
--- a/PVE/Storage/GlusterfsPlugin.pm
+++ b/PVE/Storage/GlusterfsPlugin.pm
@@ -348,4 +348,23 @@ sub check_connection {
     return defined($server) ? 1 : 0;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+	$class->parse_volname($volname);
+
+    if ($feature eq 'reassign') {
+	return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
+	return undef;
+    } else {
+	return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
+    }
+}
+
 1;
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index c0740d4..4203eda 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -337,6 +337,13 @@ sub lvcreate {
     run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
 }
 
+sub lvrename {
+    my ($vg, $oldname, $newname) = @_;
+
+    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
+    run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
+}
+
 sub alloc_image {
     my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
 
@@ -581,6 +588,7 @@ sub volume_has_feature {
 
     my $features = {
 	copy => { base => 1, current => 1},
+	reassign => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -681,4 +689,20 @@ sub volume_import_write {
 	input => '<&'.fileno($input_fh));
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid);
+    });
+}
+
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid) = @_;
+
+    lvrename($scfg->{vgname}, $source_volname, $target_volname);
+    return "${storeid}:${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index d1c5b1f..adc4ae2 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -355,6 +355,7 @@ sub volume_has_feature {
 	template => { current => 1},
 	copy => { base => 1, current => 1, snap => 1},
 	sparseinit => { base => 1, current => 1},
+	reassign => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
index 6abb24b..9d45e9f 100644
--- a/PVE/Storage/NFSPlugin.pm
+++ b/PVE/Storage/NFSPlugin.pm
@@ -176,4 +176,23 @@ sub check_connection {
     return 1;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
+sub volume_has_feature {
+    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
+
+    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
+	$class->parse_volname($volname);
+
+    if ($feature eq 'reassign') {
+	return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
+	return undef;
+    } else {
+	return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
+    }
+}
+
 1;
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index e6cd99c..d10f1aa 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -888,6 +888,7 @@ sub storage_can_replicate {
     return 0;
 }
 
+
 sub volume_has_feature {
     my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
 
@@ -1411,4 +1412,54 @@ sub volume_import_formats {
     return ();
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    die "no implemented in class '$class'\n";
+}
+
+# general reassignment method for file/directory based storages
+sub file_reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $base;
+    my $base_vmid;
+    my $format;
+    my $source_vmid;
+
+    (undef, $volname, $source_vmid, $base, $base_vmid, undef, $format) = $class->parse_volname($volname);
+
+    # parse_volname strips the directory from volname
+    my $source_volname = "${source_vmid}/${volname}";
+
+    if ($base) {
+	$base = "${base_vmid}/${base}/";
+    } else {
+	$base = '';
+    }
+
+    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
+
+	return $class->rename_volume($scfg, $storeid, $source_volname, $target_volname, $target_vmid, $base);
+    });
+}
+
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
+
+    my $basedir = $class->get_subdir($scfg, 'images');
+    my $imagedir = "${basedir}/${vmid}";
+
+    mkpath $imagedir;
+
+    my $old_path = "${basedir}/${source_volname}";
+    my $new_path = "${imagedir}/${target_volname}";
+
+    rename($old_path, $new_path) ||
+	die "rename '$old_path' to '$new_path' failed - $!\n";
+
+    return "${storeid}:${base}${vmid}/${target_volname}";
+}
+
+
 1;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 38f2b46..325937a 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -703,6 +703,7 @@ sub volume_has_feature {
 	template => { current => 1},
 	copy => { base => 1, current => 1, snap => 1},
 	sparseinit => { base => 1, current => 1},
+	reassign => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -719,4 +720,34 @@ sub volume_has_feature {
     return undef;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $base;;
+    (undef, $volname, undef, $base) = $class->parse_volname($volname);
+
+    if ($base) {
+	$base = $base . '/';
+    } else {
+	$base = '';
+    }
+
+    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
+    });
+}
+
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
+
+    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
+
+    run_rbd_command(
+	$cmd,
+	errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
+    );
+    return "${storeid}:${base}${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 6ac05b4..42f183c 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -11,6 +11,8 @@ use PVE::RPCEnvironment;
 use PVE::Storage::Plugin;
 use PVE::Tools qw(run_command);
 
+use Data::Dumper;
+
 use base qw(PVE::Storage::Plugin);
 
 sub type {
@@ -658,6 +660,7 @@ sub volume_has_feature {
 	copy => { base => 1, current => 1},
 	sparseinit => { base => 1, current => 1},
 	replicate => { base => 1, current => 1},
+	reassign => {current => 1},
     };
 
     my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
@@ -760,4 +763,29 @@ sub volume_import_formats {
     return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $base;;
+    (undef, $volname, undef, $base) = $class->parse_volname($volname);
+
+    if ($base) {
+	$base = $base . '/';
+    } else {
+	$base = '';
+    }
+
+    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
+	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
+    });
+}
+
+sub rename_volume {
+    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
+
+    $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
+    return "${storeid}:${base}${target_volname}";
+}
+
 1;
-- 
2.20.1





^ permalink raw reply	[flat|nested] 9+ messages in thread

* [pve-devel] [PATCH v4 manager 4/4] ui: tasks: add qmreassign task description
  2020-10-02  8:23 [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature Aaron Lauterer
                   ` (2 preceding siblings ...)
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 storage 3/4] add disk reassign feature Aaron Lauterer
@ 2020-10-02  8:23 ` Aaron Lauterer
  2020-10-30 10:42 ` [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature Aaron Lauterer
  4 siblings, 0 replies; 9+ messages in thread
From: Aaron Lauterer @ 2020-10-02  8:23 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 www/manager6/Utils.js | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 19227384..0a5a207c 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1566,6 +1566,9 @@ Ext.define('PVE.Utils', { utilities: {
     constructor: function() {
 	var me = this;
 	Ext.apply(me, me.utilities);
+	Proxmox.Utils.override_task_descriptions({
+	    qmreassign: ['VM', gettext('Reassign disk')],
+	});
     },
 
     getNodeVMs: function(nodename) {
-- 
2.20.1





^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature
  2020-10-02  8:23 [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature Aaron Lauterer
                   ` (3 preceding siblings ...)
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 manager 4/4] ui: tasks: add qmreassign task description Aaron Lauterer
@ 2020-10-30 10:42 ` Aaron Lauterer
  2020-11-19 10:54   ` Dominic Jäger
  4 siblings, 1 reply; 9+ messages in thread
From: Aaron Lauterer @ 2020-10-30 10:42 UTC (permalink / raw)
  To: pve-devel

Does anyone have time to take a look at the latest iteration of these patches?

Thx :)

On 10/2/20 10:23 AM, Aaron Lauterer wrote:
> This series implements a new feature which allows users to easily
> reassign disks between VMs. Currently this is only possible with one of
> the following manual steps:
> 
> * rename the disk image/file and do a `qm rescan`
> * configure the disk manually and use the old image name, having an
>      image for VM A assigned to VM B
> 
> The latter can cause unexpected behavior because PVE expects that the
> VMID in a disk name always corresponds to the VM it is assigned to. Thus,
> when a disk originally from VM A is manually configured as a disk for
> VM B, deleting VM A will also delete that disk because it still has the
> VMID of VM A in its name.
> 
> To issue a reassign from the CLI run:
> 
> qm reassign_disk <source VMID> <target VMID> <drive key>
> 
> where <drive key> is the config key of the disk, e.g. ide0, scsi1 and so
> on.
> 
> The following storage types are implemented at the moment:
> * dir based ones
>      * directory
>      * NFS
>      * CIFS
>      * gluster
> * ZFS
> * (thin) LVM
> * Ceph RBD
> 
> v3 -> v4:
> * revert intermediate storage plugin for directory based plugins
> * add a `die "not supported"` method in Plugin.pm
> * dir based plugins now call the file_reassign_volume method in
>    Plugin.pm as the generic file/directory based method
> * restored old `volume_has_feature` method in Plugin.pm and override it
>    in directory based plugins to check against the new `reassign` feature
>    (not too happy about the repetition for each plugin)
> * task description mapping has been moved from widget-toolkit to
>    pve-manager/utils
> 
> 
> v2 -> v3:
> * change locking approach
> * add more checks
> * add intermediate storage plugin for directory based plugins
> * use feature flags
> * split up the reassign method to have a dedicated method for the
>      renaming itself
> * handle linked clones
> * clean up if disk used to be replicated
> 
> I hope I didn't forget anything major.
> 
> v1 -> v2:
> print info about the new disk volid and key at the end of the job so it
> shows up in the CLI output and task log
> 
> Changes from RFC -> V1:
> * support to reassign unused disks
> * digest for target vm config
> * reorder the checks a bit
> * adding another one to check if the given key for the disk even exists
>    in the config.
> 
> qemu-server: Aaron Lauterer (2):
>    disk reassign: add API endpoint
>    cli: disk reassign: add reassign_disk to qm command
> 
>   PVE/API2/Qemu.pm        | 156 ++++++++++++++++++++++++++++++++++++++++
>   PVE/CLI/qm.pm           |   2 +
>   PVE/QemuServer/Drive.pm |   4 ++
>   3 files changed, 162 insertions(+)
> 
> 
> 
> storage: Aaron Lauterer (1):
>    add disk reassign feature
> 
>   PVE/Storage.pm                 | 15 ++++++++--
>   PVE/Storage/CIFSPlugin.pm      | 19 +++++++++++++
>   PVE/Storage/DirPlugin.pm       | 19 +++++++++++++
>   PVE/Storage/GlusterfsPlugin.pm | 19 +++++++++++++
>   PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++++
>   PVE/Storage/LvmThinPlugin.pm   |  1 +
>   PVE/Storage/NFSPlugin.pm       | 19 +++++++++++++
>   PVE/Storage/Plugin.pm          | 51 ++++++++++++++++++++++++++++++++++
>   PVE/Storage/RBDPlugin.pm       | 31 +++++++++++++++++++++
>   PVE/Storage/ZFSPoolPlugin.pm   | 28 +++++++++++++++++++
>   10 files changed, 224 insertions(+), 2 deletions(-)
> 
> manager: Aaron Lauterer (1):
>    ui: tasks: add qmreassign task description
> 
>   www/manager6/Utils.js | 3 +++
>   1 file changed, 3 insertions(+)
> 
> 




^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature
  2020-10-30 10:42 ` [pve-devel] [PATCH v4 series 0/4] disk reassign: add new feature Aaron Lauterer
@ 2020-11-19 10:54   ` Dominic Jäger
  0 siblings, 0 replies; 9+ messages in thread
From: Dominic Jäger @ 2020-11-19 10:54 UTC (permalink / raw)
  To: Proxmox VE development discussion

Gave it a quick test: Moving some disks from one VM to another on a CIFS
storage worked for me.  Had to do some conflict resolution upon application
though.
=> If (there is more feedback and) you rebase it, then I can test it (+ GUI) more thoroughly :)

I would probably rename the drive_key parameter for qm to something that
already exists in our code, because it looks like we already have several
different names for this concept:
 - Bus/Device in the GUI
 - the word (ide, scsi)  is called Bus/Controller in man qm
 - the word is called interface (of a drive) in QemuServer.pm
 - we have $deviceid =~ m/^(virtio)(\d+)$/ in qemu-server, so the whole word+integer
 - and also "my $drive_id = "$drive->{interface}$drive->{index}";
 - we have "{drive_name} may be used to specify ide0, scsi1, etc ..." in the (old) Importdisk.pm
 - and in ControllerSelector.js the first half is controller and the integer is deviceid

And for me it would make sense to minimize those differences.

Tested-by: Dominic Jäger <d.jaeger@proxmox.com>

On Fri, Oct 30, 2020 at 11:42:23AM +0100, Aaron Lauterer wrote:
> Does anyone have time to take a look at the latest iteration of these patches?
> 
> Thx :)
> 
> On 10/2/20 10:23 AM, Aaron Lauterer wrote:
> > deleted as well because it still had the VMID of VM A in it's name.
> > 
> > To issue a reassign from the CLI run:
> > 
> > qm reassign_disk <source VMID> <target VMID> <drive key>
> > 
> > where <drive key> is the config key of the disk, e.g. ide0, scsi1 and so
> > on.
> > 
> > The following storage types are implemented at the moment:
> > * dir based ones
> >      * CIFS




^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [pve-devel] [PATCH v4 storage 3/4] add disk reassign feature
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 storage 3/4] add disk reassign feature Aaron Lauterer
@ 2020-11-20 16:17   ` Fabian Grünbichler
  0 siblings, 0 replies; 9+ messages in thread
From: Fabian Grünbichler @ 2020-11-20 16:17 UTC (permalink / raw)
  To: Proxmox VE development discussion

small nit inline, also order in series is wrong - this patch goes first, 
the rest depends on it..
;)

needs a rebase/fixup since APIAGE/VER were bumped in the meantime..

On October 2, 2020 10:23 am, Aaron Lauterer wrote:
> Functionality has been added for the following storage types:
> 
> * dir based ones
>     * directory
>     * NFS
>     * CIFS
>     * gluster
> * ZFS
> * (thin) LVM
> * Ceph
> 
> A new feature `reassign` has been introduced to mark which storage
> plugin supports the feature.
> 
> API version (APIVER) and age (APIAGE) have been bumped.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v3 -> v4:
> * revert intermediate storage plugin for directory based plugins
> * add a `die "not supported"` method in Plugin.pm
> * dir based plugins now call the file_reassign_volume method in
>   Plugin.pm as the generic file/directory based method
> * restored old `volume_has_feature` method in Plugin.pm and override it
>   in directory based plugins to check against the new `reassign` feature
>   (not too happy about the repetition for each plugin)
> 
> v2 -> v3:
> * added feature flags instead of dummy "not implemented" methods in
>   plugins which do not support it as that would break compatibility with
>   3rd party plugins.
>   Had to make $features available outside the `has_features` method in
>   Plugins.pm in order to be able to individually add features in the
>   `BaseDirPlugin.pm`.
> * added the BaseDirPlugin.pm to maintain compat with 3rd party plugins,
>   this is explained in the commit message
> * moved the actual renaming from the `reassign_volume` to a dedicated
>   `rename_volume` method to make this functionality available to other
>   possible uses in the future.
> * added support for linked clones ($base)
> 
> 
> rfc -> v1 -> v2: nothing changed
> 
> 
>  PVE/Storage.pm                 | 15 ++++++++--
>  PVE/Storage/CIFSPlugin.pm      | 19 +++++++++++++
>  PVE/Storage/DirPlugin.pm       | 19 +++++++++++++
>  PVE/Storage/GlusterfsPlugin.pm | 19 +++++++++++++
>  PVE/Storage/LVMPlugin.pm       | 24 ++++++++++++++++
>  PVE/Storage/LvmThinPlugin.pm   |  1 +
>  PVE/Storage/NFSPlugin.pm       | 19 +++++++++++++
>  PVE/Storage/Plugin.pm          | 51 ++++++++++++++++++++++++++++++++++
>  PVE/Storage/RBDPlugin.pm       | 31 +++++++++++++++++++++
>  PVE/Storage/ZFSPoolPlugin.pm   | 28 +++++++++++++++++++
>  10 files changed, 224 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index 4a60615..874457b 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -40,11 +40,11 @@ use PVE::Storage::DRBDPlugin;
>  use PVE::Storage::PBSPlugin;
>  
>  # Storage API version. Icrement it on changes in storage API interface.
> -use constant APIVER => 6;
> +use constant APIVER => 7;
>  # Age is the number of versions we're backward compatible with.
>  # This is like having 'current=APIVER' and age='APIAGE' in libtool,
>  # see https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html
> -use constant APIAGE => 5;
> +use constant APIAGE => 6;
>  
>  # load standard plugins
>  PVE::Storage::DirPlugin->register();
> @@ -294,6 +294,7 @@ sub volume_snapshot_delete {
>  #            snapshot - taking a snapshot is possible
>  #            sparseinit - volume is sparsely initialized
>  #            template - conversion to base image is possible
> +#            reassign - reassigning disks to another guest is possible
>  # $snap - check if the feature is supported for a given snapshot
>  # $running - if the guest owning the volume is running
>  # $opts - hash with further options:
> @@ -1759,6 +1760,16 @@ sub complete_volume {
>      return $res;
>  }
>  
> +sub reassign_volume {
> +    my ($cfg, $volid, $target_vmid) = @_;
> +
> +    my ($storeid, $volname) = parse_volume_id($volid);
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +    return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
>  # Various io-heavy operations require io/bandwidth limits which can be
>  # configured on multiple levels: The global defaults in datacenter.cfg, and
>  # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/PVE/Storage/CIFSPlugin.pm b/PVE/Storage/CIFSPlugin.pm
> index 6edbc9d..13c81cc 100644
> --- a/PVE/Storage/CIFSPlugin.pm
> +++ b/PVE/Storage/CIFSPlugin.pm
> @@ -280,4 +280,23 @@ sub check_connection {
>      return 1;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> +	$class->parse_volname($volname);
> +
> +    if ($feature eq 'reassign') {
> +	return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> +	return undef;
> +    } else {
> +	return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
> +    }
> +}
> +
>  1;
> diff --git a/PVE/Storage/DirPlugin.pm b/PVE/Storage/DirPlugin.pm
> index 3c81d24..fa61339 100644
> --- a/PVE/Storage/DirPlugin.pm
> +++ b/PVE/Storage/DirPlugin.pm
> @@ -128,4 +128,23 @@ sub check_config {
>      return $opts;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> +	$class->parse_volname($volname);
> +
> +    if ($feature eq 'reassign') {
> +	return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> +	return undef;
> +    } else {
> +	return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
> +    }
> +}
> +
>  1;
> diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
> index 2dd414d..5c9fd78 100644
> --- a/PVE/Storage/GlusterfsPlugin.pm
> +++ b/PVE/Storage/GlusterfsPlugin.pm
> @@ -348,4 +348,23 @@ sub check_connection {
>      return defined($server) ? 1 : 0;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> +	$class->parse_volname($volname);
> +
> +    if ($feature eq 'reassign') {
> +	return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> +	return undef;
> +    } else {
> +	return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
> +    }
> +}
> +
>  1;
> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
> index c0740d4..4203eda 100644
> --- a/PVE/Storage/LVMPlugin.pm
> +++ b/PVE/Storage/LVMPlugin.pm
> @@ -337,6 +337,13 @@ sub lvcreate {
>      run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
>  }
>  
> +sub lvrename {
> +    my ($vg, $oldname, $newname) = @_;
> +
> +    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
> +    run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
> +}
> +
>  sub alloc_image {
>      my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>  
> @@ -581,6 +588,7 @@ sub volume_has_feature {
>  
>      my $features = {
>  	copy => { base => 1, current => 1},
> +	reassign => {current => 1},
>      };
>  
>      my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -681,4 +689,20 @@ sub volume_import_write {
>  	input => '<&'.fileno($input_fh));
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid);
> +    });
> +}
> +
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid) = @_;
> +
> +    lvrename($scfg->{vgname}, $source_volname, $target_volname);
> +    return "${storeid}:${target_volname}";
> +}
> +
>  1;
> diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
> index d1c5b1f..adc4ae2 100644
> --- a/PVE/Storage/LvmThinPlugin.pm
> +++ b/PVE/Storage/LvmThinPlugin.pm
> @@ -355,6 +355,7 @@ sub volume_has_feature {
>  	template => { current => 1},
>  	copy => { base => 1, current => 1, snap => 1},
>  	sparseinit => { base => 1, current => 1},
> +	reassign => {current => 1},
>      };
>  
>      my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> diff --git a/PVE/Storage/NFSPlugin.pm b/PVE/Storage/NFSPlugin.pm
> index 6abb24b..9d45e9f 100644
> --- a/PVE/Storage/NFSPlugin.pm
> +++ b/PVE/Storage/NFSPlugin.pm
> @@ -176,4 +176,23 @@ sub check_connection {
>      return 1;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    return $class->file_reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
> +sub volume_has_feature {
> +    my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
> +
> +    my ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format) =
> +	$class->parse_volname($volname);
> +
> +    if ($feature eq 'reassign') {
> +	return 1 if !$isBase && $format =~ /^raw$|^qcow2$|^vmdk$/;
> +	return undef;
> +    } else {
> +	return $class->SUPER::volume_has_feature($scfg, $feature, $storeid, $volname, $snapname, $running, $opts);
> +    }
> +}
> +
>  1;
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index e6cd99c..d10f1aa 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -888,6 +888,7 @@ sub storage_can_replicate {
>      return 0;
>  }
>  
> +
>  sub volume_has_feature {
>      my ($class, $scfg, $feature, $storeid, $volname, $snapname, $running, $opts) = @_;
>  
> @@ -1411,4 +1412,54 @@ sub volume_import_formats {
>      return ();
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    die "no implemented in class '$class'\n";

not implemented, and maybe s/class/storage plugin/

> +}
> +
> +# general reassignment method for file/directory based storages
> +sub file_reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $base;
> +    my $base_vmid;
> +    my $format;
> +    my $source_vmid;
> +
> +    (undef, $volname, $source_vmid, $base, $base_vmid, undef, $format) = $class->parse_volname($volname);
> +
> +    # parse_volname strips the directory from volname
> +    my $source_volname = "${source_vmid}/${volname}";
> +
> +    if ($base) {
> +	$base = "${base_vmid}/${base}/";
> +    } else {
> +	$base = '';
> +    }
> +
> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
> +
> +	return $class->rename_volume($scfg, $storeid, $source_volname, $target_volname, $target_vmid, $base);
> +    });
> +}
> +
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> +    my $basedir = $class->get_subdir($scfg, 'images');
> +    my $imagedir = "${basedir}/${vmid}";
> +
> +    mkpath $imagedir;
> +
> +    my $old_path = "${basedir}/${source_volname}";
> +    my $new_path = "${imagedir}/${target_volname}";
> +
> +    rename($old_path, $new_path) ||
> +	die "rename '$old_path' to '$new_path' failed - $!\n";
> +
> +    return "${storeid}:${base}${vmid}/${target_volname}";
> +}
> +
> +
>  1;
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index 38f2b46..325937a 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -703,6 +703,7 @@ sub volume_has_feature {
>  	template => { current => 1},
>  	copy => { base => 1, current => 1, snap => 1},
>  	sparseinit => { base => 1, current => 1},
> +	reassign => {current => 1},
>      };
>  
>      my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -719,4 +720,34 @@ sub volume_has_feature {
>      return undef;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $base;;
> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
> +
> +    if ($base) {
> +	$base = $base . '/';
> +    } else {
> +	$base = '';
> +    }
> +
> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
> +    });
> +}
> +
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> +    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $source_volname, $target_volname);
> +
> +    run_rbd_command(
> +	$cmd,
> +	errmsg => "could not rename image '${source_volname}' to '${target_volname}'",
> +    );
> +    return "${storeid}:${base}${target_volname}";
> +}
> +
>  1;
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index 6ac05b4..42f183c 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -11,6 +11,8 @@ use PVE::RPCEnvironment;
>  use PVE::Storage::Plugin;
>  use PVE::Tools qw(run_command);
>  
> +use Data::Dumper;
> +
>  use base qw(PVE::Storage::Plugin);
>  
>  sub type {
> @@ -658,6 +660,7 @@ sub volume_has_feature {
>  	copy => { base => 1, current => 1},
>  	sparseinit => { base => 1, current => 1},
>  	replicate => { base => 1, current => 1},
> +	reassign => {current => 1},
>      };
>  
>      my ($vtype, $name, $vmid, $basename, $basevmid, $isBase) =
> @@ -760,4 +763,29 @@ sub volume_import_formats {
>      return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $base;;
> +    (undef, $volname, undef, $base) = $class->parse_volname($volname);
> +
> +    if ($base) {
> +	$base = $base . '/';
> +    } else {
> +	$base = '';
> +    }
> +
> +    $class->cluster_lock_storage($storeid, $scfg->{shared}, undef, sub {
> +	my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +	return $class->rename_volume($scfg, $storeid, $volname, $target_volname, $target_vmid, $base);
> +    });
> +}
> +
> +sub rename_volume {
> +    my ($class, $scfg, $storeid, $source_volname, $target_volname, $vmid, $base) = @_;
> +
> +    $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$source_volname", "$scfg->{pool}/$target_volname");
> +    return "${storeid}:${base}${target_volname}";
> +}
> +
>  1;
> -- 
> 2.20.1
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 




^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [pve-devel] [PATCH v4 qemu-server 1/4] disk reassign: add API endpoint
  2020-10-02  8:23 ` [pve-devel] [PATCH v4 qemu-server 1/4] disk reassign: add API endpoint Aaron Lauterer
@ 2020-11-20 16:17   ` Fabian Grünbichler
  0 siblings, 0 replies; 9+ messages in thread
From: Fabian Grünbichler @ 2020-11-20 16:17 UTC (permalink / raw)
  To: Proxmox VE development discussion

On October 2, 2020 10:23 am, Aaron Lauterer wrote:
> The goal of this new API endpoint is to provide an easy way to move a
> disk between VMs, as until now this was only possible with manual
> intervention: either by renaming the VM disk or by manually adding the
> disk's volid to the config of the other VM.
> 
> The latter can easily cause unexpected behavior, such as a disk attached
> to VM B being deleted if it used to be a disk of VM A. This happens
> because PVE assumes that the VMID in the volname always matches the VM
> the disk is attached to and would therefore remove any disk with VMID A
> when VM A is deleted.
> 
> The term `reassign` was chosen as it is not yet used
> for VM disks.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v3 -> v4: nothing
> 
> v2 -> v3:
> * reordered the locking as discussed with fabian [0] to
> run checks
>     fork worker
> 	lock source config
> 	    lock target config
> 		run checks
> 		...
> 
> * added more checks
>     * will not reassign to or from templates
>     * will not reassign if VM has snapshots present
> * cleanup if disk used to be replicated
> * made task log slightly more verbose
> * integrated general recommendations regarding code
> * renamed `disk` to `drive_key`
> * prepended some vars with `source_` for easier distinction
> 
> v1 -> v2: print config key and volid info at the end of the job so it
> shows up on the CLI and task log
> 
> rfc -> v1:
> * add support to reassign unused disks
> * add support to provide a config digest for the target vm
> * add additional check if disk key is present in config
> * reorder checks a bit
> 
> In order to support unused disks I had to extend
> PVE::QemuServer::Drive::valid_drive_names for the API parameter
> validation.
> 
> Checks are ordered so that cheap tests are run at the first chance to
> fail early.
> 
> The check if both VMs are present on the node is a bit redundant because
> locking the config files will fail if the VM is not present. But with
> the additional check we can provide a useful error message to the user
> instead of a "Configuration file xyz does not exist" error.
> 
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
> 
> 
>  PVE/API2/Qemu.pm        | 156 ++++++++++++++++++++++++++++++++++++++++
>  PVE/QemuServer/Drive.pm |   4 ++
>  2 files changed, 160 insertions(+)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 8da616a..613b257 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -4265,4 +4265,160 @@ __PACKAGE__->register_method({
>  	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>      }});
>  
> +__PACKAGE__->register_method({
> +    name => 'reassign_vm_disk',
> +    path => '{vmid}/reassign_disk',
> +    method => 'POST',
> +    protected => 1,
> +    proxyto => 'node',
> +    description => "Reassign a disk to another VM",
> +    permissions => {
> +	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.Allocate' permissions on the storage.",

and VM.Config.Disk on target_vmid?
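
something along these lines maybe (untested sketch, assuming
'{target_vmid}' gets templated like the other path parameters):

    check => [ 'and',
        ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
        # hypothetical addition for the target VM
        ['perm', '/vms/{target_vmid}', [ 'VM.Config.Disk' ]],
        ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
    ],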

> +	check => [ 'and',
> +		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> +		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
> +	    ],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    drive_key => {
> +	        type => 'string',
> +		description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> +	    },
> +	    digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	    target_digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	},
> +    },
> +    returns => {
> +	type => 'string',
> +	description => "the task ID.",
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $authuser = $rpcenv->get_user();
> +
> +	my $node = extract_param($param, 'node');
> +	my $source_vmid = extract_param($param, 'vmid');
> +	my $target_vmid = extract_param($param, 'target_vmid');
> +	my $source_digest = extract_param($param, 'digest');
> +	my $target_digest = extract_param($param, 'target_digest');
> +	my $drive_key = extract_param($param, 'drive_key');
> +
> +	my $storecfg = PVE::Storage::config();
> +	my $vmlist;
> +	my $drive;
> +	my $source_volid;
> +
> +	die "You cannot reassign a disk to the same VM\n"
"Reassigning disk with same source and target VM not possible. Did you 
mean to move the disk?"

> +	    if $source_vmid eq $target_vmid;
> +
> +	my $load_and_check_configs = sub {
> +	    $vmlist = PVE::QemuServer::vzlist();
> +	    die "Both VMs need to be on the same node\n"
> +		if !$vmlist->{$source_vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};

if we use PVE::Cluster::get_vmlist() here, we could include the nodes as
well, which might be more informative?
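
rough sketch of what I mean (untested):

    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
    for my $vmid ($source_vmid, $target_vmid) {
        die "VM ${vmid} does not exist\n" if !defined($vmlist->{$vmid});
        die "VM ${vmid} is on node '$vmlist->{$vmid}->{node}', not on '${node}'\n"
            if $vmlist->{$vmid}->{node} ne $node;
    }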

> +
> +	    my $source_conf = PVE::QemuConfig->load_config($source_vmid);
> +	    PVE::QemuConfig->check_lock($source_conf);
> +	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> +	    PVE::QemuConfig->check_lock($target_conf);
> +
> +	    die "Can't reassign disks with templates\n"

disks from/to template

> +		if ($source_conf->{template} || $target_conf->{template});
> +
> +	    if ($source_digest) {
> +		eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "Verification of source VM digest failed: ${err}";

a simple "VM $vmid: " prefix would be enough, the rest is contained in 
$err anyway..
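
i.e. something like:

    if (my $err = $@) {
        die "VM ${source_vmid}: ${err}";
    }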

> +		}
> +	    }
> +
> +	    if ($target_digest) {
> +		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "Verification of target VM digest failed: ${err}";

same

> +		}
> +	    }
> +
> +	    die "Disk '${drive_key}' does not exist\n"
> +		if !defined($source_conf->{$drive_key});
> +
> +	    $drive = PVE::QemuServer::parse_drive($drive_key, $source_conf->{$drive_key});
> +	    $source_volid = $drive->{file};
> +	    die "disk '${drive_key}' has no associated volume\n" if !$source_volid;
> +	    die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);

check for non-volume disks missing? it will/should fail in the storage 
layer, but better to catch it here already..
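
e.g. (untested, one possible way):

    die "disk '${drive_key}' does not reference a PVE-managed volume\n"
        if !PVE::Storage::parse_volume_id($source_volid, 1);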

> +
> +	    die "Can't reassign disk used by a snapshot\n"
> +		if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_key, $source_volid);
> +
> +	    my $hasfeature = PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
> +	    die "Storage does not support the reassignment of this disk\n" if !$hasfeature;

variable only used once for this check, you can just

die ..
  if !PVE::Storage::...
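
i.e. (just inlined):

    die "Storage does not support the reassignment of this disk\n"
        if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);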

> +
> +	    die "Cannot reassign disk while the source VM is running\n"
> +		if PVE::QemuServer::check_running($source_vmid) && $drive_key !~ m/unused[0-9]/;
> +
> +	    return ($source_conf, $target_conf);
> +	};
> +
> +	my $reassign_func = sub {
> +	    return PVE::QemuConfig->lock_config($source_vmid, sub {
> +		return PVE::QemuConfig->lock_config($target_vmid, sub {
> +		    my ($source_conf, $target_conf) = &$load_and_check_configs();
> +
> +		    PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_key} --target_vmid $target_vmid");
> +
> +		    my $new_volid = PVE::Storage::reassign_volume($storecfg, $source_volid, $target_vmid);
> +
> +		    delete $source_conf->{$drive_key};
> +		    PVE::QemuConfig->write_config($source_vmid, $source_conf);
> +		    print "removing disk '${drive_key}' from VM '${source_vmid}'\n";

this message is misleading, as tense and the state of the source VM 
don't match ;)
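
e.g., past tense after the write would match (sketch - and the same
applies to the 'adding disk to VM ...' print for the target config
further down):

    PVE::QemuConfig->write_config($source_vmid, $source_conf);
    print "removed disk '${drive_key}' from VM '${source_vmid}'\n";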

> +
> +		    # remove possible replication snapshots
> +		    my $had_snapshots = 0;
> +		    if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $new_volid)) {
> +			my $snapshots = PVE::Storage::volume_snapshot_list($storecfg, $new_volid);

> +			for my $snap (@$snapshots) {
> +			    next if (substr($snap, 0, 12) ne '__replicate_');
> +
> +			    $had_snapshots = 1;
> +			    PVE::Storage::volume_snapshot_delete($storecfg, $new_volid, $snap);
> +			}
> +			print "Disk '${drive_key}:${source_volid}' was replicated. On the next replication run it will be cleaned up on the replication target.\n"
> +			    if $had_snapshots;;
> +		    }

this can fail, so either wrap it in eval, move it below the following
block, or move it above the removal from the source config. the last
option is potentially problematic, as we would need to get the
replication lock then..

also, isn't this basically what PVE::Replication::prepare does?
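
untested sketch of the eval variant:

    # a failing cleanup should not abort the reassignment at this point
    eval {
        if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $new_volid)) {
            my $snapshots = PVE::Storage::volume_snapshot_list($storecfg, $new_volid);
            for my $snap (@$snapshots) {
                next if $snap !~ m/^__replicate_/;
                PVE::Storage::volume_snapshot_delete($storecfg, $new_volid, $snap);
            }
        }
    };
    warn "failed to remove replication snapshots of '${new_volid}' - $@" if $@;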

> +
> +		    my $key;
> +		    eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
> +		    if (my $err = $@) {
> +			print "adding moved disk '${new_volid}' to VM '${target_vmid}' config failed.\n";

I thought we were reassigning a disk here ;) might want to mention that 
adding it as unused failed, which is basically only possible if there 
is no free unusedX slot. freeing up a slot and rescanning the VMID will 
fix the issue.
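
e.g. (wording just a suggestion):

    print "failed to add '${new_volid}' as unused disk to VM '${target_vmid}': $err";
    print "free up an unusedX slot and run 'qm rescan --vmid ${target_vmid}' to pick it up again.\n";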

> +			return 0;
> +		    }
> +
> +		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
> +		    print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";

again, order is wrong here - if the write_config fails, the print never 
happens. if the write was successful, the tense of the print is wrong.

> +		});
> +	    });
> +	};
> +
> +	&$load_and_check_configs();
> +
> +	return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
> +    }});
> +
>  1;
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index 91c33f8..d2f59cd 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -383,6 +383,10 @@ sub valid_drive_names {
>              'efidisk0');
>  }
>  
> +sub valid_drive_names_with_unused {
> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
> +}
> +
>  sub is_valid_drivename {
>      my $dev = shift;
>  
> -- 
> 2.20.1
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2020-11-20 16:17 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-02  8:23 [pve-devel] [PATCH v4 serries 0/4] disk reassign: add new feature Aaron Lauterer
2020-10-02  8:23 ` [pve-devel] [PATCH v4 qemu-server 1/4] disk reassign: add API endpoint Aaron Lauterer
2020-11-20 16:17   ` Fabian Grünbichler
2020-10-02  8:23 ` [pve-devel] [PATCH v4 qemu-server 2/4] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
2020-10-02  8:23 ` [pve-devel] [PATCH v4 storage 3/4] add disk reassign feature Aaron Lauterer
2020-11-20 16:17   ` Fabian Grünbichler
2020-10-02  8:23 ` [pve-devel] [PATCH v4 manager 4/4] ui: tasks: add qmreassign task description Aaron Lauterer
2020-10-30 10:42 ` [pve-devel] [PATCH v4 serries 0/4] disk reassign: add new feature Aaron Lauterer
2020-11-19 10:54   ` Dominic Jäger
