public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH v2 series 0/5] disk reassign: add new feature
@ 2020-09-01 12:44 Aaron Lauterer
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint Aaron Lauterer
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Aaron Lauterer @ 2020-09-01 12:44 UTC (permalink / raw)
  To: pve-devel

This series implements a new feature that allows users to easily
reassign disks between VMs. Currently this is only possible with one of
the following manual steps:

* rename the disk image/file and do a `qm rescan`
* configure the disk manually and use the old image name, having an
    image for VM A assigned to VM B

The latter can cause unexpected behavior because PVE expects the VMID in
a disk name to always correspond to the VM the disk is assigned to.
Thus, when a disk that originally belonged to VM A is manually
configured as a disk for VM B, deleting VM A will also delete that disk,
because it still has the VMID of VM A in its name.
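For a directory-based storage, the rename workaround boils down to moving the image so the VMID in its name matches the new owner. A rough sketch follows; the paths are illustrative stand-ins (not the real /var/lib/vz layout), and the final `qm rescan` step is only noted as a comment:

```shell
# Sketch of the manual rename workaround on a dir-based storage.
# "$base" is a stand-in for e.g. /var/lib/vz/images.
base=/tmp/demo-images
mkdir -p "$base/100" "$base/200"
touch "$base/100/vm-100-disk-0.qcow2"   # disk currently owned by VM 100

# Move and rename so the VMID in the file name matches the new owner.
mv "$base/100/vm-100-disk-0.qcow2" "$base/200/vm-200-disk-0.qcow2"

ls "$base/200"   # prints: vm-200-disk-0.qcow2
# On a real node you would now run: qm rescan --vmid 200
```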

To issue a reassign from the CLI run:

qm reassign_disk <source VMID> <target VMID> <disk key>

where <disk key> is the config key of the disk, e.g. ide0, scsi1 and so
on.

The following storage types are implemented at the moment:
* dir based ones
    * directory
    * NFS
    * CIFS
* ZFS
* (thin) LVM
* Ceph RBD

Changes from RFC -> v1:
* support reassigning unused disks
* digest for the target VM config
* reorder the checks a bit
* add another check to verify that the given disk key actually exists
  in the config

v1 -> v2:
print info about the new disk volid and key at the end of the job so it
shows up in the CLI output and task log


qemu-server: Aaron Lauterer (2):
  disk reassign: add API endpoint
  cli: disk reassign: add reassign_disk to qm command

 PVE/API2/Qemu.pm        | 108 ++++++++++++++++++++++++++++++++++++++++
 PVE/CLI/qm.pm           |   2 +
 PVE/QemuServer/Drive.pm |   4 ++
 3 files changed, 114 insertions(+)


storage: Aaron Lauterer (2):
  add disk reassign feature
  disk reassign: add not implemented yet message to storages

 PVE/Storage.pm                   | 10 ++++++++++
 PVE/Storage/CephFSPlugin.pm      |  5 +++++
 PVE/Storage/DRBDPlugin.pm        |  5 +++++
 PVE/Storage/GlusterfsPlugin.pm   |  5 +++++
 PVE/Storage/ISCSIDirectPlugin.pm |  5 +++++
 PVE/Storage/ISCSIPlugin.pm       |  4 ++++
 PVE/Storage/LVMPlugin.pm         | 15 +++++++++++++++
 PVE/Storage/Plugin.pm            | 21 +++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm         | 13 +++++++++++++
 PVE/Storage/ZFSPlugin.pm         |  5 +++++
 PVE/Storage/ZFSPoolPlugin.pm     |  9 +++++++++
 11 files changed, 97 insertions(+)


widget-toolkit: Aaron Lauterer (1):
  utils: task_desc_table: add qmreassign

 src/Utils.js | 1 +
 1 file changed, 1 insertion(+)

-- 
2.20.1





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint
  2020-09-01 12:44 [pve-devel] [PATCH v2 series 0/5] disk reassign: add new feature Aaron Lauterer
@ 2020-09-01 12:44 ` Aaron Lauterer
  2020-09-03  7:46   ` Fabian Grünbichler
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 qemu-server 2/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2020-09-01 12:44 UTC (permalink / raw)
  To: pve-devel

The goal of this new API endpoint is to provide an easy way to move a
disk between VMs, as this was previously only possible with manual
intervention: either by renaming the VM disk or by manually adding the
disk's volid to the config of the other VM.

The latter can easily cause unexpected behavior, such as a disk attached
to VM B being deleted when VM A is removed, because PVE assumes that the
VMID in the volname always matches the VM the disk is attached to and
thus removes any disk carrying VMID A when VM A is deleted.

The term `reassign` was chosen as it is not yet used
for VM disks.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v1 -> v2: print config key and volid info at the end of the job so it
shows up on the CLI and task log

rfc -> v1:
* add support to reassign unused disks
* add support to provide a config digest for the target vm
* add additional check if disk key is present in config
* reorder checks a bit

In order to support unused disks I had to extend
PVE::QemuServer::Drive::valid_drive_names for the API parameter
validation.

Checks are ordered so that cheap tests are run at the first chance to
fail early.

The check if both VMs are present on the node is a bit redundant because
locking the config files will fail if the VM is not present. But with
the additional check we can provide a useful error message to the user
instead of a "Configuration file xyz does not exist" error.
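The idea of keeping a cheap, technically redundant check purely for the sake of the error message can be sketched generically like this (a shell illustration with made-up paths and messages, not the actual Perl code):

```shell
# Hypothetical illustration: fail early with an actionable error
# instead of letting a later lock/load step emit a cryptic one.
conf_dir=/tmp/demo-qemu-server
mkdir -p "$conf_dir"
touch "$conf_dir/100.conf"   # simulate a VM that exists on this node

check_vm_present() {
    vmid=$1
    if [ ! -e "$conf_dir/$vmid.conf" ]; then
        echo "error: VM $vmid not found on this node" >&2
        return 2
    fi
}

check_vm_present 100 && echo "VM 100 present"
check_vm_present 999 2>/dev/null || echo "friendly error shown for VM 999"
```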

Feedback, especially on the checks is welcome.


 PVE/API2/Qemu.pm        | 108 ++++++++++++++++++++++++++++++++++++++++
 PVE/QemuServer/Drive.pm |   4 ++
 2 files changed, 112 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8da616a..45aadbf 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4265,4 +4265,112 @@ __PACKAGE__->register_method({
 	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
     }});
 
+__PACKAGE__->register_method({
+    name => 'reassign_vm_disk',
+    path => '{vmid}/reassign_disk',
+    method => 'POST',
+    protected => 1,
+    proxyto => 'node',
+    description => "Reassign a disk to another VM",
+    permissions => {
+	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.Allocate' permissions on the storage.",
+	check => [ 'and',
+		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
+		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
+	    ],
+    },
+    parameters => {
+        additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    disk => {
+	        type => 'string',
+		description => "The config key of the disk to move (for example, ide0 or scsi1).",
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+	    },
+	    digest => {
+		type => 'string',
+		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
+	    target_digest => {
+		type => 'string',
+		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
+	},
+    },
+    returns => {
+	type => 'string',
+	description => "the task ID.",
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $node = extract_param($param, 'node');
+	my $vmid = extract_param($param, 'vmid');
+	my $target_vmid = extract_param($param, 'target_vmid');
+	my $digest = extract_param($param, 'digest');
+	my $target_digest = extract_param($param, 'target_digest');
+	my $disk = extract_param($param, 'disk');
+
+	my $storecfg = PVE::Storage::config();
+
+	die "You cannot reassign a disk to the same VM\n"
+	    if $vmid eq $target_vmid;
+
+	my $vmlist = PVE::QemuServer::vzlist();
+	die "Both VMs need to be on the same node\n"
+	    if !$vmlist->{$vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};
+
+	return PVE::QemuConfig->lock_config($vmid, sub {
+	    my $conf = PVE::QemuConfig->load_config($vmid);
+	    PVE::QemuConfig->check_lock($conf);
+
+	    die "Source VM config checksum mismatch (file changed by other user?)\n"
+		if $digest && $digest ne $conf->{digest};
+
+	    die "Disk '${disk}' does not exist\n"
+		if !exists($conf->{$disk});
+
+	    my $drive = PVE::QemuServer::parse_drive($disk, $conf->{$disk});
+	    die "disk '$disk' has no associated volume\n" if !$drive->{file};
+	    die "Cannot reassign a cdrom drive\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
+
+	    die "Cannot reassign disk while the source VM is running\n"
+		if PVE::QemuServer::check_running($vmid) && $disk !~ m/unused[0-9]/;
+
+	    return PVE::QemuConfig->lock_config($target_vmid, sub {
+		my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+		PVE::QemuConfig->check_lock($target_conf);
+
+		die "Target VM config checksum mismatch (file changed by other user?)\n"
+		    if $target_digest && $target_digest ne $target_conf->{digest};
+
+		PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $vmid: reassign --disk $disk --target_vmid $target_vmid");
+
+		my $realcmd = sub {
+		    my $new_volid = PVE::Storage::reassign_volume($storecfg, $drive->{file}, $target_vmid);
+
+		    delete $conf->{$disk};
+		    PVE::QemuConfig->write_config($vmid, $conf);
+
+		    my $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid);
+		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
+
+		    print "reassigned disk to VM ${target_vmid} as '${key}: ${new_volid}'\n";
+		};
+
+		return $rpcenv->fork_worker('qmreassign', $vmid, $authuser, $realcmd);
+	    });
+	});
+    }});
+
 1;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 91c33f8..d2f59cd 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -383,6 +383,10 @@ sub valid_drive_names {
             'efidisk0');
 }
 
+sub valid_drive_names_with_unused {
+    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
+}
+
 sub is_valid_drivename {
     my $dev = shift;
 
-- 
2.20.1






* [pve-devel] [PATCH v2 qemu-server 2/5] cli: disk reassign: add reassign_disk to qm command
  2020-09-01 12:44 [pve-devel] [PATCH v2 series 0/5] disk reassign: add new feature Aaron Lauterer
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint Aaron Lauterer
@ 2020-09-01 12:44 ` Aaron Lauterer
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 storage 3/5] add disk reassign feature Aaron Lauterer
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2020-09-01 12:44 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
rfc -> v1 -> v2: nothing changed

 PVE/CLI/qm.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index 282fa86..c947884 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -913,6 +913,8 @@ our $cmddef = {
 
     move_disk => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
 
+    reassign_disk => [ "PVE::API2::Qemu", 'reassign_vm_disk', ['vmid', 'target_vmid', 'disk'], { node => $nodename } ],
+
     unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
 
     config => [ "PVE::API2::Qemu", 'vm_config', ['vmid'],
-- 
2.20.1






* [pve-devel] [PATCH v2 storage 3/5] add disk reassign feature
  2020-09-01 12:44 [pve-devel] [PATCH v2 series 0/5] disk reassign: add new feature Aaron Lauterer
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint Aaron Lauterer
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 qemu-server 2/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
@ 2020-09-01 12:44 ` Aaron Lauterer
  2020-09-03  7:55   ` Fabian Grünbichler
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 storage 4/5] disk reassign: add not implemented yet message to storages Aaron Lauterer
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 widget-toolkit 5/5] utils: task_desc_table: add qmreassign Aaron Lauterer
  4 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2020-09-01 12:44 UTC (permalink / raw)
  To: pve-devel

Functionality has been added for the following storage types:

* dir based ones
    * directory
    * NFS
    * CIFS
* ZFS
* (thin) LVM
* Ceph

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
rfc -> v1 -> v2: nothing changed

 PVE/Storage.pm               | 10 ++++++++++
 PVE/Storage/LVMPlugin.pm     | 15 +++++++++++++++
 PVE/Storage/Plugin.pm        | 21 +++++++++++++++++++++
 PVE/Storage/RBDPlugin.pm     | 13 +++++++++++++
 PVE/Storage/ZFSPoolPlugin.pm |  9 +++++++++
 5 files changed, 68 insertions(+)

diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 4a60615..919c606 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -1759,6 +1759,16 @@ sub complete_volume {
     return $res;
 }
 
+sub reassign_volume {
+    my ($cfg, $volid, $target_vmid) = @_;
+
+    my ($storeid, $volname) = parse_volume_id($volid);
+    my $scfg = storage_config($cfg, $storeid);
+    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+
+    return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
+}
+
 # Various io-heavy operations require io/bandwidth limits which can be
 # configured on multiple levels: The global defaults in datacenter.cfg, and
 # per-storage overrides. When we want to do a restore from storage A to storage
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index c0740d4..28ff6c8 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -337,6 +337,13 @@ sub lvcreate {
     run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
 }
 
+sub lvrename {
+    my ($vg, $oldname, $newname) = @_;
+
+    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
+    run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
+}
+
 sub alloc_image {
     my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
 
@@ -681,4 +688,12 @@ sub volume_import_write {
 	input => '<&'.fileno($input_fh));
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+    lvrename($scfg->{vgname}, $volname, $target_volname);
+    return "${storeid}:${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 8a58ff4..770a482 100644
--- a/PVE/Storage/Plugin.pm
+++ b/PVE/Storage/Plugin.pm
@@ -1411,4 +1411,25 @@ sub volume_import_formats {
     return ();
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $basedir = $class->get_subdir($scfg, 'images');
+    my $imagedir = "${basedir}/${target_vmid}";
+
+    mkpath $imagedir;
+
+    my @parsed_volname = $class->parse_volname($volname);
+    my $format = $parsed_volname[6];
+    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
+
+    my $old_path = "${basedir}/${volname}";
+    my $new_path = "${imagedir}/${target_volname}";
+
+    rename($old_path, $new_path) ||
+	die "rename '$old_path' to '$new_path' failed - $!\n";
+
+    return "${storeid}:${target_vmid}/${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 38f2b46..c0bd74c 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -719,4 +719,17 @@ sub volume_has_feature {
     return undef;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $volname, $target_volname);
+
+    run_rbd_command(
+	$cmd,
+	errmsg => "could not rename image '$volname' to '$target_volname'",
+    );
+    return "${storeid}:${target_volname}";
+}
+
 1;
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 10354b3..5baa5c2 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -749,4 +749,13 @@ sub volume_import_formats {
     return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+
+    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
+    $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$volname", "$scfg->{pool}/$target_volname");
+
+    return "${storeid}:${target_volname}";
+}
+
 1;
-- 
2.20.1






* [pve-devel] [PATCH v2 storage 4/5] disk reassign: add not implemented yet message to storages
  2020-09-01 12:44 [pve-devel] [PATCH v2 series 0/5] disk reassign: add new feature Aaron Lauterer
                   ` (2 preceding siblings ...)
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 storage 3/5] add disk reassign feature Aaron Lauterer
@ 2020-09-01 12:44 ` Aaron Lauterer
  2020-09-03  7:58   ` Fabian Grünbichler
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 widget-toolkit 5/5] utils: task_desc_table: add qmreassign Aaron Lauterer
  4 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2020-09-01 12:44 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
rfc -> v1 -> v2: nothing changed

 PVE/Storage/CephFSPlugin.pm      | 5 +++++
 PVE/Storage/DRBDPlugin.pm        | 5 +++++
 PVE/Storage/GlusterfsPlugin.pm   | 5 +++++
 PVE/Storage/ISCSIDirectPlugin.pm | 5 +++++
 PVE/Storage/ISCSIPlugin.pm       | 4 ++++
 PVE/Storage/ZFSPlugin.pm         | 5 +++++
 6 files changed, 29 insertions(+)

diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
index 880ec05..ba31366 100644
--- a/PVE/Storage/CephFSPlugin.pm
+++ b/PVE/Storage/CephFSPlugin.pm
@@ -222,4 +222,9 @@ sub deactivate_storage {
     }
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    die "Not implemented for this storage type\n";
+}
+
 1;
diff --git a/PVE/Storage/DRBDPlugin.pm b/PVE/Storage/DRBDPlugin.pm
index dbae4d1..304ade7 100644
--- a/PVE/Storage/DRBDPlugin.pm
+++ b/PVE/Storage/DRBDPlugin.pm
@@ -404,4 +404,9 @@ sub volume_has_feature {
     return undef;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    die "Not implemented for this storage type\n";
+}
+
 1;
diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
index 2dd414d..831d2ef 100644
--- a/PVE/Storage/GlusterfsPlugin.pm
+++ b/PVE/Storage/GlusterfsPlugin.pm
@@ -348,4 +348,9 @@ sub check_connection {
     return defined($server) ? 1 : 0;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    die "Not implemented for this storage type\n";
+}
+
 1;
diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
index 9777969..8d72173 100644
--- a/PVE/Storage/ISCSIDirectPlugin.pm
+++ b/PVE/Storage/ISCSIDirectPlugin.pm
@@ -252,4 +252,9 @@ sub volume_has_feature {
     return undef;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    die "Not implemented for this storage type\n";
+}
+
 1;
diff --git a/PVE/Storage/ISCSIPlugin.pm b/PVE/Storage/ISCSIPlugin.pm
index f2694ba..602fb1d 100644
--- a/PVE/Storage/ISCSIPlugin.pm
+++ b/PVE/Storage/ISCSIPlugin.pm
@@ -438,5 +438,9 @@ sub volume_has_feature {
     return undef;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    die "Not implemented for this storage type\n";
+}
 
 1;
diff --git a/PVE/Storage/ZFSPlugin.pm b/PVE/Storage/ZFSPlugin.pm
index 383f0a0..4097b1a 100644
--- a/PVE/Storage/ZFSPlugin.pm
+++ b/PVE/Storage/ZFSPlugin.pm
@@ -421,4 +421,9 @@ sub deactivate_volume {
     return 1;
 }
 
+sub reassign_volume {
+    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
+    die "Not implemented for this storage type\n";
+}
+
 1;
-- 
2.20.1






* [pve-devel] [PATCH v2 widget-toolkit 5/5] utils: task_desc_table: add qmreassign
  2020-09-01 12:44 [pve-devel] [PATCH v2 series 0/5] disk reassign: add new feature Aaron Lauterer
                   ` (3 preceding siblings ...)
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 storage 4/5] disk reassign: add not implemented yet message to storages Aaron Lauterer
@ 2020-09-01 12:44 ` Aaron Lauterer
  4 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2020-09-01 12:44 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
rfc -> v1 -> v2: nothing changed

 src/Utils.js | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/Utils.js b/src/Utils.js
index 8595cce..af41f33 100644
--- a/src/Utils.js
+++ b/src/Utils.js
@@ -587,6 +587,7 @@ utilities: {
 	qmigrate: ['VM', gettext('Migrate')],
 	qmclone: ['VM', gettext('Clone')],
 	qmmove: ['VM', gettext('Move disk')],
+	qmreassign: ['VM', gettext('Reassign disk')],
 	qmtemplate: ['VM', gettext('Convert to template')],
 	qmstart: ['VM', gettext('Start')],
 	qmstop: ['VM', gettext('Stop')],
-- 
2.20.1






* Re: [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint Aaron Lauterer
@ 2020-09-03  7:46   ` Fabian Grünbichler
  2020-09-03  8:30     ` Aaron Lauterer
  0 siblings, 1 reply; 14+ messages in thread
From: Fabian Grünbichler @ 2020-09-03  7:46 UTC (permalink / raw)
  To: Proxmox VE development discussion

On September 1, 2020 2:44 pm, Aaron Lauterer wrote:
> The goal of this new API endpoint is to provide an easy way to move a
> disk between VMs, as this was previously only possible with manual
> intervention: either by renaming the VM disk or by manually adding the
> disk's volid to the config of the other VM.
> 
> The latter can easily cause unexpected behavior, such as a disk attached
> to VM B being deleted when VM A is removed, because PVE assumes that the
> VMID in the volname always matches the VM the disk is attached to and
> thus removes any disk carrying VMID A when VM A is deleted.
> 
> The term `reassign` was chosen as it is not yet used
> for VM disks.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> v1 -> v2: print config key and volid info at the end of the job so it
> shows up on the CLI and task log
> 
> rfc -> v1:
> * add support to reassign unused disks
> * add support to provide a config digest for the target vm
> * add additional check if disk key is present in config
> * reorder checks a bit
> 
> In order to support unused disks I had to extend
> PVE::QemuServer::Drive::valid_drive_names for the API parameter
> validation.
> 
> Checks are ordered so that cheap tests are run at the first chance to
> fail early.
> 
> The check if both VMs are present on the node is a bit redundant because
> locking the config files will fail if the VM is not present. But with
> the additional check we can provide a useful error message to the user
> instead of a "Configuration file xyz does not exist" error.
> 
> Feedback, especially on the checks is welcome.
> 
> 
>  PVE/API2/Qemu.pm        | 108 ++++++++++++++++++++++++++++++++++++++++
>  PVE/QemuServer/Drive.pm |   4 ++
>  2 files changed, 112 insertions(+)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 8da616a..45aadbf 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -4265,4 +4265,112 @@ __PACKAGE__->register_method({
>  	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>      }});
>  
> +__PACKAGE__->register_method({
> +    name => 'reassign_vm_disk',
> +    path => '{vmid}/reassign_disk',
> +    method => 'POST',
> +    protected => 1,
> +    proxyto => 'node',
> +    description => "Reassign a disk to another VM",
> +    permissions => {
> +	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.Allocate' permissions on the storage.",
> +	check => [ 'and',
> +		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> +		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
> +	    ],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    disk => {
> +	        type => 'string',
> +		description => "The config key of the disk to move (for example, ide0 or scsi1).",
> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> +	    },
> +	    digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	    target_digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	},
> +    },
> +    returns => {
> +	type => 'string',
> +	description => "the task ID.",
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $authuser = $rpcenv->get_user();
> +
> +	my $node = extract_param($param, 'node');
> +	my $vmid = extract_param($param, 'vmid');
> +	my $target_vmid = extract_param($param, 'target_vmid');
> +	my $digest = extract_param($param, 'digest');
> +	my $target_digest = extract_param($param, 'target_digest');
> +	my $disk = extract_param($param, 'disk');
> +
> +	my $storecfg = PVE::Storage::config();
> +
> +	die "You cannot reassign a disk to the same VM\n"
> +	    if $vmid eq $target_vmid;
> +
> +	my $vmlist = PVE::QemuServer::vzlist();
> +	die "Both VMs need to be on the same node\n"
> +	    if !$vmlist->{$vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};

we could skip this if the target storage is shared, and let the user run 
'rescan' afterwards?
> +
> +	return PVE::QemuConfig->lock_config($vmid, sub {
> +	    my $conf = PVE::QemuConfig->load_config($vmid);
> +	    PVE::QemuConfig->check_lock($conf);
> +
> +	    die "Source VM config checksum mismatch (file changed by other user?)\n"
> +		if $digest && $digest ne $conf->{digest};

PVE::Tools::assert_if_modified

> +
> +	    die "Disk '${disk}' does not exist\n"
> +		if !exists($conf->{$disk});

why not defined?

> +
> +	    my $drive = PVE::QemuServer::parse_drive($disk, $conf->{$disk});
> +	    die "disk '$disk' has no associated volume\n" if !$drive->{file};

missing check for whether it's actually volume-based, and not 
pass-through.. what about templates/base volumes/linked 
clones/snapshots?

> +	    die "Cannot reassign a cdrom drive\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);

"CD drive contents can't be reassigned.\n"

> +
> +	    die "Cannot reassign disk while the source VM is running\n"
> +		if PVE::QemuServer::check_running($vmid) && $disk !~ m/unused[0-9]/;
> +
> +	    return PVE::QemuConfig->lock_config($target_vmid, sub {
> +		my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> +		PVE::QemuConfig->check_lock($target_conf);
> +
> +		die "Target VM config checksum mismatch (file changed by other user?)\n"
> +		    if $target_digest && $target_digest ne $target_conf->{digest};

same as above.

> +
> +		PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $vmid: reassign --disk $disk --target_vmid $target_vmid");
> +
> +		my $realcmd = sub {
> +		    my $new_volid = PVE::Storage::reassign_volume($storecfg, $drive->{file}, $target_vmid);
> +
> +		    delete $conf->{$disk};
> +		    PVE::QemuConfig->write_config($vmid, $conf);
> +
> +		    my $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid);

this can fail, now the volume is unreferenced altogether -> maybe wrap 
in an eval and print a more helpful error message including the new 
volid?

> +		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
> +
> +		    print "reassigned disk to VM ${target_vmid} as '${key}: ${new_volid}'\n";

or split this up, and

print 'renamed volume '$old_volid' to '$new_volid'\n";

after the call to reassign_volume, and print the config changes 
individually after the respective actions?

print "remove disk '$disk' from VM '$vmid'\n";

print "added volume '$new_volid' as '$key' to VM '$target_vmid'\n";

> +		};
> +
> +		return $rpcenv->fork_worker('qmreassign', $vmid, $authuser, $realcmd);

I am not so happy about

lock_config(
  load_config;
  lock_config(
    load_config; 
    fork_worker(
      cfs_lock/write_config
    )
  )
)

as it tends to accumulate more checks and extra functionality and at 
some point might introduce an inconsistency.. note that the current 
version seems fine (we always lock first then load, we only look at 
config files that belong to a specific node, ...), but maybe we could 
switch to

# check for early return/nice error
(conf1, conf2) = load_and_check_sub(id1, id2);
fork_worker(
  lock_config(
    lock_config(
      # recheck in locked worker context
      (conf1, conf2) = load_and_check_sub(id1, id2);
      ...
    )
  )
)

to make it more fool/future-proof, especially since the checks are rather cheap..

> +	    });
> +	});
> +    }});
> +
>  1;
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index 91c33f8..d2f59cd 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -383,6 +383,10 @@ sub valid_drive_names {
>              'efidisk0');
>  }
>  
> +sub valid_drive_names_with_unused {
> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
> +}
> +
>  sub is_valid_drivename {
>      my $dev = shift;
>  
> -- 
> 2.20.1
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 





* Re: [pve-devel] [PATCH v2 storage 3/5] add disk reassign feature
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 storage 3/5] add disk reassign feature Aaron Lauterer
@ 2020-09-03  7:55   ` Fabian Grünbichler
  0 siblings, 0 replies; 14+ messages in thread
From: Fabian Grünbichler @ 2020-09-03  7:55 UTC (permalink / raw)
  To: Proxmox VE development discussion

some small nits inline

On September 1, 2020 2:44 pm, Aaron Lauterer wrote:
> Functionality has been added for the following storage types:
> 
> * dir based ones
>     * directory
>     * NFS
>     * CIFS
> * ZFS
> * (thin) LVM
> * Ceph
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> rfc -> v1 -> v2: nothing changed
> 
>  PVE/Storage.pm               | 10 ++++++++++
>  PVE/Storage/LVMPlugin.pm     | 15 +++++++++++++++
>  PVE/Storage/Plugin.pm        | 21 +++++++++++++++++++++
>  PVE/Storage/RBDPlugin.pm     | 13 +++++++++++++
>  PVE/Storage/ZFSPoolPlugin.pm |  9 +++++++++
>  5 files changed, 68 insertions(+)
> 
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index 4a60615..919c606 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -1759,6 +1759,16 @@ sub complete_volume {
>      return $res;
>  }
>  
> +sub reassign_volume {
> +    my ($cfg, $volid, $target_vmid) = @_;
> +
> +    my ($storeid, $volname) = parse_volume_id($volid);
> +    my $scfg = storage_config($cfg, $storeid);
> +    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +
> +    return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
> +}
> +
>  # Various io-heavy operations require io/bandwidth limits which can be
>  # configured on multiple levels: The global defaults in datacenter.cfg, and
>  # per-storage overrides. When we want to do a restore from storage A to storage
> diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
> index c0740d4..28ff6c8 100644
> --- a/PVE/Storage/LVMPlugin.pm
> +++ b/PVE/Storage/LVMPlugin.pm
> @@ -337,6 +337,13 @@ sub lvcreate {
>      run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
>  }
>  
> +sub lvrename {
> +    my ($vg, $oldname, $newname) = @_;
> +
> +    my $cmd = ['/sbin/lvrename', $vg, $oldname, $newname];
> +    run_command($cmd, errmsg => "lvrename '${vg}/${oldname}' to '${newname}' error");
> +}
> +
>  sub alloc_image {
>      my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
>  
> @@ -681,4 +688,12 @@ sub volume_import_write {
>  	input => '<&'.fileno($input_fh));
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +    lvrename($scfg->{vgname}, $volname, $target_volname);
> +    return "${storeid}:${target_volname}";
> +}
> +
>  1;
> diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
> index 8a58ff4..770a482 100644
> --- a/PVE/Storage/Plugin.pm
> +++ b/PVE/Storage/Plugin.pm
> @@ -1411,4 +1411,25 @@ sub volume_import_formats {
>      return ();
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $basedir = $class->get_subdir($scfg, 'images');
> +    my $imagedir = "${basedir}/${target_vmid}";
> +
> +    mkpath $imagedir;
> +
> +    my @parsed_volname = $class->parse_volname($volname);
> +    my $format = $parsed_volname[6];
> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid, $format, 1);
> +
$volname vs $target_volname, but one includes the vmid and the other does 
not? rename 'target_volname' to diskname (like the sub you call), then 
you can do

my $target_volname = "$target_vmid/$diskname";

> +    my $old_path = "${basedir}/${volname}";
> +    my $new_path = "${imagedir}/${target_volname}";

and construct both paths in the same fashion

> +
> +    rename($old_path, $new_path) ||
> +	die "rename '$old_path' to '$new_path' failed - $!\n";
> +
> +    return "${storeid}:${target_vmid}/${target_volname}";

and return $storeid:$target_volname here

> +}
> +
>  1;
> diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
> index 38f2b46..c0bd74c 100644
> --- a/PVE/Storage/RBDPlugin.pm
> +++ b/PVE/Storage/RBDPlugin.pm
> @@ -719,4 +719,17 @@ sub volume_has_feature {
>      return undef;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +    my $cmd = $rbd_cmd->($scfg, $storeid, 'rename', $volname, $target_volname);
> +
> +    run_rbd_command(
> +	$cmd,
> +	errmsg => "could not rename image '$volname' to '$target_volname'",
> +    );
> +    return "${storeid}:${target_volname}";
> +}
> +
>  1;
> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
> index 10354b3..5baa5c2 100644
> --- a/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/PVE/Storage/ZFSPoolPlugin.pm
> @@ -749,4 +749,13 @@ sub volume_import_formats {
>      return $class->volume_export_formats($scfg, $storeid, $volname, undef, $base_snapshot, $with_snapshots);
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +
> +    my $target_volname = $class->find_free_diskname($storeid, $scfg, $target_vmid);
> +    $class->zfs_request($scfg, 5, 'rename', "$scfg->{pool}/$volname", "$scfg->{pool}/$target_volname");
> +
> +    return "${storeid}:${target_volname}";
> +}
> +
>  1;
> -- 
> 2.20.1
> 
> 
> 

* Re: [pve-devel] [PATCH v2 storage 4/5] disk reassign: add not implemented yet message to storages
  2020-09-01 12:44 ` [pve-devel] [PATCH v2 storage 4/5] disk reassign: add not implemented yet message to storages Aaron Lauterer
@ 2020-09-03  7:58   ` Fabian Grünbichler
  2020-09-03  9:01     ` Aaron Lauterer
  0 siblings, 1 reply; 14+ messages in thread
From: Fabian Grünbichler @ 2020-09-03  7:58 UTC (permalink / raw)
  To: Proxmox VE development discussion

wouldn't it make more sense to implement it in Dir/NFS/CIFSPlugin, and 
add this 'implement me' into Plugin itself? otherwise this breaks 
external plugins. also, would it make sense to add a feature for this so 
that we can check in the calling code with a meaningful error message 
before attempting and die-ing?
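
such a feature gate in the calling code might look roughly like this 
(self-contained sketch with a mock plugin class and a made-up 'reassign' 
feature name - not the actual PVE::Storage API):

```perl
use strict;
use warnings;

# MockPlugin stands in for a loaded storage plugin class; the feature
# name 'reassign' and the volid format below are illustrative only.
package MockPlugin;

sub volume_has_feature {
    my ($class, $scfg, $feature) = @_;
    return $scfg->{features}{$feature} ? 1 : undef;
}

sub reassign_volume {
    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
    return "$storeid:$target_vmid/renamed-disk";
}

package main;

# gate the rename on the feature so unsupported storages fail with a
# clear message up front instead of die-ing deep inside the plugin
sub reassign_volume_checked {
    my ($plugin, $scfg, $storeid, $volname, $target_vmid) = @_;

    die "storage does not support volume reassignment\n"
	if !$plugin->volume_has_feature($scfg, 'reassign');

    return $plugin->reassign_volume($scfg, $storeid, $volname, $target_vmid);
}
```

with features => { reassign => 1 } the call goes through to the plugin, 
without it the caller gets the clear error message right away.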

On September 1, 2020 2:44 pm, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> rfc -> v1 -> v2: nothing changed
> 
>  PVE/Storage/CephFSPlugin.pm      | 5 +++++
>  PVE/Storage/DRBDPlugin.pm        | 5 +++++
>  PVE/Storage/GlusterfsPlugin.pm   | 5 +++++
>  PVE/Storage/ISCSIDirectPlugin.pm | 5 +++++
>  PVE/Storage/ISCSIPlugin.pm       | 4 ++++
>  PVE/Storage/ZFSPlugin.pm         | 5 +++++
>  6 files changed, 29 insertions(+)
> 
> diff --git a/PVE/Storage/CephFSPlugin.pm b/PVE/Storage/CephFSPlugin.pm
> index 880ec05..ba31366 100644
> --- a/PVE/Storage/CephFSPlugin.pm
> +++ b/PVE/Storage/CephFSPlugin.pm
> @@ -222,4 +222,9 @@ sub deactivate_storage {
>      }
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    die "Not implemented for this storage type\n";
> +}
> +
>  1;
> diff --git a/PVE/Storage/DRBDPlugin.pm b/PVE/Storage/DRBDPlugin.pm
> index dbae4d1..304ade7 100644
> --- a/PVE/Storage/DRBDPlugin.pm
> +++ b/PVE/Storage/DRBDPlugin.pm
> @@ -404,4 +404,9 @@ sub volume_has_feature {
>      return undef;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    die "Not implemented for this storage type\n";
> +}
> +
>  1;
> diff --git a/PVE/Storage/GlusterfsPlugin.pm b/PVE/Storage/GlusterfsPlugin.pm
> index 2dd414d..831d2ef 100644
> --- a/PVE/Storage/GlusterfsPlugin.pm
> +++ b/PVE/Storage/GlusterfsPlugin.pm
> @@ -348,4 +348,9 @@ sub check_connection {
>      return defined($server) ? 1 : 0;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    die "Not implemented for this storage type\n";
> +}
> +
>  1;
> diff --git a/PVE/Storage/ISCSIDirectPlugin.pm b/PVE/Storage/ISCSIDirectPlugin.pm
> index 9777969..8d72173 100644
> --- a/PVE/Storage/ISCSIDirectPlugin.pm
> +++ b/PVE/Storage/ISCSIDirectPlugin.pm
> @@ -252,4 +252,9 @@ sub volume_has_feature {
>      return undef;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    die "Not implemented for this storage type\n";
> +}
> +
>  1;
> diff --git a/PVE/Storage/ISCSIPlugin.pm b/PVE/Storage/ISCSIPlugin.pm
> index f2694ba..602fb1d 100644
> --- a/PVE/Storage/ISCSIPlugin.pm
> +++ b/PVE/Storage/ISCSIPlugin.pm
> @@ -438,5 +438,9 @@ sub volume_has_feature {
>      return undef;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    die "Not implemented for this storage type\n";
> +}
>  
>  1;
> diff --git a/PVE/Storage/ZFSPlugin.pm b/PVE/Storage/ZFSPlugin.pm
> index 383f0a0..4097b1a 100644
> --- a/PVE/Storage/ZFSPlugin.pm
> +++ b/PVE/Storage/ZFSPlugin.pm
> @@ -421,4 +421,9 @@ sub deactivate_volume {
>      return 1;
>  }
>  
> +sub reassign_volume {
> +    my ($class, $scfg, $storeid, $volname, $target_vmid) = @_;
> +    die "Not implemented for this storage type\n";
> +}
> +
>  1;
> -- 
> 2.20.1
> 
> 
> 

* Re: [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint
  2020-09-03  7:46   ` Fabian Grünbichler
@ 2020-09-03  8:30     ` Aaron Lauterer
  2020-09-03  9:07       ` Fabian Grünbichler
  0 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2020-09-03  8:30 UTC (permalink / raw)
  To: pve-devel, Fabian Grünbichler



On 9/3/20 9:46 AM, Fabian Grünbichler wrote:
> On September 1, 2020 2:44 pm, Aaron Lauterer wrote:


[..]

>> +
>> +	my $storecfg = PVE::Storage::config();
>> +
>> +	die "You cannot reassign a disk to the same VM\n"
>> +	    if $vmid eq $target_vmid;
>> +
>> +	my $vmlist = PVE::QemuServer::vzlist();
>> +	die "Both VMs need to be on the same node\n"
>> +	    if !$vmlist->{$vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};
> 
> we could skip this if the target storage is shared, and let the user run
> 'rescan' afterwards?

I would like to keep this as simple as possible. Just as a thought for now, but maybe we could issue the rescan on the other node by calling that API endpoint. Though we would probably lose the possibility to log the new disk key and volid that way. But at least it would not be a two-step process for the user.

>> +
>> +	return PVE::QemuConfig->lock_config($vmid, sub {
>> +	    my $conf = PVE::QemuConfig->load_config($vmid);
>> +	    PVE::QemuConfig->check_lock($conf);
>> +
>> +	    die "Source VM config checksum mismatch (file change by other user?)\n"
>> +		if $digest && $digest ne $conf->{digest};
> 
> PVE::Tools::assert_if_modified

thx

> 
>> +
>> +	    die "Disk '${disk}' does not exist\n"
>> +		if !exists($conf->{$disk});
> 
> why not defined?

because I thought exists is enough ;). But yep, defined is better

> 
>> +
>> +	    my $drive = PVE::QemuServer::parse_drive($disk, $conf->{$disk});
>> +	    die "disk '$disk' has no associated volume\n" if !$drive->{file};
> 
> missing check for whether it's actually volume-based, and not
> pass-through.. what about templates/base volumes/linked
> clones/snapshots?

Passed-through disks will fail later on in the code, but having a check right away with a nicer error msg is probably a good idea. I haven't thought about templates, base volumes and linked clones yet. Same goes for snapshots :/

I guess we won't want to rename a ZFS dataset with snapshots present? Another thing is that renaming the dataset will most likely break replication.

> 
>> +	    die "Cannot reassign a cdrom drive\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
> 
> "CD drive contents can't be reassigned.\n"

Sounds much better :)

> 
>> +
>> +	    die "Cannot reassign disk while the source VM is running\n"
>> +		if PVE::QemuServer::check_running($vmid) && $disk !~ m/unused[0-9]/;
>> +
>> +	    return PVE::QemuConfig->lock_config($target_vmid, sub {
>> +		my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>> +		PVE::QemuConfig->check_lock($target_conf);
>> +
>> +		die "Target VM config checksum mismatch (file change by other user?)\n"
>> +		    if $target_digest && $target_digest ne $target_conf->{digest};
> 
> same as above.
> 
>> +
>> +		PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $vmid: reassign --disk $disk --target_vmid $target_vmid");
>> +
>> +		my $realcmd = sub {
>> +		    my $new_volid = PVE::Storage::reassign_volume($storecfg, $drive->{file}, $target_vmid);
>> +
>> +		    delete $conf->{$disk};
>> +		    PVE::QemuConfig->write_config($vmid, $conf);
>> +
>> +		    my $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid);
> 
> this can fail, now the volume is unreferenced altogether -> maybe wrap
> in an eval and print a more helpful error message including the new
> volid?

Ok

> 
>> +		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
>> +
>> +		    print "reassigned disk to VM ${target_vmid} as '${key}: ${new_volid}'\n";
> 
> or split this up, and
> 
> print 'renamed volume '$old_volid' to '$new_volid'\n";
> 
> after the call to reassign_volume, and print the config changes
> individually after the respective actions?
> 
> print "remove disk '$disk' from VM '$vmid'\n";
> 
> print "added volume '$new_volid' as '$key' to VM '$target_vmid'\n";

Sounds reasonable, being more verbose on what is going on is probably a good idea.

> 
>> +		};
>> +
>> +		return $rpcenv->fork_worker('qmreassign', $vmid, $authuser, $realcmd);
> 
> I am not so happy about
> 
> lock_config(
>    load_config;
>    lock_config(
>      load_config;
>      fork_worker(
>        cfs_lock/write_config
>      )
>    )
> )
> 
> as it tends to accumulate more checks and extra functionality and at
> some point might introduce an inconsistency.. note that the current
> version seems fine (we always lock first then load, we only look at
> config files that belong to a specific node, ...), but maybe we could
> switch to
> 
> # check for early return/nice error
> (conf1, conf2) = load_and_check_sub(id1, id2);
> fork_worker(
>    lock_config(
>      lock_config(
>        # recheck in locked worker context
>        (conf1, conf2) = load_and_check_sub(id1, id2);
>        ...
>      )
>    )
> )
> 
> to make it more fool/future-proof, especially since the checks are rather cheap..

Sounds good.

> 
>> +	    });
>> +	});
>> +    }});
>> +
>>   1;
>> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
>> index 91c33f8..d2f59cd 100644
>> --- a/PVE/QemuServer/Drive.pm
>> +++ b/PVE/QemuServer/Drive.pm
>> @@ -383,6 +383,10 @@ sub valid_drive_names {
>>               'efidisk0');
>>   }
>>   
>> +sub valid_drive_names_with_unused {
>> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
>> +}
>> +
>>   sub is_valid_drivename {
>>       my $dev = shift;
>>   
>> -- 
>> 2.20.1
>>
>>
>>

* Re: [pve-devel] [PATCH v2 storage 4/5] disk reassign: add not implemented yet message to storages
  2020-09-03  7:58   ` Fabian Grünbichler
@ 2020-09-03  9:01     ` Aaron Lauterer
  2020-09-03  9:06       ` Aaron Lauterer
  0 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2020-09-03  9:01 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler



On 9/3/20 9:58 AM, Fabian Grünbichler wrote:
> wouldn't it make more sense to implement it in Dir/NFS/CIFSPlugin, and
> add this 'implement me' into Plugin itself? otherwise this breaks
> external plugins. also, would it make sense to add a feature for this so
> that we can check in the calling code with a meaningful error message
> before attempting and die-ing?

The storage plugins are a bit of a mess hierarchically. The base plugin (Plugin.pm) implements quite a few methods for the dir-based plugins (dir, nfs, cifs), `find_free_diskname` for example.
The other plugins override these methods.

If we want to do it properly and to avoid code duplication, we should probably add another class in between to which we move the common file-based operations used by all the dir-based plugins.

Plugin.pm
     BaseDirPlugin.pm
         DirPlugin.pm
         NFSPlugin.pm
         CIFSPlugin.pm


Having the reassigning as additional feature sounds good. I will try that. But this will need





* Re: [pve-devel] [PATCH v2 storage 4/5] disk reassign: add not implemented yet message to storages
  2020-09-03  9:01     ` Aaron Lauterer
@ 2020-09-03  9:06       ` Aaron Lauterer
  2020-09-03  9:19         ` Fabian Grünbichler
  0 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2020-09-03  9:06 UTC (permalink / raw)
  To: pve-devel

sent too early without finishing the last sentence...

On 9/3/20 11:01 AM, Aaron Lauterer wrote:
> 
> 
> On 9/3/20 9:58 AM, Fabian Grünbichler wrote:
>> wouldn't it make more sense to implement it in Dir/NFS/CIFSPlugin, and
>> add this 'implement me' into Plugin itself? otherwise this breaks
>> external plugins. also, would it make sense to add a feature for this so
>> that we can check in the calling code with a meaningful error message
>> before attempting and die-ing?
> 
> The storage plugins are a bit of a mess hierarchically. The base plugin (Plugin.pm) implements quite a few methods for the dir-based plugins (dir, nfs, cifs), `find_free_diskname` for example.
> The other plugins override these methods.
> 
> If we want to do it properly and to avoid code duplication, we should probably add another class in between to which we move the common file-based operations used by all the dir-based plugins.
> 
> Plugin.pm
>      BaseDirPlugin.pm
>          DirPlugin.pm
>          NFSPlugin.pm
>          CIFSPlugin.pm
> 
> 
> Having the reassigning as additional feature sounds good. I will try that. But this will need
But this will need the intermediate dir base class so that we can add the feature just for them and not all plugins, especially third party ones which we cannot update.

Then again, what if a third party plugin is already using the dir-based methods in Plugin.pm?
Should I just add new things to the intermediate BaseDirPlugin in order to stay compatible?

> 
> 

* Re: [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint
  2020-09-03  8:30     ` Aaron Lauterer
@ 2020-09-03  9:07       ` Fabian Grünbichler
  0 siblings, 0 replies; 14+ messages in thread
From: Fabian Grünbichler @ 2020-09-03  9:07 UTC (permalink / raw)
  To: Aaron Lauterer, pve-devel

On September 3, 2020 10:30 am, Aaron Lauterer wrote:
> 
> 
> On 9/3/20 9:46 AM, Fabian Grünbichler wrote:
>> On September 1, 2020 2:44 pm, Aaron Lauterer wrote:
> 
> 
> [..]
> 
>>> +
>>> +	    my $drive = PVE::QemuServer::parse_drive($disk, $conf->{$disk});
>>> +	    die "disk '$disk' has no associated volume\n" if !$drive->{file};
>> 
>> missing check for whether it's actually volume-based, and not
>> pass-through.. what about templates/base volumes/linked
>> clones/snapshots?
> 
> Passed through disks will fail later on in the code but having a check right away with a nicer error msg is probably a good idea. I haven't thought about templates, base volumes and linked clones yet. Same goes for Snapshots :/
> 
> I guess we won't want to rename a ZFS dataset with snapshots present? Another thing is that renaming the dataset will most likely break replication.
> 

off the top of my head, but please verify/check for missing stuff:

snapshot: can't re-assign if volume is referenced in a snapshot (would 
invalidate that reference!), need to remove snapshot(s) first
base volume: does not make much sense (except if it does not have any clones, in 
which case we could re-assign to another template but that seems rather 
contrived)
linked volume: needs special handling in storage (pass and encode base 
volume/template VMID so that new volid is still correct)
replicated source: reassigning from a VM that is replicated should work (from 
replication's PoV, that's just like removing a disk -> it will get 
cleaned up on the next run), but some special handling to also remove 
the replication snapshots might be a good idea to return the volume to a 
clean slate
replicated target: should work, but check that volume is replicatable 
(like when adding a disk) might make sense?
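
the checks above could be collected into one pre-flight helper, roughly 
like this (a self-contained sketch - the field names are illustrative 
stand-ins, not the real PVE config structures):

```perl
use strict;
use warnings;

# Hypothetical pre-flight checks for reassign, operating on a plain
# hash that stands in for the parsed volume state. Die early with a
# specific message instead of failing mid-rename in the plugin.
sub check_reassign_allowed {
    my ($vol) = @_;

    # a rename would invalidate the snapshot's reference to the volume
    die "volume is referenced by snapshot '$vol->{snapref}' - remove it first\n"
	if $vol->{snapref};

    # base volumes of templates should not be reassigned
    die "cannot reassign a base volume\n"
	if $vol->{is_base};

    # linked clones need the base VMID encoded into the new volid
    die "linked clone - base VMID must be encoded in the new volid\n"
	if $vol->{linked_base};

    return 1;
}
```

a clean volume passes, everything else dies with a message the caller 
can show directly.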






* Re: [pve-devel] [PATCH v2 storage 4/5] disk reassign: add not implemented yet message to storages
  2020-09-03  9:06       ` Aaron Lauterer
@ 2020-09-03  9:19         ` Fabian Grünbichler
  0 siblings, 0 replies; 14+ messages in thread
From: Fabian Grünbichler @ 2020-09-03  9:19 UTC (permalink / raw)
  To: Proxmox VE development discussion

On September 3, 2020 11:06 am, Aaron Lauterer wrote:
> sent to early without finishing the last sentence...
> 
> On 9/3/20 11:01 AM, Aaron Lauterer wrote:
>> 
>> 
>> On 9/3/20 9:58 AM, Fabian Grünbichler wrote:
>>> wouldn't it make more sense to implement it in Dir/NFS/CIFSPlugin, and
>>> add this 'implement me' into Plugin itself? otherwise this breaks
>>> external plugins. also, would it make sense to add a feature for this so
>>> that we can check in the calling code with a meaningful error message
>>> before attempting and die-ing?
>> 
>> The storage plugins are a bit of a mess hierarchically. The base plugin (Plugin.pm) implements quite a few methods for the dir based plugins (dir, nfs, cifs) like `find_free_diskname` for example.
>> The other plugins overwrite these methods.

I know. Most of that predates external plugins though ;) For new stuff, 
we should not pile on top of the mess.

>> 
>> If we want to do it properly and to avoid code duplication, we should probably add another class in between to which me move the common file based operations which are used by all the dir based plugins.
>> 
>> Plugin.pm
>>      BaseDirPlugin.pm
>>          DirPlugin.pm
>>          NFSPlugin.pm
>>          CIFSPlugin.pm
>> 

might be a good idea, but it would be a breaking change as you indicate 
below (so maybe 7.0 material?)

>> 
>> Having the reassigning as additional feature sounds good. I will try that. But this will need
> But this will need the intermediate dir base class so that we can add the feature just for them and not all plugins, especially third party ones which we cannot update.

you could also just refactor volume_has_feature (e.g., by splitting out 
the $features hash so that plugins can override it without fully 
overriding volume_has_feature), or override it with just that feature 
check and fall back to the base one otherwise.
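
a minimal, self-contained illustration of that split-out (class and 
feature names are made up for the example; the real volume_has_feature 
takes $scfg and more):

```perl
use strict;
use warnings;

# Keep the feature map in an overridable class method instead of a
# lexical hash inside volume_has_feature, so subclasses can extend it
# without copying the whole sub.
package BasePluginExample;

sub features { return { copy => 1 } }

sub volume_has_feature {
    my ($class, $feature) = @_;
    return $class->features->{$feature} ? 1 : undef;
}

package DirStylePlugin;
our @ISA = ('BasePluginExample');

# add 'reassign' on top of the inherited feature map
sub features {
    my ($class) = @_;
    return { %{ $class->SUPER::features() }, reassign => 1 };
}

package main;
```

the subclass only overrides features(); the feature check itself stays 
in the base class and keeps working for third-party plugins that never 
heard of 'reassign'.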

> Then again, what if a third party plugin is using the dir based methods in the plugin.pm already?
> should I just add new things to the intermediate BaseDirPlugin in order to stay compatible?

that would be a possibility, and then move all the existing non-generic stuff at 
some later point when we need/want to break the API anyway.





end of thread, other threads:[~2020-09-03  9:20 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-01 12:44 [pve-devel] [PATCH v2 series 0/5] disk reassign: add new feature Aaron Lauterer
2020-09-01 12:44 ` [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint Aaron Lauterer
2020-09-03  7:46   ` Fabian Grünbichler
2020-09-03  8:30     ` Aaron Lauterer
2020-09-03  9:07       ` Fabian Grünbichler
2020-09-01 12:44 ` [pve-devel] [PATCH v2 qemu-server 2/5] cli: disk reassign: add reassign_disk to qm command Aaron Lauterer
2020-09-01 12:44 ` [pve-devel] [PATCH v2 storage 3/5] add disk reassign feature Aaron Lauterer
2020-09-03  7:55   ` Fabian Grünbichler
2020-09-01 12:44 ` [pve-devel] [PATCH v2 storage 4/5] disk reassign: add not implemented yet message to storages Aaron Lauterer
2020-09-03  7:58   ` Fabian Grünbichler
2020-09-03  9:01     ` Aaron Lauterer
2020-09-03  9:06       ` Aaron Lauterer
2020-09-03  9:19         ` Fabian Grünbichler
2020-09-01 12:44 ` [pve-devel] [PATCH v2 widget-toolkit 5/5] utils: task_desc_table: add qmreassign Aaron Lauterer
