From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <a.lauterer@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id AEAD9772DA
 for <pve-devel@lists.proxmox.com>; Thu, 21 Oct 2021 13:31:11 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 902BD1E0F0
 for <pve-devel@lists.proxmox.com>; Thu, 21 Oct 2021 13:30:41 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id E05D41DD76
 for <pve-devel@lists.proxmox.com>; Thu, 21 Oct 2021 13:30:32 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id B9CC045F05
 for <pve-devel@lists.proxmox.com>; Thu, 21 Oct 2021 13:30:32 +0200 (CEST)
From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Thu, 21 Oct 2021 13:30:26 +0200
Message-Id: <20211021113030.2649985-6-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211021113030.2649985-1-a.lauterer@proxmox.com>
References: <20211021113030.2649985-1-a.lauterer@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.177 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: [pve-devel] [PATCH v3 qemu-server 5/9] api: move-disk: add move to
 other VM
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Thu, 21 Oct 2021 11:31:11 -0000

The goal of this is to expand the move-disk API endpoint to make it
possible to move a disk to another VM. Previously this was only possible
with manual intervention, either by renaming the VM disk or by manually
adding the disk's volid to the config of the other VM.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
v2 -> v3:
* mention default target disk key in description
* code style things

v1 -> v2:
* make --target-disk optional and use source disk key as fallback
* use parse_volname instead of custom regex
* adapt to find_free_volname
* smaller (style) fixes

rfc -> v1:
* add check if target guest is replicated and fail if the moved volume
  does not support it
* check if source volume has a format suffix and pass it to
  'find_free_disk'
* fixed some style nits

old dedicated api endpoint -> rfc:
There are some big changes here. The old [0] dedicated API endpoint is
gone and most of its code is now part of move_disk. Error messages have
been changed accordingly and sometimes enhanced by adding disk keys and
VMIDs where appropriate.

Since a move to other guests should be possible for unused disks, we
need to check, before doing a move to another storage, that we are not
handling an unused disk.
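
With the patch applied, the extended endpoint can be exercised via the
qm CLI; a hypothetical invocation (VMIDs, disk keys and the storage name
are examples, both VMs assumed to be on the same node):

```shell
# classic mode: move scsi1 of VM 100 to another storage (unchanged behavior)
qm move-disk 100 scsi1 other-storage

# new mode: reassign scsi1 of VM 100 to VM 200 as scsi2
# (omitting --target-disk falls back to the source disk key, scsi1)
qm move-disk 100 scsi1 --target-vmid 200 --target-disk scsi2

# unused disks can also be reassigned to another VM
qm move-disk 100 unused0 --target-vmid 200
```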


 PVE/API2/Qemu.pm | 217 +++++++++++++++++++++++++++++++++++++++++++++--
 PVE/CLI/qm.pm    |   2 +-
 2 files changed, 213 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 1ac81e2..d2d9b1d 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -36,6 +36,7 @@ use PVE::API2::Qemu::Agent;
 use PVE::VZDump::Plugin;
 use PVE::DataCenterConfig;
 use PVE::SSHInfo;
+use PVE::Replication;
 
 BEGIN {
     if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3280,9 +3281,11 @@ __PACKAGE__->register_method({
     method => 'POST',
     protected => 1,
     proxyto => 'node',
-    description => "Move volume to different storage.",
+    description => "Move volume to different storage or to a different VM.",
     permissions => {
-	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
+	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
+	    "and 'Datastore.AllocateSpace' permissions on the storage. To move ".
+	    "a disk to another VM, you need the permissions on the target VM as well.",
 	check => [ 'and',
 		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
 		   ['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
@@ -3293,14 +3296,19 @@ __PACKAGE__->register_method({
 	properties => {
 	    node => get_standard_option('pve-node'),
 	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    'target-vmid' => get_standard_option('pve-vmid', {
+		completion => \&PVE::QemuServer::complete_vmid,
+		optional => 1,
+	    }),
 	    disk => {
 	        type => 'string',
 		description => "The disk you want to move.",
-		enum => [PVE::QemuServer::Drive::valid_drive_names()],
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
 	    },
             storage => get_standard_option('pve-storage-id', {
 		description => "Target storage.",
 		completion => \&PVE::QemuServer::complete_storage,
+		optional => 1,
             }),
             'format' => {
                 type => 'string',
@@ -3327,6 +3335,20 @@ __PACKAGE__->register_method({
 		minimum => '0',
 		default => 'move limit from datacenter or storage config',
 	    },
+	    'target-disk' => {
+	        type => 'string',
+		description => "The config key the disk will be moved to on the target VM " .
+		    "(for example, ide0 or scsi1). Default is the source disk key.",
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+		optional => 1,
+	    },
+	    'target-digest' => {
+		type => 'string',
+		description => "Prevent changes if the current configuration file of the target VM has " .
+		    "a different SHA1 digest. This can be used to prevent concurrent modifications.",
+		maxLength => 40,
+		optional => 1,
+	    },
 	},
     },
     returns => {
@@ -3341,14 +3363,22 @@ __PACKAGE__->register_method({
 
 	my $node = extract_param($param, 'node');
 	my $vmid = extract_param($param, 'vmid');
+	my $target_vmid = extract_param($param, 'target-vmid');
 	my $digest = extract_param($param, 'digest');
+	my $target_digest = extract_param($param, 'target-digest');
 	my $disk = extract_param($param, 'disk');
+	my $target_disk = extract_param($param, 'target-disk') // $disk;
 	my $storeid = extract_param($param, 'storage');
 	my $format = extract_param($param, 'format');
 
+	die "either set storage or target-vmid, but not both\n"
+	    if $storeid && $target_vmid;
+
+
 	my $storecfg = PVE::Storage::config();
+	my $source_volid;
 
-	my $updatefn =  sub {
+	my $move_updatefn =  sub {
 	    my $conf = PVE::QemuConfig->load_config($vmid);
 	    PVE::QemuConfig->check_lock($conf);
 
@@ -3458,7 +3488,184 @@ __PACKAGE__->register_method({
             return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
 	};
 
-	return PVE::QemuConfig->lock_config($vmid, $updatefn);
+	my $load_and_check_reassign_configs = sub {
+	    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+
+	    if ($vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node}) {
+		die "Both VMs need to be on the same node ($vmlist->{$vmid}->{node}) ".
+		    "but target VM is on $vmlist->{$target_vmid}->{node}.\n";
+	    }
+
+	    my $source_conf = PVE::QemuConfig->load_config($vmid);
+	    PVE::QemuConfig->check_lock($source_conf);
+	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+	    PVE::QemuConfig->check_lock($target_conf);
+
+	    die "Can't move disks from or to template VMs\n"
+		if ($source_conf->{template} || $target_conf->{template});
+
+	    if ($digest) {
+		eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${vmid}: ${err}";
+		}
+	    }
+
+	    if ($target_digest) {
+		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${target_vmid}: ${err}";
+		}
+	    }
+
+	    die "Disk '${disk}' for VM '$vmid' does not exist\n"
+		if !defined($source_conf->{$disk});
+
+	    die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
+		if exists($target_conf->{$target_disk});
+
+	    my $drive = PVE::QemuServer::parse_drive(
+		$disk,
+		$source_conf->{$disk},
+	    );
+	    $source_volid = $drive->{file};
+
+	    die "disk '${disk}' has no associated volume\n"
+		if !$source_volid;
+	    die "CD drive contents can't be moved to another VM\n"
+		if PVE::QemuServer::drive_is_cdrom($drive, 1);
+	    die "Can't move physical disk to another VM\n" if $source_volid =~ m|^/dev/|;
+	    die "Can't move disk used by a snapshot to another VM\n"
+		if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $disk, $source_volid);
+	    die "Storage does not support moving of this disk to another VM\n"
+		if (!PVE::Storage::volume_has_feature($storecfg, 'rename', $source_volid));
+	    die "Cannot move disk to another VM while the source VM is running\n"
+		if PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/;
+
+	    if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
+		my $interface = $1;
+		my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
+		eval {
+		    PVE::JSONSchema::parse_property_string(
+			$desc->{format},
+			$source_conf->{$disk},
+		    )
+		};
+		if (my $err = $@) {
+		    die "Cannot move disk to another VM: ${err}";
+		}
+	    }
+
+	    my $repl_conf = PVE::ReplicationConfig->new();
+	    my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
+	    my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
+	    my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+	    if ($is_replicated && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
+		die "Cannot move disk to a replicated VM. Storage does not support replication!\n";
+	    }
+
+	    return ($source_conf, $target_conf);
+	};
+
+	my $logfunc = sub {
+	    my ($msg) = @_;
+	    print STDERR "$msg\n";
+	};
+
+	my $disk_reassignfn = sub {
+	    return PVE::QemuConfig->lock_config($vmid, sub {
+		return PVE::QemuConfig->lock_config($target_vmid, sub {
+		    my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
+
+		    my $drive_param = PVE::QemuServer::parse_drive(
+			$target_disk,
+			$source_conf->{$disk},
+		    );
+
+		    print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
+		    my ($storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
+
+		    my $fmt = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
+
+		    my $target_volname = PVE::Storage::find_free_volname(
+			$storecfg,
+			$storeid,
+			$target_vmid,
+			$fmt
+		    );
+
+		    my $new_volid = PVE::Storage::rename_volume(
+			$storecfg,
+			$source_volid,
+			$target_volname,
+		    );
+
+		    $drive_param->{file} = $new_volid;
+
+		    delete $source_conf->{$disk};
+		    print "removing disk '${disk}' from VM '${vmid}' config\n";
+		    PVE::QemuConfig->write_config($vmid, $source_conf);
+
+		    my $drive_string = PVE::QemuServer::print_drive($drive_param);
+		    &$update_vm_api(
+			{
+			    node => $node,
+			    vmid => $target_vmid,
+			    digest => $target_digest,
+			    $target_disk => $drive_string,
+			},
+			1,
+		    );
+
+		    # remove possible replication snapshots
+		    if (PVE::Storage::volume_has_feature(
+			    $storecfg,
+			    'replicate',
+			    $source_volid),
+		    ) {
+			eval {
+			    PVE::Replication::prepare(
+				$storecfg,
+				[$new_volid],
+				undef,
+				1,
+				undef,
+				$logfunc,
+			    )
+			};
+			if (my $err = $@) {
+			    print "Failed to remove replication snapshots on moved disk " .
+				"'$target_disk': $err. Manual cleanup could be necessary.\n";
+			}
+		    }
+		});
+	    });
+	};
+
+	if ($target_vmid) {
+	    $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+		if $authuser ne 'root@pam';
+
+	    die "Moving a disk to the same VM is not possible. Did you mean to ".
+		"move the disk to a different storage?\n"
+		if $vmid eq $target_vmid;
+
+	    &$load_and_check_reassign_configs();
+	    return $rpcenv->fork_worker(
+		'qmmove',
+		"${vmid}-${disk}>${target_vmid}-${target_disk}",
+		$authuser,
+		$disk_reassignfn
+	    );
+	} elsif ($storeid) {
+	    die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
+		if $disk =~ m/^unused\d+$/;
+	    return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
+	} else {
+	    die "Provide either a 'storage' to move the disk to a different " .
+		"storage or 'target-vmid' and 'target-disk' to move the disk " .
+		"to another VM\n";
+	}
     }});
 
 my $check_vm_disks_local = sub {
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index ef99b6d..a92d301 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -910,7 +910,7 @@ our $cmddef = {
 
     resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
 
-    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
+    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
     move_disk => { alias => 'move-disk' },
 
     unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
-- 
2.30.2