From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <f.ebner@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id D4FB96FD09
 for <pve-devel@lists.proxmox.com>; Wed, 1 Sep 2021 11:49:36 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id B86242D6B5
 for <pve-devel@lists.proxmox.com>; Wed, 1 Sep 2021 11:49:06 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id A47CD2D676
 for <pve-devel@lists.proxmox.com>; Wed, 1 Sep 2021 11:49:01 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 5BFDC4449C
 for <pve-devel@lists.proxmox.com>; Wed, 1 Sep 2021 11:49:01 +0200 (CEST)
To: Aaron Lauterer <a.lauterer@proxmox.com>, pve-devel@lists.proxmox.com
References: <20210806134647.632054-1-a.lauterer@proxmox.com>
 <20210806134647.632054-6-a.lauterer@proxmox.com>
 <0b50d6a0-8fe8-8c5e-0c1c-6aa09a25cd67@proxmox.com>
 <f6811e02-6c19-074b-fffa-1132dea80ca6@proxmox.com>
From: Fabian Ebner <f.ebner@proxmox.com>
Message-ID: <61ff7797-484d-4f66-3268-ea26e54629eb@proxmox.com>
Date: Wed, 1 Sep 2021 11:48:47 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.12.0
MIME-Version: 1.0
In-Reply-To: <f6811e02-6c19-074b-fffa-1132dea80ca6@proxmox.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-SPAM-LEVEL: Spam detection
 results: 0
 AWL 0.862 Adjusted score from AWL reputation of From: address
 BAYES_00 -1.9 Bayes spam probability is 0 to 1%
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 NICE_REPLY_A -0.932 Looks like a legit reply (A)
 SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS -0.001 SPF: sender matches SPF record
 URIBL_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to URIBL was blocked. See
 http://wiki.apache.org/spamassassin/DnsBlocklists#dnsbl-block for more
 information. [proxmox.com, qemu.pm, qm.pm]
Subject: Re: [pve-devel] [PATCH v2 qemu-server 5/9] api: move-disk: add move to other VM
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>,
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>,
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Wed, 01 Sep 2021 09:49:36 -0000

On 13.08.21 at 17:35, Aaron Lauterer wrote:
>
>
> On 8/13/21 9:41 AM, Fabian Ebner wrote:
>> On 06.08.21 at 15:46, Aaron Lauterer wrote:
>>> The goal of this is to expand the move-disk API endpoint to make it
>>> possible to move a disk to another VM. Previously this was only possible
>>> with manual intervention, either by renaming the VM disk or by manually
>>> adding the disk's volid to the config of the other VM.
>>>
>>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>>> ---
>>> v1 -> v2:
>>> * make --target-disk optional and use source disk key as fallback
>>> * use parse_volname instead of custom regex
>>> * adapt to find_free_volname
>>> * smaller (style) fixes
>>>
>>> rfc -> v1:
>>> * add check if target guest is replicated and fail if the moved volume
>>>   does not support it
>>> * check if source volume has a format suffix and pass it to
>>>   'find_free_disk'
>>> * fixed some style nits
>>>
>>> old dedicated api endpoint -> rfc:
>>> There are some big changes here. The old [0] dedicated API endpoint is
>>> gone and most of its code is now part of move_disk. Error messages have
>>> been changed accordingly and sometimes enhanced by adding disk keys and
>>> VMIDs where appropriate.
>>>
>>> Since a move to other guests should be possible for unused disks, we
>>> need to check before doing a move to storage to make sure to not
>>> handle unused disks.
>>>
>>>  PVE/API2/Qemu.pm | 238 ++++++++++++++++++++++++++++++++++++++++++++++-
>>>  PVE/CLI/qm.pm    |   2 +-
>>>  2 files changed, 234 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>>> index ef0d877..30e222a 100644
>>> --- a/PVE/API2/Qemu.pm
>>> +++ b/PVE/API2/Qemu.pm
>>> @@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
>>>  use PVE::VZDump::Plugin;
>>>  use PVE::DataCenterConfig;
>>>  use PVE::SSHInfo;
>>> +use PVE::Replication;
>>>  BEGIN {
>>>      if (!$ENV{PVE_GENERATING_DOCS}) {
>>> @@ -3274,9 +3275,11 @@ __PACKAGE__->register_method({
>>>      method => 'POST',
>>>      protected => 1,
>>>      proxyto => 'node',
>>> -    description => "Move volume to different storage.",
>>> +    description => "Move volume to different storage or to a different VM.",
>>>      permissions => {
>>> -    description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.AllocateSpace' permissions on the storage.",
>>> +    description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, " .
>>> +        "and 'Datastore.AllocateSpace' permissions on the storage. To move " .
>>> +        "a disk to another VM, you need the permissions on the target VM as well.",
>>>      check => [ 'and',
>>>          ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
>>>          ['perm', '/storage/{storage}', [ 'Datastore.AllocateSpace' ]],
>>> @@ -3287,14 +3290,19 @@ __PACKAGE__->register_method({
>>>      properties => {
>>>          node => get_standard_option('pve-node'),
>>>          vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
>>> +        'target-vmid' => get_standard_option('pve-vmid', {
>>> +            completion => \&PVE::QemuServer::complete_vmid,
>>> +            optional => 1,
>>> +        }),
>>>          disk => {
>>>              type => 'string',
>>>              description => "The disk you want to move.",
>>> -            enum => [PVE::QemuServer::Drive::valid_drive_names()],
>>> +            enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>>>          },
>>>          storage => get_standard_option('pve-storage-id', {
>>>              description => "Target storage.",
>>>              completion => \&PVE::QemuServer::complete_storage,
>>> +            optional => 1,
>>>          }),
>>>          'format' => {
>>>              type => 'string',
>>> @@ -3321,6 +3329,20 @@ __PACKAGE__->register_method({
>>>              minimum => '0',
>>>              default => 'move limit from datacenter or storage config',
>>>          },
>>> +        'target-disk' => {
>>> +            type => 'string',
>>> +            description => "The config key the disk will be moved to on the target VM " .
>>> +                "(for example, ide0 or scsi1).",
>>> +            enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
>>> +            optional => 1,
>>
>> The default could be mentioned here.
>
> Good point.
>
>>
>>> +        },
>>> +        'target-digest' => {
>>> +            type => 'string',
>>> +            description => "Prevent changes if current configuration file of the target VM has " .
>>> +                "a different SHA1 digest. This can be used to prevent concurrent modifications.",
>>> +            maxLength => 40,
>>> +            optional => 1,
>>> +        },
>>>      },
>>>  },
>>>  returns => {
>>> @@ -3335,14 +3357,22 @@ __PACKAGE__->register_method({
>>>      my $node = extract_param($param, 'node');
>>>      my $vmid = extract_param($param, 'vmid');
>>> +    my $target_vmid = extract_param($param, 'target-vmid');
>>>      my $digest = extract_param($param, 'digest');
>>> +    my $target_digest = extract_param($param, 'target-digest');
>>>      my $disk = extract_param($param, 'disk');
>>> +    my $target_disk = extract_param($param, 'target-disk') // $disk;
>>>      my $storeid = extract_param($param, 'storage');
>>>      my $format = extract_param($param, 'format');
>>> +    die "either set storage or target-vmid, but not both\n"
>>> +        if $storeid && $target_vmid;
>>> +
>>> +
>>>      my $storecfg = PVE::Storage::config();
>>> +    my $source_volid;
>>> -    my $updatefn = sub {
>>> +    my $move_updatefn = sub {
>>>          my $conf = PVE::QemuConfig->load_config($vmid);
>>>          PVE::QemuConfig->check_lock($conf);
>>> @@ -3452,7 +3482,205 @@ __PACKAGE__->register_method({
>>>          return $rpcenv->fork_worker('qmmove', $vmid, $authuser, $realcmd);
>>>      };
>>> -    return PVE::QemuConfig->lock_config($vmid, $updatefn);
>>> +    my $load_and_check_reassign_configs = sub {
>>> +        my $vmlist = PVE::Cluster::get_vmlist()->{ids};
>>> +
>>> +        if ($vmlist->{$vmid}->{node} ne $vmlist->{$target_vmid}->{node}) {
>>> +            die "Both VMs need to be on the same node $vmlist->{$vmid}->{node} " .
>>> +                "but target VM is on $vmlist->{$target_vmid}->{node}.\n";
>>> +        }
>>> +
>>> +        my $source_conf = PVE::QemuConfig->load_config($vmid);
>>> +        PVE::QemuConfig->check_lock($source_conf);
>>> +        my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>>> +        PVE::QemuConfig->check_lock($target_conf);
>>> +
>>> +        die "Can't move disks from or to template VMs\n"
>>> +            if ($source_conf->{template} || $target_conf->{template});
>>> +
>>> +        if ($digest) {
>>> +            eval { PVE::Tools::assert_if_modified($digest, $source_conf->{digest}) };
>>> +            if (my $err = $@) {
>>> +                die "VM ${vmid}: ${err}";
>>> +            }
>>> +        }
>>> +
>>> +        if ($target_digest) {
>>> +            eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
>>> +            if (my $err = $@) {
>>> +                die "VM ${target_vmid}: ${err}";
>>> +            }
>>> +        }
>>> +
>>> +        die "Disk '${disk}' for VM '$vmid' does not exist\n"
>>> +            if !defined($source_conf->{$disk});
>>> +
>>> +        die "Target disk key '${target_disk}' is already in use for VM '$target_vmid'\n"
>>> +            if exists($target_conf->{$target_disk});
>>> +
>>> +        my $drive = PVE::QemuServer::parse_drive(
>>> +            $disk,
>>> +            $source_conf->{$disk},
>>> +        );
>>> +        $source_volid = $drive->{file};
>>> +
>>> +        die "disk '${disk}' has no associated volume\n"
>>> +            if !$source_volid;
>>> +        die "CD drive contents can't be moved to another VM\n"
>>> +            if PVE::QemuServer::drive_is_cdrom($drive, 1);
>>> +        die "Can't move physical disk to another VM\n" if $source_volid =~ m|^/dev/|;
>>> +        if (PVE::QemuServer::Drive::is_volume_in_use(
>>> +            $storecfg,
>>> +            $source_conf,
>>> +            $disk,
>>> +            $source_volid,
>>> +        )) {
>>> +            die "Can't move disk used by a snapshot to another VM\n"
>>> +        }
>>
>> This looks weird to me style-wise. Also missing semicolon after die.
>
> Yeah, no matter how, it either looks weird or will be a few characters
> over the 100 limit...
> For the sake of readability I think I'll opt for the slightly too long
> post-if variant.
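
For reference, a toy sketch of the two wrapping variants under discussion
(with a placeholder condition instead of the real is_volume_in_use() call,
so this is not the actual patch code):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $volume_in_use = 0; # placeholder for the real is_volume_in_use() check

# Variant 1: post-if, accepting that the condition runs slightly long.
die "Can't move disk used by a snapshot to another VM\n"
    if $volume_in_use;

# Variant 2: plain if with the condition wrapped onto its own lines,
# matching the long-condition examples from the style guide.
if (
    $volume_in_use
) {
    die "Can't move disk used by a snapshot to another VM\n";
}

print "no snapshot usage detected\n";
```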
>

The style guide doesn't have an explicit example of a long if with a
function call, but the two examples for long if conditions (at the end of
the "Wrapping post-if" section) [0] look different. To match one of those,
you could either:
1. move the '{' to its own line.
2. use
       if (
           function call
       ) {

But maybe the current style is also acceptable?

[0]: https://pve.proxmox.com/wiki/Perl_Style_Guide#Wrapping_Post-If

>>
>>> +
>>> +        if (!PVE::Storage::volume_has_feature(
>>> +            $storecfg,
>>> +            'rename',
>>> +            $source_volid,
>>> +        )) {
>>> +            die "Storage does not support moving of this disk to another VM\n"
>>> +        }
>>
>> Same here, but this time the if-condition could fit on one line within
>> the 100 character limit ;) Again, missing semicolon.
>
> You are right, switching this to post-if
>

Could also be a normal if with the condition on one line, but both are fine.

>>
>>> +
>>> +        die "Cannot move disk to another VM while the source VM is running\n"
>>> +            if PVE::QemuServer::check_running($vmid) && $disk !~ m/^unused\d+$/;
>>> +
>>> +        if ($target_disk !~ m/^unused\d+$/ && $target_disk =~ m/^([^\d]+)\d+$/) {
>>> +            my $interface = $1;
>>
>> Nit: Isn't the interface already present in the result from parse_drive?
>
> The previous parse_drive is done on the source disk. Here we are
> checking against the target disk, which can use a different config key /
> interface. Also, we cannot use parse_drive for the target disk using the
> source_conf for the data, as it will just return undef in case
> parse_property_string fails, which is exactly what we are trying to set
> up here so that we can check if the target disk key supports all config
> options, and if not, present them to the user, so they have an idea why
> it does not work.
>

Right, there are potentially two different interfaces. Please just ignore
my wrong suggestion.

>>
>>> +            my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
>>> +            eval {
>>> +                PVE::JSONSchema::parse_property_string(
>>> +                    $desc->{format},
>>> +                    $source_conf->{$disk},
>>> +                )
>>> +            };
>>> +            if (my $err = $@) {
>>> +                die "Cannot move disk to another VM: ${err}";
>>> +            }
>>> +        }
>>> +
>>> +        my $repl_conf = PVE::ReplicationConfig->new();
>>> +        my $is_replicated = $repl_conf->check_for_existing_jobs($target_vmid, 1);
>>> +        my ($storeid, undef) = PVE::Storage::parse_volume_id($source_volid);
>>> +        my $format = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
>>> +        if ($is_replicated && !PVE::Storage::storage_can_replicate($storecfg, $storeid, $format)) {
>>> +            die "Cannot move disk to a replicated VM. Storage does not support replication!\n";
>>> +        }
>>> +
>>> +        return ($source_conf, $target_conf);
>>> +    };
>>> +
>>> +    my $logfunc = sub {
>>> +        my ($msg) = @_;
>>> +        print STDERR "$msg\n";
>>> +    };
>>> +
>>> +    my $disk_reassignfn = sub {
>>> +        return PVE::QemuConfig->lock_config($vmid, sub {
>>> +            return PVE::QemuConfig->lock_config($target_vmid, sub {
>>> +                my ($source_conf, $target_conf) = &$load_and_check_reassign_configs();
>>> +
>>> +                my $drive_param = PVE::QemuServer::parse_drive(
>>> +                    $target_disk,
>>> +                    $source_conf->{$disk},
>>> +                );
>>> +
>>> +                print "moving disk '$disk' from VM '$vmid' to '$target_vmid'\n";
>>> +                my ($storeid, $source_volname) = PVE::Storage::parse_volume_id($source_volid);
>>> +
>>> +                my (
>>> +                    undef,
>>> +                    undef,
>>> +                    undef,
>>> +                    undef,
>>> +                    undef,
>>> +                    undef,
>>> +                    $fmt
>>> +                ) = PVE::Storage::parse_volname($storecfg, $source_volid);
>>
>> Nit: using
>>     my $fmt = (PVE::Storage::parse_volname($storecfg, $source_volid))[6];
>> like above is shorter.
>
> thx!
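
To spell out the list-slice nit as a runnable toy (parse_volname_stub is a
stand-in here, since the real PVE::Storage::parse_volname needs a storage
config; the return-list shape is the same):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for PVE::Storage::parse_volname(), which returns a list of the
# form ($vtype, $name, $vmid, $basename, $basevmid, $isBase, $format).
sub parse_volname_stub {
    return ('images', 'vm-100-disk-0', '100', undef, undef, undef, 'raw');
}

# Verbose variant: bind six placeholders just to reach the last element.
my (undef, undef, undef, undef, undef, undef, $fmt_long) = parse_volname_stub();

# List-slice variant: evaluate the call in list context and index element 6.
my $fmt_short = (parse_volname_stub())[6];

print "$fmt_long $fmt_short\n"; # prints "raw raw"
```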
>
>>
>>> +                my $target_volname = PVE::Storage::find_free_volname(
>>> +                    $storecfg,
>>> +                    $storeid,
>>> +                    $target_vmid,
>>> +                    $fmt
>>> +                );
>>> +
>>> +                my $new_volid = PVE::Storage::rename_volume(
>>> +                    $storecfg,
>>> +                    $source_volid,
>>> +                    $target_volname,
>>> +                );
>>> +
>>> +                $drive_param->{file} = $new_volid;
>>> +
>>> +                delete $source_conf->{$disk};
>>> +                print "removing disk '${disk}' from VM '${vmid}' config\n";
>>> +                PVE::QemuConfig->write_config($vmid, $source_conf);
>>> +
>>> +                my $drive_string = PVE::QemuServer::print_drive($drive_param);
>>> +                &$update_vm_api(
>>> +                    {
>>> +                        node => $node,
>>> +                        vmid => $target_vmid,
>>> +                        digest => $target_digest,
>>> +                        $target_disk => $drive_string,
>>> +                    },
>>> +                    1,
>>> +                );
>>> +
>>> +                # remove possible replication snapshots
>>> +                if (PVE::Storage::volume_has_feature(
>>> +                    $storecfg,
>>> +                    'replicate',
>>> +                    $source_volid,
>>> +                )) {
>>> +                    eval {
>>> +                        PVE::Replication::prepare(
>>> +                            $storecfg,
>>> +                            [$new_volid],
>>> +                            undef,
>>> +                            1,
>>> +                            undef,
>>> +                            $logfunc,
>>> +                        )
>>> +                    };
>>> +                    if (my $err = $@) {
>>> +                        print "Failed to remove replication snapshots on moved disk " .
>>> +                            "'$target_disk'. Manual cleanup could be necessary.\n";
>>> +                    }
>>> +                }
>>> +            });
>>> +        });
>>> +    };
>>> +
>>> +    if ($target_vmid) {
>>> +        $rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
>>> +            if $authuser ne 'root@pam';
>>> +
>>> +        die "Moving a disk to the same VM is not possible. Did you mean to " .
>>> +            "move the disk to a different storage?\n"
>>> +            if $vmid eq $target_vmid;
>>> +
>>> +        &$load_and_check_reassign_configs();
>>> +        return $rpcenv->fork_worker(
>>> +            'qmmove',
>>> +            "${vmid}-${disk}>${target_vmid}-${target_disk}",
>>> +            $authuser,
>>> +            $disk_reassignfn
>>> +        );
>>> +    } elsif ($storeid) {
>>> +        die "cannot move disk '$disk', only configured disks can be moved to another storage\n"
>>> +            if $disk =~ m/^unused\d+$/;
>>> +        return PVE::QemuConfig->lock_config($vmid, $move_updatefn);
>>> +    } else {
>>> +        die "Provide either a 'storage' to move the disk to a different " .
>>> +            "storage or 'target-vmid' and 'target-disk' to move the disk " .
>>> +            "to another VM\n";
>>> +    }
>>>  }});
>>>
>>>  my $check_vm_disks_local = sub {
>>> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
>>> index ef99b6d..a92d301 100755
>>> --- a/PVE/CLI/qm.pm
>>> +++ b/PVE/CLI/qm.pm
>>> @@ -910,7 +910,7 @@ our $cmddef = {
>>>      resize => [ "PVE::API2::Qemu", 'resize_vm', ['vmid', 'disk', 'size'], { node => $nodename } ],
>>>
>>> -    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage'], { node => $nodename }, $upid_exit ],
>>> +    'move-disk' => [ "PVE::API2::Qemu", 'move_vm_disk', ['vmid', 'disk', 'storage', 'target-vmid', 'target-disk'], { node => $nodename }, $upid_exit ],
>>>      move_disk => { alias => 'move-disk' },
>>>
>>>      unlink => [ "PVE::API2::Qemu", 'unlink', ['vmid'], { node => $nodename } ],
>>>
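
As a side note on target-digest: it follows the same optimistic-locking idea
as the existing digest parameter. A minimal standalone sketch of that pattern
(plain Digest::SHA; this is an illustration, not the PVE::Tools
assert_if_modified implementation):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA qw(sha1_hex);

# Sketch of the digest check: the client sends the SHA1 of the config it
# last read; the server refuses the update if the config changed meanwhile.
sub assert_unmodified {
    my ($expected_digest, $config_text) = @_;
    die "detected modified configuration - file changed by other user?\n"
        if $expected_digest && $expected_digest ne sha1_hex($config_text);
}

my $conf = "scsi0: local-lvm:vm-100-disk-0,size=32G\n";
my $digest = sha1_hex($conf);

assert_unmodified($digest, $conf); # unchanged: passes

$conf .= "scsi1: local-lvm:vm-100-disk-1,size=8G\n"; # simulated concurrent edit
eval { assert_unmodified($digest, $conf) };
print "update rejected: $@" if $@;
```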