Date: Fri, 20 Nov 2020 17:17:28 +0100
From: Fabian Grünbichler
To: Proxmox VE development discussion
References: <20201002082354.20204-1-a.lauterer@proxmox.com>
 <20201002082354.20204-2-a.lauterer@proxmox.com>
In-Reply-To: <20201002082354.20204-2-a.lauterer@proxmox.com>
Message-Id: <1605886635.0iix0q0nsh.astroid@nora.none>
Subject: Re: [pve-devel] [PATCH v4 qemu-server 1/4] disk reassign: add API endpoint

On October 2, 2020 10:23 am, Aaron Lauterer wrote:
> The goal of this new API endpoint is to provide an easy way to move a
> disk between VMs, as this was only possible with manual intervention
> until now: either by renaming the VM disk, or by manually adding the
> disk's volid to the config of the other VM.
>
> The latter can easily cause unexpected behavior, such as a disk
> attached to VM B being deleted if it used to be a disk of VM A. This
> happens because PVE assumes that the VMID in the volname always matches
> the VM the disk is attached to and thus would remove any disk with
> VMID A when VM A is deleted.
>
> The term `reassign` was chosen as it is not yet used
> for VM disks.
>
> Signed-off-by: Aaron Lauterer
> ---
> v3 -> v4: nothing
>
> v2 -> v3:
> * reordered the locking as discussed with Fabian [0] to
>     run checks
>     fork worker
>     lock source config
>     lock target config
>     run checks
>     ...
>
> * added more checks
> * will not reassign to or from templates
> * will not reassign if VM has snapshots present
> * cleanup if disk used to be replicated
> * made task log slightly more verbose
> * integrated general recommendations regarding code
> * renamed `disk` to `drive_key`
> * prepended some vars with `source_` for easier distinction
>
> v1 -> v2: print config key and volid info at the end of the job so it
> shows up on the CLI and task log
>
> rfc -> v1:
> * add support to reassign unused disks
> * add support to provide a config digest for the target vm
> * add additional check if disk key is present in config
> * reorder checks a bit
>
> In order to support unused disks I had to extend
> PVE::QemuServer::Drive::valid_drive_names for the API parameter
> validation.
>
> Checks are ordered so that cheap tests are run at the first chance to
> fail early.
>
> The check that both VMs are present on the node is a bit redundant,
> because locking the config files will fail if the VM is not present.
> But with the additional check we can provide a useful error message to
> the user instead of a "Configuration file xyz does not exist" error.
>
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
>
>
>  PVE/API2/Qemu.pm        | 156 ++++++++++++++++++++++++++++++++++++++++
>  PVE/QemuServer/Drive.pm |   4 ++
>  2 files changed, 160 insertions(+)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 8da616a..613b257 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -4265,4 +4265,160 @@ __PACKAGE__->register_method({
>      return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>  }});
>
> +__PACKAGE__->register_method({
> +    name => 'reassign_vm_disk',
> +    path => '{vmid}/reassign_disk',
> +    method => 'POST',
> +    protected => 1,
> +    proxyto => 'node',
> +    description => "Reassign a disk to another VM",
> +    permissions => {
> +        description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.Allocate' permissions on the storage.",

and 'VM.Config.Disk' on target_vmid as well?

> +        check => [ 'and',
> +            ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> +            ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
> +        ],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +        properties => {
> +            node => get_standard_option('pve-node'),
> +            vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +            target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +            drive_key => {
> +                type => 'string',
> +                description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
> +                enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> +            },
> +            digest => {
> +                type => 'string',
> +                description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest.
> +                    This can be used to prevent concurrent modifications.',
> +                maxLength => 40,
> +                optional => 1,
> +            },
> +            target_digest => {
> +                type => 'string',
> +                description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +                maxLength => 40,
> +                optional => 1,
> +            },
> +        },
> +    },
> +    returns => {
> +        type => 'string',
> +        description => "the task ID.",
> +    },
> +    code => sub {
> +        my ($param) = @_;
> +
> +        my $rpcenv = PVE::RPCEnvironment::get();
> +        my $authuser = $rpcenv->get_user();
> +
> +        my $node = extract_param($param, 'node');
> +        my $source_vmid = extract_param($param, 'vmid');
> +        my $target_vmid = extract_param($param, 'target_vmid');
> +        my $source_digest = extract_param($param, 'digest');
> +        my $target_digest = extract_param($param, 'target_digest');
> +        my $drive_key = extract_param($param, 'drive_key');
> +
> +        my $storecfg = PVE::Storage::config();
> +        my $vmlist;
> +        my $drive;
> +        my $source_volid;
> +
> +        die "You cannot reassign a disk to the same VM\n"

"Reassigning a disk with the same source and target VM is not possible.
Did you mean to move the disk?"

> +            if $source_vmid eq $target_vmid;
> +
> +        my $load_and_check_configs = sub {
> +            $vmlist = PVE::QemuServer::vzlist();
> +            die "Both VMs need to be on the same node\n"
> +                if !$vmlist->{$source_vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};

if we use PVE::Cluster::get_vmlist() here, we could include the nodes as
well, which might be more informative?
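for illustration, something like this (untested sketch, not part of the
patch; it assumes get_vmlist() returns its usual
{ ids => { <vmid> => { node => ... } } } structure):

```perl
# hypothetical sketch: use the cluster-wide vmlist so the error message
# can name the node a VM actually lives on
my $vmlist = PVE::Cluster::get_vmlist()->{ids};
for my $vmid ($source_vmid, $target_vmid) {
    die "VM ${vmid} does not exist in this cluster\n"
        if !defined($vmlist->{$vmid});
    die "VM ${vmid} is on node '$vmlist->{$vmid}->{node}', not on '${node}'\n"
        if $vmlist->{$vmid}->{node} ne $node;
}
```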
> +
> +            my $source_conf = PVE::QemuConfig->load_config($source_vmid);
> +            PVE::QemuConfig->check_lock($source_conf);
> +            my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> +            PVE::QemuConfig->check_lock($target_conf);
> +
> +            die "Can't reassign disks with templates\n"

disks from/to templates

> +                if ($source_conf->{template} || $target_conf->{template});
> +
> +            if ($source_digest) {
> +                eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
> +                if (my $err = $@) {
> +                    die "Verification of source VM digest failed: ${err}";

a simple "VM $vmid: " prefix would be enough, the rest is contained in
$err anyway..

> +                }
> +            }
> +
> +            if ($target_digest) {
> +                eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> +                if (my $err = $@) {
> +                    die "Verification of target VM digest failed: ${err}";

same

> +                }
> +            }
> +
> +            die "Disk '${drive_key}' does not exist\n"
> +                if !defined($source_conf->{$drive_key});
> +
> +            $drive = PVE::QemuServer::parse_drive($drive_key, $source_conf->{$drive_key});
> +            $source_volid = $drive->{file};
> +            die "disk '${drive_key}' has no associated volume\n" if !$source_volid;
> +            die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);

check for non-volume disks missing? it will/should fail in the storage
layer, but better to catch it here already..

> +
> +            die "Can't reassign disk used by a snapshot\n"
> +                if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_key, $source_volid);
> +
> +            my $hasfeature = PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
> +            die "Storage does not support the reassignment of this disk\n" if !$hasfeature;

the variable is only used once for this check, you can just
die .. if !PVE::Storage::...
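i.e. inline it, something like (untested):

```perl
# same check without the single-use variable
die "Storage does not support the reassignment of this disk\n"
    if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
```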
> +
> +            die "Cannot reassign disk while the source VM is running\n"
> +                if PVE::QemuServer::check_running($source_vmid) && $drive_key !~ m/unused[0-9]/;
> +
> +            return ($source_conf, $target_conf);
> +        };
> +
> +        my $reassign_func = sub {
> +            return PVE::QemuConfig->lock_config($source_vmid, sub {
> +                return PVE::QemuConfig->lock_config($target_vmid, sub {
> +                    my ($source_conf, $target_conf) = &$load_and_check_configs();
> +
> +                    PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_key} --target_vmid $target_vmid");
> +
> +                    my $new_volid = PVE::Storage::reassign_volume($storecfg, $source_volid, $target_vmid);
> +
> +                    delete $source_conf->{$drive_key};
> +                    PVE::QemuConfig->write_config($source_vmid, $source_conf);
> +                    print "removing disk '${drive_key}' from VM '${source_vmid}'\n";

this message is misleading, as the tense and the state of the source VM
don't match ;)

> +
> +                    # remove possible replication snapshots
> +                    my $had_snapshots = 0;
> +                    if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $new_volid)) {
> +                        my $snapshots = PVE::Storage::volume_snapshot_list($storecfg, $new_volid);
> +                        for my $snap (@$snapshots) {
> +                            next if (substr($snap, 0, 12) ne '__replicate_');
> +
> +                            $had_snapshots = 1;
> +                            PVE::Storage::volume_snapshot_delete($storecfg, $new_volid, $snap);
> +                        }
> +                        print "Disk '${drive_key}:${source_volid}' was replicated. On the next replication run it will be cleaned up on the replication target.\n"
> +                            if $had_snapshots;
> +                    }

this can fail, so either wrap it in eval, or move it below the following
block, or above the removal from the source config. above is potentially
problematic, as we would need to get the replication lock then..

also, isn't this basically what PVE::Replication::prepare does?
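the eval variant could look roughly like this (untested sketch, warn text
is mine):

```perl
# hedged sketch: make the snapshot cleanup non-fatal, so a storage error
# here does not abort the task after the source config was already updated
eval {
    if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $new_volid)) {
        my $snapshots = PVE::Storage::volume_snapshot_list($storecfg, $new_volid);
        for my $snap (@$snapshots) {
            next if substr($snap, 0, 12) ne '__replicate_';
            PVE::Storage::volume_snapshot_delete($storecfg, $new_volid, $snap);
        }
    }
};
warn "removing replication snapshots failed: $@\n" if $@;
```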
> +
> +                    my $key;
> +                    eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
> +                    if (my $err = $@) {
> +                        print "adding moved disk '${new_volid}' to VM '${target_vmid}' config failed.\n";

I thought we were reassigning a disk here ;) might want to mention that
adding it as unused failed, which is basically only possible if there is
no free unused slot. freeing up a slot and rescanning the VMID will fix
the issue.

> +                        return 0;
> +                    }
> +
> +                    PVE::QemuConfig->write_config($target_vmid, $target_conf);
> +                    print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";

again, the order is wrong here - if write_config fails, the print never
happens. if the write was successful, the tense of the print is wrong.

> +                });
> +            });
> +        };
> +
> +        &$load_and_check_configs();
> +
> +        return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
> +    }});
> +
>  1;
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index 91c33f8..d2f59cd 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -383,6 +383,10 @@ sub valid_drive_names {
>          'efidisk0');
>  }
>
> +sub valid_drive_names_with_unused {
> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS - 1)));
> +}
> +
>  sub is_valid_drivename {
>      my $dev = shift;
>
> --
> 2.20.1
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel