From: Aaron Lauterer
To: pve-devel@lists.proxmox.com, Fabian Grünbichler
Date: Thu, 3 Sep 2020 10:30:37 +0200
In-Reply-To: <1599118042.6ymt99i7i5.astroid@nora.none>
Subject: Re: [pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint

On 9/3/20 9:46 AM, Fabian Grünbichler wrote:
> On September 1, 2020 2:44 pm, Aaron Lauterer wrote:
[..]
>> +
>> +    my $storecfg = PVE::Storage::config();
>> +
>> +    die "You cannot reassign a disk to the same VM\n"
>> +        if $vmid eq $target_vmid;
>> +
>> +    my $vmlist = PVE::QemuServer::vzlist();
>> +    die "Both VMs need to be on the same node\n"
>> +        if !$vmlist->{$vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};
>
> we could skip this if the target storage is shared, and let the user run
> 'rescan' afterwards?

I would like to keep this as simple as possible for now. Just as a thought: maybe we could issue the rescan on the other node by calling that API endpoint. Though we would probably lose the possibility to log the new disk key and volid that way. But at least it would not be a two-step process for the user.
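If we do end up skipping the node check for shared storages, I imagine it could look roughly like this (untested sketch; the check would have to move further down, after the config is loaded and the drive parsed, so $drive is available):

my ($storeid) = PVE::Storage::parse_volume_id($drive->{file});
my $scfg = PVE::Storage::storage_config($storecfg, $storeid);

# only require both guests on the same node if the storage is not shared
if (!$scfg->{shared}) {
    my $vmlist = PVE::QemuServer::vzlist();
    die "Both VMs need to be on the same node\n"
        if !$vmlist->{$vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};
}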
>> +
>> +    return PVE::QemuConfig->lock_config($vmid, sub {
>> +        my $conf = PVE::QemuConfig->load_config($vmid);
>> +        PVE::QemuConfig->check_lock($conf);
>> +
>> +        die "Source VM config checksum missmatch (file change by other user?)\n"
>> +            if $digest && $digest ne $conf->{digest};
>
> PVE::Tools::assert_if_modified

Thanks

>
>> +
>> +        die "Disk '${disk}' does not exist\n"
>> +            if !exists($conf->{$disk});
>
> why not defined?

Because I thought exists would be enough ;). But yep, defined is better.

>
>> +
>> +        my $drive = PVE::QemuServer::parse_drive($disk, $conf->{$disk});
>> +        die "disk '$disk' has no associated volume\n" if !$drive->{file};
>
> missing check for whether it's actually volume-based, and not
> pass-through.. what about templates/base volumes/linked
> clones/snapshots?

Passed-through disks will fail later on in the code, but having a check right away with a nicer error message is probably a good idea.

I haven't thought about templates, base volumes and linked clones yet. Same goes for snapshots :/ I guess we won't want to rename a ZFS dataset while snapshots are present? Another thing is that renaming the dataset will most likely break replication.

>
>> +        die "Cannot reassign a cdrom drive\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
>
> "CD drive contents can't be reassigned.\n"

Sounds much better :)

>
>> +
>> +        die "Cannot reassign disk while the source VM is running\n"
>> +            if PVE::QemuServer::check_running($vmid) && $disk !~ m/unused[0-9]/;
>> +
>> +        return PVE::QemuConfig->lock_config($target_vmid, sub {
>> +            my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>> +            PVE::QemuConfig->check_lock($target_conf);
>> +
>> +            die "Target VM config checksum missmatch (file change by other user?)\n"
>> +                if $target_digest && $target_digest ne $target_conf->{digest};
>
> same as above.
>
>> +
>> +            PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $vmid: reassign --disk $disk --target_vmid $target_vmid");
>> +
>> +            my $realcmd = sub {
>> +                my $new_volid = PVE::Storage::reassign_volume($storecfg, $drive->{file}, $target_vmid);
>> +
>> +                delete $conf->{$disk};
>> +                PVE::QemuConfig->write_config($vmid, $conf);
>> +
>> +                my $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid);
>
> this can fail, now the volume is unreferenced altogether -> maybe wrap
> in an eval and print a more helpful error message including the new
> volid?

Ok, will do.

>
>> +                PVE::QemuConfig->write_config($target_vmid, $target_conf);
>> +
>> +                print "reassigned disk to VM ${target_vmid} as '${key}: ${new_volid}'\n";
>
> or split this up, and
>
> print "renamed volume '$old_volid' to '$new_volid'\n";
>
> after the call to reassign_volume, and print the config changes
> individually after the respective actions?
>
> print "remove disk '$disk' from VM '$vmid'\n";
>
> print "added volume '$new_volid' as '$key' to VM '$target_vmid'\n";

Sounds reasonable, being more verbose about what is going on is probably a good idea.

>
>> +            };
>> +
>> +            return $rpcenv->fork_worker('qmreassign', $vmid, $authuser, $realcmd);
>
> I am not so happy about
>
> lock_config(
>     load_config;
>     lock_config(
>         load_config;
>         fork_worker(
>             cfs_lock/write_config
>         )
>     )
> )
>
> as it tends to accumulate more checks and extra functionality and at
> some point might introduce an inconsistency..
> note that the current
> version seems fine (we always lock first then load, we only look at
> config files that belong to a specific node, ...), but maybe we could
> switch to
>
> # check for early return/nice error
> (conf1, conf2) = load_and_check_sub(id1, id2);
> fork_worker(
>     lock_config(
>         lock_config(
>             # recheck in locked worker context
>             (conf1, conf2) = load_and_check_sub(id1, id2);
>             ...
>         )
>     )
> )
>
> to make it more fool/future-proof, especially since the checks are rather cheap..

Sounds good.

>
>> +        });
>> +    });
>> +}});
>> +
>>  1;
>> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
>> index 91c33f8..d2f59cd 100644
>> --- a/PVE/QemuServer/Drive.pm
>> +++ b/PVE/QemuServer/Drive.pm
>> @@ -383,6 +383,10 @@ sub valid_drive_names {
>>               'efidisk0');
>>  }
>>
>> +sub valid_drive_names_with_unused {
>> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
>> +}
>> +
>>  sub is_valid_drivename {
>>      my $dev = shift;
>>
>> --
>> 2.20.1
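One more note on the restructuring, just to make sure I understood it correctly: roughly what I would end up with (untested sketch, $load_and_check is just a placeholder name for the shared helper):

my $load_and_check = sub {
    my $conf = PVE::QemuConfig->load_config($vmid);
    PVE::QemuConfig->check_lock($conf);
    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
    PVE::QemuConfig->check_lock($target_conf);

    # digest, disk and drive checks from above go here ...

    return ($conf, $target_conf);
};

# early check, for a nice error outside of the worker
$load_and_check->();

my $realcmd = sub {
    PVE::QemuConfig->lock_config($vmid, sub {
        PVE::QemuConfig->lock_config($target_vmid, sub {
            # re-check in the locked worker context
            my ($conf, $target_conf) = $load_and_check->();

            # rename the volume with PVE::Storage::reassign_volume and
            # update both configs ...
        });
    });
};

return $rpcenv->fork_worker('qmreassign', $vmid, $authuser, $realcmd);

If that is roughly what you meant, I'll restructure it that way for the next version.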