From: Dominik Csapak
To: Fiona Ebner, Proxmox VE development discussion
Date: Tue, 2 Apr 2024 11:39:09 +0200
Subject: Re: [pve-devel] [PATCH qemu-server 3/3] api: include not mapped resources for running vms in migrate preconditions
In-Reply-To: <6bc2caef-3b3f-468e-b75e-45a15bc5ed1a@proxmox.com>
References: <20240320125158.2094900-1-d.csapak@proxmox.com> <20240320125158.2094900-4-d.csapak@proxmox.com> <6bc2caef-3b3f-468e-b75e-45a15bc5ed1a@proxmox.com>

On 3/22/24 17:19, Fiona Ebner wrote:
> Am 20.03.24 um 13:51 schrieb Dominik Csapak:
>> so that we can show a proper warning in the migrate dialog and check it
>> in the bulk migrate precondition check
>>
>> the unavailable_storages and allowed_nodes should be the same as before
>>
>> Signed-off-by: Dominik Csapak
>> ---
>> not super happy with this partial approach, we probably should just
>> always return the 'allowed_nodes' and 'not_allowed_nodes' and change
>> the gui to handle the running vs not running state?
>
> So not_allowed_nodes can already be returned in both states after this
> patch. But allowed_nodes still only if not running. I mean, there could
> be API users that break if we'd always return allowed_nodes, but it
> doesn't sound unreasonable to me to do so. Might even be an opportunity
> to structure the code in a bit more straightforward manner.
yes, as said previously i'd like to rework this api call a bit to make it
more practical, but that probably has to wait for the next major release

as for returning 'allowed_nodes' always, we'd have to adapt the gui of
course, but if we don't deem it 'too breaking' i'd rework that a bit even now

>
>>
>>  PVE/API2/Qemu.pm | 27 +++++++++++++++------------
>>  1 file changed, 15 insertions(+), 12 deletions(-)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index 8581a529..b0f155f7 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -4439,7 +4439,7 @@ __PACKAGE__->register_method({
>>          not_allowed_nodes => {
>>              type => 'object',
>>              optional => 1,
>> -            description => "List not allowed nodes with additional informations, only passed if VM is offline"
>> +            description => "List not allowed nodes with additional informations",
>>          },
>>          local_disks => {
>>              type => 'array',
>> @@ -4496,25 +4496,28 @@ __PACKAGE__->register_method({
>>
>>      # if vm is not running, return target nodes where local storage/mapped devices are available
>>      # for offline migration
>> +    my $checked_nodes = {};
>> +    my $allowed_nodes = [];
>>      if (!$res->{running}) {
>> -        $res->{allowed_nodes} = [];
>> -        my $checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
>> +        $checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
>>          delete $checked_nodes->{$localnode};
>> +    }
>>
>> -        foreach my $node (keys %$checked_nodes) {
>> -            my $missing_mappings = $missing_mappings_by_node->{$node};
>> -            if (scalar($missing_mappings->@*)) {
>> -                $checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
>> -                next;
>> -            }
>> +    foreach my $node ((keys $checked_nodes->%*, keys $missing_mappings_by_node->%*)) {
>
> Style nit: please use 'for' instead of 'foreach'
>
> Like this you might iterate over certain nodes twice and then push them
> onto the allowed_nodes array twice.

oops, yes ^^

>
>> +        my $missing_mappings = $missing_mappings_by_node->{$node};
>> +        if (scalar($missing_mappings->@*)) {
>> +            $checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
>> +            next;
>> +        }
>>
>> +        if (!$res->{running}) {
>>              if (!defined($checked_nodes->{$node}->{unavailable_storages})) {
>> -                push @{$res->{allowed_nodes}}, $node;
>> +                push $allowed_nodes->@*, $node;
>>              }
>> -
>>          }
>> -        $res->{not_allowed_nodes} = $checked_nodes;
>>      }
>> +    $res->{not_allowed_nodes} = $checked_nodes if scalar(keys($checked_nodes->%*)) || !$res->{running};
>
> Why not return the empty hash if running? The whole post-if is just
> covering that single special case.
>
>> +    $res->{allowed_nodes} = $allowed_nodes if scalar($allowed_nodes->@*) || !$res->{running};
>
> Nit: Right now, $allowed_nodes can only be non-empty if
> !$res->{running}, so the first part of the check is redundant.
>

true

>>
>>      my $local_disks = &$check_vm_disks_local($storecfg, $vmconf, $vmid);
>>      $res->{local_disks} = [ values %$local_disks ];;
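
For reference, a minimal standalone sketch of the dedup idea discussed above
(made-up example data, not the actual patch): merging the keys of both hashes
through a %seen hash so a node that appears in $checked_nodes and in
$missing_mappings_by_node is only visited, and thus only pushed, once:

use strict;
use warnings;

# hypothetical stand-ins for the hashes from the patch context
my $checked_nodes = {
    node2 => { unavailable_storages => ['local-lvm'] },
    node3 => {},
};
my $missing_mappings_by_node = {
    node3 => ['some-pci-mapping'],
    node4 => [],
};

# deduplicate the merged key list, keeping the first occurrence of each node
my %seen;
my @nodes = grep { !$seen{$_}++ }
    (keys $checked_nodes->%*, keys $missing_mappings_by_node->%*);

my $allowed_nodes = [];
for my $node (@nodes) {
    my $missing_mappings = $missing_mappings_by_node->{$node} // [];
    if (scalar($missing_mappings->@*)) {
        $checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
        next;
    }
    # a node is allowed only if it also has no unavailable storages
    push $allowed_nodes->@*, $node
        if !defined($checked_nodes->{$node}->{unavailable_storages});
}

print "allowed: @$allowed_nodes\n";    # prints "allowed: node4"

the %seen/grep idiom is just one way to do it; building a temporary hash from
both key lists and iterating over its keys would work as well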