public inbox for pve-devel@lists.proxmox.com
From: Dominik Csapak <d.csapak@proxmox.com>
To: Fiona Ebner <f.ebner@proxmox.com>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server 3/3] api: include not mapped resources for running vms in migrate preconditions
Date: Tue, 2 Apr 2024 11:39:09 +0200	[thread overview]
Message-ID: <d8739de7-82ca-4c64-bb96-64487b6ddcd9@proxmox.com> (raw)
In-Reply-To: <6bc2caef-3b3f-468e-b75e-45a15bc5ed1a@proxmox.com>

On 3/22/24 17:19, Fiona Ebner wrote:
> Am 20.03.24 um 13:51 schrieb Dominik Csapak:
>> so that we can show a proper warning in the migrate dialog and check it
>> in the bulk migrate precondition check
>>
>> the unavailable_storages and allowed_nodes should be the same as before
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>> not super happy with this partial approach, we probably should just
>> always return the 'allowed_nodes' and 'not_allowed_nodes' and change
>> the gui to handle the running vs not running state?
> 
> So not_allowed_nodes can already be returned in both states after this
> patch. But allowed nodes still only if not running. I mean, there could
> be API users that break if we'd always return allowed_nodes, but it
> doesn't sound unreasonable for me to do so. Might even be an opportunity
> to structure the code in a bit more straightforward manner.

yes, as said previously, I'd like to rework this API call a bit to make it more
practical, but that probably has to wait for the next major release

as for always returning 'allowed_nodes', we'd have to adapt the GUI of course,
but if we don't deem it 'too breaking' I'd rework that even now

> 
>>
>>   PVE/API2/Qemu.pm | 27 +++++++++++++++------------
>>   1 file changed, 15 insertions(+), 12 deletions(-)
>>
>> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
>> index 8581a529..b0f155f7 100644
>> --- a/PVE/API2/Qemu.pm
>> +++ b/PVE/API2/Qemu.pm
>> @@ -4439,7 +4439,7 @@ __PACKAGE__->register_method({
>>   	    not_allowed_nodes => {
>>   		type => 'object',
>>   		optional => 1,
>> -		description => "List not allowed nodes with additional informations, only passed if VM is offline"
>> +		description => "List not allowed nodes with additional informations",
>>   	    },
>>   	    local_disks => {
>>   		type => 'array',
>> @@ -4496,25 +4496,28 @@ __PACKAGE__->register_method({
>>   
>>   	# if vm is not running, return target nodes where local storage/mapped devices are available
>>   	# for offline migration
>> +	my $checked_nodes = {};
>> +	my $allowed_nodes = [];
>>   	if (!$res->{running}) {
>> -	    $res->{allowed_nodes} = [];
>> -	    my $checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
>> +	    $checked_nodes = PVE::QemuServer::check_local_storage_availability($vmconf, $storecfg);
>>   	    delete $checked_nodes->{$localnode};
>> +	}
>>   
>> -	    foreach my $node (keys %$checked_nodes) {
>> -		my $missing_mappings = $missing_mappings_by_node->{$node};
>> -		if (scalar($missing_mappings->@*)) {
>> -		    $checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
>> -		    next;
>> -		}
>> +	foreach my $node ((keys $checked_nodes->%*, keys $missing_mappings_by_node->%*)) {
> 
> Style nit: please use 'for' instead of 'foreach'
> 
> Like this you might iterate over certain nodes twice and then push them
> onto the allowed_nodes array twice.

oops, yes ^^
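for v2 I'd deduplicate the merged key sets before iterating, e.g. through a
seen-hash (a minimal runnable sketch with made-up node data, not the actual
patch context):

```perl
use strict;
use warnings;

# hypothetical node data, shaped like the hashes in the patch
my $checked_nodes = { node1 => {}, node2 => {} };
my $missing_mappings_by_node = { node2 => [ 'pci-dev' ], node3 => [] };

# deduplicate the merged key sets through a hash, so a node that
# appears in both sources is only iterated (and pushed) once
my %seen;
my @nodes = grep { !$seen{$_}++ }
    (keys %$checked_nodes, keys %$missing_mappings_by_node);

print scalar(@nodes), "\n"; # 3 unique nodes, node2 only once
```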

> 
>> +	    my $missing_mappings = $missing_mappings_by_node->{$node};
>> +	    if (scalar($missing_mappings->@*)) {
>> +		$checked_nodes->{$node}->{'unavailable-resources'} = $missing_mappings;
>> +		next;
>> +	    }
>>   
>> +	    if (!$res->{running}) {
>>   		if (!defined($checked_nodes->{$node}->{unavailable_storages})) {
>> -		    push @{$res->{allowed_nodes}}, $node;
>> +		    push $allowed_nodes->@*, $node;
>>   		}
>> -
>>   	    }
>> -	    $res->{not_allowed_nodes} = $checked_nodes;
>>   	}
>> +	$res->{not_allowed_nodes} = $checked_nodes if scalar(keys($checked_nodes->%*)) || !$res->{running};
> 
> Why not return the empty hash if running? The whole post-if is just
> covering that single special case.
> 
>> +	$res->{allowed_nodes} = $allowed_nodes if scalar($allowed_nodes->@*) || !$res->{running};
> 
> Nit: Right now, $allowed_nodes can only be non-empty if
> !$res->{running}, so the first part of the check is redundant.
> 

true
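so since $allowed_nodes is only ever filled in the !running branch, the first
half of that check can simply go; the assignments would then look roughly like
this (runnable sketch with dummy data, not the actual patch context):

```perl
use strict;
use warnings;

my $res = { running => 0 };
my $allowed_nodes = [ 'nodeA' ];  # can only be non-empty when not running
my $checked_nodes = {};

# with the redundant emptiness check dropped, the post-ifs become:
$res->{allowed_nodes} = $allowed_nodes if !$res->{running};
$res->{not_allowed_nodes} = $checked_nodes
    if !$res->{running} || scalar(keys %$checked_nodes);

print exists $res->{allowed_nodes} ? "allowed_nodes returned\n" : "omitted\n";
```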

>>   
>>   	my $local_disks = &$check_vm_disks_local($storecfg, $vmconf, $vmid);
>>   	$res->{local_disks} = [ values %$local_disks ];;





Thread overview: 13+ messages
2024-03-20 12:51 [pve-devel] [PATCH qemu-server/manager] pci live migration followups Dominik Csapak
2024-03-20 12:51 ` [pve-devel] [PATCH qemu-server 1/3] stop cleanup: remove unnecessary tpmstate cleanup Dominik Csapak
2024-03-22 14:54   ` Fiona Ebner
2024-03-20 12:51 ` [pve-devel] [PATCH qemu-server 2/3] migrate: call vm_stop_cleanup after stopping in phase3_cleanup Dominik Csapak
2024-03-22 15:17   ` Fiona Ebner
2024-03-20 12:51 ` [pve-devel] [PATCH qemu-server 3/3] api: include not mapped resources for running vms in migrate preconditions Dominik Csapak
2024-03-22 14:53   ` Stefan Sterz
2024-03-22 16:19   ` Fiona Ebner
2024-04-02  9:39     ` Dominik Csapak [this message]
2024-04-10 10:52       ` Fiona Ebner
2024-03-20 12:51 ` [pve-devel] [PATCH manager 1/3] bulk migrate: improve precondition checks Dominik Csapak
2024-03-20 12:51 ` [pve-devel] [PATCH manager 2/3] bulk migrate: include checks for live-migratable local resources Dominik Csapak
2024-03-20 12:51 ` [pve-devel] [PATCH manager 3/3] ui: adapt migration window to precondition api change Dominik Csapak
