Message-ID: <b20fe8b6-a63a-979d-512d-a89f745a5078@proxmox.com>
Date: Fri, 3 Feb 2023 14:45:34 +0100
From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Alexandre Derumier <aderumier@odiso.com>
References: <20230202110344.840195-1-aderumier@odiso.com>
 <20230202110344.840195-9-aderumier@odiso.com>
In-Reply-To: <20230202110344.840195-9-aderumier@odiso.com>
Subject: Re: [pve-devel] [PATCH v3 qemu-server 08/13] memory: don't use
 foreach_reversedimm for unplug

Thanks for looking into this! The new way is more straightforward for
sure :)

On 02.02.23 at 12:03, Alexandre Derumier wrote:
> @@ -316,30 +284,33 @@ sub qemu_memory_hotplug {
>  
>      } else {
>  
> -	foreach_reverse_dimm($conf, $vmid, $value, $sockets, sub {
> -	    my ($conf, $vmid, $name, $dimm_size, $numanode, $current_size, $memory) = @_;
> +	my $dimms = qemu_dimm_list($vmid);

Looking at qemu_dimm_list(), I feel like we should start filtering it
to only return dimms (based on the id, or is there something better?),
since we now also iterate over the returned devices and not only check
for existence. Currently, qemu_dimm_list() returns all memory objects,
which would include the virtiomem devices. The function is not called
if virtiomem is used, but filtering would make it future-proof (and
avoid breakage if people have manually attached non-dimm memory
devices, although I'd say that's not supported from our perspective
;)).
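
Untested sketch of what I mean, assuming qemu_dimm_list() is backed by
QMP's query-memory-devices and that the IDs of the dimms we create
always match dimm<N> (maybe matching on the reported device type would
be nicer):

    my $dimms = qemu_dimm_list($vmid);
    # Drop everything that is not one of our dimm<N> devices, e.g. the
    # virtiomem<N> objects that are also reported.
    delete $dimms->{$_} for grep { !m/^dimm\d+$/ } keys %$dimms;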

> +	my $current_size = $memory;
> +	for my $name (sort { $dimms->{$b}{slot} <=> $dimms->{$a}{slot} } keys %$dimms) {

Style nit: please use $dimms->{$b}->{slot}
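
I.e., the full line would then read:

    for my $name (sort { $dimms->{$b}->{slot} <=> $dimms->{$a}->{slot} } keys %$dimms) {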

>  
> -		return if $current_size >= get_current_memory($conf->{memory});
> +	    my $dimm_size = $dimms->{$name}->{size} / 1024 / 1024;
>  
> -		print "try to unplug memory dimm $name\n";
> +	    last if $current_size <= $value || $current_size <= $static_memory;

Nit: the second half of the condition is not really needed. We already
assert at the start of qemu_memory_hotplug() that $value >=
$static_memory, and unless the values in the config somehow don't
match reality, we should reach $static_memory only after unplugging
everything, or not?
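
I.e., if the above holds, something like this should be enough
(untested):

    last if $current_size <= $value;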

>  
> -		my $retry = 0;
> -		while (1) {
> -		    eval { PVE::QemuServer::qemu_devicedel($vmid, $name) };
> -		    sleep 3;
> -		    my $dimm_list = qemu_dimm_list($vmid);
> -		    last if !$dimm_list->{$name};
> -		    raise_param_exc({ $name => "error unplug memory module" }) if $retry > 5;
> -		    $retry++;
> -		}
> +	    print "try to unplug memory dimm $name\n";
>  
> -		#update conf after each successful module unplug
> -		$newmem->{current} = $current_size;
> -		$conf->{memory} = print_memory($newmem);
> +	    my $retry = 0;
> +	    while (1) {
> +		eval { PVE::QemuServer::qemu_devicedel($vmid, $name) };
> +		sleep 3;
> +		my $dimm_list = qemu_dimm_list($vmid);
> +		last if !$dimm_list->{$name};
> +		raise_param_exc({ $name => "error unplug memory module" }) if $retry > 5;
> +		$retry++;
> +	    }
> +	    $current_size -= $dimm_size;
> +	    #update conf after each successful module unplug
> +	    $newmem->{current} = $current_size;
> +	    $conf->{memory} = print_memory($newmem);
>  
> -		eval { PVE::QemuServer::qemu_objectdel($vmid, "mem-$name"); };
> -		PVE::QemuConfig->write_config($vmid, $conf);
> -	});
> +	    eval { PVE::QemuServer::qemu_objectdel($vmid, "mem-$name"); };
> +	    PVE::QemuConfig->write_config($vmid, $conf);
> +	}
>      }
>      return $conf->{memory};
>  }