From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <f.ebner@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 7903993CCB
 for <pve-devel@lists.proxmox.com>; Wed, 22 Feb 2023 16:20:06 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 5AD131A7EA
 for <pve-devel@lists.proxmox.com>; Wed, 22 Feb 2023 16:19:36 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-devel@lists.proxmox.com>; Wed, 22 Feb 2023 16:19:35 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 826744814A;
 Wed, 22 Feb 2023 16:19:35 +0100 (CET)
Message-ID: <ba496d78-59a1-f97e-58db-072051d05b5e@proxmox.com>
Date: Wed, 22 Feb 2023 16:19:34 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101
 Thunderbird/102.8.0
From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Alexandre Derumier <aderumier@odiso.com>
References: <20230213120021.3783742-1-aderumier@odiso.com>
 <20230213120021.3783742-12-aderumier@odiso.com>
Content-Language: en-US
In-Reply-To: <20230213120021.3783742-12-aderumier@odiso.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Subject: Re: [pve-devel] [PATCH v4 qemu-server 11/16] memory: don't use
 foreach_reversedimm for unplug
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Wed, 22 Feb 2023 15:20:06 -0000

Am 13.02.23 um 13:00 schrieb Alexandre Derumier:
> @@ -322,30 +290,33 @@ sub qemu_memory_hotplug {
>  
>      } else {
>  
> -	foreach_reverse_dimm($conf, $vmid, $value, $sockets, sub {
> -	    my ($conf, $vmid, $name, $dimm_size, $numanode, $current_size, $memory) = @_;
> +	my $dimms = qemu_dimm_list($vmid);

Has been renamed in the last patch ;) Ideally, this and the last patch
would be ordered first in the series. Then I would've applied them right
away (fixing up the renamed call), because they fix an existing bug, see
below.

> +	my $current_size = $memory;
> +	for my $name (sort { $dimms->{$b}{slot} <=> $dimms->{$a}{slot} } keys %$dimms) {

Style nit: Please use $dimms->{$b}->{slot}
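
For illustration, the descending-slot ordering with that arrow style could look like the sketch below; the dimm names, slots and sizes are made up here to mirror the log further down, and the hash shape is only assumed to match what qemu_dimm_list() returns:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical dimm list as qemu_dimm_list() might return it via QMP;
# entries are illustrative only.
my $dimms = {
    dimm0  => { slot => 0,  size => 536870912  },
    dimm33 => { slot => 33, size => 1073741824 },
    dimm34 => { slot => 34, size => 1073741824 },
};

# Walk unplug candidates from the highest slot down, using the
# $dimms->{$b}->{slot} arrow style.
my @order = sort { $dimms->{$b}->{slot} <=> $dimms->{$a}->{slot} } keys %$dimms;
print join(' ', @order), "\n"; # dimm34 dimm33 dimm0
```
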

It turns out that the current code using foreach_reverse_dimm() is actually
buggy (and not only when reaching $dimm_id==254). Reproduced with qemu-server 7.3-3:
> root@pve701 ~ # grep memory: /etc/pve/qemu-server/135.conf
> memory: 1536
> root@pve701 ~ # perl memory-qmp.pm 135
> dimm0 (node 0): 536870912
> $VAR1 = {
>           'base-memory' => 1073741824,
>           'plugged-memory' => 536870912
>         };
> root@pve701 ~ # qm set 135 --memory 20480                           
> update VM 135: -memory 20480
> root@pve701 ~ # grep memory: /etc/pve/qemu-server/135.conf
> memory: 20480
> root@pve701 ~ # perl memory-qmp.pm 135
> dimm0 (node 0): 536870912
> dimm1 (node 1): 536870912
> ...
> dimm33 (node 1): 1073741824
> dimm34 (node 0): 1073741824
> $VAR1 = {
>           'base-memory' => 1073741824,
>           'plugged-memory' => '20401094656'
>         };
> root@pve701 ~ # qm set 135 --memory 1536 
> update VM 135: -memory 1536
> try to unplug memory dimm dimm33
> try to unplug memory dimm dimm32
> ...
> try to unplug memory dimm dimm2
> try to unplug memory dimm dimm1
> root@pve701 ~ # grep memory: /etc/pve/qemu-server/135.conf
> memory: 1536
> root@pve701 ~ # perl memory-qmp.pm 135
> dimm0 (node 0): 536870912
> dimm34 (node 0): 1073741824
> $VAR1 = {
>           'base-memory' => 1073741824,
>           'plugged-memory' => 1610612736
>         };

And this patch fixes that :)