To: PVE development discussion, Stefan Reiter
References: <20200212133228.8442-1-s.reiter@proxmox.com>
From: Thomas Lamprecht
Message-ID: <37ae0397-4d21-b24d-3b51-a5065b63ec56@proxmox.com>
Date: Thu, 20 Aug 2020 10:17:29 +0200
In-Reply-To: <20200212133228.8442-1-s.reiter@proxmox.com>
Subject: Re: [pve-devel] [PATCH qemu-server] fix #2570: add 'keephugepages' config
List-Id: Proxmox VE development discussion

On 12.02.20 14:32, Stefan Reiter wrote:
> We already keep hugepages if they are created with the kernel
> commandline (hugepagesz=x hugepages=y), but some setups (specifically
> hugepages across multiple NUMA nodes) cannot be configured that way.
> Since we always clear these hugepages at VM shutdown, rebooting a VM
> that uses them might not work, since the requested count might not be
> available anymore by the time we want to use them (also, we would then
> no longer allocate them correctly on the NUMA nodes).
>

but they will now also be kept on a normal shutdown. Why not do this
transparently, without extra config, solely on the API reboot call?

> Add a 'keephugepages' parameter to skip cleanup and simply leave them
> untouched.
>
> Signed-off-by: Stefan Reiter
> ---
>
> I tried adding it as a 'keep' sub-parameter first (i.e.
> 'hugepages: 1024,keep=1' etc.), but it turns out that we hardcode
> $config->{hugepages} to be a numeric string in a *lot* of different
> places, so I opted for this variant instead. Open for suggestions ofc.
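(For context on the multi-NUMA case from the commit message: boot-time parameters only set a global pool, while per-node counts have to be written at runtime through sysfs. A rough sketch; node and page-size paths naturally depend on the host:)

```shell
# Boot-time allocation via kernel command line: global only, no NUMA targeting.
#   hugepagesz=1G hugepages=16

# Runtime, per-NUMA-node allocation (requires root; example node/size paths):
echo 8 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
echo 8 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
```

So once the VM frees its pages on shutdown, nothing re-creates this per-node layout before the next start, which is what the patch tries to avoid.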
>
>  PVE/QemuServer.pm | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 23176dd..4741707 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -384,6 +384,14 @@ EODESC
>  	description => "Enable/disable hugepages memory.",
>  	enum => [qw(any 2 1024)],
>      },
> +    keephugepages => {
> +	optional => 1,
> +	type => 'boolean',
> +	default => 0,
> +	description => "Use together with hugepages. If enabled, hugepages will"
> +	    . " not be deleted after VM shutdown and can be used for"
> +	    . " subsequent starts.",
> +    },
>      vcpus => {
>  	optional => 1,
>  	type => 'integer',
> @@ -5441,11 +5449,13 @@ sub vm_start {
>
>  	eval { $run_qemu->() };
>  	if (my $err = $@) {
> -	    PVE::QemuServer::Memory::hugepages_reset($hugepages_host_topology);
> +	    PVE::QemuServer::Memory::hugepages_reset($hugepages_host_topology)
> +		if !$conf->{keephugepages};
>  	    die $err;
>  	}
>
> -	PVE::QemuServer::Memory::hugepages_pre_deallocate($hugepages_topology);
> +	PVE::QemuServer::Memory::hugepages_pre_deallocate($hugepages_topology)
> +	    if !$conf->{keephugepages};
>      };
>      eval { PVE::QemuServer::Memory::hugepages_update_locked($code); };
>
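(If it does stay a config option: since it is a plain boolean in the VM config schema, it would presumably be set like any other flag, e.g. through qm. Hypothetical usage, assuming this patch is applied; the VMID is just an example:)

```shell
# 'keephugepages' is the new boolean added by this patch (not yet merged);
# --hugepages is the existing option it complements.
qm set 100 --hugepages 1024 --keephugepages 1
```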