From: Stefan Reiter <s.reiter@proxmox.com>
To: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] [PATCH qemu-server] fix #2570: add 'keephugepages' config
Date: Thu, 20 Aug 2020 09:59:22 +0200
Message-ID: <14879da6-c2fd-59c3-4b4c-7db4c9a5cfd1@proxmox.com>
In-Reply-To: <20200212133228.8442-1-s.reiter@proxmox.com>

Ping? This is an old one, but the bug report is still active. The patch
would need a rebase if the approach is deemed acceptable.

On 2/12/20 2:32 PM, Stefan Reiter wrote:
> We already keep hugepages if they are created via the kernel command
> line (hugepagesz=x hugepages=y), but some setups (specifically
> hugepages spread across multiple NUMA nodes) cannot be configured that
> way. Since we always clear these hugepages at VM shutdown, rebooting a
> VM that uses them might fail: the requested count might no longer be
> available by the time we want to use them (and we would also no longer
> allocate them correctly on the NUMA nodes).
>
> Add a 'keephugepages' parameter to skip the cleanup and simply leave
> the pages untouched.
>
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
>
> I tried adding it as a 'keep' sub-parameter first (i.e.
> 'hugepages: 1024,keep=1' etc.), but it turns out that we hardcode
> $config->{hugepages} to be a numeric string in a *lot* of different
> places, so I opted for this variant instead. Open for suggestions, of
> course.
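As background for the NUMA case above: 'hugepagesz=x hugepages=y' on the
kernel command line only sizes a global pool and cannot target individual
nodes, so per-node pools have to be written at runtime through sysfs. A
rough sketch of that mechanism follows -- illustrative only, not code from
this patch; the helper name, page size and node numbers are made up:

#!/usr/bin/perl
use strict;
use warnings;

# Request $count hugepages of $size_kb KiB on NUMA node $node via sysfs.
sub alloc_node_hugepages {
    my ($node, $size_kb, $count) = @_;
    my $path = "/sys/devices/system/node/node$node"
        . "/hugepages/hugepages-${size_kb}kB/nr_hugepages";
    open(my $fh, '>', $path) or die "cannot open $path: $!\n";
    print $fh "$count\n";
    close($fh) or die "cannot write $path: $!\n";

    # The kernel allocates best-effort, so read the count back to verify.
    open($fh, '<', $path) or die "cannot open $path: $!\n";
    chomp(my $got = <$fh>);
    close($fh);
    die "node $node: wanted $count, got $got\n" if $got != $count;
}

# e.g. 512 x 2 MiB pages on each of nodes 0 and 1 (requires root)
alloc_node_hugepages($_, 2048, 512) for (0, 1);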
>
>  PVE/QemuServer.pm | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 23176dd..4741707 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -384,6 +384,14 @@ EODESC
>          description => "Enable/disable hugepages memory.",
>          enum => [qw(any 2 1024)],
>      },
> +    keephugepages => {
> +        optional => 1,
> +        type => 'boolean',
> +        default => 0,
> +        description => "Use together with hugepages. If enabled, hugepages will"
> +            . " not be deleted after VM shutdown and can be used for"
> +            . " subsequent starts.",
> +    },
>      vcpus => {
>          optional => 1,
>          type => 'integer',
> @@ -5441,11 +5449,13 @@ sub vm_start {
>
>          eval { $run_qemu->() };
>          if (my $err = $@) {
> -            PVE::QemuServer::Memory::hugepages_reset($hugepages_host_topology);
> +            PVE::QemuServer::Memory::hugepages_reset($hugepages_host_topology)
> +                if !$conf->{keephugepages};
>              die $err;
>          }
>
> -        PVE::QemuServer::Memory::hugepages_pre_deallocate($hugepages_topology);
> +        PVE::QemuServer::Memory::hugepages_pre_deallocate($hugepages_topology)
> +            if !$conf->{keephugepages};
>      };
>      eval { PVE::QemuServer::Memory::hugepages_update_locked($code); };
>
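For completeness, with this patch applied the new flag would sit next to
'hugepages' in the VM config. An illustrative excerpt, with <vmid> as a
placeholder:

# /etc/pve/qemu-server/<vmid>.conf
memory: 4096
hugepages: 1024
keephugepages: 1

Since 'keephugepages' is a plain boolean in the config schema, it should
also be accepted by 'qm set <vmid> --keephugepages 1'.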