public inbox for pve-devel@lists.proxmox.com
From: Stefan Reiter <s.reiter@proxmox.com>
To: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] [PATCH qemu-server] fix #2570: add 'keephugepages' config
Date: Thu, 20 Aug 2020 09:59:22 +0200	[thread overview]
Message-ID: <14879da6-c2fd-59c3-4b4c-7db4c9a5cfd1@proxmox.com> (raw)
In-Reply-To: <20200212133228.8442-1-s.reiter@proxmox.com>

Ping? This is an old one, but the bug report is still active.

Would need a rebase if the approach is deemed ok.

On 2/12/20 2:32 PM, Stefan Reiter wrote:
> We already keep hugepages if they are created via the kernel command
> line (hugepagesz=x hugepages=y), but some setups (specifically
> hugepages spread across multiple NUMA nodes) cannot be configured that
> way. Since we always clear these hugepages at VM shutdown, rebooting a
> VM that uses them might not work, because the requested count might no
> longer be available by the time we want to use them (also, we would
> then no longer allocate them correctly on the NUMA nodes).
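> 
> (For context, not part of this patch and with illustrative values only:
> a boot-time allocation looks something like
> 
>     hugepagesz=1G hugepages=16
> 
> on the kernel command line, while a per-NUMA-node split has to be set
> up at runtime via sysfs, roughly
> 
>     echo 8 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
>     echo 8 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
> 
> and it is that runtime allocation which gets cleared again on shutdown.)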
> 
> Add a 'keephugepages' parameter to skip cleanup and simply leave them
> untouched.
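> 
> With this applied, a VM config using the new option could look roughly
> like the following (illustrative values only):
> 
>     hugepages: 1024
>     keephugepages: 1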
> 
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> 
> I tried adding it as a 'keep' sub-parameter first (i.e.
> 'hugepages: 1024,keep=1' etc.), but it turns out that we hardcode
> $config->{hugepages} to be a numeric string in a *lot* of different places, so I
> opted for this variant instead. Open for suggestions ofc.
> 
>   PVE/QemuServer.pm | 14 ++++++++++++--
>   1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 23176dd..4741707 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -384,6 +384,14 @@ EODESC
>   	description => "Enable/disable hugepages memory.",
>   	enum => [qw(any 2 1024)],
>       },
> +    keephugepages => {
> +	optional => 1,
> +	type => 'boolean',
> +	default => 0,
> +	description => "Use together with hugepages. If enabled, hugepages will"
> +		     . " not be deleted after VM shutdown and can be used for"
> +		     . " subsequent starts.",
> +    },
>       vcpus => {
>   	optional => 1,
>   	type => 'integer',
> @@ -5441,11 +5449,13 @@ sub vm_start {
>   
>   		eval { $run_qemu->() };
>   		if (my $err = $@) {
> -		    PVE::QemuServer::Memory::hugepages_reset($hugepages_host_topology);
> +		    PVE::QemuServer::Memory::hugepages_reset($hugepages_host_topology)
> +			if !$conf->{keephugepages};
>   		    die $err;
>   		}
>   
> -		PVE::QemuServer::Memory::hugepages_pre_deallocate($hugepages_topology);
> +		PVE::QemuServer::Memory::hugepages_pre_deallocate($hugepages_topology)
> +		    if !$conf->{keephugepages};
>   	    };
>   	    eval { PVE::QemuServer::Memory::hugepages_update_locked($code); };
>   
> 


Thread overview: 3+ messages
     [not found] <20200212133228.8442-1-s.reiter@proxmox.com>
2020-08-20  7:59 ` Stefan Reiter [this message]
2020-08-20  8:17 ` Thomas Lamprecht
2020-08-20  8:20   ` Stefan Reiter
