public inbox for pve-user@lists.proxmox.com
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-user@lists.proxmox.com" <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] Hotplug Memory and default Linux kernel parameters
Date: Tue, 29 Aug 2023 08:14:35 +0000	[thread overview]
Message-ID: <9d57631330b7acaef801148e14b3a050df156ccd.camel@groupe-cyllene.com> (raw)
In-Reply-To: <222509B0-CEB7-436E-82AA-D16AA78B633C@caltech.edu>

Hi,

see
https://pve.proxmox.com/wiki/Hotplug_(qemu_disk,nic,cpu,memory)#Memory_Hotplug


There are two possibilities:

add "memhp_default_state=online" to the kernel command line in grub

or

add a udev rule, e.g.
/lib/udev/rules.d/80-hotplug-mem.rules

SUBSYSTEM=="memory", ACTION=="add", TEST=="state", ATTR{state}=="offline",
ATTR{state}="online"
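Either change is made from inside the guest. A minimal sketch of checking whether the kernel parameter is already in place (check_cmdline is a hypothetical helper; on a real guest feed it "$(cat /proc/cmdline)"):

```shell
#!/bin/sh
# check_cmdline: hypothetical helper that reports whether
# memhp_default_state=online is present in a kernel command line string.
check_cmdline() {
    case " $1 " in
        *" memhp_default_state=online "*) echo "online-by-default" ;;
        *) echo "needs-config" ;;
    esac
}

# Sample command lines; on a real guest use: check_cmdline "$(cat /proc/cmdline)"
check_cmdline "BOOT_IMAGE=/vmlinuz root=/dev/sda1 ro quiet memhp_default_state=online"
check_cmdline "BOOT_IMAGE=/vmlinuz root=/dev/sda1 ro quiet"
```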



When memory hotplug is enabled, only 1GB of "static" memory is present at
boot (enough to boot the kernel). The remaining memory is exposed as
hotpluggable modules, which are offline by default until the guest brings
them online.
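You can see those offline modules from inside the guest via sysfs. A minimal sketch that only reports what the udev rule (or memhp_default_state=online) would bring up, without writing anything:

```shell
#!/bin/sh
# Dry run: count memory blocks still offline under /sys/devices/system/memory.
# To actually online a block (as root): echo online > "$state"
offline=0
for state in /sys/devices/system/memory/memory*/state; do
    [ -r "$state" ] || continue            # skip if hotplug sysfs is absent
    if [ "$(cat "$state")" = "offline" ]; then
        echo "offline block: ${state%/state}"
        offline=$((offline + 1))
    fi
done
echo "offline blocks found: $offline"
```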



Le samedi 26 août 2023 à 22:41 +0000, Anderson, Stuart B. a écrit :
> Enabling PVE Hotplug Memory for a Linux Guest (tested with PVE 7/8
> and EL8/9) results in default kernel parameters that are orders of
> magnitude smaller than without Hotplug. It appears that the Kernel is
> mistakenly setting defaults as if the guest has only 1GB of memory.
> Does anyone know how to get the same kernel defaults with Hotplug
> Memory enabled as for disabled? Is this a bug in PVE, QEMU, or the
> way Linux queries QEMU?
> 
> 
> For example, a PVE7/EL8 VM with 32GB of Hotplug Memory has a very
> small value of Max processes:
> 
> [root@ldas-pcdev4 ~]# grep processes /proc/$(pgrep systemd-logind)/limits
> Max processes             2654                 2654                 processes
> 
> compared to disabling Hotplug Memory:
> 
> [root@condor-f1 ~]# grep processes /proc/$(pgrep systemd-logind)/limits
> Max processes             127390               127390               processes
> 
> 
> Presumably this is due to the following memory layout as seen by the
> kernel,
> 
> #
> # With Hotplug Memory: 1 bank with a 1GB DIMM
> #
> [root@ldas-pcdev4 ~]# lsmem
> RANGE                                 SIZE  STATE REMOVABLE     BLOCK
> 0x0000000000000000-0x000000003fffffff   1G online       yes       0-7
> 0x0000010000000000-0x00000107bfffffff  31G online       yes 8192-8439
> 
> Memory block size:       128M
> Total online memory:      32G
> Total offline memory:      0B
> 
> [root@ldas-pcdev4 ~]# lshw -class memory
>   *-firmware                        description: BIOS
>        vendor: SeaBIOS
>        physical id: 0
>        version: rel-1.16.1-0-g3208b098f51a-prebuilt.qemu.org
>        date: 04/01/2014
>        size: 96KiB
>   *-memory
>        description: System Memory
>        physical id: 1000
>        size: 32GiB
>        capabilities: ecc
>        configuration: errordetection=multi-bit-ecc
>      *-bank
>           description: DIMM RAM
>           vendor: QEMU
>           physical id: 0
>           slot: DIMM 0
>           size: 1GiB
> 
> 
> #
> # Without Hotplug Memory: 2 banks of 16GB DIMMs
> #
> [root@condor-f1 ~]# lsmem
> RANGE                                 SIZE  STATE REMOVABLE  BLOCK
> 0x0000000000000000-0x00000000bfffffff   3G online       yes   0-23
> 0x0000000100000000-0x000000083fffffff  29G online       yes 32-263
> 
> Memory block size:       128M
> Total online memory:      32G
> Total offline memory:      0B
> 
> [root@condor-f1 ~]# lshw -class memory
>   *-firmware                        description: BIOS
>        vendor: SeaBIOS
>        physical id: 0
>        version: rel-1.16.1-0-g3208b098f51a-prebuilt.qemu.org
>        date: 04/01/2014
>        size: 96KiB
>   *-memory
>        description: System Memory
>        physical id: 1000
>        size: 32GiB
>        capabilities: ecc
>        configuration: errordetection=multi-bit-ecc
>      *-bank:0
>           description: DIMM RAM
>           vendor: QEMU
>           physical id: 0
>           slot: DIMM 0
>           size: 16GiB
>      *-bank:1
>           description: DIMM RAM
>           vendor: QEMU
>           physical id: 1
>           slot: DIMM 1
>           size: 16GiB
> 
> 
> 
> P.S. Unfortunately, this isn't fixed with PVE8 (with a newer QEMU) or by
> updating to a newer EL9 kernel. Here is a PVE8/EL9 VM with 233GB of
> Hotplug Memory showing the same problematically small value:
> 
> [root@pcdev15 ~]# cat /etc/redhat-release 
> Rocky Linux release 9.2 (Blue Onyx)
> 
> [root@pcdev15 ~]# grep processes /proc/$(pgrep systemd-logind)/limits
> Max processes             2659                 2659                 processes
> 
> 
> --
> Stuart Anderson
> sba@caltech.edu
> 
> 
> 
> 
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 


Thread overview: 2 messages
2023-08-26 22:41 Anderson, Stuart B.
2023-08-29  8:14 ` DERUMIER, Alexandre [this message]
