public inbox for pve-devel@lists.proxmox.com
From: "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>,
	"t.lamprecht@proxmox.com" <t.lamprecht@proxmox.com>,
	"f.ebner@proxmox.com" <f.ebner@proxmox.com>,
	"aderumier@odiso.com" <aderumier@odiso.com>
Subject: Re: [pve-devel] [PATCH v2 pve-manager 2/2] ui: qemu : memoryedit: add new max && virtio fields
Date: Sat, 2 Sep 2023 06:18:22 +0000	[thread overview]
Message-ID: <43d759a21681a2bdf8454435d7a8d6a62da0b124.camel@groupe-cyllene.com> (raw)
In-Reply-To: <3e337e38-1a91-8b41-c03c-1f89c8885df7@proxmox.com>

On Friday, September 1, 2023 at 12:24 +0200, Fiona Ebner wrote:
> On 01.09.23 at 11:48, Thomas Lamprecht wrote:
> > On 19/06/2023 at 09:28, Alexandre Derumier wrote:
> > > +               xtype: 'pveMemoryField',
> > > +               name: 'max',
> > > +               minValue: 65536,
> > > +               maxValue: 4194304,
> > > +               value: '',
> > > +               step: 65536,
> > > +               fieldLabel: gettext('Maximum memory') + ' (MiB)',
> > 
> > This huge step size will be confusing to users; there should be a
> > way to have smaller steps (e.g., 1 GiB or even 128 MiB).
> > 
> > As even nowadays, with a huge amount of installed memory on a lot
> > of servers, deciding that a (potentially bad actor) VM can use up
> > 64G or 128G is still quite the difference on a lot of setups.
> > Fiona is checking the backend here to see if it might be done with
> > a finer granularity, or what other options we have here.
> > 

I wasn't thinking of the max size as a security feature, but rather as
a way to define the minimum DIMM size needed to reach this max value.
But indeed, it could be interesting.
 

The step of max should be, at a minimum, the DIMM size:

max > 4GB && <= 64GB : 1 GiB dimm size
max > 64GB && <= 128GB : 2 GiB dimm size
max > 128GB && <= 256GB : 4 GiB dimm size


and we would start qemu with the real max memory of the range (a
configured max > 4GB && <= 64GB ---> qemu is started with 64GB maxmem).

This way, the user could change the max value within his current max
range without needing to restart the VM.
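To make the idea concrete, the mapping above could be sketched like
this (a purely illustrative sketch in Python, not actual qemu-server
code; the function name is hypothetical):

```python
# Hypothetical sketch of the proposed mapping: the configured 'max'
# selects a DIMM size, and QEMU is started with the upper bound of the
# range as maxmem, so 'max' can later change within the same range
# without a VM restart. All sizes are in MiB.
GB = 1024

# (range upper bound, DIMM size) per the table above, both in MiB
RANGES = [
    (64 * GB, 1 * GB),    # max >   4G && <=  64G : 1 GiB DIMMs
    (128 * GB, 2 * GB),   # max >  64G && <= 128G : 2 GiB DIMMs
    (256 * GB, 4 * GB),   # max > 128G && <= 256G : 4 GiB DIMMs
]

def dimm_size_and_maxmem(max_mib):
    """Return (dimm_size, qemu_maxmem) for a configured max."""
    for upper, dimm in RANGES:
        if max_mib <= upper:
            # QEMU gets maxmem = the range's upper bound, not 'max'
            return dimm, upper
    raise ValueError("max above supported range")

# e.g. a configured max of 32 GiB -> 1 GiB DIMMs, QEMU maxmem = 64 GiB
```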


> 
> From a first glance, I think it should be possible. Even if we keep
> the restriction "all memory devices should have the same size",
> which makes the code easier:
> 
> For dimms, we have 64 slots, so I don't see a reason why we can't
> use 64 MiB granularity rather than 64 GiB.
> 
> 

Note that I think we shouldn't go under 128 MiB for the DIMM size, as
it's the minimum hotplug granularity on Linux:

https://docs.kernel.org/admin-guide/mm/memory-hotplug.html
"Memory Hot(Un)Plug Granularity
Memory hot(un)plug in Linux uses the SPARSEMEM memory model, which
divides the physical memory address space into chunks of the same size:
memory sections. The size of a memory section is architecture
dependent. For example, x86_64 uses 128 MiB and ppc64 uses 16 MiB."

Static memory is already set up at 4GB, so I really don't know whether
we still want a 128 MiB DIMM size in 2023?

I really haven't tested Windows and other OSes with a DIMM size under
1 GiB.


If really needed, we could add:

max > 4GB && <= 8GB : 128 MiB dimm size
max > 8GB && <= 16GB : 256 MiB dimm size
max > 16GB && <= 32GB : 512 MiB dimm size
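A sketch of this finer-grained extension, enforcing the 128 MiB Linux
SPARSEMEM section size quoted above as a hard floor (again an
illustrative Python sketch with a hypothetical helper, not actual
qemu-server code):

```python
# Hypothetical sketch: extend the mapping down to smaller DIMMs, but
# never below 128 MiB, the x86_64 SPARSEMEM memory-section size that
# is Linux's minimum hotplug granularity. All sizes are in MiB.
GB = 1024
MIN_DIMM = 128  # x86_64 memory-section size per the kernel docs

def dimm_size(max_mib):
    """DIMM size for a configured max, halving per range step."""
    if max_mib <= 8 * GB:
        dimm = 128
    elif max_mib <= 16 * GB:
        dimm = 256
    elif max_mib <= 32 * GB:
        dimm = 512
    else:
        dimm = 1 * GB  # the 1 GiB floor from the first table
    # floor guard: never offer a DIMM smaller than a memory section
    return max(dimm, MIN_DIMM)
```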





> For virtio-mem, we have one device per socket (up to 8, assuming a
> power of 2), and the blocksize is 2 MiB, so we could have 16 MiB
> granularity.

Yes, it's not a problem to use a 16 MiB max step granularity.


> Or is there an issue setting the 'size' for a virtio-mem-pci device
> to such a fine grained value? Even if there is, we can just create
> the device with a bigger supported 'size' and have our API reject a
> request to go beyond the maximum later.
> 

Yes, I was more in favor of the second proposal.

Like for classic memory, we could simply do something like:

max > 4GB && <= 64GB, with a max step of 16 MiB:

--> start qemu with maxmem=64GB, ~32000 blocks of 2 MiB, but don't
allow the user to add more memory than max via the API.

The max memory is not actually allocated anyway (unless you use
hugepages).

This would allow the user to change the max value within the current
max range without needing to restart the VM.
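A rough sketch of that virtio-mem variant (illustrative Python only;
the helper names and the per-socket split are assumptions, not the
actual qemu-server implementation):

```python
# Hypothetical sketch of the virtio-mem proposal: start QEMU with
# maxmem at the range's upper bound (e.g. 64 GiB) split across one
# virtio-mem device per socket with a 2 MiB block size, and have the
# API reject resizes beyond the configured 'max'. Sizes are in MiB.
GB = 1024
BLOCK_SIZE = 2  # virtio-mem block size in MiB

def virtio_mem_plan(conf_max_mib, sockets, range_upper_mib=64 * GB):
    """Per-device size and effective resize granularity."""
    per_device = range_upper_mib // sockets
    # growing all devices by one block each gives the minimum step
    granularity = sockets * BLOCK_SIZE  # e.g. 8 sockets -> 16 MiB
    return per_device, granularity

def api_set_memory(requested_mib, conf_max_mib):
    # the API enforces the configured max, even though QEMU itself
    # was started with the larger range maxmem
    if requested_mib > conf_max_mib:
        raise ValueError("requested memory exceeds configured max")
    return requested_mib
```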




Thread overview: 29+ messages
2023-06-19  7:28 [pve-devel] [PATCH-SERIE v6 qemu-server/pve-manager] rework memory hotplug + virtiomem Alexandre Derumier
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 01/10] add memory parser Alexandre Derumier
2023-09-01 10:23   ` Fiona Ebner
2023-06-19  7:28 ` [pve-devel] [PATCH v2 pve-manager 1/2] ui: qemu: hardware: add new memory format support Alexandre Derumier
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 02/10] memory: add get_static_mem Alexandre Derumier
2023-06-19  7:28 ` [pve-devel] [PATCH v2 pve-manager 2/2] ui: qemu : memoryedit: add new max && virtio fields Alexandre Derumier
2023-09-01  9:48   ` Thomas Lamprecht
2023-09-01 10:24     ` Fiona Ebner
2023-09-02  6:18       ` DERUMIER, Alexandre [this message]
2023-09-04 10:48         ` Fiona Ebner
2023-09-04 11:40         ` Thomas Lamprecht
2023-09-04 11:48           ` Fiona Ebner
2023-09-05 15:10             ` DERUMIER, Alexandre
2023-09-05 15:16               ` Thomas Lamprecht
2023-09-05 22:35                 ` DERUMIER, Alexandre
2024-07-08 15:10                   ` Fiona Ebner
2024-07-09  9:38                     ` DERUMIER, Alexandre via pve-devel
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 03/10] memory: use static_memory in foreach_dimm Alexandre Derumier
2023-09-01 11:39   ` [pve-devel] applied: " Fiona Ebner
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 04/10] config: memory: add 'max' option Alexandre Derumier
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 05/10] memory: get_max_mem: use config memory max Alexandre Derumier
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 06/10] memory: use 64 slots && static dimm size when max is defined Alexandre Derumier
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 07/10] test: add memory-max tests Alexandre Derumier
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 08/10] memory: add virtio-mem support Alexandre Derumier
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 09/10] memory: virtio-mem : implement redispatch retry Alexandre Derumier
2023-06-19  7:28 ` [pve-devel] [PATCH v6 qemu-server 10/10] tests: add virtio-mem tests Alexandre Derumier
2023-09-01 12:24 ` [pve-devel] [PATCH-SERIE v6 qemu-server/pve-manager] rework memory hotplug + virtiomem Fiona Ebner
     [not found]   ` <CAOKSTBveZE6K6etnDESKXBt1_XpDYUMGpr12qQPyuv0beDRcQw@mail.gmail.com>
2023-09-01 16:30     ` DERUMIER, Alexandre
2023-09-01 16:32   ` DERUMIER, Alexandre
