From: "DERUMIER, Alexandre" <Alexandre.DERUMIER@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>,
"aderumier@odiso.com" <aderumier@odiso.com>,
"f.ebner@proxmox.com" <f.ebner@proxmox.com>
Subject: Re: [pve-devel] [PATCH v2 qemu-server 6/9] memory: use 64 slots && static dimm size when max is defined
Date: Fri, 27 Jan 2023 15:52:03 +0000 [thread overview]
Message-ID: <32a94fdf92b9312c67648ba2b33231e53a7ce312.camel@groupe-cyllene.com> (raw)
In-Reply-To: <70ef3f8a-2a0c-f7da-8f04-d4b73e13df9d@proxmox.com>
>
> Question about the existing code: The loops below can count up to
> $dimm_id 255, but in the commit message you say that there are at
> most
> 255 slots (so the highest ID is 254?). But yeah, it only becomes
> relevant when going all the way to approximately 4 TiB.
Yes, the maximum is 255 slots (IDs 0->254).
If I remember correctly (2015 ^_^), the last iteration (dimm id 255) of
the 8x32 loop was already over the value supported by QEMU or by the
config (because of the static memory), so the function always returns
before reaching it.
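
To make that concrete, here is a rough stand-alone illustration (not the
qemu-server code itself, just a sketch assuming the usual 512 MiB starting
dimm size) of the dimm IDs and sizes the legacy 8x32 loop walks through:

    use strict;
    use warnings;

    my $dimm_size = 512;   # MiB, assumed starting size in the legacy scheme
    my $dimm_id   = 0;
    for my $j (0 .. 7) {
        for my $i (0 .. 31) {
            printf "dimm%d: %d MiB\n", $dimm_id, $dimm_size;
            $dimm_id++;
        }
        $dimm_size *= 2;    # size doubles after every block of 32 dimms
    }
    # the last line printed is "dimm255: 65536 MiB"; summing all dimms up
    # to there (plus the static memory) is what lands around the ~4 TiB
    # limit mentioned above, which is why that iteration is never reached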
>
> > @@ -209,7 +216,7 @@ sub foreach_dimm{
> > &$func($conf, $vmid, $name, $dimm_size, $numanode,
> > $current_size, $memory);
> > return $current_size if $current_size >= $memory;
> > }
> > - $dimm_size *= 2;
> > + $dimm_size *= 2 if !$confmem->{max};
> > }
> > }
> >
> > @@ -220,7 +227,12 @@ sub foreach_reverse_dimm {
>
> Question about the existing code: There is
> my $dimm_id = 253;
> Shouldn't that start at 254 (highest valid ID we can count up to?).
> Again only becomes relevant with a lot of memory.
>
Mmm, I really don't remember. I need to double-check, but I think it
should indeed be 254.
> > my $current_size = 0;
> > my $dimm_size = 0;
> >
> > - if($conf->{hugepages} && $conf->{hugepages} == 1024) {
> > + my $confmem = parse_memory($conf->{memory});
> > + if ($confmem->{max}) {
> > + $dimm_id = $MAX_SLOTS - 1;
> > + $current_size = $confmem->{max};
>
> Does this need to be $confmem->{max} + $static_size? See below for a
> description of the issue. Didn't think about it in detail, so please
> double check ;)
Mmm, I wonder if I shouldn't lower the number of slots here, as the "max"
option from the config is "static + x dimm slots", but in that case it
would depend on the number of sockets.
> > + $dimm_size = $confmem->{max} / $MAX_SLOTS;
> > + } elsif ($conf->{hugepages} && $conf->{hugepages} == 1024) {
> > $current_size = 8355840;
> > $dimm_size = 131072;
> > } else {
>
> Nit: the loops below here are
> for (my $j = 0; $j < 8; $j++) {
> for (my $i = 0; $i < 32; $i++) {
> so it looks like potentially iterating more often than $MAX_SLOTS and
> reaching negative $dimm_ids. I know that we should always return from
> the loop earlier than that, but maybe it can be improved by
> extracting
> the inner part in a sub/closure and using different loops depending
> on
> how many slots there are? Same applies to foreach_dimm().
>
>
Yes, I think that would be better.
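
Just to sketch what I have in mind (names and numbers below are only
illustrative, not from the patch): the per-dimm body becomes a closure and
each branch chooses its own loop bounds, so neither variant can walk past
the last valid slot:

    use strict;
    use warnings;

    my $MAX_SLOTS = 64;
    my $confmem   = { max => 65536 };   # example parsed memory config, MiB

    my $add_dimm = sub {
        my ($dimm_id, $dimm_size) = @_;
        # the real code would build "dimm${dimm_id}" and call &$func(...) here
        printf "dimm%d: %d MiB\n", $dimm_id, $dimm_size;
    };

    if ($confmem->{max}) {
        # fixed dimm size, exactly $MAX_SLOTS slots
        my $dimm_size = $confmem->{max} / $MAX_SLOTS;
        $add_dimm->($_, $dimm_size) for (0 .. $MAX_SLOTS - 1);
    } else {
        # legacy scheme: 8 blocks of 32 dimms, size doubling per block
        my $dimm_size = 512;
        for my $j (0 .. 7) {
            $add_dimm->($_ + $j * 32, $dimm_size) for (0 .. 31);
            $dimm_size *= 2;
        }
    }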
> Real issue: something is wrong with the calculation for unplugging in
> combination with 'max' (it uses the wrong dimm IDs):
>
>
I have verified it myself; indeed, it's really buggy.
I'll work on it this weekend, thanks for the review!