From: Marco Gaiarin <gaio@lilliput.linux.it>
To: pve-user@lists.proxmox.com
Subject: [PVE-User] page allocation failure and virtio_balloon ...
Date: Thu, 16 Oct 2025 10:45:43 +0200
Message-ID: <lun6sl-ial.ln1@leia.lilliput.linux.it>

I've been optimizing memory management in a cluster I manage, defining a
min/max memory for some VMs and, of course, enabling the ballooning device.
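
For reference, this is roughly how the VMs are configured (VMID and sizes
here are illustrative, not the real ones):

  # max 16G of RAM; the balloon may shrink the guest down to 8G
  qm set 100 --memory 16384 --balloon 8192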

As a start I've set the minimum memory to one half of the maximum, and I'm
adding RAM one GB at a time, but I still catch some page allocation
failures (PAF) in the VM:

Oct 16 09:23:18 vdmsv1 kernel: [500235.012795] kworker/1:0: page allocation failure: order:0, mode:0x24310ca(GFP_HIGHUSER_MOVABLE|__GFP_NORETRY|__GFP_NOMEMALLOC)
Oct 16 09:23:18 vdmsv1 kernel: [500235.012815] CPU: 1 PID: 25260 Comm: kworker/1:0 Not tainted 4.9.0-19-amd64 #1 Debian 4.9.320-3
Oct 16 09:23:18 vdmsv1 kernel: [500235.012816] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
Oct 16 09:23:18 vdmsv1 kernel: [500235.012834] Workqueue: events_freezable update_balloon_size_func [virtio_balloon]
Oct 16 09:23:18 vdmsv1 kernel: [500235.012837]  0000000000000000 ffffffffad213127 ffffffffad603b30 ffffa1dbc42b3bd0
Oct 16 09:23:18 vdmsv1 kernel: [500235.012840]  ffffffffacd8dd8a 024310ca00000006 ffffffffad603b30 ffffa1dbc42b3b70
Oct 16 09:23:18 vdmsv1 kernel: [500235.012842]  ffff895a00000010 ffffa1dbc42b3be0 ffffa1dbc42b3b90 756a7c817f7c654b
Oct 16 09:23:18 vdmsv1 kernel: [500235.012845] Call Trace:
Oct 16 09:23:18 vdmsv1 kernel: [500235.012889]  [<ffffffffad213127>] ? dump_stack+0x66/0x81
[...]
Oct 16 09:23:18 vdmsv1 kernel: [500235.012981]  [<ffffffffad222737>] ? ret_from_fork+0x57/0x70
Oct 16 09:23:18 vdmsv1 kernel: [500235.012982] Mem-Info:
Oct 16 09:23:18 vdmsv1 kernel: [500235.012990] active_anon:1441909 inactive_anon:236013 isolated_anon:0
Oct 16 09:23:18 vdmsv1 kernel: [500235.012990]  active_file:242538 inactive_file:271083 isolated_file:0
Oct 16 09:23:18 vdmsv1 kernel: [500235.012990]  unevictable:575 dirty:327 writeback:0 unstable:0
Oct 16 09:23:18 vdmsv1 kernel: [500235.012990]  slab_reclaimable:226171 slab_unreclaimable:119473
Oct 16 09:23:18 vdmsv1 kernel: [500235.012990]  mapped:36459 shmem:30366 pagetables:55126 bounce:0
Oct 16 09:23:18 vdmsv1 kernel: [500235.012990]  free:33864 free_pcp:11 free_cma:0
Oct 16 09:23:18 vdmsv1 kernel: [500235.012996] Node 0 active_anon:5767636kB inactive_anon:944052kB active_file:970152kB inactive_file:1084332kB unevictable:2300kB isolated(anon):0kB isolated(file):0kB mapped:145836kB dirty:1308kB writeback:0kB shmem:121464kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 2048kB writeback_tmp:0kB unstable:0kB pages_scanned:57 all_unreclaimable? no
Oct 16 09:23:18 vdmsv1 kernel: [500235.012997] Node 0 DMA free:15908kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Oct 16 09:23:18 vdmsv1 kernel: [500235.013001] lowmem_reserve[]: 0 2974 16013 16013 16013
Oct 16 09:23:18 vdmsv1 kernel: [500235.013004] Node 0 DMA32 free:64616kB min:12540kB low:15672kB high:18804kB active_anon:1347240kB inactive_anon:184356kB active_file:217624kB inactive_file:414336kB unevictable:0kB writepending:80kB present:3129196kB managed:2657840kB mlocked:0kB slab_reclaimable:211560kB slab_unreclaimable:116860kB kernel_stack:2960kB pagetables:44012kB bounce:0kB free_pcp:44kB local_pcp:0kB free_cma:0kB
Oct 16 09:23:18 vdmsv1 kernel: [500235.013012] lowmem_reserve[]: 0 0 13039 13039 13039
Oct 16 09:23:18 vdmsv1 kernel: [500235.013015] Node 0 Normal free:54932kB min:54976kB low:68720kB high:82464kB active_anon:4420392kB inactive_anon:759696kB active_file:752528kB inactive_file:670024kB unevictable:2300kB writepending:1228kB present:13631488kB managed:8123560kB mlocked:2300kB slab_reclaimable:693124kB slab_unreclaimable:361032kB kernel_stack:15968kB pagetables:176492kB bounce:0kB free_pcp:112kB local_pcp:0kB free_cma:0kB
Oct 16 09:23:18 vdmsv1 kernel: [500235.013019] lowmem_reserve[]: 0 0 0 0 0
Oct 16 09:23:18 vdmsv1 kernel: [500235.013022] Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15908kB
Oct 16 09:23:18 vdmsv1 kernel: [500235.013033] Node 0 DMA32: 6146*4kB (UME) 3055*8kB (UME) 864*16kB (UM) 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB (H) 0*4096kB = 64896kB
Oct 16 09:23:18 vdmsv1 kernel: [500235.013042] Node 0 Normal: 13730*4kB (UME) 2*8kB (M) 1*16kB (M) 3*32kB (M) 2*64kB (M) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 55176kB
Oct 16 09:23:18 vdmsv1 kernel: [500235.013053] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Oct 16 09:23:18 vdmsv1 kernel: [500235.013053] 582867 total pagecache pages
Oct 16 09:23:18 vdmsv1 kernel: [500235.013054] 38399 pages in swap cache
Oct 16 09:23:18 vdmsv1 kernel: [500235.013056] Swap cache stats: add 1479527, delete 1441128, find 6937743/7156831
Oct 16 09:23:18 vdmsv1 kernel: [500235.013057] Free swap  = 14964364kB
Oct 16 09:23:18 vdmsv1 kernel: [500235.013057] Total swap = 15624188kB
Oct 16 09:23:18 vdmsv1 kernel: [500235.013058] 4194169 pages RAM
Oct 16 09:23:18 vdmsv1 kernel: [500235.013058] 0 pages HighMem/MovableOnly
Oct 16 09:23:18 vdmsv1 kernel: [500235.013059] 1494842 pages reserved
Oct 16 09:23:18 vdmsv1 kernel: [500235.013059] 0 pages hwpoisoned
Oct 16 09:23:18 vdmsv1 kernel: [500235.013096] virtio_balloon virtio0: Out of puff! Can't get 1 pages
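
If I read the Mem-Info above correctly, the Normal zone has dropped just
below its min watermark (free:54932kB vs min:54976kB), which presumably is
why an order:0 allocation flagged __GFP_NORETRY|__GFP_NOMEMALLOC fails
outright while the balloon inflates. Inside the guest the watermarks can be
watched with something like:

  cat /proc/sys/vm/min_free_kbytes
  grep -A6 'zone *Normal' /proc/zoneinfo   # pages free vs. min/low/high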

Is this normal? Is there something I've not understood?


Thanks.


PS: the physical PVE node has free RAM; usage is currently about 80%
    (64GB total, so roughly 12GB free)
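
PPS: for completeness, the balloon actual/target of a VM can be
     cross-checked from the node via the QEMU monitor, e.g. (assuming the
     qm> prompt):

       qm monitor <vmid>
       qm> info balloon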


