From: John <cuorie+proxmox@gmail.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] swappiness on Debian container
Date: Wed, 26 May 2021 09:37:07 -0400
Message-ID: <CAJqs_hvO2Q5WLtXx3DO58sUeUp1UPr82ukD6-CyqBxkkoC8hAA@mail.gmail.com>
In-Reply-To: <07951a6a-bce5-6224-600c-25462b88334d@matrixscience.com>
Not sure this is a best practice, but when creating containers, I set the
swap amount to 0.
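(The "Read-only file system" error is expected, by the way: as your mount
output shows, /proc/sys is mounted read-only inside the container, so
vm.swappiness can only be changed on the host kernel that the container
shares.)

For an existing container you can zero its swap from the host instead of
recreating it. A minimal sketch, assuming the container ID is 101 (that
matches the subvol-101-disk-0 in your mount output; adjust as needed):

# on the Proxmox host: remove the container's swap allocation
# (assumes CT ID 101 -- substitute your own container ID)
pct set 101 --swap 0

The change may need a container restart to fully apply; with no swap
assigned to the container, its swappiness setting no longer matters.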
-John
On Wed, May 26, 2021 at 7:51 AM Adam Weremczuk <adamw@matrixscience.com>
wrote:
> Hi all,
>
> Proxmox 6.2-6: one of the containers (Debian 9.9) is swapping memory even
> though it has plenty of unused RAM left.
>
> I've tried the suggestions from:
>
>
> https://askubuntu.com/questions/157793/why-is-swap-being-used-even-though-i-have-plenty-of-free-ram
>
> https://askubuntu.com/questions/192304/changing-swappiness-in-sysctl-conf-doesnt-work-for-me
>
> to change the default swappiness of 60 to 10, and have rebooted multiple
> times, but none of the changes is taking effect.
>
> I've tried switching between a privileged and an unprivileged container,
> as well as enabling nesting. Still no joy:
>
> $ sudo sysctl -p
> sysctl: setting key "vm.swappiness": Read-only file system
>
> $ mount
> ms-zfs-pool/subvol-101-disk-0 on / type zfs (rw,xattr,posixacl)
> none on /dev type tmpfs (rw,relatime,size=492k,mode=755)
> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
> proc on /proc/sys/net type proc (rw,nosuid,nodev,noexec,relatime)
> proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
> proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
> sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
> sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
> sysfs on /sys/devices/virtual/net type sysfs (rw,relatime)
> sysfs on /sys/devices/virtual/net type sysfs (rw,nosuid,nodev,noexec,relatime)
> fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
> proc on /dev/.lxc/proc type proc (rw,relatime)
> sys on /dev/.lxc/sys type sysfs (rw,relatime)
> lxcfs on /proc/cpuinfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
> lxcfs on /proc/diskstats type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
> lxcfs on /proc/loadavg type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
> lxcfs on /proc/meminfo type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
> lxcfs on /proc/stat type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
> lxcfs on /proc/swaps type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
> lxcfs on /proc/uptime type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
> lxcfs on /sys/devices/system/cpu/online type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
> devpts on /dev/console type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
> devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666,max=1024)
> devpts on /dev/ptmx type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666,max=1024)
> devpts on /dev/tty1 type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666,max=1024)
> devpts on /dev/tty2 type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666,max=1024)
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
> tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
> tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
> cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
> cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
> cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
> cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
> cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
> cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
> cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
> cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
> cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
> cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
> cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
> cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
> mqueue on /dev/mqueue type mqueue (rw,relatime)
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
>
> Any ideas?
>
> Regards,
> Adam