public inbox for pve-devel@lists.proxmox.com
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Oguz Bektas <o.bektas@proxmox.com>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH container] vmstatus: include detected IP address of running containers
Date: Tue, 27 Jul 2021 17:02:59 +0200
Message-ID: <679e56cc-9cd9-ae44-72ee-e01e3be053d4@proxmox.com>
In-Reply-To: <YQAVku4oDXjsF9UG@gaia>

On 27.07.21 16:17, Oguz Bektas wrote:
> On Tue, Jul 27, 2021 at 04:03:17PM +0200, Thomas Lamprecht wrote:
>> On 27.07.21 15:37, Oguz Bektas wrote:
>>> add a helper 'find_lxc_ip_address' to fetch the IP address of a container from
>>> its network namespace using lxc-info.
>>>
>>> for the moment it can be queried with the pct tool:
>>> $ pct status 1000 --verbose
>>> cpu: 0
>>> cpus: 1
>>> disk: 6422528
>>> diskread: 368640
>>> diskwrite: 0
>>> ipaddress: 192.168.31.83        <----
>>> maxdisk: 4294967296
>>> maxmem: 536870912
>>> maxswap: 536870912
>>> mem: 864256
>>> name: CT1000
>>> netin: 3281265
>>> netout: 15794
>>> pid: 34897
>>> status: running
>>> swap: 94208
>>> type: lxc
>>> uptime: 11088
>>> vmid: 1000
>>>
>>
>>
>> And why use `lxc-info`? This is called very often, and we know about setups with
>> 1500+ CTs on a single host, so it'd be good to check whether adding 1000s of forks
>> on every status call can be avoided.
> 
> lxc-info already queries this information from netlink; otherwise, switching the
> network namespace and talking to netlink in perl would be hacky (at least that's
> the impression I got from looking at the relevant lxc code [0][1]).

It does not have to be hacky if done nicely; we could do that in Rust and use only
the bindings in Perl. Mira already did quite some work for the conntrack stuff,
which could possibly be re-used/shaped into something for this use case.

But we cannot really do away with the fork there either, as unshare into another NS
is only valid for the current process's future children...
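
To illustrate the point, here is a rough, hypothetical sketch of such a fork+setns
helper in Perl (not part of the patch; the syscall number and CLONE_NEWNET value
are the x86_64 ones, and the actual netlink address query is left to the future
bindings):

use POSIX qw(_exit);

my $SYS_setns = 308;           # setns(2) syscall number on x86_64
my $CLONE_NEWNET = 0x40000000; # from <linux/sched.h>

sub run_in_ct_netns {
    my ($ct_pid, $code) = @_;
    # the CT's network namespace, reachable through its init PID
    open(my $ns_fh, '<', "/proc/$ct_pid/ns/net") or die "open netns: $!\n";
    my $child = fork() // die "fork failed: $!\n";
    if (!$child) {
        # setns() only affects this child and its future children,
        # the parent stays in the host namespace
        syscall($SYS_setns, fileno($ns_fh), $CLONE_NEWNET) == 0
            or _exit(1);
        eval { $code->() }; # e.g. the netlink address query
        _exit($@ ? 1 : 0);
    }
    waitpid($child, 0);
    return $? == 0;
}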

> 
> [0]: https://github.com/lxc/lxc/blob/f1c64634c40a7218165538b89aca320fa258b3c1/src/lxc/lxccontainer.c#L2235
> [1]: https://github.com/lxc/lxc/blob/f1c64634c40a7218165538b89aca320fa258b3c1/src/lxc/lxccontainer.c#L2429
> 
> Fabian also mentioned we could use 'iproute2':

Sure, that would be useful, as it actually gets you the full CIDR and possibly
any other detail.

> $ lxc-info --pid 1000
> PID:            34897
> $ ip netns attach 1000 34897
> $ ip -n 1000 a show

Three commands aren't exactly better than the "no command" I wished for ;-)

> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: eth0@if84: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether fa:2e:e1:b6:a4:b0 brd ff:ff:ff:ff:ff:ff link-netns 1000
>     inet 192.168.31.83/20 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::f82e:e1ff:feb6:a4b0/64 scope link
>        valid_lft forever preferred_lft forever
> 
> but that leaves the namespace file in /run/netns/, which requires extra cleanup
> and handling when the container is rebooting or shutting down, so this way seemed
> like it would be cleaner.

yeah no, that seems icky.

> 
> to avoid the forks in bigger setups, it might make sense to call it once and cache
> the result until there's a network hotplug or similar, or to only poll every N minutes.

The latter adds a lot of complexity; the former could work out, but the cache would
need to live in /run or the like, else every worker would eventually cache 1000s of
entries, and that's not really what a simple and clean solution would look like either.

We could replicate what we do for VMs: add a `$full` boolean flag and only query that
info if it's true. That would at least be dead simple.
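
As a rough illustration of that gating (a hypothetical sketch; find_lxc_ip_address
is the helper from the patch, all other names are made up):

# only pay for the extra fork when the caller asked for full info
sub fill_ct_ip_address {
    my ($vmid, $d, $full) = @_;
    return if !$full || $d->{status} ne 'running';
    # eval: one broken CT must not fail the whole vmstatus call
    $d->{ipaddress} = eval { find_lxc_ip_address($vmid) };
    warn "query IP of CT $vmid failed: $@" if $@;
}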

> 
>> What about multiple addresses? You only print the last match, which is quite confusing...
>> You forgot to add this to the API/CLI return schema in $PVE::LXC::vmstatus_return_properties,
>> which most probably actually wants to be an array...
> 
> okay will do that in v2 with an array to account for multiple addresses
> 

and also:

On 27.07.21 16:03, Thomas Lamprecht wrote:
> You forgot to add this to the API/CLI return schema in $PVE::LXC::vmstatus_return_properties,
> which most probably actually wants to be an array...
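
Such a schema addition could look roughly like this (a hypothetical sketch; the key
name, description, and the use of the 'ip' format are assumptions):

# sketch only -- in a real patch this entry would go directly into the
# hash definition; key name and 'ip' format are assumptions
$PVE::LXC::vmstatus_return_properties->{ipaddresses} = {
    description => "Detected IP addresses of the running container.",
    type => 'array',
    items => { type => 'string', format => 'ip' },
    optional => 1,
};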