From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH cluster-pve8 2/2] status: handle new pve9- metrics update data
Date: Fri, 23 May 2025 18:35:35 +0200
Message-ID: <91fe7ca4-a07b-4326-8587-f4e08f1ecd5e@proxmox.com>
In-Reply-To: <20250523160029.404400-3-a.lauterer@proxmox.com>



On 2025-05-23 18:00, Aaron Lauterer wrote:
> For PVE9 there will be additional fields in the metrics that are
> collected. The new columns/fields are added at the end of the current
> ones. Therefore, if we get the new format, we need to cut off the
> additional fields.
> 
> Paths to RRD filenames need to be set manually to 'pve2-...' and will
> use the 'node' part instead of the full key, as the key could also be
> 'pve9-...', which does not exist.

Since this pops up for the first time in the series here: I currently 
chose 'pve9-' as the prefix for the metric keys, following what we have 
used so far AFAICT, i.e. the PVE version in which the format was 
introduced.

But we could also think about changing it to something like 
'pve-{node,storage,vm}-{version}', as that could make it easier to adapt 
the code to handle other, currently unknown formats in the future, as 
long as we always only append new columns/fields.
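
For illustration (these concrete key names are just made up, not 
necessarily what the patches use), the difference would be roughly:

    pve9-vm/105          current style: PVE version as part of the prefix
    pve-vm-9/105         alternative:   pve-{type}-{version}
    pve-node-9/nodename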

But I am not exactly sure how we should handle the versioning, because 
the current approach in status.c is to strncmp() a fixed length of the 
full key, and that would be problematic with keys like the following:

pve-vm-9
pve-vm-10

When do we check for 8 and when for 9 character long prefixes? There 
might be a nicer way to do this, e.g. only comparing up to the 
separating '/'.

Two digits with a leading zero could be one approach. But if we also add 
minor PVE versions, that makes it more complicated again.
Or we could switch to a plain integer that simply gets increased 
whenever we add more data; a rough sketch of that combined with the 
'/'-separator idea is below.
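
A rough, untested sketch of how that could look (the key names and the 
helper name are just made up for illustration, this is not the actual 
status.c code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* split a key like "pve-vm-10/105" into its type ("pve-vm") and
 * version (10), without relying on a fixed prefix length */
static int parse_key_version(const char *key, char *type, size_t typelen)
{
	const char *slash = strchr(key, '/');
	if (!slash)
		return -1;

	char prefix[64];
	size_t plen = (size_t)(slash - key);
	if (plen >= sizeof(prefix))
		return -1;
	memcpy(prefix, key, plen);
	prefix[plen] = '\0';			/* e.g. "pve-vm-10" */

	char *dash = strrchr(prefix, '-');	/* split off the version */
	if (!dash || dash == prefix)
		return -1;
	*dash = '\0';

	if (strlen(prefix) >= typelen)
		return -1;
	strcpy(type, prefix);			/* e.g. "pve-vm" */

	return atoi(dash + 1);			/* 9, 10, ... any number of digits */
}

int main(void)
{
	char type[32];
	int version = parse_key_version("pve-vm-10/105", type, sizeof(type));
	if (version >= 0)
		printf("type=%s version=%d\n", type, version);
	return 0;
}

That way the strncmp() on a hard-coded length would not be needed 
anymore, and new formats would only have to bump the integer.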

Just to throw out some ideas :)

