* [pve-devel] [PATCH cluster-pve8 v3 1/2] cfs status.c: drop old pve2-vm rrd schema support
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-16 22:32 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH cluster-pve8 v3 2/2] status: handle new metrics update data Aaron Lauterer
` (34 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
The newer pve2.3-vm schema was introduced with commit ba9dcfc1 back
in 2013. By now there should be no cluster where an older node might
still send the old pve2-vm schema.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/pmxcfs/status.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index 0895e53..eba4e52 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -1234,16 +1234,10 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
create_rrd_file(filename, argcount, rrd_def_node);
}
- } else if ((strncmp(key, "pve2-vm/", 8) == 0) || (strncmp(key, "pve2.3-vm/", 10) == 0)) {
- const char *vmid;
+ } else if (strncmp(key, "pve2.3-vm/", 10) == 0) {
+ const char *vmid = key + 10;
- if (strncmp(key, "pve2-vm/", 8) == 0) {
- vmid = key + 8;
- skip = 2;
- } else {
- vmid = key + 10;
- skip = 4;
- }
+ skip = 4;
if (strchr(vmid, '/') != NULL) {
goto keyerror;
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 59+ messages in thread
* [pve-devel] [PATCH cluster-pve8 v3 2/2] status: handle new metrics update data
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
2025-07-15 14:31 ` [pve-devel] [PATCH cluster-pve8 v3 1/2] cfs status.c: drop old pve2-vm rrd schema support Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-16 22:32 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH manager-pve8 v3 1/2] api2tools: drop old VM rrd schema Aaron Lauterer
` (33 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
For PVE9 we plan to add additional fields to the metrics that are
collected and distributed in the cluster. The new fields/columns are
added at the end of the current ones. This makes it possible for PVE8
installations to still use them by cutting off the new additional data.
To make it more future-proof, the format of the keys for each metric
is changed:
Old: pve{version}-{type}/{id}
New: pve-{type}-{version}/{id}
This way we have an easier time handling new versions in the future, as
we initially only need to check for `pve-{type}-`. If we know the
version, we can handle it accordingly, e.g. pad an older format with
missing data. If we don't know the version, it must be a newer one, and
we cut the data stream at the length we need for the current version.
This means of course that, to avoid a breaking change, we can only add
new columns if needed, but not remove any! But waiting for a breaking
change until the next major release is a worthwhile trade-off if it
allows us to expand the format in between if needed.
Since the full keys were used for the final location within the RRD
directory, we need to change that as well and set it manually to
'pve2-{type}' as the key we receive could be for a newer data format.
The 'rrd_skip_data' function got a new parameter defining the separating
character. This makes it possible to also use it to determine which part
of the key string is the version/type and which one is the actual
resource identifier.
We drop the pve2-vm schema, as the newer pve2.3-vm was introduced
with commit ba9dcfc1 back in 2013. By now there should be no cluster
where an older node might still send the old pve2-vm schema.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/pmxcfs/status.c | 79 ++++++++++++++++++++++++++++++---------------
1 file changed, 53 insertions(+), 26 deletions(-)
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index eba4e52..640540f 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -1185,16 +1185,33 @@ static void create_rrd_file(const char *filename, int argcount, const char *rrdd
}
}
-static inline const char *rrd_skip_data(const char *data, int count) {
+static inline const char *rrd_skip_data(const char *data, int count, char separator) {
int found = 0;
while (*data && found < count) {
- if (*data++ == ':') {
+ if (*data++ == separator) {
found++;
}
}
return data;
}
+// The key and subdirectory format used up until PVE8 is 'pve{version}-{type}/{id}', with version
+// being 2, or 2.3 for VMs. Starting with PVE9 it is 'pve-{type}-{version}/{id}'. Newer versions are only
+// allowed to append new columns to the data! Otherwise this would be a breaking change.
+//
+// Type can be: node, vm, storage
+//
+// Version is the version of PVE with which it was introduced, e.g.: 9.0, 9.2, 10.0.
+//
+// ID is the actual identifier of the item in question. E.g. node name, VMID or for storage it is
+// '{node}/{storage name}'
+//
+// This way, we can handle unknown new formats gracefully and cut the data at the expected
+// column for the currently understood format. Receiving older formats will still need special
+// checks to determine how much padding is needed.
+//
+// Should we ever plan to change existing columns, we need to introduce this as a breaking
+// change!
static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
g_return_if_fail(key != NULL);
g_return_if_fail(data != NULL);
@@ -1210,12 +1227,13 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
char *filename = NULL;
- int skip = 0;
+ int skip = 0; // columns to skip at the beginning; they contain non-archivable data, like uptime,
+ // status, whether the guest is a template, and such
+ int keep_columns = 0; // how many columns we want to keep (after the initial skip) in case we get
+ // more columns than needed from a newer format
- if (strncmp(key, "pve2-node/", 10) == 0) {
- const char *node = key + 10;
-
- skip = 2;
+ if (strncmp(key, "pve2-node/", 10) == 0 || strncmp(key, "pve-node-", 9) == 0) {
+ const char *node = rrd_skip_data(key, 1, '/');
if (strchr(node, '/') != NULL) {
goto keyerror;
@@ -1225,19 +1243,23 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
- filename = g_strdup_printf(RRDDIR "/%s", key);
+ skip = 2; // first two columns are live data that isn't archived
- if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
+ if (strncmp(key, "pve-node-", 9) == 0) {
+ keep_columns = 12; // pve2-node format uses 12 columns
+ }
+ filename = g_strdup_printf(RRDDIR "/pve2-node/%s", node);
+
+ if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
mkdir(RRDDIR "/pve2-node", 0755);
int argcount = sizeof(rrd_def_node) / sizeof(void *) - 1;
create_rrd_file(filename, argcount, rrd_def_node);
}
- } else if (strncmp(key, "pve2.3-vm/", 10) == 0) {
- const char *vmid = key + 10;
+ } else if (strncmp(key, "pve2.3-vm/", 10) == 0 || strncmp(key, "pve-vm-", 7) == 0) {
- skip = 4;
+ const char *vmid = rrd_skip_data(key, 1, '/');
if (strchr(vmid, '/') != NULL) {
goto keyerror;
@@ -1247,29 +1269,29 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
+ skip = 4; // first 4 columns are live data that isn't archived
+
+ if (strncmp(key, "pve-vm-", 7) == 0) {
+ keep_columns = 10; // pve2.3-vm format uses 10 data columns
+ }
+
filename = g_strdup_printf(RRDDIR "/%s/%s", "pve2-vm", vmid);
if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
-
mkdir(RRDDIR "/pve2-vm", 0755);
int argcount = sizeof(rrd_def_vm) / sizeof(void *) - 1;
create_rrd_file(filename, argcount, rrd_def_vm);
}
- } else if (strncmp(key, "pve2-storage/", 13) == 0) {
- const char *node = key + 13;
+ } else if (strncmp(key, "pve2-storage/", 13) == 0 || strncmp(key, "pve-storage-", 12) == 0) {
+ const char *node = rrd_skip_data(key, 1, '/'); // will contain {node}/{storage}
- const char *storage = node;
- while (*storage && *storage != '/') {
- storage++;
- }
+ const char *storage = rrd_skip_data(node, 1, '/');
- if (*storage != '/' || ((storage - node) < 1)) {
+ if ((storage - node) < 1) {
goto keyerror;
}
- storage++;
-
if (strchr(storage, '/') != NULL) {
goto keyerror;
}
@@ -1278,12 +1300,10 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
- filename = g_strdup_printf(RRDDIR "/%s", key);
+ filename = g_strdup_printf(RRDDIR "/pve2-storage/%s", node);
if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
-
mkdir(RRDDIR "/pve2-storage", 0755);
-
char *dir = g_path_get_dirname(filename);
mkdir(dir, 0755);
g_free(dir);
@@ -1296,7 +1316,14 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
- const char *dp = skip ? rrd_skip_data(data, skip) : data;
+ const char *dp = skip ? rrd_skip_data(data, skip, ':') : data;
+
+ if (keep_columns) {
+ keep_columns++; // We specified the number of data columns we want earlier, but there is also
+ // the always-present timestamp column, so we need to skip one more column
+ char *cut = (char *)rrd_skip_data(dp, keep_columns, ':');
+ *(cut - 1) = 0; // terminate the string by replacing the separating colon with zero
+ }
const char *update_args[] = {dp, NULL};
--
2.39.5
* [pve-devel] applied: [PATCH cluster-pve8 v3 2/2] status: handle new metrics update data
2025-07-15 14:31 ` [pve-devel] [PATCH cluster-pve8 v3 2/2] status: handle new metrics update data Aaron Lauterer
@ 2025-07-16 22:32 ` Thomas Lamprecht
0 siblings, 0 replies; 59+ messages in thread
From: Thomas Lamprecht @ 2025-07-16 22:32 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
On 15.07.25 at 16:31, Aaron Lauterer wrote:
> For PVE9 we plan to add additional fields to the metrics that are
> collected and distributed in the cluster. The new fields/columns are
> added at the end of the current ones. This makes it possible for PVE8
> installations to still use them by cutting off the new additional data.
>
> To make it more future-proof, the format of the keys for each metric
> is changed:
>
> Old: pve{version}-{type}/{id}
> New: pve-{type}-{version}/{id}
>
> This way we have an easier time handling new versions in the future, as
> we initially only need to check for `pve-{type}-`. If we know the
> version, we can handle it accordingly, e.g. pad an older format with
> missing data. If we don't know the version, it must be a newer one, and
> we cut the data stream at the length we need for the current version.
>
> This means of course that, to avoid a breaking change, we can only add
> new columns if needed, but not remove any! But waiting for a breaking
> change until the next major release is a worthwhile trade-off if it
> allows us to expand the format in between if needed.
>
> Since the full keys were used for the final location within the RRD
> directory, we need to change that as well and set it manually to
> 'pve2-{type}' as the key we receive could be for a newer data format.
>
> The 'rrd_skip_data' function got a new parameter defining the separating
> character. This makes it possible to also use it to determine which part
> of the key string is the version/type and which one is the actual
> resource identifier.
>
> We drop the pve2-vm schema, as the newer pve2.3-vm was introduced
> with commit ba9dcfc1 back in 2013. By now there should be no cluster
> where an older node might still send the old pve2-vm schema.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> src/pmxcfs/status.c | 79 ++++++++++++++++++++++++++++++---------------
> 1 file changed, 53 insertions(+), 26 deletions(-)
>
>
applied, thanks!
* [pve-devel] [PATCH manager-pve8 v3 1/2] api2tools: drop old VM rrd schema
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
2025-07-15 14:31 ` [pve-devel] [PATCH cluster-pve8 v3 1/2] cfs status.c: drop old pve2-vm rrd schema support Aaron Lauterer
2025-07-15 14:31 ` [pve-devel] [PATCH cluster-pve8 v3 2/2] status: handle new metrics update data Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-16 22:32 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH manager-pve8 v3 2/2] api2tools: extract stats: handle existence of new pve-{type}-9.0 data Aaron Lauterer
` (32 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
pve2.3-vm was introduced with commit 3b6ad3ac back in 2013. By now
there should not be any combination of clustered nodes that still sends
the old pve2-vm variant.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
PVE/API2Tools.pm | 18 +-----------------
1 file changed, 1 insertion(+), 17 deletions(-)
diff --git a/PVE/API2Tools.pm b/PVE/API2Tools.pm
index d6154925..1e235c47 100644
--- a/PVE/API2Tools.pm
+++ b/PVE/API2Tools.pm
@@ -97,23 +97,7 @@ sub extract_vm_stats {
my $d;
- if ($d = $rrd->{"pve2-vm/$vmid"}) {
-
- $entry->{uptime} = ($d->[0] || 0) + 0;
- $entry->{name} = $d->[1];
- $entry->{status} = $entry->{uptime} ? 'running' : 'stopped';
- $entry->{maxcpu} = ($d->[3] || 0) + 0;
- $entry->{cpu} = ($d->[4] || 0) + 0;
- $entry->{maxmem} = ($d->[5] || 0) + 0;
- $entry->{mem} = ($d->[6] || 0) + 0;
- $entry->{maxdisk} = ($d->[7] || 0) + 0;
- $entry->{disk} = ($d->[8] || 0) + 0;
- $entry->{netin} = ($d->[9] || 0) + 0;
- $entry->{netout} = ($d->[10] || 0) + 0;
- $entry->{diskread} = ($d->[11] || 0) + 0;
- $entry->{diskwrite} = ($d->[12] || 0) + 0;
-
- } elsif ($d = $rrd->{"pve2.3-vm/$vmid"}) {
+ if ($d = $rrd->{"pve2.3-vm/$vmid"}) {
$entry->{uptime} = ($d->[0] || 0) + 0;
$entry->{name} = $d->[1];
--
2.39.5
* [pve-devel] [PATCH manager-pve8 v3 2/2] api2tools: extract stats: handle existence of new pve-{type}-9.0 data
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (2 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH manager-pve8 v3 1/2] api2tools: drop old VM rrd schema Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-16 22:32 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH pve9-rrd-migration-tool v3 1/1] introduce rrd migration tool for pve8 -> pve9 Aaron Lauterer
` (31 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
We add a new function to handle the different key names, as the call
sites would otherwise become quite unreadable.
It checks which key format exists for the given type and resource:
* the old pve2-{type} / pve2.3-vm
* the new pve-{type}-{version}
and returns the one that was found. Since there will only be one key
per resource, we can return on the first hit.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
RFC:
* switch from pve9- to pve-{type}-9.0 schema
PVE/API2Tools.pm | 28 ++++++++++++++++++++++++----
1 file changed, 24 insertions(+), 4 deletions(-)
diff --git a/PVE/API2Tools.pm b/PVE/API2Tools.pm
index 1e235c47..08548524 100644
--- a/PVE/API2Tools.pm
+++ b/PVE/API2Tools.pm
@@ -41,6 +41,24 @@ sub get_hwaddress {
return $hwaddress;
}
+# Each RRD key for a resource will only exist once, though the key format might differ. Therefore, return on the first hit.
+sub get_rrd_key {
+ my ($rrd, $type, $id) = @_;
+
+ # check for old formats: pve2-{type}/{id}. For VMs and CTs the version number differs from that for nodes and storages
+ if ($type ne "vm" && exists $rrd->{"pve2-${type}/${id}"}) {
+ return "pve2-${type}/${id}";
+ } elsif ($type eq "vm" && exists $rrd->{"pve2.3-${type}/${id}"}) {
+ return "pve2.3-${type}/${id}";
+ }
+
+ # if no old key has been found, we expect one in the newer format: pve-{type}-{version}/{id}
+ # We accept all new versions, as the expectation is that they are only allowed to add new columns as a non-breaking change
+ for my $k (keys %$rrd) {
+ return $k if $k =~ m/^pve-\Q${type}\E-\d\d?\.\d\/\Q${id}\E$/;
+ }
+}
+
sub extract_node_stats {
my ($node, $members, $rrd, $exclude_stats) = @_;
@@ -51,8 +69,8 @@ sub extract_node_stats {
status => 'unknown',
};
- if (my $d = $rrd->{"pve2-node/$node"}) {
-
+ my $key = get_rrd_key($rrd, "node", $node);
+ if (my $d = $rrd->{$key}) {
if (
!$members || # no cluster
($members->{$node} && $members->{$node}->{online})
@@ -96,8 +114,9 @@ sub extract_vm_stats {
};
my $d;
+ my $key = get_rrd_key($rrd, "vm", $vmid);
- if ($d = $rrd->{"pve2.3-vm/$vmid"}) {
+ if (my $d = $rrd->{$key}) {
$entry->{uptime} = ($d->[0] || 0) + 0;
$entry->{name} = $d->[1];
@@ -135,7 +154,8 @@ sub extract_storage_stats {
content => $content,
};
- if (my $d = $rrd->{"pve2-storage/$node/$storeid"}) {
+ my $key = get_rrd_key($rrd, "storage", "${node}/${storeid}");
+ if (my $d = $rrd->{$key}) {
$entry->{maxdisk} = ($d->[1] || 0) + 0;
$entry->{disk} = ($d->[2] || 0) + 0;
$entry->{status} = 'available';
--
2.39.5
* [pve-devel] [PATCH pve9-rrd-migration-tool v3 1/1] introduce rrd migration tool for pve8 -> pve9
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (3 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH manager-pve8 v3 2/2] api2tools: extract stats: handle existence of new pve-{type}-9.0 data Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-16 22:32 ` Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH cluster v3 1/4] cfs status.c: drop old pve2-vm rrd schema support Aaron Lauterer
` (30 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
.cargo/config.toml | 5 +
.gitignore | 5 +
Cargo.toml | 20 ++
build.rs | 29 +++
src/lib.rs | 5 +
src/main.rs | 502 ++++++++++++++++++++++++++++++++++++++++
src/parallel_handler.rs | 162 +++++++++++++
wrapper.h | 1 +
8 files changed, 729 insertions(+)
create mode 100644 .cargo/config.toml
create mode 100644 .gitignore
create mode 100644 Cargo.toml
create mode 100644 build.rs
create mode 100644 src/lib.rs
create mode 100644 src/main.rs
create mode 100644 src/parallel_handler.rs
create mode 100644 wrapper.h
diff --git a/.cargo/config.toml b/.cargo/config.toml
new file mode 100644
index 0000000..3b5b6e4
--- /dev/null
+++ b/.cargo/config.toml
@@ -0,0 +1,5 @@
+[source]
+[source.debian-packages]
+directory = "/usr/share/cargo/registry"
+[source.crates-io]
+replace-with = "debian-packages"
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..7741e63
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,5 @@
+./target
+./build
+
+Cargo.lock
+
diff --git a/Cargo.toml b/Cargo.toml
new file mode 100644
index 0000000..d3523f3
--- /dev/null
+++ b/Cargo.toml
@@ -0,0 +1,20 @@
+[package]
+name = "proxmox_rrd_migration_8-9"
+version = "0.1.0"
+edition = "2021"
+authors = [
+ "Aaron Lauterer <a.lauterer@proxmox.com>",
+ "Proxmox Support Team <support@proxmox.com>",
+]
+license = "AGPL-3"
+homepage = "https://www.proxmox.com"
+
+[dependencies]
+anyhow = "1.0.86"
+pico-args = "0.5.0"
+proxmox-async = "0.4"
+crossbeam-channel = "0.5"
+
+[build-dependencies]
+bindgen = "0.66.1"
+pkg-config = "0.3"
diff --git a/build.rs b/build.rs
new file mode 100644
index 0000000..56d07cc
--- /dev/null
+++ b/build.rs
@@ -0,0 +1,29 @@
+use std::env;
+use std::path::PathBuf;
+
+fn main() {
+ println!("cargo:rustc-link-lib=rrd");
+
+ println!("cargo:rerun-if-changed=wrapper.h");
+ // The bindgen::Builder is the main entry point
+ // to bindgen, and lets you build up options for
+ // the resulting bindings.
+
+ let bindings = bindgen::Builder::default()
+ // The input header we would like to generate
+ // bindings for.
+ .header("wrapper.h")
+ // Tell cargo to invalidate the built crate whenever any of the
+ // included header files changed.
+ .parse_callbacks(Box::new(bindgen::CargoCallbacks))
+ // Finish the builder and generate the bindings.
+ .generate()
+ // Unwrap the Result and panic on failure.
+ .expect("Unable to generate bindings");
+
+ // Write the bindings to the $OUT_DIR/bindings.rs file.
+ let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
+ bindings
+ .write_to_file(out_path.join("bindings.rs"))
+ .expect("Couldn't write bindings!");
+}
diff --git a/src/lib.rs b/src/lib.rs
new file mode 100644
index 0000000..a38a13a
--- /dev/null
+++ b/src/lib.rs
@@ -0,0 +1,5 @@
+#![allow(non_upper_case_globals)]
+#![allow(non_camel_case_types)]
+#![allow(non_snake_case)]
+
+include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
diff --git a/src/main.rs b/src/main.rs
new file mode 100644
index 0000000..6e6f91c
--- /dev/null
+++ b/src/main.rs
@@ -0,0 +1,502 @@
+use anyhow::{bail, Error, Result};
+use proxmox_rrd_migration_8_9::{rrd_clear_error, rrd_create_r2, rrd_get_context, rrd_get_error};
+use std::ffi::{CStr, CString, OsString};
+use std::fs;
+use std::os::unix::ffi::OsStrExt;
+use std::os::unix::fs::PermissionsExt;
+use std::path::PathBuf;
+use std::sync::Arc;
+
+use crate::parallel_handler::ParallelHandler;
+
+pub mod parallel_handler;
+
+const BASE_DIR: &str = "/var/lib/rrdcached/db";
+const SOURCE_SUBDIR_NODE: &str = "pve2-node";
+const SOURCE_SUBDIR_GUEST: &str = "pve2-vm";
+const SOURCE_SUBDIR_STORAGE: &str = "pve2-storage";
+const TARGET_SUBDIR_NODE: &str = "pve-node-9.0";
+const TARGET_SUBDIR_GUEST: &str = "pve-vm-9.0";
+const TARGET_SUBDIR_STORAGE: &str = "pve-storage-9.0";
+const MAX_THREADS: usize = 4;
+const RRD_STEP_SIZE: usize = 60;
+
+// RRAs are defined in the following way:
+//
+// RRA:CF:xff:step:rows
+// CF: AVERAGE or MAX
+// xff: 0.5
+// steps: stepsize is defined on rrd file creation! example: with 60 seconds step size:
+// e.g. 1 => 60 sec, 30 => 1800 seconds or 30 min
+// rows: how many aggregated rows are kept, as in how far back in time we store data
+//
+// how many seconds are aggregated per RRA: steps * stepsize * rows
+// how many hours are aggregated per RRA: steps * stepsize * rows / 3600
+// how many days are aggregated per RRA: steps * stepsize * rows / 3600 / 24
+// https://oss.oetiker.ch/rrdtool/tut/rrd-beginners.en.html#Understanding_by_an_example
+
+const RRD_VM_DEF: [&CStr; 25] = [
+ c"DS:maxcpu:GAUGE:120:0:U",
+ c"DS:cpu:GAUGE:120:0:U",
+ c"DS:maxmem:GAUGE:120:0:U",
+ c"DS:mem:GAUGE:120:0:U",
+ c"DS:maxdisk:GAUGE:120:0:U",
+ c"DS:disk:GAUGE:120:0:U",
+ c"DS:netin:DERIVE:120:0:U",
+ c"DS:netout:DERIVE:120:0:U",
+ c"DS:diskread:DERIVE:120:0:U",
+ c"DS:diskwrite:DERIVE:120:0:U",
+ c"DS:memhost:GAUGE:120:0:U",
+ c"DS:pressurecpusome:GAUGE:120:0:U",
+ c"DS:pressurecpufull:GAUGE:120:0:U",
+ c"DS:pressureiosome:GAUGE:120:0:U",
+ c"DS:pressureiofull:GAUGE:120:0:U",
+ c"DS:pressurememorysome:GAUGE:120:0:U",
+ c"DS:pressurememoryfull:GAUGE:120:0:U",
+ c"RRA:AVERAGE:0.5:1:1440", // 1 min * 1440 => 1 day
+ c"RRA:AVERAGE:0.5:30:1440", // 30 min * 1440 => 30 day
+ c"RRA:AVERAGE:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ c"RRA:AVERAGE:0.5:10080:570", // 1 week * 570 => ~10 years
+ c"RRA:MAX:0.5:1:1440", // 1 min * 1440 => 1 day
+ c"RRA:MAX:0.5:30:1440", // 30 min * 1440 => 30 day
+ c"RRA:MAX:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ c"RRA:MAX:0.5:10080:570", // 1 week * 570 => ~10 years
+];
+
+const RRD_NODE_DEF: [&CStr; 27] = [
+ c"DS:loadavg:GAUGE:120:0:U",
+ c"DS:maxcpu:GAUGE:120:0:U",
+ c"DS:cpu:GAUGE:120:0:U",
+ c"DS:iowait:GAUGE:120:0:U",
+ c"DS:memtotal:GAUGE:120:0:U",
+ c"DS:memused:GAUGE:120:0:U",
+ c"DS:swaptotal:GAUGE:120:0:U",
+ c"DS:swapused:GAUGE:120:0:U",
+ c"DS:roottotal:GAUGE:120:0:U",
+ c"DS:rootused:GAUGE:120:0:U",
+ c"DS:netin:DERIVE:120:0:U",
+ c"DS:netout:DERIVE:120:0:U",
+ c"DS:memfree:GAUGE:120:0:U",
+ c"DS:arcsize:GAUGE:120:0:U",
+ c"DS:pressurecpusome:GAUGE:120:0:U",
+ c"DS:pressureiosome:GAUGE:120:0:U",
+ c"DS:pressureiofull:GAUGE:120:0:U",
+ c"DS:pressurememorysome:GAUGE:120:0:U",
+ c"DS:pressurememoryfull:GAUGE:120:0:U",
+ c"RRA:AVERAGE:0.5:1:1440", // 1 min * 1440 => 1 day
+ c"RRA:AVERAGE:0.5:30:1440", // 30 min * 1440 => 30 day
+ c"RRA:AVERAGE:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ c"RRA:AVERAGE:0.5:10080:570", // 1 week * 570 => ~10 years
+ c"RRA:MAX:0.5:1:1440", // 1 min * 1440 => 1 day
+ c"RRA:MAX:0.5:30:1440", // 30 min * 1440 => 30 day
+ c"RRA:MAX:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ c"RRA:MAX:0.5:10080:570", // 1 week * 570 => ~10 years
+];
+
+const RRD_STORAGE_DEF: [&CStr; 10] = [
+ c"DS:total:GAUGE:120:0:U",
+ c"DS:used:GAUGE:120:0:U",
+ c"RRA:AVERAGE:0.5:1:1440", // 1 min * 1440 => 1 day
+ c"RRA:AVERAGE:0.5:30:1440", // 30 min * 1440 => 30 day
+ c"RRA:AVERAGE:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ c"RRA:AVERAGE:0.5:10080:570", // 1 week * 570 => ~10 years
+ c"RRA:MAX:0.5:1:1440", // 1 min * 1440 => 1 day
+ c"RRA:MAX:0.5:30:1440", // 30 min * 1440 => 30 day
+ c"RRA:MAX:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ c"RRA:MAX:0.5:10080:570", // 1 week * 570 => ~10 years
+];
+
+const HELP: &str = "\
+proxmox-rrd-migration tool
+
+Migrates existing RRD graph data to the new format.
+
+Use this only in the process of upgrading from Proxmox VE 8 to 9 according to the upgrade guide!
+
+USAGE:
+ proxmox-rrd-migration [OPTIONS]
+
+ FLAGS:
+ -h, --help Prints this help information
+
+ OPTIONS:
+ --force Migrate, even if the target already exists.
+ This will overwrite any migrated RRD files!
+
+ --threads THREADS Number of parallel threads. Defaults to between 1 and 4.
+
+ --test For internal use only.
+ Tests parallel guest migration only!
+ --source For internal use only. Source directory.
+ --target For internal use only. Target directory.
+ ";
+
+#[derive(Debug)]
+struct Args {
+ force: bool,
+ threads: Option<usize>,
+ test: bool,
+ source: Option<PathBuf>,
+ target: Option<PathBuf>,
+}
+
+fn parse_args() -> Result<Args, Error> {
+ let mut pargs = pico_args::Arguments::from_env();
+
+ // Help has a higher priority and should be handled separately.
+ if pargs.contains(["-h", "--help"]) {
+ print!("{}", HELP);
+ std::process::exit(0);
+ }
+
+ let mut args = Args {
+ threads: pargs.opt_value_from_str("--threads").unwrap(),
+ force: false,
+ test: false,
+ source: pargs.opt_value_from_str("--source").unwrap(),
+ target: pargs.opt_value_from_str("--target").unwrap(),
+ };
+
+ if pargs.contains("--test") {
+ args.test = true;
+ }
+ if pargs.contains("--force") {
+ args.force = true;
+ }
+
+ // It's up to the caller what to do with the remaining arguments.
+ let remaining = pargs.finish();
+ if !remaining.is_empty() {
+ bail!(format!("Warning: unused arguments left: {:?}", remaining));
+ }
+
+ Ok(args)
+}
+
+fn main() {
+ let args = match parse_args() {
+ Ok(v) => v,
+ Err(e) => {
+ eprintln!("Error: {}.", e);
+ std::process::exit(1);
+ }
+ };
+
+ let mut source_dir_guests: PathBuf = [BASE_DIR, SOURCE_SUBDIR_GUEST].iter().collect();
+ let mut target_dir_guests: PathBuf = [BASE_DIR, TARGET_SUBDIR_GUEST].iter().collect();
+ let source_dir_nodes: PathBuf = [BASE_DIR, SOURCE_SUBDIR_NODE].iter().collect();
+ let target_dir_nodes: PathBuf = [BASE_DIR, TARGET_SUBDIR_NODE].iter().collect();
+ let source_dir_storage: PathBuf = [BASE_DIR, SOURCE_SUBDIR_STORAGE].iter().collect();
+ let target_dir_storage: PathBuf = [BASE_DIR, TARGET_SUBDIR_STORAGE].iter().collect();
+
+ if args.test {
+ source_dir_guests = args.source.clone().unwrap();
+ target_dir_guests = args.target.clone().unwrap();
+ }
+
+ if !args.force && target_dir_guests.exists() {
+ eprintln!(
+ "Aborting! Target path for guests already exists. Use '--force' to still migrate. It will overwrite existing files!"
+ );
+ std::process::exit(1);
+ }
+ if !args.force && target_dir_nodes.exists() {
+ eprintln!(
+ "Aborting! Target path for nodes already exists. Use '--force' to still migrate. It will overwrite existing files!"
+ );
+ std::process::exit(1);
+ }
+ if !args.force && target_dir_storage.exists() {
+ eprintln!(
+ "Aborting! Target path for storages already exists. Use '--force' to still migrate. It will overwrite existing files!"
+ );
+ std::process::exit(1);
+ }
+
+ if !args.test {
+ if let Err(e) = migrate_nodes(source_dir_nodes, target_dir_nodes) {
+ eprintln!("Error migrating nodes: {}", e);
+ std::process::exit(1);
+ }
+ if let Err(e) = migrate_storage(source_dir_storage, target_dir_storage) {
+ eprintln!("Error migrating storage: {}", e);
+ std::process::exit(1);
+ }
+ }
+ if let Err(e) = migrate_guests(source_dir_guests, target_dir_guests, set_threads(&args)) {
+ eprintln!("Error migrating guests: {}", e);
+ std::process::exit(1);
+ }
+}
+
+/// Set number of threads
+///
+/// Either a fixed parameter or determining a range between 1 to 4 threads
+/// based on the number of CPU cores available in the system.
+fn set_threads(args: &Args) -> usize {
+ if args.threads.is_some() {
+ return args.threads.unwrap();
+ }
+ // check for a way to get physical cores and not threads?
+ let cpus: usize = String::from_utf8_lossy(
+ std::process::Command::new("nproc")
+ .output()
+ .expect("Error running nproc")
+ .stdout
+ .as_slice()
+ .trim_ascii(),
+ )
+ .parse::<usize>()
+ .expect("Could not parse nproc output");
+
+ if cpus < 32 {
+ let threads = cpus / 8;
+ if threads == 0 {
+ return 1;
+ }
+ return threads;
+ }
+ return MAX_THREADS;
+}
+
+/// Migrate guest RRD files
+///
+/// In parallel to speed up the process as most time is spent on converting the
+/// data to the new format.
+fn migrate_guests(
+ source_dir_guests: PathBuf,
+ target_dir_guests: PathBuf,
+ threads: usize,
+) -> Result<(), Error> {
+ println!("Migrating RRD data for guests…");
+ println!("Using {} thread(s)", threads);
+
+ let mut guest_source_files: Vec<(CString, OsString)> = Vec::new();
+
+ fs::read_dir(&source_dir_guests)?
+ .filter(|f| f.is_ok())
+ .map(|f| f.unwrap().path())
+ .filter(|f| f.is_file())
+ .for_each(|file| {
+ let path = CString::new(file.as_path().as_os_str().as_bytes())
+ .expect("Could not convert path to CString.");
+ let fname = file
+ .file_name()
+ .map(|v| v.to_os_string())
+ .expect("Could not convert fname to OsString.");
+ guest_source_files.push((path, fname))
+ });
+ if !target_dir_guests.exists() {
+ println!("Creating new directory: '{}'", target_dir_guests.display());
+ std::fs::create_dir(&target_dir_guests)?;
+ }
+
+ let total_guests = guest_source_files.len();
+ let guests = Arc::new(std::sync::atomic::AtomicUsize::new(0));
+ let guests2 = guests.clone();
+ let start_time = std::time::SystemTime::now();
+
+ let migration_pool = ParallelHandler::new(
+ "guest rrd migration",
+ threads,
+ move |(path, fname): (CString, OsString)| {
+ let mut source: [*const i8; 2] = [std::ptr::null(); 2];
+ source[0] = path.as_ptr();
+
+ let node_name = fname;
+ let mut target_path = target_dir_guests.clone();
+ target_path.push(node_name);
+
+ let target_path = CString::new(target_path.to_str().unwrap()).unwrap();
+
+ unsafe {
+ rrd_get_context();
+ rrd_clear_error();
+ let res = rrd_create_r2(
+ target_path.as_ptr(),
+ RRD_STEP_SIZE as u64,
+ 0,
+ 0,
+ source.as_mut_ptr(),
+ std::ptr::null(),
+ RRD_VM_DEF.len() as i32,
+ RRD_VM_DEF.map(|v| v.as_ptr()).as_mut_ptr(),
+ );
+ if res != 0 {
+ bail!(
+ "RRD create Error: {}",
+ CStr::from_ptr(rrd_get_error()).to_string_lossy()
+ );
+ }
+ }
+ let current_guests = guests2.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
+ if current_guests > 0 && current_guests % 200 == 0 {
+ println!("Migrated {} of {} guests", current_guests, total_guests);
+ }
+ Ok(())
+ },
+ );
+ let migration_channel = migration_pool.channel();
+
+ for file in guest_source_files {
+ let migration_channel = migration_channel.clone();
+ migration_channel.send(file)?;
+ }
+
+ drop(migration_channel);
+ migration_pool.complete()?;
+
+ let elapsed = start_time.elapsed()?.as_secs_f64();
+ let guests = guests.load(std::sync::atomic::Ordering::SeqCst);
+ println!("Migrated {} guests in {:.2}s", guests, elapsed);
+
+ Ok(())
+}
+
+/// Migrate node RRD files
+///
+/// Runs serially, as the number of nodes will not be high.
+fn migrate_nodes(source_dir_nodes: PathBuf, target_dir_nodes: PathBuf) -> Result<(), Error> {
+ println!("Migrating RRD data for nodes…");
+
+ if !target_dir_nodes.exists() {
+ println!("Creating new directory: '{}'", target_dir_nodes.display());
+ std::fs::create_dir(&target_dir_nodes)?;
+ }
+
+ let mut node_source_files: Vec<(CString, OsString)> = Vec::new();
+ fs::read_dir(&source_dir_nodes)?
+ .filter(|f| f.is_ok())
+ .map(|f| f.unwrap().path())
+ .filter(|f| f.is_file())
+ .for_each(|file| {
+ let path = CString::new(file.as_path().as_os_str().as_bytes())
+ .expect("Could not convert path to CString.");
+ let fname = file
+ .file_name()
+ .map(|v| v.to_os_string())
+ .expect("Could not convert fname to OsString.");
+ node_source_files.push((path, fname))
+ });
+
+ for file in node_source_files {
+ println!("Node: '{}'", PathBuf::from(file.1.clone()).display());
+ let mut source: [*const i8; 2] = [std::ptr::null(); 2];
+
+ source[0] = file.0.as_ptr();
+
+ let node_name = file.1;
+ let mut target_path = target_dir_nodes.clone();
+ target_path.push(node_name);
+
+ let target_path = CString::new(target_path.to_str().unwrap()).unwrap();
+
+ unsafe {
+ rrd_get_context();
+ rrd_clear_error();
+ let res = rrd_create_r2(
+ target_path.as_ptr(),
+ RRD_STEP_SIZE as u64,
+ 0,
+ 0,
+ source.as_mut_ptr(),
+ std::ptr::null(),
+ RRD_NODE_DEF.len() as i32,
+ RRD_NODE_DEF.map(|v| v.as_ptr()).as_mut_ptr(),
+ );
+ if res != 0 {
+ bail!(
+ "RRD create Error: {}",
+ CStr::from_ptr(rrd_get_error()).to_string_lossy()
+ );
+ }
+ }
+ }
+ println!("Migrated all nodes");
+
+ Ok(())
+}
+
+/// Migrate storage RRD files
+///
+/// Runs serially, as the number of storages will not be that high.
+fn migrate_storage(source_dir_storage: PathBuf, target_dir_storage: PathBuf) -> Result<(), Error> {
+ println!("Migrating RRD data for storages…");
+
+ if !target_dir_storage.exists() {
+ println!("Creating new directory: '{}'", target_dir_storage.display());
+ std::fs::create_dir(&target_dir_storage)?;
+ }
+
+ // storage has another layer of directories per node over which we need to iterate
+ fs::read_dir(&source_dir_storage)?
+ .filter(|f| f.is_ok())
+ .map(|f| f.unwrap().path())
+ .filter(|f| f.is_dir())
+ .try_for_each(|node| {
+ let mut storage_source_files: Vec<(CString, OsString)> = Vec::new();
+
+ let mut source_node_subdir = source_dir_storage.clone();
+ source_node_subdir.push(&node.file_name().unwrap());
+
+ let mut target_node_subdir = target_dir_storage.clone();
+ target_node_subdir.push(&node.file_name().unwrap());
+
+ fs::create_dir(target_node_subdir.as_path())?;
+ let metadata = target_node_subdir.metadata()?;
+ let mut permissions = metadata.permissions();
+ permissions.set_mode(0o755);
+ // set_mode only changes the in-memory value; apply it to the directory
+ fs::set_permissions(target_node_subdir.as_path(), permissions)?;
+
+ fs::read_dir(&source_node_subdir)?
+ .filter(|f| f.is_ok())
+ .map(|f| f.unwrap().path())
+ .filter(|f| f.is_file())
+ .for_each(|file| {
+ let path = CString::new(file.as_path().as_os_str().as_bytes())
+ .expect("Could not convert path to CString.");
+ let fname = file
+ .file_name()
+ .map(|v| v.to_os_string())
+ .expect("Could not convert fname to OsString.");
+ storage_source_files.push((path, fname))
+ });
+
+ for file in storage_source_files {
+ println!("Storage: '{}'", PathBuf::from(file.1.clone()).display());
+ let mut source: [*const i8; 2] = [std::ptr::null(); 2];
+
+ source[0] = file.0.as_ptr();
+
+ let node_name = file.1;
+ let mut target_path = target_node_subdir.clone();
+ target_path.push(node_name);
+
+ let target_path = CString::new(target_path.to_str().unwrap()).unwrap();
+
+ unsafe {
+ rrd_get_context();
+ rrd_clear_error();
+ let res = rrd_create_r2(
+ target_path.as_ptr(),
+ RRD_STEP_SIZE as u64,
+ 0,
+ 0,
+ source.as_mut_ptr(),
+ std::ptr::null(),
+ RRD_STORAGE_DEF.len() as i32,
+ RRD_STORAGE_DEF.map(|v| v.as_ptr()).as_mut_ptr(),
+ );
+ if res != 0 {
+ bail!(
+ "RRD create Error: {}",
+ CStr::from_ptr(rrd_get_error()).to_string_lossy()
+ );
+ }
+ }
+ }
+ Ok(())
+ })?;
+ println!("Migrated all storages");
+
+ Ok(())
+}
diff --git a/src/parallel_handler.rs b/src/parallel_handler.rs
new file mode 100644
index 0000000..787742a
--- /dev/null
+++ b/src/parallel_handler.rs
@@ -0,0 +1,162 @@
+//! A thread pool which runs a closure in parallel.
+
+use std::sync::{Arc, Mutex};
+use std::thread::JoinHandle;
+
+use anyhow::{Error, bail, format_err};
+use crossbeam_channel::{Sender, bounded};
+
+/// A handle to send data to the worker threads (implements Clone)
+pub struct SendHandle<I> {
+ input: Sender<I>,
+ abort: Arc<Mutex<Option<String>>>,
+}
+
+/// Returns the first error that happened, if any
+pub fn check_abort(abort: &Mutex<Option<String>>) -> Result<(), Error> {
+ let guard = abort.lock().unwrap();
+ if let Some(err_msg) = &*guard {
+ return Err(format_err!("{}", err_msg));
+ }
+ Ok(())
+}
+
+impl<I: Send> SendHandle<I> {
+ /// Send data to the worker threads
+ pub fn send(&self, input: I) -> Result<(), Error> {
+ check_abort(&self.abort)?;
+ match self.input.send(input) {
+ Ok(()) => Ok(()),
+ Err(_) => bail!("send failed - channel closed"),
+ }
+ }
+}
+
+/// A thread pool which runs the supplied closure
+///
+/// The send command sends data to the worker threads. If one handler
+/// returns an error, we mark the channel as failed and it is no
+/// longer possible to send data.
+///
+/// When done, the 'complete()' method needs to be called to check for
+/// outstanding errors.
+pub struct ParallelHandler<I> {
+ handles: Vec<JoinHandle<()>>,
+ name: String,
+ input: Option<SendHandle<I>>,
+}
+
+impl<I> Clone for SendHandle<I> {
+ fn clone(&self) -> Self {
+ Self {
+ input: self.input.clone(),
+ abort: Arc::clone(&self.abort),
+ }
+ }
+}
+
+impl<I: Send + 'static> ParallelHandler<I> {
+ /// Create a new thread pool, each thread processing incoming data
+ /// with 'handler_fn'.
+ pub fn new<F>(name: &str, threads: usize, handler_fn: F) -> Self
+ where
+ F: Fn(I) -> Result<(), Error> + Send + Clone + 'static,
+ {
+ let mut handles = Vec::new();
+ let (input_tx, input_rx) = bounded::<I>(threads);
+
+ let abort = Arc::new(Mutex::new(None));
+
+ for i in 0..threads {
+ let input_rx = input_rx.clone();
+ let abort = Arc::clone(&abort);
+ let handler_fn = handler_fn.clone();
+
+ handles.push(
+ std::thread::Builder::new()
+ .name(format!("{} ({})", name, i))
+ .spawn(move || {
+ loop {
+ let data = match input_rx.recv() {
+ Ok(data) => data,
+ Err(_) => return,
+ };
+ if let Err(err) = (handler_fn)(data) {
+ let mut guard = abort.lock().unwrap();
+ if guard.is_none() {
+ *guard = Some(err.to_string());
+ }
+ }
+ }
+ })
+ .unwrap(),
+ );
+ }
+ Self {
+ handles,
+ name: name.to_string(),
+ input: Some(SendHandle {
+ input: input_tx,
+ abort,
+ }),
+ }
+ }
+
+ /// Returns a cloneable channel to send data to the worker threads
+ pub fn channel(&self) -> SendHandle<I> {
+ self.input.as_ref().unwrap().clone()
+ }
+
+ /// Send data to the worker threads
+ pub fn send(&self, input: I) -> Result<(), Error> {
+ self.input.as_ref().unwrap().send(input)?;
+ Ok(())
+ }
+
+ /// Wait for worker threads to complete and check for errors
+ pub fn complete(mut self) -> Result<(), Error> {
+ let input = self.input.take().unwrap();
+ let abort = Arc::clone(&input.abort);
+ check_abort(&abort)?;
+ drop(input);
+
+ let msg_list = self.join_threads();
+
+ // an error might be encountered while waiting for the join
+ check_abort(&abort)?;
+
+ if msg_list.is_empty() {
+ return Ok(());
+ }
+ Err(format_err!("{}", msg_list.join("\n")))
+ }
+
+ fn join_threads(&mut self) -> Vec<String> {
+ let mut msg_list = Vec::new();
+
+ let mut i = 0;
+ while let Some(handle) = self.handles.pop() {
+ if let Err(panic) = handle.join() {
+ if let Some(panic_msg) = panic.downcast_ref::<&str>() {
+ msg_list.push(format!("thread {} ({i}) panicked: {panic_msg}", self.name));
+ } else if let Some(panic_msg) = panic.downcast_ref::<String>() {
+ msg_list.push(format!("thread {} ({i}) panicked: {panic_msg}", self.name));
+ } else {
+ msg_list.push(format!("thread {} ({i}) panicked", self.name));
+ }
+ }
+ i += 1;
+ }
+ msg_list
+ }
+}
+
+// Note: We make sure that all threads will be joined
+impl<I> Drop for ParallelHandler<I> {
+ fn drop(&mut self) {
+ drop(self.input.take());
+ while let Some(handle) = self.handles.pop() {
+ let _ = handle.join();
+ }
+ }
+}
diff --git a/wrapper.h b/wrapper.h
new file mode 100644
index 0000000..64d0aa6
--- /dev/null
+++ b/wrapper.h
@@ -0,0 +1 @@
+#include <rrd.h>
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 59+ messages in thread
* [pve-devel] [PATCH cluster v3 1/4] cfs status.c: drop old pve2-vm rrd schema support
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (4 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH pve9-rrd-migration-tool v3 1/1] introduce rrd migration tool for pve8 -> pve9 Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-16 22:32 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH cluster v3 2/4] status: handle new metrics update data Aaron Lauterer
` (29 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
The newer pve2.3-vm schema was introduced with commit ba9dcfc1 back
in 2013. By now there should be no cluster where an older node might
still send the old pve2-vm schema.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/pmxcfs/status.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index 0895e53..eba4e52 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -1234,16 +1234,10 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
create_rrd_file(filename, argcount, rrd_def_node);
}
- } else if ((strncmp(key, "pve2-vm/", 8) == 0) || (strncmp(key, "pve2.3-vm/", 10) == 0)) {
- const char *vmid;
+ } else if (strncmp(key, "pve2.3-vm/", 10) == 0) {
+ const char *vmid = key + 10;
- if (strncmp(key, "pve2-vm/", 8) == 0) {
- vmid = key + 8;
- skip = 2;
- } else {
- vmid = key + 10;
- skip = 4;
- }
+ skip = 4;
if (strchr(vmid, '/') != NULL) {
goto keyerror;
--
2.39.5
* [pve-devel] [PATCH cluster v3 2/4] status: handle new metrics update data
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (5 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH cluster v3 1/4] cfs status.c: drop old pve2-vm rrd schema support Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-16 22:32 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH cluster v3 3/4] status: introduce new pve-{type}- rrd and metric format Aaron Lauterer
` (28 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
For PVE9 we plan to add additional fields in the metrics that are
collected and distributed in the cluster. The new fields/columns are
added at the end of the current ones. This makes it possible for PVE8
installations to still use them by cutting the new additional data.
To make it more future proof, the format of the keys for each metric
is changed:
Old: pve{version}-{type}/{id}
New: pve-{type}-{version}/{id}
This way we have an easier time handling new versions in the future as
we initially only need to check for `pve-{type}-`. If we know the
version, we can handle it accordingly; e.g. pad an older format with
missing data. If we don't know the version, it must be a newer one and
we cut the data stream at the length we need for the current version.
This means of course that to avoid a breaking change, we can only add
new columns if needed, but not remove any! But waiting for a breaking
change until the next major release is a worthy trade-off if it allows
us to expand the format in between if needed.
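The old-vs-new key layout described above can be sketched as a small parser. This is a hypothetical stand-alone Rust sketch for illustration; the actual pmxcfs code is C and simply compares prefixes with strncmp:

```rust
// Illustrative sketch of the two key layouts (not the pmxcfs implementation):
//   Old: pve{version}-{type}/{id}   e.g. "pve2-node/node1"
//   New: pve-{type}-{version}/{id}  e.g. "pve-node-9.0/node1"

/// Returns (type, version, id) if the key matches either layout.
fn parse_key(key: &str) -> Option<(&str, &str, &str)> {
    let (prefix, id) = key.split_once('/')?;
    if let Some(rest) = prefix.strip_prefix("pve-") {
        // new format: everything after the last '-' is the version
        let (ty, version) = rest.rsplit_once('-')?;
        Some((ty, version, id))
    } else if let Some(rest) = prefix.strip_prefix("pve") {
        // old format: pve{version}-{type}
        let (version, ty) = rest.split_once('-')?;
        Some((ty, version, id))
    } else {
        None
    }
}

fn main() {
    assert_eq!(parse_key("pve2-node/node1"), Some(("node", "2", "node1")));
    assert_eq!(parse_key("pve2.3-vm/100"), Some(("vm", "2.3", "100")));
    assert_eq!(parse_key("pve-node-9.0/node1"), Some(("node", "9.0", "node1")));
    println!("ok");
}
```

Note how the new layout lets a receiver match on `pve-{type}-` first and only then look at the version, which is what makes the graceful cut/pad handling possible.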
Since the full keys were used for the final location within the RRD
directory, we need to change that as well and set it manually to
'pve2-{type}' as the key we receive could be for a newer data format.
The 'rrd_skip_data' function got a new parameter defining the separating
character. This then makes it possible to use it to determine which part
of the key string is the version/type and which one is the actual
resource identifier.
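The generalized helper can be sketched in Rust as follows (illustrative analogue only; the real function is the C rrd_skip_data changed in the patch below):

```rust
// Rust analogue of the generalized C helper `rrd_skip_data`: advance past
// `count` occurrences of `separator` and return the remainder of the string.
fn skip_data(data: &str, count: usize, separator: char) -> &str {
    let mut found = 0;
    for (i, c) in data.char_indices() {
        if c == separator {
            found += 1;
            if found == count {
                return &data[i + c.len_utf8()..];
            }
        }
    }
    // fewer separators than requested -> end of string (like the C version)
    if count == 0 { data } else { "" }
}

fn main() {
    // With ':' it skips leading live-data columns of a metric update ...
    assert_eq!(skip_data("uptime:status:1.5:2.0", 2, ':'), "1.5:2.0");
    // ... with '/' it splits the key prefix from the resource identifier.
    assert_eq!(skip_data("pve-node-9.0/node1", 1, '/'), "node1");
    println!("ok");
}
```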
We drop the pve2-vm schema as the newer pve2.3-vm has been introduced
with commit ba9dcfc1 back in 2013. By now there should be no cluster
where an older node might still send the old pve2-vm schema.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/pmxcfs/status.c | 79 ++++++++++++++++++++++++++++++---------------
1 file changed, 53 insertions(+), 26 deletions(-)
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index eba4e52..640540f 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -1185,16 +1185,33 @@ static void create_rrd_file(const char *filename, int argcount, const char *rrdd
}
}
-static inline const char *rrd_skip_data(const char *data, int count) {
+static inline const char *rrd_skip_data(const char *data, int count, char separator) {
int found = 0;
while (*data && found < count) {
- if (*data++ == ':') {
+ if (*data++ == separator) {
found++;
}
}
return data;
}
+// The key and subdirectory format used up until PVE8 is 'pve{version}-{type}/{id}' with version
+// being 2 or 2.3 for VMs. Starting with PVE9 it is 'pve-{type}-{version}/{id}'. Newer versions are
+// only allowed to append new columns to the data! Otherwise this would be a breaking change.
+//
+// Type can be: node, vm, storage
+//
+// Version is the version of PVE with which it was introduced, e.g.: 9.0, 9.2, 10.0.
+//
+// ID is the actual identifier of the item in question. E.g. node name, VMID or for storage it is
+// '{node}/{storage name}'
+//
+// This way, we can handle unknown new formats gracefully and cut the data at the expected
+// column for the currently understood format. Receiving older formats will still need special
+// checks to determine how much padding is needed.
+//
+// Should we ever plan to change existing columns, we need to introduce this as a breaking
+// change!
static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
g_return_if_fail(key != NULL);
g_return_if_fail(data != NULL);
@@ -1210,12 +1227,13 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
char *filename = NULL;
- int skip = 0;
+ int skip = 0; // columns to skip at beginning. They contain non-archivable data, like uptime,
+ // status, is guest a template and such.
+ int keep_columns = 0; // how many columns do we want to keep (after initial skip) in case we get
+ // more columns than needed from a newer format
- if (strncmp(key, "pve2-node/", 10) == 0) {
- const char *node = key + 10;
-
- skip = 2;
+ if (strncmp(key, "pve2-node/", 10) == 0 || strncmp(key, "pve-node-", 9) == 0) {
+ const char *node = rrd_skip_data(key, 1, '/');
if (strchr(node, '/') != NULL) {
goto keyerror;
@@ -1225,19 +1243,23 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
- filename = g_strdup_printf(RRDDIR "/%s", key);
+ skip = 2; // first two columns are live data that isn't archived
- if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
+ if (strncmp(key, "pve-node-", 9) == 0) {
+ keep_columns = 12; // pve2-node format uses 12 columns
+ }
+ filename = g_strdup_printf(RRDDIR "/pve2-node/%s", node);
+
+ if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
mkdir(RRDDIR "/pve2-node", 0755);
int argcount = sizeof(rrd_def_node) / sizeof(void *) - 1;
create_rrd_file(filename, argcount, rrd_def_node);
}
- } else if (strncmp(key, "pve2.3-vm/", 10) == 0) {
- const char *vmid = key + 10;
+ } else if (strncmp(key, "pve2.3-vm/", 10) == 0 || strncmp(key, "pve-vm-", 7) == 0) {
- skip = 4;
+ const char *vmid = rrd_skip_data(key, 1, '/');
if (strchr(vmid, '/') != NULL) {
goto keyerror;
@@ -1247,29 +1269,29 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
+ skip = 4; // first 4 columns are live data that isn't archived
+
+ if (strncmp(key, "pve-vm-", 7) == 0) {
+ keep_columns = 10; // pve2.3-vm format uses 10 data columns
+ }
+
filename = g_strdup_printf(RRDDIR "/%s/%s", "pve2-vm", vmid);
if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
-
mkdir(RRDDIR "/pve2-vm", 0755);
int argcount = sizeof(rrd_def_vm) / sizeof(void *) - 1;
create_rrd_file(filename, argcount, rrd_def_vm);
}
- } else if (strncmp(key, "pve2-storage/", 13) == 0) {
- const char *node = key + 13;
+ } else if (strncmp(key, "pve2-storage/", 13) == 0 || strncmp(key, "pve-storage-", 12) == 0) {
+ const char *node = rrd_skip_data(key, 1, '/'); // will contain {node}/{storage}
- const char *storage = node;
- while (*storage && *storage != '/') {
- storage++;
- }
+ const char *storage = rrd_skip_data(node, 1, '/');
- if (*storage != '/' || ((storage - node) < 1)) {
+ if ((storage - node) < 1) {
goto keyerror;
}
- storage++;
-
if (strchr(storage, '/') != NULL) {
goto keyerror;
}
@@ -1278,12 +1300,10 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
- filename = g_strdup_printf(RRDDIR "/%s", key);
+ filename = g_strdup_printf(RRDDIR "/pve2-storage/%s", node);
if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
-
mkdir(RRDDIR "/pve2-storage", 0755);
-
char *dir = g_path_get_dirname(filename);
mkdir(dir, 0755);
g_free(dir);
@@ -1296,7 +1316,14 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
- const char *dp = skip ? rrd_skip_data(data, skip) : data;
+ const char *dp = skip ? rrd_skip_data(data, skip, ':') : data;
+
+ if (keep_columns) {
+ keep_columns++; // We specify the number of columns we want earlier, but we also have the
+ // always present timestamp column, so we need to skip one more column
+ char *cut = (char *)rrd_skip_data(dp, keep_columns, ':');
+ *(cut - 1) = 0; // terminate the string by replacing the field-separator colon with zero.
+ }
const char *update_args[] = {dp, NULL};
--
2.39.5
* [pve-devel] applied: [PATCH cluster v3 2/4] status: handle new metrics update data
2025-07-15 14:31 ` [pve-devel] [PATCH cluster v3 2/4] status: handle new metrics update data Aaron Lauterer
@ 2025-07-16 22:32 ` Thomas Lamprecht
0 siblings, 0 replies; 59+ messages in thread
From: Thomas Lamprecht @ 2025-07-16 22:32 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
Am 15.07.25 um 16:31 schrieb Aaron Lauterer:
> For PVE9 we plan to add additional fields in the metrics that are
> collected and distributed in the cluster. The new fields/columns are
> added at the end of the current ones. This makes it possible for PVE8
> installations to still use them by cutting the new additional data.
>
> To make it more future proof, the format of the keys for each metric
> is changed:
>
> Old: pve{version}-{type}/{id}
> New: pve-{type}-{version}/{id}
>
> This way we have an easier time to handle new versions in the future as
> we initially only need to check for `pve-{type}-`. If we know the
> version, we can handle it accordingly; e.g. pad if older format with
> missing data. If we don't know the version, it must be a newer one and
> we cut the data stream at the length we need for the current version.
>
> This means of course that to avoid a breaking change, we can only add
> new columns if needed, but not remove any! But waiting for a breaking
> change until the next major release is a worthy trade-off if it allows
> us to expand the format in between if needed.
>
> Since the full keys were used for the final location within the RRD
> directory, we need to change that as well and set it manually to
> 'pve2-{type}' as the key we receive could be for a newer data format.
>
> The 'rrd_skip_data' function got a new parameter defining the separating
> character. This then makes it possible to use it to determine which part
> of the key string is the version/type and which one is the actual
> resource identifier.
>
> We drop the pve2-vm schema as the newer pve2.3-vm has been introduced
> with commit ba9dcfc1 back in 2013. By now there should be no cluster
> where an older node might still send the old pve2-vm schema.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> src/pmxcfs/status.c | 79 ++++++++++++++++++++++++++++++---------------
> 1 file changed, 53 insertions(+), 26 deletions(-)
>
>
applied, thanks!
* [pve-devel] [PATCH cluster v3 3/4] status: introduce new pve-{type}- rrd and metric format
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (6 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH cluster v3 2/4] status: handle new metrics update data Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-15 14:31 ` [pve-devel] [PATCH cluster v3 4/4] rrd: adapt to new RRD format with different aggregation windows Aaron Lauterer
` (27 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
With PVE9 now we have additional fields in the metrics that are
collected and distributed in the cluster. The new fields/columns are
added at the end of the existing ones. This makes it possible for PVE8
installations to still use them by cutting the new additional data.
To make it more future proof, the format of the keys for each metric
is now changed:
Old pre PVE9: pve{version}-{type}/{id}
Now with PVE9: pve-{type}-{version}/{id}
This way we have an easier time handling new versions in the future as
we initially only need to check for `pve-{type}-`. If we know the
version, we can handle it accordingly; e.g. pad an older format with
missing data. If we don't know the version, it must be a newer one and
we cut the data stream at the length we need for the current version.
This means of course that to avoid a breaking change, we can only add
new columns if needed, but not remove any! But waiting for a breaking
change until the next major release is a worthy trade-off if it allows
us to expand the format in between if needed.
The 'rrd_skip_data' function got a new parameter defining the separating
character. This then makes it possible to use it also to determine which
part of the key string is the version/type and which one is the actual
resource identifier.
We add several new columns to node and VM (guest) RRDs. See further
down for details. Additionally we change the RRA definitions on how we
aggregate the data to match how we do it for the Proxmox Backup Server
[0].
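The aggregation windows used in the new RRA definitions can be sanity-checked with a bit of arithmetic (illustrative sketch; assumes the 60 second step size with which pmxcfs creates the RRD files):

```rust
// Sanity-check of the RRA window arithmetic in the new definitions.
// With a 60 second step size, an "RRA:...:<steps>:<rows>" definition
// covers steps * rows * 60 seconds in total.
const STEP_SECS: u64 = 60;

/// Days covered by an RRA with `steps` steps per row and `rows` rows.
fn span_days(steps: u64, rows: u64) -> u64 {
    steps * rows * STEP_SECS / 86_400
}

fn main() {
    assert_eq!(span_days(1, 1440), 1); // 1 min * 1440 => 1 day
    assert_eq!(span_days(30, 1440), 30); // 30 min * 1440 => 30 days
    assert_eq!(span_days(360, 1440), 360); // 6 hours * 1440 => ~1 year
    assert_eq!(span_days(10080, 570), 3990); // 1 week * 570 => ~10.9 years
    println!("ok");
}
```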
The migration of an existing installation is handled by a dedicated
tool. Only once that has happened will we store data in the new
format.
This leaves us with a few cases to handle:
data recv →   | old                       | new
↓ rrd files   |                           |
--------------|---------------------------|-------------------------------------
none          | check if directories exist:
              | neither old nor new -> new
              | new -> new
              | old only -> old
--------------|---------------------------|-------------------------------------
only old      | use old file as is        | cut new columns and use old file
--------------|---------------------------|-------------------------------------
new present   | pad data to match new fmt | use new file as is and pass data
To handle the padding we use a buffer. Cutting can be handled as we
already do it in the stable/bookworm (PVE8) branch by introducing a null
terminator in the original string at the end of the expected columns.
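The cut-and-pad handling can be sketched like this (illustrative Rust sketch only; the C code cuts in place by writing a NUL over the separator and pads via a static buffer):

```rust
// Illustrative sketch of the pad/cut handling described above (not the
// actual C implementation, which works in place on the received string).

/// Cut a colon-separated data string down to the first `keep` columns,
/// dropping columns a newer format appended at the end.
fn cut_columns(data: &str, keep: usize) -> String {
    data.split(':').take(keep).collect::<Vec<_>>().join(":")
}

/// Pad a data string from an older format with `n` unknown ("U") columns,
/// as RRD expects a value for every defined data source.
fn pad_columns(data: &str, n: usize) -> String {
    let mut out = String::from(data);
    for _ in 0..n {
        out.push_str(":U");
    }
    out
}

fn main() {
    // A newer sender includes extra columns -> cut to what we understand.
    assert_eq!(cut_columns("1752000000:1:2:3:4:5", 4), "1752000000:1:2:3");
    // An older sender is missing columns -> pad with "U" (unknown).
    assert_eq!(pad_columns("1752000000:1:2", 2), "1752000000:1:2:U:U");
    println!("ok");
}
```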
We add the following new columns:
Nodes:
* memfree
* arcsize
* pressures:
* cpu some
* io some
* io full
* mem some
* mem full
VMs:
* memhost (memory consumption of all processes in the guests cgroup, host view)
* pressures:
* cpu some
* cpu full
* io some
* io full
* mem some
* mem full
[0] https://git.proxmox.com/?p=proxmox-backup.git;a=blob;f=src/server/metric_collection/rrd.rs;h=ed39cc94ee056924b7adbc21b84c0209478bcf42;hb=dc324716a688a67d700fa133725740ac5d3795ce#l76
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/pmxcfs/status.c | 261 +++++++++++++++++++++++++++++++++++++++-----
1 file changed, 236 insertions(+), 25 deletions(-)
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index 640540f..5ecda12 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -1096,6 +1096,9 @@ kventry_hash_set(GHashTable *kvhash, const char *key, gconstpointer data, size_t
return TRUE;
}
+// We create the RRD files with a 60 second stepsize, therefore, RRA timesteps
+// are always per 60 seconds. These 60 seconds usually show up in other
+// code paths where we interact with RRD data!
static const char *rrd_def_node[] = {
"DS:loadavg:GAUGE:120:0:U",
"DS:maxcpu:GAUGE:120:0:U",
@@ -1124,6 +1127,39 @@ static const char *rrd_def_node[] = {
NULL,
};
+static const char *rrd_def_node_pve9_0[] = {
+ "DS:loadavg:GAUGE:120:0:U",
+ "DS:maxcpu:GAUGE:120:0:U",
+ "DS:cpu:GAUGE:120:0:U",
+ "DS:iowait:GAUGE:120:0:U",
+ "DS:memtotal:GAUGE:120:0:U",
+ "DS:memused:GAUGE:120:0:U",
+ "DS:swaptotal:GAUGE:120:0:U",
+ "DS:swapused:GAUGE:120:0:U",
+ "DS:roottotal:GAUGE:120:0:U",
+ "DS:rootused:GAUGE:120:0:U",
+ "DS:netin:DERIVE:120:0:U",
+ "DS:netout:DERIVE:120:0:U",
+ "DS:memfree:GAUGE:120:0:U",
+ "DS:arcsize:GAUGE:120:0:U",
+ "DS:pressurecpusome:GAUGE:120:0:U",
+ "DS:pressureiosome:GAUGE:120:0:U",
+ "DS:pressureiofull:GAUGE:120:0:U",
+ "DS:pressurememorysome:GAUGE:120:0:U",
+ "DS:pressurememoryfull:GAUGE:120:0:U",
+
+ "RRA:AVERAGE:0.5:1:1440", // 1 min * 1440 => 1 day
+ "RRA:AVERAGE:0.5:30:1440", // 30 min * 1440 => 30 day
+ "RRA:AVERAGE:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ "RRA:AVERAGE:0.5:10080:570", // 1 week * 570 => ~10 years
+
+ "RRA:MAX:0.5:1:1440", // 1 min * 1440 => 1 day
+ "RRA:MAX:0.5:30:1440", // 30 min * 1440 => 30 day
+ "RRA:MAX:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ "RRA:MAX:0.5:10080:570", // 1 week * 570 => ~10 years
+ NULL,
+};
+
static const char *rrd_def_vm[] = {
"DS:maxcpu:GAUGE:120:0:U",
"DS:cpu:GAUGE:120:0:U",
@@ -1149,6 +1185,36 @@ static const char *rrd_def_vm[] = {
"RRA:MAX:0.5:10080:70", // 7 day max - one year
NULL,
};
+static const char *rrd_def_vm_pve9_0[] = {
+ "DS:maxcpu:GAUGE:120:0:U",
+ "DS:cpu:GAUGE:120:0:U",
+ "DS:maxmem:GAUGE:120:0:U",
+ "DS:mem:GAUGE:120:0:U",
+ "DS:maxdisk:GAUGE:120:0:U",
+ "DS:disk:GAUGE:120:0:U",
+ "DS:netin:DERIVE:120:0:U",
+ "DS:netout:DERIVE:120:0:U",
+ "DS:diskread:DERIVE:120:0:U",
+ "DS:diskwrite:DERIVE:120:0:U",
+ "DS:memhost:GAUGE:120:0:U",
+ "DS:pressurecpusome:GAUGE:120:0:U",
+ "DS:pressurecpufull:GAUGE:120:0:U",
+ "DS:pressureiosome:GAUGE:120:0:U",
+ "DS:pressureiofull:GAUGE:120:0:U",
+ "DS:pressurememorysome:GAUGE:120:0:U",
+ "DS:pressurememoryfull:GAUGE:120:0:U",
+
+ "RRA:AVERAGE:0.5:1:1440", // 1 min * 1440 => 1 day
+ "RRA:AVERAGE:0.5:30:1440", // 30 min * 1440 => 30 day
+ "RRA:AVERAGE:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ "RRA:AVERAGE:0.5:10080:570", // 1 week * 570 => ~10 years
+
+ "RRA:MAX:0.5:1:1440", // 1 min * 1440 => 1 day
+ "RRA:MAX:0.5:30:1440", // 30 min * 1440 => 30 day
+ "RRA:MAX:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ "RRA:MAX:0.5:10080:570", // 1 week * 570 => ~10 years
+ NULL,
+};
static const char *rrd_def_storage[] = {
"DS:total:GAUGE:120:0:U",
@@ -1168,8 +1234,30 @@ static const char *rrd_def_storage[] = {
NULL,
};
+static const char *rrd_def_storage_pve9_0[] = {
+ "DS:total:GAUGE:120:0:U",
+ "DS:used:GAUGE:120:0:U",
+
+ "RRA:AVERAGE:0.5:1:1440", // 1 min * 1440 => 1 day
+ "RRA:AVERAGE:0.5:30:1440", // 30 min * 1440 => 30 day
+ "RRA:AVERAGE:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ "RRA:AVERAGE:0.5:10080:570", // 1 week * 570 => ~10 years
+
+ "RRA:MAX:0.5:1:1440", // 1 min * 1440 => 1 day
+ "RRA:MAX:0.5:30:1440", // 30 min * 1440 => 30 day
+ "RRA:MAX:0.5:360:1440", // 6 hours * 1440 => 360 day ~1 year
+ "RRA:MAX:0.5:10080:570", // 1 week * 570 => ~10 years
+ NULL,
+};
+
#define RRDDIR "/var/lib/rrdcached/db"
+// A 4k buffer should be plenty to temporarily store RRD data. 64 bit integers are 20 chars long,
+// plus the separator char: (4096-1)/21 ~ 195 columns. This buffer is only used in the
+// `update_rrd_data` function. It is safe to use as the calling sites get the global mutex:
+// rrd_update_data -> rrdentry_hash_set -> cfs_status_set / and cfs_kvstore_node_set
+static char rrd_format_update_buffer[4096];
+
static void create_rrd_file(const char *filename, int argcount, const char *rrddef[]) {
/* start at day boundary */
time_t ctime;
@@ -1229,6 +1317,8 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
int skip = 0; // columns to skip at beginning. They contain non-archivable data, like uptime,
// status, is guest a template and such.
+ int padding = 0; // how many columns need to be added with "U" if we get an old format that is
+ // missing columns at the end.
int keep_columns = 0; // how many columns do we want to keep (after initial skip) in case we get
// more columns than needed from a newer format
@@ -1243,20 +1333,60 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
- skip = 2; // first two columns are live data that isn't archived
+ filename = g_strdup_printf(RRDDIR "/pve-node-9.0/%s", node);
+ char *filename_pve2 = g_strdup_printf(RRDDIR "/pve2-node/%s", node);
- if (strncmp(key, "pve-node-", 9) == 0) {
- keep_columns = 12; // pve2-node format uses 12 columns
+ int use_pve2_file = 0;
+
+ // check existing rrd files and directories
+ if (g_file_test(filename, G_FILE_TEST_EXISTS)) {
+ // pve-node-9.0 file exists, we use that
+ // TODO: restructure conditions so that we do not have this empty branch
+ } else if (g_file_test(filename_pve2, G_FILE_TEST_EXISTS)) {
+ // old file exists, use it
+ use_pve2_file = 1;
+ filename = g_strdup_printf("%s", filename_pve2);
+ } else {
+ // neither file exists, check for directories to decide and create file
+ char *dir_pve2 = g_strdup_printf(RRDDIR "/pve2-node");
+ char *dir_pve90 = g_strdup_printf(RRDDIR "/pve-node-9.0");
+
+ if (g_file_test(dir_pve90, G_FILE_TEST_IS_DIR)) {
+
+ int argcount = sizeof(rrd_def_node_pve9_0) / sizeof(void *) - 1;
+ create_rrd_file(filename, argcount, rrd_def_node_pve9_0);
+ } else if (g_file_test(dir_pve2, G_FILE_TEST_IS_DIR)) {
+ use_pve2_file = 1;
+
+ filename = g_strdup_printf("%s", filename_pve2);
+
+ int argcount = sizeof(rrd_def_node) / sizeof(void *) - 1;
+ create_rrd_file(filename, argcount, rrd_def_node);
+ } else {
+ // no dir exists yet, use new pve-node-9.0
+ mkdir(RRDDIR "/pve-node-9.0", 0755);
+
+ int argcount = sizeof(rrd_def_node_pve9_0) / sizeof(void *) - 1;
+ create_rrd_file(filename, argcount, rrd_def_node_pve9_0);
+ }
+ g_free(dir_pve2);
+ g_free(dir_pve90);
}
- filename = g_strdup_printf(RRDDIR "/pve2-node/%s", node);
+ skip = 2; // first two columns are live data that isn't archived
- if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
- mkdir(RRDDIR "/pve2-node", 0755);
- int argcount = sizeof(rrd_def_node) / sizeof(void *) - 1;
- create_rrd_file(filename, argcount, rrd_def_node);
+ if (strncmp(key, "pve2-node/", 10) == 0 && !use_pve2_file) {
+ padding = 7; // pve-node-9.0 has 7 more columns than pve2-node
+ } else if (strncmp(key, "pve-node-", 9) == 0 && use_pve2_file) {
+ keep_columns = 12; // pve2-node format uses 12 columns
+ } else if (strncmp(key, "pve-node-9.0/", 13) != 0) {
+ // we received an unknown format; the expectation is that it is newer and has more
+ // columns than we can currently handle
+ keep_columns = 19; // pve-node-9.0 format uses 19 columns
}
+ g_free(filename_pve2);
+
} else if (strncmp(key, "pve2.3-vm/", 10) == 0 || strncmp(key, "pve-vm-", 7) == 0) {
const char *vmid = rrd_skip_data(key, 1, '/');
@@ -1269,20 +1399,60 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
- skip = 4; // first 4 columns are live data that isn't archived
+ filename = g_strdup_printf(RRDDIR "/pve-vm-9.0/%s", vmid);
+ char *filename_pve2 = g_strdup_printf(RRDDIR "/%s/%s", "pve2-vm", vmid);
+
+ int use_pve2_file = 0;
+
+ // check existing rrd files and directories
+ if (g_file_test(filename, G_FILE_TEST_EXISTS)) {
+ // pve-vm-9.0 file exists, we use that
+ // TODO: restructure the conditions so that we do not need this empty branch
+ } else if (g_file_test(filename_pve2, G_FILE_TEST_EXISTS)) {
+ // old file exists, use it
+ use_pve2_file = 1;
+ filename = g_strdup_printf("%s", filename_pve2);
+ } else {
+ // neither file exists, check for directories to decide and create file
+ char *dir_pve2 = g_strdup_printf(RRDDIR "/pve2-vm");
+ char *dir_pve90 = g_strdup_printf(RRDDIR "/pve-vm-9.0");
+
+ if (g_file_test(dir_pve90, G_FILE_TEST_IS_DIR)) {
- if (strncmp(key, "pve-vm-", 7) == 0) {
- keep_columns = 10; // pve2.3-vm format uses 10 data columns
+ int argcount = sizeof(rrd_def_vm_pve9_0) / sizeof(void *) - 1;
+ create_rrd_file(filename, argcount, rrd_def_vm_pve9_0);
+ } else if (g_file_test(dir_pve2, G_FILE_TEST_IS_DIR)) {
+ use_pve2_file = 1;
+
+ filename = g_strdup_printf("%s", filename_pve2);
+
+ int argcount = sizeof(rrd_def_vm) / sizeof(void *) - 1;
+ create_rrd_file(filename, argcount, rrd_def_vm);
+ } else {
+ // no dir exists yet, use new pve-vm-9.0
+ mkdir(RRDDIR "/pve-vm-9.0", 0755);
+
+ int argcount = sizeof(rrd_def_vm_pve9_0) / sizeof(void *) - 1;
+ create_rrd_file(filename, argcount, rrd_def_vm_pve9_0);
+ }
+ g_free(dir_pve2);
+ g_free(dir_pve90);
}
- filename = g_strdup_printf(RRDDIR "/%s/%s", "pve2-vm", vmid);
+ skip = 4; // first 4 columns are live data that isn't archived
- if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
- mkdir(RRDDIR "/pve2-vm", 0755);
- int argcount = sizeof(rrd_def_vm) / sizeof(void *) - 1;
- create_rrd_file(filename, argcount, rrd_def_vm);
+ if (strncmp(key, "pve2.3-vm/", 10) == 0 && !use_pve2_file) {
+ padding = 7; // pve-vm-9.0 has 7 more columns than pve2.3-vm
+ } else if (strncmp(key, "pve-vm-", 7) == 0 && use_pve2_file) {
+ keep_columns = 10; // pve2.3-vm format uses 10 columns
+ } else if (strncmp(key, "pve-vm-9.0/", 11) != 0) {
+ // we received an unknown format; the expectation is that it is newer and has more
+ // columns than we can currently handle
+ keep_columns = 17; // pve-vm-9.0 format uses 17 columns
}
+ g_free(filename_pve2);
+
} else if (strncmp(key, "pve2-storage/", 13) == 0 || strncmp(key, "pve-storage-", 12) == 0) {
const char *node = rrd_skip_data(key, 1, '/'); // will contain {node}/{storage}
@@ -1300,18 +1470,50 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
goto keyerror;
}
- filename = g_strdup_printf(RRDDIR "/pve2-storage/%s", node);
+ filename = g_strdup_printf(RRDDIR "/pve-storage-9.0/%s", node);
+ char *filename_pve2 = g_strdup_printf(RRDDIR "/%s/%s", "pve2-storage", node);
+
+ // check existing rrd files and directories
+ if (g_file_test(filename, G_FILE_TEST_EXISTS)) {
+ // pve-storage-9.0 file exists, we use that
+ // TODO: restructure the conditions so that we do not need this empty branch
+ } else if (g_file_test(filename_pve2, G_FILE_TEST_EXISTS)) {
+ // old file exists, use it
+ filename = g_strdup_printf("%s", filename_pve2);
+ } else {
+ // neither file exists, check for directories to decide and create file
+ char *dir_pve2 = g_strdup_printf(RRDDIR "/pve2-storage");
+ char *dir_pve90 = g_strdup_printf(RRDDIR "/pve-storage-9.0");
+
+ if (g_file_test(dir_pve90, G_FILE_TEST_IS_DIR)) {
+
+ int argcount = sizeof(rrd_def_storage_pve9_0) / sizeof(void *) - 1;
+ create_rrd_file(filename, argcount, rrd_def_storage_pve9_0);
+ } else if (g_file_test(dir_pve2, G_FILE_TEST_IS_DIR)) {
+ filename = g_strdup_printf("%s", filename_pve2);
- if (!g_file_test(filename, G_FILE_TEST_EXISTS)) {
- mkdir(RRDDIR "/pve2-storage", 0755);
- char *dir = g_path_get_dirname(filename);
- mkdir(dir, 0755);
- g_free(dir);
+ int argcount = sizeof(rrd_def_storage) / sizeof(void *) - 1;
+ create_rrd_file(filename, argcount, rrd_def_storage);
+ } else {
+ // no dir exists yet, use new pve-storage-9.0
+ mkdir(RRDDIR "/pve-storage-9.0", 0755);
- int argcount = sizeof(rrd_def_storage) / sizeof(void *) - 1;
- create_rrd_file(filename, argcount, rrd_def_storage);
+ int argcount = sizeof(rrd_def_storage_pve9_0) / sizeof(void *) - 1;
+ create_rrd_file(filename, argcount, rrd_def_storage_pve9_0);
+ }
+ g_free(dir_pve2);
+ g_free(dir_pve90);
}
+ // actual data columns didn't change between pve2-storage and pve-storage-9.0
+ if (strncmp(key, "pve-storage-", 12) == 0 && strncmp(key, "pve-storage-9.0/", 16) != 0) {
+ // we received an unknown format; the expectation is that it is newer and has more
+ // columns than we can currently handle
+ keep_columns = 2; // pve-storage-9.0 format uses 2 columns
+ }
+
+ g_free(filename_pve2);
+
} else {
goto keyerror;
}
@@ -1325,7 +1527,16 @@ static void update_rrd_data(const char *key, gconstpointer data, size_t len) {
*(cut - 1) = 0; // terminate string by replacing colon from field separator with zero.
}
- const char *update_args[] = {dp, NULL};
+ const char *update_args[] = {NULL, NULL};
+ if (padding) {
+ // add padding "U" columns to data string
+ char *padsrc =
+ ":U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U:U"; // can pad up to 25 columns
+ g_snprintf(rrd_format_update_buffer, 1024 * 4, "%s%.*s", dp, padding * 2, padsrc);
+ update_args[0] = rrd_format_update_buffer;
+ } else {
+ update_args[0] = dp;
+ }
if (use_daemon) {
int status;
--
2.39.5
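[Editorial note: to make the padding step in the last hunk more concrete, here is a rough Python sketch. It is an illustration only: `pad_update_string` is a hypothetical helper name, while the C code above uses a fixed `padsrc` literal and copies `padding * 2` bytes of it into the shared 4k buffer via `g_snprintf`.]

```python
RRD_BUFFER_SIZE = 4096  # mirrors sizeof(rrd_format_update_buffer)

def pad_update_string(dp: str, padding: int) -> str:
    # Append one ":U" (RRDtool's "unknown" value) per missing column so an
    # update string from an old schema matches the new, wider RRD file.
    padded = dp + ":U" * padding
    if len(padded) + 1 > RRD_BUFFER_SIZE:  # +1 for the terminating NUL in C
        raise ValueError("update string would overflow the 4k buffer")
    return padded
```

With `padding = 7` (pve2-node data going into a pve-node-9.0 file) this turns a string like `ts:v1:...:v12` into `ts:v1:...:v12:U:U:U:U:U:U:U`.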
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 59+ messages in thread
* [pve-devel] [PATCH cluster v3 4/4] rrd: adapt to new RRD format with different aggregation windows
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (7 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH cluster v3 3/4] status: introduce new pve-{type}- rrd and metric format Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-15 14:31 ` [pve-devel] [PATCH common v3 1/2] fix error in pressure parsing Aaron Lauterer
` (26 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
With PVE9 we introduced a new RRD format that has different aggregation
steps, similar to what we use in the Backup Server.
We therefore need to adapt the functions that get data from RRD
accordingly.
The result is usually a finer resolution for time windows larger than
hourly.
We also introduce decade as a time window. In case existing RRD files
have not yet been converted to the new RRD format, we need to keep using
the old time windows. Additionally, since they only store data up to a
year, we catch the situation where a full decade might be requested and
pin it to a year.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/PVE/RRD.pm | 52 ++++++++++++++++++++++++++++++++++++++++----------
1 file changed, 42 insertions(+), 10 deletions(-)
diff --git a/src/PVE/RRD.pm b/src/PVE/RRD.pm
index 93df608..c95f495 100644
--- a/src/PVE/RRD.pm
+++ b/src/PVE/RRD.pm
@@ -14,14 +14,30 @@ sub create_rrd_data {
my $rrd = "$rrddir/$rrdname";
+ # Format: [ resolution, number of data points/count]
+ # Old ranges, pre PVE9
+ my $setup_pve2 = {
+ hour => [60, 60], # 1 min resolution, one hour
+ day => [60 * 30, 70], # 30 min resolution, one day
+ week => [60 * 180, 70], # 3 hour resolution, one week
+ month => [60 * 720, 70], # 12 hour resolution, 1 month
+ year => [60 * 10080, 70], # 7 day resolution, 1 year
+ };
+
my $setup = {
- hour => [60, 70],
- day => [60 * 30, 70],
- week => [60 * 180, 70],
- month => [60 * 720, 70],
- year => [60 * 10080, 70],
+ hour => [60, 60], # 1 min resolution
+ day => [60, 1440], # 1 min resolution, full day
+ week => [60 * 30, 336], # 30 min resolution, 7 days
+ month => [3600 * 6, 121], # 6 hour resolution, 30 days, need one more count. Otherwise RRD gets wrong $step
+ year => [3600 * 6, 1140], # 6 hour resolution, 360 days
+ decade => [86400 * 7, 570], # 1 week resolution, 10 years
};
+ if ($rrdname =~ /^pve2/) {
+ $setup = $setup_pve2;
+ $timeframe = "year" if $timeframe eq "decade"; # we only store up to one year in the old format
+ }
+
my ($reso, $count) = @{ $setup->{$timeframe} };
my $ctime = $reso * int(time() / $reso);
my $req_start = $ctime - $reso * $count;
@@ -82,14 +98,30 @@ sub create_rrd_graph {
my $filename = "${rrd}_${ds_txt}.png";
+ # Format: [ resolution, number of data points/count]
+ # Old ranges, pre PVE9
+ my $setup_pve2 = {
+ hour => [60, 60], # 1 min resolution, one hour
+ day => [60 * 30, 70], # 30 min resolution, one day
+ week => [60 * 180, 70], # 3 hour resolution, one week
+ month => [60 * 720, 70], # 12 hour resolution, 1 month
+ year => [60 * 10080, 70], # 7 day resolution, 1 year
+ };
+
my $setup = {
- hour => [60, 60],
- day => [60 * 30, 70],
- week => [60 * 180, 70],
- month => [60 * 720, 70],
- year => [60 * 10080, 70],
+ hour => [60, 60], # 1 min resolution
+ day => [60, 1440], # 1 min resolution, full day
+ week => [60 * 30, 336], # 30 min resolution, 7 days
+ month => [3600 * 6, 121], # 6 hour resolution, 30 days, need one more count. Otherwise RRD gets wrong $step
+ year => [3600 * 6, 1140], # 6 hour resolution, 360 days
+ decade => [86400 * 7, 570], # 1 week resolution, 10 years
};
+ if ($rrdname =~ /^pve2/) {
+ $setup = $setup_pve2;
+ $timeframe = "year" if $timeframe eq "decade"; # we only store up to one year in the old format
+ }
+
my ($reso, $count) = @{ $setup->{$timeframe} };
my @args = (
--
2.39.5
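[Editorial note: the two lookup tables and the decade fallback can be sketched in Python as follows. The values are copied from the Perl above, and `pick_window` is a hypothetical name for illustration.]

```python
SETUP_PVE2 = {  # old ranges, pre PVE9
    "hour": (60, 60), "day": (60 * 30, 70), "week": (60 * 180, 70),
    "month": (60 * 720, 70), "year": (60 * 10080, 70),
}
SETUP_PVE9 = {  # new ranges, PBS-style aggregation
    "hour": (60, 60), "day": (60, 1440), "week": (60 * 30, 336),
    "month": (3600 * 6, 121), "year": (3600 * 6, 1140),
    "decade": (86400 * 7, 570),
}

def pick_window(rrdname: str, timeframe: str):
    # Not-yet-migrated files keep their pve2-* name, so the name prefix tells
    # us which aggregation windows the file actually contains.
    if rrdname.startswith("pve2"):
        if timeframe == "decade":
            timeframe = "year"  # old format only stores up to one year
        reso, count = SETUP_PVE2[timeframe]
    else:
        reso, count = SETUP_PVE9[timeframe]
    return timeframe, reso, count
```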
* [pve-devel] [PATCH common v3 1/2] fix error in pressure parsing
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (8 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH cluster v3 4/4] rrd: adapt to new RRD format with different aggregation windows Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-16 22:33 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH common v3 2/2] add function to retrieve pressures from cgroup Aaron Lauterer
` (25 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
From: Folke Gleumes <f.gleumes@proxmox.com>
Originally-by: Folke Gleumes <f.gleumes@proxmox.com>
[AL: rebased]
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/PVE/ProcFSTools.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/ProcFSTools.pm b/src/PVE/ProcFSTools.pm
index b67211e..f342890 100644
--- a/src/PVE/ProcFSTools.pm
+++ b/src/PVE/ProcFSTools.pm
@@ -144,7 +144,7 @@ sub parse_pressure {
$res->{$1}->{avg10} = $2;
$res->{$1}->{avg60} = $3;
$res->{$1}->{avg300} = $4;
- $res->{$1}->{total} = $4;
+ $res->{$1}->{total} = $5;
}
}
$fh->close;
--
2.39.5
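[Editorial note: the one-character fix matters because `total=` is the fifth capture group, while group 4 is `avg300`. The Perl regex itself is outside the hunk, so the pattern below is an assumption based on the kernel's PSI file format; the sketch is Python, not the actual ProcFSTools code.]

```python
import re

# Assumed pattern for lines like:
#   some avg10=0.00 avg60=0.12 avg300=0.34 total=123456
PSI_LINE = re.compile(
    r"^(some|full) avg10=([\d.]+) avg60=([\d.]+) avg300=([\d.]+) total=(\d+)"
)

def parse_pressure_line(line: str) -> dict:
    m = PSI_LINE.match(line)
    if not m:
        return {}
    kind, avg10, avg60, avg300, total = m.groups()
    # The fixed bug: `total` must come from group 5; group 4 is avg300.
    return {kind: {"avg10": float(avg10), "avg60": float(avg60),
                   "avg300": float(avg300), "total": int(total)}}
```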
* [pve-devel] [PATCH common v3 2/2] add function to retrieve pressures from cgroup
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (9 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH common v3 1/2] fix error in pressure parsing Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-16 22:33 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH widget-toolkit v3 1/2] rrdchart: allow to override the series object Aaron Lauterer
` (24 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
From: Folke Gleumes <f.gleumes@proxmox.com>
Originally-by: Folke Gleumes <f.gleumes@proxmox.com>
[AL:
* rebased on current master
* merged into single function for generic cgroups
* renamed commit to match single generic function
]
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
while the read_cgroup_pressure function would fit better into
SysFSTools.pm I have kept it in ProcFSTools.pm for now, mainly because
we currently don't use ProcFSTools in SysFSTools, and the actual
function to parse the contents of the pressure files is in ProcFSTools.
If that would not be an issue, we could also move the
read_cgroup_pressure function to SysFSTools
changes since
RFC:
* instead of dedicated functions for CTs and VMs we use a more generic
for cgroups in general
src/PVE/ProcFSTools.pm | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/src/PVE/ProcFSTools.pm b/src/PVE/ProcFSTools.pm
index f342890..f28372b 100644
--- a/src/PVE/ProcFSTools.pm
+++ b/src/PVE/ProcFSTools.pm
@@ -151,6 +151,17 @@ sub parse_pressure {
return $res;
}
+sub read_cgroup_pressure {
+ my ($cgroup_path) = @_;
+
+ my $res = {};
+ for my $type (qw(cpu memory io)) {
+ my $stats = parse_pressure("/sys/fs/cgroup/${cgroup_path}/${type}.pressure");
+ $res->{$type} = $stats if $stats;
+ }
+ return $res;
+}
+
sub read_pressure {
my $res = {};
foreach my $type (qw(cpu memory io)) {
--
2.39.5
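[Editorial note: the helper's shape is roughly the following Python sketch. `read_file` abstracts the filesystem plus `parse_pressure` so the logic is testable, and the example cgroup path in the usage note is made up.]

```python
def read_cgroup_pressure(cgroup_path, read_file):
    # For each PSI controller, read /sys/fs/cgroup/<path>/<type>.pressure and
    # keep the parsed stats; missing or unparsable files are simply skipped.
    res = {}
    for kind in ("cpu", "memory", "io"):
        stats = read_file(f"/sys/fs/cgroup/{cgroup_path}/{kind}.pressure")
        if stats:
            res[kind] = stats
    return res
```

For example, `read_cgroup_pressure("qemu.slice/100.scope", reader)` returns only the controllers for which the reader produced data.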
* [pve-devel] applied: [PATCH common v3 2/2] add function to retrieve pressures from cgroup
2025-07-15 14:31 ` [pve-devel] [PATCH common v3 2/2] add function to retrieve pressures from cgroup Aaron Lauterer
@ 2025-07-16 22:33 ` Thomas Lamprecht
0 siblings, 0 replies; 59+ messages in thread
From: Thomas Lamprecht @ 2025-07-16 22:33 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
Am 15.07.25 um 16:31 schrieb Aaron Lauterer:
> From: Folke Gleumes <f.gleumes@proxmox.com>
>
> Originally-by: Folke Gleumes <f.gleumes@proxmox.com>
> [AL:
> * rebased on current master
> * merged into single function for generic cgroups
> * renamed commit to match single generic function
> ]
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>
> Notes:
> while the read_cgroup_pressure function would fit better into
> SysFSTools.pm I have kept it in ProcFSTools.pm for now, mainly because
> we currently don't use ProcFSTools in SysFSTools, and the actual
> function to parse the contents of the pressure files in in ProcFSTools.
>
> If that would not be an issue, we could also move the
> read_cgroup_pressure function to SysFSTools
>
> changes since
> RFC:
> * instead of dedicated functions for CTs and VMs we use a more generic
> for cgroups in general
>
> src/PVE/ProcFSTools.pm | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
>
applied, thanks!
* [pve-devel] [PATCH widget-toolkit v3 1/2] rrdchart: allow to override the series object
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (10 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH common v3 2/2] add function to retrieve pressures from cgroup Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-21 11:42 ` Dominik Csapak
2025-07-15 14:31 ` [pve-devel] [PATCH widget-toolkit v3 2/2] rrdchart: use reference for undo button Aaron Lauterer
` (23 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
this way we can keep the current behavior, but also make it possible to
finely control a series if needed. For example, if we want a stacked
graph, or just a line without fill.
Additionally we need to adjust the tooltip renderer to also gather the
titles from these directly configured series.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/panel/RRDChart.js | 53 +++++++++++++++++++++++++++++++++----------
1 file changed, 41 insertions(+), 12 deletions(-)
diff --git a/src/panel/RRDChart.js b/src/panel/RRDChart.js
index 86cf4e2..3b41ae6 100644
--- a/src/panel/RRDChart.js
+++ b/src/panel/RRDChart.js
@@ -118,13 +118,33 @@ Ext.define('Proxmox.widget.RRDChart', {
suffix = 'B/s';
}
- let prefix = item.field;
- if (view.fieldTitles && view.fieldTitles[view.fields.indexOf(item.field)]) {
- prefix = view.fieldTitles[view.fields.indexOf(item.field)];
+ let value = record.get(item.field);
+ if (value === null) {
+ tooltip.setHtml('No Data');
+ } else {
+ let prefix = item.field;
+ if (view.fieldTitles && view.fieldTitles[view.fields.indexOf(item.field)]) {
+ prefix = view.fieldTitles[view.fields.indexOf(item.field)];
+ } else {
+ // If series is passed in directly, we don't have fieldTitles set. The title property can be a
+ // single string for a line series, or an array for an area/stacked series.
+ for (const field of view.fields) {
+ if (Array.isArray(field.yField)) {
+ if (field.title && field.title[field.yField.indexOf(item.field)]) {
+ prefix = field.title[field.yField.indexOf(item.field)];
+ break;
+ }
+ } else if (field.title) {
+ prefix = field.title;
+ break;
+ }
+ }
+ }
+
+ let v = this.convertToUnits(record.get(item.field));
+ let t = new Date(record.get('time'));
+ tooltip.setHtml(`${prefix}: ${v}${suffix}<br>${t}`);
}
- let v = this.convertToUnits(record.get(item.field));
- let t = new Date(record.get('time'));
- tooltip.setHtml(`${prefix}: ${v}${suffix}<br>${t}`);
},
onAfterAnimation: function (chart, eopts) {
@@ -261,17 +281,26 @@ Ext.define('Proxmox.widget.RRDChart', {
// add a series for each field we get
me.fields.forEach(function (item, index) {
- let title = item;
- if (me.fieldTitles && me.fieldTitles[index]) {
- title = me.fieldTitles[index];
+ let yField;
+ let title;
+ let object;
+
+ if (typeof item === 'object') {
+ object = item;
+ } else {
+ yField = item;
+ title = item;
+ if (me.fieldTitles && me.fieldTitles[index]) {
+ title = me.fieldTitles[index];
+ }
}
me.addSeries(
Ext.apply(
{
type: 'line',
xField: 'time',
- yField: item,
- title: title,
+ yField,
+ title,
fill: true,
style: {
lineWidth: 1.5,
@@ -290,7 +319,7 @@ Ext.define('Proxmox.widget.RRDChart', {
renderer: 'onSeriesTooltipRender',
},
},
- me.seriesConfig,
+ object ?? me.seriesConfig,
),
);
});
--
2.39.5
* Re: [pve-devel] [PATCH widget-toolkit v3 1/2] rrdchart: allow to override the series object
2025-07-15 14:31 ` [pve-devel] [PATCH widget-toolkit v3 1/2] rrdchart: allow to override the series object Aaron Lauterer
@ 2025-07-21 11:42 ` Dominik Csapak
2025-07-21 15:08 ` Aaron Lauterer
0 siblings, 1 reply; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 11:42 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
One nit inline, but aside from that:
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
On 7/15/25 16:31, Aaron Lauterer wrote:
> this way we can keep the current behavior, but also make it possible to
> finely control a series if needed. For example, if we want a stacked
> graph, or just a line without fill.
>
> Additionally we need to adjust the tooltip renderer to also gather the
> titles from these directly configured series.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> src/panel/RRDChart.js | 53 +++++++++++++++++++++++++++++++++----------
> 1 file changed, 41 insertions(+), 12 deletions(-)
>
> diff --git a/src/panel/RRDChart.js b/src/panel/RRDChart.js
> index 86cf4e2..3b41ae6 100644
> --- a/src/panel/RRDChart.js
> +++ b/src/panel/RRDChart.js
> @@ -118,13 +118,33 @@ Ext.define('Proxmox.widget.RRDChart', {
> suffix = 'B/s';
> }
>
> - let prefix = item.field;
> - if (view.fieldTitles && view.fieldTitles[view.fields.indexOf(item.field)]) {
> - prefix = view.fieldTitles[view.fields.indexOf(item.field)];
> + let value = record.get(item.field);
> + if (value === null) {
> + tooltip.setHtml('No Data');
nit: this change seems a bit unrelated? we did just put it in
convertToUnits previously, and did not check the value before...
also, we could use the 'value' later, namely...
> + } else {
> + let prefix = item.field;
> + if (view.fieldTitles && view.fieldTitles[view.fields.indexOf(item.field)]) {
> + prefix = view.fieldTitles[view.fields.indexOf(item.field)];
> + } else {
> + // If series is passed in directly, we don't have fieldTitles set. The title property can be a
> + // single string for a line series, or an array for an area/stacked series.
> + for (const field of view.fields) {
> + if (Array.isArray(field.yField)) {
> + if (field.title && field.title[field.yField.indexOf(item.field)]) {
> + prefix = field.title[field.yField.indexOf(item.field)];
> + break;
> + }
> + } else if (field.title) {
> + prefix = field.title;
> + break;
> + }
> + }
> + }
> +
> + let v = this.convertToUnits(record.get(item.field));
here
> + let t = new Date(record.get('time'));
> + tooltip.setHtml(`${prefix}: ${v}${suffix}<br>${t}`);
> }
> - let v = this.convertToUnits(record.get(item.field));
> - let t = new Date(record.get('time'));
> - tooltip.setHtml(`${prefix}: ${v}${suffix}<br>${t}`);
> },
>
> onAfterAnimation: function (chart, eopts) {
> @@ -261,17 +281,26 @@ Ext.define('Proxmox.widget.RRDChart', {
>
> // add a series for each field we get
> me.fields.forEach(function (item, index) {
> - let title = item;
> - if (me.fieldTitles && me.fieldTitles[index]) {
> - title = me.fieldTitles[index];
> + let yField;
> + let title;
> + let object;
> +
> + if (typeof item === 'object') {
> + object = item;
> + } else {
> + yField = item;
> + title = item;
> + if (me.fieldTitles && me.fieldTitles[index]) {
> + title = me.fieldTitles[index];
> + }
> }
> me.addSeries(
> Ext.apply(
> {
> type: 'line',
> xField: 'time',
> - yField: item,
> - title: title,
> + yField,
> + title,
> fill: true,
> style: {
> lineWidth: 1.5,
> @@ -290,7 +319,7 @@ Ext.define('Proxmox.widget.RRDChart', {
> renderer: 'onSeriesTooltipRender',
> },
> },
> - me.seriesConfig,
> + object ?? me.seriesConfig,
> ),
> );
> });
* Re: [pve-devel] [PATCH widget-toolkit v3 1/2] rrdchart: allow to override the series object
2025-07-21 11:42 ` Dominik Csapak
@ 2025-07-21 15:08 ` Aaron Lauterer
0 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-21 15:08 UTC (permalink / raw)
To: Dominik Csapak, Proxmox VE development discussion
On 2025-07-21 13:42, Dominik Csapak wrote:
> One nit inline, but aside from that:
>
> Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
>
> On 7/15/25 16:31, Aaron Lauterer wrote:
>> this way we can keep the current behavior, but also make it possible to
>> finely control a series if needed. For example, if we want a stacked
>> graph, or just a line without fill.
>>
>> Additionally we need to adjust the tooltip renderer to also gather the
>> titles from these directly configured series.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> src/panel/RRDChart.js | 53 +++++++++++++++++++++++++++++++++----------
>> 1 file changed, 41 insertions(+), 12 deletions(-)
>>
>> diff --git a/src/panel/RRDChart.js b/src/panel/RRDChart.js
>> index 86cf4e2..3b41ae6 100644
>> --- a/src/panel/RRDChart.js
>> +++ b/src/panel/RRDChart.js
>> @@ -118,13 +118,33 @@ Ext.define('Proxmox.widget.RRDChart', {
>> suffix = 'B/s';
>> }
>> - let prefix = item.field;
>> - if (view.fieldTitles &&
>> view.fieldTitles[view.fields.indexOf(item.field)]) {
>> - prefix =
>> view.fieldTitles[view.fields.indexOf(item.field)];
>> + let value = record.get(item.field);
>> + if (value === null) {
>> + tooltip.setHtml('No Data');
>
> nit: this change seems a bit unrelated? we did just put it in
> convertToUnits previously, and did not check the value before...
I forgot to mention in the commit msg why this is happening. In the next
version, it will be explained.
in short: since stacked charts will also draw data points even if there
is no data, we need to catch that to avoid errors later in the tooltip
renderer.
* [pve-devel] [PATCH widget-toolkit v3 2/2] rrdchart: use reference for undo button
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (11 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH widget-toolkit v3 1/2] rrdchart: allow to override the series object Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-21 11:43 ` Dominik Csapak
2025-07-15 14:31 ` [pve-devel] [PATCH manager v3 01/14] api2tools: drop old VM rrd schema Aaron Lauterer
` (22 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
This makes targeting the undo button more stable in situations where it
might not be the item at index 0 in the tools.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
src/panel/RRDChart.js | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/panel/RRDChart.js b/src/panel/RRDChart.js
index 3b41ae6..f9a9b33 100644
--- a/src/panel/RRDChart.js
+++ b/src/panel/RRDChart.js
@@ -152,7 +152,7 @@ Ext.define('Proxmox.widget.RRDChart', {
return;
}
// if the undo button is disabled, disable our tool
- let ourUndoZoomButton = chart.header.tools[0];
+ let ourUndoZoomButton = chart.lookupReference('undoButton');
let undoButton = chart.interactions[0].getUndoButton();
ourUndoZoomButton.setDisabled(undoButton.isDisabled());
},
@@ -269,6 +269,7 @@ Ext.define('Proxmox.widget.RRDChart', {
me.addTool({
type: 'minus',
disabled: true,
+ reference: 'undoButton',
tooltip: gettext('Undo Zoom'),
handler: function () {
let undoButton = me.interactions[0].getUndoButton();
--
2.39.5
* Re: [pve-devel] [PATCH widget-toolkit v3 2/2] rrdchart: use reference for undo button
2025-07-15 14:31 ` [pve-devel] [PATCH widget-toolkit v3 2/2] rrdchart: use reference for undo button Aaron Lauterer
@ 2025-07-21 11:43 ` Dominik Csapak
0 siblings, 0 replies; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 11:43 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
On 7/15/25 16:31, Aaron Lauterer wrote:
> This makes targeting the undo button more stable in situations where it
> might not be the 0 indexed item in the tools.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> src/panel/RRDChart.js | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/src/panel/RRDChart.js b/src/panel/RRDChart.js
> index 3b41ae6..f9a9b33 100644
> --- a/src/panel/RRDChart.js
> +++ b/src/panel/RRDChart.js
> @@ -152,7 +152,7 @@ Ext.define('Proxmox.widget.RRDChart', {
> return;
> }
> // if the undo button is disabled, disable our tool
> - let ourUndoZoomButton = chart.header.tools[0];
> + let ourUndoZoomButton = chart.lookupReference('undoButton');
> let undoButton = chart.interactions[0].getUndoButton();
> ourUndoZoomButton.setDisabled(undoButton.isDisabled());
> },
> @@ -269,6 +269,7 @@ Ext.define('Proxmox.widget.RRDChart', {
> me.addTool({
> type: 'minus',
> disabled: true,
> + reference: 'undoButton',
> tooltip: gettext('Undo Zoom'),
> handler: function () {
> let undoButton = me.interactions[0].getUndoButton();
* [pve-devel] [PATCH manager v3 01/14] api2tools: drop old VM rrd schema
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (12 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH widget-toolkit v3 2/2] rrdchart: use reference for undo button Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-18 19:17 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:31 ` [pve-devel] [PATCH manager v3 02/14] api2tools: extract stats: handle existence of new pve-{type}-9.0 data Aaron Lauterer
` (21 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
pve2.3-vm has been introduced with commit 3b6ad3ac back in 2013. By now
there should not be any combination of clustered nodes that still send
the old pve2-vm variant.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
PVE/API2Tools.pm | 18 +-----------------
1 file changed, 1 insertion(+), 17 deletions(-)
diff --git a/PVE/API2Tools.pm b/PVE/API2Tools.pm
index d6154925..1e235c47 100644
--- a/PVE/API2Tools.pm
+++ b/PVE/API2Tools.pm
@@ -97,23 +97,7 @@ sub extract_vm_stats {
my $d;
- if ($d = $rrd->{"pve2-vm/$vmid"}) {
-
- $entry->{uptime} = ($d->[0] || 0) + 0;
- $entry->{name} = $d->[1];
- $entry->{status} = $entry->{uptime} ? 'running' : 'stopped';
- $entry->{maxcpu} = ($d->[3] || 0) + 0;
- $entry->{cpu} = ($d->[4] || 0) + 0;
- $entry->{maxmem} = ($d->[5] || 0) + 0;
- $entry->{mem} = ($d->[6] || 0) + 0;
- $entry->{maxdisk} = ($d->[7] || 0) + 0;
- $entry->{disk} = ($d->[8] || 0) + 0;
- $entry->{netin} = ($d->[9] || 0) + 0;
- $entry->{netout} = ($d->[10] || 0) + 0;
- $entry->{diskread} = ($d->[11] || 0) + 0;
- $entry->{diskwrite} = ($d->[12] || 0) + 0;
-
- } elsif ($d = $rrd->{"pve2.3-vm/$vmid"}) {
+ if ($d = $rrd->{"pve2.3-vm/$vmid"}) {
$entry->{uptime} = ($d->[0] || 0) + 0;
$entry->{name} = $d->[1];
--
2.39.5
* [pve-devel] [PATCH manager v3 02/14] api2tools: extract stats: handle existence of new pve-{type}-9.0 data
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (13 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH manager v3 01/14] api2tools: drop old VM rrd schema Aaron Lauterer
@ 2025-07-15 14:31 ` Aaron Lauterer
2025-07-18 19:17 ` [pve-devel] applied: " Thomas Lamprecht
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 03/14] pvestatd: collect and distribute new pve-{type}-9.0 metrics Aaron Lauterer
` (20 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:31 UTC (permalink / raw)
To: pve-devel
We add a new function to handle different key names, as it would
otherwise become quite unreadable.
It checks which key format exists for the type and resource:
* the old pve2-{type} / pve2.3-vm
* the new pve-{type}-{version}
and will return the one that was found. Since we will only have one key
per resource, we can return on the first hit.
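The lookup order can be sketched as follows — in JavaScript for brevity, while the actual implementation is the Perl get_rrd_key() in the diff below; note this sketch omits the \Q...\E metacharacter escaping the Perl version applies:

```javascript
// Look up the RRD key for a resource: try the fixed legacy formats first,
// then fall back to scanning for the versioned pve-{type}-{version}/{id} key.
function getRrdKey(rrd, type, id) {
    // old formats: VMs/CTs used pve2.3-vm, nodes and storages used pve2-{type}
    const oldKey = type === 'vm' ? `pve2.3-vm/${id}` : `pve2-${type}/${id}`;
    if (oldKey in rrd) {
        return oldKey;
    }
    // new format: accept any version, since newer versions may only append columns
    const re = new RegExp(`^pve-${type}-\\d\\d?\\.\\d/${id}$`);
    return Object.keys(rrd).find((k) => re.test(k));
}
```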
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
RFC:
* switch from pve9- to pve-{type}-9.0 schema
PVE/API2Tools.pm | 28 ++++++++++++++++++++++++----
1 file changed, 24 insertions(+), 4 deletions(-)
diff --git a/PVE/API2Tools.pm b/PVE/API2Tools.pm
index 1e235c47..08548524 100644
--- a/PVE/API2Tools.pm
+++ b/PVE/API2Tools.pm
@@ -41,6 +41,24 @@ sub get_hwaddress {
return $hwaddress;
}
+# each rrd key for a resource will only exist once. The key format might be different though. Therefore return on first hit
+sub get_rrd_key {
+ my ($rrd, $type, $id) = @_;
+
+ # check for old formats: pve2-{type}/{id}. For VMs and CTs the version number is different than for nodes and storages
+ if ($type ne "vm" && exists $rrd->{"pve2-${type}/${id}"}) {
+ return "pve2-${type}/${id}";
+ } elsif ($type eq "vm" && exists $rrd->{"pve2.3-${type}/${id}"}) {
+ return "pve2.3-${type}/${id}";
+ }
+
+ # if no old key has been found, we expect one in the newer format: pve-{type}-{version}/{id}
+ # We accept all new versions, as the expectation is that they are only allowed to add new columns as a non-breaking change
+ for my $k (keys %$rrd) {
+ return $k if $k =~ m/^pve-\Q${type}\E-\d\d?\.\d\/\Q${id}\E$/;
+ }
+}
+
sub extract_node_stats {
my ($node, $members, $rrd, $exclude_stats) = @_;
@@ -51,8 +69,8 @@ sub extract_node_stats {
status => 'unknown',
};
- if (my $d = $rrd->{"pve2-node/$node"}) {
-
+ my $key = get_rrd_key($rrd, "node", $node);
+ if (my $d = $rrd->{$key}) {
if (
!$members || # no cluster
($members->{$node} && $members->{$node}->{online})
@@ -96,8 +114,9 @@ sub extract_vm_stats {
};
my $d;
+ my $key = get_rrd_key($rrd, "vm", $vmid);
- if ($d = $rrd->{"pve2.3-vm/$vmid"}) {
+ if (my $d = $rrd->{$key}) {
$entry->{uptime} = ($d->[0] || 0) + 0;
$entry->{name} = $d->[1];
@@ -135,7 +154,8 @@ sub extract_storage_stats {
content => $content,
};
- if (my $d = $rrd->{"pve2-storage/$node/$storeid"}) {
+ my $key = get_rrd_key($rrd, "storage", "${node}/${storeid}");
+ if (my $d = $rrd->{$key}) {
$entry->{maxdisk} = ($d->[1] || 0) + 0;
$entry->{disk} = ($d->[2] || 0) + 0;
$entry->{status} = 'available';
--
2.39.5
* [pve-devel] [PATCH manager v3 03/14] pvestatd: collect and distribute new pve-{type}-9.0 metrics
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (14 preceding siblings ...)
2025-07-15 14:31 ` [pve-devel] [PATCH manager v3 02/14] api2tools: extract stats: handle existence of new pve-{type}-9.0 data Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 04/14] api: nodes: rrd and rrddata add decade option and use new pve-node-9.0 rrd files Aaron Lauterer
` (19 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
If we see that the migration to the new pve-{type}-9.0 rrd format has been done
or is ongoing (new dir exists), we collect and send out the new format with additional
columns for nodes and VMs (guests).
Those are:
Nodes:
* memfree
* arcsize
* pressures:
* cpu some
* io some
* io full
* mem some
* mem full
VMs:
* memhost (memory consumption of all processes in the guest's cgroup -> host view)
* pressures:
* cpu some
* cpu full
* io some
* io full
* mem some
* mem full
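The broadcast payload itself is a plain colon-separated string, with missing values sent as 'U', which rrdtool treats as unknown. A minimal sketch of that encoding — in JavaScript, assuming the Perl $generate_rrd_string helper behaves like this:

```javascript
// Join metric values with ':' for an rrdtool update string,
// encoding missing values as 'U' (rrdtool's "unknown" marker).
function generateRrdString(values) {
    return values.map((v) => (v === undefined || v === null ? 'U' : v)).join(':');
}
```

For a stopped guest, most of the new columns would simply be sent as 'U'.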
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
RFC:
* switch from pve9- to pve-{type}-9.0 schema
PVE/Service/pvestatd.pm | 342 +++++++++++++++++++++++++++++-----------
1 file changed, 250 insertions(+), 92 deletions(-)
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index e3ea06bb..a22ae7bb 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -82,6 +82,16 @@ my $cached_kvm_version = '';
my $next_flag_update_time;
my $failed_flag_update_delay_sec = 120;
+# Checks if RRD files exist in the specified location.
+my $rrd_dir_exists = sub {
+ my ($location) = @_;
+ if (-d "/var/lib/rrdcached/db/${location}") {
+ return 1;
+ } else {
+ return 0;
+ }
+};
+
sub update_supported_cpuflags {
my $kvm_version = PVE::QemuServer::kvm_user_version();
@@ -180,32 +190,66 @@ sub update_node_status {
my $meminfo = PVE::ProcFSTools::read_meminfo();
+ my $pressures = PVE::ProcFSTools::read_pressure();
+
my $dinfo = df('/', 1); # output is bytes
# everything not free is considered to be used
my $dused = $dinfo->{blocks} - $dinfo->{bfree};
my $ctime = time();
- my $data = $generate_rrd_string->(
- [
- $uptime,
- $sublevel,
- $ctime,
- $avg1,
- $maxcpu,
- $stat->{cpu},
- $stat->{wait},
- $meminfo->{memtotal},
- $meminfo->{memused},
- $meminfo->{swaptotal},
- $meminfo->{swapused},
- $dinfo->{blocks},
- $dused,
- $netin,
- $netout,
- ],
- );
- PVE::Cluster::broadcast_rrd("pve2-node/$nodename", $data);
+ my $data;
+ # TODO: drop old pve2- schema with PVE 10
+ if ($rrd_dir_exists->("pve-node-9.0")) {
+ $data = $generate_rrd_string->(
+ [
+ $uptime,
+ $sublevel,
+ $ctime,
+ $avg1,
+ $maxcpu,
+ $stat->{cpu},
+ $stat->{wait},
+ $meminfo->{memtotal},
+ $meminfo->{memused},
+ $meminfo->{swaptotal},
+ $meminfo->{swapused},
+ $dinfo->{blocks},
+ $dused,
+ $netin,
+ $netout,
+ $meminfo->{memavailable},
+ $meminfo->{arcsize},
+ $pressures->{cpu}->{some}->{avg10},
+ $pressures->{io}->{some}->{avg10},
+ $pressures->{io}->{full}->{avg10},
+ $pressures->{memory}->{some}->{avg10},
+ $pressures->{memory}->{full}->{avg10},
+ ],
+ );
+ PVE::Cluster::broadcast_rrd("pve-node-9.0/$nodename", $data);
+ } else {
+ $data = $generate_rrd_string->(
+ [
+ $uptime,
+ $sublevel,
+ $ctime,
+ $avg1,
+ $maxcpu,
+ $stat->{cpu},
+ $stat->{wait},
+ $meminfo->{memtotal},
+ $meminfo->{memused},
+ $meminfo->{swaptotal},
+ $meminfo->{swapused},
+ $dinfo->{blocks},
+ $dused,
+ $netin,
+ $netout,
+ ],
+ );
+ PVE::Cluster::broadcast_rrd("pve2-node/$nodename", $data);
+ }
my $node_metric = {
uptime => $uptime,
@@ -273,44 +317,101 @@ sub update_qemu_status {
my $data;
my $status = $d->{qmpstatus} || $d->{status} || 'stopped';
my $template = $d->{template} ? $d->{template} : "0";
- if ($d->{pid}) { # running
- $data = $generate_rrd_string->([
- $d->{uptime},
- $d->{name},
- $status,
- $template,
- $ctime,
- $d->{cpus},
- $d->{cpu},
- $d->{maxmem},
- $d->{mem},
- $d->{maxdisk},
- $d->{disk},
- $d->{netin},
- $d->{netout},
- $d->{diskread},
- $d->{diskwrite},
- ]);
+
+ # TODO: drop old pve2.3- schema with PVE 10
+ if ($rrd_dir_exists->("pve-vm-9.0")) {
+ if ($d->{pid}) { # running
+ $data = $generate_rrd_string->([
+ $d->{uptime},
+ $d->{name},
+ $status,
+ $template,
+ $ctime,
+ $d->{cpus},
+ $d->{cpu},
+ $d->{maxmem},
+ $d->{mem},
+ $d->{maxdisk},
+ $d->{disk},
+ $d->{netin},
+ $d->{netout},
+ $d->{diskread},
+ $d->{diskwrite},
+ $d->{memhost},
+ $d->{pressurecpusome},
+ $d->{pressurecpufull},
+ $d->{pressureiosome},
+ $d->{pressureiofull},
+ $d->{pressurememorysome},
+ $d->{pressurememoryfull},
+ ]);
+ } else {
+ $data = $generate_rrd_string->([
+ 0,
+ $d->{name},
+ $status,
+ $template,
+ $ctime,
+ $d->{cpus},
+ undef,
+ $d->{maxmem},
+ undef,
+ $d->{maxdisk},
+ $d->{disk},
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ ]);
+ }
+ PVE::Cluster::broadcast_rrd("pve-vm-9.0/$vmid", $data);
} else {
- $data = $generate_rrd_string->([
- 0,
- $d->{name},
- $status,
- $template,
- $ctime,
- $d->{cpus},
- undef,
- $d->{maxmem},
- undef,
- $d->{maxdisk},
- $d->{disk},
- undef,
- undef,
- undef,
- undef,
- ]);
+ if ($d->{pid}) { # running
+ $data = $generate_rrd_string->([
+ $d->{uptime},
+ $d->{name},
+ $status,
+ $template,
+ $ctime,
+ $d->{cpus},
+ $d->{cpu},
+ $d->{maxmem},
+ $d->{mem},
+ $d->{maxdisk},
+ $d->{disk},
+ $d->{netin},
+ $d->{netout},
+ $d->{diskread},
+ $d->{diskwrite},
+ ]);
+ } else {
+ $data = $generate_rrd_string->([
+ 0,
+ $d->{name},
+ $status,
+ $template,
+ $ctime,
+ $d->{cpus},
+ undef,
+ $d->{maxmem},
+ undef,
+ $d->{maxdisk},
+ $d->{disk},
+ undef,
+ undef,
+ undef,
+ undef,
+ ]);
+ }
+ PVE::Cluster::broadcast_rrd("pve2.3-vm/$vmid", $data);
}
- PVE::Cluster::broadcast_rrd("pve2.3-vm/$vmid", $data);
PVE::ExtMetric::update_all($transactions, 'qemu', $vmid, $d, $ctime, $nodename);
}
@@ -506,44 +607,100 @@ sub update_lxc_status {
my $d = $vmstatus->{$vmid};
my $template = $d->{template} ? $d->{template} : "0";
my $data;
- if ($d->{status} eq 'running') { # running
- $data = $generate_rrd_string->([
- $d->{uptime},
- $d->{name},
- $d->{status},
- $template,
- $ctime,
- $d->{cpus},
- $d->{cpu},
- $d->{maxmem},
- $d->{mem},
- $d->{maxdisk},
- $d->{disk},
- $d->{netin},
- $d->{netout},
- $d->{diskread},
- $d->{diskwrite},
- ]);
+ # TODO: drop old pve2.3-vm schema with PVE 10
+ if ($rrd_dir_exists->("pve-vm-9.0")) {
+ if ($d->{pid}) { # running
+ $data = $generate_rrd_string->([
+ $d->{uptime},
+ $d->{name},
+ $d->{status},
+ $template,
+ $ctime,
+ $d->{cpus},
+ $d->{cpu},
+ $d->{maxmem},
+ $d->{mem},
+ $d->{maxdisk},
+ $d->{disk},
+ $d->{netin},
+ $d->{netout},
+ $d->{diskread},
+ $d->{diskwrite},
+ undef,
+ $d->{pressurecpusome},
+ $d->{pressurecpufull},
+ $d->{pressureiosome},
+ $d->{pressureiofull},
+ $d->{pressurememorysome},
+ $d->{pressurememoryfull},
+ ]);
+ } else {
+ $data = $generate_rrd_string->([
+ 0,
+ $d->{name},
+ $d->{status},
+ $template,
+ $ctime,
+ $d->{cpus},
+ undef,
+ $d->{maxmem},
+ undef,
+ $d->{maxdisk},
+ $d->{disk},
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ undef,
+ ]);
+ }
+ PVE::Cluster::broadcast_rrd("pve-vm-9.0/$vmid", $data);
} else {
- $data = $generate_rrd_string->([
- 0,
- $d->{name},
- $d->{status},
- $template,
- $ctime,
- $d->{cpus},
- undef,
- $d->{maxmem},
- undef,
- $d->{maxdisk},
- $d->{disk},
- undef,
- undef,
- undef,
- undef,
- ]);
+ if ($d->{status} eq 'running') { # running
+ $data = $generate_rrd_string->([
+ $d->{uptime},
+ $d->{name},
+ $d->{status},
+ $template,
+ $ctime,
+ $d->{cpus},
+ $d->{cpu},
+ $d->{maxmem},
+ $d->{mem},
+ $d->{maxdisk},
+ $d->{disk},
+ $d->{netin},
+ $d->{netout},
+ $d->{diskread},
+ $d->{diskwrite},
+ ]);
+ } else {
+ $data = $generate_rrd_string->([
+ 0,
+ $d->{name},
+ $d->{status},
+ $template,
+ $ctime,
+ $d->{cpus},
+ undef,
+ $d->{maxmem},
+ undef,
+ $d->{maxdisk},
+ $d->{disk},
+ undef,
+ undef,
+ undef,
+ undef,
+ ]);
+ }
+ PVE::Cluster::broadcast_rrd("pve2.3-vm/$vmid", $data);
}
- PVE::Cluster::broadcast_rrd("pve2.3-vm/$vmid", $data);
PVE::ExtMetric::update_all($transactions, 'lxc', $vmid, $d, $ctime, $nodename);
}
@@ -568,6 +725,7 @@ sub update_storage_status {
my $data = $generate_rrd_string->([$ctime, $d->{total}, $d->{used}]);
my $key = "pve2-storage/${nodename}/$storeid";
+ $key = "pve-storage-9.0/${nodename}/$storeid" if $rrd_dir_exists->("pve-storage-9.0");
PVE::Cluster::broadcast_rrd($key, $data);
PVE::ExtMetric::update_all($transactions, 'storage', $nodename, $storeid, $d, $ctime);
--
2.39.5
* [pve-devel] [PATCH manager v3 04/14] api: nodes: rrd and rrddata add decade option and use new pve-node-9.0 rrd files
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (15 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 03/14] pvestatd: collect and distribute new pve-{type}-9.0 metrics Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 05/14] api2tools: extract_vm_status add new vm memhost column Aaron Lauterer
` (18 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
If the new pve-node-9.0 RRD files are present, they contain the current
data and should be used.
'decade' is now possible as timeframe with the new RRD format.
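The fallback is the same for both endpoints and can be sketched like this (JavaScript sketch; existsFn stands in for Perl's -e filesystem test):

```javascript
// Prefer the migrated pve-node-9.0 RRD file; fall back to the legacy
// pve2-node path when the new file does not exist on disk yet.
function rrdPathForNode(node, existsFn) {
    let path = `pve-node-9.0/${node}`;
    if (!existsFn(`/var/lib/rrdcached/db/${path}`)) {
        path = `pve2-node/${node}`;
    }
    return path;
}
```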
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
RFC:
* switch from pve9- to pve-{type}-9.0 schema
PVE/API2/Nodes.pm | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 1eb04d9a..69b3d873 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -836,7 +836,7 @@ __PACKAGE__->register_method({
timeframe => {
description => "Specify the time frame you are interested in.",
type => 'string',
- enum => ['hour', 'day', 'week', 'month', 'year'],
+ enum => ['hour', 'day', 'week', 'month', 'year', 'decade'],
},
ds => {
description => "The list of datasources you want to display.",
@@ -860,9 +860,10 @@ __PACKAGE__->register_method({
code => sub {
my ($param) = @_;
- return PVE::RRD::create_rrd_graph(
- "pve2-node/$param->{node}", $param->{timeframe}, $param->{ds}, $param->{cf},
- );
+ my $path = "pve-node-9.0/$param->{node}";
+ $path = "pve2-node/$param->{node}" if !-e "/var/lib/rrdcached/db/${path}";
+ return PVE::RRD::create_rrd_graph($path, $param->{timeframe},
+ $param->{ds}, $param->{cf});
},
});
@@ -883,7 +884,7 @@ __PACKAGE__->register_method({
timeframe => {
description => "Specify the time frame you are interested in.",
type => 'string',
- enum => ['hour', 'day', 'week', 'month', 'year'],
+ enum => ['hour', 'day', 'week', 'month', 'year', 'decade'],
},
cf => {
description => "The RRD consolidation function",
@@ -903,8 +904,9 @@ __PACKAGE__->register_method({
code => sub {
my ($param) = @_;
- return PVE::RRD::create_rrd_data("pve2-node/$param->{node}", $param->{timeframe},
- $param->{cf});
+ my $path = "pve-node-9.0/$param->{node}";
+ $path = "pve2-node/$param->{node}" if !-e "/var/lib/rrdcached/db/${path}";
+ return PVE::RRD::create_rrd_data($path, $param->{timeframe}, $param->{cf});
},
});
--
2.39.5
* [pve-devel] [PATCH manager v3 05/14] api2tools: extract_vm_status add new vm memhost column
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (16 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 04/14] api: nodes: rrd and rrddata add decade option and use new pve-node-9.0 rrd files Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 06/14] ui: rrdmodels: add new columns and update existing Aaron Lauterer
` (17 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
as this will also be displayed in the status of VMs
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
this is a dedicated patch that should be applied only for PVE9 as it
adds new data in the result
PVE/API2/Cluster.pm | 7 +++++++
PVE/API2Tools.pm | 3 +++
2 files changed, 10 insertions(+)
diff --git a/PVE/API2/Cluster.pm b/PVE/API2/Cluster.pm
index a025d264..81cdf217 100644
--- a/PVE/API2/Cluster.pm
+++ b/PVE/API2/Cluster.pm
@@ -301,6 +301,13 @@ __PACKAGE__->register_method({
renderer => 'bytes',
minimum => 0,
},
+ memhost => {
+ description => "Used memory in bytes from the point of view of the host (for type 'qemu').",
+ type => 'integer',
+ optional => 1,
+ renderer => 'bytes',
+ minimum => 0,
+ },
maxmem => {
description => "Number of available memory in bytes"
. " (for types 'node', 'qemu' and 'lxc').",
diff --git a/PVE/API2Tools.pm b/PVE/API2Tools.pm
index 08548524..ed0bddbf 100644
--- a/PVE/API2Tools.pm
+++ b/PVE/API2Tools.pm
@@ -133,6 +133,9 @@ sub extract_vm_stats {
$entry->{netout} = ($d->[12] || 0) + 0;
$entry->{diskread} = ($d->[13] || 0) + 0;
$entry->{diskwrite} = ($d->[14] || 0) + 0;
+ if ($key =~ /^pve-vm-/) {
+ $entry->{memhost} = ($d->[15] || 0) + 0;
+ }
}
return $entry;
--
2.39.5
* [pve-devel] [PATCH manager v3 06/14] ui: rrdmodels: add new columns and update existing
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (17 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 05/14] api2tools: extract_vm_status add new vm memhost column Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-21 11:48 ` Dominik Csapak
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 07/14] ui: node summary: use stacked memory graph with zfs arc Aaron Lauterer
` (16 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
Memory columns will be used in an area graph, which cannot handle
gaps directly. Therefore we set the default value to 'null'. This makes
it easier to handle the tooltip when there is no data.
We calculate a memused variant with the arcsize subtracted. While the
columns report what they represent, the stacked/area memory graph of the
node needs to account for the fact that memused includes the ZFS arc.
We also calculate the free memory for nodes and guests as the difference
between the configured/max memory and the currently used one.
Otherwise, we might see some spikes in the overall memory graph,
as there can be small timing differences when the data is collected.
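One of those calculated fields could look like this — a standalone sketch matching the model code in the diff; the null guard keeps the area graph from drawing a bogus zero:

```javascript
// Calculated field: free memory derived from total and used.
// Returning null leaves a gap in the area graph instead of a false zero.
function memfreeCapped(data) {
    if (data.memtotal === null || data.memused === null) {
        return null;
    }
    return data.memtotal - data.memused;
}
```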
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
v2:
* add default values where needed for area graphs
* add calculated values, usually to keep the memory graphs from spiking
at the top due to slight timing differences where the data doesn't
align perfectly
RFC:
* drop node memcache and membuffer columns as we now have memavailable
which is better suited
www/manager6/data/model/RRDModels.js | 44 ++++++++++++++++++++++++++--
1 file changed, 41 insertions(+), 3 deletions(-)
diff --git a/www/manager6/data/model/RRDModels.js b/www/manager6/data/model/RRDModels.js
index 82f4e5cd..70b45986 100644
--- a/www/manager6/data/model/RRDModels.js
+++ b/www/manager6/data/model/RRDModels.js
@@ -18,14 +18,39 @@ Ext.define('pve-rrd-node', {
'loadavg',
'maxcpu',
'memtotal',
- 'memused',
+ { name: 'memused', defaultValue: null },
'netin',
'netout',
'roottotal',
'rootused',
'swaptotal',
'swapused',
+ { name: 'memfree', defaultValue: null },
+ { name: 'arcsize', defaultValue: null },
+ 'pressurecpusome',
+ 'pressureiosome',
+ 'pressureiofull',
+ 'pressurememorysome',
+ 'pressurememoryfull',
{ type: 'date', dateFormat: 'timestamp', name: 'time' },
+ {
+ name: 'memfree-capped',
+ calculate: function (data) {
+ if (data.memtotal === null || data.memused === null) {
+ return null;
+ }
+ return data.memtotal - data.memused;
+ },
+ },
+ {
+ name: 'memused-sub-arcsize',
+ calculate: function (data) {
+ if (data.memused === null) {
+ return null;
+ }
+ return data.memused - data.arcsize;
+ },
+ },
],
});
@@ -42,13 +67,26 @@ Ext.define('pve-rrd-guest', {
'maxcpu',
'netin',
'netout',
- 'mem',
- 'maxmem',
+ { name: 'mem', defaultValue: null },
+ { name: 'maxmem', defaultValue: null },
'disk',
'maxdisk',
'diskread',
'diskwrite',
+ { name: 'memhost', defaultValue: null },
+ 'pressurecpusome',
+ 'pressurecpufull',
+ 'pressureiosome',
+ 'pressureiofull',
+ 'pressurememorysome',
+ 'pressurememoryfull',
{ type: 'date', dateFormat: 'timestamp', name: 'time' },
+ {
+ name: 'maxmem-capped',
+ calculate: function (data) {
+ return data.maxmem - data.mem;
+ },
+ },
],
});
--
2.39.5
* Re: [pve-devel] [PATCH manager v3 06/14] ui: rrdmodels: add new columns and update existing
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 06/14] ui: rrdmodels: add new columns and update existing Aaron Lauterer
@ 2025-07-21 11:48 ` Dominik Csapak
0 siblings, 0 replies; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 11:48 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
comments inline:
On 7/15/25 16:32, Aaron Lauterer wrote:
> Memory columns will be used in an area graph, which cannot handle
> gaps directly. Therefore we set the default value to 'null'. This makes
> it easier to handle the tooltip when there is no data.
>
> We calculate memused to subtract the arcsize. While the columns report
> what they represent, in the stacked/area memory graph of the node we
> need to account for the fact that memused includes the ZFS arc.
>
> We also calculate the total / free memory for nodes and guests to get
> the diff from the configured/max memory from the current used one.
> Otherwise, we might see some spikes in the overall memory graph,
> as there can be small timing differences when the data is collected.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>
> Notes:
> changes since:
> v2:
> * add default values where needed for area graphs
> * add calculated values, usually to keep the memory graphs from spiking
> at the top due to slight timing differences where the data doesn't
> align perfectly
> RFC:
> * drop node memcache and membuffer columns as we now have memavailable
> which is better suited
>
> www/manager6/data/model/RRDModels.js | 44 ++++++++++++++++++++++++++--
> 1 file changed, 41 insertions(+), 3 deletions(-)
>
> diff --git a/www/manager6/data/model/RRDModels.js b/www/manager6/data/model/RRDModels.js
> index 82f4e5cd..70b45986 100644
> --- a/www/manager6/data/model/RRDModels.js
> +++ b/www/manager6/data/model/RRDModels.js
> @@ -18,14 +18,39 @@ Ext.define('pve-rrd-node', {
> 'loadavg',
> 'maxcpu',
> 'memtotal',
> - 'memused',
> + { name: 'memused', defaultValue: null },
> 'netin',
> 'netout',
> 'roottotal',
> 'rootused',
> 'swaptotal',
> 'swapused',
> + { name: 'memfree', defaultValue: null },
> + { name: 'arcsize', defaultValue: null },
> + 'pressurecpusome',
> + 'pressureiosome',
> + 'pressureiofull',
> + 'pressurememorysome',
> + 'pressurememoryfull',
> { type: 'date', dateFormat: 'timestamp', name: 'time' },
> + {
> + name: 'memfree-capped',
> + calculate: function (data) {
> + if (data.memtotal === null || data.memused === null) {
> + return null;
> + }
> + return data.memtotal - data.memused;
i think if you did the check in reverse, you can omit the 'defaultValue:
null'
by e.g. doing something like:
if (data.memtotal >= 0 && data.memused >= 0 && data.memtotal >=
data.memused) {
return data.memtotal - data.memused;
}
return null;
?
> + },
> + },
> + {
> + name: 'memused-sub-arcsize',
> + calculate: function (data) {
> + if (data.memused === null) {
> + return null;
> + }
> + return data.memused - data.arcsize;
here you only check memused for null. what about data.arcsize?
> + },
> + },
> ],
> });
>
> @@ -42,13 +67,26 @@ Ext.define('pve-rrd-guest', {
> 'maxcpu',
> 'netin',
> 'netout',
> - 'mem',
> - 'maxmem',
> + { name: 'mem', defaultValue: null },
> + { name: 'maxmem', defaultValue: null},
> 'disk',
> 'maxdisk',
> 'diskread',
> 'diskwrite',
> + {name: 'memhost', defaultValue: null},
> + 'pressurecpusome',
> + 'pressurecpufull',
> + 'pressureiosome',
> + 'pressureiofull',
> + 'pressurememorysome',
> + 'pressurememoryfull',
> { type: 'date', dateFormat: 'timestamp', name: 'time' },
> + {
> + name: 'maxmem-capped',
> + calculate: function (data) {
> + return data.maxmem - data.mem;
here you don't check neither maxmem nor mem for 'null'
> + },
> + },
> ],
> });
>
* [pve-devel] [PATCH manager v3 07/14] ui: node summary: use stacked memory graph with zfs arc
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (18 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 06/14] ui: rrdmodels: add new columns and update existing Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-21 12:01 ` Dominik Csapak
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 08/14] ui: add pressure graphs to node and guest summary Aaron Lauterer
` (15 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
To display the used memory and the ZFS arc as separate data points,
the old approach of overlapping filled line graphs won't work
anymore. We therefore switch them to area graphs, which are stacked by
default.
The order of the fields is important here as it affects the order in the
stacking. This means we also need to override colors manually to keep
them in line as it used to be.
Additionally, we don't use the 3rd color in the default extjs color
scheme, as that would be dark red [0]. We go with a color that is
different enough and not associated as a warning or error: dark-grey.
[0] https://docs.sencha.com/extjs/7.0.0/classic/src/Base.js-6.html#line318
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
www/manager6/node/Summary.js | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/www/manager6/node/Summary.js b/www/manager6/node/Summary.js
index c9d73494..ed3d33d9 100644
--- a/www/manager6/node/Summary.js
+++ b/www/manager6/node/Summary.js
@@ -177,11 +177,18 @@ Ext.define('PVE.node.Summary', {
{
xtype: 'proxmoxRRDChart',
title: gettext('Memory usage'),
- fields: ['memtotal', 'memused'],
- fieldTitles: [gettext('Total'), gettext('RAM usage')],
+ fields: [
+ {
+ type: 'area',
+ yField: ['memused-sub-arcsize', 'arcsize', 'memfree-capped'],
+ title: [gettext('Used'), gettext('ZFS'), gettext('Free')],
+ },
+ ],
+ colors: ['#115fa6', '#7c7474', '#94ae0a'],
unit: 'bytes',
powerOfTwo: true,
store: rrdstore,
+ stacked: true,
},
{
xtype: 'proxmoxRRDChart',
--
2.39.5
* Re: [pve-devel] [PATCH manager v3 07/14] ui: node summary: use stacked memory graph with zfs arc
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 07/14] ui: node summary: use stacked memory graph with zfs arc Aaron Lauterer
@ 2025-07-21 12:01 ` Dominik Csapak
0 siblings, 0 replies; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 12:01 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
We should probably refactor the colors to Proxmox.Utils if we'll use
them more often in the future, but aside from that:
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
On 7/15/25 16:32, Aaron Lauterer wrote:
> To display the used memory and the ZFS arc as a separate data point,
> keeping the old line overlapping filled line graphs won't work
> anymore. We therefore switch them to area graphs which are stacked by
> default.
>
> The order of the fields is important here as it affects the order in the
> stacking. This means we also need to override colors manually to keep
> them in line as it used to be.
> Additionally, we don't use the 3rd color in the default extjs color
> scheme, as that would be dark red [0]. We go with a color that is
> different enough and not associated as a warning or error: dark-grey.
>
> [0] https://docs.sencha.com/extjs/7.0.0/classic/src/Base.js-6.html#line318
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> www/manager6/node/Summary.js | 11 +++++++++--
> 1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/www/manager6/node/Summary.js b/www/manager6/node/Summary.js
> index c9d73494..ed3d33d9 100644
> --- a/www/manager6/node/Summary.js
> +++ b/www/manager6/node/Summary.js
> @@ -177,11 +177,18 @@ Ext.define('PVE.node.Summary', {
> {
> xtype: 'proxmoxRRDChart',
> title: gettext('Memory usage'),
> - fields: ['memtotal', 'memused'],
> - fieldTitles: [gettext('Total'), gettext('RAM usage')],
> + fields: [
> + {
> + type: 'area',
> + yField: ['memused-sub-arcsize', 'arcsize', 'memfree-capped'],
> + title: [gettext('Used'), gettext('ZFS'), gettext('Free')],
> + },
> + ],
> + colors: ['#115fa6', '#7c7474', '#94ae0a'],
> unit: 'bytes',
> powerOfTwo: true,
> store: rrdstore,
> + stacked: true,
> },
> {
> xtype: 'proxmoxRRDChart',
* [pve-devel] [PATCH manager v3 08/14] ui: add pressure graphs to node and guest summary
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (19 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 07/14] ui: node summary: use stacked memory graph with zfs arc Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-21 12:05 ` Dominik Csapak
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 09/14] ui: GuestStatusView: add memhost for VM guests Aaron Lauterer
` (14 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
From: Folke Gleumes <f.gleumes@proxmox.com>
Pressure values are indications that processes needed to wait for their
resources. While 'some' means that some of the processes on the host
(node summary) or in the guest's cgroup had to wait, 'full' means that
all processes couldn't get the resources fast enough.
We set the colors accordingly. For 'some' we use yellow, for 'full' we
use red.
This should make it clear that this is not just another graph, but
indicates performance issues. It also sets the pressure graphs apart
from the other graphs that follow the usual color scheme.
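For context, the 'some' and 'full' series correspond to the kernel's
pressure stall information (PSI) accounting, where each line of e.g.
/proc/pressure/cpu reports running averages. A minimal sketch of parsing
such a line (the helper name is made up for illustration, not part of
this series):

```javascript
// Parse one PSI line, e.g. from /proc/pressure/io:
//   "some avg10=1.23 avg60=0.45 avg300=0.10 total=12345"
// Returns the kind ('some' or 'full') and the numeric values.
function parsePsiLine(line) {
    const [kind, ...pairs] = line.trim().split(/\s+/);
    const values = {};
    for (const pair of pairs) {
        const [key, value] = pair.split('=');
        values[key] = Number(value);
    }
    return { kind, values };
}
```

The avg10/avg60/avg300 values are already percentages, which fits the
graphs above using unit: 'percent'.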
Originally-by: Folke Gleumes <f.gleumes@proxmox.com>
[AL:
* rebased
* reworked commit msg
* set colors
]
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
www/manager6/node/Summary.js | 27 +++++++++++++++++++++++++++
www/manager6/panel/GuestSummary.js | 30 ++++++++++++++++++++++++++++++
2 files changed, 57 insertions(+)
diff --git a/www/manager6/node/Summary.js b/www/manager6/node/Summary.js
index ed3d33d9..b00fcf2e 100644
--- a/www/manager6/node/Summary.js
+++ b/www/manager6/node/Summary.js
@@ -196,6 +196,33 @@ Ext.define('PVE.node.Summary', {
fields: ['netin', 'netout'],
store: rrdstore,
},
+ {
+ xtype: 'proxmoxRRDChart',
+ title: gettext('CPU pressure'),
+ fieldTitles: ['Some'],
+ fields: ['pressurecpusome'],
+ colors: ['#FFD13E', '#A61120'],
+ store: rrdstore,
+ unit: 'percent',
+ },
+ {
+ xtype: 'proxmoxRRDChart',
+ title: gettext('IO pressure'),
+ fieldTitles: ['Some', 'Full'],
+ fields: ['pressureiosome', 'pressureiofull'],
+ colors: ['#FFD13E', '#A61120'],
+ store: rrdstore,
+ unit: 'percent',
+ },
+ {
+ xtype: 'proxmoxRRDChart',
+ title: gettext('Memory pressure'),
+ fieldTitles: ['Some', 'Full'],
+ fields: ['pressurememorysome', 'pressurememoryfull'],
+ colors: ['#FFD13E', '#A61120'],
+ store: rrdstore,
+ unit: 'percent',
+ },
],
listeners: {
resize: function (panel) {
diff --git a/www/manager6/panel/GuestSummary.js b/www/manager6/panel/GuestSummary.js
index 5efbe40f..0b62dbb7 100644
--- a/www/manager6/panel/GuestSummary.js
+++ b/www/manager6/panel/GuestSummary.js
@@ -102,6 +102,36 @@ Ext.define('PVE.guest.Summary', {
fields: ['diskread', 'diskwrite'],
store: rrdstore,
},
+ {
+ xtype: 'proxmoxRRDChart',
+ title: gettext('CPU pressure'),
+ pveSelNode: me.pveSelNode,
+ fieldTitles: ['Some', 'Full'],
+ fields: ['pressurecpusome', 'pressurecpufull'],
+ colors: ['#FFD13E', '#A61120'],
+ store: rrdstore,
+ unit: 'percent',
+ },
+ {
+ xtype: 'proxmoxRRDChart',
+ title: gettext('IO pressure'),
+ pveSelNode: me.pveSelNode,
+ fieldTitles: ['Some', 'Full'],
+ fields: ['pressureiosome', 'pressureiofull'],
+ colors: ['#FFD13E', '#A61120'],
+ store: rrdstore,
+ unit: 'percent',
+ },
+ {
+ xtype: 'proxmoxRRDChart',
+ title: gettext('Memory pressure'),
+ pveSelNode: me.pveSelNode,
+ fieldTitles: ['Some', 'Full'],
+ fields: ['pressurememorysome', 'pressurememoryfull'],
+ colors: ['#FFD13E', '#A61120'],
+ store: rrdstore,
+ unit: 'percent',
+ },
);
}
--
2.39.5
* Re: [pve-devel] [PATCH manager v3 08/14] ui: add pressure graphs to node and guest summary
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 08/14] ui: add pressure graphs to node and guest summary Aaron Lauterer
@ 2025-07-21 12:05 ` Dominik Csapak
0 siblings, 0 replies; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 12:05 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
As with the patch before, we might want to refactor the colors somewhere
Otherwise:
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
On 7/15/25 16:32, Aaron Lauterer wrote:
> From: Folke Gleumes <f.gleumes@proxmox.com>
>
> Pressures are indications that processes had to wait for their
> resources. 'Some' means that some of the processes on the host
> (node summary) or in the guest's cgroup had to wait, while 'full' means
> that all processes couldn't get the resources fast enough.
>
> We set the colors accordingly. For 'some' we use yellow, for 'full' we
> use red.
> This should make it clear that this is not just another graph, but
> indicates performance issues. It also sets the pressure graphs apart
> from the other graphs that follow the usual color scheme.
>
> Originally-by: Folke Gleumes <f.gleumes@proxmox.com>
> [AL:
> * rebased
> * reworked commit msg
> * set colors
> ]
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> www/manager6/node/Summary.js | 27 +++++++++++++++++++++++++++
> www/manager6/panel/GuestSummary.js | 30 ++++++++++++++++++++++++++++++
> 2 files changed, 57 insertions(+)
>
> diff --git a/www/manager6/node/Summary.js b/www/manager6/node/Summary.js
> index ed3d33d9..b00fcf2e 100644
> --- a/www/manager6/node/Summary.js
> +++ b/www/manager6/node/Summary.js
> @@ -196,6 +196,33 @@ Ext.define('PVE.node.Summary', {
> fields: ['netin', 'netout'],
> store: rrdstore,
> },
> + {
> + xtype: 'proxmoxRRDChart',
> + title: gettext('CPU pressure'),
> + fieldTitles: ['Some'],
> + fields: ['pressurecpusome'],
> + colors: ['#FFD13E', '#A61120'],
> + store: rrdstore,
> + unit: 'percent',
> + },
> + {
> + xtype: 'proxmoxRRDChart',
> + title: gettext('IO pressure'),
> + fieldTitles: ['Some', 'Full'],
> + fields: ['pressureiosome', 'pressureiofull'],
> + colors: ['#FFD13E', '#A61120'],
> + store: rrdstore,
> + unit: 'percent',
> + },
> + {
> + xtype: 'proxmoxRRDChart',
> + title: gettext('Memory pressure'),
> + fieldTitles: ['Some', 'Full'],
> + fields: ['pressurememorysome', 'pressurememoryfull'],
> + colors: ['#FFD13E', '#A61120'],
> + store: rrdstore,
> + unit: 'percent',
> + },
> ],
> listeners: {
> resize: function (panel) {
> diff --git a/www/manager6/panel/GuestSummary.js b/www/manager6/panel/GuestSummary.js
> index 5efbe40f..0b62dbb7 100644
> --- a/www/manager6/panel/GuestSummary.js
> +++ b/www/manager6/panel/GuestSummary.js
> @@ -102,6 +102,36 @@ Ext.define('PVE.guest.Summary', {
> fields: ['diskread', 'diskwrite'],
> store: rrdstore,
> },
> + {
> + xtype: 'proxmoxRRDChart',
> + title: gettext('CPU pressure'),
> + pveSelNode: me.pveSelNode,
> + fieldTitles: ['Some', 'Full'],
> + fields: ['pressurecpusome', 'pressurecpufull'],
> + colors: ['#FFD13E', '#A61120'],
> + store: rrdstore,
> + unit: 'percent',
> + },
> + {
> + xtype: 'proxmoxRRDChart',
> + title: gettext('IO pressure'),
> + pveSelNode: me.pveSelNode,
> + fieldTitles: ['Some', 'Full'],
> + fields: ['pressureiosome', 'pressureiofull'],
> + colors: ['#FFD13E', '#A61120'],
> + store: rrdstore,
> + unit: 'percent',
> + },
> + {
> + xtype: 'proxmoxRRDChart',
> + title: gettext('Memory pressure'),
> + pveSelNode: me.pveSelNode,
> + fieldTitles: ['Some', 'Full'],
> + fields: ['pressurememorysome', 'pressurememoryfull'],
> + colors: ['#FFD13E', '#A61120'],
> + store: rrdstore,
> + unit: 'percent',
> + },
> );
> }
>
* [pve-devel] [PATCH manager v3 09/14] ui: GuestStatusView: add memhost for VM guests
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (20 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 08/14] ui: add pressure graphs to node and guest summary Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-21 12:34 ` Dominik Csapak
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 10/14] ui: GuestSummary: memory switch to stacked and add hostmem Aaron Lauterer
` (13 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
With the new memhost field, the vertical space is getting tight. We
therefore reduce the height of the separator boxes.
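The new row renders the raw memhost byte count via
Proxmox.Utils.render_size. A rough stand-in for such a renderer (this is
an assumption for illustration, not the actual Proxmox implementation)
behaves like this:

```javascript
// Minimal stand-in for a byte-count renderer in the spirit of
// Proxmox.Utils.render_size: scale by powers of two and pick a unit.
function renderSize(bytes) {
    const units = ['B', 'KiB', 'MiB', 'GiB', 'TiB'];
    let value = bytes;
    let i = 0;
    while (value >= 1024 && i < units.length - 1) {
        value /= 1024;
        i++;
    }
    // show decimals only for small scaled values
    return `${value < 10 && i > 0 ? value.toFixed(2) : Math.round(value)} ${units[i]}`;
}
// renderSize(1536) → "1.50 KiB"
```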
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
www/manager6/panel/GuestStatusView.js | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/www/manager6/panel/GuestStatusView.js b/www/manager6/panel/GuestStatusView.js
index 0134526c..3369f7b3 100644
--- a/www/manager6/panel/GuestStatusView.js
+++ b/www/manager6/panel/GuestStatusView.js
@@ -94,7 +94,7 @@ Ext.define('PVE.panel.GuestStatusView', {
},
{
xtype: 'box',
- height: 15,
+ height: 10,
},
{
itemId: 'cpu',
@@ -114,6 +114,20 @@ Ext.define('PVE.panel.GuestStatusView', {
valueField: 'mem',
maxField: 'maxmem',
},
+ {
+ itemId: 'memory-host',
+ iconCls: 'fa fa-fw pmx-itype-icon-memory pmx-icon',
+ title: gettext('Host memory usage'),
+ valueField: 'memhost',
+ printBar: false,
+ renderer: function (used, max) {
+ return Proxmox.Utils.render_size(used);
+ },
+ cbind: {
+ hidden: '{isLxc}',
+ disabled: '{isLxc}',
+ },
+ },
{
itemId: 'swap',
iconCls: 'fa fa-refresh fa-fw',
@@ -144,7 +158,7 @@ Ext.define('PVE.panel.GuestStatusView', {
},
{
xtype: 'box',
- height: 15,
+ height: 10,
},
{
itemId: 'ips',
--
2.39.5
* Re: [pve-devel] [PATCH manager v3 09/14] ui: GuestStatusView: add memhost for VM guests
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 09/14] ui: GuestStatusView: add memhost for VM guests Aaron Lauterer
@ 2025-07-21 12:34 ` Dominik Csapak
0 siblings, 0 replies; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 12:34 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
Since this is getting rather crowded, we should probably think about
how we could redesign this panel to save a bit of space
(e.g. show the status and ha status on one line, moving the node
info somewhere else, ...)
FWICT this is not that big of a problem (yet)
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
On 7/15/25 16:32, Aaron Lauterer wrote:
> With the new memhost field, the vertical space is getting tight. We
> therefore reduce the height of the separator boxes.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> www/manager6/panel/GuestStatusView.js | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/www/manager6/panel/GuestStatusView.js b/www/manager6/panel/GuestStatusView.js
> index 0134526c..3369f7b3 100644
> --- a/www/manager6/panel/GuestStatusView.js
> +++ b/www/manager6/panel/GuestStatusView.js
> @@ -94,7 +94,7 @@ Ext.define('PVE.panel.GuestStatusView', {
> },
> {
> xtype: 'box',
> - height: 15,
> + height: 10,
> },
> {
> itemId: 'cpu',
> @@ -114,6 +114,20 @@ Ext.define('PVE.panel.GuestStatusView', {
> valueField: 'mem',
> maxField: 'maxmem',
> },
> + {
> + itemId: 'memory-host',
> + iconCls: 'fa fa-fw pmx-itype-icon-memory pmx-icon',
> + title: gettext('Host memory usage'),
> + valueField: 'memhost',
> + printBar: false,
> + renderer: function (used, max) {
> + return Proxmox.Utils.render_size(used);
> + },
> + cbind: {
> + hidden: '{isLxc}',
> + disabled: '{isLxc}',
> + },
> + },
> {
> itemId: 'swap',
> iconCls: 'fa fa-refresh fa-fw',
> @@ -144,7 +158,7 @@ Ext.define('PVE.panel.GuestStatusView', {
> },
> {
> xtype: 'box',
> - height: 15,
> + height: 10,
> },
> {
> itemId: 'ips',
* [pve-devel] [PATCH manager v3 10/14] ui: GuestSummary: memory switch to stacked and add hostmem
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (21 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 09/14] ui: GuestStatusView: add memhost for VM guests Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-21 12:37 ` Dominik Csapak
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 11/14] ui: nodesummary: guestsummary: add tooltip info buttons Aaron Lauterer
` (12 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
We switch the memory graph to a stacked area graph, similar to what we
have now on the node summary page.
Since the order of the series matters, we need to define the colors
manually, as the default color scheme would otherwise assign them in a
different order than we usually use.
Additionally, we add the host memory view as another data series, but
keep it as a single line without fill. We chose the grey tone so that it
works for both the bright and the dark theme.
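The derived field names such as 'maxmem-capped' suggest values computed
from the raw RRD samples before charting; a hypothetical sketch of such
a derivation (helper and field names are assumptions for illustration,
not the actual implementation):

```javascript
// Hypothetical sketch: derive stacked-series values from a raw sample,
// so the stacked areas don't double-count memory.
function deriveMemoryFields(sample) {
    const memused = sample.memused ?? 0;
    const arcsize = sample.arcsize ?? 0;
    const memtotal = sample.memtotal ?? 0;
    return {
        // used memory excluding the ZFS ARC portion
        'memused-sub-arcsize': Math.max(memused - arcsize, 0),
        // free memory, capped at zero to absorb rounding artifacts
        'memfree-capped': Math.max(memtotal - memused, 0),
    };
}
```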
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
www/manager6/panel/GuestSummary.js | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/www/manager6/panel/GuestSummary.js b/www/manager6/panel/GuestSummary.js
index 0b62dbb7..e26b0ada 100644
--- a/www/manager6/panel/GuestSummary.js
+++ b/www/manager6/panel/GuestSummary.js
@@ -30,6 +30,28 @@ Ext.define('PVE.guest.Summary', {
var template = !!me.pveSelNode.data.template;
var rstore = me.statusStore;
+ let memhostField = {
+ type: 'line',
+ fill: false,
+ yField: 'memhost',
+ title: gettext('Host memory usage'),
+ style: {
+ lineWidth: 2.5,
+ opacity: 1,
+ },
+ };
+
+ let memoryFields = [
+ {
+ type: 'area',
+ yField: ['mem', 'maxmem-capped'],
+ title: [gettext('RAM usage'), gettext('Configured')],
+ },
+ ];
+ if (type === 'qemu') {
+ memoryFields.push(memhostField);
+ }
+
var items = [
{
xtype: template ? 'pveTemplateStatusView' : 'pveGuestStatusView',
@@ -82,8 +104,8 @@ Ext.define('PVE.guest.Summary', {
xtype: 'proxmoxRRDChart',
title: gettext('Memory usage'),
pveSelNode: me.pveSelNode,
- fields: ['maxmem', 'mem'],
- fieldTitles: [gettext('Total'), gettext('RAM usage')],
+ fields: memoryFields,
+ colors: ['#115fa6', '#94ae0a', '#c4c0c0'],
unit: 'bytes',
powerOfTwo: true,
store: rrdstore,
--
2.39.5
* Re: [pve-devel] [PATCH manager v3 10/14] ui: GuestSummary: memory switch to stacked and add hostmem
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 10/14] ui: GuestSummary: memory switch to stacked and add hostmem Aaron Lauterer
@ 2025-07-21 12:37 ` Dominik Csapak
0 siblings, 0 replies; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 12:37 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
two comments inline:
On 7/15/25 16:32, Aaron Lauterer wrote:
> We switch the memory graph to a stacked area graph, similar to what we
> have now on the node summary page.
>
> Since the order of the series matters, we need to define the colors
> manually, as the default color scheme would otherwise assign them in a
> different order than we usually use.
>
> Additionally, we add the host memory view as another data series, but
> keep it as a single line without fill. We chose the grey tone so that it
> works for both the bright and the dark theme.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> www/manager6/panel/GuestSummary.js | 26 ++++++++++++++++++++++++--
> 1 file changed, 24 insertions(+), 2 deletions(-)
>
> diff --git a/www/manager6/panel/GuestSummary.js b/www/manager6/panel/GuestSummary.js
> index 0b62dbb7..e26b0ada 100644
> --- a/www/manager6/panel/GuestSummary.js
> +++ b/www/manager6/panel/GuestSummary.js
> @@ -30,6 +30,28 @@ Ext.define('PVE.guest.Summary', {
> var template = !!me.pveSelNode.data.template;
> var rstore = me.statusStore;
>
> + let memhostField = {
> + type: 'line',
> + fill: false,
> + yField: 'memhost',
> + title: gettext('Host memory usage'),
> + style: {
> + lineWidth: 2.5,
> + opacity: 1,
> + },
> + };
you could define that inline in the 'push' call below,
then there is no need for the extra variable....
> +
> + let memoryFields = [
> + {
> + type: 'area',
> + yField: ['mem', 'maxmem-capped'],
> + title: [gettext('RAM usage'), gettext('Configured')],
as discussed off-list, 'configured' is not a good name, just keeping
'total' is better.
> + },
> + ];
> + if (type === 'qemu') {
> + memoryFields.push(memhostField);
...here, like:
memoryFields.push({ type: 'line', ... });
> + }
> +
> var items = [
> {
> xtype: template ? 'pveTemplateStatusView' : 'pveGuestStatusView',
> @@ -82,8 +104,8 @@ Ext.define('PVE.guest.Summary', {
> xtype: 'proxmoxRRDChart',
> title: gettext('Memory usage'),
> pveSelNode: me.pveSelNode,
> - fields: ['maxmem', 'mem'],
> - fieldTitles: [gettext('Total'), gettext('RAM usage')],
> + fields: memoryFields,
> + colors: ['#115fa6', '#94ae0a', '#c4c0c0'],
> unit: 'bytes',
> powerOfTwo: true,
> store: rrdstore,
* [pve-devel] [PATCH manager v3 11/14] ui: nodesummary: guestsummary: add tooltip info buttons
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (22 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 10/14] ui: GuestSummary: memory switch to stacked and add hostmem Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-21 12:40 ` Dominik Csapak
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 12/14] ui: summaries: use titles for disk and network series Aaron Lauterer
` (11 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
This way, we can provide a bit more context about what the graph is
showing, hopefully making it easier for our users to draw useful
conclusions from the provided information.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
while not available for all graphs for now, this should help users
understand the more complex ones.
The phrasing might be improved of course.
www/manager6/node/Summary.js | 40 ++++++++++++++++++++++++++++++
www/manager6/panel/GuestSummary.js | 30 ++++++++++++++++++++++
2 files changed, 70 insertions(+)
diff --git a/www/manager6/node/Summary.js b/www/manager6/node/Summary.js
index b00fcf2e..7bd3324c 100644
--- a/www/manager6/node/Summary.js
+++ b/www/manager6/node/Summary.js
@@ -162,6 +162,16 @@ Ext.define('PVE.node.Summary', {
{
xtype: 'proxmoxRRDChart',
title: gettext('CPU usage'),
+ tools: [
+ {
+ glyph: 'xf05a@FontAwesome', // fa-info-circle
+ tooltip: gettext("IO Delay is a measure of how much time processes had to wait for IO to be finished."),
+ disabled: false,
+ style: {
+ paddingRight: '5px',
+ },
+ },
+ ],
fields: ['cpu', 'iowait'],
fieldTitles: [gettext('CPU usage'), gettext('IO delay')],
unit: 'percent',
@@ -199,6 +209,16 @@ Ext.define('PVE.node.Summary', {
{
xtype: 'proxmoxRRDChart',
title: gettext('CPU pressure'),
+ tools: [
+ {
+ glyph: 'xf05a@FontAwesome', // fa-info-circle
+ tooltip: gettext("Shows if some processes on the host had to wait for CPU resources."),
+ disabled: false,
+ style: {
+ paddingRight: '5px',
+ },
+ },
+ ],
fieldTitles: ['Some'],
fields: ['pressurecpusome'],
colors: ['#FFD13E', '#A61120'],
@@ -208,6 +228,16 @@ Ext.define('PVE.node.Summary', {
{
xtype: 'proxmoxRRDChart',
title: gettext('IO pressure'),
+ tools: [
+ {
+ glyph: 'xf05a@FontAwesome', // fa-info-circle
+ tooltip: gettext("Shows if some or all (Full) processes on the host had to wait for IO (disk & network) resources."),
+ disabled: false,
+ style: {
+ paddingRight: '5px',
+ },
+ },
+ ],
fieldTitles: ['Some', 'Full'],
fields: ['pressureiosome', 'pressureiofull'],
colors: ['#FFD13E', '#A61120'],
@@ -217,6 +247,16 @@ Ext.define('PVE.node.Summary', {
{
xtype: 'proxmoxRRDChart',
title: gettext('Memory pressure'),
+ tools: [
+ {
+ glyph: 'xf05a@FontAwesome', // fa-info-circle
+ tooltip: gettext("Shows if some or all (Full) processes on the host had to wait for memory resources."),
+ disabled: false,
+ style: {
+ paddingRight: '5px',
+ },
+ },
+ ],
fieldTitles: ['Some', 'Full'],
fields: ['pressurememorysome', 'pressurememoryfull'],
colors: ['#FFD13E', '#A61120'],
diff --git a/www/manager6/panel/GuestSummary.js b/www/manager6/panel/GuestSummary.js
index e26b0ada..cf54f38e 100644
--- a/www/manager6/panel/GuestSummary.js
+++ b/www/manager6/panel/GuestSummary.js
@@ -127,6 +127,16 @@ Ext.define('PVE.guest.Summary', {
{
xtype: 'proxmoxRRDChart',
title: gettext('CPU pressure'),
+ tools: [
+ {
+ glyph: 'xf05a@FontAwesome', // fa-info-circle
+ tooltip: gettext("Shows if some or all (Full) processes belonging to the guest had to wait for CPU resources."),
+ disabled: false,
+ style: {
+ paddingRight: '5px',
+ },
+ },
+ ],
pveSelNode: me.pveSelNode,
fieldTitles: ['Some', 'Full'],
fields: ['pressurecpusome', 'pressurecpufull'],
@@ -137,6 +147,16 @@ Ext.define('PVE.guest.Summary', {
{
xtype: 'proxmoxRRDChart',
title: gettext('IO pressure'),
+ tools: [
+ {
+ glyph: 'xf05a@FontAwesome', // fa-info-circle
+ tooltip: gettext("Shows if some or all (Full) processes belonging to the guest had to wait for IO (disk & network) resources."),
+ disabled: false,
+ style: {
+ paddingRight: '5px',
+ },
+ },
+ ],
pveSelNode: me.pveSelNode,
fieldTitles: ['Some', 'Full'],
fields: ['pressureiosome', 'pressureiofull'],
@@ -147,6 +167,16 @@ Ext.define('PVE.guest.Summary', {
{
xtype: 'proxmoxRRDChart',
title: gettext('Memory pressure'),
+ tools: [
+ {
+ glyph: 'xf05a@FontAwesome', // fa-info-circle
+ tooltip: gettext("Shows if some or all (Full) processes belonging to the guest had to wait for memory resources."),
+ disabled: false,
+ style: {
+ paddingRight: '5px',
+ },
+ },
+ ],
pveSelNode: me.pveSelNode,
fieldTitles: ['Some', 'Full'],
fields: ['pressurememorysome', 'pressurememoryfull'],
--
2.39.5
* Re: [pve-devel] [PATCH manager v3 11/14] ui: nodesummary: guestsummary: add tooltip info buttons
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 11/14] ui: nodesummary: guestsummary: add tooltip info buttons Aaron Lauterer
@ 2025-07-21 12:40 ` Dominik Csapak
0 siblings, 0 replies; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 12:40 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
mhmm not too sure about this. IMHO this info belongs in the
documentation instead of in a tooltip, but I get why we
want to maybe have an explanation for these easily reachable.
First, I'd use the panel tool icons here instead of font awesomes,
like e.g. the undo zoom and close button (from the windows)
Second, maybe having that button link to the correct section
in the docs is better in the long run.
On 7/15/25 16:32, Aaron Lauterer wrote:
> This way, we can provide a bit more context about what the graph is
> showing, hopefully making it easier for our users to draw useful
> conclusions from the provided information.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>
> Notes:
> while not available for all graphs for now, this should help users
> understand the more complex ones.
>
> The phrasing might be improved of course.
>
> www/manager6/node/Summary.js | 40 ++++++++++++++++++++++++++++++
> www/manager6/panel/GuestSummary.js | 30 ++++++++++++++++++++++
> 2 files changed, 70 insertions(+)
>
> diff --git a/www/manager6/node/Summary.js b/www/manager6/node/Summary.js
> index b00fcf2e..7bd3324c 100644
> --- a/www/manager6/node/Summary.js
> +++ b/www/manager6/node/Summary.js
> @@ -162,6 +162,16 @@ Ext.define('PVE.node.Summary', {
> {
> xtype: 'proxmoxRRDChart',
> title: gettext('CPU usage'),
> + tools: [
> + {
> + glyph: 'xf05a@FontAwesome', // fa-info-circle
> + tooltip: gettext("IO Delay is a measure of how much time processes had to wait for IO to be finished."),
> + disabled: false,
> + style: {
> + paddingRight: '5px',
> + },
> + },
> + ],
> fields: ['cpu', 'iowait'],
> fieldTitles: [gettext('CPU usage'), gettext('IO delay')],
> unit: 'percent',
> @@ -199,6 +209,16 @@ Ext.define('PVE.node.Summary', {
> {
> xtype: 'proxmoxRRDChart',
> title: gettext('CPU pressure'),
> + tools: [
> + {
> + glyph: 'xf05a@FontAwesome', // fa-info-circle
> + tooltip: gettext("Shows if some processes on the host had to wait for CPU resources."),
> + disabled: false,
> + style: {
> + paddingRight: '5px',
> + },
> + },
> + ],
> fieldTitles: ['Some'],
> fields: ['pressurecpusome'],
> colors: ['#FFD13E', '#A61120'],
> @@ -208,6 +228,16 @@ Ext.define('PVE.node.Summary', {
> {
> xtype: 'proxmoxRRDChart',
> title: gettext('IO pressure'),
> + tools: [
> + {
> + glyph: 'xf05a@FontAwesome', // fa-info-circle
> + tooltip: gettext("Shows if some or all (Full) processes on the host had to wait for IO (disk & network) resources."),
> + disabled: false,
> + style: {
> + paddingRight: '5px',
> + },
> + },
> + ],
> fieldTitles: ['Some', 'Full'],
> fields: ['pressureiosome', 'pressureiofull'],
> colors: ['#FFD13E', '#A61120'],
> @@ -217,6 +247,16 @@ Ext.define('PVE.node.Summary', {
> {
> xtype: 'proxmoxRRDChart',
> title: gettext('Memory pressure'),
> + tools: [
> + {
> + glyph: 'xf05a@FontAwesome', // fa-info-circle
> + tooltip: gettext("Shows if some or all (Full) processes on the host had to wait for memory resources."),
> + disabled: false,
> + style: {
> + paddingRight: '5px',
> + },
> + },
> + ],
> fieldTitles: ['Some', 'Full'],
> fields: ['pressurememorysome', 'pressurememoryfull'],
> colors: ['#FFD13E', '#A61120'],
> diff --git a/www/manager6/panel/GuestSummary.js b/www/manager6/panel/GuestSummary.js
> index e26b0ada..cf54f38e 100644
> --- a/www/manager6/panel/GuestSummary.js
> +++ b/www/manager6/panel/GuestSummary.js
> @@ -127,6 +127,16 @@ Ext.define('PVE.guest.Summary', {
> {
> xtype: 'proxmoxRRDChart',
> title: gettext('CPU pressure'),
> + tools: [
> + {
> + glyph: 'xf05a@FontAwesome', // fa-info-circle
> + tooltip: gettext("Shows if some or all (Full) processes belonging to the guest had to wait for CPU resources."),
> + disabled: false,
> + style: {
> + paddingRight: '5px',
> + },
> + },
> + ],
> pveSelNode: me.pveSelNode,
> fieldTitles: ['Some', 'Full'],
> fields: ['pressurecpusome', 'pressurecpufull'],
> @@ -137,6 +147,16 @@ Ext.define('PVE.guest.Summary', {
> {
> xtype: 'proxmoxRRDChart',
> title: gettext('IO pressure'),
> + tools: [
> + {
> + glyph: 'xf05a@FontAwesome', // fa-info-circle
> + tooltip: gettext("Shows if some or all (Full) processes belonging to the guest had to wait for IO (disk & network) resources."),
> + disabled: false,
> + style: {
> + paddingRight: '5px',
> + },
> + },
> + ],
> pveSelNode: me.pveSelNode,
> fieldTitles: ['Some', 'Full'],
> fields: ['pressureiosome', 'pressureiofull'],
> @@ -147,6 +167,16 @@ Ext.define('PVE.guest.Summary', {
> {
> xtype: 'proxmoxRRDChart',
> title: gettext('Memory pressure'),
> + tools: [
> + {
> + glyph: 'xf05a@FontAwesome', // fa-info-circle
> + tooltip: gettext("Shows if some or all (Full) processes belonging to the guest had to wait for memory resources."),
> + disabled: false,
> + style: {
> + paddingRight: '5px',
> + },
> + },
> + ],
> pveSelNode: me.pveSelNode,
> fieldTitles: ['Some', 'Full'],
> fields: ['pressurememorysome', 'pressurememoryfull'],
* [pve-devel] [PATCH manager v3 12/14] ui: summaries: use titles for disk and network series
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (23 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 11/14] ui: nodesummary: guestsummary: add tooltip info buttons Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-21 12:40 ` Dominik Csapak
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 13/14] ui: ResourceStore: add memhost column Aaron Lauterer
` (10 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
They were missing and just showed the actual field names.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
www/manager6/node/Summary.js | 1 +
www/manager6/panel/GuestSummary.js | 2 ++
2 files changed, 3 insertions(+)
diff --git a/www/manager6/node/Summary.js b/www/manager6/node/Summary.js
index 7bd3324c..a7e54bf4 100644
--- a/www/manager6/node/Summary.js
+++ b/www/manager6/node/Summary.js
@@ -204,6 +204,7 @@ Ext.define('PVE.node.Summary', {
xtype: 'proxmoxRRDChart',
title: gettext('Network traffic'),
fields: ['netin', 'netout'],
+ fieldTitles: [gettext('Incoming'), gettext('Outgoing')],
store: rrdstore,
},
{
diff --git a/www/manager6/panel/GuestSummary.js b/www/manager6/panel/GuestSummary.js
index cf54f38e..22e4a551 100644
--- a/www/manager6/panel/GuestSummary.js
+++ b/www/manager6/panel/GuestSummary.js
@@ -115,6 +115,7 @@ Ext.define('PVE.guest.Summary', {
title: gettext('Network traffic'),
pveSelNode: me.pveSelNode,
fields: ['netin', 'netout'],
+ fieldTitles: [gettext('Incoming'), gettext('Outgoing')],
store: rrdstore,
},
{
@@ -122,6 +123,7 @@ Ext.define('PVE.guest.Summary', {
title: gettext('Disk IO'),
pveSelNode: me.pveSelNode,
fields: ['diskread', 'diskwrite'],
+ fieldTitles: [gettext('Reads'), gettext('Writes')],
store: rrdstore,
},
{
--
2.39.5
* Re: [pve-devel] [PATCH manager v3 12/14] ui: summaries: use titles for disk and network series
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 12/14] ui: summaries: use titles for disk and network series Aaron Lauterer
@ 2025-07-21 12:40 ` Dominik Csapak
0 siblings, 0 replies; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 12:40 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
On 7/15/25 16:32, Aaron Lauterer wrote:
> They were missing and just showed the actual field names.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> www/manager6/node/Summary.js | 1 +
> www/manager6/panel/GuestSummary.js | 2 ++
> 2 files changed, 3 insertions(+)
>
> diff --git a/www/manager6/node/Summary.js b/www/manager6/node/Summary.js
> index 7bd3324c..a7e54bf4 100644
> --- a/www/manager6/node/Summary.js
> +++ b/www/manager6/node/Summary.js
> @@ -204,6 +204,7 @@ Ext.define('PVE.node.Summary', {
> xtype: 'proxmoxRRDChart',
> title: gettext('Network traffic'),
> fields: ['netin', 'netout'],
> + fieldTitles: [gettext('Incoming'), gettext('Outgoing')],
> store: rrdstore,
> },
> {
> diff --git a/www/manager6/panel/GuestSummary.js b/www/manager6/panel/GuestSummary.js
> index cf54f38e..22e4a551 100644
> --- a/www/manager6/panel/GuestSummary.js
> +++ b/www/manager6/panel/GuestSummary.js
> @@ -115,6 +115,7 @@ Ext.define('PVE.guest.Summary', {
> title: gettext('Network traffic'),
> pveSelNode: me.pveSelNode,
> fields: ['netin', 'netout'],
> + fieldTitles: [gettext('Incoming'), gettext('Outgoing')],
> store: rrdstore,
> },
> {
> @@ -122,6 +123,7 @@ Ext.define('PVE.guest.Summary', {
> title: gettext('Disk IO'),
> pveSelNode: me.pveSelNode,
> fields: ['diskread', 'diskwrite'],
> + fieldTitles: [gettext('Reads'), gettext('Writes')],
> store: rrdstore,
> },
> {
* [pve-devel] [PATCH manager v3 13/14] ui: ResourceStore: add memhost column
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (24 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 12/14] ui: summaries: use titles for disk and network series Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 14/14] fix #6068: ui: utils: calculate and render host memory usage correctly Aaron Lauterer
` (9 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
It is populated for VMs and we need it for an accurate calculation of
the host memory usage of VMs.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
www/manager6/data/ResourceStore.js | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/www/manager6/data/ResourceStore.js b/www/manager6/data/ResourceStore.js
index 13af3b4e..d1f3fb63 100644
--- a/www/manager6/data/ResourceStore.js
+++ b/www/manager6/data/ResourceStore.js
@@ -167,6 +167,14 @@ Ext.define('PVE.data.ResourceStore', {
hidden: true,
width: 100,
},
+ memhost: {
+ header: gettext('Host Memory usage'),
+ type: 'integer',
+ renderer: PVE.Utils.render_mem_usage,
+ sortable: true,
+ hidden: true,
+ width: 100,
+ },
memuse: {
header: gettext('Memory usage') + ' %',
type: 'number',
--
2.39.5
* [pve-devel] [PATCH manager v3 14/14] fix #6068: ui: utils: calculate and render host memory usage correctly
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (25 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 13/14] ui: ResourceStore: add memhost column Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-21 12:52 ` Dominik Csapak
2025-07-15 14:32 ` [pve-devel] [PATCH storage v3 1/1] status: rrddata: use new pve-storage-9.0 rrd location if file is present Aaron Lauterer
` (8 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
First by using the new memhost field if the guest is of type qemu and the field is
numerical.
Second by checking in the render function against the 'value' that has
been calculated if we have percentages or bytes. Previously we always
checked the raw record mem data.
As a result, if the cluster is in a mixed PVE8 / PVE9 situation, for
example during a migration, we will not report any host memory usage, in
numbers or percent, as we don't get the memhost metric from the older
PVE8 hosts.
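The selection logic described above can be paraphrased in Python (the helper name and record shape are illustrative; the actual change is ExtJS code in PVE.Utils):

```python
def hostmem_fraction(data):
    """Return host memory usage as a fraction of maxmem.

    Python paraphrase of the PVE.Utils logic in this patch; the field
    names ('type', 'mem', 'memhost', 'maxmem') follow the resource
    store records, the function name is illustrative.
    """
    maxmem = data.get("maxmem") or 0
    if maxmem == 0:
        return -1
    # Prefer the new host-side PSS value for QEMU guests when present;
    # older PVE8 nodes in a mixed cluster do not report 'memhost'.
    memhost = data.get("memhost")
    if data.get("type") == "qemu" and isinstance(memhost, (int, float)):
        return memhost / maxmem
    return data.get("mem", 0) / maxmem
```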
Fixes: #6068 (Node Search tab incorrect Host memory usage %)
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
www/manager6/Utils.js | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 29334111..ed41714a 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1166,6 +1166,9 @@ Ext.define('PVE.Utils', {
return -1;
}
+ if (data.type === 'qemu' && Ext.isNumeric(data.memhost)) {
+ return data.memhost / maxmem;
+ }
return data.mem / maxmem;
},
@@ -1206,9 +1209,12 @@ Ext.define('PVE.Utils', {
var node = PVE.data.ResourceStore.getAt(index);
var maxmem = node.data.maxmem || 0;
- if (record.data.mem > 1) {
+ if (value > 1) {
// we got no percentage but bytes
let mem = record.data.mem;
+ if (record.data.type === 'qemu' && Ext.isNumeric(record.data.memhost)) {
+ mem = record.data.memhost;
+ }
if (!record.data.uptime || maxmem === 0 || !Ext.isNumeric(mem)) {
return '';
}
--
2.39.5
* Re: [pve-devel] [PATCH manager v3 14/14] fix #6068: ui: utils: calculate and render host memory usage correctly
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 14/14] fix #6068: ui: utils: calculate and render host memory usage correctly Aaron Lauterer
@ 2025-07-21 12:52 ` Dominik Csapak
0 siblings, 0 replies; 59+ messages in thread
From: Dominik Csapak @ 2025-07-21 12:52 UTC (permalink / raw)
To: Proxmox VE development discussion, Aaron Lauterer
IMHO this and the last patch should be combined, since they don't really
make sense without each other.
one comment inline:
On 7/15/25 16:32, Aaron Lauterer wrote:
> First by using the new memhost field if guest is of type qemu and the field is
> numerical.
> Second by checking in the render function against the 'value' that has
> been calculated if we have percentages or bytes. Previously we always
> checked the raw record mem data.
>
> As a result, if the cluster is in a mixed PVE8 / PVE9 situation, for
> example during a migration, we will not report any host memory usage, in
> numbers or percent, as we don't get the memhost metric from the older
> PVE8 hosts.
>
> Fixes: #6068 (Node Search tab incorrect Host memory usage %)
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> www/manager6/Utils.js | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index 29334111..ed41714a 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -1166,6 +1166,9 @@ Ext.define('PVE.Utils', {
> return -1;
> }
>
> + if (data.type === 'qemu' && Ext.isNumeric(data.memhost)) {
> + return data.memhost / maxmem;
> + }
> return data.mem / maxmem;
> },
>
> @@ -1206,9 +1209,12 @@ Ext.define('PVE.Utils', {
> var node = PVE.data.ResourceStore.getAt(index);
> var maxmem = node.data.maxmem || 0;
>
> - if (record.data.mem > 1) {
> + if (value > 1) {
looking at the code, this 'value' seems to be the calculated
'hostmemuse' value, which is calculated by 'calculate_hostmem_usage' and
i don't see how that could ever return a value > 1?
(it basically calculates 'memory' / 'host max memory' which will always
be >= 0 and <= 1)
so this change seems wrong to me?
> // we got no percentage but bytes
> let mem = record.data.mem;
> + if (record.data.type === 'qemu' && Ext.isNumeric(record.data.memhost)) {
> + mem = record.data.memhost;
> + }
> if (!record.data.uptime || maxmem === 0 || !Ext.isNumeric(mem)) {
> return '';
> }
* [pve-devel] [PATCH storage v3 1/1] status: rrddata: use new pve-storage-9.0 rrd location if file is present
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (26 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH manager v3 14/14] fix #6068: ui: utils: calculate and render host memory usage correctly Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH qemu-server v3 1/4] metrics: add pressure to metrics Aaron Lauterer
` (7 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
RFC:
* switch from pve9-storage to pve-storage-9.0 schema
src/PVE/API2/Storage/Status.pm | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index c172073..ad8c753 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -415,11 +415,10 @@ __PACKAGE__->register_method({
code => sub {
my ($param) = @_;
- return PVE::RRD::create_rrd_data(
- "pve2-storage/$param->{node}/$param->{storage}",
- $param->{timeframe},
- $param->{cf},
- );
+ my $path = "pve-storage-9.0/$param->{node}/$param->{storage}";
+ $path = "pve2-storage/$param->{node}/$param->{storage}"
+ if !-e "/var/lib/rrdcached/db/${path}";
+ return PVE::RRD::create_rrd_data($path, $param->{timeframe}, $param->{cf});
},
});
--
2.39.5
* [pve-devel] [PATCH qemu-server v3 1/4] metrics: add pressure to metrics
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (27 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH storage v3 1/1] status: rrddata: use new pve-storage-9.0 rrd location if file is present Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH qemu-server v3 2/4] vmstatus: add memhost for host view of vm mem consumption Aaron Lauterer
` (6 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
From: Folke Gleumes <f.gleumes@proxmox.com>
Originally-by: Folke Gleumes <f.gleumes@proxmox.com>
[AL:
* rebased on current master
* switch to new, more generic read_cgroup_pressure function
* add pressures to return properties
]
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
v2:
* added return properties
* reordered collection prior to the cpu collection, as it would be
skipped, especially when collected via `pvesh`
* added '* 1' to make sure we use numbers in the JSON -> is there a better
alternative for numbers that are not integers?
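read_cgroup_pressure is the Perl helper this patch relies on; a rough Python equivalent of parsing a kernel PSI file (e.g. `/sys/fs/cgroup/qemu.slice/<vmid>.scope/cpu.pressure`) might look like this, assuming the standard `some avg10=... avg60=... avg300=... total=...` PSI format:

```python
def parse_psi(text):
    """Parse Linux PSI output into {'some': {...}, 'full': {...}},
    with avg* values as floats and 'total' (microseconds) as int."""
    res = {}
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        kind = fields[0]  # 'some' or 'full'
        vals = {}
        for kv in fields[1:]:
            key, _, val = kv.partition("=")
            vals[key] = int(val) if key == "total" else float(val)
        res[kind] = vals
    return res
```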
src/PVE/QemuServer.pm | 39 +++++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index d36fd1d..9e2c621 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2526,6 +2526,36 @@ our $vmstatus_return_properties = {
type => 'boolean',
optional => 1,
},
+ pressurecpusome => {
+ description => "CPU Some pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+ pressurecpufull => {
+ description => "CPU Full pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+ pressureiosome => {
+ description => "IO Some pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+ pressureiofull => {
+ description => "IO Full pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+ pressurememorysome => {
+ description => "Memory Some pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+ pressurememoryfull => {
+ description => "Memory Full pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
};
my $last_proc_pid_stat;
@@ -2638,6 +2668,14 @@ sub vmstatus {
$d->{mem} = int(($pstat->{rss} / $pstat->{vsize}) * $d->{maxmem});
}
+ my $pressures = PVE::ProcFSTools::read_cgroup_pressure("qemu.slice/${vmid}.scope");
+ $d->{pressurecpusome} = $pressures->{cpu}->{some}->{avg10} * 1;
+ $d->{pressurecpufull} = $pressures->{cpu}->{full}->{avg10} * 1;
+ $d->{pressureiosome} = $pressures->{io}->{some}->{avg10} * 1;
+ $d->{pressureiofull} = $pressures->{io}->{full}->{avg10} * 1;
+ $d->{pressurememorysome} = $pressures->{memory}->{some}->{avg10} * 1;
+ $d->{pressurememoryfull} = $pressures->{memory}->{full}->{avg10} * 1;
+
my $old = $last_proc_pid_stat->{$pid};
if (!$old) {
$last_proc_pid_stat->{$pid} = {
@@ -2662,6 +2700,7 @@ sub vmstatus {
} else {
$d->{cpu} = $old->{cpu};
}
+
}
return $res if !$full;
--
2.39.5
* [pve-devel] [PATCH qemu-server v3 2/4] vmstatus: add memhost for host view of vm mem consumption
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (28 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH qemu-server v3 1/4] metrics: add pressure to metrics Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH qemu-server v3 3/4] vmstatus: switch mem stat to PSS of VM cgroup Aaron Lauterer
` (5 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
The mem field itself will switch from the outside view to the "inside"
view if the VM is reporting detailed memory usage information via the
ballooning device.
Since sometimes other processes belong to a VM too, for example swtpm,
we collect all PIDs belonging to the VM cgroup and fetch their PSS data
to account for shared libraries used.
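The PSS collection described here can be sketched in Python (function names are illustrative; unlike the patch, which dies if smaps_rollup cannot be opened, this sketch skips processes that vanish mid-scan):

```python
import re

def pss_bytes_from_rollup(lines):
    """Extract the Pss line from /proc/<pid>/smaps_rollup content and
    return it in bytes (the kernel reports kB)."""
    for line in lines:
        m = re.match(r"^Pss:\s+(\d+) kB$", line)
        if m:
            return int(m.group(1)) * 1024
    return 0

def cgroup_pss_bytes(vmid):
    """Sum the PSS of every PID in the VM's cgroup, mirroring the
    cgroup.procs/smaps_rollup loop in this patch."""
    total = 0
    with open(f"/sys/fs/cgroup/qemu.slice/{vmid}.scope/cgroup.procs") as procs:
        for pid in procs:
            try:
                with open(f"/proc/{pid.strip()}/smaps_rollup") as rollup:
                    total += pss_bytes_from_rollup(rollup)
            except FileNotFoundError:
                continue  # process exited between listing and reading
    return total
```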
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
v2:
* add memhost description to $vmstatus_return_properties
* reorder to run earlier before the cpu collection. Otherwise it might
be skipped on the first call or when using `pvesh` if the cpu
collection triggers 'next'.
RFC:
* collect memory info for all processes in cgroup directly without too
generic helper function
src/PVE/QemuServer.pm | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 9e2c621..630cef6 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2426,6 +2426,12 @@ our $vmstatus_return_properties = {
optional => 1,
renderer => 'bytes',
},
+ memhost => {
+ description => "Current memory usage on the host.",
+ type => 'integer',
+ optional => 1,
+ renderer => 'bytes',
+ },
maxdisk => {
description => "Root disk size in bytes.",
type => 'integer',
@@ -2616,6 +2622,7 @@ sub vmstatus {
$d->{uptime} = 0;
$d->{cpu} = 0;
$d->{mem} = 0;
+ $d->{memhost} = 0;
$d->{netout} = 0;
$d->{netin} = 0;
@@ -2668,6 +2675,24 @@ sub vmstatus {
$d->{mem} = int(($pstat->{rss} / $pstat->{vsize}) * $d->{maxmem});
}
+ my $fh = IO::File->new("/sys/fs/cgroup/qemu.slice/${vmid}.scope/cgroup.procs", "r");
+ if ($fh) {
+ while (my $childPid = <$fh>) {
+ chomp($childPid);
+ open(my $SMAPS_FH, '<', "/proc/$childPid/smaps_rollup")
+ or die "failed to open PSS memory-stat from process - $!\n";
+
+ while (my $line = <$SMAPS_FH>) {
+ if ($line =~ m/^Pss:\s+([0-9]+) kB$/) {
+ $d->{memhost} = $d->{memhost} + int($1) * 1024;
+ last;
+ }
+ }
+ close $SMAPS_FH;
+ }
+ }
+ close($fh);
+
my $pressures = PVE::ProcFSTools::read_cgroup_pressure("qemu.slice/${vmid}.scope");
$d->{pressurecpusome} = $pressures->{cpu}->{some}->{avg10} * 1;
$d->{pressurecpufull} = $pressures->{cpu}->{full}->{avg10} * 1;
@@ -2700,7 +2725,6 @@ sub vmstatus {
} else {
$d->{cpu} = $old->{cpu};
}
-
}
return $res if !$full;
--
2.39.5
* [pve-devel] [PATCH qemu-server v3 3/4] vmstatus: switch mem stat to PSS of VM cgroup
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (29 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH qemu-server v3 2/4] vmstatus: add memhost for host view of vm mem consumption Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH qemu-server v3 4/4] rrddata: use new pve-vm-9.0 rrd location if file is present Aaron Lauterer
` (4 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
Instead of RSS, let's use the same PSS values as in the dedicated host
view by default, in case this value is not overwritten by the balloon
info.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
v2:
* follow reorder of memhost collection before the cpu collection, which might
trigger the next iteration of the loop in some situations
src/PVE/QemuServer.pm | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/src/PVE/QemuServer.pm b/src/PVE/QemuServer.pm
index 630cef6..abbef56 100644
--- a/src/PVE/QemuServer.pm
+++ b/src/PVE/QemuServer.pm
@@ -2671,10 +2671,6 @@ sub vmstatus {
$d->{uptime} = int(($uptime - $pstat->{starttime}) / $cpuinfo->{user_hz});
- if ($pstat->{vsize}) {
- $d->{mem} = int(($pstat->{rss} / $pstat->{vsize}) * $d->{maxmem});
- }
-
my $fh = IO::File->new("/sys/fs/cgroup/qemu.slice/${vmid}.scope/cgroup.procs", "r");
if ($fh) {
while (my $childPid = <$fh>) {
@@ -2693,6 +2689,8 @@ sub vmstatus {
}
close($fh);
+ $d->{mem} = $d->{memhost};
+
my $pressures = PVE::ProcFSTools::read_cgroup_pressure("qemu.slice/${vmid}.scope");
$d->{pressurecpusome} = $pressures->{cpu}->{some}->{avg10} * 1;
$d->{pressurecpufull} = $pressures->{cpu}->{full}->{avg10} * 1;
--
2.39.5
* [pve-devel] [PATCH qemu-server v3 4/4] rrddata: use new pve-vm-9.0 rrd location if file is present
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (30 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH qemu-server v3 3/4] vmstatus: switch mem stat to PSS of VM cgroup Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH container v3 1/2] metrics: add pressures to metrics Aaron Lauterer
` (3 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
RFC:
* switch from pve9-vm to pve-vm-9.0 schema
src/PVE/API2/Qemu.pm | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/Qemu.pm b/src/PVE/API2/Qemu.pm
index 2e6358e..2867a53 100644
--- a/src/PVE/API2/Qemu.pm
+++ b/src/PVE/API2/Qemu.pm
@@ -1628,9 +1628,9 @@ __PACKAGE__->register_method({
code => sub {
my ($param) = @_;
- return PVE::RRD::create_rrd_graph(
- "pve2-vm/$param->{vmid}", $param->{timeframe}, $param->{ds}, $param->{cf},
- );
+ my $path = "pve-vm-9.0/$param->{vmid}";
+ $path = "pve2-vm/$param->{vmid}" if !-e "/var/lib/rrdcached/db/${path}";
+ return PVE::RRD::create_rrd_graph($path, $param->{timeframe}, $param->{cf});
},
});
@@ -1672,8 +1672,9 @@ __PACKAGE__->register_method({
code => sub {
my ($param) = @_;
- return PVE::RRD::create_rrd_data("pve2-vm/$param->{vmid}", $param->{timeframe},
- $param->{cf});
+ my $path = "pve-vm-9.0/$param->{vmid}";
+ $path = "pve2-vm/$param->{vmid}" if !-e "/var/lib/rrdcached/db/${path}";
+ return PVE::RRD::create_rrd_data($path, $param->{timeframe}, $param->{cf});
},
});
--
2.39.5
* [pve-devel] [PATCH container v3 1/2] metrics: add pressures to metrics
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (31 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH qemu-server v3 4/4] rrddata: use new pve-vm-9.0 rrd location if file is present Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-15 14:32 ` [pve-devel] [PATCH container v3 2/2] rrddata: use new pve-vm-9.0 rrd location if file is present Aaron Lauterer
` (2 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
From: Folke Gleumes <f.gleumes@proxmox.com>
Originally-by: Folke Gleumes <f.gleumes@proxmox.com>
[AL:
* rebased on current master
* switch to new, more generic read_cgroup_pressure function
* add pressures to return properties
]
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since:
v2:
* add return properties for pressures
* reorder to run before cpu info collection, otherwise that might
trigger 'next', skipping the pressure collection. For example when
using `pvesh` for the 'current' API endpoint
src/PVE/LXC.pm | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index ffedcb9..2932c77 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -227,6 +227,32 @@ our $vmstatus_return_properties = {
optional => 1,
default => 0,
},
+ pressurecpusome => {
+ description => "CPU Some pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+ pressureiosome => {
+ description => "IO Some pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+ pressureiofull => {
+ description => "IO Full pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+ pressurememorysome => {
+ description => "Memory Some pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+ pressurememoryfull => {
+ description => "Memory Full pressure average over the last 10 seconds.",
+ type => 'number',
+ optional => 1,
+ },
+
};
sub vmstatus {
@@ -329,6 +355,14 @@ sub vmstatus {
$d->{diskwrite} = 0;
}
+ my $pressures = PVE::ProcFSTools::read_cgroup_pressure("lxc/${vmid}");
+ $d->{pressurecpusome} = $pressures->{cpu}{some}{avg10};
+ $d->{pressurecpufull} = $pressures->{cpu}{full}{avg10};
+ $d->{pressureiosome} = $pressures->{io}{some}{avg10};
+ $d->{pressureiofull} = $pressures->{io}{full}{avg10};
+ $d->{pressurememorysome} = $pressures->{memory}{some}{avg10};
+ $d->{pressurememoryfull} = $pressures->{memory}{full}{avg10};
+
if (defined(my $cpu = $cgroups->get_cpu_stat())) {
# Total time (in milliseconds) used up by the cpu.
my $used_ms = $cpu->{utime} + $cpu->{stime};
--
2.39.5
* [pve-devel] [PATCH container v3 2/2] rrddata: use new pve-vm-9.0 rrd location if file is present
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (32 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH container v3 1/2] metrics: add pressures to metrics Aaron Lauterer
@ 2025-07-15 14:32 ` Aaron Lauterer
2025-07-23 10:15 ` [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Laurențiu Leahu-Vlăducu
2025-07-26 1:13 ` [pve-devel] SUPERSEEDED " Aaron Lauterer
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-15 14:32 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
Notes:
changes since RFC:
* switch from pve9-vm to pve-vm-9.0 schema
src/PVE/API2/LXC.pm | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 28f7fdd..fc59ec9 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -712,9 +712,9 @@ __PACKAGE__->register_method({
code => sub {
my ($param) = @_;
- return PVE::RRD::create_rrd_graph(
- "pve2-vm/$param->{vmid}", $param->{timeframe}, $param->{ds}, $param->{cf},
- );
+ my $path = "pve-vm-9.0/$param->{vmid}";
+ $path = "pve2-vm/$param->{vmid}" if !-e "/var/lib/rrdcached/db/${path}";
+ return PVE::RRD::create_rrd_graph($path, $param->{timeframe}, $param->{cf});
},
});
@@ -756,8 +756,9 @@ __PACKAGE__->register_method({
code => sub {
my ($param) = @_;
- return PVE::RRD::create_rrd_data("pve2-vm/$param->{vmid}", $param->{timeframe},
- $param->{cf});
+ my $path = "pve-vm-9.0/$param->{vmid}";
+ $path = "pve2-vm/$param->{vmid}" if !-e "/var/lib/rrdcached/db/${path}";
+ return PVE::RRD::create_rrd_data($path, $param->{timeframe}, $param->{cf});
},
});
--
2.39.5
* Re: [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (33 preceding siblings ...)
2025-07-15 14:32 ` [pve-devel] [PATCH container v3 2/2] rrddata: use new pve-vm-9.0 rrd location if file is present Aaron Lauterer
@ 2025-07-23 10:15 ` Laurențiu Leahu-Vlăducu
2025-07-26 1:13 ` [pve-devel] SUPERSEEDED " Aaron Lauterer
35 siblings, 0 replies; 59+ messages in thread
From: Laurențiu Leahu-Vlăducu @ 2025-07-23 10:15 UTC (permalink / raw)
To: pve-devel
I tested this patch series on a fully up-to-date Proxmox VE 8.4.5
cluster of 3 nodes which I then updated to 9.0 BETA. I tested:
- Having both patched and unpatched nodes
- Migrating VMs from patched nodes to unpatched nodes.
- Having only patched nodes.
My test results and remarks:
1. The updated graphs work as expected. In some cases, I noticed that
the data was aggregated differently when looking at a certain node
depending on the node I was connected to (e.g. looking at data of node 1
from node 1 vs node 2 vs node 3) - but these differences were also
present on unpatched nodes as well (thus unrelated to this patch series).
2. Patching a node made the data appear in the new graphs. Migrating a
VM from node 1 (patched) to node 2 (unpatched) and looking at the node 2
data from node 1 also correctly shows the data before the migration (but
obviously does not generate new data, since node 2 was unpatched). In
other words, it works as expected.
3. Dominik already gave feedback on the tooltips, but I want to mention
one more thing that I noticed: the tooltips don't work when using a
touch screen (tested on latest Firefox and Chromium). This is unrelated
to your patch, since it also doesn't work in the graphs when clicking on
the data points (also on unpatched nodes). However, we should either
explain the graphs differently, or fix the tooltips on larger touch
devices (e.g. tablets).
4. The Hour, Day, Week and Month graphs now show the time spans more
accurately than before (e.g. an hour is really an hour, more or less),
but not perfectly accurate (e.g. an hour actually shows 59 minutes).
However, the Year graph seems to be off by a few months on my side (e.g.
currently shows graphs since mid October 2024 instead of mid July 2024).
5. I'm a bit worried that the "Summary" tab starts getting rather
crowded, with no way to change this (if desired). I think this might not
be a huge issue yet, but if we want to add even more information in the
future, it will probably be annoying, since there is currently:
- no way to minimize/maximize graphs
- no way to resize graphs
- no way to move graphs around (e.g. in case the user in mainly
interested in one or a few graphs)
I'm aware that such changes add additional complexity, but if we want to
add even more information in the future, this might eventually become
necessary. Either that, or moving some of the information to another
tab, e.g. to a dedicated "Pressure" tab separated from "Summary" (but
this would mean not being able to visualize everything at once, which is
also not great).
Laurențiu
On 15.07.25 16:31, Aaron Lauterer wrote:
> This patch series does a few things. It expands the RRD format for nodes and
> VMs. For all types (nodes, VMs, storage) we adjust the aggregation to align
> them with the way they are done on the Backup Server. Therefore, we have new
> RRD definitions for all 3 types.
>
> New values are added for nodes and VMs. In particular:
>
> Nodes:
> * memfree
> * arcsize
> * pressures:
> * cpu some
> * io some
> * io full
> * mem some
> * mem full
>
> VMs:
> * memhost (memory consumption of all processes in the guests cgroup, host view)
> * pressures:
> * cpu some
> * cpu full
> * io some
> * io full
> * mem some
> * mem full
>
> The change in RRD columns and aggregation means that we need new RRD files. To
> not lose old RRD data, we need to migrate the old RRD files to the ones with
> the new schema. Some initial performance tests showed that migrating 10k VM
> RRD files took ~2m40s single threaded. This is way too long to do it within the
> pmxcfs itself. Therefore this will be a dedicated step. I wrote a small rust
> tool that binds to librrd to do the migration.
>
> We could include it in a post-install step when upgrading to PVE 9.
>
> This also means, that we need to handle the situation of new and old RRD
> files and formats. Therefore we introduce new keys by which the metrics
> are broadcast in a cluster. Up until now (pre PVE9), it is in the format of
> 'pve2-{type}/{resource id}'.
> Having the version number this early in the string makes it tough to match
> against newer ones, especially in the C code of the pmxcfs. To make it easier
> in the future, we change the key format to 'pve-{type}-{version}/{resource id}'.
> This way, we can fuzzy match against unknown 'pve-{type}-{version}' in the C
> code too and handle those situations better.
>
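The quoted key-format change can be sketched as a matcher (Python for illustration; the real implementation is C code in pmxcfs, and the exact patterns here are assumptions based on the formats named above):

```python
import re

def parse_metric_key(key):
    """Split a status key into (type, version, resource_id).

    Handles both the legacy 'pve2-{type}/{id}' (and 'pve2.3-vm/{id}')
    keys and the new 'pve-{type}-{version}/{id}' layout, where the
    version sits after the type and is therefore easy to fuzzy-match.
    Returns None for unrecognized keys.
    """
    m = re.match(r"^pve-([a-z]+)-(\d+\.\d+)/(.+)$", key)
    if m:
        return m.group(1), m.group(2), m.group(3)
    m = re.match(r"^pve(2(?:\.3)?)-([a-z]+)/(.+)$", key)
    if m:
        return m.group(2), m.group(1), m.group(3)
    return None
```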
> The result is, that to avoid breaking changes, we are only allowed to add new
> columns, but not modify or remove existing columns!
>
>
> To avoid missing data and key errors in the journal, we need to ship some
> changes to PVE 8 that can handle the new format sent out by pvestatd. Those
> patches are the first in the series and are marked with a "-pve8" postfix in the
> repo name.
> Those patches are present twice, as we try to keep the same change history on
> the PVE9 branches as well.
>
>
> On the GUI side, we switch memory graphs to stacked area graphs and for VMs
> we also have a dedicated line for the memory consumption as the host sees it.
> Because the current memory view of a VM will switch to the internal guest view,
> if we get detailed infos via the ballooning device.
> To make those slightly more complicated graphs possible, we need to adapt
> RRDChart.js in the widget-toolkit to allow for detailed overrides. Additionally
> we introduce info buttons with tooltips to give users a quick hint what certain
> graphs represent.
>
> While we are at it, we can also fix bug #6068 (Node Search tab incorrect Host
> memory usage %) by switching to memhost if available and one wrong if check.
>
>
> As a side note, now that we got pressure graphs, we could start thinking about
> dropping the server load and IO wait graphs. Those are not very specific and
> mash many different metrics into a single one.
>
>
> Release notes:
> We should probably mention in the release notes, that due to the changed
> aggregation settings, it is expected that the resulting RRD files might have
> some data points that the originals didn't have. We observed that in some
> situations we could get a data point one time step earlier than before.
> This is most likely due to how RRD recalculates the aggregated data with the
> different resolution.
>
>
> Plans:
> * pve8to9:
> * have a check how many RRD files are present and verify that there is enough
> space on the root FS
>
>
> How to test:
> 1. build pve-cluster on PVE8
> 2. build the -pve8 patches (cluster & manager) and install them on all PVE8 nodes
> 3. Upgrade the first node to PVE9/trixie and install all the other patches:
> build all the other repositories, copy the .deb files over and then ideally
> use something like the following to make sure that any dependency will be
> used from the deb files, and not the apt repositories.
> ```
> apt install ./*.deb --reinstall --allow-downgrades -y
> ```
> 4. build the migration tool with cargo and copy the binary to the nodes for now.
> 5. run the migration tool on the first host
> 6. continue running the migration tool on the other nodes one by one
>
>
> High level changes since:
> v2:
> * several bugfixes that I found, especially regarding pressure and memory
> collection for CTs and VMs
> * add missing return property descriptions for pressures
> * added all the GUI changes
>
> v1:
> * refactored the patches as they were a bit of a mess in v1, sorry for that.
> Now we have distinct patches for pve8 for both affected repos (cluster & manager)
>
> RFC:
> * drop membuffer and memcached in favor of already present memused and memavailable
> * switch from pve9-{type} to pve-{type}-9.0 schema in all places
> * add patch for PVE8 & 9 that handles different keys in live status to avoid
> question marks in the UI
>
> cluster-pve8:
>
> Aaron Lauterer (2):
> cfs status.c: drop old pve2-vm rrd schema support
> status: handle new metrics update data
>
> src/pmxcfs/status.c | 85 ++++++++++++++++++++++++++++-----------------
> 1 file changed, 53 insertions(+), 32 deletions(-)
>
>
> manager-pve8:
>
> Aaron Lauterer (2):
> api2tools: drop old VM rrd schema
> api2tools: extract stats: handle existence of new pve-{type}-9.0 data
>
> PVE/API2Tools.pm | 44 ++++++++++++++++++++++++--------------------
> 1 file changed, 24 insertions(+), 20 deletions(-)
>
>
> pve9-rrd-migration-tool:
>
> Aaron Lauterer (1):
> introduce rrd migration tool for pve8 -> pve9
>
>
> cluster:
>
> Aaron Lauterer (4):
> cfs status.c: drop old pve2-vm rrd schema support
> status: handle new metrics update data
> status: introduce new pve-{type}- rrd and metric format
> rrd: adapt to new RRD format with different aggregation windows
>
> src/PVE/RRD.pm | 52 ++++++--
> src/pmxcfs/status.c | 318 ++++++++++++++++++++++++++++++++++++++------
> 2 files changed, 317 insertions(+), 53 deletions(-)
>
>
> common:
>
> Folke Gleumes (2):
> fix error in pressure parsing
> add function to retrieve pressures from cgroup
>
> src/PVE/ProcFSTools.pm | 13 ++++++++++++-
> 1 file changed, 12 insertions(+), 1 deletion(-)
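The pressure values added above come from the kernel's PSI interface (files such as /proc/pressure/cpu, or cpu.pressure inside a cgroup), where a line looks like `some avg10=0.12 avg60=0.05 avg300=0.01 total=12345`. The series implements the parsing in Perl in ProcFSTools.pm; this standalone helper is only an illustrative sketch of the same idea:

```javascript
// Parse one PSI line ("some ..." or "full ...") into an object of numbers.
// Illustrative only; the field names mirror the kernel's PSI format.
function parsePressureLine(line) {
    const [scope, ...fields] = line.trim().split(/\s+/);
    const entry = { scope };
    for (const field of fields) {
        const [key, value] = field.split('=');
        entry[key] = Number(value);
    }
    return entry;
}
```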
>
>
> widget-toolkit:
>
> Aaron Lauterer (2):
> rrdchart: allow to override the series object
> rrdchart: use reference for undo button
>
> src/panel/RRDChart.js | 56 +++++++++++++++++++++++++++++++++----------
> 1 file changed, 43 insertions(+), 13 deletions(-)
>
>
> manager:
>
> Aaron Lauterer (13):
> api2tools: drop old VM rrd schema
> api2tools: extract stats: handle existence of new pve-{type}-9.0 data
> pvestatd: collect and distribute new pve-{type}-9.0 metrics
> api: nodes: rrd and rrddata add decade option and use new pve-node-9.0
> rrd files
> api2tools: extract_vm_status add new vm memhost column
> ui: rrdmodels: add new columns and update existing
> ui: node summary: use stacked memory graph with zfs arc
> ui: GuestStatusView: add memhost for VM guests
> ui: GuestSummary: memory switch to stacked and add hostmem
> ui: nodesummary: guestsummary: add tooltip info buttons
> ui: summaries: use titles for disk and network series
> ui: ResourceStore: add memhost column
> fix #6068: ui: utils: calculate and render host memory usage correctly
>
> Folke Gleumes (1):
> ui: add pressure graphs to node and guest summary
>
> PVE/API2/Cluster.pm | 7 +
> PVE/API2/Nodes.pm | 16 +-
> PVE/API2Tools.pm | 47 ++--
> PVE/Service/pvestatd.pm | 342 +++++++++++++++++++-------
> www/manager6/Utils.js | 8 +-
> www/manager6/data/ResourceStore.js | 8 +
> www/manager6/data/model/RRDModels.js | 44 +++-
> www/manager6/node/Summary.js | 79 +++++-
> www/manager6/panel/GuestStatusView.js | 18 +-
> www/manager6/panel/GuestSummary.js | 88 ++++++-
> 10 files changed, 528 insertions(+), 129 deletions(-)
>
>
> storage:
>
> Aaron Lauterer (1):
> status: rrddata: use new pve-storage-9.0 rrd location if file is
> present
>
> src/PVE/API2/Storage/Status.pm | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
>
> qemu-server:
>
> Aaron Lauterer (3):
> vmstatus: add memhost for host view of vm mem consumption
> vmstatus: switch mem stat to PSS of VM cgroup
> rrddata: use new pve-vm-9.0 rrd location if file is present
>
> Folke Gleumes (1):
> metrics: add pressure to metrics
>
> src/PVE/API2/Qemu.pm | 11 ++++----
> src/PVE/QemuServer.pm | 65 +++++++++++++++++++++++++++++++++++++++++--
> 2 files changed, 69 insertions(+), 7 deletions(-)
>
>
> container:
>
> Aaron Lauterer (1):
> rrddata: use new pve-vm-9.0 rrd location if file is present
>
> Folke Gleumes (1):
> metrics: add pressures to metrics
>
> src/PVE/API2/LXC.pm | 11 ++++++-----
> src/PVE/LXC.pm | 34 ++++++++++++++++++++++++++++++++++
> 2 files changed, 40 insertions(+), 5 deletions(-)
>
>
> Summary over all repositories:
> 21 files changed, 1090 insertions(+), 265 deletions(-)
>
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 59+ messages in thread
* [pve-devel] SUPERSEDED Re: [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs
2025-07-15 14:31 [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Aaron Lauterer
` (34 preceding siblings ...)
2025-07-23 10:15 ` [pve-devel] [PATCH many v3 00/34] Expand and migrate RRD data and add/change summary graphs Laurențiu Leahu-Vlăducu
@ 2025-07-26 1:13 ` Aaron Lauterer
35 siblings, 0 replies; 59+ messages in thread
From: Aaron Lauterer @ 2025-07-26 1:13 UTC (permalink / raw)
To: pve-devel
This series has been superseded by version 4:
https://lore.proxmox.com/pve-devel/20250726010626.1496866-1-a.lauterer@proxmox.com/T/#t