* [pbs-devel] [PATCH proxmox v9 1/1] proxmox-metrics: send_data_to_channels: change from slice to IntoIterator
2022-06-10 11:17 [pbs-devel] [PATCH proxmox/proxmox-backup v9] add metrics server capability Dominik Csapak
@ 2022-06-10 11:17 ` Dominik Csapak
2022-06-13 8:04 ` [pbs-devel] applied: " Wolfgang Bumiller
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 1/7] pbs-api-types: add metrics api types Dominik Csapak
` (7 subsequent siblings)
8 siblings, 1 reply; 11+ messages in thread
From: Dominik Csapak @ 2022-06-10 11:17 UTC (permalink / raw)
To: pbs-devel
which is a bit more generic and allows e.g. the result of a map() to be
passed here
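For illustration, a minimal usage sketch (not part of this patch; the
surrounding `values` and `channel_list` bindings are assumed context)
of what the generic signature allows, e.g. feeding the channels out of
a Vec of (Metrics, String) tuples via map():

    // values: Vec<Arc<MetricsData>>, channel_list: Vec<(Metrics, String)>
    let results = proxmox_metrics::send_data_to_channels(
        &values,
        channel_list.iter().map(|(metrics, _name)| metrics),
    )
    .await;
    for res in results {
        if let Err(err) = res {
            log::error!("sending metrics failed: {}", err);
        }
    }

(inside a spawned task a named fn may be needed instead of the closure
for lifetime reasons, see the note in patch 4/7 of this series)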
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
proxmox-metrics/src/lib.rs | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/proxmox-metrics/src/lib.rs b/proxmox-metrics/src/lib.rs
index a55f0af..000cb39 100644
--- a/proxmox-metrics/src/lib.rs
+++ b/proxmox-metrics/src/lib.rs
@@ -69,11 +69,12 @@ impl MetricsData {
}
/// Helper to send a list of [`MetricsData`] to a list of [`Metrics`].
-pub async fn send_data_to_channels(
+pub async fn send_data_to_channels<'a, I: IntoIterator<Item = &'a Metrics>>(
values: &[Arc<MetricsData>],
- connections: &[Metrics],
+ connections: I,
) -> Vec<Result<(), Error>> {
- let mut futures = Vec::with_capacity(connections.len());
+ let connections = connections.into_iter();
+ let mut futures = Vec::with_capacity(connections.size_hint().0);
for connection in connections {
futures.push(async move {
for data in values {
--
2.30.2
* [pbs-devel] applied: [PATCH proxmox v9 1/1] proxmox-metrics: send_data_to_channels: change from slice to IntoIterator
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox v9 1/1] proxmox-metrics: send_data_to_channels: change from slice to IntoIterator Dominik Csapak
@ 2022-06-13 8:04 ` Wolfgang Bumiller
0 siblings, 0 replies; 11+ messages in thread
From: Wolfgang Bumiller @ 2022-06-13 8:04 UTC (permalink / raw)
To: Dominik Csapak; +Cc: pbs-devel
applied and bumped metrics to 0.2
(will fixup the Cargo.toml entry in the other patches)
On Fri, Jun 10, 2022 at 01:17:50PM +0200, Dominik Csapak wrote:
> which is a bit more generic and allows e.g. the result of a map() to be
> passed here
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> proxmox-metrics/src/lib.rs | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/proxmox-metrics/src/lib.rs b/proxmox-metrics/src/lib.rs
> index a55f0af..000cb39 100644
> --- a/proxmox-metrics/src/lib.rs
> +++ b/proxmox-metrics/src/lib.rs
> @@ -69,11 +69,12 @@ impl MetricsData {
> }
>
> /// Helper to send a list of [`MetricsData`] to a list of [`Metrics`].
> -pub async fn send_data_to_channels(
> +pub async fn send_data_to_channels<'a, I: IntoIterator<Item = &'a Metrics>>(
> values: &[Arc<MetricsData>],
> - connections: &[Metrics],
> + connections: I,
> ) -> Vec<Result<(), Error>> {
> - let mut futures = Vec::with_capacity(connections.len());
> + let connections = connections.into_iter();
> + let mut futures = Vec::with_capacity(connections.size_hint().0);
> for connection in connections {
> futures.push(async move {
> for data in values {
> --
> 2.30.2
* [pbs-devel] [PATCH proxmox-backup v9 1/7] pbs-api-types: add metrics api types
2022-06-10 11:17 [pbs-devel] [PATCH proxmox/proxmox-backup v9] add metrics server capability Dominik Csapak
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox v9 1/1] proxmox-metrics: send_data_to_channels: change from slice to IntoIterator Dominik Csapak
@ 2022-06-10 11:17 ` Dominik Csapak
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 2/7] pbs-config: add metrics config class Dominik Csapak
` (6 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2022-06-10 11:17 UTC (permalink / raw)
To: pbs-devel
InfluxDbUdp and InfluxDbHttp for now
also introduces schemas for host:port combinations and HTTP(s) URLs
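For illustration only (not part of the patch), values the new formats
are meant to accept, per the regexes introduced below:

    HOST_PORT_SCHEMA: "metrics.example.com:8089", "192.168.1.2:8089",
                      "[fd80::1]:8089" (a port is required)
    HTTP_URL_SCHEMA:  "https://influxdb.example.com:8086/api/v2/write",
                      "http://10.0.0.1" (port and path are optional)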
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
pbs-api-types/src/lib.rs | 17 ++++
pbs-api-types/src/metrics.rs | 148 +++++++++++++++++++++++++++++++++++
2 files changed, 165 insertions(+)
create mode 100644 pbs-api-types/src/metrics.rs
diff --git a/pbs-api-types/src/lib.rs b/pbs-api-types/src/lib.rs
index d9c8cee1..70c9ec45 100644
--- a/pbs-api-types/src/lib.rs
+++ b/pbs-api-types/src/lib.rs
@@ -120,6 +120,9 @@ pub use traffic_control::*;
mod zfs;
pub use zfs::*;
+mod metrics;
+pub use metrics::*;
+
#[rustfmt::skip]
#[macro_use]
mod local_macros {
@@ -131,6 +134,7 @@ mod local_macros {
macro_rules! DNS_ALIAS_NAME {
() => (concat!(r"(?:(?:", DNS_ALIAS_LABEL!() , r"\.)*", DNS_ALIAS_LABEL!(), ")"))
}
+ macro_rules! PORT_REGEX_STR { () => (r"(?:[0-9]{1,4}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])") }
}
const_regex! {
@@ -144,6 +148,8 @@ const_regex! {
pub DNS_NAME_REGEX = concat!(r"^", DNS_NAME!(), r"$");
pub DNS_ALIAS_REGEX = concat!(r"^", DNS_ALIAS_NAME!(), r"$");
pub DNS_NAME_OR_IP_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE!(), r")$");
+ pub HOST_PORT_REGEX = concat!(r"^(?:", DNS_NAME!(), "|", IPRE_BRACKET!(), "):", PORT_REGEX_STR!() ,"$");
+ pub HTTP_URL_REGEX = concat!(r"^https?://(?:(?:(?:", DNS_NAME!(), "|", IPRE_BRACKET!(), ")(?::", PORT_REGEX_STR!() ,")?)|", IPV6RE!(),")(?:/[^\x00-\x1F\x7F]*)?$");
pub SHA256_HEX_REGEX = r"^[a-f0-9]{64}$"; // fixme: define in common_regex ?
@@ -201,6 +207,8 @@ pub const SYSTEMD_DATETIME_FORMAT: ApiStringFormat =
pub const HOSTNAME_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&HOSTNAME_REGEX);
pub const OPENSSL_CIPHERS_TLS_FORMAT: ApiStringFormat =
ApiStringFormat::Pattern(&OPENSSL_CIPHERS_REGEX);
+pub const HOST_PORT_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&HOST_PORT_REGEX);
+pub const HTTP_URL_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&HTTP_URL_REGEX);
pub const DNS_ALIAS_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&DNS_ALIAS_REGEX);
@@ -244,6 +252,15 @@ pub const DNS_NAME_OR_IP_SCHEMA: Schema = StringSchema::new("DNS name or IP addr
.format(&DNS_NAME_OR_IP_FORMAT)
.schema();
+pub const HOST_PORT_SCHEMA: Schema =
+ StringSchema::new("host:port combination (Host can be DNS name or IP address).")
+ .format(&HOST_PORT_FORMAT)
+ .schema();
+
+pub const HTTP_URL_SCHEMA: Schema = StringSchema::new("HTTP(s) url with optional port.")
+ .format(&HTTP_URL_FORMAT)
+ .schema();
+
pub const NODE_SCHEMA: Schema = StringSchema::new("Node name (or 'localhost')")
.format(&HOSTNAME_FORMAT)
.schema();
diff --git a/pbs-api-types/src/metrics.rs b/pbs-api-types/src/metrics.rs
new file mode 100644
index 00000000..f5cfe95d
--- /dev/null
+++ b/pbs-api-types/src/metrics.rs
@@ -0,0 +1,148 @@
+use serde::{Deserialize, Serialize};
+
+use crate::{
+ HOST_PORT_SCHEMA, HTTP_URL_SCHEMA, PROXMOX_SAFE_ID_FORMAT, SINGLE_LINE_COMMENT_SCHEMA,
+};
+use proxmox_schema::{api, Schema, StringSchema, Updater};
+
+pub const METRIC_SERVER_ID_SCHEMA: Schema = StringSchema::new("Metrics Server ID.")
+ .format(&PROXMOX_SAFE_ID_FORMAT)
+ .min_length(3)
+ .max_length(32)
+ .schema();
+
+pub const INFLUXDB_BUCKET_SCHEMA: Schema = StringSchema::new("InfluxDB Bucket.")
+ .format(&PROXMOX_SAFE_ID_FORMAT)
+ .min_length(3)
+ .max_length(32)
+ .default("proxmox")
+ .schema();
+
+pub const INFLUXDB_ORGANIZATION_SCHEMA: Schema = StringSchema::new("InfluxDB Organization.")
+ .format(&PROXMOX_SAFE_ID_FORMAT)
+ .min_length(3)
+ .max_length(32)
+ .default("proxmox")
+ .schema();
+
+fn return_true() -> bool {
+ true
+}
+
+fn is_true(b: &bool) -> bool {
+ *b
+}
+
+#[api(
+ properties: {
+ name: {
+ schema: METRIC_SERVER_ID_SCHEMA,
+ },
+ enable: {
+ type: bool,
+ optional: true,
+ default: true,
+ },
+ host: {
+ schema: HOST_PORT_SCHEMA,
+ },
+ mtu: {
+ type: u16,
+ optional: true,
+ default: 1500,
+ },
+ comment: {
+ optional: true,
+ schema: SINGLE_LINE_COMMENT_SCHEMA,
+ },
+ },
+)]
+#[derive(Serialize, Deserialize, Updater)]
+#[serde(rename_all = "kebab-case")]
+/// InfluxDB Server (UDP)
+pub struct InfluxDbUdp {
+ #[updater(skip)]
+ pub name: String,
+ #[serde(default = "return_true", skip_serializing_if = "is_true")]
+ #[updater(serde(skip_serializing_if = "Option::is_none"))]
+ /// Enables or disables the metrics server
+ pub enable: bool,
+ /// The host:port combination
+ pub host: String,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ /// The MTU
+ pub mtu: Option<u16>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub comment: Option<String>,
+}
+
+#[api(
+ properties: {
+ name: {
+ schema: METRIC_SERVER_ID_SCHEMA,
+ },
+ enable: {
+ type: bool,
+ optional: true,
+ default: true,
+ },
+ url: {
+ schema: HTTP_URL_SCHEMA,
+ },
+ token: {
+ type: String,
+ optional: true,
+ },
+ bucket: {
+ schema: INFLUXDB_BUCKET_SCHEMA,
+ optional: true,
+ },
+ organization: {
+ schema: INFLUXDB_ORGANIZATION_SCHEMA,
+ optional: true,
+ },
+ "max-body-size": {
+ type: usize,
+ optional: true,
+ default: 25_000_000,
+ },
+ "verify-tls": {
+ type: bool,
+ optional: true,
+ default: true,
+ },
+ comment: {
+ optional: true,
+ schema: SINGLE_LINE_COMMENT_SCHEMA,
+ },
+ },
+)]
+#[derive(Serialize, Deserialize, Updater)]
+#[serde(rename_all = "kebab-case")]
+/// InfluxDB Server (HTTP(s))
+pub struct InfluxDbHttp {
+ #[updater(skip)]
+ pub name: String,
+ #[serde(default = "return_true", skip_serializing_if = "is_true")]
+ #[updater(serde(skip_serializing_if = "Option::is_none"))]
+ /// Enables or disables the metrics server
+ pub enable: bool,
+ /// The base url of the influxdb server
+ pub url: String,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ /// The (optional) API token
+ pub token: Option<String>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub bucket: Option<String>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub organization: Option<String>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ /// The (optional) maximum body size
+ pub max_body_size: Option<usize>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ /// If true, the certificate will be validated.
+ pub verify_tls: Option<bool>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub comment: Option<String>,
+}
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup v9 2/7] pbs-config: add metrics config class
2022-06-10 11:17 [pbs-devel] [PATCH proxmox/proxmox-backup v9] add metrics server capability Dominik Csapak
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox v9 1/1] proxmox-metrics: send_data_to_channels: change from slice to IntoIterator Dominik Csapak
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 1/7] pbs-api-types: add metrics api types Dominik Csapak
@ 2022-06-10 11:17 ` Dominik Csapak
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 3/7] backup-proxy: decouple stats gathering from rrd update Dominik Csapak
` (5 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2022-06-10 11:17 UTC (permalink / raw)
To: pbs-devel
a section config like in pve
also adds a helper to get Metrics structs for all configured servers
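A rough usage sketch (not part of this patch) of the new helper, inside
a function returning Result<_, Error>; patch 4/7 below uses it the same
way to open one channel per enabled server:

    let (config, _digest) = pbs_config::metrics::config()?;
    // one (Metrics, name) pair per enabled influxdb-udp/-http section
    let connections = pbs_config::metrics::get_metric_server_connections(config)?;
    for (_metrics, name) in &connections {
        log::info!("sending metrics to '{}'", name);
    }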
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
Cargo.toml | 1 +
pbs-config/Cargo.toml | 1 +
pbs-config/src/lib.rs | 1 +
pbs-config/src/metrics.rs | 105 ++++++++++++++++++++++++++++++++++++++
4 files changed, 108 insertions(+)
create mode 100644 pbs-config/src/metrics.rs
diff --git a/Cargo.toml b/Cargo.toml
index 234f2bee..caa91429 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -96,6 +96,7 @@ pxar = { version = "0.10.1", features = [ "tokio-io" ] }
proxmox-http = { version = "0.6.1", features = [ "client", "http-helpers", "websocket" ] }
proxmox-io = "1"
proxmox-lang = "1.1"
+proxmox-metrics = "0.1"
proxmox-router = { version = "1.2.2", features = [ "cli" ] }
proxmox-schema = { version = "1.3.1", features = [ "api-macro" ] }
proxmox-section-config = "1"
diff --git a/pbs-config/Cargo.toml b/pbs-config/Cargo.toml
index 9af1144c..37f7e263 100644
--- a/pbs-config/Cargo.toml
+++ b/pbs-config/Cargo.toml
@@ -25,6 +25,7 @@ proxmox-time = "1"
proxmox-serde = "0.1"
proxmox-shared-memory = "0.2"
proxmox-sys = "0.3"
+proxmox-metrics = "0.1"
pbs-api-types = { path = "../pbs-api-types" }
pbs-buildcfg = { path = "../pbs-buildcfg" }
diff --git a/pbs-config/src/lib.rs b/pbs-config/src/lib.rs
index a6627caa..a83db4e1 100644
--- a/pbs-config/src/lib.rs
+++ b/pbs-config/src/lib.rs
@@ -6,6 +6,7 @@ pub mod domains;
pub mod drive;
pub mod key_config;
pub mod media_pool;
+pub mod metrics;
pub mod network;
pub mod prune;
pub mod remote;
diff --git a/pbs-config/src/metrics.rs b/pbs-config/src/metrics.rs
new file mode 100644
index 00000000..27290882
--- /dev/null
+++ b/pbs-config/src/metrics.rs
@@ -0,0 +1,105 @@
+use std::collections::HashMap;
+
+use anyhow::Error;
+use lazy_static::lazy_static;
+
+use proxmox_metrics::{influxdb_http, influxdb_udp, Metrics};
+use proxmox_schema::*;
+use proxmox_section_config::{SectionConfig, SectionConfigData, SectionConfigPlugin};
+
+use pbs_api_types::{InfluxDbHttp, InfluxDbUdp, METRIC_SERVER_ID_SCHEMA};
+
+use crate::{open_backup_lockfile, BackupLockGuard};
+
+lazy_static! {
+ pub static ref CONFIG: SectionConfig = init();
+}
+
+fn init() -> SectionConfig {
+ let mut config = SectionConfig::new(&METRIC_SERVER_ID_SCHEMA);
+
+ const UDP_SCHEMA: &ObjectSchema = InfluxDbUdp::API_SCHEMA.unwrap_object_schema();
+ let udp_plugin = SectionConfigPlugin::new(
+ "influxdb-udp".to_string(),
+ Some("name".to_string()),
+ UDP_SCHEMA,
+ );
+ config.register_plugin(udp_plugin);
+
+ const HTTP_SCHEMA: &ObjectSchema = InfluxDbHttp::API_SCHEMA.unwrap_object_schema();
+
+ let http_plugin = SectionConfigPlugin::new(
+ "influxdb-http".to_string(),
+ Some("name".to_string()),
+ HTTP_SCHEMA,
+ );
+
+ config.register_plugin(http_plugin);
+
+ config
+}
+
+pub const METRIC_SERVER_CFG_FILENAME: &str = "/etc/proxmox-backup/metricserver.cfg";
+pub const METRIC_SERVER_CFG_LOCKFILE: &str = "/etc/proxmox-backup/.metricserver.lck";
+
+/// Get exclusive lock
+pub fn lock_config() -> Result<BackupLockGuard, Error> {
+ open_backup_lockfile(METRIC_SERVER_CFG_LOCKFILE, None, true)
+}
+
+pub fn config() -> Result<(SectionConfigData, [u8; 32]), Error> {
+ let content =
+ proxmox_sys::fs::file_read_optional_string(METRIC_SERVER_CFG_FILENAME)?.unwrap_or_default();
+
+ let digest = openssl::sha::sha256(content.as_bytes());
+ let data = CONFIG.parse(METRIC_SERVER_CFG_FILENAME, &content)?;
+ Ok((data, digest))
+}
+
+pub fn save_config(config: &SectionConfigData) -> Result<(), Error> {
+ let raw = CONFIG.write(METRIC_SERVER_CFG_FILENAME, config)?;
+ crate::replace_backup_config(METRIC_SERVER_CFG_FILENAME, raw.as_bytes())
+}
+
+// shell completion helper
+pub fn complete_remote_name(_arg: &str, _param: &HashMap<String, String>) -> Vec<String> {
+ match config() {
+ Ok((data, _digest)) => data.sections.keys().cloned().collect(),
+ Err(_) => return vec![],
+ }
+}
+
+/// Get the metric server connections from a config
+pub fn get_metric_server_connections(
+ metric_config: SectionConfigData,
+) -> Result<Vec<(Metrics, String)>, Error> {
+ let mut res = Vec::new();
+
+ for config in
+ metric_config.convert_to_typed_array::<pbs_api_types::InfluxDbUdp>("influxdb-udp")?
+ {
+ if !config.enable {
+ continue;
+ }
+ let future = influxdb_udp(&config.host, config.mtu);
+ res.push((future, config.name));
+ }
+
+ for config in
+ metric_config.convert_to_typed_array::<pbs_api_types::InfluxDbHttp>("influxdb-http")?
+ {
+ if !config.enable {
+ continue;
+ }
+ let future = influxdb_http(
+ &config.url,
+ config.organization.as_deref().unwrap_or("proxmox"),
+ config.bucket.as_deref().unwrap_or("proxmox"),
+ config.token.as_deref(),
+ config.verify_tls.unwrap_or(true),
+ config.max_body_size.unwrap_or(25_000_000),
+ )?;
+ res.push((future, config.name));
+ }
+ Ok(res)
+}
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup v9 3/7] backup-proxy: decouple stats gathering from rrd update
2022-06-10 11:17 [pbs-devel] [PATCH proxmox/proxmox-backup v9] add metrics server capability Dominik Csapak
` (2 preceding siblings ...)
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 2/7] pbs-config: add metrics config class Dominik Csapak
@ 2022-06-10 11:17 ` Dominik Csapak
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 4/7] proxmox-backup-proxy: send metrics to configured metrics server Dominik Csapak
` (4 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2022-06-10 11:17 UTC (permalink / raw)
To: pbs-devel
that way we can reuse the stats gathered
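A simplified sketch (not part of the patch) of the decoupled flow
introduced here: the stats are collected into plain structs first and
only afterwards fed into the rrd update, so the same data can also be
handed to other consumers (see the next patch):

    let hoststats = collect_host_stats_sync();               // HostStats, no rrd writes
    let (hostdisk, datastores) = collect_disk_stats_sync();  // DiskStat + per-datastore stats
    rrd_update_host_stats_sync(&hoststats, &hostdisk, &datastores);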
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/bin/proxmox-backup-proxy.rs | 220 +++++++++++++++++++++-----------
1 file changed, 148 insertions(+), 72 deletions(-)
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 9b072bbd..8ca2ff49 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -20,8 +20,11 @@ use tokio_stream::wrappers::ReceiverStream;
use proxmox_http::client::{RateLimitedStream, ShareableRateLimit};
use proxmox_lang::try_block;
use proxmox_router::{RpcEnvironment, RpcEnvironmentType, UserInformation};
-use proxmox_sys::fs::CreateOptions;
-use proxmox_sys::linux::socket::set_tcp_keepalive;
+use proxmox_sys::fs::{CreateOptions, FileSystemInformation};
+use proxmox_sys::linux::{
+ procfs::{Loadavg, ProcFsMemInfo, ProcFsNetDev, ProcFsStat},
+ socket::set_tcp_keepalive,
+};
use proxmox_sys::logrotate::LogRotate;
use proxmox_sys::{task_log, task_warn};
@@ -40,6 +43,7 @@ use proxmox_backup::{
auth::check_pbs_auth,
jobstate::{self, Job},
},
+ tools::disks::BlockDevStat,
traffic_control_cache::TRAFFIC_CONTROL_CACHE,
};
@@ -990,81 +994,98 @@ async fn run_stat_generator() {
loop {
let delay_target = Instant::now() + Duration::from_secs(10);
- generate_host_stats().await;
+ let stats = match tokio::task::spawn_blocking(|| {
+ let hoststats = collect_host_stats_sync();
+ let (hostdisk, datastores) = collect_disk_stats_sync();
+ Arc::new((hoststats, hostdisk, datastores))
+ })
+ .await
+ {
+ Ok(res) => res,
+ Err(err) => {
+ log::error!("collecting host stats panicked: {}", err);
+ tokio::time::sleep_until(tokio::time::Instant::from_std(delay_target)).await;
+ continue;
+ }
+ };
- rrd_sync_journal();
+ if let Err(err) = tokio::task::spawn_blocking(move || {
+ rrd_update_host_stats_sync(&stats.0, &stats.1, &stats.2);
+ rrd_sync_journal();
+ })
+ .await
+ {
+ log::error!("updating rrd panicked: {}", err);
+ }
tokio::time::sleep_until(tokio::time::Instant::from_std(delay_target)).await;
}
}
-async fn generate_host_stats() {
- match tokio::task::spawn_blocking(generate_host_stats_sync).await {
- Ok(()) => (),
- Err(err) => log::error!("generate_host_stats panicked: {}", err),
- }
+struct HostStats {
+ proc: Option<ProcFsStat>,
+ meminfo: Option<ProcFsMemInfo>,
+ net: Option<Vec<ProcFsNetDev>>,
+ load: Option<Loadavg>,
+}
+
+struct DiskStat {
+ name: String,
+ usage: Option<FileSystemInformation>,
+ dev: Option<BlockDevStat>,
}
-fn generate_host_stats_sync() {
+fn collect_host_stats_sync() -> HostStats {
use proxmox_sys::linux::procfs::{
read_loadavg, read_meminfo, read_proc_net_dev, read_proc_stat,
};
- match read_proc_stat() {
- Ok(stat) => {
- rrd_update_gauge("host/cpu", stat.cpu);
- rrd_update_gauge("host/iowait", stat.iowait_percent);
- }
+ let proc = match read_proc_stat() {
+ Ok(stat) => Some(stat),
Err(err) => {
eprintln!("read_proc_stat failed - {}", err);
+ None
}
- }
+ };
- match read_meminfo() {
- Ok(meminfo) => {
- rrd_update_gauge("host/memtotal", meminfo.memtotal as f64);
- rrd_update_gauge("host/memused", meminfo.memused as f64);
- rrd_update_gauge("host/swaptotal", meminfo.swaptotal as f64);
- rrd_update_gauge("host/swapused", meminfo.swapused as f64);
- }
+ let meminfo = match read_meminfo() {
+ Ok(stat) => Some(stat),
Err(err) => {
eprintln!("read_meminfo failed - {}", err);
+ None
}
- }
+ };
- match read_proc_net_dev() {
- Ok(netdev) => {
- use pbs_config::network::is_physical_nic;
- let mut netin = 0;
- let mut netout = 0;
- for item in netdev {
- if !is_physical_nic(&item.device) {
- continue;
- }
- netin += item.receive;
- netout += item.send;
- }
- rrd_update_derive("host/netin", netin as f64);
- rrd_update_derive("host/netout", netout as f64);
- }
+ let net = match read_proc_net_dev() {
+ Ok(netdev) => Some(netdev),
Err(err) => {
eprintln!("read_prox_net_dev failed - {}", err);
+ None
}
- }
+ };
- match read_loadavg() {
- Ok(loadavg) => {
- rrd_update_gauge("host/loadavg", loadavg.0 as f64);
- }
+ let load = match read_loadavg() {
+ Ok(loadavg) => Some(loadavg),
Err(err) => {
eprintln!("read_loadavg failed - {}", err);
+ None
}
+ };
+
+ HostStats {
+ proc,
+ meminfo,
+ net,
+ load,
}
+}
+fn collect_disk_stats_sync() -> (DiskStat, Vec<DiskStat>) {
let disk_manager = DiskManage::new();
- gather_disk_stats(disk_manager.clone(), Path::new("/"), "host");
+ let root = gather_disk_stats(disk_manager.clone(), Path::new("/"), "host");
+ let mut datastores = Vec::new();
match pbs_config::datastore::config() {
Ok((config, _)) => {
let datastore_list: Vec<DataStoreConfig> = config
@@ -1078,15 +1099,80 @@ fn generate_host_stats_sync() {
{
continue;
}
- let rrd_prefix = format!("datastore/{}", config.name);
let path = std::path::Path::new(&config.path);
- gather_disk_stats(disk_manager.clone(), path, &rrd_prefix);
+ datastores.push(gather_disk_stats(disk_manager.clone(), path, &config.name));
}
}
Err(err) => {
eprintln!("read datastore config failed - {}", err);
}
}
+
+ (root, datastores)
+}
+
+fn rrd_update_host_stats_sync(host: &HostStats, hostdisk: &DiskStat, datastores: &[DiskStat]) {
+ if let Some(stat) = &host.proc {
+ rrd_update_gauge("host/cpu", stat.cpu);
+ rrd_update_gauge("host/iowait", stat.iowait_percent);
+ }
+
+ if let Some(meminfo) = &host.meminfo {
+ rrd_update_gauge("host/memtotal", meminfo.memtotal as f64);
+ rrd_update_gauge("host/memused", meminfo.memused as f64);
+ rrd_update_gauge("host/swaptotal", meminfo.swaptotal as f64);
+ rrd_update_gauge("host/swapused", meminfo.swapused as f64);
+ }
+
+ if let Some(netdev) = &host.net {
+ use pbs_config::network::is_physical_nic;
+ let mut netin = 0;
+ let mut netout = 0;
+ for item in netdev {
+ if !is_physical_nic(&item.device) {
+ continue;
+ }
+ netin += item.receive;
+ netout += item.send;
+ }
+ rrd_update_derive("host/netin", netin as f64);
+ rrd_update_derive("host/netout", netout as f64);
+ }
+
+ if let Some(loadavg) = &host.load {
+ rrd_update_gauge("host/loadavg", loadavg.0 as f64);
+ }
+
+ rrd_update_disk_stat(hostdisk, "host");
+
+ for stat in datastores {
+ let rrd_prefix = format!("datastore/{}", stat.name);
+ rrd_update_disk_stat(stat, &rrd_prefix);
+ }
+}
+
+fn rrd_update_disk_stat(disk: &DiskStat, rrd_prefix: &str) {
+ if let Some(status) = &disk.usage {
+ let rrd_key = format!("{}/total", rrd_prefix);
+ rrd_update_gauge(&rrd_key, status.total as f64);
+ let rrd_key = format!("{}/used", rrd_prefix);
+ rrd_update_gauge(&rrd_key, status.used as f64);
+ }
+
+ if let Some(stat) = &disk.dev {
+ let rrd_key = format!("{}/read_ios", rrd_prefix);
+ rrd_update_derive(&rrd_key, stat.read_ios as f64);
+ let rrd_key = format!("{}/read_bytes", rrd_prefix);
+ rrd_update_derive(&rrd_key, (stat.read_sectors * 512) as f64);
+
+ let rrd_key = format!("{}/write_ios", rrd_prefix);
+ rrd_update_derive(&rrd_key, stat.write_ios as f64);
+ let rrd_key = format!("{}/write_bytes", rrd_prefix);
+ rrd_update_derive(&rrd_key, (stat.write_sectors * 512) as f64);
+
+ let rrd_key = format!("{}/io_ticks", rrd_prefix);
+ rrd_update_derive(&rrd_key, (stat.io_ticks as f64) / 1000.0);
+ }
}
fn check_schedule(worker_type: &str, event_str: &str, id: &str) -> bool {
@@ -1122,21 +1208,17 @@ fn check_schedule(worker_type: &str, event_str: &str, id: &str) -> bool {
next <= now
}
-fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, rrd_prefix: &str) {
- match proxmox_sys::fs::fs_info(path) {
- Ok(status) => {
- let rrd_key = format!("{}/total", rrd_prefix);
- rrd_update_gauge(&rrd_key, status.total as f64);
- let rrd_key = format!("{}/used", rrd_prefix);
- rrd_update_gauge(&rrd_key, status.used as f64);
- }
+fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, name: &str) -> DiskStat {
+ let usage = match proxmox_sys::fs::fs_info(path) {
+ Ok(status) => Some(status),
Err(err) => {
eprintln!("read fs info on {:?} failed - {}", path, err);
+ None
}
- }
+ };
- match disk_manager.find_mounted_device(path) {
- Ok(None) => {}
+ let dev = match disk_manager.find_mounted_device(path) {
+ Ok(None) => None,
Ok(Some((fs_type, device, source))) => {
let mut device_stat = None;
match (fs_type.as_str(), source) {
@@ -1158,24 +1240,18 @@ fn gather_disk_stats(disk_manager: Arc<DiskManage>, path: &Path, rrd_prefix: &st
}
}
}
- if let Some(stat) = device_stat {
- let rrd_key = format!("{}/read_ios", rrd_prefix);
- rrd_update_derive(&rrd_key, stat.read_ios as f64);
- let rrd_key = format!("{}/read_bytes", rrd_prefix);
- rrd_update_derive(&rrd_key, (stat.read_sectors * 512) as f64);
-
- let rrd_key = format!("{}/write_ios", rrd_prefix);
- rrd_update_derive(&rrd_key, stat.write_ios as f64);
- let rrd_key = format!("{}/write_bytes", rrd_prefix);
- rrd_update_derive(&rrd_key, (stat.write_sectors * 512) as f64);
-
- let rrd_key = format!("{}/io_ticks", rrd_prefix);
- rrd_update_derive(&rrd_key, (stat.io_ticks as f64) / 1000.0);
- }
+ device_stat
}
Err(err) => {
eprintln!("find_mounted_device failed - {}", err);
+ None
}
+ };
+
+ DiskStat {
+ name: name.to_string(),
+ usage,
+ dev,
}
}
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup v9 4/7] proxmox-backup-proxy: send metrics to configured metrics server
2022-06-10 11:17 [pbs-devel] [PATCH proxmox/proxmox-backup v9] add metrics server capability Dominik Csapak
` (3 preceding siblings ...)
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 3/7] backup-proxy: decouple stats gathering from rrd update Dominik Csapak
@ 2022-06-10 11:17 ` Dominik Csapak
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 5/7] api: add metricserver endpoints Dominik Csapak
` (3 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2022-06-10 11:17 UTC (permalink / raw)
To: pbs-devel
and keep the data as similar as possible to pve (tags/fields)
datastores get their own 'object' type and reside in the "blockstat"
measurement
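For illustration (not part of the patch; field values, node and
datastore names are made up), samples roughly end up as InfluxDB lines
like:

    blockstat,object=host,host=pbs1 total=...,used=...,read_ios=... <timestamp>
    blockstat,object=host,host=pbs1,datastore=store1 total=...,used=... <timestamp>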
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/bin/proxmox-backup-proxy.rs | 138 ++++++++++++++++++++++++++++++--
1 file changed, 131 insertions(+), 7 deletions(-)
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 8ca2ff49..b905de8c 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -19,6 +19,7 @@ use tokio_stream::wrappers::ReceiverStream;
use proxmox_http::client::{RateLimitedStream, ShareableRateLimit};
use proxmox_lang::try_block;
+use proxmox_metrics::MetricsData;
use proxmox_router::{RpcEnvironment, RpcEnvironmentType, UserInformation};
use proxmox_sys::fs::{CreateOptions, FileSystemInformation};
use proxmox_sys::linux::{
@@ -28,6 +29,7 @@ use proxmox_sys::linux::{
use proxmox_sys::logrotate::LogRotate;
use proxmox_sys::{task_log, task_warn};
+use pbs_config::metrics::get_metric_server_connections;
use pbs_datastore::DataStore;
use proxmox_rest_server::{
@@ -1009,19 +1011,121 @@ async fn run_stat_generator() {
}
};
- if let Err(err) = tokio::task::spawn_blocking(move || {
- rrd_update_host_stats_sync(&stats.0, &stats.1, &stats.2);
- rrd_sync_journal();
- })
- .await
- {
- log::error!("updating rrd panicked: {}", err);
+ let rrd_future = tokio::task::spawn_blocking({
+ let stats = Arc::clone(&stats);
+ move || {
+ rrd_update_host_stats_sync(&stats.0, &stats.1, &stats.2);
+ rrd_sync_journal();
+ }
+ });
+
+ let metrics_future = send_data_to_metric_servers(stats);
+
+ let (rrd_res, metrics_res) = join!(rrd_future, metrics_future);
+ if let Err(err) = rrd_res {
+ log::error!("rrd update panicked: {}", err);
+ }
+ if let Err(err) = metrics_res {
+ log::error!("error during metrics sending: {}", err);
}
tokio::time::sleep_until(tokio::time::Instant::from_std(delay_target)).await;
}
}
+async fn send_data_to_metric_servers(
+ stats: Arc<(HostStats, DiskStat, Vec<DiskStat>)>,
+) -> Result<(), Error> {
+ let (config, _digest) = pbs_config::metrics::config()?;
+ let channel_list = get_metric_server_connections(config)?;
+
+ if channel_list.is_empty() {
+ return Ok(());
+ }
+
+ let ctime = proxmox_time::epoch_i64();
+ let nodename = proxmox_sys::nodename();
+
+ let mut values = Vec::new();
+
+ let mut cpuvalue = match &stats.0.proc {
+ Some(stat) => serde_json::to_value(stat)?,
+ None => json!({}),
+ };
+
+ if let Some(loadavg) = &stats.0.load {
+ cpuvalue["avg1"] = Value::from(loadavg.0);
+ cpuvalue["avg5"] = Value::from(loadavg.1);
+ cpuvalue["avg15"] = Value::from(loadavg.2);
+ }
+
+ values.push(Arc::new(
+ MetricsData::new("cpustat", ctime, cpuvalue)?
+ .tag("object", "host")
+ .tag("host", nodename),
+ ));
+
+ if let Some(stat) = &stats.0.meminfo {
+ values.push(Arc::new(
+ MetricsData::new("memory", ctime, stat)?
+ .tag("object", "host")
+ .tag("host", nodename),
+ ));
+ }
+
+ if let Some(netdev) = &stats.0.net {
+ for item in netdev {
+ values.push(Arc::new(
+ MetricsData::new("nics", ctime, item)?
+ .tag("object", "host")
+ .tag("host", nodename)
+ .tag("instance", item.device.clone()),
+ ));
+ }
+ }
+
+ values.push(Arc::new(
+ MetricsData::new("blockstat", ctime, stats.1.to_value())?
+ .tag("object", "host")
+ .tag("host", nodename),
+ ));
+
+ for datastore in stats.2.iter() {
+ values.push(Arc::new(
+ MetricsData::new("blockstat", ctime, datastore.to_value())?
+ .tag("object", "host")
+ .tag("host", nodename)
+ .tag("datastore", datastore.name.clone()),
+ ));
+ }
+
+ // we must have a concrete function, because the inferred lifetime from a
+ // closure is not general enough for the tokio::spawn call we are in here...
+ fn map_fn(item: &(proxmox_metrics::Metrics, String)) -> &proxmox_metrics::Metrics {
+ &item.0
+ }
+
+ let results =
+ proxmox_metrics::send_data_to_channels(&values, channel_list.iter().map(map_fn)).await;
+ for (res, name) in results
+ .into_iter()
+ .zip(channel_list.iter().map(|(_, name)| name))
+ {
+ if let Err(err) = res {
+ log::error!("error sending into channel of {}: {}", name, err);
+ }
+ }
+
+ futures::future::join_all(channel_list.into_iter().map(|(channel, name)| async move {
+ if let Err(err) = channel.join().await {
+ log::error!("error sending to metric server {}: {}", name, err);
+ }
+ }))
+ .await;
+
+ Ok(())
+}
+
struct HostStats {
proc: Option<ProcFsStat>,
meminfo: Option<ProcFsMemInfo>,
@@ -1035,6 +1139,26 @@ struct DiskStat {
dev: Option<BlockDevStat>,
}
+impl DiskStat {
+ fn to_value(&self) -> Value {
+ let mut value = json!({});
+ if let Some(usage) = &self.usage {
+ value["total"] = Value::from(usage.total);
+ value["used"] = Value::from(usage.used);
+ value["avail"] = Value::from(usage.available);
+ }
+
+ if let Some(dev) = &self.dev {
+ value["read_ios"] = Value::from(dev.read_ios);
+ value["read_bytes"] = Value::from(dev.read_sectors * 512);
+ value["write_ios"] = Value::from(dev.write_ios);
+ value["write_bytes"] = Value::from(dev.write_sectors * 512);
+ value["io_ticks"] = Value::from(dev.io_ticks / 1000);
+ }
+ value
+ }
+}
+
fn collect_host_stats_sync() -> HostStats {
use proxmox_sys::linux::procfs::{
read_loadavg, read_meminfo, read_proc_net_dev, read_proc_stat,
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup v9 5/7] api: add metricserver endpoints
2022-06-10 11:17 [pbs-devel] [PATCH proxmox/proxmox-backup v9] add metrics server capability Dominik Csapak
` (4 preceding siblings ...)
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 4/7] proxmox-backup-proxy: send metrics to configured metrics server Dominik Csapak
@ 2022-06-10 11:17 ` Dominik Csapak
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 6/7] ui: add window/InfluxDbEdit Dominik Csapak
` (2 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2022-06-10 11:17 UTC (permalink / raw)
To: pbs-devel
but in contrast to pve, we split the api by type of the section config,
since we cannot handle multiple types in the updater
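Concretely (paths per the routers added below), this results in
per-type CRUD endpoints plus one combined, read-only list:

    GET/POST        /api2/json/config/metrics/influxdb-http
    GET/PUT/DELETE  /api2/json/config/metrics/influxdb-http/{name}
    GET/POST        /api2/json/config/metrics/influxdb-udp
    GET/PUT/DELETE  /api2/json/config/metrics/influxdb-udp/{name}
    GET             /api2/json/admin/metrics   (list of all configured servers)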
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/admin/metrics.rs | 91 +++++++
src/api2/admin/mod.rs | 2 +
src/api2/config/metrics/influxdbhttp.rs | 315 ++++++++++++++++++++++++
src/api2/config/metrics/influxdbudp.rs | 270 ++++++++++++++++++++
src/api2/config/metrics/mod.rs | 16 ++
src/api2/config/mod.rs | 2 +
6 files changed, 696 insertions(+)
create mode 100644 src/api2/admin/metrics.rs
create mode 100644 src/api2/config/metrics/influxdbhttp.rs
create mode 100644 src/api2/config/metrics/influxdbudp.rs
create mode 100644 src/api2/config/metrics/mod.rs
diff --git a/src/api2/admin/metrics.rs b/src/api2/admin/metrics.rs
new file mode 100644
index 00000000..728d1599
--- /dev/null
+++ b/src/api2/admin/metrics.rs
@@ -0,0 +1,91 @@
+use anyhow::Error;
+use serde::{Deserialize, Serialize};
+use serde_json::Value;
+
+use proxmox_router::{Permission, Router, RpcEnvironment};
+use proxmox_schema::api;
+
+use pbs_api_types::{METRIC_SERVER_ID_SCHEMA, PRIV_SYS_AUDIT, SINGLE_LINE_COMMENT_SCHEMA};
+use pbs_config::metrics;
+
+#[api]
+#[derive(Deserialize, Serialize, PartialEq, Eq)]
+/// Type of the metric server
+pub enum MetricServerType {
+ /// InfluxDB HTTP
+ #[serde(rename = "influxdb-http")]
+ InfluxDbHttp,
+ /// InfluxDB UDP
+ #[serde(rename = "influxdb-udp")]
+ InfluxDbUdp,
+}
+
+#[api(
+ properties: {
+ name: {
+ schema: METRIC_SERVER_ID_SCHEMA,
+ },
+ "type": {
+ type: MetricServerType,
+ },
+ comment: {
+ optional: true,
+ schema: SINGLE_LINE_COMMENT_SCHEMA,
+ },
+ },
+)]
+#[derive(Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+/// Basic information about a metric server that is available for all types
+pub struct MetricServerInfo {
+ pub name: String,
+ #[serde(rename = "type")]
+ pub ty: MetricServerType,
+ /// Enables or disables the metrics server
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub enable: Option<bool>,
+ /// The target server
+ pub server: String,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ pub comment: Option<String>,
+}
+
+#[api(
+ input: {
+ properties: {},
+ },
+ returns: {
+ description: "List of configured metric servers.",
+ type: Array,
+ items: { type: MetricServerInfo },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// List configured metric servers.
+pub fn list_metric_servers(
+ _param: Value,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<MetricServerInfo>, Error> {
+ let (config, digest) = metrics::config()?;
+ let mut list = Vec::new();
+
+ for (_, (section_type, v)) in config.sections.iter() {
+ let mut entry = v.clone();
+ entry["type"] = Value::from(section_type.clone());
+ if entry.get("url").is_some() {
+ entry["server"] = entry["url"].clone();
+ }
+ if entry.get("host").is_some() {
+ entry["server"] = entry["host"].clone();
+ }
+ list.push(serde_json::from_value(entry)?);
+ }
+
+ rpcenv["digest"] = hex::encode(&digest).into();
+
+ Ok(list)
+}
+
+pub const ROUTER: Router = Router::new().get(&API_METHOD_LIST_METRIC_SERVERS);
diff --git a/src/api2/admin/mod.rs b/src/api2/admin/mod.rs
index d5a2c527..9b6fc9ad 100644
--- a/src/api2/admin/mod.rs
+++ b/src/api2/admin/mod.rs
@@ -5,6 +5,7 @@ use proxmox_router::{Router, SubdirMap};
use proxmox_sys::sortable;
pub mod datastore;
+pub mod metrics;
pub mod namespace;
pub mod prune;
pub mod sync;
@@ -14,6 +15,7 @@ pub mod verify;
#[sortable]
const SUBDIRS: SubdirMap = &sorted!([
("datastore", &datastore::ROUTER),
+ ("metrics", &metrics::ROUTER),
("prune", &prune::ROUTER),
("sync", &sync::ROUTER),
("traffic-control", &traffic_control::ROUTER),
diff --git a/src/api2/config/metrics/influxdbhttp.rs b/src/api2/config/metrics/influxdbhttp.rs
new file mode 100644
index 00000000..d12c7487
--- /dev/null
+++ b/src/api2/config/metrics/influxdbhttp.rs
@@ -0,0 +1,315 @@
+use anyhow::{bail, format_err, Error};
+use hex::FromHex;
+use serde::{Deserialize, Serialize};
+use serde_json::Value;
+
+use proxmox_metrics::test_influxdb_http;
+use proxmox_router::{Permission, Router, RpcEnvironment};
+use proxmox_schema::api;
+
+use pbs_api_types::{
+ InfluxDbHttp, InfluxDbHttpUpdater, METRIC_SERVER_ID_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY,
+ PROXMOX_CONFIG_DIGEST_SCHEMA,
+};
+
+use pbs_config::metrics;
+
+async fn test_server(config: &InfluxDbHttp) -> Result<(), Error> {
+ if config.enable {
+ test_influxdb_http(
+ &config.url,
+ config.organization.as_deref().unwrap_or("proxmox"),
+ config.bucket.as_deref().unwrap_or("proxmox"),
+ config.token.as_deref(),
+ config.verify_tls.unwrap_or(true),
+ )
+ .await
+ .map_err(|err| format_err!("could not connect to {}: {}", config.url, err))
+ } else {
+ Ok(())
+ }
+}
+
+#[api(
+ input: {
+ properties: {},
+ },
+ returns: {
+ description: "List of configured InfluxDB http metric servers.",
+ type: Array,
+ items: { type: InfluxDbHttp },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// List configured InfluxDB http metric servers.
+pub fn list_influxdb_http_servers(
+ _param: Value,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<InfluxDbHttp>, Error> {
+ let (config, digest) = metrics::config()?;
+
+ let mut list: Vec<InfluxDbHttp> = config.convert_to_typed_array("influxdb-http")?;
+
+ // don't return token via api
+ for item in list.iter_mut() {
+ item.token = None;
+ }
+
+ rpcenv["digest"] = hex::encode(&digest).into();
+
+ Ok(list)
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ config: {
+ type: InfluxDbHttp,
+ flatten: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Create a new InfluxDB http server configuration
+pub async fn create_influxdb_http_server(config: InfluxDbHttp) -> Result<(), Error> {
+ let _lock = metrics::lock_config()?;
+
+ let (mut metrics, _digest) = metrics::config()?;
+
+ if metrics.sections.get(&config.name).is_some() {
+ bail!("metric server '{}' already exists.", config.name);
+ }
+
+ test_server(&config).await?;
+
+ metrics.set_data(&config.name, "influxdb-http", &config)?;
+
+ metrics::save_config(&metrics)?;
+
+ Ok(())
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ name: {
+ schema: METRIC_SERVER_ID_SCHEMA,
+ },
+ digest: {
+ optional: true,
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Remove an InfluxDB http server configuration
+pub fn delete_influxdb_http_server(
+ name: String,
+ digest: Option<String>,
+ _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let _lock = metrics::lock_config()?;
+
+ let (mut metrics, expected_digest) = metrics::config()?;
+
+ if let Some(ref digest) = digest {
+ let digest = <[u8; 32]>::from_hex(digest)?;
+ crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
+ }
+
+ if metrics.sections.remove(&name).is_none() {
+ bail!("name '{}' does not exist.", name);
+ }
+
+ metrics::save_config(&metrics)?;
+
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ name: {
+ schema: METRIC_SERVER_ID_SCHEMA,
+ },
+ },
+ },
+ returns: { type: InfluxDbHttp },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// Read the InfluxDB http server configuration
+pub fn read_influxdb_http_server(
+ name: String,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<InfluxDbHttp, Error> {
+ let (metrics, digest) = metrics::config()?;
+
+ let mut config: InfluxDbHttp = metrics.lookup("influxdb-http", &name)?;
+
+ config.token = None;
+
+ rpcenv["digest"] = hex::encode(&digest).into();
+
+ Ok(config)
+}
+
+#[api()]
+#[derive(Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+/// Deletable property name
+pub enum DeletableProperty {
+ /// Delete the enable property.
+ Enable,
+ /// Delete the token property.
+ Token,
+ /// Delete the bucket property.
+ Bucket,
+ /// Delete the organization property.
+ Organization,
+ /// Delete the max_body_size property.
+ MaxBodySize,
+ /// Delete the verify_tls property.
+ VerifyTls,
+ /// Delete the comment property.
+ Comment,
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ name: {
+ schema: METRIC_SERVER_ID_SCHEMA,
+ },
+ update: {
+ type: InfluxDbHttpUpdater,
+ flatten: true,
+ },
+ delete: {
+ description: "List of properties to delete.",
+ type: Array,
+ optional: true,
+ items: {
+ type: DeletableProperty,
+ }
+ },
+ digest: {
+ optional: true,
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Update an InfluxDB http server configuration
+pub async fn update_influxdb_http_server(
+ name: String,
+ update: InfluxDbHttpUpdater,
+ delete: Option<Vec<DeletableProperty>>,
+ digest: Option<String>,
+ _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let _lock = metrics::lock_config()?;
+
+ let (mut metrics, expected_digest) = metrics::config()?;
+
+ if let Some(ref digest) = digest {
+ let digest = <[u8; 32]>::from_hex(digest)?;
+ crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
+ }
+
+ let mut config: InfluxDbHttp = metrics.lookup("influxdb-http", &name)?;
+
+ if let Some(delete) = delete {
+ for delete_prop in delete {
+ match delete_prop {
+ DeletableProperty::Enable => {
+ config.enable = true;
+ }
+ DeletableProperty::Token => {
+ config.token = None;
+ }
+ DeletableProperty::Bucket => {
+ config.bucket = None;
+ }
+ DeletableProperty::Organization => {
+ config.organization = None;
+ }
+ DeletableProperty::MaxBodySize => {
+ config.max_body_size = None;
+ }
+ DeletableProperty::VerifyTls => {
+ config.verify_tls = None;
+ }
+ DeletableProperty::Comment => {
+ config.comment = None;
+ }
+ }
+ }
+ }
+
+ if let Some(comment) = update.comment {
+ let comment = comment.trim().to_string();
+ if comment.is_empty() {
+ config.comment = None;
+ } else {
+ config.comment = Some(comment);
+ }
+ }
+
+ if let Some(url) = update.url {
+ config.url = url;
+ }
+
+ if let Some(enable) = update.enable {
+ config.enable = enable;
+ }
+
+ if update.token.is_some() {
+ config.token = update.token;
+ }
+ if update.bucket.is_some() {
+ config.bucket = update.bucket;
+ }
+ if update.organization.is_some() {
+ config.organization = update.organization;
+ }
+ if update.max_body_size.is_some() {
+ config.max_body_size = update.max_body_size;
+ }
+ if update.verify_tls.is_some() {
+ config.verify_tls = update.verify_tls;
+ }
+
+ test_server(&config).await?;
+
+ metrics.set_data(&name, "influxdb-http", &config)?;
+
+ metrics::save_config(&metrics)?;
+
+ Ok(())
+}
+
+const ITEM_ROUTER: Router = Router::new()
+ .get(&API_METHOD_READ_INFLUXDB_HTTP_SERVER)
+ .put(&API_METHOD_UPDATE_INFLUXDB_HTTP_SERVER)
+ .delete(&API_METHOD_DELETE_INFLUXDB_HTTP_SERVER);
+
+pub const ROUTER: Router = Router::new()
+ .get(&API_METHOD_LIST_INFLUXDB_HTTP_SERVERS)
+ .post(&API_METHOD_CREATE_INFLUXDB_HTTP_SERVER)
+ .match_all("name", &ITEM_ROUTER);
diff --git a/src/api2/config/metrics/influxdbudp.rs b/src/api2/config/metrics/influxdbudp.rs
new file mode 100644
index 00000000..d1011830
--- /dev/null
+++ b/src/api2/config/metrics/influxdbudp.rs
@@ -0,0 +1,270 @@
+use anyhow::{bail, format_err, Error};
+use hex::FromHex;
+use serde::{Deserialize, Serialize};
+use serde_json::Value;
+
+use proxmox_metrics::test_influxdb_udp;
+use proxmox_router::{Permission, Router, RpcEnvironment};
+use proxmox_schema::api;
+
+use pbs_api_types::{
+ InfluxDbUdp, InfluxDbUdpUpdater, METRIC_SERVER_ID_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY,
+ PROXMOX_CONFIG_DIGEST_SCHEMA,
+};
+
+use pbs_config::metrics;
+
+async fn test_server(address: &str) -> Result<(), Error> {
+ test_influxdb_udp(address)
+ .await
+ .map_err(|err| format_err!("cannot connect to {}: {}", address, err))
+}
+
+#[api(
+ input: {
+ properties: {},
+ },
+ returns: {
+ description: "List of configured InfluxDB udp metric servers.",
+ type: Array,
+ items: { type: InfluxDbUdp },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// List configured InfluxDB udp metric servers.
+pub fn list_influxdb_udp_servers(
+ _param: Value,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<Vec<InfluxDbUdp>, Error> {
+ let (config, digest) = metrics::config()?;
+
+ let list = config.convert_to_typed_array("influxdb-udp")?;
+
+ rpcenv["digest"] = hex::encode(&digest).into();
+
+ Ok(list)
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ config: {
+ type: InfluxDbUdp,
+ flatten: true,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Create a new InfluxDB udp server configuration
+pub async fn create_influxdb_udp_server(config: InfluxDbUdp) -> Result<(), Error> {
+ let _lock = metrics::lock_config()?;
+
+ let (mut metrics, _digest) = metrics::config()?;
+
+ if metrics.sections.get(&config.name).is_some() {
+ bail!("metric server '{}' already exists.", config.name);
+ }
+
+ if config.enable {
+ test_server(&config.host).await?;
+ }
+
+ metrics.set_data(&config.name, "influxdb-udp", &config)?;
+
+ metrics::save_config(&metrics)?;
+
+ Ok(())
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ name: {
+ schema: METRIC_SERVER_ID_SCHEMA,
+ },
+ digest: {
+ optional: true,
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Remove an InfluxDB udp server configuration
+pub fn delete_influxdb_udp_server(
+ name: String,
+ digest: Option<String>,
+ _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let _lock = metrics::lock_config()?;
+
+ let (mut metrics, expected_digest) = metrics::config()?;
+
+ if let Some(ref digest) = digest {
+ let digest = <[u8; 32]>::from_hex(digest)?;
+ crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
+ }
+
+ if metrics.sections.remove(&name).is_none() {
+ bail!("name '{}' does not exist.", name);
+ }
+
+ metrics::save_config(&metrics)?;
+
+ Ok(())
+}
+
+#[api(
+ input: {
+ properties: {
+ name: {
+ schema: METRIC_SERVER_ID_SCHEMA,
+ },
+ },
+ },
+ returns: { type: InfluxDbUdp },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_AUDIT, false),
+ },
+)]
+/// Read the InfluxDB udp server configuration
+pub fn read_influxdb_udp_server(
+ name: String,
+ rpcenv: &mut dyn RpcEnvironment,
+) -> Result<InfluxDbUdp, Error> {
+ let (metrics, digest) = metrics::config()?;
+
+ let config = metrics.lookup("influxdb-udp", &name)?;
+
+ rpcenv["digest"] = hex::encode(&digest).into();
+
+ Ok(config)
+}
+
+#[api()]
+#[derive(Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+/// Deletable property name
+pub enum DeletableProperty {
+ /// Delete the enable property.
+ Enable,
+ /// Delete the mtu property.
+ Mtu,
+ /// Delete the comment property.
+ Comment,
+}
+
+#[api(
+ protected: true,
+ input: {
+ properties: {
+ name: {
+ schema: METRIC_SERVER_ID_SCHEMA,
+ },
+ update: {
+ type: InfluxDbUdpUpdater,
+ flatten: true,
+ },
+ delete: {
+ description: "List of properties to delete.",
+ type: Array,
+ optional: true,
+ items: {
+ type: DeletableProperty,
+ }
+ },
+ digest: {
+ optional: true,
+ schema: PROXMOX_CONFIG_DIGEST_SCHEMA,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&[], PRIV_SYS_MODIFY, false),
+ },
+)]
+/// Update an InfluxDB udp server configuration
+pub async fn update_influxdb_udp_server(
+ name: String,
+ update: InfluxDbUdpUpdater,
+ delete: Option<Vec<DeletableProperty>>,
+ digest: Option<String>,
+ _rpcenv: &mut dyn RpcEnvironment,
+) -> Result<(), Error> {
+ let _lock = metrics::lock_config()?;
+
+ let (mut metrics, expected_digest) = metrics::config()?;
+
+ if let Some(ref digest) = digest {
+ let digest = <[u8; 32]>::from_hex(digest)?;
+ crate::tools::detect_modified_configuration_file(&digest, &expected_digest)?;
+ }
+
+ let mut config: InfluxDbUdp = metrics.lookup("influxdb-udp", &name)?;
+
+ if let Some(delete) = delete {
+ for delete_prop in delete {
+ match delete_prop {
+ DeletableProperty::Enable => {
+ config.enable = true;
+ }
+ DeletableProperty::Mtu => {
+ config.mtu = None;
+ }
+ DeletableProperty::Comment => {
+ config.comment = None;
+ }
+ }
+ }
+ }
+
+ if let Some(comment) = update.comment {
+ let comment = comment.trim().to_string();
+ if comment.is_empty() {
+ config.comment = None;
+ } else {
+ config.comment = Some(comment);
+ }
+ }
+
+ if let Some(host) = update.host {
+ config.host = host;
+ }
+
+ if let Some(enable) = update.enable {
+ config.enable = enable;
+ }
+
+ if update.mtu.is_some() {
+ config.mtu = update.mtu;
+ }
+
+ metrics.set_data(&name, "influxdb-udp", &config)?;
+
+ if config.enable {
+ test_server(&config.host).await?;
+ }
+
+ metrics::save_config(&metrics)?;
+
+ Ok(())
+}
+
+const ITEM_ROUTER: Router = Router::new()
+ .get(&API_METHOD_READ_INFLUXDB_UDP_SERVER)
+ .put(&API_METHOD_UPDATE_INFLUXDB_UDP_SERVER)
+ .delete(&API_METHOD_DELETE_INFLUXDB_UDP_SERVER);
+
+pub const ROUTER: Router = Router::new()
+ .get(&API_METHOD_LIST_INFLUXDB_UDP_SERVERS)
+ .post(&API_METHOD_CREATE_INFLUXDB_UDP_SERVER)
+ .match_all("name", &ITEM_ROUTER);
diff --git a/src/api2/config/metrics/mod.rs b/src/api2/config/metrics/mod.rs
new file mode 100644
index 00000000..cbce34f7
--- /dev/null
+++ b/src/api2/config/metrics/mod.rs
@@ -0,0 +1,16 @@
+use proxmox_router::{Router, SubdirMap};
+use proxmox_router::list_subdirs_api_method;
+use proxmox_sys::sortable;
+
+pub mod influxdbudp;
+pub mod influxdbhttp;
+
+#[sortable]
+const SUBDIRS: SubdirMap = &sorted!([
+ ("influxdb-http", &influxdbhttp::ROUTER),
+ ("influxdb-udp", &influxdbudp::ROUTER),
+]);
+
+pub const ROUTER: Router = Router::new()
+ .get(&list_subdirs_api_method!(SUBDIRS))
+ .subdirs(SUBDIRS);
diff --git a/src/api2/config/mod.rs b/src/api2/config/mod.rs
index ffba94ba..265b6fc8 100644
--- a/src/api2/config/mod.rs
+++ b/src/api2/config/mod.rs
@@ -10,6 +10,7 @@ pub mod changer;
pub mod datastore;
pub mod drive;
pub mod media_pool;
+pub mod metrics;
pub mod prune;
pub mod remote;
pub mod sync;
@@ -26,6 +27,7 @@ const SUBDIRS: SubdirMap = &sorted!([
("datastore", &datastore::ROUTER),
("drive", &drive::ROUTER),
("media-pool", &media_pool::ROUTER),
+ ("metrics", &metrics::ROUTER),
("prune", &prune::ROUTER),
("remote", &remote::ROUTER),
("sync", &sync::ROUTER),
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup v9 6/7] ui: add window/InfluxDbEdit
2022-06-10 11:17 [pbs-devel] [PATCH proxmox/proxmox-backup v9] add metrics server capability Dominik Csapak
` (5 preceding siblings ...)
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 5/7] api: add metricserver endpoints Dominik Csapak
@ 2022-06-10 11:17 ` Dominik Csapak
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 7/7] ui: add MetricServerView and use it Dominik Csapak
2022-06-13 8:06 ` [pbs-devel] applied-series: [PATCH proxmox/proxmox-backup v9] add metrics server capability Wolfgang Bumiller
8 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2022-06-10 11:17 UTC (permalink / raw)
To: pbs-devel
contains both windows for HTTP and UDP
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/Makefile | 1 +
www/window/InfluxDbEdit.js | 218 +++++++++++++++++++++++++++++++++++++
2 files changed, 219 insertions(+)
create mode 100644 www/window/InfluxDbEdit.js
diff --git a/www/Makefile b/www/Makefile
index 311f4753..3a36daba 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -82,6 +82,7 @@ JSSRC= \
window/VerifyJobEdit.js \
window/VerifyAll.js \
window/ZFSCreate.js \
+ window/InfluxDbEdit.js \
dashboard/DataStoreStatistics.js \
dashboard/LongestTasks.js \
dashboard/RunningTasks.js \
diff --git a/www/window/InfluxDbEdit.js b/www/window/InfluxDbEdit.js
new file mode 100644
index 00000000..e4467737
--- /dev/null
+++ b/www/window/InfluxDbEdit.js
@@ -0,0 +1,218 @@
+Ext.define('PBS.window.InfluxDbHttpEdit', {
+ extend: 'Proxmox.window.Edit',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ subject: 'InfluxDB (HTTP)',
+
+ cbindData: function() {
+ let me = this;
+ me.isCreate = !me.serverid;
+ me.serverid = me.serverid || "";
+ me.url = `/api2/extjs/config/metrics/influxdb-http/${me.serverid}`;
+ me.tokenEmptyText = me.isCreate ? '' : gettext('unchanged');
+ me.method = me.isCreate ? 'POST' : 'PUT';
+ if (!me.isCreate) {
+ me.subject = `${me.subject}: ${me.serverid}`;
+ }
+ return {};
+ },
+
+ items: [
+ {
+ xtype: 'inputpanel',
+
+ column1: [
+ {
+ xtype: 'pmxDisplayEditField',
+ name: 'name',
+ fieldLabel: gettext('Name'),
+ allowBlank: false,
+ cbind: {
+ editable: '{isCreate}',
+ value: '{serverid}',
+ },
+ },
+ {
+ xtype: 'proxmoxtextfield',
+ name: 'url',
+ fieldLabel: gettext('URL'),
+ allowBlank: false,
+ },
+ ],
+
+ column2: [
+ {
+ xtype: 'checkbox',
+ name: 'enable',
+ fieldLabel: gettext('Enabled'),
+ inputValue: 1,
+ uncheckedValue: 0,
+ checked: true,
+ },
+ {
+ xtype: 'proxmoxtextfield',
+ name: 'organization',
+ fieldLabel: gettext('Organization'),
+ emptyText: 'proxmox',
+ cbind: {
+ deleteEmpty: '{!isCreate}',
+ },
+ },
+ {
+ xtype: 'proxmoxtextfield',
+ name: 'bucket',
+ fieldLabel: gettext('Bucket'),
+ emptyText: 'proxmox',
+ cbind: {
+ deleteEmpty: '{!isCreate}',
+ },
+ },
+ {
+ xtype: 'proxmoxtextfield',
+ name: 'token',
+ fieldLabel: gettext('Token'),
+ allowBlank: true,
+ deleteEmpty: false,
+ submitEmpty: false,
+ cbind: {
+ emptyText: '{tokenEmptyText}',
+ },
+ },
+ ],
+
+ columnB: [
+ {
+ xtype: 'proxmoxtextfield',
+ name: 'comment',
+ fieldLabel: gettext('Comment'),
+ cbind: {
+ deleteEmpty: '{!isCreate}',
+ },
+ },
+ ],
+
+ advancedColumn1: [
+ {
+ xtype: 'proxmoxintegerfield',
+ name: 'max-body-size',
+ fieldLabel: gettext('Batch Size (b)'),
+ minValue: 1,
+ emptyText: '25000000',
+ submitEmpty: false,
+ cbind: {
+ deleteEmpty: '{!isCreate}',
+ },
+ },
+ ],
+ },
+ ],
+});
+
+Ext.define('PBS.window.InfluxDbUdpEdit', {
+ extend: 'Proxmox.window.Edit',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ subject: 'InfluxDB (UDP)',
+
+ cbindData: function() {
+ let me = this;
+ me.isCreate = !me.serverid;
+ me.serverid = me.serverid || "";
+ me.url = `/api2/extjs/config/metrics/influxdb-udp/${me.serverid}`;
+ me.method = me.isCreate ? 'POST' : 'PUT';
+ if (!me.isCreate) {
+ me.subject = `${me.subject}: ${me.serverid}`;
+ }
+ return {};
+ },
+
+ items: [
+ {
+ xtype: 'inputpanel',
+
+ onGetValues: function(values) {
+ values.host += `:${values.port}`;
+ delete values.port;
+ return values;
+ },
+
+ column1: [
+ {
+ xtype: 'pmxDisplayEditField',
+ name: 'name',
+ fieldLabel: gettext('Name'),
+ allowBlank: false,
+ cbind: {
+ editable: '{isCreate}',
+ value: '{serverid}',
+ },
+ },
+ {
+ xtype: 'proxmoxtextfield',
+ name: 'host',
+ fieldLabel: gettext('Host'),
+ allowBlank: false,
+ },
+ ],
+
+ column2: [
+ {
+ xtype: 'checkbox',
+ name: 'enable',
+ fieldLabel: gettext('Enabled'),
+ inputValue: 1,
+ uncheckedValue: 0,
+ checked: true,
+ },
+ {
+ xtype: 'proxmoxintegerfield',
+ name: 'port',
+ minValue: 1,
+ maxValue: 65535,
+ fieldLabel: gettext('Port'),
+ allowBlank: false,
+ },
+ ],
+
+ columnB: [
+ {
+ xtype: 'proxmoxtextfield',
+ name: 'comment',
+ fieldLabel: gettext('Comment'),
+ cbind: {
+ deleteEmpty: '{!isCreate}',
+ },
+ },
+ ],
+
+ advancedColumn1: [
+ {
+ xtype: 'proxmoxintegerfield',
+ name: 'mtu',
+ fieldLabel: 'MTU',
+ minValue: 1,
+ emptyText: '1500',
+ submitEmpty: false,
+ cbind: {
+ deleteEmpty: '{!isCreate}',
+ },
+ },
+ ],
+ },
+ ],
+
+ initComponent: function() {
+ let me = this;
+ me.callParent();
+
+ me.load({
+ success: function(response, options) {
+ let values = response.result.data;
+ let [_match, host, port] = /^(.*):(\d+)$/.exec(values.host) || [];
+ values.host = host;
+ values.port = port;
+ me.setValues(values);
+ },
+ });
+ },
+});
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup v9 7/7] ui: add MetricServerView and use it
2022-06-10 11:17 [pbs-devel] [PATCH proxmox/proxmox-backup v9] add metrics server capability Dominik Csapak
` (6 preceding siblings ...)
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 6/7] ui: add window/InfluxDbEdit Dominik Csapak
@ 2022-06-10 11:17 ` Dominik Csapak
2022-06-13 8:06 ` [pbs-devel] applied-series: [PATCH proxmox/proxmox-backup v9] add metrics server capability Wolfgang Bumiller
8 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2022-06-10 11:17 UTC (permalink / raw)
To: pbs-devel
simple CRUD interface to show/add/edit/delete metric servers
it's a bit different from PVE's, so it's harder to reuse that than to
copy it. If we need it again, we can still refactor and combine them.
introduce 'PBS.Schema' class to hold the server type/xtype mappings
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/Makefile | 2 +
www/Schema.js | 15 ++++
www/SystemConfiguration.js | 6 ++
www/config/MetricServerView.js | 128 +++++++++++++++++++++++++++++++++
4 files changed, 151 insertions(+)
create mode 100644 www/Schema.js
create mode 100644 www/config/MetricServerView.js
diff --git a/www/Makefile b/www/Makefile
index 3a36daba..4aa2b026 100644
--- a/www/Makefile
+++ b/www/Makefile
@@ -36,6 +36,7 @@ TAPE_UI_FILES= \
JSSRC= \
Utils.js \
+ Schema.js \
form/TokenSelector.js \
form/AuthidSelector.js \
form/RemoteSelector.js \
@@ -62,6 +63,7 @@ JSSRC= \
config/WebauthnView.js \
config/CertificateView.js \
config/NodeOptionView.js \
+ config/MetricServerView.js \
window/ACLEdit.js \
window/BackupFileDownloader.js \
window/BackupGroupChangeOwner.js \
diff --git a/www/Schema.js b/www/Schema.js
new file mode 100644
index 00000000..dcd47a4a
--- /dev/null
+++ b/www/Schema.js
@@ -0,0 +1,15 @@
+Ext.define('PBS.Schema', {
+
+ singleton: true,
+
+ metricServer: {
+ 'influxdb-http': {
+ type: 'InfluxDB (HTTP)',
+ xtype: 'InfluxDbHttp',
+ },
+ 'influxdb-udp': {
+ type: 'InfluxDB (UDP)',
+ xtype: 'InfluxDbUdp',
+ },
+ },
+});
diff --git a/www/SystemConfiguration.js b/www/SystemConfiguration.js
index cdc9de35..ddb6ece5 100644
--- a/www/SystemConfiguration.js
+++ b/www/SystemConfiguration.js
@@ -45,6 +45,12 @@ Ext.define('PBS.SystemConfiguration', {
},
],
},
+ {
+ title: gettext('Metric Server'),
+ iconCls: 'fa fa-bar-chart',
+ xtype: 'pbsMetricServerView',
+ itemId: 'metrics',
+ },
{
xtype: 'panel',
title: gettext('Other'),
diff --git a/www/config/MetricServerView.js b/www/config/MetricServerView.js
new file mode 100644
index 00000000..5aaf81b6
--- /dev/null
+++ b/www/config/MetricServerView.js
@@ -0,0 +1,128 @@
+Ext.define('PBS.config.MetricServerView', {
+ extend: 'Ext.grid.Panel',
+ alias: ['widget.pbsMetricServerView'],
+
+ stateful: true,
+ stateId: 'grid-metricserver',
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ editWindow: function(xtype, id) {
+ let me = this;
+ Ext.create(`PBS.window.${xtype}Edit`, {
+ serverid: id,
+ autoShow: true,
+ autoLoad: !!id,
+ listeners: {
+ destroy: () => me.reload(),
+ },
+ });
+ },
+
+ addServer: function(button) {
+ this.editWindow(PBS.Schema.metricServer[button.type]?.xtype);
+ },
+
+ editServer: function() {
+ let me = this;
+ let view = me.getView();
+ let selection = view.getSelection();
+ if (!selection || selection.length < 1) {
+ return;
+ }
+
+ let cfg = selection[0].data;
+
+ me.editWindow(PBS.Schema.metricServer[cfg.type]?.xtype, cfg.name);
+ },
+
+ reload: function() {
+ this.getView().getStore().load();
+ },
+ },
+
+ store: {
+ autoLoad: true,
+ id: 'metricservers',
+ proxy: {
+ type: 'proxmox',
+ url: '/api2/json/admin/metrics',
+ },
+ },
+
+ columns: [
+ {
+ text: gettext('Name'),
+ flex: 2,
+ dataIndex: 'name',
+ },
+ {
+ text: gettext('Type'),
+ width: 150,
+ dataIndex: 'type',
+ renderer: (v) => PBS.Schema.metricServer[v]?.type ?? v,
+ },
+ {
+ text: gettext('Enabled'),
+ // the api returns an `enable` field (absent means enabled), not `disable`
+ dataIndex: 'enable',
+ width: 100,
+ renderer: (v) => Proxmox.Utils.format_boolean(v ?? true),
+ },
+ {
+ text: gettext('Target Server'),
+ width: 200,
+ dataIndex: 'server',
+ },
+ {
+ text: gettext('Comment'),
+ flex: 3,
+ dataIndex: 'comment',
+ renderer: Ext.htmlEncode,
+ },
+ ],
+
+ tbar: [
+ {
+ text: gettext('Add'),
+ menu: [
+ {
+ text: 'InfluxDB (HTTP)',
+ type: 'influxdb-http',
+ iconCls: 'fa fa-fw fa-bar-chart',
+ handler: 'addServer',
+ },
+ {
+ text: 'InfluxDB (UDP)',
+ type: 'influxdb-udp',
+ iconCls: 'fa fa-fw fa-bar-chart',
+ handler: 'addServer',
+ },
+ ],
+ },
+ {
+ text: gettext('Edit'),
+ xtype: 'proxmoxButton',
+ handler: 'editServer',
+ disabled: true,
+ },
+ {
+ xtype: 'proxmoxStdRemoveButton',
+ getUrl: (rec) => `/api2/extjs/config/metrics/${rec.data.type}/${rec.data.name}`,
+ getRecordName: (rec) => rec.data.name,
+ callback: 'reload',
+ },
+ ],
+
+ listeners: {
+ itemdblclick: 'editServer',
+ },
+
+ initComponent: function() {
+ var me = this;
+
+ me.callParent();
+
+ Proxmox.Utils.monStoreErrors(me, me.getStore());
+ },
+});
--
2.30.2
* [pbs-devel] applied-series: [PATCH proxmox/proxmox-backup v9] add metrics server capability
2022-06-10 11:17 [pbs-devel] [PATCH proxmox/proxmox-backup v9] add metrics server capability Dominik Csapak
` (7 preceding siblings ...)
2022-06-10 11:17 ` [pbs-devel] [PATCH proxmox-backup v9 7/7] ui: add MetricServerView and use it Dominik Csapak
@ 2022-06-13 8:06 ` Wolfgang Bumiller
8 siblings, 0 replies; 11+ messages in thread
From: Wolfgang Bumiller @ 2022-06-13 8:06 UTC (permalink / raw)
To: Dominik Csapak; +Cc: pbs-devel
applied series, thanks