From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68]) by lore.proxmox.com (Postfix) with ESMTPS id C44401FF16B for ; Fri, 7 Nov 2025 13:26:48 +0100 (CET) Received: from firstgate.proxmox.com (localhost [127.0.0.1]) by firstgate.proxmox.com (Proxmox) with ESMTP id 286DD10268; Fri, 7 Nov 2025 13:27:31 +0100 (CET) Date: Fri, 07 Nov 2025 13:27:22 +0100 Message-Id: To: "Proxmox Datacenter Manager development discussion" From: "Lukas Wagner" Mime-Version: 1.0 X-Mailer: aerc 0.21.0-0-g5549850facc2-dirty References: <20251105163546.450094-1-h.laimer@proxmox.com> <20251105163546.450094-11-h.laimer@proxmox.com> In-Reply-To: <20251105163546.450094-11-h.laimer@proxmox.com> X-Bm-Milter-Handled: 55990f41-d878-4baa-be0a-ee34c49e34d2 X-Bm-Transport-Timestamp: 1762518422201 X-SPAM-LEVEL: Spam detection results: 0 AWL -0.121 Adjusted score from AWL reputation of From: address BAYES_00 -1.9 Bayes spam probability is 0 to 1% DMARC_MISSING 0.1 Missing DMARC policy KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment POISEN_SPAM_PILL 0.1 Meta: its spam POISEN_SPAM_PILL_1 0.1 random spam to be learned in bayes POISEN_SPAM_PILL_3 0.1 random spam to be learned in bayes SPF_HELO_NONE 0.001 SPF: HELO does not publish an SPF Record SPF_PASS -0.001 SPF: sender matches SPF record Subject: Re: [pdm-devel] [PATCH proxmox-datacenter-manager v2 2/4] api: firewall: add option, rules and status endpoints X-BeenThere: pdm-devel@lists.proxmox.com X-Mailman-Version: 2.1.29 Precedence: list List-Id: Proxmox Datacenter Manager development discussion List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-To: Proxmox Datacenter Manager development discussion Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Errors-To: pdm-devel-bounces@lists.proxmox.com Sender: "pdm-devel" Some notes inline On Wed Nov 5, 2025 at 5:35 PM CET, Hannes Laimer wrote: > This adds 
the following endpoints > * for all PVE remotes: > - GET /pve/firewall/status > > * for PVE remotes > - GET pve/remotes/{remote}/firewall/options > - PUT pve/remotes/{remote}/firewall/options > - GET pve/remotes/{remote}/firewall/rules > - GET pve/remotes/{remote}/firewall/status > > * for PVE node > - GET pve/remotes/{remote}/nodes/{node}/firewall/options > - PUT pve/remotes/{remote}/nodes/{node}/firewall/options > - GET pve/remotes/{remote}/nodes/{node}/firewall/rules > - GET pve/remotes/{remote}/nodes/{node}/firewall/status > > * for guests (both lxc and qemu) > - GET pve/remotes/{remote}/[lxc|qemu]/{vmid}/firewall/options > - PUT pve/remotes/{remote}/[lxc|qemu]/{vmid}/firewall/options > - GET pve/remotes/{remote}/[lxc|qemu]/{vmid}/firewall/rules > > `options` endpoints are for receiving and updating the configured > firewall options for remotes, nodes and guests. Both lxc and qemu guests > share the same type for getting and updating their options. > > `rules` endpoints return the list of firewall rules that exist on the > entity. All remotes, nodes and guests return a list with items of the > same type. > > `status` endpoints return the firewall status of the entity, this > includes: > - name/id > - optional status (enabled, count of enabled rules) > - list of 'child-statuses', so: > for pve status (all remotes) -> list of remote-statuses > for remote status -> list of node-statuses > for node status -> list of guest-statuses > for guest status -> no list > - (only guest) type of guest > > Like this we have a way to limit the amount of requests the PDM has to > make in order to collect all the needed data. Given the rather large > amount of requests needed to assemble all the data this made more sense > than always loading everything and filtering on the client side. > > Data to build the status response is fetched in parallel using our > ParallelFetcher. But only for `all remotes` or `single remote` > status requests.
The status for single nodes is done sequentially. > > Signed-off-by: Hannes Laimer > --- > server/src/api/pve/firewall.rs | 854 +++++++++++++++++++++++++++++++++ > server/src/api/pve/lxc.rs | 1 + > server/src/api/pve/mod.rs | 3 + > server/src/api/pve/node.rs | 1 + > server/src/api/pve/qemu.rs | 1 + > 5 files changed, 860 insertions(+) > create mode 100644 server/src/api/pve/firewall.rs > > diff --git a/server/src/api/pve/firewall.rs b/server/src/api/pve/firewall.rs > new file mode 100644 > index 0000000..fb8ee82 > --- /dev/null > +++ b/server/src/api/pve/firewall.rs > @@ -0,0 +1,854 @@ > +use anyhow::Error; > +use pdm_api_types::{PRIV_RESOURCE_AUDIT, PRIV_RESOURCE_MODIFY, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY}; > +use proxmox_router::{list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap}; > +use proxmox_schema::api; > +use proxmox_sortable_macro::sortable; > +use pve_api_types::{ClusterResource, ClusterResourceKind, ClusterResourceType}; > +use std::sync::Arc; > + > +use pdm_api_types::firewall::{ > + FirewallStatus, GuestFirewallStatus, GuestKind, NodeFirewallStatus, RemoteFirewallStatus, > + RuleStat, > +}; > +use pdm_api_types::remotes::{Remote, REMOTE_ID_SCHEMA}; > +use pdm_api_types::{NODE_SCHEMA, VMID_SCHEMA}; > + > +use super::{connect_to_remote, find_node_for_vm}; > +use crate::connection::PveClient; > +use crate::parallel_fetcher::ParallelFetcher; tiny nit: usually we try to group the includes as follows: - std - third party crates - proxmox crates - crate-level imports > + > +// top-level firewall routers > +pub const PVE_FW_ROUTER: Router = Router::new() > + .get(&list_subdirs_api_method!(PVE_FW_SUBDIRS)) > + .subdirs(PVE_FW_SUBDIRS); > + > +pub const CLUSTER_FW_ROUTER: Router = Router::new() > + .get(&list_subdirs_api_method!(CLUSTER_FW_SUBDIRS)) > + .subdirs(CLUSTER_FW_SUBDIRS); > + > +pub const NODE_FW_ROUTER: Router = Router::new() > + .get(&list_subdirs_api_method!(NODE_FW_SUBDIRS)) > + .subdirs(NODE_FW_SUBDIRS); > + > +pub const 
LXC_FW_ROUTER: Router = Router::new() > + .get(&list_subdirs_api_method!(LXC_FW_SUBDIRS)) > + .subdirs(LXC_FW_SUBDIRS); > +pub const QEMU_FW_ROUTER: Router = Router::new() > + .get(&list_subdirs_api_method!(QEMU_FW_SUBDIRS)) > + .subdirs(QEMU_FW_SUBDIRS); > + > +// pve > +#[sortable] > +const PVE_FW_SUBDIRS: SubdirMap = &sorted!([("status", &PVE_STATUS_ROUTER),]); > + > +// cluster > +#[sortable] > +const CLUSTER_FW_SUBDIRS: SubdirMap = &sorted!([ > + ("options", &CLUSTER_OPTIONS_ROUTER), > + ("rules", &CLUSTER_RULES_ROUTER), > + ("status", &CLUSTER_STATUS_ROUTER), > +]); > + > +// node > +#[sortable] > +const NODE_FW_SUBDIRS: SubdirMap = &sorted!([ > + ("options", &NODE_OPTIONS_ROUTER), > + ("rules", &NODE_RULES_ROUTER), > + ("status", &NODE_STATUS_ROUTER), > +]); > + > +// guest > +#[sortable] > +const LXC_FW_SUBDIRS: SubdirMap = &sorted!([ > + ("options", &LXC_OPTIONS_ROUTER), > + ("rules", &LXC_RULES_ROUTER), > +]); > +#[sortable] > +const QEMU_FW_SUBDIRS: SubdirMap = &sorted!([ > + ("options", &QEMU_OPTIONS_ROUTER), > + ("rules", &QEMU_RULES_ROUTER), > +]); > + > +// /options > +const CLUSTER_OPTIONS_ROUTER: Router = Router::new() > + .get(&API_METHOD_CLUSTER_FIREWALL_OPTIONS) > + .put(&API_METHOD_UPDATE_CLUSTER_FIREWALL_OPTIONS); > + > +const NODE_OPTIONS_ROUTER: Router = Router::new() > + .get(&API_METHOD_NODE_FIREWALL_OPTIONS) > + .put(&API_METHOD_UPDATE_NODE_FIREWALL_OPTIONS); > + > +const LXC_OPTIONS_ROUTER: Router = Router::new() > + .get(&API_METHOD_LXC_FIREWALL_OPTIONS) > + .put(&API_METHOD_UPDATE_LXC_FIREWALL_OPTIONS); > +const QEMU_OPTIONS_ROUTER: Router = Router::new() > + .get(&API_METHOD_QEMU_FIREWALL_OPTIONS) > + .put(&API_METHOD_UPDATE_QEMU_FIREWALL_OPTIONS); > + > +// /rules > +const CLUSTER_RULES_ROUTER: Router = Router::new().get(&API_METHOD_CLUSTER_FIREWALL_RULES); > +const NODE_RULES_ROUTER: Router = Router::new().get(&API_METHOD_NODE_FIREWALL_RULES); > +const LXC_RULES_ROUTER: Router = Router::new().get(&API_METHOD_LXC_FIREWALL_RULES); > 
+const QEMU_RULES_ROUTER: Router = Router::new().get(&API_METHOD_QEMU_FIREWALL_RULES); > + > +// /status > +const PVE_STATUS_ROUTER: Router = Router::new().get(&API_METHOD_PVE_FIREWALL_STATUS); > +const CLUSTER_STATUS_ROUTER: Router = Router::new().get(&API_METHOD_CLUSTER_FIREWALL_STATUS); > +const NODE_STATUS_ROUTER: Router = Router::new().get(&API_METHOD_NODE_FIREWALL_STATUS); > + > +#[derive(Clone)] > +struct FirewallFetchContext { > + guests: Arc<Vec<ClusterResource>>, > +} > + > +#[derive(Clone, Debug)] > +struct ClusterFirewallData { > + status: Option<FirewallStatus>, > + guests: Vec<ClusterResource>, > +} > + > +async fn fetch_cluster_firewall_data( > + _context: (), > + remote: Remote, > + _node: String, // unused for cluster-level data > +) -> Result<ClusterFirewallData, Error> { > + let pve = crate::connection::make_pve_client(&remote)?; > + > + let guests = match pve.cluster_resources(Some(ClusterResourceKind::Vm)).await { > + Ok(guests) => guests, > + Err(_) => { > + return Ok(ClusterFirewallData { > + status: None, > + guests: vec![], > + }); > + } > + }; > + > + let options_response = pve.cluster_firewall_options(); > + let rules_response = pve.list_cluster_firewall_rules(); > + > + let enabled = options_response.await.map(|opts| opts.enable != Some(0)); > + let rules = rules_response.await.map(|rules| { > + let all = rules.len(); > + let active = rules.iter().filter(|r| r.enable == Some(1)).count(); > + RuleStat { all, active } > + }); > + > + let status = match (enabled, rules) { > + (Ok(enabled), Ok(rules)) => Some(FirewallStatus { enabled, rules }), > + _ => None, > + }; > + > + Ok(ClusterFirewallData { status, guests }) > +} > + > +async fn load_guests_firewall_status( > + pve: Arc<PveClient>, > + node: String, > + guests: &[ClusterResource], > +) -> Vec<GuestFirewallStatus> { > + let mut result = vec![]; > + > + let guests: Vec<(u32, String, GuestKind)> = guests > + .iter() > + .filter(|g| g.node.as_ref() == Some(&node)) > + .filter_map(|g| { > + let vmid = g.vmid?; > + let name = g.name.clone().unwrap_or("".to_string()); > + match g.ty { > +
ClusterResourceType::Lxc => Some((vmid, name, GuestKind::Lxc)), > + ClusterResourceType::Qemu => Some((vmid, name, GuestKind::Qemu)), > + _ => None, > + } > + }) > + .collect(); > + > + for (vmid, name, kind) in guests { > + let options_response = match kind { > + GuestKind::Lxc => pve.lxc_firewall_options(&node, vmid), > + GuestKind::Qemu => pve.qemu_firewall_options(&node, vmid), > + }; > + let rules_response = match kind { > + GuestKind::Lxc => pve.list_lxc_firewall_rules(&node, vmid), > + GuestKind::Qemu => pve.list_qemu_firewall_rules(&node, vmid), > + }; > + > + let enabled = options_response > + .await > + .map(|opts| opts.enable.unwrap_or_default()); > + let rules = rules_response.await.map(|rules| { > + let all = rules.len(); > + let active = rules.iter().filter(|r| r.enable == Some(1)).count(); > + RuleStat { all, active } > + }); I think you could technically use join! here to await both futures at the same time, which would perform both requests concurrently. But I guess this would then also need to be considered somehow in the connection limits of the ParallelFetcher. > + > + let status = match (enabled, rules) { > + (Ok(enabled), Ok(rules)) => Some(FirewallStatus { enabled, rules }), > + _ => None, > + }; > + > + result.push(GuestFirewallStatus { > + vmid, > + name, > + status, > + kind, > + }); > + } > + result > +} > + > +async fn fetch_node_firewall_status( > + context: FirewallFetchContext, > + remote: Remote, > + node: String, > +) -> Result<NodeFirewallStatus, Error> { > + let pve = crate::connection::make_pve_client(&remote)?; > + > + let options_response = pve.node_firewall_options(&node); > + let rules_response = pve.list_node_firewall_rules(&node); > + > + let enabled = options_response > + .await > + .map(|opts| opts.enable.unwrap_or_default()); > + let rules = rules_response.await.map(|rules| { > + let all = rules.len(); > + let active = rules.iter().filter(|r| r.enable == Some(1)).count(); > + RuleStat { all, active } > + }); > + > + let status = match
(enabled, rules) { > + (Ok(enabled), Ok(rules)) => Some(FirewallStatus { enabled, rules }), > + _ => None, > + }; > + > + let guests_status = load_guests_firewall_status(pve, node.clone(), &context.guests).await; > + > + Ok(NodeFirewallStatus { > + node, > + status, > + guests: guests_status, > + }) > +} > + > +#[api( > + returns: { > + type: Array, > + description: "Get firewall status of remotes", > + items: { type: RemoteFirewallStatus }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_SYS_AUDIT, false), I *think* this should be PRIV_RESOURCE_AUDIT; at the moment we only really use PRIV_SYS_AUDIT for the PDM host itself. > + }, > +)] > +/// Get firewall status of all PVE remotes. > +pub async fn pve_firewall_status( > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<Vec<RemoteFirewallStatus>, Error> { > + use std::collections::HashMap; > + > + let (remote_config, _) = pdm_config::remotes::config()?; > + > + let pve_remotes: Vec<Remote> = remote_config > + .iter() This could be into_iter(); then you can avoid the `remote.clone()` below > + .filter_map(|(_, remote)| match remote.ty { > + pdm_api_types::remotes::RemoteType::Pve => Some(remote.clone()), > + pdm_api_types::remotes::RemoteType::Pbs => None, > + }) > + .collect(); > + > + if pve_remotes.is_empty() { > + return Ok(vec![]); > + } > + > + // 1: fetch cluster-level data (status + guests) > + let cluster_fetcher = ParallelFetcher::new(()); > + let cluster_results = cluster_fetcher > + .do_for_all_remotes(pve_remotes.iter().cloned(), fetch_cluster_firewall_data) > + .await; > + > + // 2: build context with guests for each remote and fetch node-level data > + let mut guests_per_remote = HashMap::new(); > + for (remote_id, remote_result) in &cluster_results.remote_results { > + if let Ok(remote_result) = remote_result { > + if let Ok(node_result) = remote_result.node_results.get("localhost").unwrap() { > + guests_per_remote > + .insert(remote_id.clone(), Arc::new(node_result.data.guests.clone()));
> + } > + } > + } > + > + let context = FirewallFetchContext { > + guests: Arc::new(vec![]), > + }; > + > + let node_fetcher = ParallelFetcher::new(context); > + let node_results = node_fetcher > + .do_for_all_remote_nodes(pve_remotes.iter().cloned(), move |mut ctx, remote, node| { > + if let Some(guests) = guests_per_remote.get(&remote.id) { > + ctx.guests = guests.clone(); > + } > + fetch_node_firewall_status(ctx, remote, node) > + }) > + .await; > + > + // 3: combine results > + let mut result = Vec::new(); > + for remote in &pve_remotes { > + let mut cluster_status = cluster_results > + .remote_results > + .get(&remote.id) > + .and_then(|r| r.as_ref().ok()) > + .and_then(|r| r.node_results.get("localhost")) > + .and_then(|n| n.as_ref().ok()) > + .and_then(|n| n.data.status.clone()); > + > + let node_fetch_result = node_results.remote_results.get(&remote.id); > + > + let nodes = node_fetch_result > + .and_then(|r| r.as_ref().ok()) > + .map(|r| { > + r.node_results > + .values() > + .filter_map(|n| n.as_ref().ok().map(|n| n.data.clone())) > + .collect() > + }) > + .unwrap_or_default(); > + > + if node_fetch_result.and_then(|r| r.as_ref().err()).is_some() { > + cluster_status = None; > + } > + > + result.push(RemoteFirewallStatus { > + remote: remote.id.clone(), > + status: cluster_status, > + nodes, > + }); > + } > + > + Ok(result) > +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + }, > + }, > + returns: { > + type: Array, > + description: "Get firewall options.", > + items: { type: pve_api_types::ClusterFirewallOptions }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_SYS_AUDIT, false), Same here. > + }, > +)] > +/// Get cluster firewall options. 
> +pub async fn cluster_firewall_options( > + remote: String, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<pve_api_types::ClusterFirewallOptions, Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + let pve = connect_to_remote(&remotes, &remote)?; > + > + Ok(pve.cluster_firewall_options().await?) > +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + }, > + }, > + returns: { > + type: RemoteFirewallStatus, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_SYS_AUDIT, false), Same here. > + }, > +)] > +/// Get firewall status of a specific remote. > +pub async fn cluster_firewall_status( > + remote: String, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<RemoteFirewallStatus, Error> { > + let (remote_config, _) = pdm_config::remotes::config()?; > + > + let remote_obj = remote_config > + .iter() You can use into_iter here and avoid the clone below > + .find(|(id, _)| *id == &remote) > + .map(|(_, r)| r.clone()) > + .ok_or_else(|| anyhow::format_err!("Remote '{}' not found", remote))?; > + > + // 1: fetch cluster-level data (status + guests) > + let cluster_fetcher = ParallelFetcher::new(()); > + let cluster_results = cluster_fetcher > + .do_for_all_remotes( > + std::iter::once(remote_obj.clone()), > + fetch_cluster_firewall_data, > + ) > + .await; > + > + let cluster_data = cluster_results > + .remote_results > + .get(&remote) > + .and_then(|r| r.as_ref().ok()) > + .and_then(|r| r.node_results.get("localhost")) > + .and_then(|n| n.as_ref().ok()) > + .map(|n| &n.data); > + > + let (cluster_status, guests) = match cluster_data { > + Some(data) => (data.status.clone(), data.guests.clone()), > + None => { > + return Ok(RemoteFirewallStatus { > + remote, > + status: None, > + nodes: vec![], > + }); > + } > + }; > + > + // 2: fetch node-level data > + let context = FirewallFetchContext { > + guests: Arc::new(guests), > + }; > + > + let node_fetcher = ParallelFetcher::new(context); > + let node_results = node_fetcher > +
.do_for_all_remote_nodes(std::iter::once(remote_obj), fetch_node_firewall_status) > + .await; > + > + // 3: collect node results > + let node_fetch_result = node_results.remote_results.get(&remote); > + > + let nodes = node_fetch_result > + .and_then(|r| r.as_ref().ok()) > + .map(|r| { > + r.node_results > + .values() > + .filter_map(|n| n.as_ref().ok().map(|n| n.data.clone())) > + .collect() > + }) > + .unwrap_or_default(); > + > + let final_status = if node_fetch_result.and_then(|r| r.as_ref().err()).is_some() { > + None > + } else { > + cluster_status > + }; > + > + Ok(RemoteFirewallStatus { > + remote, > + status: final_status, > + nodes, > + }) > +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { > + schema: NODE_SCHEMA, > + }, > + }, > + }, > + returns: { > + type: Array, > + description: "Get firewall options.", > + items: { type: pve_api_types::NodeFirewallOptions }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_SYS_AUDIT, false), Same here. > + }, > +)] > +/// Get node's firewall options. > +pub async fn node_firewall_options( > + remote: String, > + node: String, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<pve_api_types::NodeFirewallOptions, Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + let pve = connect_to_remote(&remotes, &remote)?; > + > + Ok(pve.node_firewall_options(&node).await?) > +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { schema: NODE_SCHEMA }, > + }, > + }, > + returns: { > + type: NodeFirewallStatus, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_SYS_AUDIT, false), Same here. > + }, > +)] > +/// Get firewall status of a specific node.
> +pub async fn node_firewall_status( > + remote: String, > + node: String, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<NodeFirewallStatus, Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + let pve = connect_to_remote(&remotes, &remote)?; > + > + let guests = pve.cluster_resources(Some(ClusterResourceKind::Vm)).await?; > + > + let options_response = pve.node_firewall_options(&node); > + let rules_response = pve.list_node_firewall_rules(&node); > + > + let enabled = options_response > + .await > + .map(|opts| opts.enable.unwrap_or_default()); > + let rules = rules_response.await.map(|rules| { > + let all = rules.len(); > + let active = rules.iter().filter(|r| r.enable == Some(1)).count(); > + RuleStat { all, active } > + }); > + > + let status = match (enabled, rules) { > + (Ok(enabled), Ok(rules)) => Some(FirewallStatus { enabled, rules }), > + _ => None, > + }; > + > + let guests_status = load_guests_firewall_status(pve, node.clone(), &guests).await; > + > + Ok(NodeFirewallStatus { > + node, > + status, > + guests: guests_status, > + }) > +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + }, > + }, > + returns: { > + type: Array, > + description: "List cluster firewall rules.", > + items: { type: pve_api_types::ListFirewallRules }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_SYS_AUDIT, false), Same here. > + }, > +)] > +/// Get cluster firewall rules. > +pub async fn cluster_firewall_rules( > + remote: String, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<Vec<pve_api_types::ListFirewallRules>, Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + let pve = connect_to_remote(&remotes, &remote)?; > + > + Ok(pve.list_cluster_firewall_rules().await?)
> +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { > + schema: NODE_SCHEMA, > + optional: true, > + }, > + vmid: { schema: VMID_SCHEMA }, > + }, > + }, > + returns: { > + type: Array, > + description: "Get firewall options.", > + items: { type: pve_api_types::GuestFirewallOptions }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false), > + }, > +)] > +/// Get LXC firewall options. > +pub async fn lxc_firewall_options( > + remote: String, > + node: Option<String>, > + vmid: u32, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<pve_api_types::GuestFirewallOptions, Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + > + let pve = connect_to_remote(&remotes, &remote)?; > + > + let node = find_node_for_vm(node, vmid, pve.as_ref()).await?; > + > + Ok(pve.lxc_firewall_options(&node, vmid).await?) > +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + update: { > + type: pve_api_types::UpdateClusterFirewallOptions, > + flatten: true, > + }, > + }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_SYS_MODIFY, false), Same here. > + }, > +)] > +/// Update cluster firewall configuration > +pub async fn update_cluster_firewall_options( > + remote: String, > + update: pve_api_types::UpdateClusterFirewallOptions, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<(), Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + let pve = connect_to_remote(&remotes, &remote)?; > + > + Ok(pve.set_cluster_firewall_options(update).await?)
> +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { > + schema: NODE_SCHEMA, > + }, > + update: { > + type: pve_api_types::UpdateNodeFirewallOptions, > + flatten: true, > + }, > + }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_SYS_MODIFY, false), Same here, but with PRIV_RESOURCE_MODIFY > + }, > +)] > +/// Update a node's firewall configuration > +pub async fn update_node_firewall_options( > + remote: String, > + node: String, > + update: pve_api_types::UpdateNodeFirewallOptions, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<(), Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + let pve = connect_to_remote(&remotes, &remote)?; > + > + Ok(pve.set_node_firewall_options(&node, update).await?) > +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { > + schema: NODE_SCHEMA, > + }, > + }, > + }, > + returns: { > + type: Array, > + description: "List node firewall rules.", > + items: { type: pve_api_types::ListFirewallRules }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}"], PRIV_SYS_AUDIT, false), Same here. > + }, > +)] > +/// Get node firewall rules. > +pub async fn node_firewall_rules( > + remote: String, > + node: String, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<Vec<pve_api_types::ListFirewallRules>, Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + let pve = connect_to_remote(&remotes, &remote)?; > + > + Ok(pve.list_node_firewall_rules(&node).await?)
> +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { > + schema: NODE_SCHEMA, > + optional: true, > + }, > + vmid: { schema: VMID_SCHEMA }, > + }, > + }, > + returns: { > + type: Array, > + description: "Get firewall options.", > + items: { type: pve_api_types::GuestFirewallOptions }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false), > + }, > +)] > +/// Get QEMU firewall options. > +pub async fn qemu_firewall_options( > + remote: String, > + node: Option<String>, > + vmid: u32, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<pve_api_types::GuestFirewallOptions, Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + > + let pve = connect_to_remote(&remotes, &remote)?; > + > + let node = find_node_for_vm(node, vmid, pve.as_ref()).await?; > + > + Ok(pve.qemu_firewall_options(&node, vmid).await?) > +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { > + schema: NODE_SCHEMA, > + optional: true, > + }, > + vmid: { schema: VMID_SCHEMA, }, > + update: { > + type: pve_api_types::UpdateGuestFirewallOptions, > + flatten: true, > + }, > + }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MODIFY, false), > + }, > +)] > +/// Update LXC firewall options > +pub async fn update_lxc_firewall_options( > + remote: String, > + node: Option<String>, > + vmid: u32, > + update: pve_api_types::UpdateGuestFirewallOptions, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<(), Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + > + let pve = connect_to_remote(&remotes, &remote)?; > + > + let node = find_node_for_vm(node, vmid, pve.as_ref()).await?; > + > + Ok(pve.set_lxc_firewall_options(&node, vmid, update).await?)
> +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { > + schema: NODE_SCHEMA, > + optional: true, > + }, > + vmid: { schema: VMID_SCHEMA, }, > + update: { > + type: pve_api_types::UpdateGuestFirewallOptions, > + flatten: true, > + }, > + }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_MODIFY, false), > + }, > +)] > +/// Update QEMU firewall options > +pub async fn update_qemu_firewall_options( > + remote: String, > + node: Option<String>, > + vmid: u32, > + update: pve_api_types::UpdateGuestFirewallOptions, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<(), Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + > + let pve = connect_to_remote(&remotes, &remote)?; > + > + let node = find_node_for_vm(node, vmid, pve.as_ref()).await?; > + > + Ok(pve.set_qemu_firewall_options(&node, vmid, update).await?) > +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { > + schema: NODE_SCHEMA, > + optional: true, > + }, > + vmid: { schema: VMID_SCHEMA }, > + }, > + }, > + returns: { > + type: Array, > + description: "List LXC firewall rules.", > + items: { type: pve_api_types::ListFirewallRules }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false), > + }, > +)] > +/// Get LXC firewall rules. > +pub async fn lxc_firewall_rules( > + remote: String, > + node: Option<String>, > + vmid: u32, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<Vec<pve_api_types::ListFirewallRules>, Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + > + let pve = connect_to_remote(&remotes, &remote)?; > + > + let node = find_node_for_vm(node, vmid, pve.as_ref()).await?; > + > + Ok(pve.list_lxc_firewall_rules(&node, vmid).await?)
> +} > + > +#[api( > + input: { > + properties: { > + remote: { schema: REMOTE_ID_SCHEMA }, > + node: { > + schema: NODE_SCHEMA, > + optional: true, > + }, > + vmid: { schema: VMID_SCHEMA }, > + }, > + }, > + returns: { > + type: Array, > + description: "List QEMU firewall rules.", > + items: { type: pve_api_types::ListFirewallRules }, > + }, > + access: { > + permission: &Permission::Privilege(&["resource", "{remote}", "guest", "{vmid}"], PRIV_RESOURCE_AUDIT, false), > + }, > +)] > +/// Get QEMU firewall rules. > +pub async fn qemu_firewall_rules( > + remote: String, > + node: Option<String>, > + vmid: u32, > + _rpcenv: &mut dyn RpcEnvironment, > +) -> Result<Vec<pve_api_types::ListFirewallRules>, Error> { > + let (remotes, _) = pdm_config::remotes::config()?; > + > + let pve = connect_to_remote(&remotes, &remote)?; > + > + let node = find_node_for_vm(node, vmid, pve.as_ref()).await?; > + > + Ok(pve.list_qemu_firewall_rules(&node, vmid).await?) > +} > diff --git a/server/src/api/pve/lxc.rs b/server/src/api/pve/lxc.rs > index 1b05a30..e1658d7 100644 > --- a/server/src/api/pve/lxc.rs > +++ b/server/src/api/pve/lxc.rs > @@ -33,6 +33,7 @@ const LXC_VM_ROUTER: Router = Router::new() > #[sortable] > const LXC_VM_SUBDIRS: SubdirMap = &sorted!([ > ("config", &Router::new().get(&API_METHOD_LXC_GET_CONFIG)), > + ("firewall", &super::firewall::LXC_FW_ROUTER), > ("rrddata", &super::rrddata::LXC_RRD_ROUTER), > ("start", &Router::new().post(&API_METHOD_LXC_START)), > ("status", &Router::new().get(&API_METHOD_LXC_GET_STATUS)), > diff --git a/server/src/api/pve/mod.rs b/server/src/api/pve/mod.rs > index 2b50afb..34d9b76 100644 > --- a/server/src/api/pve/mod.rs > +++ b/server/src/api/pve/mod.rs > @@ -33,6 +33,7 @@ use crate::connection::PveClient; > use crate::connection::{self, probe_tls_connection}; > use crate::remote_tasks; > > +mod firewall; > mod lxc; > mod node; > mod qemu; > @@ -47,6 +48,7 @@ pub const ROUTER: Router = Router::new() > #[sortable] > const SUBDIRS: SubdirMap = &sorted!([ > ("remotes", &REMOTES_ROUTER),
> + ("firewall", &firewall::PVE_FW_ROUTER), > ("probe-tls", &Router::new().post(&API_METHOD_PROBE_TLS)), > ("scan", &Router::new().post(&API_METHOD_SCAN_REMOTE_PVE)), > ( > @@ -66,6 +68,7 @@ const MAIN_ROUTER: Router = Router::new() > #[sortable] > const REMOTE_SUBDIRS: SubdirMap = &sorted!([ > ("lxc", &lxc::ROUTER), > + ("firewall", &firewall::CLUSTER_FW_ROUTER), > ("nodes", &NODES_ROUTER), > ("qemu", &qemu::ROUTER), > ("resources", &RESOURCES_ROUTER), > diff --git a/server/src/api/pve/node.rs b/server/src/api/pve/node.rs > index 301c0b1..3c4fba8 100644 > --- a/server/src/api/pve/node.rs > +++ b/server/src/api/pve/node.rs > @@ -16,6 +16,7 @@ pub const ROUTER: Router = Router::new() > #[sortable] > const SUBDIRS: SubdirMap = &sorted!([ > ("apt", &crate::api::remote_updates::APT_ROUTER), > + ("firewall", &super::firewall::NODE_FW_ROUTER), > ("rrddata", &super::rrddata::NODE_RRD_ROUTER), > ("network", &Router::new().get(&API_METHOD_GET_NETWORK)), > ("storage", &STORAGE_ROUTER), > diff --git a/server/src/api/pve/qemu.rs b/server/src/api/pve/qemu.rs > index 05fa92c..9a446ac 100644 > --- a/server/src/api/pve/qemu.rs > +++ b/server/src/api/pve/qemu.rs > @@ -33,6 +33,7 @@ const QEMU_VM_ROUTER: Router = Router::new() > #[sortable] > const QEMU_VM_SUBDIRS: SubdirMap = &sorted!([ > ("config", &Router::new().get(&API_METHOD_QEMU_GET_CONFIG)), > + ("firewall", &super::firewall::QEMU_FW_ROUTER), > ("rrddata", &super::rrddata::QEMU_RRD_ROUTER), > ("start", &Router::new().post(&API_METHOD_QEMU_START)), > ("status", &Router::new().get(&API_METHOD_QEMU_GET_STATUS)), _______________________________________________ pdm-devel mailing list pdm-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pdm-devel
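P.S.: The `into_iter()` suggestion from the review could look roughly like this. This is a minimal, self-contained sketch with stand-in `Remote`/`RemoteType` types and a plain `Vec` in place of the real config container from pdm_api_types, so the exact shape of the real code will differ:

```rust
// Stand-in types for illustration only; the real Remote lives in
// pdm_api_types::remotes and has more fields.
#[derive(Debug, PartialEq)]
enum RemoteType {
    Pve,
    Pbs,
}

#[derive(Debug)]
struct Remote {
    id: String,
    ty: RemoteType,
}

// By consuming the config with into_iter(), each Remote can be moved
// out of the iterator directly, so no remote.clone() is needed.
fn pve_remotes(config: Vec<(String, Remote)>) -> Vec<Remote> {
    config
        .into_iter()
        .filter_map(|(_, remote)| match remote.ty {
            // Unit-variant patterns only read the discriminant, so the
            // whole Remote is still owned and can be returned as-is.
            RemoteType::Pve => Some(remote),
            RemoteType::Pbs => None,
        })
        .collect()
}

fn main() {
    let config = vec![
        ("a".to_string(), Remote { id: "a".into(), ty: RemoteType::Pve }),
        ("b".to_string(), Remote { id: "b".into(), ty: RemoteType::Pbs }),
    ];
    let pve = pve_remotes(config);
    assert_eq!(pve.len(), 1);
    assert_eq!(pve[0].id, "a");
    println!("kept {} PVE remote(s)", pve.len());
}
```

Whether this applies directly depends on whether the later code still needs the config by reference; if it does, the clone may be the simpler option after all.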