From: Stefan Hanreich <s.hanreich@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH proxmox-firewall] firewall: properly clean up tables when firewall is inactive
Date: Tue, 23 Apr 2024 11:21:39 +0200
Message-ID: <20240423092139.94402-1-s.hanreich@proxmox.com>

When executing multiple nft commands in a single batch, nft applies
them transactionally: either all of them succeed or none do. When only
the host firewall or only the guest firewall is active, only one of
the two tables exists, so deleting both tables in one batch fails
entirely and the existing table is never removed. To fix this, send
the delete commands as separate batches.
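
For illustration, the two variants roughly correspond to the following
libnftables JSON documents (a minimal sketch; the family/table names
are assumptions for the example, the real ones come from
Self::cluster_table() and Self::guest_table()):

    // A minimal sketch, assuming illustrative family/table names.
    fn main() {
        // Old behaviour: both deletes in one batch, i.e. one nft
        // transaction. If either table does not exist, nft rejects
        // the whole document and the existing table stays in place.
        let _combined = r#"{"nftables": [
            {"delete": {"table": {"family": "inet", "name": "proxmox-firewall"}}},
            {"delete": {"table": {"family": "bridge", "name": "proxmox-firewall-guests"}}}
        ]}"#;

        // New behaviour: one batch per table, i.e. two independent
        // transactions. A missing table only fails its own batch.
        let _separate = [
            r#"{"nftables": [{"delete":
                {"table": {"family": "inet", "name": "proxmox-firewall"}}}]}"#,
            r#"{"nftables": [{"delete":
                {"table": {"family": "bridge", "name": "proxmox-firewall-guests"}}}]}"#,
        ];
    }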

It might make sense to support running multiple separate batches in
the NftClient in the future, in order to avoid having to call nft
twice; a possible shape is sketched below.
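
A rough sketch only: the method name is hypothetical, it reuses the
existing NftClient/Commands/NftError types from this crate, and this
naive version still spawns nft once per batch (a real implementation
would feed all batches to a single nft process):

    impl NftClient {
        /// Hypothetical helper: run each batch as its own nft
        /// transaction, so a failing batch (e.g. a table that does
        /// not exist) does not roll back the other batches. Only
        /// hard I/O errors abort early, mirroring remove_firewall().
        pub fn run_json_command_batches(
            batches: &[Commands],
        ) -> Result<(), std::io::Error> {
            for batch in batches {
                if let Err(NftError::Io(err)) = Self::run_json_commands(batch) {
                    return Err(err);
                }
            }
            Ok(())
        }
    }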

Signed-off-by: Stefan Hanreich <s.hanreich@proxmox.com>
---
 proxmox-firewall/src/bin/proxmox-firewall.rs |  9 +++++----
 proxmox-firewall/src/firewall.rs             | 10 +++++-----
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/proxmox-firewall/src/bin/proxmox-firewall.rs b/proxmox-firewall/src/bin/proxmox-firewall.rs
index 2f4875f..4e07993 100644
--- a/proxmox-firewall/src/bin/proxmox-firewall.rs
+++ b/proxmox-firewall/src/bin/proxmox-firewall.rs
@@ -12,11 +12,12 @@ const RULE_BASE: &str = include_str!("../../resources/proxmox-firewall.nft");
 
 fn remove_firewall() -> Result<(), std::io::Error> {
     log::info!("removing existing firewall rules");
-    let commands = Firewall::remove_commands();
 
-    // can ignore other errors, since it fails when tables do not exist
-    if let Err(NftError::Io(err)) = NftClient::run_json_commands(&commands) {
-        return Err(err);
+    for command in Firewall::remove_commands() {
+        // can ignore other errors, since it fails when tables do not exist
+        if let Err(NftError::Io(err)) = NftClient::run_json_commands(&command) {
+            return Err(err);
+        }
     }
 
     Ok(())
diff --git a/proxmox-firewall/src/firewall.rs b/proxmox-firewall/src/firewall.rs
index 2195a07..b137f58 100644
--- a/proxmox-firewall/src/firewall.rs
+++ b/proxmox-firewall/src/firewall.rs
@@ -157,11 +157,11 @@ impl Firewall {
         }
     }
 
-    pub fn remove_commands() -> Commands {
-        Commands::new(vec![
-            Delete::table(Self::cluster_table()),
-            Delete::table(Self::guest_table()),
-        ])
+    pub fn remove_commands() -> Vec<Commands> {
+        vec![
+            Commands::new(vec![Delete::table(Self::cluster_table())]),
+            Commands::new(vec![Delete::table(Self::guest_table())]),
+        ]
     }
 
     fn create_management_ipset(&self, commands: &mut Commands) -> Result<(), Error> {
-- 
2.39.2


Thread overview: 2 messages
2024-04-23  9:21 Stefan Hanreich [this message]
2024-04-23 14:32 ` [pve-devel] applied: " Thomas Lamprecht
