From mboxrd@z Thu Jan 1 00:00:00 1970
From: Manuel Federanko
To: pve-devel@lists.proxmox.com
Subject: [PATCH docs v2] pve-firewall: link to implicit rules section
Date: Tue, 5 May 2026 13:20:49 +0200
Message-ID: <20260505112049.52552-1-m.federanko@proxmox.com>
List-Id: Proxmox VE development discussion

Updated the documentation note to link to the default rules.
Also added a section directing users to the macro system if they need
additional rules. Add a note to the Ceph section that the firewall has
to be configured if it is enabled.

Suggested-by: Friedrich Weber
Signed-off-by: Manuel Federanko
---
 pve-firewall.adoc | 14 +++++++++++---
 pveceph.adoc      |  3 +++
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/pve-firewall.adoc b/pve-firewall.adoc
index f04134a..21608c9 100644
--- a/pve-firewall.adoc
+++ b/pve-firewall.adoc
@@ -172,9 +172,15 @@ set the enable option here:
 enable: 1
 ----
 
-IMPORTANT: If you enable the firewall, traffic to all hosts is blocked by
-default. Only exceptions is WebGUI(8006) and ssh(22) from your local
-network.
+IMPORTANT: If you enable the firewall, traffic to all hosts will be
+blocked by default, with some exceptions for traffic coming from the
+local network to the WebUI, SSH and other important services. A full
+list of the default firewall rules can be found at
+xref:pve_firewall_default_rules[].
+
+Should you have other services running which communicate over the
+network, you will have to allow them separately. For some common
+services there are `macros` available.
 
 If you want to administrate your {pve} hosts from remote, you need to
 create rules to allow traffic from those remote IPs to the web
@@ -510,6 +516,8 @@ following traffic is still allowed for all {pve} hosts in the cluster:
 * TCP traffic from management hosts to port 3128 for connections to the SPICE
   proxy
 * TCP traffic from management hosts to port 22 to allow ssh access
+* TCP traffic from management hosts to the port range 60000 to 60050 allowing
+  traffic for migrations
 * UDP traffic in the cluster network to ports 5405-5412 for corosync
 * UDP multicast traffic in the cluster network
 * ICMP traffic type 3 (Destination Unreachable), 4 (congestion control) or 11
diff --git a/pveceph.adoc b/pveceph.adoc
index fc8a072..05b21bb 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -180,6 +180,9 @@ high-performance setups:
 * one medium bandwidth (1 Gbps) exclusive for the latency sensitive corosync
   cluster communication.
 
+If a firewall is enabled, you will have to open the ports used by Ceph. The
+easiest way to do this is to use a firewall `macro`.
+
 [[pve_ceph_recommendation_disk]]
 .Disks
 When planning the size of your Ceph cluster, it is important to take the
-- 
2.47.3
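For readers of the archive: the macro note added by this patch could be put to use roughly as in the sketch below, a hypothetical `/etc/pve/firewall/cluster.fw` fragment (the subnets `192.168.2.0/24` and `10.10.10.0/24` are made-up examples; the `SSH` and `Ceph` macro names follow the macros shipped with pve-firewall, so check `Firewall -> Insert: Rule -> Macro` or the macro list in the reference documentation for your version):

----
[OPTIONS]
# Enabling the firewall blocks inbound traffic apart from the default rules
enable: 1

[RULES]
# Allow SSH and the WebUI (port 8006) from a management subnet
IN SSH(ACCEPT) -source 192.168.2.0/24
IN ACCEPT -source 192.168.2.0/24 -p tcp -dport 8006
# Open the Ceph ports via the shipped macro instead of listing them by hand
IN Ceph(ACCEPT) -source 10.10.10.0/24
----

Using a macro here means the rule keeps working if a future Ceph release changes its port usage and the macro is updated accordingly.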