From: Max Carrara
To: pbs-devel@lists.proxmox.com
Cc: Wolfgang Bumiller
Date: Tue, 9 Jul 2024 15:16:37 +0200
Message-Id: <20240709131637.266675-4-m.carrara@proxmox.com>
In-Reply-To: <20240709131637.266675-1-m.carrara@proxmox.com>
References: <20240709131637.266675-1-m.carrara@proxmox.com>
Subject: [pbs-devel] [PATCH v3 proxmox 3/3] fix #5105: rest-server: connection: overhaul TLS handshake check logic

On rare occasions, the TLS "client hello" message [1] is delayed after
a connection with the server was established, which causes HTTPS
requests to fail before TLS was even negotiated. In these cases, the
server would incorrectly respond with "HTTP/1.1 400 Bad Request"
instead of closing the connection (or similar).

The reasons for the "client hello" being delayed seem to vary; one
user noticed that the issue went away completely after they turned
off UFW [2]. Another user noticed (during private correspondence)
that the issue only appeared when connecting to their PBS instance
via WAN, but not from within their VPN. In the WAN case a firewall
was also present. The same user kindly provided tcpdumps and strace
logs on request.

The issue was finally reproduced with the following Python script:

    import socket
    import time

    HOST: str = ...
    PORT: int = ...

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((HOST, PORT))

        time.sleep(1.5)  # simulate firewall / proxy / etc. delay

        sock.sendall(b"\x16\x03\x01\x02\x00")

        data = sock.recv(256)
        print(data)

The additional delay before sending the first 5 bytes of the "client
hello" message causes the handshake checking logic to incorrectly
fall back to plain HTTP.

All of this is fixed by the following:

1. Increase the timeout duration to 10 seconds (from 1)
2. Instead of falling back to plain HTTP, refuse to accept the
   connection if the TLS handshake wasn't initiated before the
   timeout limit is reached
3. Only accept plain HTTP if the first 5 bytes do not correspond to a
   TLS handshake fragment [3] (see the sketch after this list)
4. Do not take the last number of bytes that were in the buffer into
   account; instead, only perform the actual handshake check once 5
   bytes are in the peek buffer, using some of tokio's low-level
   functionality
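
For illustration, a rough, self-contained sketch of the kind of
5-byte check that point 3 refers to. The helper name below is made up
for this sketch and is not the contains_tls_handshake_fragment()
helper used by proxmox-rest-server; it only shows how the first bytes
of a TLS record header can be tested against RFC 8446, section 5.1 [3]:

    /// Illustrative only: checks whether the first 5 peeked bytes look
    /// like the record header of a TLS handshake fragment
    /// (RFC 8446, section 5.1).
    fn looks_like_tls_handshake(buf: &[u8]) -> bool {
        if buf.len() < 5 {
            return false;
        }

        // ContentType: handshake == 0x16
        // legacy_record_version: major byte is always 0x03
        // length: big-endian u16, must not exceed 2^14
        let length = u16::from_be_bytes([buf[3], buf[4]]) as usize;

        buf[0] == 0x16 && buf[1] == 0x03 && length <= (1 << 14)
    }

    fn main() {
        // The 5 bytes sent by the reproduction script above pass the check ...
        assert!(looks_like_tls_handshake(&[0x16, 0x03, 0x01, 0x02, 0x00]));
        // ... while the start of a plain-HTTP request line does not.
        assert!(!looks_like_tls_handshake(b"GET /"));
    }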
Regarding 1.: This should be generous enough for any client to be
able to initiate a TLS handshake, despite its surrounding
circumstances.

Regarding 4.: While this is not 100% related to the issue, peeking
into the buffer in this manner should ensure that our implementation
here remains correct, even if the kernel's underlying behaviour
regarding edge-triggering is changed [4]. At the same time, there's
no need for busy-waiting and continuously yielding to the event loop
anymore.

[1]: https://www.rfc-editor.org/rfc/rfc8446.html#section-4.1.2
[2]: https://forum.proxmox.com/threads/disable-default-http-redirects-on-8007.142312/post-675352
[3]: https://www.rfc-editor.org/rfc/rfc8446.html#section-5.1
[4]: https://lwn.net/Articles/864947/

Signed-off-by: Max Carrara
---
Changes v1 --> v2:
  * use `socket.async_io` instead of `socket.peek` and rework the
    future that performs the handshake check
  * adapt commit message
  * fix reference in commit message

Changes v2 --> v3:
  * use `std::mem::ManuallyDrop` instead of `std::mem::forget`
  * adapt comment accordingly

 proxmox-rest-server/src/connection.rs | 110 ++++++++++++++------------
 1 file changed, 61 insertions(+), 49 deletions(-)

diff --git a/proxmox-rest-server/src/connection.rs b/proxmox-rest-server/src/connection.rs
index 470021d7..2ca83fe2 100644
--- a/proxmox-rest-server/src/connection.rs
+++ b/proxmox-rest-server/src/connection.rs
@@ -2,7 +2,10 @@
 //!
 //! Hyper building block.
 
+use std::io;
+use std::mem::ManuallyDrop;
 use std::net::SocketAddr;
+use std::os::fd::FromRawFd;
 use std::os::unix::io::AsRawFd;
 use std::path::PathBuf;
 use std::pin::Pin;
@@ -418,70 +421,79 @@ impl AcceptBuilder {
         secure_sender: ClientSender,
         insecure_sender: InsecureClientSender,
     ) {
+        const CLIENT_HANDSHAKE_TIMEOUT: Duration = Duration::from_secs(10);
+
         let peer = state.peer;
 
-        let client_initiates_handshake = {
-            #[cfg(feature = "rate-limited-stream")]
-            let socket_ref = state.socket.inner();
+        #[cfg(feature = "rate-limited-stream")]
+        let socket_ref = state.socket.inner();
 
-            #[cfg(not(feature = "rate-limited-stream"))]
-            let socket_ref = &state.socket;
+        #[cfg(not(feature = "rate-limited-stream"))]
+        let socket_ref = &state.socket;
 
-            match Self::wait_for_client_tls_handshake(socket_ref).await {
-                Ok(initiates_handshake) => initiates_handshake,
-                Err(err) => {
-                    log::error!("[{peer}] error checking for TLS handshake: {err}");
-                    return;
-                }
-            }
-        };
-
-        if !client_initiates_handshake {
-            let insecure_stream = Box::pin(state.socket);
+        let handshake_res =
+            Self::wait_for_client_tls_handshake(socket_ref, CLIENT_HANDSHAKE_TIMEOUT).await;
 
-            if insecure_sender.send(Ok(insecure_stream)).await.is_err() && flags.is_debug {
-                log::error!("[{peer}] detected closed connection channel")
+        match handshake_res {
+            Ok(true) => {
+                Self::do_accept_tls(state, flags, secure_sender).await;
             }
+            Ok(false) => {
+                let insecure_stream = Box::pin(state.socket);
 
-            return;
+                if let Err(send_err) = insecure_sender.send(Ok(insecure_stream)).await {
+                    log::error!("[{peer}] failed to accept connection - connection channel closed: {send_err}");
+                }
+            }
+            Err(err) => {
+                log::error!("[{peer}] failed to check for TLS handshake: {err}");
+            }
         }
-
-        Self::do_accept_tls(state, flags, secure_sender).await
     }
 
-    async fn wait_for_client_tls_handshake(incoming_stream: &TcpStream) -> Result<bool, Error> {
-        const MS_TIMEOUT: u64 = 1000;
-        const BYTES_BUF_SIZE: usize = 128;
-
-        let mut buf = [0; BYTES_BUF_SIZE];
-        let mut last_peek_size = 0;
+    async fn wait_for_client_tls_handshake(
+        incoming_stream: &TcpStream,
+        timeout: Duration,
+    ) -> Result<bool, Error> {
+        const HANDSHAKE_BYTES_LEN: usize = 5;
 
         let future = async {
-            loop {
-                let peek_size = incoming_stream
-                    .peek(&mut buf)
-                    .await
-                    .context("couldn't peek into incoming tcp stream")?;
-
-                if contains_tls_handshake_fragment(&buf) {
-                    return Ok(true);
-                }
-
-                // No more new data came in
-                if peek_size == last_peek_size {
-                    return Ok(false);
-                }
-
-                last_peek_size = peek_size;
-
-                // explicitly yield to event loop; this future otherwise blocks ad infinitum
-                tokio::task::yield_now().await;
-            }
+            incoming_stream
+                .async_io(tokio::io::Interest::READABLE, || {
+                    let mut buf = [0; HANDSHAKE_BYTES_LEN];
+
+                    // Convert to standard lib TcpStream so we can peek without interfering
+                    // with tokio's internals. Wrap the stream in ManuallyDrop in order to prevent
+                    // the destructor from being called, closing the connection and messing up
+                    // invariants.
+                    let raw_fd = incoming_stream.as_raw_fd();
+                    let std_stream =
+                        unsafe { ManuallyDrop::new(std::net::TcpStream::from_raw_fd(raw_fd)) };
+
+                    let peek_res = std_stream.peek(&mut buf);
+
+                    match peek_res {
+                        // If we didn't get enough bytes, raise an EAGAIN / EWOULDBLOCK which tells
+                        // tokio to await the readiness of the socket again. This should normally
+                        // only be used if the socket isn't actually ready, but is fine to do here
+                        // in our case.
+                        //
+                        // This means we will peek into the stream's queue until we got
+                        // HANDSHAKE_BYTE_LEN bytes or an error.
+                        Ok(peek_len) if peek_len < HANDSHAKE_BYTES_LEN => {
+                            Err(io::ErrorKind::WouldBlock.into())
+                        }
+                        // Either we got Ok(HANDSHAKE_BYTES_LEN) or some error.
+                        res => res.map(|_| contains_tls_handshake_fragment(&buf)),
+                    }
+                })
+                .await
+                .context("couldn't peek into incoming TCP stream")
         };
 
-        tokio::time::timeout(Duration::from_millis(MS_TIMEOUT), future)
+        tokio::time::timeout(timeout, future)
             .await
-            .unwrap_or(Ok(false))
+            .context("timed out while waiting for client to initiate TLS handshake")?
     }
 }
-- 
2.39.2


_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel