Date: Wed, 15 Nov 2023 11:27:10 +0100
From: Fabian Grünbichler
To: Fiona Ebner, Proxmox VE development discussion
Subject: Re: [pve-devel] [RFC common 2/2] fix #4501: next unused port: work around issue with too short expiretime

On November 15, 2023 11:16 am, Fiona Ebner wrote:
> On 15.11.23 at 09:51, Fabian Grünbichler wrote:
>> On November 14, 2023 3:13 pm, Fiona Ebner wrote:
>>> On 14.11.23 at 15:02, Fiona Ebner wrote:
>>>> For QEMU migration via TCP, there's a bit of time between port
>>>> reservation and usage, because currently, the port needs to be
>>>> reserved before doing a fork, where the systemd scope needs to be
>>>> set up and swtpm might need to be started before the QEMU binary
>>>> can be invoked and actually use the port.
>>>>
>>>> To improve the situation, get the latest port recorded in the
>>>> reservation file and start trying from the next port, wrapping
>>>> around when hitting the end. This drastically reduces the chance
>>>> of running into a conflict, because after a given port
>>>> reservation, all other ports are tried first before returning to
>>>> that port.
>>>
>>> Sorry, this is not true. It can happen that, in the meantime, a
>>> port for a different range is reserved, and that will remove the
>>> reservation for the port in the migration range if it has expired.
>>> So we'd need to change the code to remove only reservations from
>>> the current range to not lose track of the latest previously used
>>> migration port.
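A minimal sketch of the wrap-around scan described in the patch above
(not the actual PVE::Tools code - can_bind(), the $reserved hash and
the range handling are assumptions for illustration):

    use IO::Socket::IP;

    # assumed helper: check whether the port can actually be bound
    sub can_bind {
        my ($port) = @_;
        my $sock = IO::Socket::IP->new(
            LocalAddr => '127.0.0.1',
            LocalPort => $port,
            Listen    => 1,
        ) or return 0;
        close($sock);
        return 1;
    }

    # scan for a free port starting right after the most recently
    # reserved one ($last, as parsed from the reservation file),
    # wrapping around at the end of the range
    sub next_port_round_robin {
        my ($start, $end, $last, $reserved) = @_;
        my $count = $end - $start + 1;
        my $offset = defined($last) ? ($last - $start + 1) % $count : 0;
        for my $i (0 .. $count - 1) {
            my $port = $start + ($offset + $i) % $count;
            next if $reserved->{$port}; # unexpired reservation on file
            return $port if can_bind($port);
        }
        die "no free port in range $start-$end\n";
    }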
>>
>> the whole thing would also still be racy anyway across processes,
>> right? not sure it's worth the additional effort compared to the
>> other patches then.. if those are not enough (i.e., we still get
>> real-world reports) then the "increase expiry further + explicit
>> release" part could still be implemented as a follow-up..
>>
>
> No, it's not racy. The reserved ports are recorded in a file while
> taking a lock, so each process will see what the others have last
> used.

you are, of course, right - sorry for the noise. I misread the diff
and thought the new variables were local state instead of just helper
variables inside the sub..

> My question is whether the explicit release isn't much more effort
> than the round-robin-style approach here, because it puts the burden
> on the callers, and you need a good way to actually check whether the
> port is now used successfully (without creating new races!) as well
> as a new helper for removing the reservation. (That said, with
> round-robin we would need to remember which range a port was for if
> we ever want to support overlapping ranges.)

yes, we'd need to convert callers to become

  reserve();
  do_thing_that_binds();
  clear_reservation();

possibly with the clearing repeated in the error-handling code path.
clearing the reservation could also just mean setting the expiry a few
seconds into the future, for example, to cover any "binding might
happen with a slight delay in a forked process" type of situation.

> As long as you have competition for early ports, you just need one
> instance where the time between reservation and usage is longer than
> the expiretime, and you're very likely to hit the issue (unless
> another earlier port is free again). With round-robin, you need such
> an instance and, additionally, all(!) other ports must have been
> reserved/used in the meantime.

true. the only way to really fix it would be to make the reservation
actually already do the binding, and pass around the open socket like
Wolfgang suggests. if that works for QEMU, we could at least make that
behaviour opt-in and convert this particular (and most likely to be
problematic) usage?
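A rough sketch of that "reservation already does the binding" idea
(assumed names, not an actual PVE::Tools API; file locking and error
handling elided):

    use IO::Socket::IP;

    # the reservation *is* the bound, listening socket: as long as the
    # caller keeps it open, no other process can grab the port, so no
    # expiry bookkeeping can race with the eventual user
    sub reserve_port_by_binding {
        my ($start, $end) = @_;
        for my $port ($start .. $end) {
            my $sock = IO::Socket::IP->new(
                LocalAddr => '0.0.0.0',
                LocalPort => $port,
                Listen    => 1,
            );
            return ($port, $sock) if $sock;
        }
        die "no free port in range $start-$end\n";
    }

    my ($port, $sock) = reserve_port_by_binding(60000, 60050);
    # ... fork, set up the systemd scope, start swtpm ...
    # then hand the still-open file descriptor to QEMU instead of just
    # the port number (e.g. after clearing FD_CLOEXEC on it), so the
    # window between reservation and use disappears entirely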