Date: Fri, 31 Jan 2025 14:51:09 +0100
From: Wolfgang Bumiller
To: Lukas Wagner
Cc: pdm-devel@lists.proxmox.com, Thomas Lamprecht
Subject: Re: [pdm-devel] [PATCH proxmox-datacenter-manager 00/15] change task cache mechanism from time-based to max-size FIFO

On Fri, Jan 31, 2025 at 02:36:07PM +0100, Wolfgang Bumiller wrote:
> On Fri, Jan 31, 2025 at 10:35:03AM +0100, Lukas Wagner wrote:
> > 
> > On 2025-01-28 13:25, Lukas Wagner wrote:
> > > This patch series changes the remote task caching behavior from a purely
> > > time-based cache to a FIFO cache-replacement policy with a maximum number
> > > of cached tasks per remote. If the maximum number is exceeded, the oldest
> > > tasks are dropped from the cache.
> > >
> > > When calling the remote-tasks API, the latest missing task data which is
> > > not yet in the cache is requested from the remotes. At the moment we limit
> > > this to once every 5 minutes, with the option for a force-refresh (to be
> > > triggered by a refresh button in the UI). As before, we augment the cached
> > > task data with the currently running tracked tasks which were started by
> > > PDM.
> > >
> > > Some words about the cache storage implementation:
> > > Note that the storage backend for this cache probably needs some more love
> > > in the future. Right now it's just a single JSON file for everything,
> > > mainly because this was the quickest approach to implement to unblock UI
> > > development work.
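As a rough illustration of the quoted FIFO policy (one bounded queue per
remote, oldest entries dropped first), a minimal sketch could look like the
following; the names (`RemoteTaskCache`, `TaskCacheEntry`,
`MAX_TASKS_PER_REMOTE`) are made up for the example and are not the actual PDM
types:

```rust
use std::collections::{HashMap, VecDeque};

// Assumed upper bound per remote; the real limit is whatever the series picks.
const MAX_TASKS_PER_REMOTE: usize = 1000;

// Hypothetical cached task entry, not the actual PDM type.
struct TaskCacheEntry {
    upid: String,
    starttime: i64,
}

#[derive(Default)]
struct RemoteTaskCache {
    // One FIFO per remote: new tasks are pushed to the back,
    // the oldest ones are popped from the front when the limit is exceeded.
    tasks: HashMap<String, VecDeque<TaskCacheEntry>>,
}

impl RemoteTaskCache {
    fn add_tasks(&mut self, remote: &str, new_tasks: Vec<TaskCacheEntry>) {
        let queue = self.tasks.entry(remote.to_string()).or_default();
        queue.extend(new_tasks);
        // FIFO eviction: drop the oldest tasks once the per-remote maximum is hit.
        while queue.len() > MAX_TASKS_PER_REMOTE {
            queue.pop_front();
        }
    }
}
```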
> > > The problem with this approach is that it does not perform well for
> > > setups with a large number of remotes, since every update to the cache
> > > rewrites the entire cache file when the cache is persisted, causing
> > > additional CPU and IO load.
> > >
> > > In the future, we should use a similar mechanism as the task archive in
> > > PBS. I'm not sure if the exact same mechanism can be used due to some
> > > different requirements, but the general direction probably fits quite
> > > well. If we can reuse it 1:1, we have to break it out of (iirc) the
> > > WorkerTask struct to make it reusable.
> > > It will probably require some experimentation and benchmarking to find an
> > > ideal approach.
> > > We probably don't want a separate archive per remote, since we do not
> > > want to read hundreds/thousands of files when we request an aggregated
> > > remote task history via the API. But having the complete archive in a
> > > single file also seems quite challenging - we need to keep the data
> > > sorted, while we also need to handle task data arriving out of order from
> > > different remotes. Keeping the data sorted when new data arrives leads us
> > > to the same problem as with the JSON file: we have to rewrite the file
> > > over and over again, causing load and writes.
> > >
> > > The good news is that since this is just a cache, we are pretty free to
> > > change the storage implementation without too much trouble; we don't even
> > > have to migrate the old data, since it should not be an issue to simply
> > > request the data from the remotes again. This is the main reason why I've
> > > opted to keep the JSON file for now; I or somebody else can revisit this
> > > at a later time.
> > 
> > Some additional context to explain the 'why', since @Thomas and @Wolfgang
> > requested it:
> > 
> > The status quo for the task cache is to fetch a certain time-based range of
> > tasks (iirc the last seven days) from every remote and cache it for a
> > certain period of time (max-age). If the cached data is too old, we discard
> > the task data and fetch the same time range again.
> > My initial reasoning behind designing it like this was to keep the
> > 'ground truth' completely on the remote side, so *if* somebody were to mess
> > with
> 
> I don't think "messing with the task archive" is something we want to
> worry about unless it's "easy enough".
> 
> > the task archive, we would be consistent after a refresh. Also, this
> > allowed us to keep the caching logic on the PDM side much simpler, since we
> > are not doing much more than caching the API response from the remote - the
> > same way we already do it for resources, subscription status, etc.
> > 
> > The downside is that we had some unnecessary traffic, since we kept
> > fetching old tasks that we had already received.
> > 
> > I originally posted this as an RFC right before the holidays to get an
> > initial approach out and gather some feedback (I wasn't too sure myself
> > whether the original approach was a good idea), and also to somewhat
> > unblock UI development for the remote task view.
> 
> There are definitely some things which need to be improved regardless of
> which version we use.
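Regarding the single-file-archive concern quoted above (keeping the data
sorted while tasks arrive out of order from different remotes): a naive
in-memory version of the aggregation would simply re-sort everything, which is
exactly what becomes expensive once it has to be done on disk for every
update. A small sketch, with `TaskListItem` as a made-up placeholder type:

```rust
// Hypothetical aggregated task entry; not the actual PDM type.
struct TaskListItem {
    remote: String,
    upid: String,
    starttime: i64,
}

/// Merge per-remote task lists into one aggregated history, newest first.
/// Re-sorting the full set on every update mirrors the "rewrite the file
/// over and over" problem described in the quoted mail.
fn aggregate_tasks(per_remote: Vec<Vec<TaskListItem>>) -> Vec<TaskListItem> {
    let mut all: Vec<TaskListItem> = per_remote.into_iter().flatten().collect();
    all.sort_by(|a, b| b.starttime.cmp(&a.starttime));
    all
}
```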
Since this does not fit into the current patches, as it is already an issue in
the original code: I also mentioned this off list, but it is something general
to keep in mind. Whenever we `spawn()` a longer-running task, we also need to
take into account the possibility that the daemons might be reloaded, where
these tasks would prevent the original one from shutting down (and potentially
race against file locks in the reloaded one), so they should `select()` on a
`proxmox_daemon::shutdown_future()`.

For this case, this means the list of tasks which are currently being polled
in futures needs to be persisted somewhere, so the reloaded daemon can pick it
up and continue the polling while the polling task in the old daemon just gets
cancelled. AFAICT this should be fairly unproblematic here.
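A rough sketch of that `select()`-on-shutdown pattern, assuming a tokio-based
runtime and that `proxmox_daemon::shutdown_future()` (as referenced above)
returns a future that resolves once the daemon is asked to shut down or
reload; `poll_remote_tasks_once()` is a hypothetical placeholder for the
actual polling work:

```rust
use std::time::Duration;

async fn poll_remote_tasks_loop() {
    let shutdown = proxmox_daemon::shutdown_future();
    tokio::pin!(shutdown);

    loop {
        tokio::select! {
            // Stop as soon as a shutdown/reload is requested, so this task does not
            // keep the old daemon alive or race the reloaded one for file locks.
            _ = &mut shutdown => break,
            _ = tokio::time::sleep(Duration::from_secs(60)) => {
                // Placeholder for the actual work: poll tracked tasks on the remotes.
                poll_remote_tasks_once().await;
            }
        }
    }
}

async fn poll_remote_tasks_once() {
    // Hypothetical stand-in for the real polling logic.
}
```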