Date: Mon, 22 Sep 2025 19:26:12 +0200
From: Thomas Lamprecht
To: Proxmox VE development discussion, Fiona Ebner
Subject: Re: [pve-devel] [PATCH qemu-server] fix #6207: vm status: cache last disk read/write values
In-Reply-To: <20250922101749.34397-1-f.ebner@proxmox.com>

On 22.09.25 at 12:18, Fiona Ebner wrote:
> If disk read/write cannot be queried because of QMP timeout, they
> should not be reported as 0, but the last value should be re-used.
> Otherwise, the difference between that reported 0 and the next value,
> when the stats are queried successfully, will show up as a huge spike
> in the RRD graphs.

Fine with the idea in general, but this is effectively relevant only for
pvestatd, isn't it? As of now we would also cache in the API daemon,
without ever using this. It might not be _that_ much, so not really a
problem of the amount, but it feels a bit wrong to me w.r.t. "code place".

Does pvestatd have the necessary info, directly or indirectly through the
existence of some other vmstatus properties, to derive when it can safely
reuse the previous value? Or maybe we could make this caching opt-in
through some module flag that only pvestatd sets? But I have not really
thought that through, so please take this with a grain of salt.

btw. what about QMP being "stuck" for a prolonged time, should we stop
using the previous value after a few times (or after some duration)?

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
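The staleness concern raised above (reusing the cached value only a limited number of times, or only for a limited duration) could be sketched roughly like this. This is an illustrative Python sketch only, not the actual qemu-server Perl code; the class and method names, and the default limits, are all hypothetical:

```python
import time

class LastValueCache:
    """Cache the last successfully queried stats per VM, but stop
    reusing a value once it becomes too stale (by age or reuse count)."""

    def __init__(self, max_age=60.0, max_reuses=3):
        self.max_age = max_age        # seconds a cached value stays reusable
        self.max_reuses = max_reuses  # how often a cached value may be reused
        self._entries = {}            # vmid -> (value, timestamp, reuse count)

    def record(self, vmid, value):
        # A successful query stores the fresh value and resets the limits.
        self._entries[vmid] = (value, time.monotonic(), 0)

    def fallback(self, vmid):
        # Called when the query (e.g. QMP) timed out: return the cached
        # value while it is still fresh enough, else None (caller then
        # has to report 0 or a missing value, as before the patch).
        entry = self._entries.get(vmid)
        if entry is None:
            return None
        value, ts, reuses = entry
        if reuses >= self.max_reuses or time.monotonic() - ts > self.max_age:
            del self._entries[vmid]
            return None
        self._entries[vmid] = (value, ts, reuses + 1)
        return value
```

Under this sketch, pvestatd would call record() after every successful stats query and fallback() on a QMP timeout, while the API daemon would simply not instantiate the cache at all, which would also address the "code place" concern about caching in a daemon that never uses the values.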