From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5fcfc2d8-ec48-48a2-9262-31c3635e09a5@proxmox.com>
Date: Fri, 12 Jan 2024 16:11:02 +0100
From: Friedrich Weber
To: Proxmox VE development discussion, Fabian Grünbichler
In-Reply-To: <1705055166.gmeldwg7ib.astroid@yuna.none>
References: <20240111165826.804669-1-f.weber@proxmox.com> <1705047462.e8upimiin2.astroid@yuna.none> <5970d7e6-ec71-4484-9f59-339f8c1aadcd@proxmox.com> <1705055166.gmeldwg7ib.astroid@yuna.none>
Content-Type: text/plain; charset=UTF-8
Subject: Re: [pve-devel] [PATCH storage] lvm: avoid warning due to human-readable text in vgs output

On 12/01/2024 11:28, Fabian Grünbichler wrote:
>> The vgs message is printed to stdout, so we could do something like
>>
>>     warn $line if !defined($size);
>>
>> ?
>
> yep, that would be an option (warn+return ;))

Right, thanks. Thinking about this some more, printing a user-visible
warning sounds more sensible than suppressing the warning completely
(either by passing `-qq` or ignoring the line, as in the current patch).
I'll send a v2.

>> Another complication I forgot about: For that user, /etc/lvm/archive
>> had 800000 files amounting to >1 GiB, which also slowed down
>> `vgs -o vg_name` considerably (to >10s), presumably because
>> `vgs -o vg_name` read all those files. But unexpectedly, as soon as
>> `-o` included `pv_name`, the command was fast again, presumably
>> because it then skips those reads. So I was considering modifying
>> `sub lvm_vgs` to always include `-o pv_name` in the command (not only
>> if $includepvs is true), but was unsure whether the edge case
>> warranted this (somewhat weird) workaround.
>
> that sounds weird ;)

Yeah, I think I won't implement this for now. If users wonder about the
long status update times of pvestatd, they will look into the journal
and, in v2, see the warnings about the large archive.

>> By the way, the message also causes `vgs` to print invalid JSON:
>>
>> # rm -f /etc/lvm/backup/spam ; vgs -o vg_name -q --reportformat json 2>/dev/null
>> {
>>     "report": [
>>         Consider pruning spam VG archive with more then 8 MiB in 8305 files (check archiving is needed in lvm.conf).
>>             {
>>                 "vg": [
>>                     {"vg_name":"pve"},
>>                     {"vg_name":"spam"},
>>                     {"vg_name":"testvg"}
>>                 ]
>>             }
>>     ]
>> }
>>
>> Dominik suggested that this very much looks like an LVM bug, so I'll
>> check whether it still happens on latest upstream LVM and, if yes,
>> file a bug report there.
>
> yes, that definitely sounds like a bug. potentially they'd also be
> open to switching the log level/target so that it ends up on STDERR at
> least..

done: https://github.com/lvmteam/lvm2/issues/137
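To illustrate the warn-and-skip idea discussed above: the parser expects
machine-readable fields from `vgs`, and any interleaved human-readable
notice should produce a warning instead of a parse error. This is a
Python sketch, not the actual Perl code in pve-storage; the field list,
`:` separator, and byte-unit `B` suffix are assumptions about how the
output is requested:

```python
import sys

def parse_vgs(output):
    """Parse output in the style of
    `vgs --noheadings --separator : --units b -o vg_name,vg_size,vg_free`
    (assumed invocation). Lines that do not match the expected
    name:size:free layout -- such as the human-readable archive-pruning
    notice that vgs prints to stdout -- are warned about and skipped
    rather than aborting the whole parse."""
    vgs = {}
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        parts = line.split(":")
        # a well-formed data line has exactly 3 fields and byte sizes
        # ending in "B"; anything else is treated as a stray notice
        if len(parts) != 3 or not parts[1].endswith("B"):
            print(f"warning: unexpected vgs output: {line}",
                  file=sys.stderr)
            continue
        name, size, free = parts
        vgs[name] = {"size": int(size[:-1]), "free": int(free[:-1])}
    return vgs
```

Feeding it a data line plus the pruning notice returns only the parsed
VG and emits one warning for the notice line.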
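The JSON breakage quoted above can also be demonstrated mechanically:
handing the captured report (with the notice line interleaved) to a
strict JSON parser fails. A small self-contained Python check, with the
quoted `vgs` output pasted in as a literal:

```python
import json

# vgs --reportformat json output with the LVM archive-pruning notice
# interleaved on stdout, as quoted in the thread above
vgs_output = '''{
    "report": [
        Consider pruning spam VG archive with more then 8 MiB in 8305 files (check archiving is needed in lvm.conf).
            {
                "vg": [
                    {"vg_name":"pve"},
                    {"vg_name":"spam"},
                    {"vg_name":"testvg"}
                ]
            }
    ]
}'''

try:
    json.loads(vgs_output)
    print("valid JSON")
except json.JSONDecodeError as err:
    # the bare notice text is not a JSON value, so parsing fails here
    print(f"invalid JSON: {err.msg} (line {err.lineno})")
```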