From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <f.weber@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id EE3B692075
 for <pve-devel@lists.proxmox.com>; Wed, 31 Jan 2024 16:07:52 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id CF7513D0B8
 for <pve-devel@lists.proxmox.com>; Wed, 31 Jan 2024 16:07:52 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-devel@lists.proxmox.com>; Wed, 31 Jan 2024 16:07:51 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 246694543C
 for <pve-devel@lists.proxmox.com>; Wed, 31 Jan 2024 16:07:51 +0100 (CET)
Message-ID: <80c410ca-3615-45cc-9801-f1e2d14cab78@proxmox.com>
Date: Wed, 31 Jan 2024 16:07:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: Fiona Ebner <f.ebner@proxmox.com>,
 Proxmox VE development discussion <pve-devel@lists.proxmox.com>
References: <20240111150332.733635-1-f.weber@proxmox.com>
 <c447ba98-649a-40cd-809e-4e120faa1b72@proxmox.com>
From: Friedrich Weber <f.weber@proxmox.com>
In-Reply-To: <c447ba98-649a-40cd-809e-4e120faa1b72@proxmox.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL -0.087 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 T_SCC_BODY_TEXT_LINE    -0.01 -
 URIBL_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to URIBL was blocked. See
 http://wiki.apache.org/spamassassin/DnsBlocklists#dnsbl-block for more
 information. [proxmox.com]
Subject: Re: [pve-devel] [PATCH storage 0/2] fix #4997: lvm: avoid
 autoactivating (new) LVs after boot
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Wed, 31 Jan 2024 15:07:53 -0000

Thanks for the review!

On 26/01/2024 12:14, Fiona Ebner wrote:
>> Some points to discuss:
>>
>> * Fabian and I discussed whether it may be better to pass `-K` and set the
>>   "activation skip" flag only for LVs on a *shared* LVM storage. But this may
>>   cause issues for users that incorrectly mark an LVM storage as shared, create a
>>   bunch of LVs (with "activation skip" flag), then unset the "shared" flag, and
>>   won't be able to activate LVs afterwards (`lvchange -ay` without `-K` on an LV
>>   with "activation skip" is a noop). What do you think?
>>
> 
> Is there a way to prevent auto-activation on boot for LVs on a shared
> (PVE-managed) LVM storage? Also a breaking change, because users might
> have other LVs on the same storage, but would avoid the need for the
> flag. Not against the current approach, just wondering.

One can also disable autoactivation for a whole VG (i.e., all LVs of
that VG):

	vgchange --setautoactivation n VG

At least in my tests, after setting this, no LV in that VG is active
after boot, so this might also solve the problem. I suppose setting this
automatically for existing VGs would be too dangerous (as users might
have other LVs in those VGs). But our LVM storage plugin *could* set this
when creating a new shared VG [1]?
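
For illustration, a rough sketch of what the plugin could do when
creating a new shared VG (VG name and PV are placeholders;
`vgcreate --setautoactivation` needs a reasonably recent lvm2, on older
versions one would create the VG first and run the vgchange afterwards):

	# hypothetical: create the VG with autoactivation disabled right away
	vgcreate --setautoactivation n <vgname> /dev/<pv>
	# or, for a VG that already exists:
	vgchange --setautoactivation n <vgname>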

Not sure which option is better, though.

> Guardrails against issues caused by misconfiguration always warrant a
> cost-benefits analysis. What is the cost for also setting the flag for
> LVs on non-shared LVM storages? Our logic needs to be correct either way ;)

AFAICT, setting this LV flag on non-shared LVM storages doesn't have
negative side effects. I don't think we rely on autoactivation anywhere.
We'd need to take care of passing `-K` for all our `lvchange -ay` calls,
but AFAICT, `lvchange` is only called from the LVM/LvmThin plugins in
pve-storage.
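
For reference, a minimal sketch of the LV-side flow described above
(names are placeholders, the size is made up):

	# create the LV with the "activation skip" flag set
	lvcreate --setactivationskip y -L 32G -n <lvname> <vgname>
	# plain activation is skipped because of the flag (effectively a noop) ...
	lvchange -ay <vgname>/<lvname>
	# ... so our activation code would have to pass -K/--ignoreactivationskip
	lvchange -ay -K <vgname>/<lvname>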

[1]
https://git.proxmox.com/?p=pve-storage.git;a=blob;f=src/PVE/Storage/LVMPlugin.pm;h=4b951e7a;hb=8289057e#l94