From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <pve-devel-bounces@lists.proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
	by lore.proxmox.com (Postfix) with ESMTPS id F25461FF176
	for <inbox@lore.proxmox.com>; Fri,  7 Feb 2025 14:13:26 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
	by firstgate.proxmox.com (Proxmox) with ESMTP id 4F523B6D0;
	Fri,  7 Feb 2025 14:13:24 +0100 (CET)
Message-ID: <8528719f-58fd-4346-965d-5351c165f623@proxmox.com>
Date: Fri, 7 Feb 2025 14:12:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
To: Fiona Ebner <f.ebner@proxmox.com>,
 Proxmox VE development discussion <pve-devel@lists.proxmox.com>
References: <20240111150332.733635-1-f.weber@proxmox.com>
 <c447ba98-649a-40cd-809e-4e120faa1b72@proxmox.com>
 <80c410ca-3615-45cc-9801-f1e2d14cab78@proxmox.com>
 <265ef5c0-601a-41fc-81e8-ca7f908094b0@proxmox.com>
Content-Language: en-US
From: Friedrich Weber <f.weber@proxmox.com>
In-Reply-To: <265ef5c0-601a-41fc-81e8-ca7f908094b0@proxmox.com>
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.002 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 RCVD_IN_VALIDITY_CERTIFIED_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to
 Validity was blocked. See
 https://knowledge.validity.com/hc/en-us/articles/20961730681243 for more
 information.
 RCVD_IN_VALIDITY_RPBL_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to
 Validity was blocked. See
 https://knowledge.validity.com/hc/en-us/articles/20961730681243 for more
 information.
 RCVD_IN_VALIDITY_SAFE_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to
 Validity was blocked. See
 https://knowledge.validity.com/hc/en-us/articles/20961730681243 for more
 information.
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pve-devel] [PATCH storage 0/2] fix #4997: lvm: avoid
 autoactivating (new) LVs after boot
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
Reply-To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: pve-devel-bounces@lists.proxmox.com
Sender: "pve-devel" <pve-devel-bounces@lists.proxmox.com>

On 01/02/2024 09:26, Fiona Ebner wrote:
> Am 31.01.24 um 16:07 schrieb Friedrich Weber:
>> Thanks for the review!
>>
>> On 26/01/2024 12:14, Fiona Ebner wrote:
>>>> Some points to discuss:
>>>>
>>>> * Fabian and I discussed whether it may be better to pass `-K` and set the
>>>>   "activation skip" flag only for LVs on a *shared* LVM storage. But this may
>>>>   cause issues for users who incorrectly mark an LVM storage as shared, create a
>>>>   bunch of LVs (with the "activation skip" flag), then unset the "shared" flag,
>>>>   and then cannot activate those LVs anymore (`lvchange -ay` without `-K` on an
>>>>   LV with "activation skip" is a no-op). What do you think?
>>>>
>>>
>>> Is there a way to prevent auto-activation on boot for LVs on a shared
>>> (PVE-managed) LVM storage? Also a breaking change, because users might
>>> have other LVs on the same storage, but would avoid the need for the
>>> flag. Not against the current approach, just wondering.
>>
>> One can also disable autoactivation for a whole VG (i.e., all LVs of
>> that VG):
>>
>> 	vgchange --setautoactivation n VG
>>
>> At least in my tests, after setting this no LV in that VG is active
>> after boot, so this might also solve the problem. I suppose setting this
>> automatically for existing VGs would be too dangerous (as users might
>> have other LVs in those VGs). But our LVM storage plugin *could* set this
>> when creating a new shared VG [1]?
>>
>> Not sure which option is better, though.
>>
> 
> Do you still need the -K flag to activate volumes in such a VG? If so,
> nothing is gained compared to the more fine-grained "setting it on
> individual LVs". If not, we could save ourselves that. OTOH, users might
> want to use existing shared VGs and then they would need to apply this
> setting themselves.

Just looked into this again: No, if I set `--setautoactivation n` for a
VG, I don't need to pass -K to activate LVs within that VG. I think the
`--setautoactivation n` flag for VGs/LVs only affects autoactivation
(e.g. `vgchange -aay`, as done by udev, or `lvchange -aay`), not manual
activation (e.g. `vgchange -ay`/`lvchange -ay`).
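
For illustration, this is roughly how the two variants behave on a scratch
VG (a sketch only: the loop device and VG/LV names are placeholders,
`--setautoactivation` needs a recent LVM (2.03.12+), and the commands need
root):

	# throwaway PV/VG for testing, not for production
	truncate -s 1G /tmp/pv.img
	losetup /dev/loop0 /tmp/pv.img
	vgcreate testvg /dev/loop0

	# variant 1: disable autoactivation for the whole VG
	vgchange --setautoactivation n testvg
	lvcreate -an -n lv1 -L 100M testvg
	vgchange -aay testvg           # autoactivation: lv1 stays inactive
	lvchange -ay testvg/lv1        # manual activation works without -K

	# variant 2: set the "activation skip" flag on a single LV
	lvcreate -an -ky -n lv2 -L 100M testvg
	lvchange -ay testvg/lv2        # no-op: the skip flag blocks plain -ay
	lvchange -ay -K testvg/lv2     # -K ignores the skip flag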

> 
>>> Guardrails against issues caused by misconfiguration always warrant a
>>> cost-benefits analysis. What is the cost for also setting the flag for
>>> LVs on non-shared LVM storages? Our logic needs to be correct either way ;)
>>
>> AFAICT, setting this LV flag on non-shared LVM storages doesn't have
>> negative side-effects. I don't think we rely on autoactivation anywhere.
>> We'd need to take care of passing `-K` for all our `lvchange -ay` calls,
>> but AFAICT, `lvchange` calls are only done in the LVM/LvmThin plugins in
>> pve-storage.
>>
> 
> We need to pass -K for activations no matter whether we set the
> activation-skip flag only for shared storages or for all of them.
> That's not an additional cost.



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel