From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Friedrich Weber <f.weber@proxmox.com>,
Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH storage/zfsonlinux v2 0/3] fix #4997: lvm: avoid autoactivating (new) LVs after boot
Date: Tue, 11 Mar 2025 11:40:59 +0100
Message-ID: <1741689384.85eclftgb6.astroid@yuna.none>
In-Reply-To: <f929fe14-45ad-4cf9-a782-8bf4d95b249e@proxmox.com>
On March 10, 2025 3:01 pm, Friedrich Weber wrote:
> On 07/03/2025 13:14, Fabian Grünbichler wrote:
>>> # LVM-thick/LVM-thin
>>>
>>> Note that this change affects all LVs on LVM-thick, not just ones on shared
>>> storage. As a result, even on single-node hosts, local guest disk LVs on
>>> LVM-thick will no longer be automatically active after boot (after applying
>>> all patches of this series). Guest disk LVs on LVM-thin will still be
>>> auto-activated, but since LVM-thin storage is necessarily local, we don't run
>>> into #4997 here.
>>
>> we could check the shared property, but I don't think having them not
>> auto-activated hurts as long as it is documented..
>
> This is referring to LVs on *local* LVM-*thick* storage, right? In that
> case, I'd agree that not having them autoactivated either is okay
> (cleaner even).
yes
> The patch series currently doesn't touch the LvmThinPlugin at all, so
> all LVM-*thin* LVs will still be auto-activated at boot. We could also
> patch LvmThinPlugin to create new thin LVs with `--setautoactivation n`
> -- though it wouldn't give us much, except consistency with LVM-thick.
well, if you have many volumes, not activating them automatically might
also save some time on boot ;) but yeah, it shouldn't cause any issues.
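just for illustration, creating an LV with the flag already cleared boils
down to passing `--setautoactivation n` at creation time, e.g. (VG/pool/LV
names invented here, and assuming an lvm2 that already knows the flag):

    # thick guest disk, as the series does for newly created LVs
    lvcreate --setautoactivation n -L 32G -n vm-100-disk-0 sharedvg
    # thin counterpart, if LvmThinPlugin ever gets the same treatment
    lvcreate --setautoactivation n -V 32G --thinpool data -n vm-100-disk-0 pve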
>>> # Transition to LVs with `--setautoactivation n`
>>>
>>> Both v1 and v2 approaches only take effect for new LVs, so we should probably
>>> also have pve8to9 check for guest disks on (shared?) LVM that have
>>> autoactivation enabled, and suggest to the user to manually disable
>>> autoactivation on the LVs, or even the entire VG if it holds only PVE-managed
>>> LVs.
>>
>> if we want to wait for PVE 9 anyway to start enabling (disabling? ;)) it, then
>> the upgrade script would be a nice place to tell users to fix up their volumes?
>
> The upgrade script being pve8to9, right? I'm just not sure yet what to
> suggest: `lvchange --setautoactivation n` on each LV, or simply
> `vgchange --setautoactivation n` on the whole shared VG (provided it
> only contains PVE-managed LVs).
yes.
>
>> OTOH, setting the flag automatically starting with PVE 9 also for existing
>> volumes should have no downsides, [...]
>
> Hmm, but how would we do that automatically?
e.g., once on upgrading in postinst, or in activate_storage if we find a
cheap way to skip doing it over and over ;)
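a very rough sketch of such a one-time fix-up (purely illustrative: it
assumes guest disks can be matched by the usual vm-/base-<vmid>-disk-<n>
naming, and that the reporting field is called `autoactivation`, see
`lvs -o help`):

    # disable autoactivation on all guest-disk LVs that still have it enabled
    lvs --noheadings -o vg_name,lv_name,autoactivation |
    while read -r vg lv flag; do
        case "$lv" in
            vm-*-disk-*|base-*-disk-*)
                [ "$flag" = "enabled" ] && lvchange --setautoactivation n "$vg/$lv"
                ;;
        esac
    done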
>
>> we need to document anyway that the behaviour
>> there changed (so that people that rely on them becoming auto-activated on boot
>> can adapt whatever is relying on that).. or we could provide a script that does
>> it post-upgrade..
>
> Yes, an extra script to run after the upgrade might be an option. Though
> we'd also need to decide whether to disable autoactivation on each
> individual LV, or on the complete VG (then we'd just assume that there
> are no other non-PVE-managed LVs in the VG that the user wants
> autoactivated).
I think doing it on each volume managed by PVE is the safer and more
consistent option..
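spelled out, the two variants would be something like (names invented):

    # per LV: only touches volumes PVE actually manages
    lvchange --setautoactivation n sharedvg/vm-100-disk-0
    # per VG: simpler, but assumes the VG contains only PVE-managed LVs
    vgchange --setautoactivation n sharedvg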
>>> We could implement something on top to make the transition smoother, some ideas:
>>>
>>> - When activating existing LVs, check the auto activation flag, and if auto
>>> activation is still enabled, disable it.
>>
>> the only question is whether we want to "pay" for that on each activate_volume?
>
> Good question. It does seem a little extreme, also considering that once
> all existing LVs have autoactivation disabled, all new LVs will have the
> flag disabled as well and the check becomes obsolete.
>
> It just occurred to me that we could also pass `--setautoactivation n`
> to `lvchange -ay` in `activate_volume`, but a test shows that this
> triggers a metadata update on *each activate_volume*, which sounds like
> a bad idea.
yeah that doesn't sound too good ;)
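for completeness, the "check first, only change if still enabled" idea from
further up would at least avoid the metadata write for already-converted
LVs, roughly like this on the CLI (names invented, again assuming the
`autoactivation` reporting field exists):

    # read-only query, no metadata update
    if [ "$(lvs --noheadings -o autoactivation sharedvg/vm-100-disk-0 | tr -d ' ')" = "enabled" ]; then
        lvchange --setautoactivation n sharedvg/vm-100-disk-0
    fi

the lvs call is read-only, so a metadata update would only happen for LVs
that still have the flag set.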
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel