From: Friedrich Weber <f.weber@proxmox.com>
To: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>,
	"Proxmox VE development discussion" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH storage/zfsonlinux v2 0/3] fix #4997: lvm: avoid autoactivating (new) LVs after boot
Date: Wed, 12 Mar 2025 09:39:39 +0100
Message-ID: <389aaf02-b057-4655-8038-5e79ae53bb75@proxmox.com>
In-Reply-To: <1741689384.85eclftgb6.astroid@yuna.none>

On 11/03/2025 11:40, Fabian Grünbichler wrote:
> On March 10, 2025 3:01 pm, Friedrich Weber wrote:
>> On 07/03/2025 13:14, Fabian Grünbichler wrote:
>>>> # LVM-thick/LVM-thin
>>>>
>>>> Note that this change affects all LVs on LVM-thick, not just ones on shared
>>>> storage. As a result, also on single-node hosts, local guest disk LVs on
>>>> LVM-thick will not be automatically active after boot anymore (after applying
>>>> all patches of this series). Guest disk LVs on LVM-thin will still be
>>>> auto-activated, but since LVM-thin storage is necessarily local, we don't run
>>>> into #4997 here.
>>>
>>> we could check the shared property, but I don't think having them not
>>> auto-activated hurts as long as it is documented..
>>
>> This is referring to LVs on *local* LVM-*thick* storage, right? In that
>> case, I'd agree that not having them autoactivated either is okay
>> (cleaner even).
> 
> yes
> 
>> The patch series currently doesn't touch the LvmThinPlugin at all, so
>> all LVM-*thin* LVs will still be auto-activated at boot. We could also
>> patch LvmThinPlugin to create new thin LVs with `--setautoactivation n`
>> -- though it wouldn't give us much, except consistency with LVM-thick.
> 
> well, if you have many volumes, not activating them automatically might
> also save some time on boot ;) but yeah, shouldn't cause any issues.

Yeah, then I'll limit this change to the LVMPlugin and keep the
LvmThinPlugin unchanged.

>>>> # Transition to LVs with `--setautoactivation n`
>>>>
>>>> Both v1 and v2 approaches only take effect for new LVs, so we should probably
>>>> also have pve8to9 check for guest disks on (shared?) LVM that have
>>>> autoactivation enabled, and suggest to the user to manually disable
>>>> autoactivation on the LVs, or even the entire VG if it holds only PVE-managed
>>>> LVs.
>>>
>>> if we want to wait for PVE 9 anyway to start enabling (disabling? ;)) it, then
>>> the upgrade script would be a nice place to tell users to fix up their volumes?
>>
>> The upgrade script being pve8to9, right? I'm just not sure yet what to
>> suggest: `lvchange --setautoactivation n` on each LV, or simply
>> `vgchange --setautoactivation n` on the whole shared VG (provided it
>> only contains PVE-managed LVs).
> 
> yes.
> 
>>
>>> OTOH, setting the flag automatically starting with PVE 9 also for existing
>>> volumes should have no downsides, [...]
>>
>> Hmm, but how would we do that automatically?
> 
> e.g., once on upgrading in postinst, or in activate_storage if we find a
> cheap way to skip doing it over and over ;)
> 
>>
>>> we need to document anyway that the behaviour
>>> there changed (so that people that rely on them becoming auto-activated on boot
>>> can adapt whatever is relying on that).. or we could provide a script that does
>>> it post-upgrade..
>>
>> Yes, an extra script to run after the upgrade might be an option. Though
>> we'd also need to decide whether to deactivate on each individual LV, or
>> the complete VG (then we'd just assume that there are no other
>> non-PVE-managed LVs in the VG that the user wants autoactivated).
> 
> I think doing it on each volume managed by PVE is the safer and more
> consistent option..

Sounds good!

I discussed the next steps with Fabian off-list; the summary:

- I'll wait a few days to see whether zfsonlinux patch 1 gets applied
upstream so I can send a proper cherry-pick. We can apply that one
before PVE 9, as it shouldn't break any reasonable setup (= any setup
that doesn't put ZFS on an LVM LV with autoactivation disabled), and it
allows the manual `vgchange --setautoactivation n` workaround (see the
sketch at the end of this mail) to be used for FC/SAS LUNs already
before PVE 9.

- we keep something like patches 2+3, which pass `--setautoactivation n`
to lvcreate in LVMPlugin's `alloc_image` (see the example at the end of
this mail). This can only be applied for PVE 9 (see "Mixed 7/8 cluster"
in the cover letter). It solves the auto-activation problem for new
volumes.

- to solve the auto-activation problem for existing volumes, we

  - ship a script (a separate script or even a subcommand of pve8to9?)
that disables autoactivation for all PVE-managed LVs on shared VGs,
after asking for user confirmation (rough sketch at the end of this
mail). The user has to run this script manually. The script probably
needs to take care to cluster-lock the storage, as disabling
autoactivation triggers a metadata update.

  - add a check to pve8to9 that detects PVE-managed LVs with
autoactivation enabled (on shared VGs) and suggests running the script
from the previous point.
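
For reference, the manual workaround from the first point would look
roughly like this. This is only a hedged sketch: `vg_shared` is a
placeholder for the shared VG name, and doing it per-VG is only safe if
the VG contains nothing but PVE-managed LVs:

    # disable autoactivation for all LVs in the shared VG
    # (requires an lvm2 version that knows --setautoactivation)
    vgchange --setautoactivation n vg_shared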

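Roughly what patches 2+3 amount to on the command line when LVMPlugin
allocates a new guest disk (hedged example; VG name, LV name and size
are placeholders, not the exact command line the plugin builds):

    # create a new guest disk LV with autoactivation disabled
    lvcreate --setautoactivation n -n vm-100-disk-0 -L 32G vg_shared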

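Finally, a rough sketch of what the migration script for existing
volumes could do. It leaves out the cluster locking and the user
confirmation, and it assumes the usual vm-/base- naming pattern is
enough to identify PVE-managed LVs, so it is an illustration rather
than the actual pve8to9 helper:

    #!/bin/sh
    # sketch: disable autoactivation on PVE-managed LVs in the given VG
    VG="$1"

    lvs --noheadings -o lv_name "$VG" | while read -r lv; do
        case "$lv" in
            vm-*-disk-*|base-*-disk-*)
                echo "disabling autoactivation on $VG/$lv"
                lvchange --setautoactivation n "$VG/$lv"
                ;;
        esac
    done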



