From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-manager master v1 2/2] fix #6652: d/postinst: enable autoactivation for Ceph OSD LVs
Date: Wed, 13 Aug 2025 12:05:25 +0200	[thread overview]
Message-ID: <1755079310.1zhsa6x31t.astroid@yuna.none> (raw)
In-Reply-To: <DC178DJFU5K3.2UVPMMNXGASPJ@proxmox.com>

On August 13, 2025 11:40 am, Max R. Carrara wrote:
> On Wed Aug 13, 2025 at 11:02 AM CEST, Fabian Grünbichler wrote:
>> On August 13, 2025 10:50 am, Max R. Carrara wrote:
>> > On Wed Aug 13, 2025 at 9:52 AM CEST, Fabian Grünbichler wrote:
>> >> On August 12, 2025 6:46 pm, Max R. Carrara wrote:
>> >> > Introduce a new helper command pve-osd-lvm-enable-autoactivation,
>> >> > which gracefully tries to enable autoactivation for all logical
>> >> > volumes used by Ceph OSDs while also activating any LVs that aren't
>> >> > active yet. Afterwards, the helper attempts to bring all OSDs online.
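To make the discussion concrete, a rough sketch of what such a helper could boil down to (the command choices here are my assumptions for illustration, not taken from the actual patch):

```shell
# Hypothetical sketch, NOT the actual pve-osd-lvm-enable-autoactivation helper.
# Enumerate the LVs backing Ceph OSDs, enable autoactivation on each,
# activate any that are inactive, then try to bring all OSDs up.
for lv in $(ceph-volume lvm list --format json | jq -r '.[][].lv_path'); do
    lvchange --setautoactivation y "$lv"   # persist the autoactivation flag
    lvchange --activate y "$lv"            # activate the LV right away
done
ceph-volume lvm activate --all             # start any OSDs that are down
```

These commands need a live Ceph node with LVM-backed OSDs; on an unaffected system the lvchange calls are effectively no-ops.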
>> >>
>> >> I think this is probably overkill - this only affects a specific
>> >> non-standard setup, the breakage is really obvious, and the fix is easy:
>> >> either run lvchange on all those LVs, or recreate the OSDs after the fix
>> >> for creation is rolled out..
>> >>
>> >> i.e., the fallout from some edge cases not being handled correctly in
>> >> the 200 line helper script here is probably worse than the few setups
>> >> that run into the original issue that we can easily help along
>> >> manually..
>> > 
>> > I mean, this script doesn't really do much, and the LVs themselves are
>> > fetched via `ceph-volume` ... But then again, you're probably right that
>> > it might just break somebody else's arcane setup somewhere.
>> > 
>> > As an alternative, I wouldn't mind writing something for the release
>> > notes' known issues section (or some other place). Assuming a standard
>> > setup*, everything the user would have to do is identical to what the
>> > script does, so nothing too complicated.
>>
>> but the known issue will be gone, except for the handful of users that
>> ran into it before the fix was rolled out.. this is not something you
>> noticed 5 months later?
> 
> Are you sure, though? There might very well be setups out there that are
> only rebooted every couple of months or so; not everybody is as diligent
> with their maintenance, unfortunately. We don't really know how common /
> rare it is to set up a DB / WAL disk for OSDs.

I am not opposed to adding it to the known issues, I just think it's not
a very common issue to run into in the first place.

>> > (*OSDs with WAL + DB on disks / partitions without anything else in between)
>>
>> I am not worried about the script breaking things, but about it printing
>> spurious errors/warnings for unaffected setups.
> 
> Well, unaffected here would mean that all OSD LVs have autoactivation
> enabled (and are also activated). Additionally, if the OSDs are already
> up, `ceph-volume` doesn't do anything either.

no, unaffected here means 99% of the systems. and as soon as you run
commands on any system, there will be a small percentage where those
commands fail in noisy fashion for whatever reason. so the tradeoff
needs to be worth it - if the issue is big enough and the commands are
not very involved, then yes, fixing it in postinst like this is a good
idea.
for an issue that will affect almost nobody, and that requires running
commands with lots of storage interaction(!), that tradeoff looks rather
different..

> FWIW we could suppress LVM "No medium found" warnings in the LVM
> commands the script runs, like we do in pve-storage [0]. Additionally,
> we could also short-circuit and silently exit early if no changes are
> necessary; e.g. if autoactivation is enabled for all OSD LVs, we can
> assume that they're activated as well, and that the OSDs themselves are
> therefore running, too.
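The suppression Max mentions could, for a shell-based helper, look roughly like the stderr filtering pve-storage does in Perl [0]. A sketch under that assumption; the function name is made up:

```shell
# Sketch only: filter the known-harmless LVM warning out of a command's
# stderr, similar in spirit to the Perl-side filtering in pve-storage's
# LVMPlugin.pm. The helper name is hypothetical.
filter_lvm_warnings() {
    grep -v 'No medium found' || true   # don't fail if every line is filtered
}
# usage (bash): some_lvm_command 2> >(filter_lvm_warnings >&2)
```

Real errors still reach stderr; only the spurious "No medium found" lines (e.g. from empty card readers) are dropped.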
> 
> So, I would really prefer having either (an improved version of) the
> script, or some documentation regarding this *somewhere*, just to not
> leave any users in the dark.
> 
> [0]: https://git.proxmox.com/?p=pve-storage.git;a=blob;f=src/PVE/Storage/LVMPlugin.pm;h=0416c9e02a1d8255c940d2cd9f5e0111b784fe7c;hb=refs/heads/master#l21
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

Thread overview: 12+ messages
2025-08-12 16:46 [pve-devel] [PATCH pve-manager master v1 0/2] Fix #6652: LVM Autoactivation Missing " Max R. Carrara
2025-08-12 16:46 ` [pve-devel] [PATCH pve-manager master v1 1/2] fix #6652: ceph: osd: enable autoactivation for OSD LVs on creation Max R. Carrara
2025-08-13  7:48   ` Fabian Grünbichler
2025-08-13  8:28     ` Max R. Carrara
2025-08-12 16:46 ` [pve-devel] [PATCH pve-manager master v1 2/2] fix #6652: d/postinst: enable autoactivation for Ceph OSD LVs Max R. Carrara
2025-08-13  7:52   ` Fabian Grünbichler
2025-08-13  8:50     ` Max R. Carrara
2025-08-13  9:01       ` Friedrich Weber
2025-08-13  9:02       ` Fabian Grünbichler
2025-08-13  9:40         ` Max R. Carrara
2025-08-13 10:05           ` Fabian Grünbichler [this message]
2025-08-13 13:43 ` [pve-devel] superseded: [PATCH pve-manager master v1 0/2] Fix #6652: LVM Autoactivation Missing " Max R. Carrara
