On Fri Mar 20, 2026 at 5:28 AM CET, Song Hu wrote:
> Need access to `bin/pve-osd-lvm-enable-autoactivation` (Bug #6652)
>
> Hi Proxmox team,
>
> I’m running into the issue described in bug #6652, “LVM Autoactivation
> Missing for Ceph OSD LVs”, and I need the helper script
> `bin/pve-osd-lvm-enable-autoactivation` in order to enable autoactivation
> for existing OSD logical volumes.
>
> I’m using:
> - Proxmox VE: 8.x (based on Debian bookworm)
> - pve-manager package: `pve-manager/8.x.x` (you can fill in the exact
>   version)
> - Kernel: `6.8.xx-pve` (adjust if needed)
>
> I’ve read the fix series for #6652:
> - v1: https://lore.proxmox.com/pve-devel/20250812164631.428424-1-m.carrara@proxmox.com/T/
> - v2: https://lore.proxmox.com/pve-devel/20250813134028.292213-1-m.carrara@proxmox.com/T/
>
> From the “applied: (subset)” reply, I understand that only the first patch
> (changing `PVE/API2/Ceph/OSD.pm`) has been merged so far:
>
> > Applied the first patch, thanks!
> >
> > [1/2] fix #6652: ceph: osd: enable autoactivation for OSD LVs on creation
> >       commit: 92bbc0c89fe7331ab122ff396f5e23ab31fa0765
>
> The second patch, which introduces `bin/pve-osd-lvm-enable-autoactivation`
> and adjusts `debian/postinst`, has not been applied yet.
>
> I’ve checked the `pve-manager.git` repository:
> - https://git.proxmox.com/?p=pve-manager.git;a=tree;hb=refs/heads/master
> - https://git.proxmox.com/?p=pve-manager.git;a=tree;hb=refs/heads/stable-8
>
> and I can confirm that `bin/pve-osd-lvm-enable-autoactivation` is not
> present in either the `master` or `stable-8` branch. The `bin/` directory
> contains many other helpers (e.g. `pve-lvm-disable-autoactivation`,
> `pve-init-ceph-crash`, etc.), but not this one.
>
> Since I’m affected by #6652 on nodes that already have OSDs with missing
> autoactivation, I would like to run this helper script instead of manually
> invoking `lvchange --setautoactivation y` on each LV.

Hello!
Yeah, we have been on the fence regarding the
`pve-osd-lvm-enable-autoactivation` helper, because it touches a lot of
things during the "postinst" phase of Debian package installations and
upgrades. Since the script is rather invasive to just run in postinst, and
since not many users are affected (you're the first to show up, in fact),
we decided not to merge it, AFAIK.

To answer your questions:

> Could you please clarify:
> 1. Is there an official recommended way to obtain the
> `pve-osd-lvm-enable-autoactivation` script for existing deployments while
> the second patch is still pending?

There currently is not.

> 2. If not, would it be possible to provide a standalone copy of the script
> (e.g. as a downloadable blob or via a tagged commit) that users can safely
> use on production clusters?

I've attached a slightly improved version to this mail -- it should
hopefully not be filtered out. This isn't really the same as officially
shipping it with PVE, but I hope that it helps.

NOTE: You still have to reboot the node after running the script.

I'm curious, how many hosts are affected on your end?

There shouldn't be any issues anymore if you update to the latest version
of Proxmox VE right after installation -- newly created OSDs should then
have autoactivation enabled, just as expected.

> 3. Is there a timeline or additional work required before the second patch
> can be merged? I’m happy to help with testing or providing feedback on the
> behavior in a real-world setup.

The help is much appreciated, but given that you're the first person to
have run into this apart from the handful of people on our Bugzilla, we
probably won't package it. Should more people than expected be affected by
this for some reason, we might reconsider, though.
> In the meantime, I’m manually working around the issue by:
>
> - Listing OSD LVs with:
>   `lvs --options lv_name,vg_name,autoactivation,active`
> - For LVs named `osd-db-*` or `osd-wal-*` in VGs named `ceph-*` where
>   “AutoAct” is empty, running:
>   `lvchange --setautoactivation y <vg>/<lv>`
> - Then reactivating OSDs as described in the v2 cover letter.
>
> Any guidance you can provide would be greatly appreciated.
>
> Thanks,
> buladou

Glad that the workaround has helped so far, at least! Let me know if the
script gets the job done, too.

- Max
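For reference, the manual workaround quoted above can be sketched as a small
POSIX shell loop. This is my own sketch, not the attached helper script: it
assumes a reasonably recent LVM (2.03.12 or later, for the `autoactivation`
report field and the `--setautoactivation` flag), and the `DRY_RUN` toggle is
an addition of mine for previewing the changes before applying them.

```shell
#!/bin/sh
# Sketch: enable LVM autoactivation on Ceph OSD LVs where it is missing.
# Assumes LVM >= 2.03.12; run as root, and reboot the node afterwards.
set -eu

# DRY_RUN=1 only prints the lvchange commands instead of running them.
: "${DRY_RUN:=0}"

enable_autoactivation() {
    # Reads "lv_name vg_name autoactivation" triples, one per line, on
    # stdin; the third field is "enabled" or empty when autoactivation
    # is off.
    while read -r lv vg autoact; do
        case "$vg" in ceph-*) ;; *) continue ;; esac
        case "$lv" in osd-db-*|osd-wal-*) ;; *) continue ;; esac
        [ "$autoact" = "enabled" ] && continue
        if [ "$DRY_RUN" = 1 ]; then
            echo "lvchange --setautoactivation y $vg/$lv"
        else
            lvchange --setautoactivation y "$vg/$lv"
        fi
    done
}

# Only query LVM when the tools are actually present on this host.
if command -v lvs >/dev/null 2>&1; then
    lvs --noheadings --options lv_name,vg_name,autoactivation \
        | enable_autoactivation
fi
```

Running it once with `DRY_RUN=1` first shows exactly which LVs would be
touched; afterwards a reboot is still needed, as noted above.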