public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs
@ 2025-07-07  8:03 Friedrich Weber
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-storage v4 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes Friedrich Weber
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: Friedrich Weber @ 2025-07-07  8:03 UTC (permalink / raw)
  To: pve-devel

# Summary

With default settings, LVM autoactivates LVs when it sees a new VG, e.g. after
boot or iSCSI login. In a cluster with guest disks on a shared LVM VG (e.g. on
top of iSCSI/Fibre Channel (FC)/direct-attached SAS), this can indirectly cause
guest creation or migration to fail. See bug #4997 [1] and patch #1 for
details.

The primary goal of this series is to avoid autoactivating thick LVs that hold
guest disks in order to fix #4997. For this, it patches the LVM storage plugin
to create new LVs with autoactivation disabled, and implements a pve8to9 check
and subcommand to disable autoactivation on existing LVs (see below for
details).

The series does the same for LVM-thin storages. While LVM-thin storages are
inherently local and cannot run into #4997, it can still make sense to avoid
unnecessarily activating thin LVs at boot.

# Why not before PVE 9

As discussed in v2, we should only apply this series for PVE 9, as we can be
sure all nodes are running at least PVE 8 then. Here is why we couldn't apply
it for PVE 8 already.

PVE 7/Bullseye's LVM does not know `--setautoactivation`. A user upgrading from
PVE 7 will temporarily have a mixed 7/8 cluster. Once this series is applied,
the PVE 8 nodes will create new LVs with `--setautoactivation n`, which the PVE
7 nodes do not know. In my tests, the PVE 7 nodes can read/interact with such
LVs just fine, *but*: As soon as a PVE 7 node creates a new (unrelated) LV, the
`--setautoactivation n` flag is reset to default `y` on *all* LVs of the VG. I
presume this is because creating a new LV rewrites metadata, and the PVE 7 LVM
doesn't write out the `--setautoactivation n` flag. I imagine (have not tested)
this will cause problems on a mixed cluster.

# pve8to9 script

As discussed in v2, this series implements

(a) a pve8to9 check to detect thick and thin LVs with autoactivation enabled
(b) a script to disable autoactivation on LVs when needed, intended to be run
    manually by the user during 8->9 upgrade

The question is where to put the script (b). Patch #4 moves the existing checks
from `pve8to9` to `pve8to9 checklist`, to be able to implement (b) as a new
subcommand `pve8to9 updatelvm`. I realize this is a huge user-facing change,
and we don't have to go with this approach. It is also incomplete, as patch #5
doesn't update the manpage yet. However, what I like about this approach is that
pve8to9 bundles "tasks that are related to 8->9 upgrades". If we do decide to
go with this, I can send another patch to update the manpage as well as add
documentation.

# Bonus fix for FC/SAS multipath+LVM issue

As it turns out, this series seems to additionally fix an issue on hosts with
LVM on FC/SAS-attached LUNs *with multipath* where LVM would report "Device
mismatch detected" warnings because the LVs are activated too early in the boot
process before multipath is available. Our current suggested workaround is to
install multipath-tools-boot [2]. With this series applied and when users have
upgraded to 9, this shouldn't be necessary anymore, as LVs are not
auto-activated after boot.

# Interaction with zfs-initramfs

zfs-initramfs used to ship an initramfs-tools script that unconditionally
activates *all* VGs that are visible at boot time, ignoring the autoactivation
flag. A fix was already applied in v2 [3].

# Patch summary

- Patch #1 makes the LVM plugin create new LVs with `--setautoactivation n`
- Patch #2 makes the LVM-thin plugin disable autoactivation for new LVs
- Patch #3 runs perltidy on pve8to9, can be dropped
- Patch #4 moves pve8to9 checks to a subcommand (see pve8to9 section above)
- Patch #5 adds a pve8to9 subcommand to disable autoactivation
  (see pve8to9 section above)

# Changes since v3

- rebase and run perltidy
- see individual patches for other changes
- @Michael tested+reviewed v3 (thx!), but since there were some code changes,
  I'm not including the trailers here, even though the changes on #1+#2 were
  mostly comments/formatting changes.

# Changes since v2

- drop zfsonlinux patch that was since applied
- add patches for LVM-thin
- add pve8to9 patches

v3: https://lore.proxmox.com/pve-devel/20250429113646.25738-1-f.weber@proxmox.com/
v2: https://lore.proxmox.com/pve-devel/20250307095245.65698-1-f.weber@proxmox.com/
v1: https://lore.proxmox.com/pve-devel/20240111150332.733635-1-f.weber@proxmox.com/

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#%22Device_mismatch_detected%22_warnings
[3] https://lore.proxmox.com/pve-devel/ad4c806c-234a-4949-885d-8bb369860cc0@proxmox.com/

pve-storage:

Friedrich Weber (2):
  fix #4997: lvm: create: disable autoactivation for new logical volumes
  lvmthin: disable autoactivation for new logical volumes

 src/PVE/Storage/LVMPlugin.pm     | 13 ++++++++++++-
 src/PVE/Storage/LvmThinPlugin.pm | 17 ++++++++++++++++-
 2 files changed, 28 insertions(+), 2 deletions(-)


pve-manager:

Friedrich Weber (3):
  pve8to9: run perltidy
  pve8to9: move checklist to dedicated subcommand
  pve8to9: detect and (if requested) disable LVM autoactivation

 PVE/CLI/pve8to9.pm | 178 ++++++++++++++++++++++++++++++++++++++++++++-
 bin/Makefile       |   2 +-
 2 files changed, 175 insertions(+), 5 deletions(-)


Summary over all repositories:
  4 files changed, 203 insertions(+), 7 deletions(-)

-- 
Generated by git-murpp 0.8.1


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



* [pve-devel] [PATCH pve-storage v4 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes
  2025-07-07  8:03 [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
@ 2025-07-07  8:03 ` Friedrich Weber
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-storage v4 2/2] lvmthin: " Friedrich Weber
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Friedrich Weber @ 2025-07-07  8:03 UTC (permalink / raw)
  To: pve-devel

When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.

This is not necessarily problematic for local LVM VGs, but it is
problematic for VGs on top of a shared LUN used by multiple cluster
nodes (accessed via e.g. iSCSI/Fibre Channel/direct-attached SAS).

Concretely, in a cluster with a shared LVM VG where an LV is active on
nodes 1 and 2, deleting the LV on node 1 will not clean up the
device-mapper device on node 2. If an LV with the same name is
recreated later, the leftover device-mapper device will cause
activation of that LV on node 2 to fail with:

> device-mapper: create ioctl on [...] failed: Device or resource busy

Hence, certain combinations of guest removal (and thus LV removals)
and node reboots can cause guest creation or VM live migration (which
both entail LV activation) to fail with the above error message for
certain VMIDs, see bug #4997 for more information [1].

To avoid this issue in the future, disable autoactivation when
creating new LVs using the `--setautoactivation` flag. With this
setting, LVM autoactivation will not activate these LVs, and the
storage stack will take care of activating/deactivating the LV (only)
on the correct node when needed.

This additionally fixes an issue with multipath on FC/SAS-attached
LUNs where LVs would be activated too early after boot when multipath
is not yet available, see [3] for more details and current workaround.

The `--setautoactivation` flag was introduced with LVM 2.03.12 [2], so
it is available since Bookworm/PVE 8, which ships 2.03.16. Nodes with
older LVM versions ignore the flag and remove it on metadata updates.
This is why PVE 8 could not use the flag reliably: PVE 7 nodes still in
the cluster would have reset it on every metadata update.

The flag is only set for newly created LVs, so LVs created before this
patch can still trigger #4997. To avoid this, users will be advised to
run a script to disable autoactivation for existing LVs.
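
For existing LVs, that script boils down to one `lvchange` per affected
volume. A minimal sketch of the loop, with mocked `lvs` output ("vg0" and the
LV names are hypothetical; a real run needs root and an actual VG, so the
commands are only printed here):

```shell
# Mocked output of `lvs --separator : --noheadings -o lv_name,autoactivation`;
# an empty second field means autoactivation is already disabled.
mock_lvs='vm-100-disk-0:enabled
vm-101-disk-0:'
printf '%s\n' "$mock_lvs" | while IFS=: read -r lv flag; do
    if [ "$flag" = "enabled" ]; then
        # print the command instead of executing it (sketch only)
        echo "lvchange --setautoactivation n vg0/$lv"
    fi
done
# prints: lvchange --setautoactivation n vg0/vm-100-disk-0
```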

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://gitlab.com/lvmteam/lvm2/-/blob/main/WHATS_NEW
[3] https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#FC/SAS-specific_configuration

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---

Notes:
    changes since v3:
    - rebase and run perltidy
    - reword commit message, add rationale for setting this on PVE 9 only
    - squash in v3 patch 1/3 that only spread the $cmd definition over
      multiple lines, as v3 1/3 would have been reformatted by perltidy to
      use three lines, which defeats its purpose
    
    changes since v2:
    - reworded commit message

 src/PVE/Storage/LVMPlugin.pm | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 1a992e8..83a4ca9 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -405,7 +405,18 @@ sub lvcreate {
         $size .= "k"; # default to kilobytes
     }
 
-    my $cmd = ['/sbin/lvcreate', '-aly', '-Wy', '--yes', '--size', $size, '--name', $name];
+    my $cmd = [
+        '/sbin/lvcreate',
+        '-aly',
+        '-Wy',
+        '--yes',
+        '--size',
+        $size,
+        '--name',
+        $name,
+        '--setautoactivation',
+        'n',
+    ];
     for my $tag (@$tags) {
         push @$cmd, '--addtag', $tag;
     }
-- 
2.47.2






* [pve-devel] [PATCH pve-storage v4 2/2] lvmthin: disable autoactivation for new logical volumes
  2025-07-07  8:03 [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-storage v4 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes Friedrich Weber
@ 2025-07-07  8:03 ` Friedrich Weber
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 1/3] pve8to9: run perltidy Friedrich Weber
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Friedrich Weber @ 2025-07-07  8:03 UTC (permalink / raw)
  To: pve-devel

When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.

Autoactivation is problematic for shared LVM storages, see #4997 [1].
For the inherently local LVM-thin storage it is less problematic, but
it still makes sense to avoid unnecessarily activating LVs and thus
making them visible on the host at boot.

To avoid that, disable autoactivation after creating new LVs. lvcreate
on trixie does not accept the --setautoactivation flag for thin LVs
yet; support was only added with [2]. Hence, setting the flag is done
with an additional lvchange command for now. With this setting, LVM
LVM autoactivation will not activate these LVs, and the storage stack
will take care of activating/deactivating LVs when needed.
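
The resulting call sequence can be sketched as follows (VG, pool, and LV names
are hypothetical and the lvcreate flags are approximate; the commands are
echoed rather than executed so the sketch runs without an LVM setup):

```shell
# Hypothetical names; commands printed, not run.
vg=vg0; pool=data; lv=vm-100-disk-0
echo "lvcreate -aly -V 4194304k --name $lv --thinpool $vg/$pool"
# trixie's lvcreate rejects --setautoactivation for thin LVs, so follow up:
echo "lvchange --setautoactivation n $vg/$lv"
```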

The flag is only set for newly created LVs, so LVs created before this
patch can still trigger #4997. To avoid this, users will be advised to
run a script to disable autoactivation for existing LVs.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://gitlab.com/lvmteam/lvm2/-/commit/1fba3b876bbd912a53e13bdf1b4de8792e8400d4

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---

Notes:
    - would be great to get your opinion on whether we should consider
      LVM-thin storages in this series or not.
    
    changes since v3:
    - run perltidy
    - since we now know that lvcreate will support the flag in the future,
      reword the code comment and add TODO for PVE 10
    - reword commit message accordingly
    - reword commit message to mention the flag is only set for newly
      created LVs
    
    new in v3

 src/PVE/Storage/LvmThinPlugin.pm | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index c244c91..fe8a986 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -83,6 +83,18 @@ sub filesystem_path {
     return wantarray ? ($path, $vmid, $vtype) : $path;
 }
 
+# lvcreate on trixie does not accept --setautoactivation for thin LVs yet, so set it via lvchange
+# TODO PVE 10: evaluate if lvcreate accepts --setautoactivation
+my $set_lv_autoactivation = sub {
+    my ($vg, $lv, $autoactivation) = @_;
+
+    my $cmd = [
+        '/sbin/lvchange', '--setautoactivation', $autoactivation ? 'y' : 'n', "$vg/$lv",
+    ];
+    eval { run_command($cmd); };
+    warn "could not set autoactivation: $@" if $@;
+};
+
 sub alloc_image {
     my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
 
@@ -112,6 +124,7 @@ sub alloc_image {
     ];
 
     run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
+    $set_lv_autoactivation->($vg, $name, 0);
 
     return $name;
 }
@@ -298,6 +311,7 @@ sub clone_image {
 
     my $cmd = ['/sbin/lvcreate', '-n', $name, '-prw', '-kn', '-s', $lv];
     run_command($cmd, errmsg => "clone image '$lv' error");
+    $set_lv_autoactivation->($vg, $name, 0);
 
     return $name;
 }
@@ -346,7 +360,7 @@ sub volume_snapshot {
 
     my $cmd = ['/sbin/lvcreate', '-n', $snapvol, '-pr', '-s', "$vg/$volname"];
     run_command($cmd, errmsg => "lvcreate snapshot '$vg/$snapvol' error");
-
+    # disabling autoactivation not needed, as -s defaults to --setautoactivationskip y
 }
 
 sub volume_snapshot_rollback {
@@ -360,6 +374,7 @@ sub volume_snapshot_rollback {
 
     $cmd = ['/sbin/lvcreate', '-kn', '-n', $volname, '-s', "$vg/$snapvol"];
     run_command($cmd, errmsg => "lvm rollback '$vg/$snapvol' error");
+    $set_lv_autoactivation->($vg, $volname, 0);
 }
 
 sub volume_snapshot_delete {
-- 
2.47.2






* [pve-devel] [PATCH pve-manager v4 1/3] pve8to9: run perltidy
  2025-07-07  8:03 [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-storage v4 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes Friedrich Weber
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-storage v4 2/2] lvmthin: " Friedrich Weber
@ 2025-07-07  8:03 ` Friedrich Weber
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 2/3] pve8to9: move checklist to dedicated subcommand Friedrich Weber
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Friedrich Weber @ 2025-07-07  8:03 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---
 PVE/CLI/pve8to9.pm | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
index f4cdf1c7..45692748 100644
--- a/PVE/CLI/pve8to9.pm
+++ b/PVE/CLI/pve8to9.pm
@@ -1061,8 +1061,7 @@ sub check_storage_content_dirs {
 sub check_containers_cgroup_compat {
     if ($forced_legacy_cgroup) {
         log_fail("System explicitly configured for legacy hybrid cgroup hierarchy.\n"
-            . "     NOTE: support for the hybrid cgroup hierarchy is removed in Proxmox VE 9!"
-        );
+            . "     NOTE: support for the hybrid cgroup hierarchy is removed in Proxmox VE 9!");
     }
 
     my $supports_cgroupv2 = sub {
-- 
2.47.2






* [pve-devel] [PATCH pve-manager v4 2/3] pve8to9: move checklist to dedicated subcommand
  2025-07-07  8:03 [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
                   ` (2 preceding siblings ...)
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 1/3] pve8to9: run perltidy Friedrich Weber
@ 2025-07-07  8:03 ` Friedrich Weber
  2025-07-07 18:46   ` Thomas Lamprecht
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 3/3] pve8to9: detect and (if requested) disable LVM autoactivation Friedrich Weber
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: Friedrich Weber @ 2025-07-07  8:03 UTC (permalink / raw)
  To: pve-devel

This allows introducing other subcommands, e.g. for manual actions
required before the upgrade.

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---

Notes:
    changes since v3:
    - rephrase commit message
    
    new in v3

 PVE/CLI/pve8to9.pm | 4 +++-
 bin/Makefile       | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
index 45692748..c3fb0b0b 100644
--- a/PVE/CLI/pve8to9.pm
+++ b/PVE/CLI/pve8to9.pm
@@ -1595,6 +1595,8 @@ __PACKAGE__->register_method({
     },
 });
 
-our $cmddef = [__PACKAGE__, 'checklist', [], {}];
+our $cmddef = {
+    checklist => [__PACKAGE__, 'checklist', []],
+};
 
 1;
diff --git a/bin/Makefile b/bin/Makefile
index caed0af4..23635a16 100644
--- a/bin/Makefile
+++ b/bin/Makefile
@@ -73,7 +73,7 @@ pve8to9.1:
 	 before, and during the upgrade of a Proxmox VE system\n" >> $@.tmp
 	printf "Any failure must be addressed before the upgrade, and any warning must be addressed, \
 	 or at least carefully evaluated, if a false-positive is suspected\n" >> $@.tmp
-	printf ".SH SYNOPSIS\npve8to9 [--full]\n" >> $@.tmp
+	printf ".SH SYNOPSIS\npve8to9 checklist [--full]\n" >> $@.tmp
 	mv $@.tmp $@
 
 
-- 
2.47.2






* [pve-devel] [PATCH pve-manager v4 3/3] pve8to9: detect and (if requested) disable LVM autoactivation
  2025-07-07  8:03 [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
                   ` (3 preceding siblings ...)
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 2/3] pve8to9: move checklist to dedicated subcommand Friedrich Weber
@ 2025-07-07  8:03 ` Friedrich Weber
  2025-07-08  8:38 ` [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Thomas Lamprecht
  2025-07-09 14:13 ` [pve-devel] superseded: " Friedrich Weber
  6 siblings, 0 replies; 12+ messages in thread
From: Friedrich Weber @ 2025-07-07  8:03 UTC (permalink / raw)
  To: pve-devel

Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
the `--setautoactivation n` flag to fix #4997 [1]. However, this does
not affect LVs that already exist on setups upgrading from PVE 8.

Hence, add a new `updatelvm` subcommand to `pve8to9` that finds guest
volume LVs with autoactivation enabled on LVM and LVM-thin storages,
and disables autoactivation for each of them. This is done while
holding a lock on the storage, to avoid metadata update conflicts for
shared LVM storages.

Also, check for guest volume LVs with autoactivation enabled during
the `checklist` command, and suggest running `updatelvm` if necessary.
The check is done without a lock, as taking a lock doesn't have
advantages here.
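
The underlying check parses `lvs` output of the form `<lv_name>:<autoactivation>`.
A self-contained sketch of that parsing, mirroring the Perl helper in this
patch (mocked output and hypothetical LV names, since the real command needs
an existing VG):

```shell
# Mocked `lvs --separator : --noheadings --options lv_name,autoactivation <vg>`
# output; real output carries leading whitespace, trimmed here like
# PVE::Tools::trim does.
mock='  vm-100-disk-0:enabled
  vm-101-disk-1:'
printf '%s\n' "$mock" | while IFS=: read -r name flag; do
    name=${name#"${name%%[![:space:]]*}"}  # strip leading whitespace
    [ -n "$name" ] || continue
    if [ "$flag" = "enabled" ]; then
        echo "$name needs autoactivation disabled"
    else
        echo "$name is fine"
    fi
done
```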

While some users may have disabled autoactivation on the VG level to
work around #4997, we deliberately do not take this into account:
disabling autoactivation on the LV level as well does not hurt, and it
avoids issues in case the user decides to re-enable autoactivation on
the VG level in the future.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---

Notes:
    If users create new LVs on PVE 8 nodes after running `pve8to9
    updatelvm` but before actually upgrading to PVE 9, these will still be
    created with autoactivation enabled. As discussed in v3, we don't
    consider this case, as the alternative would be to suggest running
    updatelvm *post-upgrade*, and that can easily be forgotten by users.
    
    Open questions:
    
    - `updatelvm` will disable autoactivation on all guest volumes,
      regardless of the guest owner. In other words, it will disable
      autoactivation on LVs owned by a different node. Is this a problem?
      My tests suggests it's unproblematic, but may be worth a second
      look. We can change `updatelvm` to only update LVs owned by the
      local node, but then might miss LVs that are migrated between
      `updatelvm` runs on different nodes.
    
    - should `updatelvm` really be a pve8to9 subcommand, or live somewhere
      else? (see cover letter)
      If it should indeed be a pve8to9 subcommand, I can send another
      patch to update the manpage.
    
    changes since v3:
    - rebase and run perltidy
    - change phrasing of warnings to provide more context (thx @Michael!)
    
    new in v3

 PVE/CLI/pve8to9.pm | 171 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 170 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
index c3fb0b0b..bea99ebb 100644
--- a/PVE/CLI/pve8to9.pm
+++ b/PVE/CLI/pve8to9.pm
@@ -22,7 +22,7 @@ use PVE::NodeConfig;
 use PVE::RPCEnvironment;
 use PVE::Storage;
 use PVE::Storage::Plugin;
-use PVE::Tools qw(run_command split_list file_get_contents);
+use PVE::Tools qw(run_command split_list file_get_contents trim);
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::VZDump::Common;
@@ -1415,6 +1415,107 @@ sub check_dkms_modules {
     }
 }
 
+my $query_lvm_lv_autoactivation = sub {
+    my ($vg_name) = @_;
+
+    my $cmd = [
+        '/sbin/lvs',
+        '--separator',
+        ':',
+        '--noheadings',
+        '--unbuffered',
+        '--options',
+        "lv_name,autoactivation",
+        $vg_name,
+    ];
+
+    my $result;
+    eval {
+        run_command(
+            $cmd,
+            outfunc => sub {
+                my $line = shift;
+                $line = trim($line);
+
+                my ($name, $autoactivation_flag) = split(':', $line);
+                return if !$name;
+
+                $result->{$name} = $autoactivation_flag eq 'enabled';
+            },
+        );
+    };
+    die "could not list LVM logical volumes: $@\n" if $@;
+    return $result;
+};
+
+my $foreach_lvm_vg_with_autoactivated_guest_volumes = sub {
+    my ($with_lock, $code) = @_;
+
+    my $cfg = PVE::Storage::config();
+    my $storage_info = PVE::Storage::storage_info($cfg);
+
+    for my $storeid (sort keys %$storage_info) {
+        my $scfg = PVE::Storage::storage_config($cfg, $storeid);
+        my $vgname = $scfg->{vgname};
+        my $type = $scfg->{type};
+        next if $type ne 'lvm' && $type ne 'lvmthin';
+
+        my $info = $storage_info->{$storeid};
+        if (!$info->{enabled} || !$info->{active}) {
+            log_skip("storage '$storeid' ($type) is disabled or inactive");
+            next;
+        }
+
+        my $find_autoactivated_lvs = sub {
+            my $lvs_autoactivation = $query_lvm_lv_autoactivation->($vgname);
+            my $vollist = PVE::Storage::volume_list($cfg, $storeid);
+
+            my $autoactivated_guest_lvs = [];
+            for my $volinfo (@$vollist) {
+                my $volname = (PVE::Storage::parse_volume_id($volinfo->{volid}))[1];
+                push @$autoactivated_guest_lvs, $volname if $lvs_autoactivation->{$volname};
+            }
+
+            $code->($storeid, $vgname, $autoactivated_guest_lvs);
+        };
+
+        if ($with_lock) {
+            my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+            $plugin->cluster_lock_storage(
+                $storeid, $scfg->{shared}, undef, $find_autoactivated_lvs,
+            );
+        } else {
+            $find_autoactivated_lvs->();
+        }
+    }
+};
+
+sub check_lvm_autoactivation {
+    log_info("Check for LVM autoactivation settings on 'lvm' and 'lvmthin' storages...");
+
+    my $needs_fix = 0;
+    $foreach_lvm_vg_with_autoactivated_guest_volumes->(
+        0,
+        sub {
+            my ($storeid, $vgname, $autoactivated_lvs) = @_;
+
+            if (scalar(@$autoactivated_lvs) > 0) {
+                log_warn("storage '$storeid' has guest volumes with autoactivation enabled");
+                $needs_fix = 1;
+            } else {
+                log_pass(
+                    "all guest volumes on storage '$storeid' have autoactivation disabled");
+            }
+        },
+    );
+    log_warn(
+        "Starting with PVE 9, autoactivation will be disabled for new LVM guest volumes. Please "
+            . "run 'pve8to9 updatelvm' to disable autoactivation for existing LVM guest volumes!")
+        if $needs_fix;
+
+    return undef;
+}
+
 sub check_misc {
     print_header("MISCELLANEOUS CHECKS");
     my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
@@ -1524,6 +1625,7 @@ sub check_misc {
     check_nvidia_vgpu_service();
     check_bootloader();
     check_dkms_modules();
+    check_lvm_autoactivation();
 }
 
 my sub colored_if {
@@ -1595,8 +1697,75 @@ __PACKAGE__->register_method({
     },
 });
 
+__PACKAGE__->register_method({
+    name => 'updatelvm',
+    path => 'updatelvm',
+    method => 'POST',
+    description => "disable autoactivation for all existing LVM guest volumes. "
+        . "Starting with PVE 9, autoactivation will be disabled for new LVM guest volumes.",
+    parameters => {
+        additionalProperties => 0,
+    },
+    returns => { type => 'null' },
+    code => sub {
+        my ($param) = @_;
+
+        my $did_work = 0;
+        eval {
+            $foreach_lvm_vg_with_autoactivated_guest_volumes->(
+                1,
+                sub {
+                    my ($storeid, $vgname, $autoactivated_lvs) = @_;
+
+                    if (scalar(@$autoactivated_lvs) == 0) {
+                        log_pass(
+                            "all guest volumes on storage '$storeid' have autoactivation disabled");
+                        return;
+                    }
+
+                    log_info("disabling autoactivation on storage '$storeid'...");
+                    die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
+
+                    for my $lvname (@$autoactivated_lvs) {
+                        log_info("disabling autoactivation for $vgname/$lvname on storage "
+                            . "'$storeid'...");
+                        my $cmd = [
+                            '/sbin/lvchange', '--setautoactivation', 'n', "$vgname/$lvname",
+                        ];
+
+                        eval { run_command($cmd); };
+                        my $err = $@;
+                        die "could not disable autoactivation for $vgname/$lvname: $err\n"
+                            if $err;
+
+                        $did_work = 1;
+                    }
+                },
+            );
+        };
+        my $err = $@;
+        if ($err) {
+            log_fail("could not disable autoactivation on enabled and active LVM storages");
+            die $err;
+        }
+
+        if ($did_work) {
+            log_pass(
+                "successfully disabled autoactivation for existing guest volumes on all enabled and "
+                    . "active LVM storages.");
+        } else {
+            log_pass(
+                "all existing guest volumes on enabled and active LVM storages have autoactivation "
+                    . "disabled.");
+        }
+
+        return undef;
+    },
+});
+
 our $cmddef = {
     checklist => [__PACKAGE__, 'checklist', []],
+    updatelvm => [__PACKAGE__, 'updatelvm', []],
 };
 
 1;
-- 
2.47.2






* Re: [pve-devel] [PATCH pve-manager v4 2/3] pve8to9: move checklist to dedicated subcommand
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 2/3] pve8to9: move checklist to dedicated subcommand Friedrich Weber
@ 2025-07-07 18:46   ` Thomas Lamprecht
  2025-07-08 10:18     ` Friedrich Weber
  0 siblings, 1 reply; 12+ messages in thread
From: Thomas Lamprecht @ 2025-07-07 18:46 UTC (permalink / raw)
  To: Proxmox VE development discussion, Friedrich Weber

On 07.07.25 at 10:03, Friedrich Weber wrote:
> This allows to introduce other subcommands, e.g. for manual actions
> rqeuired before the upgrade.
> 
> Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
> ---
> 
> Notes:
>     changes since v3:
>     - rephrase commit message
>     
>     new in v3
> 
>  PVE/CLI/pve8to9.pm | 4 +++-
>  bin/Makefile       | 2 +-
>  2 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
> index 45692748..c3fb0b0b 100644
> --- a/PVE/CLI/pve8to9.pm
> +++ b/PVE/CLI/pve8to9.pm
> @@ -1595,6 +1595,8 @@ __PACKAGE__->register_method({
>      },
>  });
>  
> -our $cmddef = [__PACKAGE__, 'checklist', [], {}];
> +our $cmddef = {
> +    checklist => [__PACKAGE__, 'checklist', []],

we nowadays normally use minus to separate command words, but it
probably would be fine to just drop the "list" part.

I'd also strongly prefer making this the default command, if none is
supplied.

> +};
>  
>  1;
> diff --git a/bin/Makefile b/bin/Makefile
> index caed0af4..23635a16 100644
> --- a/bin/Makefile
> +++ b/bin/Makefile
> @@ -73,7 +73,7 @@ pve8to9.1:
>  	 before, and during the upgrade of a Proxmox VE system\n" >> $@.tmp
>  	printf "Any failure must be addressed before the upgrade, and any warning must be addressed, \
>  	 or at least carefully evaluated, if a false-positive is suspected\n" >> $@.tmp
> -	printf ".SH SYNOPSIS\npve8to9 [--full]\n" >> $@.tmp
> +	printf ".SH SYNOPSIS\npve8to9 checklist [--full]\n" >> $@.tmp
>  	mv $@.tmp $@
>  
>  






* Re: [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs
  2025-07-07  8:03 [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
                   ` (4 preceding siblings ...)
  2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 3/3] pve8to9: detect and (if requested) disable LVM autoactivation Friedrich Weber
@ 2025-07-08  8:38 ` Thomas Lamprecht
  2025-07-08 10:16   ` Friedrich Weber
  2025-07-09 14:13 ` [pve-devel] superseded: " Friedrich Weber
  6 siblings, 1 reply; 12+ messages in thread
From: Thomas Lamprecht @ 2025-07-08  8:38 UTC (permalink / raw)
  To: Proxmox VE development discussion, Friedrich Weber

Thanks for your polished submission, very easy to digest!

Am 07.07.25 um 10:03 schrieb Friedrich Weber:
> # pve8to9 script
> 
> As discussed in v2, this series implements
> 
> (a) a pve8to9 check to detect thick and thin LVs with autoactivation enabled
> (b) a script to disable autoactivation on LVs when needed, intended to be run
>     manually by the user during 8->9 upgrade
> 
> The question is where to put the script (b). Patch #4 moves the existing checks
> from `pve8to9` to `pve8to9 checklist`, to be able to implement (b) as a new
> subcommand `pve8to9 updatelvm`. I realize this is a huge user-facing change,
> and we don't have to go with this approach. It is also incomplete, as patch #5
> doesn't update the manpage yet. However, what I like about this approach is that
> pve8to9 bundles "tasks that are related to 8->9 upgrades". If we do decide to
> go with this, I can send another patch to update the manpage as well as add
> documentation.

I'd prefer shipping the update in its own script, outside of $PATH.
We should ship that either below /usr/libexec or, maybe even nicer, in a
package-specific /usr/share path, e.g. inside:
/usr/share/pve-manager/migrations/...

The checker script would then output the respective path to use.

Keeping such migration scripts, which actively change the host, independent
from the XtoY scripts avoids the need for multiple copies across multiple XtoY
scripts, as the next version can often still benefit from having these.
Having a strict boundary between idempotent checks and code that actually
changes the host also seems a bit safer to me, especially if the latter is not
accessible via $PATH. On top of that, we execute commands even if they are not
spelled out completely: a user-supplied string that uniquely matches the start
of an existing command is enough to trigger it, so "u" or "update" would
already run the command in this case.
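[Editor's note] To make the standalone-helper idea concrete, here is a rough
sketch, not taken from the series: the VG and LV names are made up, and this
dry-run variant only prints the `lvchange` invocations it would perform.

```shell
#!/usr/bin/env bash
# Sketch of what such a standalone migration helper might do: disable
# autoactivation on each LV of a shared VG. VG/LV names are illustrative;
# in dry-run mode the commands are only printed, never executed.
set -eu
vg="${1:-foo}"
dry_run=true
# A real helper would take the LV list from 'lvs --noheadings -o lv_name'.
for lv in vm-100-disk-0 vm-101-disk-0; do
    cmd=(lvchange --setautoactivation n "$vg/$lv")
    if $dry_run; then
        echo "would run: ${cmd[*]}"
    else
        "${cmd[@]}"
    fi
done
```

Keeping the destructive call behind an explicit dry-run switch mirrors the
checks-versus-changes boundary argued for above.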



* Re: [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs
  2025-07-08  8:38 ` [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Thomas Lamprecht
@ 2025-07-08 10:16   ` Friedrich Weber
  2025-07-08 10:19     ` Thomas Lamprecht
  0 siblings, 1 reply; 12+ messages in thread
From: Friedrich Weber @ 2025-07-08 10:16 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 08/07/2025 10:38, Thomas Lamprecht wrote:
> Thanks for your polished submission, very easy to digest!
> 
> Am 07.07.25 um 10:03 schrieb Friedrich Weber:
>> # pve8to9 script
>>
>> As discussed in v2, this series implements
>>
>> (a) a pve8to9 check to detect thick and thin LVs with autoactivation enabled
>> (b) a script to disable autoactivation on LVs when needed, intended to be run
>>     manually by the user during 8->9 upgrade
>>
>> The question is where to put the script (b). Patch #4 moves the existing checks
>> from `pve8to9` to `pve8to9 checklist`, to be able to implement (b) as a new
>> subcommand `pve8to9 updatelvm`. I realize this is a huge user-facing change,
>> and we don't have to go with this approach. It is also incomplete, as patch #5
>> doesn't update the manpage yet. However, what I like about this approach is that
>> pve8to9 bundles "tasks that are related to 8->9 upgrades". If we do decide to
>> go with this, I can send another patch to update the manpage as well as add
>> documentation.
> 
> I'd prefer shipping the update in it's own script and outside of path.
> We should ship that either below /usr/libexec, or–maybe even nicer–in a
> package specific /usr/share path, like e.g. inside:
> /usr/share/pve-manager/migrations/...
> 
> The checker script would then output the respective path to use.

OK, then I'll prepare a v5 that moves the `updatelvm` code to a script
under /usr/share/pve-manager/migrations.

> Keeping such migration scripts, which actively change the host, independent
> from the XtoY scripts avoids the need for multiple copies across multiple XtoY
> scripts, as the next version can often still benefit from having these.
> Having a strict boundary between idempotent checks and code that actually
> changes the host also seems a bit safer to me, especially if the latter is not
> accessible via $PATH [...]

Yeah, fair point.

> [...] On top of that, we execute commands even if they are not
> spelled out completely: a user-supplied string that uniquely matches the
> start of an existing command is enough to trigger it, so "u" or "update"
> would already run the command in this case.

Right, that might be a bit dangerous. But, thinking about it, even if we
move the `updatelvm` code to a migration helper script, it could still
ask the user for confirmation before actually making changes (e.g.
"Disable autoactivation for all guest volumes in VG 'foo'? [Y/n]").
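[Editor's note] A minimal sketch of that confirmation idea, assuming bash; the
prompt text, the VG name 'foo', and the choice to proceed silently when
non-interactive are all placeholders, not behavior from the series.

```shell
#!/usr/bin/env bash
# Sketch: ask for confirmation only when stdout is a terminal, i.e. the
# session is likely interactive; otherwise proceed without prompting.
confirm() {
    if [ -t 1 ]; then
        local answer
        read -r -p "$1 [Y/n] " answer || true
        case "$answer" in
            [nN]*) return 1 ;;
        esac
    fi
    return 0
}

if confirm "Disable autoactivation for all guest volumes in VG 'foo'?"; then
    echo "proceeding"
fi
```

Whether a non-interactive run should proceed or refuse is a policy decision;
the sketch proceeds, a more cautious helper might require a --yes flag instead.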



* Re: [pve-devel] [PATCH pve-manager v4 2/3] pve8to9: move checklist to dedicated subcommand
  2025-07-07 18:46   ` Thomas Lamprecht
@ 2025-07-08 10:18     ` Friedrich Weber
  0 siblings, 0 replies; 12+ messages in thread
From: Friedrich Weber @ 2025-07-08 10:18 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion



On 07/07/2025 20:46, Thomas Lamprecht wrote:
> Am 07.07.25 um 10:03 schrieb Friedrich Weber:
>> This allows introducing other subcommands, e.g. for manual actions
>> required before the upgrade.
>>
>> Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
>> ---
>>
>> Notes:
>>     changes since v3:
>>     - rephrase commit message
>>     
>>     new in v3
>>
>>  PVE/CLI/pve8to9.pm | 4 +++-
>>  bin/Makefile       | 2 +-
>>  2 files changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
>> index 45692748..c3fb0b0b 100644
>> --- a/PVE/CLI/pve8to9.pm
>> +++ b/PVE/CLI/pve8to9.pm
>> @@ -1595,6 +1595,8 @@ __PACKAGE__->register_method({
>>      },
>>  });
>>  
>> -our $cmddef = [__PACKAGE__, 'checklist', [], {}];
>> +our $cmddef = {
>> +    checklist => [__PACKAGE__, 'checklist', []],
> 
> we nowadays normally use minus to separate command words, but it
> probably would be fine to just drop the "list" part.
> 
> I'd also strongly prefer making this the default command, if none is
> supplied.
> 

Thanks for looking into this! But since I'm going to move `updatelvm` to
its own script anyway (see the reply to the cover letter), I'll just
drop this patch and keep $cmddef as it is in v5.

>> +};
>>  
>>  1;
>> diff --git a/bin/Makefile b/bin/Makefile
>> index caed0af4..23635a16 100644
>> --- a/bin/Makefile
>> +++ b/bin/Makefile
>> @@ -73,7 +73,7 @@ pve8to9.1:
>>  	 before, and during the upgrade of a Proxmox VE system\n" >> $@.tmp
>>  	printf "Any failure must be addressed before the upgrade, and any warning must be addressed, \
>>  	 or at least carefully evaluated, if a false-positive is suspected\n" >> $@.tmp
>> -	printf ".SH SYNOPSIS\npve8to9 [--full]\n" >> $@.tmp
>> +	printf ".SH SYNOPSIS\npve8to9 checklist [--full]\n" >> $@.tmp
>>  	mv $@.tmp $@
>>  
>>  
> 




* Re: [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs
  2025-07-08 10:16   ` Friedrich Weber
@ 2025-07-08 10:19     ` Thomas Lamprecht
  0 siblings, 0 replies; 12+ messages in thread
From: Thomas Lamprecht @ 2025-07-08 10:19 UTC (permalink / raw)
  To: Friedrich Weber, Proxmox VE development discussion

Am 08.07.25 um 12:16 schrieb Friedrich Weber:
>> [...] On top of that, we execute commands even if they are not
>> spelled out completely: a user-supplied string that uniquely matches the
>> start of an existing command is enough to trigger it, so "u" or "update"
>> would already run the command in this case.
> Right, that might be a bit dangerous. But, thinking about it, even if we
> move the `updatelvm` code to a migration helper script, it could still
> ask the user for confirmation before actually making changes (e.g.
> "Disable autoactivation for all guest volumes in VG 'foo'? [Y/n]").

Yes, that seems sensible for the `-t STDOUT` case, which we normally
use as a heuristic to check whether it's likely an interactive session.
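[Editor's note] In shell terms, Perl's `-t STDOUT` corresponds to `[ -t 1 ]`;
a trivial, standalone illustration of the heuristic:

```shell
#!/usr/bin/env bash
# Treat the session as interactive only when stdout (fd 1) is connected
# to a terminal; this is the shell equivalent of Perl's '-t STDOUT'.
if [ -t 1 ]; then
    interactive=yes
else
    interactive=no
fi
echo "interactive=$interactive"
```

Note the heuristic is only that: redirecting or piping stdout makes an
otherwise interactive session look non-interactive.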



* Re: [pve-devel] superseded: [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs
  2025-07-07  8:03 [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
                   ` (5 preceding siblings ...)
  2025-07-08  8:38 ` [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Thomas Lamprecht
@ 2025-07-09 14:13 ` Friedrich Weber
  6 siblings, 0 replies; 12+ messages in thread
From: Friedrich Weber @ 2025-07-09 14:13 UTC (permalink / raw)
  To: pve-devel

Superseded by
https://lore.proxmox.com/pve-devel/20250709141034.169726-1-f.weber@proxmox.com/





end of thread, other threads:[~2025-07-09 14:13 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-07  8:03 [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
2025-07-07  8:03 ` [pve-devel] [PATCH pve-storage v4 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes Friedrich Weber
2025-07-07  8:03 ` [pve-devel] [PATCH pve-storage v4 2/2] lvmthin: " Friedrich Weber
2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 1/3] pve8to9: run perltidy Friedrich Weber
2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 2/3] pve8to9: move checklist to dedicated subcommand Friedrich Weber
2025-07-07 18:46   ` Thomas Lamprecht
2025-07-08 10:18     ` Friedrich Weber
2025-07-07  8:03 ` [pve-devel] [PATCH pve-manager v4 3/3] pve8to9: detect and (if requested) disable LVM autoactivation Friedrich Weber
2025-07-08  8:38 ` [pve-devel] [PATCH manager/storage v4 0/5] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Thomas Lamprecht
2025-07-08 10:16   ` Friedrich Weber
2025-07-08 10:19     ` Thomas Lamprecht
2025-07-09 14:13 ` [pve-devel] superseded: " Friedrich Weber

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH