public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager/storage v5 0/3] fix #4997: lvm, lvm-thin: avoid autoactivating LVs
@ 2025-07-09 14:09 Friedrich Weber
  2025-07-09 14:09 ` [pve-devel] [PATCH pve-storage v5 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes Friedrich Weber
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Friedrich Weber @ 2025-07-09 14:09 UTC (permalink / raw)
  To: pve-devel

# Summary

With default settings, LVM autoactivates LVs when it sees a new VG, e.g. after
boot or iSCSI login. In a cluster with guest disks on a shared LVM VG (e.g. on
top of iSCSI/Fibre Channel (FC)/direct-attached SAS), this can indirectly cause
guest creation or migration to fail. See bug #4997 [1] and patch #1 for
details.
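
For reference, the per-LV flag can be inspected with plain LVM tooling; "myvg"
below is a placeholder for the affected shared VG:

    # list LVs and whether autoactivation is enabled for them
    lvs -o lv_name,autoactivation myvg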

The primary goal of this series is to avoid autoactivating thick LVs that hold
guest disks in order to fix #4997. For this, it patches the LVM storage plugin
to create new LVs with autoactivation disabled, and implements a pve8to9 check
and a migration script to disable autoactivation on existing LVs (see below for
details).

The series does the same for LVM-thin storages. While LVM-thin storages are
inherently local and cannot run into #4997, it can still make sense to avoid
unnecessarily activating thin LVs at boot.

# Why not before PVE 9

As discussed in v2, we should only apply this series for PVE 9, as we can be
sure all nodes are running at least PVE 8 then. Here is why we couldn't apply
it for PVE 8 already.

PVE 7/Bullseye's LVM does not know `--setautoactivation`. A user upgrading from
PVE 7 will temporarily have a mixed 7/8 cluster. Once this series is applied,
the PVE 8 nodes will create new LVs with `--setautoactivation n`, which the PVE
7 nodes do not know. In my tests, the PVE 7 nodes can read/interact with such
LVs just fine, *but*: As soon as a PVE 7 node creates a new (unrelated) LV, the
`--setautoactivation n` flag is reset to default `y` on *all* LVs of the VG. I
presume this is because creating a new LV rewrites metadata, and the PVE 7 LVM
doesn't write out the `--setautoactivation n` flag. I imagine (have not tested)
this will cause problems on a mixed cluster.

# pve8to9 script

As discussed in v2+v4, this series implements

(a) a pve8to9 check to detect thick and thin LVs with autoactivation enabled
(b) a script to disable autoactivation on LVs when needed, intended to be run
    manually by the user during 8->9 upgrade

As suggested by Thomas in v4, the script (b) is installed under
/usr/share/pve-manager/migrations/.
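
For reference, a typical invocation on a node would look roughly like this (the
script asks for confirmation per storage unless `--assume-yes`/`-y` is passed):

    /usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation
    # or, non-interactively:
    /usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation --assume-yes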

# Bonus fix for FC/SAS multipath+LVM issue

As it turns out, this series seems to additionally fix an issue on hosts with
LVM on FC/SAS-attached LUNs *with multipath* where LVM would report "Device
mismatch detected" warnings because the LVs are activated too early in the boot
process before multipath is available. Our current suggested workaround is to
install multipath-tools-boot [2]. With this series applied and once users have
upgraded to PVE 9, this workaround shouldn't be necessary anymore, as LVs are no
longer auto-activated at boot.
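
For reference, the current workaround amounts to something like:

    apt install multipath-tools-boot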

# Interaction with zfs-initramfs

zfs-initramfs used to ship an initramfs-tools script that unconditionally
activated *all* VGs that are visible at boot time, ignoring the autoactivation
flag. A fix was already applied in v2 [3].

# Patch summary

- Patch #1 makes the LVM plugin create new LVs with `--setautoactivation n`
- Patch #2 makes the LVM-thin plugin disable autoactivation for new LVs
- Patch #3 adds a pve8to9 check for LVM/LVM-thin LV autoactivation, and a
  migration script to disable autoactivation for existing LVs

# Changes since v4

- move the migration code to a dedicated script under
  /usr/share/pve-manager/migrations, and ask for user confirmation
  before taking action (thx Thomas!)
- batch lvchange calls and release the lock in between (thx Fabian!)
- drop patch that moves `pve8to9` to `pve8to9 checklist`
- drop patch that ran `proxmox-perltidy` on pve8to9

# Changes since v3

- rebase and run perltidy
- see individual patches for other changes
- @Michael tested+reviewed v3 (thx!), but since there were some code changes,
  I'm not including the trailers here, even though the changes on #1+#2 were
  mostly comments/formatting changes.

# Changes since v2

- drop zfsonlinux patch that was since applied
- add patches for LVM-thin
- add pve8to9 patches

v3: https://lore.proxmox.com/pve-devel/20250429113646.25738-1-f.weber@proxmox.com/
v2: https://lore.proxmox.com/pve-devel/20250307095245.65698-1-f.weber@proxmox.com/
v1: https://lore.proxmox.com/pve-devel/20240111150332.733635-1-f.weber@proxmox.com/

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#%22Device_mismatch_detected%22_warnings
[3] https://lore.proxmox.com/pve-devel/ad4c806c-234a-4949-885d-8bb369860cc0@proxmox.com/

pve-storage:

Friedrich Weber (2):
  fix #4997: lvm: create: disable autoactivation for new logical volumes
  lvmthin: disable autoactivation for new logical volumes

 src/PVE/Storage/LVMPlugin.pm     | 13 ++++++++++++-
 src/PVE/Storage/LvmThinPlugin.pm | 17 ++++++++++++++++-
 2 files changed, 28 insertions(+), 2 deletions(-)


pve-manager:

Friedrich Weber (1):
  pve8to9: check for LVM autoactivation and provide migration script

 PVE/CLI/pve8to9.pm                 |  86 +++++++++++++-
 bin/Makefile                       |   7 +-
 bin/pve-lvm-disable-autoactivation | 174 +++++++++++++++++++++++++++++
 3 files changed, 265 insertions(+), 2 deletions(-)
 create mode 100755 bin/pve-lvm-disable-autoactivation


Summary over all repositories:
  5 files changed, 293 insertions(+), 4 deletions(-)

-- 
Generated by git-murpp 0.8.1



* [pve-devel] [PATCH pve-storage v5 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes
  2025-07-09 14:09 [pve-devel] [PATCH manager/storage v5 0/3] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
@ 2025-07-09 14:09 ` Friedrich Weber
  2025-07-09 15:39   ` [pve-devel] applied: " Thomas Lamprecht
  2025-07-09 14:09 ` [pve-devel] [PATCH pve-storage v5 2/2] lvmthin: " Friedrich Weber
  2025-07-09 14:10 ` [pve-devel] [PATCH pve-manager v5 1/1] pve8to9: check for LVM autoactivation and provide migration script Friedrich Weber
  2 siblings, 1 reply; 7+ messages in thread
From: Friedrich Weber @ 2025-07-09 14:09 UTC (permalink / raw)
  To: pve-devel

When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.

This is not necessarily problematic for local LVM VGs, but it is
problematic for VGs on top of a shared LUN used by multiple cluster
nodes (accessed via e.g. iSCSI/Fibre Channel/direct-attached SAS).

Concretely, in a cluster with a shared LVM VG where an LV is active on
nodes 1 and 2, deleting the LV on node 1 will not clean up the
device-mapper device on node 2. If an LV with the same name is
recreated later, the leftover device-mapper device will cause
activation of that LV on node 2 to fail with:

> device-mapper: create ioctl on [...] failed: Device or resource busy

Hence, certain combinations of guest removal (and thus LV removals)
and node reboots can cause guest creation or VM live migration (which
both entail LV activation) to fail with the above error message for
certain VMIDs, see bug #4997 for more information [1].

To avoid this issue in the future, disable autoactivation when
creating new LVs using the `--setautoactivation` flag. With this
setting, LVM autoactivation will not activate these LVs, and the
storage stack will take care of activating/deactivating the LV (only)
on the correct node when needed.
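
To illustrate the distinction (the LV name below is a made-up example): the
flag only affects LVM's automatic activation on VG discovery; explicit
activation, as issued by the storage stack when needed, keeps working:

    lvchange -ay myvg/vm-100-disk-0    # explicit activation is unaffected
    lvchange -an myvg/vm-100-disk-0    # explicit deactivation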

This additionally fixes an issue with multipath on FC/SAS-attached
LUNs where LVs would be activated too early after boot when multipath
is not yet available, see [3] for more details and current workaround.

The `--setautoactivation` flag was introduced with LVM 2.03.12 [2], so
it is available since Bookworm/PVE 8, which ships 2.03.16. Nodes with
older LVM versions ignore the flag and drop it whenever they rewrite the
VG metadata. This is why PVE 8 could not use the flag reliably: there may
still be PVE 7 nodes in the cluster that would reset it.
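
As a side note, the LVM version installed on a node can be checked with, for
example:

    lvm version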

The flag is only set for newly created LVs, so LVs created before this
patch can still trigger #4997. To avoid this, users will be advised to
run a script to disable autoactivation for existing LVs.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://gitlab.com/lvmteam/lvm2/-/blob/main/WHATS_NEW
[3] https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#FC/SAS-specific_configuration

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---

Notes:
    no changes since v4
    
    changes since v3:
    - rebase and run perltidy
    - reword commit message, add rationale for setting this on PVE 9 only
    - squash in v3 patch 1/3 that only spread the $cmd definition over
      multiple lines, as v3 1/3 would have been reformatted by perltidy to
      use three lines, which defeats its purpose
    
    changes since v2:
    - reworded commit message

 src/PVE/Storage/LVMPlugin.pm | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 1a992e8..83a4ca9 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage/LVMPlugin.pm
@@ -405,7 +405,18 @@ sub lvcreate {
         $size .= "k"; # default to kilobytes
     }
 
-    my $cmd = ['/sbin/lvcreate', '-aly', '-Wy', '--yes', '--size', $size, '--name', $name];
+    my $cmd = [
+        '/sbin/lvcreate',
+        '-aly',
+        '-Wy',
+        '--yes',
+        '--size',
+        $size,
+        '--name',
+        $name,
+        '--setautoactivation',
+        'n',
+    ];
     for my $tag (@$tags) {
         push @$cmd, '--addtag', $tag;
     }
-- 
2.47.2




* [pve-devel] [PATCH pve-storage v5 2/2] lvmthin: disable autoactivation for new logical volumes
  2025-07-09 14:09 [pve-devel] [PATCH manager/storage v5 0/3] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
  2025-07-09 14:09 ` [pve-devel] [PATCH pve-storage v5 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes Friedrich Weber
@ 2025-07-09 14:09 ` Friedrich Weber
  2025-07-09 15:39   ` [pve-devel] applied: " Thomas Lamprecht
  2025-07-09 14:10 ` [pve-devel] [PATCH pve-manager v5 1/1] pve8to9: check for LVM autoactivation and provide migration script Friedrich Weber
  2 siblings, 1 reply; 7+ messages in thread
From: Friedrich Weber @ 2025-07-09 14:09 UTC (permalink / raw)
  To: pve-devel

When discovering a new volume group (VG), for example on boot, LVM
triggers autoactivation. With the default settings, this activates all
logical volumes (LVs) in the VG. Activating an LV creates a
device-mapper device and a block device under /dev/mapper.

Autoactivation is problematic for shared LVM storages, see #4997 [1].
For the inherently local LVM-thin storage it is less problematic, but
it still makes sense to avoid unnecessarily activating LVs and thus
making them visible on the host at boot.

To avoid that, disable autoactivation after creating new LVs. lvcreate
on trixie does not accept the --setautoactivation flag for thin LVs
yet; support was only added upstream with [2]. Hence, the flag is
set with an additional lvchange command for now. With this setting,
LVM autoactivation will not activate these LVs, and the storage stack
will take care of activating/deactivating LVs when needed.
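
For illustration, the additional call is roughly equivalent to running the
following after lvcreate (the LV name is a made-up example):

    lvchange --setautoactivation n myvg/vm-100-disk-0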

The flag is only set for newly created LVs, so LVs created before this
patch can still trigger #4997. To avoid this, users will be advised to
run a script to disable autoactivation for existing LVs.

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2] https://gitlab.com/lvmteam/lvm2/-/commit/1fba3b876bbd912a53e13bdf1b4de8792e8400d4

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---

Notes:
    - would be great to get your opinion on whether we should consider
      LVM-thin storages in this series or not.
    
    no changes since v4
    
    changes since v3:
    - run perltidy
    - since we now know that lvcreate will support the flag in the future,
      reword the code comment and add TODO for PVE 10
    - reword commit message accordingly
    - reword commit message to mention the flag is only set for newly
      created LVs
    
    new in v3

 src/PVE/Storage/LvmThinPlugin.pm | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Storage/LvmThinPlugin.pm b/src/PVE/Storage/LvmThinPlugin.pm
index c244c91..fe8a986 100644
--- a/src/PVE/Storage/LvmThinPlugin.pm
+++ b/src/PVE/Storage/LvmThinPlugin.pm
@@ -83,6 +83,18 @@ sub filesystem_path {
     return wantarray ? ($path, $vmid, $vtype) : $path;
 }
 
+# lvcreate on trixie does not accept --setautoactivation for thin LVs yet, so set it via lvchange
+# TODO PVE 10: evaluate if lvcreate accepts --setautoactivation
+my $set_lv_autoactivation = sub {
+    my ($vg, $lv, $autoactivation) = @_;
+
+    my $cmd = [
+        '/sbin/lvchange', '--setautoactivation', $autoactivation ? 'y' : 'n', "$vg/$lv",
+    ];
+    eval { run_command($cmd); };
+    warn "could not set autoactivation: $@" if $@;
+};
+
 sub alloc_image {
     my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;
 
@@ -112,6 +124,7 @@ sub alloc_image {
     ];
 
     run_command($cmd, errmsg => "lvcreate '$vg/$name' error");
+    $set_lv_autoactivation->($vg, $name, 0);
 
     return $name;
 }
@@ -298,6 +311,7 @@ sub clone_image {
 
     my $cmd = ['/sbin/lvcreate', '-n', $name, '-prw', '-kn', '-s', $lv];
     run_command($cmd, errmsg => "clone image '$lv' error");
+    $set_lv_autoactivation->($vg, $name, 0);
 
     return $name;
 }
@@ -346,7 +360,7 @@ sub volume_snapshot {
 
     my $cmd = ['/sbin/lvcreate', '-n', $snapvol, '-pr', '-s', "$vg/$volname"];
     run_command($cmd, errmsg => "lvcreate snapshot '$vg/$snapvol' error");
-
+    # disabling autoactivation not needed, as -s defaults to --setautoactivationskip y
 }
 
 sub volume_snapshot_rollback {
@@ -360,6 +374,7 @@ sub volume_snapshot_rollback {
 
     $cmd = ['/sbin/lvcreate', '-kn', '-n', $volname, '-s', "$vg/$snapvol"];
     run_command($cmd, errmsg => "lvm rollback '$vg/$snapvol' error");
+    $set_lv_autoactivation->($vg, $volname, 0);
 }
 
 sub volume_snapshot_delete {
-- 
2.47.2




* [pve-devel] [PATCH pve-manager v5 1/1] pve8to9: check for LVM autoactivation and provide migration script
  2025-07-09 14:09 [pve-devel] [PATCH manager/storage v5 0/3] fix #4997: lvm, lvm-thin: avoid autoactivating LVs Friedrich Weber
  2025-07-09 14:09 ` [pve-devel] [PATCH pve-storage v5 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes Friedrich Weber
  2025-07-09 14:09 ` [pve-devel] [PATCH pve-storage v5 2/2] lvmthin: " Friedrich Weber
@ 2025-07-09 14:10 ` Friedrich Weber
  2025-07-09 15:57   ` [pve-devel] applied: " Thomas Lamprecht
  2 siblings, 1 reply; 7+ messages in thread
From: Friedrich Weber @ 2025-07-09 14:10 UTC (permalink / raw)
  To: pve-devel

Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
the `--setautoactivation n` flag to fix #4997 [1]. However, this does
not affect already existing LVs of setups upgrading from PVE 8.

Hence, add a new script under /usr/share/pve-manager/migrations that
finds guest volume LVs with autoactivation enabled on enabled and
active LVM and LVM-thin storages, and disables autoactivation for each
of them. Before doing so, ask the user for confirmation for each
storage. For non-interactive use cases, the user can pass a flag to
assume confirmation. Disabling autoactivation via lvchange updates the
LVM metadata, hence the storage lock is acquired before proceeding.
To avoid holding the lock for too long and blocking other consumers,
the lvchange calls are batched and the lock is released between two
batches. Afterwards, check for remaining guest volume LVs with
autoactivation enabled (these may have been created while the lock was
not held). If there are still such LVs left, or if other errors
occurred, suggest that the user run the command again.

Also, check for guest volume LVs with autoactivation enabled in
pve8to9, and suggest running the migration script if necessary. The
check is done without a lock, as taking a lock does not have
advantages here.

While some users may have disabled autoactivation on the VG level to
work around #4997, consciously do not take this into account:
Disabling autoactivation for LVs too does not hurt, and avoids issues
in case the user decides to re-enable autoactivation on the VG level
in the future.
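
For context, such a VG-level workaround is typically something along the lines
of (with "myvg" as a placeholder):

    vgchange --setautoactivation n myvg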

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---

Notes:
    If users create new LVs on PVE 8 nodes after running the script but
    before actually upgrading to PVE 9, these will still be created with
    autoactivation enabled. As discussed in v3, we don't consider this
    case, as the alternative would be to suggest running updatelvm
    *post-upgrade*, and this can be easily forgotten by users.
    
    The migration script uses a subroutine from PVE::CLI::pve8to9. Thomas
    suggested we could move such helpers (also e.g. the log_* helpers) to
    a dedicated module such as PVE/UpgradeHelpers.pm, but I haven't done
    so here to keep the scope of this patch series small. Can be done in a
    follow-up.
    
    Open questions:
    
    - The script will disable autoactivation on all guest volumes,
      regardless of the guest owner. In other words, it will disable
      autoactivation on LVs owned by a different node. Is this a problem?
      My tests suggest it's unproblematic, but it may be worth a second
      look. We can change the script to only update LVs owned by the local
      node, but then might miss LVs that are migrated between script runs
      on different nodes.
    
    changes since v4:
    - move update code from pve8to9 to a dedicated migration script
    - add pve8to9 suggestion to run the migration script
    - ask for confirmation in the migration script, provide CLI option to
      assume confirmation
    - update LVs in batches (thx Fabian!), check for remaining LVs
      afterwards
    
    changes since v3:
    - rebase and run perltidy
    - change phrasing of warnings to provide more context (thx @Michael!)
    
    new in v3

 PVE/CLI/pve8to9.pm                 |  86 +++++++++++++-
 bin/Makefile                       |   7 +-
 bin/pve-lvm-disable-autoactivation | 174 +++++++++++++++++++++++++++++
 3 files changed, 265 insertions(+), 2 deletions(-)
 create mode 100755 bin/pve-lvm-disable-autoactivation

diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
index 7a5725cb..148efcca 100644
--- a/PVE/CLI/pve8to9.pm
+++ b/PVE/CLI/pve8to9.pm
@@ -23,7 +23,7 @@ use PVE::NodeConfig;
 use PVE::RPCEnvironment;
 use PVE::Storage;
 use PVE::Storage::Plugin;
-use PVE::Tools qw(run_command split_list file_get_contents);
+use PVE::Tools qw(run_command split_list file_get_contents trim);
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::VZDump::Common;
@@ -1485,6 +1485,89 @@ sub check_legacy_backup_job_options {
     }
 }
 
+sub query_autoactivated_lvm_guest_volumes {
+    my ($cfg, $storeid, $vgname) = @_;
+
+    my $cmd = [
+        '/sbin/lvs',
+        '--separator',
+        ':',
+        '--noheadings',
+        '--unbuffered',
+        '--options',
+        "lv_name,autoactivation",
+        $vgname,
+    ];
+
+    my $autoactivated_lvs;
+    eval {
+        run_command(
+            $cmd,
+            outfunc => sub {
+                my $line = shift;
+                $line = trim($line);
+
+                my ($name, $autoactivation_flag) = split(':', $line);
+                return if !$name;
+
+                $autoactivated_lvs->{$name} = $autoactivation_flag eq 'enabled';
+            },
+        );
+    };
+    die "could not list LVM logical volumes: $@\n" if $@;
+
+    my $vollist = PVE::Storage::volume_list($cfg, $storeid);
+
+    my $autoactivated_guest_lvs = [];
+    for my $volinfo (@$vollist) {
+        my $volname = (PVE::Storage::parse_volume_id($volinfo->{volid}))[1];
+        push @$autoactivated_guest_lvs, $volname if $autoactivated_lvs->{$volname};
+    }
+
+    return $autoactivated_guest_lvs;
+}
+
+sub check_lvm_autoactivation {
+    my $cfg = PVE::Storage::config();
+    my $storage_info = PVE::Storage::storage_info($cfg);
+
+    log_info("Check for LVM autoactivation settings on LVM and LVM-thin storages...");
+
+    my $needs_fix = 0;
+
+    for my $storeid (sort keys %$storage_info) {
+        my $scfg = PVE::Storage::storage_config($cfg, $storeid);
+        my $type = $scfg->{type};
+        next if $type ne 'lvm' && $type ne 'lvmthin';
+
+        my $vgname = $scfg->{vgname};
+        die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
+
+        my $info = $storage_info->{$storeid};
+        if (!$info->{enabled} || !$info->{active}) {
+            log_skip("storage '$storeid' ($type) is disabled or inactive");
+            next;
+        }
+
+        my $autoactivated_guest_lvs =
+            query_autoactivated_lvm_guest_volumes($cfg, $storeid, $vgname);
+        if (scalar(@$autoactivated_guest_lvs) > 0) {
+            log_warn("storage '$storeid' has guest volumes with autoactivation enabled");
+            $needs_fix = 1;
+        } else {
+            log_pass("all guest volumes on storage '$storeid' have autoactivation disabled");
+        }
+    }
+    log_warn(
+        "Starting with PVE 9, autoactivation will be disabled for new LVM/LVM-thin guest volumes. "
+            . "Please run the following command to disable autoactivation for existing LVM/LVM-thin "
+            . "guest volumes:" . "\n\n"
+            . "\t/usr/share/pve-manager/migrations/pve-lvm-disable-autoactivation" . "\n")
+        if $needs_fix;
+
+    return undef;
+}
+
 sub check_misc {
     print_header("MISCELLANEOUS CHECKS");
     my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
@@ -1596,6 +1679,7 @@ sub check_misc {
     check_dkms_modules();
     check_legacy_notification_sections();
     check_legacy_backup_job_options();
+    check_lvm_autoactivation();
 }
 
 my sub colored_if {
diff --git a/bin/Makefile b/bin/Makefile
index 3931804b..51683596 100644
--- a/bin/Makefile
+++ b/bin/Makefile
@@ -30,6 +30,9 @@ HELPERS =			\
 	pve-startall-delay	\
 	pve-init-ceph-crash
 
+MIGRATIONS =			\
+	pve-lvm-disable-autoactivation
+
 SERVICE_MANS = $(addsuffix .8, $(SERVICES))
 
 CLI_MANS = 				\
@@ -83,7 +86,7 @@ pvereport.1.pod: pvereport
 
 .PHONY: tidy
 tidy:
-	echo $(SCRIPTS) $(HELPERS) | xargs -n4 -P0 proxmox-perltidy
+	echo $(SCRIPTS) $(HELPERS) $(MIGRATIONS) | xargs -n4 -P0 proxmox-perltidy
 
 .PHONY: check
 check: $(addsuffix .service-api-verified, $(SERVICES)) $(addsuffix .api-verified, $(CLITOOLS))
@@ -95,6 +98,8 @@ install: $(SCRIPTS) $(CLI_MANS) $(SERVICE_MANS) $(BASH_COMPLETIONS) $(ZSH_COMPLE
 	install -m 0755 $(SCRIPTS) $(BINDIR)
 	install -d $(USRSHARE)/helpers
 	install -m 0755 $(HELPERS) $(USRSHARE)/helpers
+	install -d $(USRSHARE)/migrations
+	install -m 0755 $(MIGRATIONS) $(USRSHARE)/migrations
 	install -d $(MAN1DIR)
 	install -m 0644 $(CLI_MANS) $(MAN1DIR)
 	install -d $(MAN8DIR)
diff --git a/bin/pve-lvm-disable-autoactivation b/bin/pve-lvm-disable-autoactivation
new file mode 100755
index 00000000..69a9f556
--- /dev/null
+++ b/bin/pve-lvm-disable-autoactivation
@@ -0,0 +1,174 @@
+#!/usr/bin/perl
+
+use strict;
+use warnings;
+
+use Getopt::Long;
+use Term::ANSIColor;
+
+use PVE::CLI::pve8to9;
+use PVE::Tools qw(run_command);
+
+my $is_tty = (-t STDOUT);
+
+my $level2color = {
+    pass => 'green',
+    warn => 'yellow',
+    fail => 'bold red',
+};
+
+my $log_line = sub {
+    my ($level, $line) = @_;
+
+    my $color = $level2color->{$level} // '';
+    print color($color) if $is_tty && $color && $color ne '';
+
+    print uc($level), ': ' if defined($level);
+    print "$line\n";
+
+    print color('reset') if $is_tty;
+};
+
+sub log_pass { $log_line->('pass', @_); }
+sub log_info { $log_line->('info', @_); }
+sub log_warn { $log_line->('warn', @_); }
+sub log_fail { $log_line->('fail', @_); }
+
+sub main {
+    my $assume_yes = 0;
+
+    if (!GetOptions('assume-yes|y', \$assume_yes)) {
+        print "USAGE $0 [ --assume-yes | -y ]\n";
+        exit(-1);
+    }
+
+    my $cfg = PVE::Storage::config();
+    my $storage_info = PVE::Storage::storage_info($cfg);
+
+    my $got_error = 0;
+    my $skipped_storage = 0;
+    my $still_found_autoactivated_lvs = 0;
+    my $did_work = 0;
+
+    log_info("Starting with PVE 9, autoactivation will be disabled for new LVM/LVM-thin guest "
+        . "volumes. This script disables autoactivation for existing LVM/LVM-thin guest volumes."
+    );
+
+    for my $storeid (sort keys %$storage_info) {
+        eval {
+            my $scfg = PVE::Storage::storage_config($cfg, $storeid);
+            my $type = $scfg->{type};
+            return if $type ne 'lvm' && $type ne 'lvmthin';
+
+            my $info = $storage_info->{$storeid};
+            if (!$info->{enabled} || !$info->{active}) {
+                log_info("storage '$storeid' ($type) is disabled or inactive");
+                return;
+            }
+
+            my $vgname = $scfg->{vgname};
+            die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
+
+            my $autoactivated_lvs =
+                PVE::CLI::pve8to9::query_autoactivated_lvm_guest_volumes($cfg, $storeid, $vgname);
+            if (scalar(@$autoactivated_lvs) == 0) {
+                log_pass("all guest volumes on storage '$storeid' have "
+                    . "autoactivation disabled");
+                return;
+            }
+
+            print "Disable autoactivation on "
+                . scalar(@$autoactivated_lvs)
+                . " guest volumes on storage '$storeid'? (y/N)? ";
+
+            my $continue;
+            if ($assume_yes) {
+                print "Assuming 'yes' because '--assume-yes' was passed.\n";
+                $continue = 1;
+            } elsif ($is_tty) {
+                my $answer = <STDIN>;
+                $continue = defined($answer) && $answer =~ m/^\s*y(?:es)?\s*$/i;
+            } else {
+                print "Assuming 'no'. Pass '--assume-yes' to always assume 'yes'.\n";
+                $continue = 0;
+            }
+
+            if (!$continue) {
+                $skipped_storage = 1;
+                log_warn("Skipping '$storeid' as requested ...\n");
+                return;
+            }
+
+            # in order to avoid holding the lock for too long at a time, update LVs in batches of 32
+            # and release the lock inbetween
+            my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+            while (@$autoactivated_lvs) {
+                my @current_lvs = splice @$autoactivated_lvs, 0, 32;
+                $plugin->cluster_lock_storage(
+                    $storeid,
+                    $scfg->{shared},
+                    undef,
+                    sub {
+                        for my $lvname (@current_lvs) {
+                            log_info("disabling autoactivation for $vgname/$lvname on storage "
+                                . "'$storeid'...");
+                            my $cmd = [
+                                '/sbin/lvchange', '--setautoactivation', 'n', "$vgname/$lvname",
+                            ];
+
+                            eval { run_command($cmd); };
+                            my $err = $@;
+                            die "could not disable autoactivation for $vgname/$lvname: $err\n"
+                                if $err;
+
+                            $did_work = 1;
+                        }
+                    },
+                );
+            }
+
+            # new LVs might have been created in the meantime while the lock was not held
+            my $still_autoactivated_lvs =
+                PVE::CLI::pve8to9::query_autoactivated_lvm_guest_volumes($cfg, $storeid, $vgname);
+            if (scalar(@$still_autoactivated_lvs) == 0) {
+                log_pass("all guest volumes on storage '$storeid' now have "
+                    . "autoactivation disabled");
+            } else {
+                $still_found_autoactivated_lvs = 1;
+                log_warn("some guest volumes on storage '$storeid' still have "
+                    . "autoactivation enabled");
+            }
+        };
+
+        my $err = $@;
+        if ($err) {
+            $got_error = 1;
+            log_fail("could not disable autoactivation on enabled storage '$storeid': $err");
+        }
+    }
+
+    my $exit_code;
+    if ($got_error) {
+        $exit_code = 1;
+        log_fail("at least one error was encountered");
+    } elsif ($skipped_storage) {
+        $exit_code = 1;
+        log_warn("at least one enabled and active LVM/LVM-thin storage was skipped. "
+            . "Please run this script again!");
+    } elsif ($still_found_autoactivated_lvs) {
+        $exit_code = 1;
+        log_warn("some guest volumes still have autoactivation enabled. "
+            . "Please run this script again!");
+    } elsif ($did_work) {
+        $exit_code = 0;
+        log_pass("successfully disabled autoactivation for existing guest volumes on all "
+            . "enabled and active LVM/LVM-thin storages.");
+    } else {
+        $exit_code = 0;
+        log_pass("all existing guest volumes on enabled and active LVM/LVM-thin storages "
+            . "already have autoactivation disabled.");
+    }
+    exit($exit_code);
+}
+
+main();
-- 
2.47.2




* [pve-devel] applied: [PATCH pve-storage v5 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes
  2025-07-09 14:09 ` [pve-devel] [PATCH pve-storage v5 1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes Friedrich Weber
@ 2025-07-09 15:39   ` Thomas Lamprecht
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2025-07-09 15:39 UTC (permalink / raw)
  To: pve-devel, Friedrich Weber

On Wed, 09 Jul 2025 16:09:58 +0200, Friedrich Weber wrote:
> When discovering a new volume group (VG), for example on boot, LVM
> triggers autoactivation. With the default settings, this activates all
> logical volumes (LVs) in the VG. Activating an LV creates a
> device-mapper device and a block device under /dev/mapper.
> 
> This is not necessarily problematic for local LVM VGs, but it is
> problematic for VGs on top of a shared LUN used by multiple cluster
> nodes (accessed via e.g. iSCSI/Fibre Channel/direct-attached SAS).
> 
> [...]

Applied, thanks!

[1/2] fix #4997: lvm: create: disable autoactivation for new logical volumes
      commit: f296ffc4e4d64b574c3001dc7cc6af3da1406441



* [pve-devel] applied: [PATCH pve-storage v5 2/2] lvmthin: disable autoactivation for new logical volumes
  2025-07-09 14:09 ` [pve-devel] [PATCH pve-storage v5 2/2] lvmthin: " Friedrich Weber
@ 2025-07-09 15:39   ` Thomas Lamprecht
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2025-07-09 15:39 UTC (permalink / raw)
  To: pve-devel, Friedrich Weber

On Wed, 09 Jul 2025 16:09:59 +0200, Friedrich Weber wrote:
> When discovering a new volume group (VG), for example on boot, LVM
> triggers autoactivation. With the default settings, this activates all
> logical volumes (LVs) in the VG. Activating an LV creates a
> device-mapper device and a block device under /dev/mapper.
> 
> Autoactivation is problematic for shared LVM storages, see #4997 [1].
> For the inherently local LVM-thin storage it is less problematic, but
> it still makes sense to avoid unnecessarily activating LVs and thus
> making them visible on the host at boot.
> 
> [...]

Applied, thanks!

[2/2] lvmthin: disable autoactivation for new logical volumes
      commit: 2796d6b6395c2c4d0da55df51ca41b0e045af3a0



* [pve-devel] applied: [PATCH pve-manager v5 1/1] pve8to9: check for LVM autoactivation and provide migration script
  2025-07-09 14:10 ` [pve-devel] [PATCH pve-manager v5 1/1] pve8to9: check for LVM autoactivation and provide migration script Friedrich Weber
@ 2025-07-09 15:57   ` Thomas Lamprecht
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Lamprecht @ 2025-07-09 15:57 UTC (permalink / raw)
  To: pve-devel, Friedrich Weber

On Wed, 09 Jul 2025 16:10:00 +0200, Friedrich Weber wrote:
> Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
> the `--setautoactivation n` flag to fix #4997 [1]. However, this does
> not affect already existing LVs of setups upgrading from PVE 8.
> 
> Hence, add a new script under /usr/share/pve-manager/migrations that
> finds guest volume LVs with autoactivation enabled on enabled and
> active LVM and LVM-thin storages, and disables autoactivation for each
> of them. Before doing so, ask the user for confirmation for each
> storage. For non-interactive usecases, the user can pass an flag to
> assume confirmation. Disabling autoactivation via lvchange updates the
> LVM metadata, hence, the storage lock is acquired before proceeding.
> To avoid holding the lock for too long and blocking other consumers,
> the lvchange calls are batched and the lock is released between two
> batches. Afterwards, check for remaining guest volume LVs with
> autoactivation enabled (these may have been created while the lock was
> not held). If there are still such LVs left, or if other errors
> occurred, suggest the user to run the command again.
> 
> [...]

Applied, thanks! As quickly discussed off-list: I pushed a follow-up commit to
change the warning to a notice for the case where only local LVM storages are
affected. In that case a warning is IMO not justified: keeping the status quo
has no known problems, and while it might be a bit cleaner and save a little
bit of resources to not activate all local volumes on boot, it's not worth the
relatively scary warning level.

[1/1] pve8to9: check for LVM autoactivation and provide migration script
      commit: ebed71c822e25c03211cbf0f97a3ef0c25a3f86e


