From: Friedrich Weber <f.weber@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH pve-manager v4 3/3] pve8to9: detect and (if requested) disable LVM autoactivation
Date: Mon,  7 Jul 2025 10:03:31 +0200
Message-ID: <20250707080352.58713-6-f.weber@proxmox.com>
In-Reply-To: <20250707080352.58713-1-f.weber@proxmox.com>

Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
the `--setautoactivation n` flag to fix #4997 [1]. However, this does
not affect already existing LVs of setups upgrading from PVE 8.

Hence, add a new `updatelvm` subcommand to `pve8to9` that finds guest
volume LVs with autoactivation enabled on LVM and LVM-thin storages,
and disables autoactivation for each of them. This is done while
holding a lock on the storage, to avoid metadata update conflicts for
shared LVM storages.
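
For illustration, the parsing that the patch's `$query_lvm_lv_autoactivation`
helper performs on the `lvs` output can be sketched in Python (the patch itself
is Perl; the sample volume names below are hypothetical):

```python
def parse_lvs_autoactivation(output):
    """Parse lines produced by:
         lvs --separator : --noheadings --unbuffered \
             --options lv_name,autoactivation <vg>
    Each line reads "<lv_name>:<flag>", where the flag is "enabled"
    when autoactivation is on for that LV (and empty otherwise).
    """
    result = {}
    for line in output.splitlines():
        line = line.strip()
        name, _, flag = line.partition(":")
        if not name:  # skip blank lines, mirroring the Perl helper
            continue
        result[name] = flag == "enabled"
    return result

sample = "  vm-100-disk-0:enabled\n  vm-101-disk-0:\n"
print(parse_lvs_autoactivation(sample))
# {'vm-100-disk-0': True, 'vm-101-disk-0': False}
```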

Also, check for guest volume LVs with autoactivation enabled during
the `checklist` command, and suggest running `updatelvm` if necessary.
The check is done without a lock, as taking a lock doesn't have
advantages here.

While some users may have disabled autoactivation at the VG level to
work around #4997, we consciously do not take this into account:
disabling autoactivation at the LV level as well does not hurt, and
avoids issues in case the user decides to re-enable autoactivation at
the VG level in the future.
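
To make the distinction concrete: the VG-level workaround disables
autoactivation for the whole volume group, while `updatelvm` runs
`lvchange --setautoactivation n <vg>/<lv>` once per guest volume. A minimal
Python sketch of the per-LV command line the patch builds (VG and LV names
hypothetical):

```python
def lvchange_disable_autoactivation(vg_name, lv_name):
    # Command line run by `pve8to9 updatelvm` for each guest volume
    # that still has autoactivation enabled.
    return ["/sbin/lvchange", "--setautoactivation", "n", f"{vg_name}/{lv_name}"]

print(lvchange_disable_autoactivation("pve-vg", "vm-100-disk-0"))
# ['/sbin/lvchange', '--setautoactivation', 'n', 'pve-vg/vm-100-disk-0']
```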

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---

Notes:
    If users create new LVs on PVE 8 nodes after running `pve8to9
    updatelvm` but before actually upgrading to PVE 9, these will still be
    created with autoactivation enabled. As discussed in v3, we don't
    consider this case, as the alternative would be to suggest running
    `updatelvm` *post-upgrade*, which users can easily forget.
    
    Open questions:
    
    - `updatelvm` will disable autoactivation on all guest volumes,
      regardless of the guest owner. In other words, it will disable
      autoactivation on LVs owned by a different node. Is this a problem?
      My tests suggest it's unproblematic, but it may be worth a second
      look. We can change `updatelvm` to only update LVs owned by the
      local node, but then we might miss LVs that are migrated between
      `updatelvm` runs on different nodes.
    
    - should `updatelvm` really be a pve8to9 subcommand, or live somewhere
      else? (see cover letter)
      If it should indeed be a pve8to9 subcommand, I can send another
      patch to update the manpage.
    
    changes since v3:
    - rebase and run perltidy
    - change phrasing of warnings to provide more context (thx @Michael!)
    
    new in v3

 PVE/CLI/pve8to9.pm | 171 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 170 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
index c3fb0b0b..bea99ebb 100644
--- a/PVE/CLI/pve8to9.pm
+++ b/PVE/CLI/pve8to9.pm
@@ -22,7 +22,7 @@ use PVE::NodeConfig;
 use PVE::RPCEnvironment;
 use PVE::Storage;
 use PVE::Storage::Plugin;
-use PVE::Tools qw(run_command split_list file_get_contents);
+use PVE::Tools qw(run_command split_list file_get_contents trim);
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::VZDump::Common;
@@ -1415,6 +1415,107 @@ sub check_dkms_modules {
     }
 }
 
+my $query_lvm_lv_autoactivation = sub {
+    my ($vg_name) = @_;
+
+    my $cmd = [
+        '/sbin/lvs',
+        '--separator',
+        ':',
+        '--noheadings',
+        '--unbuffered',
+        '--options',
+        "lv_name,autoactivation",
+        $vg_name,
+    ];
+
+    my $result;
+    eval {
+        run_command(
+            $cmd,
+            outfunc => sub {
+                my $line = shift;
+                $line = trim($line);
+
+                my ($name, $autoactivation_flag) = split(':', $line);
+                return if !$name;
+
+                $result->{$name} = $autoactivation_flag eq 'enabled';
+            },
+        );
+    };
+    die "could not list LVM logical volumes: $@\n" if $@;
+    return $result;
+};
+
+my $foreach_lvm_vg_with_autoactivated_guest_volumes = sub {
+    my ($with_lock, $code) = @_;
+
+    my $cfg = PVE::Storage::config();
+    my $storage_info = PVE::Storage::storage_info($cfg);
+
+    for my $storeid (sort keys %$storage_info) {
+        my $scfg = PVE::Storage::storage_config($cfg, $storeid);
+        my $vgname = $scfg->{vgname};
+        my $type = $scfg->{type};
+        next if $type ne 'lvm' && $type ne 'lvmthin';
+
+        my $info = $storage_info->{$storeid};
+        if (!$info->{enabled} || !$info->{active}) {
+            log_skip("storage '$storeid' ($type) is disabled or inactive");
+            next;
+        }
+
+        my $find_autoactivated_lvs = sub {
+            my $lvs_autoactivation = $query_lvm_lv_autoactivation->($vgname);
+            my $vollist = PVE::Storage::volume_list($cfg, $storeid);
+
+            my $autoactivated_guest_lvs = [];
+            for my $volinfo (@$vollist) {
+                my $volname = (PVE::Storage::parse_volume_id($volinfo->{volid}))[1];
+                push @$autoactivated_guest_lvs, $volname if $lvs_autoactivation->{$volname};
+            }
+
+            $code->($storeid, $vgname, $autoactivated_guest_lvs);
+        };
+
+        if ($with_lock) {
+            my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+            $plugin->cluster_lock_storage(
+                $storeid, $scfg->{shared}, undef, $find_autoactivated_lvs,
+            );
+        } else {
+            $find_autoactivated_lvs->();
+        }
+    }
+};
+
+sub check_lvm_autoactivation {
+    log_info("Check for LVM autoactivation settings on 'lvm' and 'lvmthin' storages...");
+
+    my $needs_fix = 0;
+    $foreach_lvm_vg_with_autoactivated_guest_volumes->(
+        0,
+        sub {
+            my ($storeid, $vgname, $autoactivated_lvs) = @_;
+
+            if (scalar(@$autoactivated_lvs) > 0) {
+                log_warn("storage '$storeid' has guest volumes with autoactivation enabled");
+                $needs_fix = 1;
+            } else {
+                log_pass(
+                    "all guest volumes on storage '$storeid' have autoactivation disabled");
+            }
+        },
+    );
+    log_warn(
+        "Starting with PVE 9, autoactivation will be disabled for new LVM guest volumes. Please "
+            . "run 'pve8to9 updatelvm' to disable autoactivation for existing LVM guest volumes!")
+        if $needs_fix;
+
+    return undef;
+}
+
 sub check_misc {
     print_header("MISCELLANEOUS CHECKS");
     my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
@@ -1524,6 +1625,7 @@ sub check_misc {
     check_nvidia_vgpu_service();
     check_bootloader();
     check_dkms_modules();
+    check_lvm_autoactivation();
 }
 
 my sub colored_if {
@@ -1595,8 +1697,75 @@ __PACKAGE__->register_method({
     },
 });
 
+__PACKAGE__->register_method({
+    name => 'updatelvm',
+    path => 'updatelvm',
+    method => 'POST',
+    description => "disable autoactivation for all existing LVM guest volumes. "
+        . "Starting with PVE 9, autoactivation will be disabled for new LVM guest volumes.",
+    parameters => {
+        additionalProperties => 0,
+    },
+    returns => { type => 'null' },
+    code => sub {
+        my ($param) = @_;
+
+        my $did_work = 0;
+        eval {
+            $foreach_lvm_vg_with_autoactivated_guest_volumes->(
+                1,
+                sub {
+                    my ($storeid, $vgname, $autoactivated_lvs) = @_;
+
+                    if (scalar(@$autoactivated_lvs) == 0) {
+                        log_pass(
+                            "all guest volumes on storage '$storeid' have autoactivation disabled");
+                        return;
+                    }
+
+                    log_info("disabling autoactivation on storage '$storeid'...");
+                    die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
+
+                    for my $lvname (@$autoactivated_lvs) {
+                        log_info("disabling autoactivation for $vgname/$lvname on storage "
+                            . "'$storeid'...");
+                        my $cmd = [
+                            '/sbin/lvchange', '--setautoactivation', 'n', "$vgname/$lvname",
+                        ];
+
+                        eval { run_command($cmd); };
+                        my $err = $@;
+                        die "could not disable autoactivation for $vgname/$lvname: $err\n"
+                            if $err;
+
+                        $did_work = 1;
+                    }
+                },
+            );
+        };
+        my $err = $@;
+        if ($err) {
+            log_fail("could not disable autoactivation on enabled and active LVM storages");
+            die $err;
+        }
+
+        if ($did_work) {
+            log_pass(
+                "successfully disabled autoactivation for existing guest volumes on all enabled and "
+                    . "active LVM storages.");
+        } else {
+            log_pass(
+                "all existing guest volumes on enabled and active LVM storages have autoactivation "
+                    . "disabled.");
+        }
+
+        return undef;
+    },
+});
+
 our $cmddef = {
     checklist => [__PACKAGE__, 'checklist', []],
+    updatelvm => [__PACKAGE__, 'updatelvm', []],
 };
 
 1;
-- 
2.47.2





