From: Friedrich Weber <f.weber@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH pve-manager stable-8 v3 3/3] pve8to9: detect and (if requested) disable LVM autoactivation
Date: Tue, 29 Apr 2025 13:36:46 +0200
Message-ID: <20250429113646.25738-7-f.weber@proxmox.com>
In-Reply-To: <20250429113646.25738-1-f.weber@proxmox.com>

Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
the `--setautoactivation n` flag to fix #4997 [1]. However, this does
not affect already existing LVs of setups upgrading from PVE 8.
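
For reference, on PVE 9 the flag is passed on LV creation roughly like
this (illustrative invocation, VG/LV names made up):

    lvcreate --setautoactivation n --name vm-100-disk-0 --size 4G myvg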

Hence, add a new `updatelvm` subcommand to `pve8to9` that finds guest
volume LVs with autoactivation enabled on LVM and LVM-thin storages,
and disables autoactivation on each of them. This is done while
holding a lock on the storage, to avoid metadata update conflicts for
shared LVM storages.
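
Per volume, disabling autoactivation boils down to a call like the
following (illustrative VG/LV names):

    lvchange --setautoactivation n myvg/vm-100-disk-0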

Also, check for guest volume LVs with autoactivation enabled during
the `checklist` command, and suggest running `updatelvm` if necessary.
The check is done without a lock, as taking a lock offers no advantage
here.
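
The autoactivation state is queried per VG roughly like this
(illustrative VG name):

    lvs --separator : --noheadings --options lv_name,autoactivation myvg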

While some users may have disabled autoactivation at the VG level to
work around #4997, deliberately do not take this into account:
disabling autoactivation on the LVs as well does no harm, and avoids
issues in case the user decides to re-enable autoactivation at the VG
level in the future.
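
Such a VG-level workaround would typically have been applied with
something like (illustrative VG name):

    vgchange --setautoactivation n myvg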

[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997

Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
---

Notes:
    open questions:
    
    - if users create new LVs on PVE 8 nodes after running `pve8to9
      updatelvm` but before actually upgrading to PVE 9, these will still
      be created with autoactivation enabled. Is this a case we want to
      consider? If yes, we could suggest running `pve8to9 updatelvm` (only)
      post-upgrade on all nodes, but users may forget this...
    
    - `updatelvm` will disable autoactivation on all guest volumes,
      regardless of the guest owner. In other words, it will disable
      autoactivation on LVs owned by a different node. Is this a problem?
      My tests suggest it's unproblematic, but it may be worth a second
      look. We can change `updatelvm` to only update LVs owned by the
      local node, but then we might miss LVs that are migrated between
      `updatelvm` runs on different nodes.
    
    new in v3
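
    Example run on a PVE 8 node (sample output assembled from the log
    messages introduced by this patch; storage and VG names are made up):

        # pve8to9 updatelvm
        INFO: disabling autoactivation on storage 'lvm0'...
        INFO: disabling autoactivation for myvg/vm-100-disk-0 on storage 'lvm0'...
        PASS: successfully disabled autoactivation on guest volumes on all enabled and active LVM storages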

 PVE/CLI/pve8to9.pm | 158 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 157 insertions(+), 1 deletion(-)

diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
index fbb96491..972c91ea 100644
--- a/PVE/CLI/pve8to9.pm
+++ b/PVE/CLI/pve8to9.pm
@@ -22,7 +22,7 @@ use PVE::NodeConfig;
 use PVE::RPCEnvironment;
 use PVE::Storage;
 use PVE::Storage::Plugin;
-use PVE::Tools qw(run_command split_list file_get_contents);
+use PVE::Tools qw(run_command split_list file_get_contents trim);
 use PVE::QemuConfig;
 use PVE::QemuServer;
 use PVE::VZDump::Common;
@@ -1372,6 +1372,101 @@ sub check_dkms_modules {
     }
 }
 
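+# Return a hashref mapping each LV name in the given VG to a boolean
+# indicating whether autoactivation is enabled for that LV.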
+my $query_lvm_lv_autoactivation = sub {
+    my ($vg_name) = @_;
+
+    my $cmd = [
+	'/sbin/lvs',
+	'--separator', ':',
+	'--noheadings',
+	'--unbuffered',
+	'--options', "lv_name,autoactivation",
+	$vg_name,
+    ];
+
+    my $result = {}; # empty hashref even if the VG has no LVs
+    eval {
+	run_command($cmd, outfunc => sub {
+	    my $line = shift;
+	    $line = trim($line);
+
+	    my ($name, $autoactivation_flag) = split(':', $line, 2); # keep trailing empty flag field
+	    return if !$name;
+
+	    $result->{$name} = $autoactivation_flag eq 'enabled';
+	});
+    };
+    die "could not list LVM logical volumes: $@\n" if $@;
+    return $result;
+};
+
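+# Iterate over all enabled and active 'lvm'/'lvmthin' storages. For each one,
+# pass the storage ID, the VG name and the list of guest volume LVs that
+# still have autoactivation enabled to $code, optionally while holding the
+# storage lock (cluster-wide for shared storages).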
+my $foreach_lvm_vg_with_autoactivated_guest_volumes = sub {
+    my ($with_lock, $code) = @_;
+
+    my $cfg = PVE::Storage::config();
+    my $storage_info = PVE::Storage::storage_info($cfg);
+
+    for my $storeid (sort keys %$storage_info) {
+	my $scfg = PVE::Storage::storage_config($cfg, $storeid);
+	my $vgname = $scfg->{vgname};
+	my $type = $scfg->{type};
+	next if $type ne 'lvm' && $type ne 'lvmthin';
+
+	my $info = $storage_info->{$storeid};
+	if (!$info->{enabled} || !$info->{active}) {
+	    log_skip("storage '$storeid' ($type) is disabled or inactive");
+	    next;
+	}
+
+	my $find_autoactivated_lvs = sub {
+	    my $lvs_autoactivation = $query_lvm_lv_autoactivation->($vgname);
+	    my $vollist = PVE::Storage::volume_list($cfg, $storeid);
+
+	    my $autoactivated_guest_lvs = [];
+	    for my $volinfo (@$vollist) {
+		my $volname = (PVE::Storage::parse_volume_id($volinfo->{volid}))[1];
+		push @$autoactivated_guest_lvs, $volname if $lvs_autoactivation->{$volname};
+	    }
+
+	    $code->($storeid, $vgname, $autoactivated_guest_lvs);
+	};
+
+	if ($with_lock) {
+	    my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
+	    $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef,
+		$find_autoactivated_lvs);
+	} else {
+	    $find_autoactivated_lvs->();
+	}
+    }
+};
+
+sub check_lvm_autoactivation {
+    log_info("Check for LVM autoactivation settings on 'lvm' and 'lvmthin' storages...");
+
+    my $needs_fix = 0;
+    $foreach_lvm_vg_with_autoactivated_guest_volumes->(0, sub {
+	my ($storeid, $vgname, $autoactivated_lvs) = @_;
+
+	if (scalar(@$autoactivated_lvs) > 0) {
+	    log_warn("storage '$storeid' has guest volumes with autoactivation enabled");
+	    $needs_fix = 1;
+	} else {
+	    log_pass("all guest volumes on storage '$storeid' have autoactivation disabled");
+	}
+    });
+    log_warn("please run 'pve8to9 updatelvm' to disable autoactivation on LVM guest volumes")
+	if $needs_fix;
+
+    return undef;
+}
+
 sub check_misc {
     print_header("MISCELLANEOUS CHECKS");
     my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
@@ -1474,6 +1569,7 @@ sub check_misc {
     check_nvidia_vgpu_service();
     check_bootloader();
     check_dkms_modules();
+    check_lvm_autoactivation();
 }
 
 my sub colored_if {
@@ -1538,8 +1634,68 @@ __PACKAGE__->register_method ({
 	return undef;
     }});
 
+__PACKAGE__->register_method ({
+    name => 'updatelvm',
+    path => 'updatelvm',
+    method => 'POST',
+    description => 'disable autoactivation for all LVM guest volumes',
+    parameters => {
+	additionalProperties => 0,
+    },
+    returns => { type => 'null' },
+    code => sub {
+	my ($param) = @_;
+
+	my $did_work = 0;
+	eval {
+	    $foreach_lvm_vg_with_autoactivated_guest_volumes->(1, sub {
+		my ($storeid, $vgname, $autoactivated_lvs) = @_;
+
+		if (scalar(@$autoactivated_lvs) == 0) {
+		    log_pass("all guest volumes on storage '$storeid' have autoactivation disabled");
+		    return;
+		}
+
+		log_info("disabling autoactivation on storage '$storeid'...");
+		die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
+
+		for my $lvname (@$autoactivated_lvs) {
+		    log_info("disabling autoactivation for $vgname/$lvname"
+			." on storage '$storeid'...");
+		    my $cmd = [
+			'/sbin/lvchange',
+			'--setautoactivation', 'n',
+			"$vgname/$lvname",
+		    ];
+
+		    eval { run_command($cmd); };
+		    my $err = $@;
+		    die "could not disable autoactivation on $vgname/$lvname: $err\n" if $err;
+
+		    $did_work = 1;
+		}
+	    });
+	};
+	my $err = $@;
+	if ($err) {
+	    log_fail("could not disable autoactivation on enabled and active LVM storages");
+	    die $err;
+	}
+
+	if ($did_work) {
+	    log_pass("successfully disabled autoactivation on guest volumes on all"
+		." enabled and active LVM storages");
+	} else {
+	    log_pass("all guest volumes on enabled and active LVM storages"
+		." have autoactivation disabled");
+	}
+
+	return undef;
+    }});
+
 our $cmddef = {
     checklist => [ __PACKAGE__, 'checklist', []],
+    updatelvm => [ __PACKAGE__, 'updatelvm', []],
 };
 
 1;
-- 
2.39.5


