Date: Tue, 10 Jun 2025 16:25:49 +0200
From: Michael Köppl <m.koeppl@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>, Friedrich Weber <f.weber@proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-manager stable-8 v3 3/3] pve8to9: detect and (if requested) disable LVM autoactivation
Message-ID: <bdfa472c-32e1-4ca0-8380-60154b30f646@proxmox.com>
In-Reply-To: <20250429113646.25738-7-f.weber@proxmox.com>

Left 2 comments inline. The changes overall look good to me and worked
during my tests.

On 4/29/25 13:36, Friedrich Weber wrote:
> Starting with PVE 9, the LVM and LVM-thin plugins create new LVs with
> the `--setautoactivation n` flag to fix #4997 [1]. However, this does
> not affect already existing LVs of setups upgrading from PVE 8.
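
Side note for anyone else testing this on an existing setup: which
guest LVs still have autoactivation enabled can be listed with plain
lvs, using the same fields the patch queries below (the VG name "pve"
is only a placeholder here):

    # show the autoactivation setting of every LV in one VG
    lvs --noheadings -o lv_name,autoactivation pve
    # LVs still reported as "enabled" are the ones the new check flags
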
> 
> Hence, add a new `updatelvm` subcommand to `pve8to9` that finds guest
> volume LVs with autoactivation enabled on LVM and LVM-thin storages,
> and disables autoactivation on each of them. This is done while
> holding a lock on the storage, to avoid metadata update conflicts for
> shared LVM storages.
> 
> Also, check for guest volume LVs with autoactivation enabled during
> the `checklist` command, and suggest to run `updatelvm` if necessary.
> The check is done without a lock, as taking a lock doesn't have
> advantages here.
> 
> While some users may have disabled autoactivation on the VG level to
> work around #4997, consciously do not take this into account:
> Disabling autoactivation on LVs too does not hurt, and avoids issues
> in case the user decides to re-enable autoactivation on the VG level
> in the future.
> 
> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
> 
> Signed-off-by: Friedrich Weber <f.weber@proxmox.com>
> ---
> 
> Notes:
>     open questions:
> 
>     - if users create new LVs on PVE 8 nodes after running `pve8to9
>       updatelvm` but before actually upgrading to PVE 9, these will still
>       be created with autoactivation enabled. Is this a case we want to
>       consider? If yes, we could suggest running `pve8to updatelvm` (only)
>       post-upgrade on all nodes, but users may forget this...

Pondered a bit on this, but I'm not sure it really makes sense to
consider this case. It should be clear that if I create LVs after
running `updatelvm`, the new LVs will still behave like they always
have on PVE 8. I think someone forgetting to run `updatelvm`
post-upgrade is far more likely than someone running the checklist for
PVE 9, continuing to create new LVs, and only then upgrading to PVE 9.

> 
>     - `updatelvm` will disable autoactivation on all guest volumes,
>       regardless of the guest owner. In other words, it will disable
>       autoactivation on LVs owned by a different node. Is this a problem?
>       My tests suggests it's unproblematic, but may be worth a second
>       look. We can change `updatelvm` to only update LVs owned by the
>       local node, but then might miss LVs that are migrated between
>       `updatelvm` runs on different nodes.
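
As far as I understand, `--setautoactivation n` only changes LVM
metadata and does not touch the current activation state of an LV, so
updating volumes owned by other nodes should not interfere with
running guests. Per volume, the new subcommand effectively boils down
to the following (VG and LV names are just examples):

    # what `pve8to9 updatelvm` runs for each affected guest volume
    lvchange --setautoactivation n pve/vm-100-disk-0
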
> 
>     new in v3
> 
>  PVE/CLI/pve8to9.pm | 152 ++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 151 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
> index fbb96491..972c91ea 100644
> --- a/PVE/CLI/pve8to9.pm
> +++ b/PVE/CLI/pve8to9.pm
> @@ -22,7 +22,7 @@ use PVE::NodeConfig;
>  use PVE::RPCEnvironment;
>  use PVE::Storage;
>  use PVE::Storage::Plugin;
> -use PVE::Tools qw(run_command split_list file_get_contents);
> +use PVE::Tools qw(run_command split_list file_get_contents trim);
>  use PVE::QemuConfig;
>  use PVE::QemuServer;
>  use PVE::VZDump::Common;
> @@ -1372,6 +1372,95 @@ sub check_dkms_modules {
>      }
>  }
>  
> +my $query_lvm_lv_autoactivation = sub {
> +    my ($vg_name) = @_;
> +
> +    my $cmd = [
> +        '/sbin/lvs',
> +        '--separator', ':',
> +        '--noheadings',
> +        '--unbuffered',
> +        '--options', "lv_name,autoactivation",
> +        $vg_name,
> +    ];
> +
> +    my $result;
> +    eval {
> +        run_command($cmd, outfunc => sub {
> +            my $line = shift;
> +            $line = trim($line);
> +
> +            my ($name, $autoactivation_flag) = split(':', $line);
> +            return if !$name;
> +
> +            $result->{$name} = $autoactivation_flag eq 'enabled';
> +        });
> +    };
> +    die "could not list LVM logical volumes: $@\n" if $@;
> +    return $result;
> +};
> +
> +my $foreach_lvm_vg_with_autoactivated_guest_volumes = sub {
> +    my ($with_lock, $code) = @_;
> +
> +    my $cfg = PVE::Storage::config();
> +    my $storage_info = PVE::Storage::storage_info($cfg);
> +
> +    for my $storeid (sort keys %$storage_info) {
> +        my $scfg = PVE::Storage::storage_config($cfg, $storeid);
> +        my $vgname = $scfg->{vgname};
> +        my $type = $scfg->{type};
> +        next if $type ne 'lvm' && $type ne 'lvmthin';
> +
> +        my $info = $storage_info->{$storeid};
> +        if (!$info->{enabled} || !$info->{active}) {
> +            log_skip("storage '$storeid' ($type) is disabled or inactive");
> +            next;
> +        }
> +
> +        my $find_autoactivated_lvs = sub {
> +            my $lvs_autoactivation = $query_lvm_lv_autoactivation->($vgname);
> +            my $vollist = PVE::Storage::volume_list($cfg, $storeid);
> +
> +            my $autoactivated_guest_lvs = [];
> +            for my $volinfo (@$vollist) {
> +                my $volname = (PVE::Storage::parse_volume_id($volinfo->{volid}))[1];
> +                push @$autoactivated_guest_lvs, $volname if $lvs_autoactivation->{$volname};
> +            }
> +
> +            $code->($storeid, $vgname, $autoactivated_guest_lvs);
> +        };
> +
> +        if ($with_lock) {
> +            my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> +            $plugin->cluster_lock_storage($storeid, $scfg->{shared}, undef,
> +                $find_autoactivated_lvs);
> +        } else {
> +            $find_autoactivated_lvs->();
> +        }
> +    };
> +};
> +
> +sub check_lvm_autoactivation {
> +    log_info("Check for LVM autoactivation settings on 'lvm' and 'lvmthin' storages...");
> +
> +    my $needs_fix = 0;
> +    $foreach_lvm_vg_with_autoactivated_guest_volumes->(0, sub {
> +        my ($storeid, $vgname, $autoactivated_lvs) = @_;
> +
> +        if (scalar(@$autoactivated_lvs) > 0) {
> +            log_warn("storage '$storeid' has guest volumes with autoactivation enabled");
> +            $needs_fix = 1;
> +        } else {
> +            log_pass("all guest volumes on storage '$storeid' have autoactivation disabled");
> +        }
> +    });
> +    log_warn("please run 'pve8to9 updatelvm' to disable autoactivation on LVM guest volumes")

I think it would help users if some additional info was added to this
message. To someone not familiar with the inner workings, disabling
autoactivation sounds like it might affect how LVs work (do I need to
do something if they're not autoactivated?, etc.). Maybe this could
also mention that the storage stack will take care of
activating/deactivating when needed? Just a suggestion since I noticed
this during my testing.
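
To make that concrete (this is general LVM behavior, not something the
patch changes; VG and LV names are again just examples): the flag only
controls whether an LV is activated automatically during a VG scan,
e.g. at boot. Explicit activation, which is what the storage stack
does when a volume is needed, keeps working as before:

    # explicit (de)activation is unaffected by --setautoactivation n
    lvchange -ay pve/vm-100-disk-0
    lvchange -an pve/vm-100-disk-0
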
> +        if $needs_fix;
> +
> +    return undef;
> +}
> +
>  sub check_misc {
>      print_header("MISCELLANEOUS CHECKS");
>      my $ssh_config = eval { PVE::Tools::file_get_contents('/root/.ssh/config') };
> @@ -1474,6 +1563,7 @@ sub check_misc {
>      check_nvidia_vgpu_service();
>      check_bootloader();
>      check_dkms_modules();
> +    check_lvm_autoactivation();
>  }
>  
>  my sub colored_if {
> @@ -1538,8 +1628,68 @@ __PACKAGE__->register_method ({
>      return undef;
>  }});
>  
> +__PACKAGE__->register_method ({
> +    name => 'updatelvm',
> +    path => 'updatelvm',
> +    method => 'POST',
> +    description => 'disable autoactivation for all LVM guest volumes',
> +    parameters => {
> +        additionalProperties => 0,
> +    },
> +    returns => { type => 'null' },
> +    code => sub {
> +        my ($param) = @_;
> +
> +        my $did_work = 0;
> +        eval {
> +            $foreach_lvm_vg_with_autoactivated_guest_volumes->(1, sub {
> +                my ($storeid, $vgname, $autoactivated_lvs) = @_;
> +
> +                if (scalar(@$autoactivated_lvs) == 0) {
> +                    log_pass("all guest volumes on storage '$storeid' have autoactivation disabled");
> +                    return;
> +                }
> +
> +                log_info("disabling autoactivation on storage '$storeid'...");
> +                die "unexpected empty VG name (storage '$storeid')\n" if !$vgname;
> +
> +                for my $lvname (@$autoactivated_lvs) {
> +                    log_info("disabling autoactivation for $vgname/$lvname"
> +                        ." on storage '$storeid'...");
> +                    my $cmd = [
> +                        '/sbin/lvchange',
> +                        '--setautoactivation', 'n',
> +                        "$vgname/$lvname",
> +                    ];
> +
> +                    eval { run_command($cmd); };
> +                    my $err = $@;
> +                    die "could not disable autoactivation on $vgname/$lvname: $err\n" if $err;
> +
> +                    $did_work = 1;
> +                }
> +            });
> +        };
> +        my $err = $@;
> +        if ($err) {
> +            log_fail("could not disable autoactivation on enabled and active LVM storages");
> +            die $err;
> +        }
> +
> +        if($did_work) {
> +            log_pass("successfully disabled autoactivation on guest volumes on all"
> +                ." enabled and active LVM storages");
> +        } else {
> +            log_pass("all guest volumes on enabled and active LVM storages"
> +                ." have autoactivation disabled");
> +        };
> +
> +        return undef;
> +    }});
> +
>  our $cmddef = {
>      checklist => [ __PACKAGE__, 'checklist', []],
> +    updatelvm => [ __PACKAGE__, 'updatelvm', []],
>  };
>  
>  1;

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel