From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH manager 1/2] 8 to 9: lvm config: check that --clear-needs-check-flag is set if there is a thin_check_options override
Date: Fri, 8 Aug 2025 16:03:23 +0200 [thread overview]
Message-ID: <20250808140419.119992-2-f.ebner@proxmox.com> (raw)
In-Reply-To: <20250808140419.119992-1-f.ebner@proxmox.com>
Quoting the commit message from [0] verbatim:
thin_check v1.0.x reveals data block ref count issue that is not being
detected by previous versions, which blocks the pool from activation if
there are any leaked blocks. To reduce potential user complaints on
inactive pools after upgrading and also maintain backward compatibility
between LVM and older thin_check, we decided to adopt the 'auto-repair'
functionality in the --clear-needs-check-flag option, rather than
passing --auto-repair from lvm.conf.
[0]: https://github.com/device-mapper-utils/thin-provisioning-tools/commit/eb28ab94
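For illustration, an lvm.conf override that would pass this check keeps the flag in place. The snippet below is an example (matching the upstream default options), not taken from the patch:

```
global {
    thin_check_options = [ "-q", "--clear-needs-check-flag" ]
}
```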
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
PVE/CLI/pve8to9.pm | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/PVE/CLI/pve8to9.pm b/PVE/CLI/pve8to9.pm
index dc3bf9ea..dcae429e 100644
--- a/PVE/CLI/pve8to9.pm
+++ b/PVE/CLI/pve8to9.pm
@@ -1796,6 +1796,32 @@ sub check_lvm_autoactivation {
return undef;
}
+sub check_lvm_thin_check_options {
+ log_info("Checking lvm config for thin_check_options...");
+
+ my $section;
+ my $detected;
+ my $detect_thin_check_override = sub {
+ my $line = shift;
+ if ($line =~ m/^(\S+) \{/) {
+ $section = $1;
+ return;
+ }
+ if ($line =~ m/thin_check_options/ && $line !~ m/--clear-needs-check-flag/) {
+ $detected = 1;
+ log_fail(
+ "detected override for 'thin_check_options' in '$section' section without"
+ . " '--clear-needs-check-flag' option - add the option to your override (most"
+ . " likely in /etc/lvm/lvm.conf)");
+ }
+ };
+ eval {
+ run_command(['lvmconfig'], outfunc => $detect_thin_check_override);
+ log_pass("Check for correct thin_check_options passed") if !$detected;
+ };
+ log_fail("unable to run 'lvmconfig' command - $@") if $@;
+}
+
sub check_glusterfs_storage_usage {
my $cfg = PVE::Storage::config();
my $storage_info = PVE::Storage::storage_info($cfg);
@@ -2229,6 +2255,7 @@ sub check_misc {
check_legacy_notification_sections();
check_legacy_backup_job_options();
check_lvm_autoactivation();
+ check_lvm_thin_check_options();
check_rrd_migration();
check_legacy_ipam_files();
check_legacy_sysctl_conf();
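For readers wanting to trace the detection logic outside of Perl, here is a minimal Python sketch of the same section-tracking scan over `lvmconfig` output. It assumes, as the Perl code does, that `lvmconfig` prints one `section {` header per block; it is an illustrative re-implementation, not the shipped code:

```python
import re

def detect_thin_check_override(lines):
    """Scan lvmconfig output lines for a thin_check_options override
    that lacks --clear-needs-check-flag. Returns the name of the
    offending section, or None if no problematic override is found."""
    section = None
    for line in lines:
        # Track which config section we are currently inside,
        # e.g. "global {" -> section = "global".
        m = re.match(r'(\S+) \{', line)
        if m:
            section = m.group(1)
            continue
        # An override without the flag is the failure case.
        if 'thin_check_options' in line and '--clear-needs-check-flag' not in line:
            return section
    return None

# Sample lvmconfig output with a problematic override:
sample = [
    'global {',
    '\tthin_check_options=["-q"]',
    '}',
]
print(detect_thin_check_override(sample))  # -> global
```

A config where the override still contains `--clear-needs-check-flag` (or where no override exists) makes the function return None, mirroring the `log_pass` path of the Perl check.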
--
2.47.2
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Thread overview: 6+ messages
2025-08-08 14:03 [pve-devel] [PATCH-SERIES manager 0/2] " Fiona Ebner
2025-08-08 14:03 ` Fiona Ebner [this message]
2025-08-08 14:03 ` [pve-devel] [PATCH manager 2/2] d/postinst: " Fiona Ebner
2025-08-08 14:19 ` [pve-devel] [PATCH-SERIES manager 0/2] " Fabian Grünbichler
2025-08-08 14:23 ` Fiona Ebner
2025-08-08 14:19 ` [pve-devel] applied: " Fabian Grünbichler