From: "Michael Köppl" <m.koeppl@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH installer v4 8/8] tui, gui: streamline error messages around disk and RAID checks
Date: Fri, 11 Jul 2025 18:27:11 +0200
Message-ID: <20250711162711.133248-10-m.koeppl@proxmox.com>
In-Reply-To: <20250711162711.133248-1-m.koeppl@proxmox.com>

Adapt the error messages around these checks to follow the same
structure as in the TUI (<filesystem>: <message>) and display them as
errors instead of warnings, since the failed checks stop users from
continuing anyway.

Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
---
This is mostly to have both the TUI and GUI display actual errors with
context instead of warnings, since users cannot continue anyway. It was
a bit irritating to have the TUI display an error with context while the
GUI just said "Warning". Additionally, the error messages now have a
similar format in both installers. The TUI uses ZFS (RAID0), BTRFS
(RAID0), etc., whereas the GUI uses zfs (RAID0), etc. I decided not to
change that for now since it would be out of scope for this series.
Unifying the informational and error messages throughout the installers
would make sense for a future series, though.
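
For illustration (taking the zfs (RAID1) check as an example), selecting
only a single disk previously made the GUI show roughly:

    Warning: zfs (RAID1) needs at least 2 devices
    Please fix ZFS setup first.

while with this patch both installers report the error directly, e.g.:

    zfs (RAID1): need at least 2 devices

The auto-installer follows the same pattern, e.g. "ZFS (RAID10): need at
least 4 disks" (see the adapted test expectations below).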

 Proxmox/Install.pm                               | 16 ++++++++--------
 proxinstall                                      |  6 +++---
 proxmox-auto-installer/src/utils.rs              |  4 ++--
 .../btrfs_raid_single_disk.json                  |  2 +-
 .../parse_answer_fail/duplicate_disk.json        |  2 +-
 .../parse_answer_fail/zfs_raid_single_disk.json  |  2 +-
 6 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index f852147..c6dd8c4 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -287,7 +287,7 @@ sub get_zfs_raid_setup {
     my $devlist = &$get_raid_devlist();
 
     my $diskcount = scalar(@$devlist);
-    die "$filesys needs at least one device\n" if $diskcount < 1;
+    die "$filesys: need at least one device\n" if $diskcount < 1;
 
     my $cmd = '';
     if ($filesys eq 'zfs (RAID0)') {
@@ -296,7 +296,7 @@ sub get_zfs_raid_setup {
             $cmd .= " @$hd[1]";
         }
     } elsif ($filesys eq 'zfs (RAID1)') {
-        die "zfs (RAID1) needs at least 2 devices\n" if $diskcount < 2;
+        die "$filesys: need at least 2 devices\n" if $diskcount < 2;
         $cmd .= ' mirror ';
         my $hd = @$devlist[0];
         my $expected_size = @$hd[2]; # all disks need approximately same size
@@ -306,8 +306,8 @@ sub get_zfs_raid_setup {
             $cmd .= " @$hd[1]";
         }
     } elsif ($filesys eq 'zfs (RAID10)') {
-        die "zfs (RAID10) needs at least 4 devices\n" if $diskcount < 4;
-        die "zfs (RAID10) needs an even number of devices\n" if $diskcount & 1;
+        die "$filesys: need at least 4 devices\n" if $diskcount < 4;
+        die "$filesys: need an even number of devices\n" if $diskcount & 1;
 
         for (my $i = 0; $i < $diskcount; $i += 2) {
             my $hd1 = @$devlist[$i];
@@ -321,7 +321,7 @@ sub get_zfs_raid_setup {
     } elsif ($filesys =~ m/^zfs \(RAIDZ-([123])\)$/) {
         my $level = $1;
         my $mindisks = 2 + $level;
-        die "zfs (RAIDZ-$level) needs at least $mindisks devices\n"
+        die "zfs (RAIDZ-$level): need at least $mindisks devices\n"
             if scalar(@$devlist) < $mindisks;
         my $hd = @$devlist[0];
         my $expected_size = @$hd[2]; # all disks need approximately same size
@@ -355,7 +355,7 @@ sub get_btrfs_raid_setup {
     my $devlist = &$get_raid_devlist();
 
     my $diskcount = scalar(@$devlist);
-    die "$filesys needs at least one device\n" if $diskcount < 1;
+    die "$filesys: need at least one device\n" if $diskcount < 1;
 
     foreach my $hd (@$devlist) {
         legacy_bios_4k_check(@$hd[4]);
@@ -369,10 +369,10 @@ sub get_btrfs_raid_setup {
         if ($filesys eq 'btrfs (RAID0)') {
             $mode = 'raid0';
         } elsif ($filesys eq 'btrfs (RAID1)') {
-            die "btrfs (RAID1) needs at least 2 devices\n" if $diskcount < 2;
+            die "$filesys: need at least 2 devices\n" if $diskcount < 2;
             $mode = 'raid1';
         } elsif ($filesys eq 'btrfs (RAID10)') {
-            die "btrfs (RAID10) needs at least 4 devices\n" if $diskcount < 4;
+            die "$filesys: need at least 4 devices\n" if $diskcount < 4;
             $mode = 'raid10';
         } else {
             die "unknown btrfs mode '$filesys'\n";
diff --git a/proxinstall b/proxinstall
index 84f1a91..5e7eb1b 100755
--- a/proxinstall
+++ b/proxinstall
@@ -1593,7 +1593,7 @@ sub create_hdsel_view {
                 apply_raid_disk_selection();
                 my ($devlist) = eval { Proxmox::Install::get_zfs_raid_setup() };
                 if (my $err = $@) {
-                    Proxmox::UI::message("Warning: $err\nPlease fix ZFS setup first.");
+                    Proxmox::UI::message("$err");
                     return;
                 }
                 $target_hds = [map { $_->[1] } @$devlist];
@@ -1601,7 +1601,7 @@ sub create_hdsel_view {
                 apply_raid_disk_selection();
                 my ($devlist) = eval { Proxmox::Install::get_btrfs_raid_setup() };
                 if (my $err = $@) {
-                    Proxmox::UI::message("Warning: $err\nPlease fix BTRFS setup first.");
+                    Proxmox::UI::message("$err");
                     return;
                 }
                 $target_hds = [map { $_->[1] } @$devlist];
@@ -1614,7 +1614,7 @@ sub create_hdsel_view {
                     Proxmox::Install::swapsize_check($hdsize);
                 };
                 if (my $err = $@) {
-                    Proxmox::UI::message("Warning: $err\n");
+                    Proxmox::UI::message("$filesys: $err");
                     return;
                 }
                 $target_hds = [$target_hd];
diff --git a/proxmox-auto-installer/src/utils.rs b/proxmox-auto-installer/src/utils.rs
index 62646f2..7d42f2c 100644
--- a/proxmox-auto-installer/src/utils.rs
+++ b/proxmox-auto-installer/src/utils.rs
@@ -395,7 +395,7 @@ pub fn verify_disks_settings(answer: &Answer) -> Result<()> {
         let min_disks = answer.disks.fs_type.get_min_disks();
         if selection.len() < min_disks {
             bail!(
-                "{} requires at least {} disks",
+                "{}: need at least {} disks",
                 answer.disks.fs_type,
                 min_disks
             );
@@ -404,7 +404,7 @@ pub fn verify_disks_settings(answer: &Answer) -> Result<()> {
         let mut disk_set = HashSet::new();
         for disk in selection {
             if !disk_set.insert(disk) {
-                bail!("List of disks contains duplicate disk {disk}");
+                bail!("List of disks contains duplicate device {disk}");
             }
         }
     }
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.json b/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.json
index 37f35fe..957cb49 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.json
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.json
@@ -1,3 +1,3 @@
 {
-  "error": "BTRFS (RAID1) requires at least 2 disks"
+  "error": "BTRFS (RAID1): need at least 2 disks"
 }
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/duplicate_disk.json b/proxmox-auto-installer/tests/resources/parse_answer_fail/duplicate_disk.json
index d01fabe..93f6b6c 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer_fail/duplicate_disk.json
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/duplicate_disk.json
@@ -1,3 +1,3 @@
 {
-  "error": "List of disks contains duplicate disk sda"
+  "error": "List of disks contains duplicate device sda"
 }
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.json b/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.json
index 9a8cc90..41c99d8 100644
--- a/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.json
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.json
@@ -1,3 +1,3 @@
 {
-  "error": "ZFS (RAID10) requires at least 4 disks"
+  "error": "ZFS (RAID10): need at least 4 disks"
 }
-- 
2.47.2


