* [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks
@ 2025-04-22 16:27 Michael Köppl
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations Michael Köppl
` (6 more replies)
0 siblings, 7 replies; 23+ messages in thread
From: Michael Köppl @ 2025-04-22 16:27 UTC (permalink / raw)
To: pve-devel
The goal of this series is to add additional sanity checks to the
auto-installer and the TUI and GUI installers. The following checks were
added:
* Btrfs / ZFS RAID: check if the required number of disks is available
* LVM: check if swapsize < hdsize (RFC)
* LVM: check if maxroot < hdsize/4 (RFC)
* Networking: check if IPv4 address is valid within subnet (RFC)
The disk-related checks aim to close [0], whereas the IPv4 check would
partially close [1].
[0] https://bugzilla.proxmox.com/show_bug.cgi?id=5887
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=5757
The patches marked as RFC work, but I think some discussion regarding
what we even want to check is warranted here. I added some more context
for a discussion to the individual patches.
pve-installer:
Michael Köppl (6):
auto: add early answer file sanity check for RAID configurations
common: use get_min_disks as single source of truth for RAID config
checks
close #5887: add sanity check for LVM swapsize and maxroot
run rustfmt
common: add more descriptive errors for invalid network configs
common: add checks for valid IPv4 address within subnet
Proxmox/Install.pm | 14 +++++
proxinstall | 5 +-
proxmox-auto-install-assistant/src/main.rs | 7 ++-
proxmox-auto-installer/src/utils.rs | 37 +++++++++--
proxmox-auto-installer/tests/parse-answer.rs | 8 ++-
.../btrfs_raid_single_disk.json | 3 +
.../btrfs_raid_single_disk.toml | 15 +++++
.../lvm_maxroot_greater_than_maximum.json | 3 +
.../lvm_maxroot_greater_than_maximum.toml | 16 +++++
.../lvm_swapsize_greater_than_hdsize.json | 3 +
.../lvm_swapsize_greater_than_hdsize.toml | 16 +++++
.../zfs_raid_single_disk.json | 3 +
.../zfs_raid_single_disk.toml | 15 +++++
proxmox-installer-common/src/disk_checks.rs | 38 ++++++-----
proxmox-installer-common/src/options.rs | 36 ++++++++++-
proxmox-installer-common/src/utils.rs | 63 ++++++++++++++++---
proxmox-tui-installer/src/views/bootdisk.rs | 14 +++--
17 files changed, 256 insertions(+), 40 deletions(-)
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.json
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.toml
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.json
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.toml
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.json
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.toml
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.json
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.toml
Summary over all repositories:
17 files changed, 256 insertions(+), 40 deletions(-)
--
Generated by git-murpp 0.8.0
* [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations
2025-04-22 16:27 [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Michael Köppl
@ 2025-04-22 16:27 ` Michael Köppl
2025-04-28 11:25 ` Christoph Heiss
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 2/6] common: use get_min_disks as single source of truth for RAID config checks Michael Köppl
` (5 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Michael Köppl @ 2025-04-22 16:27 UTC (permalink / raw)
To: pve-devel
The GUI and TUI installers already implement checks to ensure systems
have the minimum required number of disks available for the various RAID
configurations (min 2 disks for RAID1, min 4 disks for RAID10, etc).
This change adds an early check of the answer file to the
auto-installer, improving the user experience by avoiding failure during
the actual installation.
Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
---
proxmox-auto-install-assistant/src/main.rs | 7 ++--
proxmox-auto-installer/src/utils.rs | 18 ++++++++++-
proxmox-auto-installer/tests/parse-answer.rs | 4 ++-
.../btrfs_raid_single_disk.json | 3 ++
.../btrfs_raid_single_disk.toml | 15 +++++++++
.../zfs_raid_single_disk.json | 3 ++
.../zfs_raid_single_disk.toml | 15 +++++++++
proxmox-installer-common/src/options.rs | 32 +++++++++++++++++++
8 files changed, 92 insertions(+), 5 deletions(-)
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.json
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.toml
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.json
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.toml
diff --git a/proxmox-auto-install-assistant/src/main.rs b/proxmox-auto-install-assistant/src/main.rs
index b64623b..3cf3abc 100644
--- a/proxmox-auto-install-assistant/src/main.rs
+++ b/proxmox-auto-install-assistant/src/main.rs
@@ -14,9 +14,9 @@ use proxmox_auto_installer::{
answer::{Answer, FilterMatch},
sysinfo::SysInfo,
utils::{
- AutoInstSettings, FetchAnswerFrom, HttpOptions, get_matched_udev_indexes, get_nic_list,
- get_single_udev_index, verify_email_and_root_password_settings, verify_first_boot_settings,
- verify_locale_settings,
+ get_matched_udev_indexes, get_nic_list, get_single_udev_index, verify_disks_settings,
+ verify_email_and_root_password_settings, verify_first_boot_settings,
+ verify_locale_settings, AutoInstSettings, FetchAnswerFrom, HttpOptions,
},
};
use proxmox_installer_common::{FIRST_BOOT_EXEC_MAX_SIZE, FIRST_BOOT_EXEC_NAME};
@@ -597,6 +597,7 @@ fn parse_answer(path: impl AsRef<Path> + fmt::Debug) -> Result<Answer> {
match toml::from_str(&contents) {
Ok(answer) => {
verify_locale_settings(&answer, &serde_json::from_str(LOCALE_INFO)?)?;
+ verify_disks_settings(&answer)?;
verify_first_boot_settings(&answer)?;
verify_email_and_root_password_settings(&answer)?;
println!("The answer file was parsed successfully, no errors found!");
diff --git a/proxmox-auto-installer/src/utils.rs b/proxmox-auto-installer/src/utils.rs
index 365e01a..d6bc6e3 100644
--- a/proxmox-auto-installer/src/utils.rs
+++ b/proxmox-auto-installer/src/utils.rs
@@ -6,7 +6,8 @@ use std::{collections::BTreeMap, process::Command};
use crate::{
answer::{
- self, Answer, FirstBootHookSourceMode, FqdnConfig, FqdnExtendedConfig, FqdnSourceMode,
+ self, Answer, DiskSelection, FirstBootHookSourceMode, FqdnConfig, FqdnExtendedConfig,
+ FqdnSourceMode,
},
udevinfo::UdevInfo,
};
@@ -384,6 +385,20 @@ pub fn verify_email_and_root_password_settings(answer: &Answer) -> Result<()> {
}
}
+pub fn verify_disks_settings(answer: &Answer) -> Result<()> {
+ if let DiskSelection::Selection(selection) = &answer.disks.disk_selection {
+ let min_disks = answer.disks.fs_type.get_min_disks();
+ if selection.len() < min_disks {
+ bail!(
+ "{} requires at least {} disks",
+ answer.disks.fs_type,
+ min_disks
+ );
+ }
+ }
+ Ok(())
+}
+
pub fn verify_first_boot_settings(answer: &Answer) -> Result<()> {
info!("Verifying first boot settings");
@@ -414,6 +429,7 @@ pub fn parse_answer(
let network_settings = get_network_settings(answer, udev_info, runtime_info, setup_info)?;
verify_locale_settings(answer, locales)?;
+ verify_disks_settings(answer)?;
verify_email_and_root_password_settings(answer)?;
verify_first_boot_settings(answer)?;
diff --git a/proxmox-auto-installer/tests/parse-answer.rs b/proxmox-auto-installer/tests/parse-answer.rs
index 34bc969..92dba63 100644
--- a/proxmox-auto-installer/tests/parse-answer.rs
+++ b/proxmox-auto-installer/tests/parse-answer.rs
@@ -142,11 +142,13 @@ mod tests {
run_named_fail_parse_test,
// Keep below entries alphabetically sorted
both_password_and_hashed_set,
+ btrfs_raid_single_disk,
fqdn_from_dhcp_no_default_domain,
fqdn_hostname_only,
no_fqdn_from_dhcp,
no_root_password_set,
- short_password
+ short_password,
+ zfs_raid_single_disk
);
}
}
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.json b/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.json
new file mode 100644
index 0000000..37f35fe
--- /dev/null
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.json
@@ -0,0 +1,3 @@
+{
+ "error": "BTRFS (RAID1) requires at least 2 disks"
+}
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.toml b/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.toml
new file mode 100644
index 0000000..e96fb16
--- /dev/null
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/btrfs_raid_single_disk.toml
@@ -0,0 +1,15 @@
+[global]
+keyboard = "de"
+country = "at"
+fqdn = "btrfs-raid-single-disk.fail.testinstall"
+mailto = "mail@no.invalid"
+timezone = "Europe/Vienna"
+root-password = "12345678"
+
+[network]
+source = "from-dhcp"
+
+[disk-setup]
+filesystem = "btrfs"
+btrfs.raid = "raid1"
+disk-list = ["sda"]
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.json b/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.json
new file mode 100644
index 0000000..9a8cc90
--- /dev/null
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.json
@@ -0,0 +1,3 @@
+{
+ "error": "ZFS (RAID10) requires at least 4 disks"
+}
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.toml b/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.toml
new file mode 100644
index 0000000..94ae748
--- /dev/null
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/zfs_raid_single_disk.toml
@@ -0,0 +1,15 @@
+[global]
+keyboard = "de"
+country = "at"
+fqdn = "zfs-raid-single-disk.fail.testinstall"
+mailto = "mail@no.invalid"
+timezone = "Europe/Vienna"
+root-password = "12345678"
+
+[network]
+source = "from-dhcp"
+
+[disk-setup]
+filesystem = "zfs"
+zfs.raid = "raid10"
+disk-list = ["sda"]
diff --git a/proxmox-installer-common/src/options.rs b/proxmox-installer-common/src/options.rs
index 9cc4ee0..9271b8b 100644
--- a/proxmox-installer-common/src/options.rs
+++ b/proxmox-installer-common/src/options.rs
@@ -20,6 +20,16 @@ pub enum BtrfsRaidLevel {
Raid10,
}
+impl BtrfsRaidLevel {
+ pub fn get_min_disks(&self) -> usize {
+ match self {
+ BtrfsRaidLevel::Raid0 => 1,
+ BtrfsRaidLevel::Raid1 => 2,
+ BtrfsRaidLevel::Raid10 => 4,
+ }
+ }
+}
+
serde_plain::derive_display_from_serialize!(BtrfsRaidLevel);
#[derive(Copy, Clone, Debug, Deserialize, Serialize, Eq, PartialEq)]
@@ -48,6 +58,19 @@ pub enum ZfsRaidLevel {
RaidZ3,
}
+impl ZfsRaidLevel {
+ pub fn get_min_disks(&self) -> usize {
+ match self {
+ ZfsRaidLevel::Raid0 => 1,
+ ZfsRaidLevel::Raid1 => 2,
+ ZfsRaidLevel::Raid10 => 4,
+ ZfsRaidLevel::RaidZ => 3,
+ ZfsRaidLevel::RaidZ2 => 4,
+ ZfsRaidLevel::RaidZ3 => 5,
+ }
+ }
+}
+
serde_plain::derive_display_from_serialize!(ZfsRaidLevel);
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
@@ -67,6 +90,15 @@ impl FsType {
pub fn is_lvm(&self) -> bool {
matches!(self, FsType::Ext4 | FsType::Xfs)
}
+
+ pub fn get_min_disks(&self) -> usize {
+ match self {
+ FsType::Ext4 => 1,
+ FsType::Xfs => 1,
+ FsType::Zfs(level) => level.get_min_disks(),
+ FsType::Btrfs(level) => level.get_min_disks(),
+ }
+ }
}
impl fmt::Display for FsType {
--
2.39.5
* [pve-devel] [PATCH pve-installer 2/6] common: use get_min_disks as single source of truth for RAID config checks
2025-04-22 16:27 [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Michael Köppl
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations Michael Köppl
@ 2025-04-22 16:27 ` Michael Köppl
2025-04-28 11:48 ` Christoph Heiss
2025-04-22 16:27 ` [pve-devel] [RFC PATCH pve-installer 3/6] close #5887: add sanity check for LVM swapsize and maxroot Michael Köppl
` (4 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Michael Köppl @ 2025-04-22 16:27 UTC (permalink / raw)
To: pve-devel
To avoid having multiple places where the minimum number of disks
required for a certain RAID setup is defined, use get_min_disks in the
common installer RAID config checks as well, keeping the behavior the
same as for the answer file sanity checks.
Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
---
An argument could be made here that there should be a declarative single
source of truth, in the form of a JSON file or similar, that both the
Perl and Rust-based installers can use. I would love some input on this
as I think it could make sense, but it would require more extensive
changes.
We already use JSON files as sources for selectable values, etc. in
other parts of the installer, so we would not be doing anything new
here.
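For illustration, such a shared definition could look roughly like this
(file name and layout are purely hypothetical, not part of this series):

    {
        "min_disks": {
            "ext4": 1,
            "xfs": 1,
            "btrfs": { "raid0": 1, "raid1": 2, "raid10": 4 },
            "zfs": { "raid0": 1, "raid1": 2, "raid10": 4, "raidz": 3, "raidz2": 4, "raidz3": 5 }
        }
    }

Both the Perl code and proxmox-installer-common could then read the same
file instead of hard-coding the values in two places.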
proxmox-installer-common/src/disk_checks.rs | 18 ++++--------------
1 file changed, 4 insertions(+), 14 deletions(-)
diff --git a/proxmox-installer-common/src/disk_checks.rs b/proxmox-installer-common/src/disk_checks.rs
index ecc43bd..1d17e2c 100644
--- a/proxmox-installer-common/src/disk_checks.rs
+++ b/proxmox-installer-common/src/disk_checks.rs
@@ -69,17 +69,16 @@ pub fn check_zfs_raid_config(level: ZfsRaidLevel, disks: &[Disk]) -> Result<(),
}
};
+ check_raid_min_disks(disks, level.get_min_disks())?;
+
match level {
- ZfsRaidLevel::Raid0 => check_raid_min_disks(disks, 1)?,
+ ZfsRaidLevel::Raid0 => {}
ZfsRaidLevel::Raid1 => {
- check_raid_min_disks(disks, 2)?;
for disk in disks {
check_mirror_size(&disks[0], disk)?;
}
}
ZfsRaidLevel::Raid10 => {
- check_raid_min_disks(disks, 4)?;
-
if disks.len() % 2 != 0 {
return Err(format!(
"Needs an even number of disks, currently selected: {}",
@@ -94,19 +93,16 @@ pub fn check_zfs_raid_config(level: ZfsRaidLevel, disks: &[Disk]) -> Result<(),
}
// For RAID-Z: minimum disks number is level + 2
ZfsRaidLevel::RaidZ => {
- check_raid_min_disks(disks, 3)?;
for disk in disks {
check_mirror_size(&disks[0], disk)?;
}
}
ZfsRaidLevel::RaidZ2 => {
- check_raid_min_disks(disks, 4)?;
for disk in disks {
check_mirror_size(&disks[0], disk)?;
}
}
ZfsRaidLevel::RaidZ3 => {
- check_raid_min_disks(disks, 5)?;
for disk in disks {
check_mirror_size(&disks[0], disk)?;
}
@@ -125,13 +121,7 @@ pub fn check_zfs_raid_config(level: ZfsRaidLevel, disks: &[Disk]) -> Result<(),
/// * `disks` - List of disks designated as RAID targets.
pub fn check_btrfs_raid_config(level: BtrfsRaidLevel, disks: &[Disk]) -> Result<(), String> {
// See also Proxmox/Install.pm:get_btrfs_raid_setup()
-
- match level {
- BtrfsRaidLevel::Raid0 => check_raid_min_disks(disks, 1)?,
- BtrfsRaidLevel::Raid1 => check_raid_min_disks(disks, 2)?,
- BtrfsRaidLevel::Raid10 => check_raid_min_disks(disks, 4)?,
- }
-
+ check_raid_min_disks(disks, level.get_min_disks())?;
Ok(())
}
--
2.39.5
* [pve-devel] [RFC PATCH pve-installer 3/6] close #5887: add sanity check for LVM swapsize and maxroot
2025-04-22 16:27 [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Michael Köppl
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations Michael Köppl
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 2/6] common: use get_min_disks as single source of truth for RAID config checks Michael Köppl
@ 2025-04-22 16:27 ` Michael Köppl
2025-04-28 12:00 ` Christoph Heiss
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 4/6] run rustfmt Michael Köppl
` (3 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Michael Köppl @ 2025-04-22 16:27 UTC (permalink / raw)
To: pve-devel
Check that the configured swapsize is not greater than the total size
of the disk and that maxroot is at most hdsize / 4. Define the
behavior for the auto-installer as well as the TUI and GUI installers.
Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
---
The changes implemented in this patch regarding swap size only avoid
rather obvious misconfigurations. However, users could still configure a
32GB swap volume on a 32GB hard disk and the installer would fail later
on. Users could benefit from an additional check that ensures the swap
volume size does not go beyond a certain threshold. One option could be
to set the limit similar to the 8GB maximum used when the swap volume
size is calculated from the size of the memory. OTOH, this might also be
considered too restrictive. Would love some input on this.
Note regarding the change around set_hdsize in proxinstall: Before this
change, the hdsize was only set if the user manually changed it. If the
user did not change it, any checks against hdsize would check against
undefined.
Proxmox/Install.pm | 14 +++++++++++++
proxinstall | 5 ++++-
proxmox-auto-installer/src/utils.rs | 13 ++++++++++++
proxmox-auto-installer/tests/parse-answer.rs | 4 +++-
.../lvm_maxroot_greater_than_maximum.json | 3 +++
.../lvm_maxroot_greater_than_maximum.toml | 16 +++++++++++++++
.../lvm_swapsize_greater_than_hdsize.json | 3 +++
.../lvm_swapsize_greater_than_hdsize.toml | 16 +++++++++++++++
proxmox-installer-common/src/disk_checks.rs | 20 ++++++++++++++++++-
proxmox-tui-installer/src/views/bootdisk.rs | 4 +++-
10 files changed, 94 insertions(+), 4 deletions(-)
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.json
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.toml
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.json
create mode 100644 proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.toml
diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index f673604..d93a54b 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -592,6 +592,20 @@ sub compute_swapsize {
return $swapsize_kb;
}
+sub swapsize_check {
+ my ($hdsize) = @_;
+ my $swapsize = Proxmox::Install::Config::get_swapsize();
+ die "swapsize cannot be greater than hdsize\n"
+ if $swapsize > $hdsize;
+}
+
+sub maxroot_size_check {
+ my ($hdsize) = @_;
+ my $maxroot = Proxmox::Install::Config::get_maxroot();
+ die "maxroot cannot be greater than hdsize / 4\n"
+ if $maxroot > ($hdsize/4);
+}
+
my sub chroot_chown {
my ($root, $path, %param) = @_;
diff --git a/proxinstall b/proxinstall
index bc9ade6..e9ff6e8 100755
--- a/proxinstall
+++ b/proxinstall
@@ -1406,7 +1406,7 @@ sub create_hdoption_view {
my $tmp;
- if (($tmp = &$get_float($spinbutton_hdsize)) && ($tmp != $hdsize)) {
+ if (defined($tmp = &$get_float($spinbutton_hdsize))) {
Proxmox::Install::Config::set_hdsize($tmp);
} else {
Proxmox::Install::Config::set_hdsize(undef);
@@ -1521,9 +1521,12 @@ sub create_hdsel_view {
$target_hds = [ map { $_->[1] } @$devlist ];
} else {
my $target_hd = Proxmox::Install::Config::get_target_hd();
+ my $hdsize = Proxmox::Install::Config::get_hdsize();
eval {
my $target_block_size = Proxmox::Sys::Block::logical_blocksize($target_hd);
Proxmox::Install::legacy_bios_4k_check($target_block_size);
+ Proxmox::Install::swapsize_check($hdsize);
+ Proxmox::Install::maxroot_size_check($hdsize);
};
if (my $err = $@) {
Proxmox::UI::message("Warning: $err\n");
diff --git a/proxmox-auto-installer/src/utils.rs b/proxmox-auto-installer/src/utils.rs
index d6bc6e3..85a1f52 100644
--- a/proxmox-auto-installer/src/utils.rs
+++ b/proxmox-auto-installer/src/utils.rs
@@ -396,6 +396,19 @@ pub fn verify_disks_settings(answer: &Answer) -> Result<()> {
);
}
}
+
+ if let answer::FsOptions::LVM(lvm) = &answer.disks.fs_options {
+ if lvm.swapsize > lvm.hdsize {
+ bail!("LVM swapsize cannot be greater than hdsize");
+ }
+
+ if let Some((maxroot, hdsize)) = lvm.maxroot.zip(lvm.hdsize) {
+ if maxroot > hdsize / 4.0 {
+ bail!("LVM maxroot cannot be greater than hdsize / 4");
+ }
+ }
+ }
+
Ok(())
}
diff --git a/proxmox-auto-installer/tests/parse-answer.rs b/proxmox-auto-installer/tests/parse-answer.rs
index 92dba63..e615672 100644
--- a/proxmox-auto-installer/tests/parse-answer.rs
+++ b/proxmox-auto-installer/tests/parse-answer.rs
@@ -7,7 +7,7 @@ use proxmox_auto_installer::udevinfo::UdevInfo;
use proxmox_auto_installer::utils::parse_answer;
use proxmox_installer_common::setup::{
- LocaleInfo, RuntimeInfo, SetupInfo, load_installer_setup_files, read_json,
+ load_installer_setup_files, read_json, LocaleInfo, RuntimeInfo, SetupInfo,
};
fn get_test_resource_path() -> Result<PathBuf, String> {
@@ -145,6 +145,8 @@ mod tests {
btrfs_raid_single_disk,
fqdn_from_dhcp_no_default_domain,
fqdn_hostname_only,
+ lvm_maxroot_greater_than_maximum,
+ lvm_swapsize_greater_than_hdsize,
no_fqdn_from_dhcp,
no_root_password_set,
short_password,
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.json b/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.json
new file mode 100644
index 0000000..f05d6ee
--- /dev/null
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.json
@@ -0,0 +1,3 @@
+{
+ "error": "LVM maxroot cannot be greater than hdsize / 4"
+}
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.toml b/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.toml
new file mode 100644
index 0000000..e934d29
--- /dev/null
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_maxroot_greater_than_maximum.toml
@@ -0,0 +1,16 @@
+[global]
+keyboard = "de"
+country = "at"
+fqdn = "btrfs-raid-single-disk.fail.testinstall"
+mailto = "mail@no.invalid"
+timezone = "Europe/Vienna"
+root-password = "12345678"
+
+[network]
+source = "from-dhcp"
+
+[disk-setup]
+filesystem = "ext4"
+lvm.maxroot = 8.01
+lvm.hdsize = 32
+disk-list = ["sda"]
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.json b/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.json
new file mode 100644
index 0000000..562d834
--- /dev/null
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.json
@@ -0,0 +1,3 @@
+{
+ "error": "LVM swapsize cannot be greater than hdsize"
+}
diff --git a/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.toml b/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.toml
new file mode 100644
index 0000000..a29c36a
--- /dev/null
+++ b/proxmox-auto-installer/tests/resources/parse_answer_fail/lvm_swapsize_greater_than_hdsize.toml
@@ -0,0 +1,16 @@
+[global]
+keyboard = "de"
+country = "at"
+fqdn = "btrfs-raid-single-disk.fail.testinstall"
+mailto = "mail@no.invalid"
+timezone = "Europe/Vienna"
+root-password = "12345678"
+
+[network]
+source = "from-dhcp"
+
+[disk-setup]
+filesystem = "ext4"
+lvm.swapsize = 33
+lvm.hdsize = 32
+disk-list = ["sda"]
diff --git a/proxmox-installer-common/src/disk_checks.rs b/proxmox-installer-common/src/disk_checks.rs
index 1d17e2c..bb33baf 100644
--- a/proxmox-installer-common/src/disk_checks.rs
+++ b/proxmox-installer-common/src/disk_checks.rs
@@ -1,6 +1,6 @@
use std::collections::HashSet;
-use crate::options::{BtrfsRaidLevel, Disk, ZfsRaidLevel};
+use crate::options::{BtrfsRaidLevel, Disk, LvmBootdiskOptions, ZfsRaidLevel};
use crate::setup::BootType;
/// Checks a list of disks for duplicate entries, using their index as key.
@@ -49,6 +49,24 @@ pub fn check_disks_4kn_legacy_boot(boot_type: BootType, disks: &[Disk]) -> Resul
Ok(())
}
+/// Checks whether a user-supplied LVM setup is valid or not, such as the swapsize or maxroot not
+/// exceeding certain thresholds.
+///
+/// # Arguments
+///
+/// * `bootdisk_opts` - The LVM options set by the user.
+pub fn check_lvm_bootdisk_opts(bootdisk_opts: &LvmBootdiskOptions) -> Result<(), &str> {
+ if bootdisk_opts.swap_size > Some(bootdisk_opts.total_size) {
+ return Err("Swap size cannot be greater than total size");
+ }
+
+ if bootdisk_opts.max_root_size > Some(bootdisk_opts.total_size / 4.0) {
+ return Err("Max root size cannot be greater than (total size / 4)");
+ }
+
+ Ok(())
+}
+
/// Checks whether a user-supplied ZFS RAID setup is valid or not, such as disk sizes and minimum
/// number of disks.
///
diff --git a/proxmox-tui-installer/src/views/bootdisk.rs b/proxmox-tui-installer/src/views/bootdisk.rs
index 313a3c9..60d5316 100644
--- a/proxmox-tui-installer/src/views/bootdisk.rs
+++ b/proxmox-tui-installer/src/views/bootdisk.rs
@@ -19,7 +19,7 @@ use crate::options::FS_TYPES;
use proxmox_installer_common::{
disk_checks::{
check_btrfs_raid_config, check_disks_4kn_legacy_boot, check_for_duplicate_disks,
- check_zfs_raid_config,
+ check_lvm_bootdisk_opts, check_zfs_raid_config,
},
options::{
AdvancedBootdiskOptions, BTRFS_COMPRESS_OPTIONS, BootdiskOptions, BtrfsBootdiskOptions,
@@ -264,6 +264,8 @@ impl AdvancedBootdiskOptionsView {
.get_values()
.ok_or("Failed to retrieve advanced bootdisk options")?;
+ check_lvm_bootdisk_opts(&advanced).map_err(|err| format!("{fstype}: {err}"))?;
+
Ok(BootdiskOptions {
disks: vec![disk],
fstype,
--
2.39.5
* [pve-devel] [PATCH pve-installer 4/6] run rustfmt
2025-04-22 16:27 [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Michael Köppl
` (2 preceding siblings ...)
2025-04-22 16:27 ` [pve-devel] [RFC PATCH pve-installer 3/6] close #5887: add sanity check for LVM swapsize and maxroot Michael Köppl
@ 2025-04-22 16:27 ` Michael Köppl
2025-04-23 11:56 ` Christoph Heiss
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 5/6] common: add more descriptive errors for invalid network configs Michael Köppl
` (2 subsequent siblings)
6 siblings, 1 reply; 23+ messages in thread
From: Michael Köppl @ 2025-04-22 16:27 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
---
proxmox-auto-installer/src/utils.rs | 6 +++---
proxmox-installer-common/src/options.rs | 4 ++--
proxmox-tui-installer/src/views/bootdisk.rs | 10 +++++-----
3 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/proxmox-auto-installer/src/utils.rs b/proxmox-auto-installer/src/utils.rs
index 85a1f52..b1d8787 100644
--- a/proxmox-auto-installer/src/utils.rs
+++ b/proxmox-auto-installer/src/utils.rs
@@ -1,4 +1,4 @@
-use anyhow::{Context, Result, bail};
+use anyhow::{bail, Context, Result};
use clap::ValueEnum;
use glob::Pattern;
use log::info;
@@ -12,12 +12,12 @@ use crate::{
udevinfo::UdevInfo,
};
use proxmox_installer_common::{
- ROOT_PASSWORD_MIN_LENGTH,
- options::{FsType, NetworkOptions, ZfsChecksumOption, ZfsCompressOption, email_validate},
+ options::{email_validate, FsType, NetworkOptions, ZfsChecksumOption, ZfsCompressOption},
setup::{
InstallBtrfsOption, InstallConfig, InstallFirstBootSetup, InstallRootPassword,
InstallZfsOption, LocaleInfo, RuntimeInfo, SetupInfo,
},
+ ROOT_PASSWORD_MIN_LENGTH,
};
use serde::{Deserialize, Serialize};
diff --git a/proxmox-installer-common/src/options.rs b/proxmox-installer-common/src/options.rs
index 9271b8b..4b48130 100644
--- a/proxmox-installer-common/src/options.rs
+++ b/proxmox-installer-common/src/options.rs
@@ -1,4 +1,4 @@
-use anyhow::{Result, bail};
+use anyhow::{bail, Result};
use regex::Regex;
use serde::{Deserialize, Serialize};
use std::net::{IpAddr, Ipv4Addr};
@@ -552,7 +552,7 @@ mod tests {
state: InterfaceState::Up,
mac: "01:23:45:67:89:ab".to_owned(),
addresses: Some(vec![
- CidrAddress::new(Ipv4Addr::new(192, 168, 0, 2), 24).unwrap(),
+ CidrAddress::new(Ipv4Addr::new(192, 168, 0, 2), 24).unwrap()
]),
},
);
diff --git a/proxmox-tui-installer/src/views/bootdisk.rs b/proxmox-tui-installer/src/views/bootdisk.rs
index 60d5316..9d228cc 100644
--- a/proxmox-tui-installer/src/views/bootdisk.rs
+++ b/proxmox-tui-installer/src/views/bootdisk.rs
@@ -4,17 +4,17 @@ use std::{
};
use cursive::{
- Cursive, Vec2, View,
view::{Nameable, Resizable, ViewWrapper},
views::{
Button, Dialog, DummyView, LinearLayout, NamedView, PaddedView, Panel, ScrollView,
SelectView, TextView, ViewRef,
},
+ Cursive, Vec2, View,
};
use super::{DiskSizeEditView, FormView, IntegerEditView, TabbedView};
-use crate::InstallerState;
use crate::options::FS_TYPES;
+use crate::InstallerState;
use proxmox_installer_common::{
disk_checks::{
@@ -22,9 +22,9 @@ use proxmox_installer_common::{
check_lvm_bootdisk_opts, check_zfs_raid_config,
},
options::{
- AdvancedBootdiskOptions, BTRFS_COMPRESS_OPTIONS, BootdiskOptions, BtrfsBootdiskOptions,
- Disk, FsType, LvmBootdiskOptions, ZFS_CHECKSUM_OPTIONS, ZFS_COMPRESS_OPTIONS,
- ZfsBootdiskOptions,
+ AdvancedBootdiskOptions, BootdiskOptions, BtrfsBootdiskOptions, Disk, FsType,
+ LvmBootdiskOptions, ZfsBootdiskOptions, BTRFS_COMPRESS_OPTIONS, ZFS_CHECKSUM_OPTIONS,
+ ZFS_COMPRESS_OPTIONS,
},
setup::{BootType, ProductConfig, ProxmoxProduct, RuntimeInfo},
};
--
2.39.5
* [pve-devel] [PATCH pve-installer 5/6] common: add more descriptive errors for invalid network configs
2025-04-22 16:27 [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Michael Köppl
` (3 preceding siblings ...)
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 4/6] run rustfmt Michael Köppl
@ 2025-04-22 16:27 ` Michael Köppl
2025-04-28 12:20 ` Christoph Heiss
2025-04-22 16:27 ` [pve-devel] [RFC PATCH pve-installer 6/6] closes #5757: common: add checks for valid IPv4 address within subnet Michael Köppl
2025-04-28 12:25 ` [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Christoph Heiss
6 siblings, 1 reply; 23+ messages in thread
From: Michael Köppl @ 2025-04-22 16:27 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
---
proxmox-installer-common/src/utils.rs | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/proxmox-installer-common/src/utils.rs b/proxmox-installer-common/src/utils.rs
index 8adcec0..49f1c9f 100644
--- a/proxmox-installer-common/src/utils.rs
+++ b/proxmox-installer-common/src/utils.rs
@@ -18,6 +18,20 @@ pub enum CidrAddressParseError {
InvalidMask(Option<ParseIntError>),
}
+impl fmt::Display for CidrAddressParseError {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ let msg = match &self {
+ CidrAddressParseError::NoDelimiter => {
+ String::from("No delimiter for separating address and mask was found")
+ }
+ CidrAddressParseError::InvalidAddr(addr_parse_error) => format!("{addr_parse_error}"),
+ CidrAddressParseError::InvalidMask(parse_int_error) => format!("{:?}", parse_int_error),
+ };
+
+ write!(f, "Invalid CIDR: {msg}")
+ }
+}
+
/// An IP address (IPv4 or IPv6), including network mask.
///
/// See the [`IpAddr`] type for more information how IP addresses are handled.
@@ -110,7 +124,7 @@ impl<'de> Deserialize<'de> for CidrAddress {
{
let s: String = Deserialize::deserialize(deserializer)?;
s.parse()
- .map_err(|_| serde::de::Error::custom("invalid CIDR"))
+ .map_err(|err| serde::de::Error::custom(format!("{err}")))
}
}
--
2.39.5
* [pve-devel] [RFC PATCH pve-installer 6/6] closes #5757: common: add checks for valid IPv4 address within subnet
2025-04-22 16:27 [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Michael Köppl
` (4 preceding siblings ...)
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 5/6] common: add more descriptive errors for invalid network configs Michael Köppl
@ 2025-04-22 16:27 ` Michael Köppl
2025-04-28 10:22 ` Christoph Heiss
2025-04-28 12:25 ` [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Christoph Heiss
6 siblings, 1 reply; 23+ messages in thread
From: Michael Köppl @ 2025-04-22 16:27 UTC (permalink / raw)
To: pve-devel
Implement a check that the address entered by the user is valid within
the given subnet, i.e. that it is neither the network address nor the
broadcast address.
Partially closes [0].
[0] https://bugzilla.proxmox.com/show_bug.cgi?id=5757
Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
---
Some input / discussion would be much appreciated here, since this might
again be considered too restrictive. Multiple questions came up during
in-person discussion:
* Is the check for the broadcast address desired, or is using it
considered a valid configuration for PVE?
* Is an IPv6 check necessary, and if so, is setting the address to a
broadcast address a valid setting for IPv6?
proxmox-installer-common/src/utils.rs | 47 +++++++++++++++++++++++----
1 file changed, 41 insertions(+), 6 deletions(-)
diff --git a/proxmox-installer-common/src/utils.rs b/proxmox-installer-common/src/utils.rs
index 49f1c9f..3d79749 100644
--- a/proxmox-installer-common/src/utils.rs
+++ b/proxmox-installer-common/src/utils.rs
@@ -16,6 +16,10 @@ pub enum CidrAddressParseError {
InvalidAddr(AddrParseError),
/// The mask could not be parsed.
InvalidMask(Option<ParseIntError>),
+ /// The IP address is a network address.
+ IsNetworkAddr,
+ /// The IP address is a broadcast address.
+ IsBroadcastAddr,
}
impl fmt::Display for CidrAddressParseError {
@@ -26,6 +30,10 @@ impl fmt::Display for CidrAddressParseError {
}
CidrAddressParseError::InvalidAddr(addr_parse_error) => format!("{addr_parse_error}"),
CidrAddressParseError::InvalidMask(parse_int_error) => format!("{:?}", parse_int_error),
+ CidrAddressParseError::IsNetworkAddr => String::from("Address is a network address"),
+ CidrAddressParseError::IsBroadcastAddr => {
+ String::from("Address is a broadcast address")
+ }
};
write!(f, "Invalid CIDR: {msg}")
@@ -62,10 +70,12 @@ impl CidrAddress {
let addr = addr.into();
if mask > mask_limit(&addr) {
- Err(CidrAddressParseError::InvalidMask(None))
- } else {
- Ok(Self { addr, mask })
+ return Err(CidrAddressParseError::InvalidMask(None));
}
+
+ check_addr_valid_in_subnet(&addr, mask)?;
+
+ Ok(Self { addr, mask })
}
/// Returns only the IP address part of the address.
@@ -104,10 +114,12 @@ impl FromStr for CidrAddress {
.map_err(|err| CidrAddressParseError::InvalidMask(Some(err)))?;
if mask > mask_limit(&addr) {
- Err(CidrAddressParseError::InvalidMask(None))
- } else {
- Ok(Self { addr, mask })
+ return Err(CidrAddressParseError::InvalidMask(None));
}
+
+ check_addr_valid_in_subnet(&addr, mask)?;
+
+ Ok(Self { addr, mask })
}
}
@@ -134,6 +146,29 @@ fn mask_limit(addr: &IpAddr) -> usize {
if addr.is_ipv4() { 32 } else { 128 }
}
+fn check_addr_valid_in_subnet(addr: &IpAddr, mask: usize) -> Result<(), CidrAddressParseError> {
+ match &addr {
+ IpAddr::V4(addr_v4) => {
+ let num_host_bits = 32 - mask;
+ let host_part_mask = (1u32 << num_host_bits) - 1;
+
+ let ip_bits = u32::from_be_bytes(addr_v4.octets());
+
+ let network_addr = ip_bits & !host_part_mask;
+ let broadcast_addr = network_addr | host_part_mask;
+
+ if ip_bits == network_addr {
+ Err(CidrAddressParseError::IsNetworkAddr)
+ } else if ip_bits == broadcast_addr {
+ Err(CidrAddressParseError::IsBroadcastAddr)
+ } else {
+ Ok(())
+ }
+ }
+ IpAddr::V6(_) => Ok(()),
+ }
+}
+
/// Possible errors that might occur when parsing FQDNs.
#[derive(Debug, Eq, PartialEq)]
pub enum FqdnParseError {
--
2.39.5
* Re: [pve-devel] [PATCH pve-installer 4/6] run rustfmt
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 4/6] run rustfmt Michael Köppl
@ 2025-04-23 11:56 ` Christoph Heiss
2025-04-25 12:22 ` Michael Köppl
0 siblings, 1 reply; 23+ messages in thread
From: Christoph Heiss @ 2025-04-23 11:56 UTC (permalink / raw)
To: Michael Köppl; +Cc: Proxmox VE development discussion
Have you checked out the latest commit (952832147)?
These are apparently the "old" 2021 edition rules which are being
applied here - latest master has been on the 2024 edition for some time
now [0]. The changes here are the exact reverse of some of the changes
done in the follow-up tree-wide formatting commit [1].
(Also, running `cargo fmt` on latest master does not produce any
changes, FWIW. So this should not be necessary - that's what caught me.)
[0] https://git.proxmox.com/?p=pve-installer.git;a=commitdiff;h=c305be5e
[1] https://git.proxmox.com/?p=pve-installer.git;a=commitdiff;h=a1575df4
On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
> Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
> ---
> proxmox-auto-installer/src/utils.rs | 6 +++---
> proxmox-installer-common/src/options.rs | 4 ++--
> proxmox-tui-installer/src/views/bootdisk.rs | 10 +++++-----
> 3 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/proxmox-auto-installer/src/utils.rs b/proxmox-auto-installer/src/utils.rs
> index 85a1f52..b1d8787 100644
> --- a/proxmox-auto-installer/src/utils.rs
> +++ b/proxmox-auto-installer/src/utils.rs
> @@ -1,4 +1,4 @@
> -use anyhow::{Context, Result, bail};
> +use anyhow::{bail, Context, Result};
> use clap::ValueEnum;
> use glob::Pattern;
> use log::info;
> @@ -12,12 +12,12 @@ use crate::{
> udevinfo::UdevInfo,
> };
> use proxmox_installer_common::{
> - ROOT_PASSWORD_MIN_LENGTH,
> - options::{FsType, NetworkOptions, ZfsChecksumOption, ZfsCompressOption, email_validate},
> + options::{email_validate, FsType, NetworkOptions, ZfsChecksumOption, ZfsCompressOption},
> setup::{
> InstallBtrfsOption, InstallConfig, InstallFirstBootSetup, InstallRootPassword,
> InstallZfsOption, LocaleInfo, RuntimeInfo, SetupInfo,
> },
> + ROOT_PASSWORD_MIN_LENGTH,
> };
> use serde::{Deserialize, Serialize};
>
> diff --git a/proxmox-installer-common/src/options.rs b/proxmox-installer-common/src/options.rs
> index 9271b8b..4b48130 100644
> --- a/proxmox-installer-common/src/options.rs
> +++ b/proxmox-installer-common/src/options.rs
> @@ -1,4 +1,4 @@
> -use anyhow::{Result, bail};
> +use anyhow::{bail, Result};
> use regex::Regex;
> use serde::{Deserialize, Serialize};
> use std::net::{IpAddr, Ipv4Addr};
> @@ -552,7 +552,7 @@ mod tests {
> state: InterfaceState::Up,
> mac: "01:23:45:67:89:ab".to_owned(),
> addresses: Some(vec![
> - CidrAddress::new(Ipv4Addr::new(192, 168, 0, 2), 24).unwrap(),
> + CidrAddress::new(Ipv4Addr::new(192, 168, 0, 2), 24).unwrap()
> ]),
> },
> );
> diff --git a/proxmox-tui-installer/src/views/bootdisk.rs b/proxmox-tui-installer/src/views/bootdisk.rs
> index 60d5316..9d228cc 100644
> --- a/proxmox-tui-installer/src/views/bootdisk.rs
> +++ b/proxmox-tui-installer/src/views/bootdisk.rs
> @@ -4,17 +4,17 @@ use std::{
> };
>
> use cursive::{
> - Cursive, Vec2, View,
> view::{Nameable, Resizable, ViewWrapper},
> views::{
> Button, Dialog, DummyView, LinearLayout, NamedView, PaddedView, Panel, ScrollView,
> SelectView, TextView, ViewRef,
> },
> + Cursive, Vec2, View,
> };
>
> use super::{DiskSizeEditView, FormView, IntegerEditView, TabbedView};
> -use crate::InstallerState;
> use crate::options::FS_TYPES;
> +use crate::InstallerState;
>
> use proxmox_installer_common::{
> disk_checks::{
> @@ -22,9 +22,9 @@ use proxmox_installer_common::{
> check_lvm_bootdisk_opts, check_zfs_raid_config,
> },
> options::{
> - AdvancedBootdiskOptions, BTRFS_COMPRESS_OPTIONS, BootdiskOptions, BtrfsBootdiskOptions,
> - Disk, FsType, LvmBootdiskOptions, ZFS_CHECKSUM_OPTIONS, ZFS_COMPRESS_OPTIONS,
> - ZfsBootdiskOptions,
> + AdvancedBootdiskOptions, BootdiskOptions, BtrfsBootdiskOptions, Disk, FsType,
> + LvmBootdiskOptions, ZfsBootdiskOptions, BTRFS_COMPRESS_OPTIONS, ZFS_CHECKSUM_OPTIONS,
> + ZFS_COMPRESS_OPTIONS,
> },
> setup::{BootType, ProductConfig, ProxmoxProduct, RuntimeInfo},
> };
* Re: [pve-devel] [PATCH pve-installer 4/6] run rustfmt
2025-04-23 11:56 ` Christoph Heiss
@ 2025-04-25 12:22 ` Michael Köppl
0 siblings, 0 replies; 23+ messages in thread
From: Michael Köppl @ 2025-04-25 12:22 UTC (permalink / raw)
To: Christoph Heiss; +Cc: Proxmox VE development discussion
On 4/23/25 13:56, Christoph Heiss wrote:
> Have you checked out the latest commit (952832147)?
>
> These are apparently the "old" 2021 edition rules which are being
> applied here - latest master is on the 2024 edition for some time now
> [0]. The changes here are the exact reverse of some of the changes done
> in the follow-up tree-wide formatting commit [1].
>
> (Also, running `cargo fmt` on latest master does not produce any
> changes, FWIW. So this should not be necessary - that's what caught me.)
>
> [0] https://git.proxmox.com/?p=pve-installer.git;a=commitdiff;h=c305be5e
> [1] https://git.proxmox.com/?p=pve-installer.git;a=commitdiff;h=a1575df4
>
Hi, thanks for having a look and for your feedback! It seems this stems
from a configuration issue with my Neovim. For some reason the rustfmt
used by my editor plugin did not pick up the edition = "2024" in
rustfmt.toml. FWIW, it did work when I instead set style_edition =
"2024", which, according to the Rust documentation [0], allows
explicitly setting the style edition used for formatting. I suppose it's more of a
configuration error on my side, though. I checked again to ensure the
other patches are not affected. I suppose this one could just be
omitted? Or should I send a new series?
[0]
https://doc.rust-lang.org/edition-guide/rust-2024/rustfmt-style-edition.html
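For anyone running into the same thing, the relevant rustfmt.toml lines
would be (sketch only; the repository already sets the edition, the
style_edition line is what fixed it for my editor setup):

    # rustfmt.toml (sketch)
    edition = "2024"
    # pin the formatting style explicitly as well, for editor integrations
    # that otherwise fall back to an older style edition
    style_edition = "2024"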
* Re: [pve-devel] [RFC PATCH pve-installer 6/6] closes #5757: common: add checks for valid IPv4 address within subnet
2025-04-22 16:27 ` [pve-devel] [RFC PATCH pve-installer 6/6] closes #5757: common: add checks for valid IPv4 address within subnet Michael Köppl
@ 2025-04-28 10:22 ` Christoph Heiss
2025-04-28 14:20 ` Michael Köppl
0 siblings, 1 reply; 23+ messages in thread
From: Christoph Heiss @ 2025-04-28 10:22 UTC (permalink / raw)
To: Michael Köppl; +Cc: Proxmox VE development discussion
On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
> Implement check if the address entered by the user is valid within the
> given subnet, i.e. not a network address or broadcast address.
>
> Partially closes [0].
>
> [0] https://bugzilla.proxmox.com/show_bug.cgi?id=5757
>
> Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
> ---
> Some input / discussion would be much appreciated here, since this might
> again be considered too restrictive. Multiple questions came up during
> in-person discussion:
> * Is check for broadcast address desired or is it considered a valid
> configuration for PVE?
At least for IPv4 /31 prefixes, this check (including the network
address part) is wrong. RFC 3021 [0] explicitly allows such subnets for
point-to-point links.
E.g. 192.168.0.0/31 is a valid subnet with 2 hosts, 192.168.0.0/31 and
192.168.0.1/31.
[0] https://www.rfc-editor.org/rfc/rfc3021
> * Is IPv6 check necessary and if so, is allowing to set the address to a
> broadcast address a valid setting for IPv6?
There's no traditional broadcast in IPv6, so no check necessary.
(FWIW, multicast takes that role basically)
* Re: [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations Michael Köppl
@ 2025-04-28 11:25 ` Christoph Heiss
2025-04-28 14:31 ` Michael Köppl
0 siblings, 1 reply; 23+ messages in thread
From: Christoph Heiss @ 2025-04-28 11:25 UTC (permalink / raw)
To: Michael Köppl; +Cc: Proxmox VE development discussion
Thanks for tackling this!
On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
> [..]
> +pub fn verify_disks_settings(answer: &Answer) -> Result<()> {
> + if let DiskSelection::Selection(selection) = &answer.disks.disk_selection {
> + let min_disks = answer.disks.fs_type.get_min_disks();
> + if selection.len() < min_disks {
> + bail!(
> + "{} requires at least {} disks",
> + answer.disks.fs_type,
> + min_disks
> + );
> + }
> + }
Perhaps another, pretty simple but useful check here would be whether
all disks are unique, i.e. that there are no duplicates in the list?
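Something along these lines, perhaps (just a sketch; helper name and
error wording are made up):

    use std::collections::HashSet;

    use anyhow::{bail, Result};

    // Reject answer files that list the same disk more than once.
    fn verify_disks_unique(selection: &[String]) -> Result<()> {
        let mut seen = HashSet::new();
        for disk in selection {
            if !seen.insert(disk.as_str()) {
                bail!("disk '{disk}' is listed more than once in the disk selection");
            }
        }
        Ok(())
    }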
> [..]
> diff --git a/proxmox-installer-common/src/options.rs b/proxmox-installer-common/src/options.rs
> index 9cc4ee0..9271b8b 100644
> --- a/proxmox-installer-common/src/options.rs
> +++ b/proxmox-installer-common/src/options.rs
> @@ -48,6 +58,19 @@ pub enum ZfsRaidLevel {
> RaidZ3,
> }
>
> +impl ZfsRaidLevel {
> + pub fn get_min_disks(&self) -> usize {
> + match self {
> + ZfsRaidLevel::Raid0 => 1,
> + ZfsRaidLevel::Raid1 => 2,
> + ZfsRaidLevel::Raid10 => 4,
> + ZfsRaidLevel::RaidZ => 3,
> + ZfsRaidLevel::RaidZ2 => 4,
> + ZfsRaidLevel::RaidZ3 => 5,
ZFS actually lets one create RAIDZ{1,2,3} pools with 2, 3 and 4 disks,
respectively. While maybe not really _that_ practical for real-world
use cases (starting with the overhead), do we still want to allow it?
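If we did want to allow that, the mapping would simply become (sketch of
the relaxed variant, i.e. parity + 1 disks, not what the patch currently
does):

    impl ZfsRaidLevel {
        pub fn get_min_disks(&self) -> usize {
            match self {
                ZfsRaidLevel::Raid0 => 1,
                ZfsRaidLevel::Raid1 => 2,
                ZfsRaidLevel::Raid10 => 4,
                // the absolute minimums that ZFS itself accepts
                ZfsRaidLevel::RaidZ => 2,
                ZfsRaidLevel::RaidZ2 => 3,
                ZfsRaidLevel::RaidZ3 => 4,
            }
        }
    }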
> + }
> + }
> +}
> +
* Re: [pve-devel] [PATCH pve-installer 2/6] common: use get_min_disks as single source of truth for RAID config checks
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 2/6] common: use get_min_disks as single source of truth for RAID config checks Michael Köppl
@ 2025-04-28 11:48 ` Christoph Heiss
2025-04-28 15:36 ` Michael Köppl
0 siblings, 1 reply; 23+ messages in thread
From: Christoph Heiss @ 2025-04-28 11:48 UTC (permalink / raw)
To: Michael Köppl; +Cc: Proxmox VE development discussion
On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
> To avoid having multiple places where the minimum number of disks
> required for a certain RAID setup is defined, use get_min_disks in the
> common installer RAID config checks as well, keeping the behavior the
> same as for the answer file sanity checks.
>
> Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
> ---
> An argument could be made here that there should be a declarative single
> source of truth in the form of a JSON or similar that both Perl and
> Rust-based installers can use. I would love some input on this as I
> think it could make sense, but would require more extensive changes.
> We already use JSON files as sources for selectable values, etc. in
> other parts of the installer, so we would not be doing anything new
> here.
Thinking about it again (after our short talk off-list), I think putting
it into a separate (JSON) file might be more work than it's really
worth. After all, there are only two places, these values won't change
and are not all that magic. So keeping separate definitions between the
Perl and Rust code is fine IMHO.
>
> proxmox-installer-common/src/disk_checks.rs | 18 ++++--------------
> 1 file changed, 4 insertions(+), 14 deletions(-)
>
> diff --git a/proxmox-installer-common/src/disk_checks.rs b/proxmox-installer-common/src/disk_checks.rs
> index ecc43bd..1d17e2c 100644
> --- a/proxmox-installer-common/src/disk_checks.rs
> +++ b/proxmox-installer-common/src/disk_checks.rs
> @@ -69,17 +69,16 @@ pub fn check_zfs_raid_config(level: ZfsRaidLevel, disks: &[Disk]) -> Result<(),
While looking at all the disk checks, most of them could be made
methods of the respective enum, so that one could do something like
`ZfsRaidLevel::Raid1.check_disks(disks)`.
What do you think? It would be a nice cleanup (doing it via a separate
series would also be okay, of course).
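Roughly something like this (just a sketch; the method name and where
exactly the level-specific checks end up are open for discussion):

    impl ZfsRaidLevel {
        /// Runs all disk checks applicable to this RAID level.
        pub fn check_disks(&self, disks: &[Disk]) -> Result<(), String> {
            check_raid_min_disks(disks, self.get_min_disks())?;
            // ... plus the level-specific checks (mirror sizes, even disk
            // count for RAID10, ...) currently living in check_zfs_raid_config()
            Ok(())
        }
    }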
* Re: [pve-devel] [RFC PATCH pve-installer 3/6] close #5887: add sanity check for LVM swapsize and maxroot
2025-04-22 16:27 ` [pve-devel] [RFC PATCH pve-installer 3/6] close #5887: add sanity check for LVM swapsize and maxroot Michael Köppl
@ 2025-04-28 12:00 ` Christoph Heiss
2025-04-29 11:30 ` Michael Köppl
0 siblings, 1 reply; 23+ messages in thread
From: Christoph Heiss @ 2025-04-28 12:00 UTC (permalink / raw)
To: Michael Köppl; +Cc: Proxmox VE development discussion
On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
> Check that the configured swapsize is not greater than the total size
> of the disk and that maxroot is at most hdsize / 4. Define the
> behavior for the auto-installer as well as the TUI and GUI installers.
>
> Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
> ---
> The changes implemented in this patch regarding swap size only avoid
> rather obvious misconfigurations. However, users could still configure a
> 32GB swap volume on a 32GB hard disk and the installer would fail later
> on. Users could benefit another check that ensures the swap volume size
> does not go beyond a certain threshold. One option could be to set the
> limit similar to the 8GB maximum when the swap volume size is calculated
> from the size of the memory. OTOH, this might also be
> considered too restrictive. Would love some input on this.
I'd limit the swapsize to hdsize/8, as we state in the admin guide [0].
Although I'm not sure if we actually check this somewhere currently.
[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#advanced_lvm_options
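E.g. something along these lines could be added to
check_lvm_bootdisk_opts() from patch 3 (sketch only; the /8 factor is
taken from the admin guide, not from the current patch):

    // Additionally cap swap at an eighth of the total disk size.
    if bootdisk_opts.swap_size > Some(bootdisk_opts.total_size / 8.0) {
        return Err("Swap size cannot be greater than hdsize / 8");
    }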
> [..]
> diff --git a/proxmox-auto-installer/src/utils.rs b/proxmox-auto-installer/src/utils.rs
> index d6bc6e3..85a1f52 100644
> --- a/proxmox-auto-installer/src/utils.rs
> +++ b/proxmox-auto-installer/src/utils.rs
> @@ -396,6 +396,19 @@ pub fn verify_disks_settings(answer: &Answer) -> Result<()> {
> );
> }
> }
> +
> + if let answer::FsOptions::LVM(lvm) = &answer.disks.fs_options {
> + if lvm.swapsize > lvm.hdsize {
> + bail!("LVM swapsize cannot be greater than hdsize");
> + }
> +
> + if let Some((maxroot, hdsize)) = lvm.maxroot.zip(lvm.hdsize) {
> + if maxroot > hdsize / 4.0 {
> + bail!("LVM maxroot cannot be greater than hdsize / 4");
> + }
> + }
> + }
If feasible, could these checks be de-duplicated with the ones below,
since they check the exact same things?
> +
> Ok(())
> }
>
> diff --git a/proxmox-installer-common/src/disk_checks.rs b/proxmox-installer-common/src/disk_checks.rs
> index 1d17e2c..bb33baf 100644
> --- a/proxmox-installer-common/src/disk_checks.rs
> +++ b/proxmox-installer-common/src/disk_checks.rs
> [..]
> @@ -49,6 +49,24 @@ pub fn check_disks_4kn_legacy_boot(boot_type: BootType, disks: &[Disk]) -> Result
> Ok(())
> }
>
> +/// Checks whether a user-supplied LVM setup is valid or not, such as the swapsize or maxroot not
> +/// exceeding certain thresholds.
> +///
> +/// # Arguments
> +///
> +/// * `bootdisk_opts` - The LVM options set by the user.
> +pub fn check_lvm_bootdisk_opts(bootdisk_opts: &LvmBootdiskOptions) -> Result<(), &str> {
> + if bootdisk_opts.swap_size > Some(bootdisk_opts.total_size) {
> + return Err("Swap size cannot be greater than total size");
I'd include the actual values of the swap size and total size here, for
better feedback to the user, e.g.
"Swap (32 GiB) cannot be bigger than total harddisk size (16 GiB)"
> + }
> +
> + if bootdisk_opts.max_root_size > Some(bootdisk_opts.total_size / 4.0) {
> + return Err("Max root size cannot be greater than (total size / 4)");
Similarly here, e.g.
"Maximum root volume size (16 GiB) cannot be bigger than (hard disk size / 4 = 8 GiB)"
> + }
> +
> + Ok(())
> +}
> +
> diff --git a/proxmox-tui-installer/src/views/bootdisk.rs b/proxmox-tui-installer/src/views/bootdisk.rs
> index 313a3c9..60d5316 100644
> --- a/proxmox-tui-installer/src/views/bootdisk.rs
> +++ b/proxmox-tui-installer/src/views/bootdisk.rs
> [..]
> @@ -264,6 +264,8 @@ impl AdvancedBootdiskOptionsView {
> .get_values()
> .ok_or("Failed to retrieve advanced bootdisk options")?;
>
> + check_lvm_bootdisk_opts(&advanced).map_err(|err| format!("{fstype}: {err}"))?;
Can be:
check_lvm_bootdisk_opts(&advanced).with_context(|| fstype.to_string())
* Re: [pve-devel] [PATCH pve-installer 5/6] common: add more descriptive errors for invalid network configs
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 5/6] common: add more descriptive errors for invalid network configs Michael Köppl
@ 2025-04-28 12:20 ` Christoph Heiss
0 siblings, 0 replies; 23+ messages in thread
From: Christoph Heiss @ 2025-04-28 12:20 UTC (permalink / raw)
To: Michael Köppl; +Cc: Proxmox VE development discussion
On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
> Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
> ---
> proxmox-installer-common/src/utils.rs | 16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/proxmox-installer-common/src/utils.rs b/proxmox-installer-common/src/utils.rs
> index 8adcec0..49f1c9f 100644
> --- a/proxmox-installer-common/src/utils.rs
> +++ b/proxmox-installer-common/src/utils.rs
> @@ -18,6 +18,20 @@ pub enum CidrAddressParseError {
> InvalidMask(Option<ParseIntError>),
> }
>
> +impl fmt::Display for CidrAddressParseError {
> + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
> + let msg = match &self {
> + CidrAddressParseError::NoDelimiter => {
> + String::from("No delimiter for separating address and mask was found")
> + }
> + CidrAddressParseError::InvalidAddr(addr_parse_error) => format!("{addr_parse_error}"),
> + CidrAddressParseError::InvalidMask(parse_int_error) => format!("{:?}", parse_int_error),
> + };
> +
> + write!(f, "Invalid CIDR: {msg}")
> + }
This can be rewritten slightly to avoid unneeded string allocation and
formatting:
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "Invalid CIDR: ")?;
match self {
CidrAddressParseError::NoDelimiter => {
write!(f, "No delimiter for separating address and mask was found")
}
CidrAddressParseError::InvalidAddr(err) => write!(f, "{err}"),
CidrAddressParseError::InvalidMask(err) => {
if let Some(e) = err {
write!(f, "{e:?}")
} else {
write!(f, "Invalid mask")
}
}
}
}
> +}
> +
> /// An IP address (IPv4 or IPv6), including network mask.
> ///
> /// See the [`IpAddr`] type for more information how IP addresses are handled.
> @@ -110,7 +124,7 @@ impl<'de> Deserialize<'de> for CidrAddress {
> {
> let s: String = Deserialize::deserialize(deserializer)?;
> s.parse()
> - .map_err(|_| serde::de::Error::custom("invalid CIDR"))
> + .map_err(|err| serde::de::Error::custom(format!("{err}")))
nit: Could also just use
s.parse().map_err(serde::de::Error::custom)
here now.
> }
> }
>
* Re: [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks
2025-04-22 16:27 [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Michael Köppl
` (5 preceding siblings ...)
2025-04-22 16:27 ` [pve-devel] [RFC PATCH pve-installer 6/6] closes #5757: common: add checks for valid IPv4 address within subnet Michael Köppl
@ 2025-04-28 12:25 ` Christoph Heiss
2025-04-29 14:14 ` Michael Köppl
6 siblings, 1 reply; 23+ messages in thread
From: Christoph Heiss @ 2025-04-28 12:25 UTC (permalink / raw)
To: Michael Köppl; +Cc: Proxmox VE development discussion
Thanks for tackling this.
Reviewed the series and left some comments on the individual patches,
mostly some thoughts and nits.
Also tested `proxmox-auto-install-assistant validate-answer` for the new
cases, as well as the early RAID check in the GUI and the auto
installer.
On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
> The goal of this series is to add additional sanity checks to the
> auto-installer and the TUI and GUI installers. The following checks were
> added:
> * Btrfs / ZFS RAID: check if the required number of disks is available
> * LVM: check if swapsize < hdsize (RFC)
> * LVM: check if maxroot < hdsize/4 (RFC)
> * Networking: check if IPv4 address is valid within subnet (RFC)
>
> The disk-related checks aim to close [0], whereas the IPv4 check would
> partially close [1].
>
> [0] https://bugzilla.proxmox.com/show_bug.cgi?id=5887
> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=5757
>
> The patches marked as RFC work, but I think some discussion regarding
> what we even want to check is warranted here. I added some more context
> for a discussion to the individual patches.
* Re: [pve-devel] [RFC PATCH pve-installer 6/6] closes #5757: common: add checks for valid IPv4 address within subnet
2025-04-28 10:22 ` Christoph Heiss
@ 2025-04-28 14:20 ` Michael Köppl
0 siblings, 0 replies; 23+ messages in thread
From: Michael Köppl @ 2025-04-28 14:20 UTC (permalink / raw)
To: Christoph Heiss; +Cc: Proxmox VE development discussion
On 4/28/25 12:22, Christoph Heiss wrote:
> At least for IPv4 /31 prefixes, this check (including the network
> address part) is wrong. RFC 3021 [0] explicitly allows such subnets for
> point-to-point links.
>
> E.g. 192.168.0.0/31 is a valid subnet with 2 hosts, 192.168.0.0/31 and
> 192.168.0.1/31.
>
> [0] https://www.rfc-editor.org/rfc/rfc3021
You're right, I missed that. Thanks! I think for /32 and /31 it makes
sense to explicitly allow them, avoiding the calculations done for any
other valid subnet mask. I'll adapt the implementation accordingly in a v2.
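For illustration, a rough sketch of the direction (standalone function with
a made-up name, not the actual v2 code):

    use std::net::Ipv4Addr;

    /// Checks whether `addr` is a usable host address within the subnet
    /// given by `network`/`prefix`. /31 (RFC 3021 point-to-point) and /32
    /// (single host) are allowed as-is; for all other prefixes the network
    /// and broadcast addresses are rejected.
    fn ipv4_addr_valid_in_subnet(addr: Ipv4Addr, network: Ipv4Addr, prefix: u8) -> bool {
        assert!(prefix <= 32, "invalid IPv4 prefix length");
        let mask: u32 = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };

        let addr = u32::from(addr);
        let net = u32::from(network) & mask;

        // The address has to lie within the subnet at all.
        if (addr & mask) != net {
            return false;
        }

        // /31 and /32: every address in the range is a usable host address,
        // there is no separate network/broadcast address to exclude.
        if prefix >= 31 {
            return true;
        }

        // Otherwise, reject the network and the broadcast address.
        addr != net && addr != (net | !mask)
    }

E.g. 192.168.0.1/31 and 192.168.0.30/28 would pass, while 192.168.0.16/28
(the network address) and 192.168.0.31/28 (broadcast) would be rejected.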
* Re: [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations
2025-04-28 11:25 ` Christoph Heiss
@ 2025-04-28 14:31 ` Michael Köppl
2025-04-29 8:26 ` Christoph Heiss
0 siblings, 1 reply; 23+ messages in thread
From: Michael Köppl @ 2025-04-28 14:31 UTC (permalink / raw)
To: Christoph Heiss; +Cc: Proxmox VE development discussion
On 4/28/25 13:25, Christoph Heiss wrote:
> On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
>> [..]
>> +pub fn verify_disks_settings(answer: &Answer) -> Result<()> {
>> + if let DiskSelection::Selection(selection) = &answer.disks.disk_selection {
>> + let min_disks = answer.disks.fs_type.get_min_disks();
>> + if selection.len() < min_disks {
>> + bail!(
>> + "{} requires at least {} disks",
>> + answer.disks.fs_type,
>> + min_disks
>> + );
>> + }
>> + }
>
> Perhaps another, pretty simple but useful check here would be if all
> disks are unique, i.e. that there are no duplicates in the list?
Good idea, I'll add that for the v2. Thanks!
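Roughly along these lines (just a sketch; the helper name is made up and it
assumes the selection is a list of disk identifier strings):

    use std::collections::HashSet;

    use anyhow::{bail, Result};

    /// Rejects disk selections that name the same disk more than once.
    fn check_disks_unique(selection: &[String]) -> Result<()> {
        let mut seen = HashSet::new();
        for disk in selection {
            if !seen.insert(disk) {
                bail!("disk '{disk}' was selected more than once");
            }
        }
        Ok(())
    }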
>
>> [..]
>> diff --git a/proxmox-installer-common/src/options.rs b/proxmox-installer-common/src/options.rs
>> index 9cc4ee0..9271b8b 100644
>> --- a/proxmox-installer-common/src/options.rs
>> +++ b/proxmox-installer-common/src/options.rs
>> @@ -48,6 +58,19 @@ pub enum ZfsRaidLevel {
>> RaidZ3,
>> }
>>
>> +impl ZfsRaidLevel {
>> + pub fn get_min_disks(&self) -> usize {
>> + match self {
>> + ZfsRaidLevel::Raid0 => 1,
>> + ZfsRaidLevel::Raid1 => 2,
>> + ZfsRaidLevel::Raid10 => 4,
>> + ZfsRaidLevel::RaidZ => 3,
>> + ZfsRaidLevel::RaidZ2 => 4,
>> + ZfsRaidLevel::RaidZ3 => 5,
>
> ZFS actually lets one create RAIDZ{1,2,3} pools with 2, 3 and 4 disks,
> respectively. While maybe not really _that_ practical for real-world
> usecases (starting with the overhead), do we want to still allow it?
I personally don't like putting too many constraints on what users can
do. Even if not every setting is practical, I think the installer should
allow them as long as they don't mean that the whole installation is
going to crash halfway through, especially if manually creating pools
like that would work. Maybe someone else has an opinion on this and can
weigh in, though. In any case, thanks for the suggestion!
* Re: [pve-devel] [PATCH pve-installer 2/6] common: use get_min_disks as single source of truth for RAID config checks
2025-04-28 11:48 ` Christoph Heiss
@ 2025-04-28 15:36 ` Michael Köppl
0 siblings, 0 replies; 23+ messages in thread
From: Michael Köppl @ 2025-04-28 15:36 UTC (permalink / raw)
To: Christoph Heiss; +Cc: Proxmox VE development discussion
On 4/28/25 13:48, Christoph Heiss wrote:
>>
>> proxmox-installer-common/src/disk_checks.rs | 18 ++++--------------
>> 1 file changed, 4 insertions(+), 14 deletions(-)
>>
>> diff --git a/proxmox-installer-common/src/disk_checks.rs b/proxmox-installer-common/src/disk_checks.rs
>> index ecc43bd..1d17e2c 100644
>> --- a/proxmox-installer-common/src/disk_checks.rs
>> +++ b/proxmox-installer-common/src/disk_checks.rs
>> @@ -69,17 +69,16 @@ pub fn check_zfs_raid_config(level: ZfsRaidLevel, disks: &[Disk]) -> Result<(),
>
> While looking at all the disks checks, most of them could be made
> methods of the respective enum, e.g. so one could do something like
> `ZfsRaidLevel::Raid1.check_disks(disks)`.
>
> What do you think? Would be a nice cleanup (via a separate series would
> also be okay, of course).
I think it's a good idea, already working on it. I'll add it to the v2.
Thanks for the suggestion!
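Roughly the direction I have in mind (sketch only; the method name, the
plain String error type and the assumption that the enum derives Debug are
illustrative, not the final v2 code):

    impl ZfsRaidLevel {
        /// Validates a disk selection for this RAID level, with
        /// get_min_disks() staying the single source of truth.
        pub fn check_disks(&self, disks: &[Disk]) -> Result<(), String> {
            let min_disks = self.get_min_disks();
            if disks.len() < min_disks {
                return Err(format!(
                    "{:?} requires at least {} disks, {} provided",
                    self,
                    min_disks,
                    disks.len()
                ));
            }
            Ok(())
        }
    }

Callers could then write e.g. `ZfsRaidLevel::Raid1.check_disks(&disks)?`,
as suggested above, and BtrfsRaidLevel could get the same treatment.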
* Re: [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations
2025-04-28 14:31 ` Michael Köppl
@ 2025-04-29 8:26 ` Christoph Heiss
2025-04-29 9:32 ` Michael Köppl
0 siblings, 1 reply; 23+ messages in thread
From: Christoph Heiss @ 2025-04-29 8:26 UTC (permalink / raw)
To: Michael Köppl; +Cc: Proxmox VE development discussion
>> ZFS actually lets one create RAIDZ{1,2,3} pools with 2, 3 and 4 disks,
>> respectively. While maybe not really _that_ practical for real-world
>> usecases (starting with the overhead), do we want to still allow it?
>
> I personally don't like putting too many constraints on what users can
> do. Even if not every setting is practical, I think the installer should
> allow them as long as they don't mean that the whole installation is
> going to crash halfway through,
Yep, definitely. I also like to err on the side of caution and rather
allow more than what might be technically feasible and/or allowed - the latter
especially w.r.t. network settings.
I'd then just lower it to the actual allowed minimum as mentioned above,
doesn't hurt in any case :^)
> especially if manually creating pools
> like that would work. Maybe someone else has an opinion on this and can
> weigh in, though. In any case, thanks for the suggestion!
* Re: [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations
2025-04-29 8:26 ` Christoph Heiss
@ 2025-04-29 9:32 ` Michael Köppl
2025-04-29 9:40 ` Christoph Heiss
0 siblings, 1 reply; 23+ messages in thread
From: Michael Köppl @ 2025-04-29 9:32 UTC (permalink / raw)
To: Christoph Heiss; +Cc: Proxmox VE development discussion
On 4/29/25 10:26, Christoph Heiss wrote:
>>> ZFS actually lets one create RAIDZ{1,2,3} pools with 2, 3 and 4 disks,
>>> respectively. While maybe not really _that_ practical for real-world
>>> usecases (starting with the overhead), do we want to still allow it?
>>
>> I personally don't like putting too many constraints on what users can
>> do. Even if not every setting is practical, I think the installer should
>> allow them as long as they don't mean that the whole installation is
>> going to crash halfway through,
>
> Yep, definitely. I also like to err on the side of caution and rather
> allow more than what might be technically feasible and/or allowed - the latter
> especially w.r.t. network settings.
>
> I'd then just lower it to the actual allowed minimum as mentioned above,
> doesn't hurt in any case :^)
I think I'd prefer doing this in a separate series, though. Changes
would have to be made in the installer, in the UI (there's a check there
as well when creating a ZFS pool) and, IMO, also in PBS, which has such
checks as well. I don't think it's a good idea to have diverging behavior
between PVE and PBS about what is and isn't allowed w.r.t. ZFS RAID
setups.
>
>> especially if manually creating pools
>> like that would work. Maybe someone else has an opinion on this and can
>> weigh in, though. In any case, thanks for the suggestion!
* Re: [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations
2025-04-29 9:32 ` Michael Köppl
@ 2025-04-29 9:40 ` Christoph Heiss
0 siblings, 0 replies; 23+ messages in thread
From: Christoph Heiss @ 2025-04-29 9:40 UTC (permalink / raw)
To: Michael Köppl; +Cc: Proxmox VE development discussion
On Tue Apr 29, 2025 at 11:32 AM CEST, Michael Köppl wrote:
> On 4/29/25 10:26, Christoph Heiss wrote:
>>>> ZFS actually lets one create RAIDZ{1,2,3} pools with 2, 3 and 4 disks,
>>>> respectively. While maybe not really _that_ practical for real-world
>>>> usecases (starting with the overhead), do we want to still allow it?
>>>
>>> I personally don't like putting too many constraints on what users can
>>> do. Even if not every setting is practical, I think the installer should
>>> allow them as long as they don't mean that the whole installation is
>>> going to crash halfway through,
>>
>> Yep, definitely. I also like to err on the side of caution and rather
>> allow more than what might be technically feasible and/or allowed - the latter
>> especially w.r.t. network settings.
>>
>> I'd then just lower it to the actual allowed minimum as mentioned above,
>> doesn't hurt in any case :^)
>
> I think I'd prefer doing this in a separate series, though. Changes
> would have to be made in the installer, the UI (there's a check there as
> well when creating a ZFS pool) and, IMO, also PBS. There are checks in
> PBS as well and I don't think it's a good idea to have diverging
> behavior between PVE and PBS about what is and isn't allowed w.r.t. ZFS
> RAID setups.
I see, didn't think of all the other places where we assume this.
You can just disregard it then.
* Re: [pve-devel] [RFC PATCH pve-installer 3/6] close #5887: add sanity check for LVM swapsize and maxroot
2025-04-28 12:00 ` Christoph Heiss
@ 2025-04-29 11:30 ` Michael Köppl
0 siblings, 0 replies; 23+ messages in thread
From: Michael Köppl @ 2025-04-29 11:30 UTC (permalink / raw)
To: Christoph Heiss; +Cc: Proxmox VE development discussion
On 4/28/25 14:00, Christoph Heiss wrote:
> On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
>> Check that the configured swapsize is not greater than the total size
>> of the disk and that maxroot is at most hdsize / 4. Define the
>> behavior for the auto-installer as well as the TUI and GUI installers.
>>
>> Signed-off-by: Michael Köppl <m.koeppl@proxmox.com>
>> ---
>> The changes implemented in this patch regarding swap size only avoid
>> rather obvious misconfigurations. However, users could still configure a
>> 32GB swap volume on a 32GB hard disk and the installer would fail later
> on. Users could benefit from another check that ensures the swap volume
> size does not go beyond a certain threshold. One option could be to set
> a limit similar to the 8GB maximum used when the swap volume size is
> calculated from the size of the memory. OTOH, this might also be
>> considered too restrictive. Would love some input on this.
>
> I'd limit the swapsize to hdsize/8, as we state in the admin guide [0].
> Although I'm not sure if we actually check this somewhere currently.
As far as I can tell, the hdsize/8 limit in the admin guide at the
moment only refers to the value being calculated from the memory size. I
added a check to limit the maximum swap size entered by users to
hdsize/8 for the v2.
>
> [0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#advanced_lvm_options
>
>> [..]
>> diff --git a/proxmox-auto-installer/src/utils.rs b/proxmox-auto-installer/src/utils.rs
>> index d6bc6e3..85a1f52 100644
>> --- a/proxmox-auto-installer/src/utils.rs
>> +++ b/proxmox-auto-installer/src/utils.rs
>> @@ -396,6 +396,19 @@ pub fn verify_disks_settings(answer: &Answer) -> Result<()> {
>> );
>> }
>> }
>> +
>> + if let answer::FsOptions::LVM(lvm) = &answer.disks.fs_options {
>> + if lvm.swapsize > lvm.hdsize {
>> + bail!("LVM swapsize cannot be greater than hdsize");
>> + }
>> +
>> + if let Some((maxroot, hdsize)) = lvm.maxroot.zip(lvm.hdsize) {
>> + if maxroot > hdsize / 4.0 {
>> + bail!("LVM maxroot cannot be greater than hdsize / 4");
>> + }
>> + }
>> + }
>
> If feasible, could these checks be de-duplicated with the ones below,
> since they check the exact same things?
For the v2, I added two check functions (swap and maxroot) that are used
by both the auto-installer utils and the installer-common disk_checks.
Thanks for the suggestion!
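Roughly what I have in mind for these shared helpers (sketch only; names,
the plain String error type and GiB units are illustrative, and the swap
bound already reflects the hdsize/8 limit mentioned above):

    /// Upper bound for the swap LV: hdsize / 8, per the admin guide.
    pub fn check_swapsize(swapsize: f64, hdsize: f64) -> Result<(), String> {
        let max_swap = hdsize / 8.0;
        if swapsize > max_swap {
            return Err(format!(
                "Swap size {swapsize} GiB exceeds the maximum of {max_swap} GiB (hdsize / 8)"
            ));
        }
        Ok(())
    }

    /// Upper bound for the root LV: hdsize / 4.
    pub fn check_maxroot(maxroot: f64, hdsize: f64) -> Result<(), String> {
        let max_root = hdsize / 4.0;
        if maxroot > max_root {
            return Err(format!(
                "Root volume size {maxroot} GiB exceeds the maximum of {max_root} GiB (hdsize / 4)"
            ));
        }
        Ok(())
    }

Both verify_disks_settings in the auto-installer and the LVM checks in
proxmox-installer-common/src/disk_checks.rs could then call these with the
respective swapsize/maxroot and hdsize values from the LVM options.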
* Re: [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks
2025-04-28 12:25 ` [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Christoph Heiss
@ 2025-04-29 14:14 ` Michael Köppl
0 siblings, 0 replies; 23+ messages in thread
From: Michael Köppl @ 2025-04-29 14:14 UTC (permalink / raw)
To: Christoph Heiss; +Cc: Proxmox VE development discussion
Superseded-by:
https://lore.proxmox.com/pve-devel/20250429140940.161711-1-m.koeppl@proxmox.com
On 4/28/25 14:25, Christoph Heiss wrote:
> Thanks for tackling this.
>
> Reviewed the series and left some comments on the individual patches,
> mostly some thoughts and nits.
>
> Also tested `proxmox-auto-install-assistant validate-answer` for the new
> cases, as well as the early RAID check in the GUI and the auto
> installer.
>
> On Tue Apr 22, 2025 at 6:27 PM CEST, Michael Köppl wrote:
>> The goal of this series is to add additional sanity checks to the
>> auto-installer and the TUI and GUI installers. The following checks were
>> added:
>> * Btrfs / ZFS RAID: check if the required number of disks is available
>> * LVM: check if swapsize < hdsize (RFC)
>> * LVM: check if maxroot < hdsize/4 (RFC)
>> * Networking: check if IPv4 address is valid within subnet (RFC)
>>
>> The disk-related checks aim to close [0], whereas the IPv4 check would
>> partially close [1].
>>
>> [0] https://bugzilla.proxmox.com/show_bug.cgi?id=5887
>> [1] https://bugzilla.proxmox.com/show_bug.cgi?id=5757
>>
>> The patches marked as RFC work, but I think some discussion regarding
>> what we even want to check is warranted here. I added some more context
>> for a discussion to the individual patches.
Thread overview: 23+ messages
2025-04-22 16:27 [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Michael Köppl
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 1/6] auto: add early answer file sanity check for RAID configurations Michael Köppl
2025-04-28 11:25 ` Christoph Heiss
2025-04-28 14:31 ` Michael Köppl
2025-04-29 8:26 ` Christoph Heiss
2025-04-29 9:32 ` Michael Köppl
2025-04-29 9:40 ` Christoph Heiss
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 2/6] common: use get_min_disks as single source of truth for RAID config checks Michael Köppl
2025-04-28 11:48 ` Christoph Heiss
2025-04-28 15:36 ` Michael Köppl
2025-04-22 16:27 ` [pve-devel] [RFC PATCH pve-installer 3/6] close #5887: add sanity check for LVM swapsize and maxroot Michael Köppl
2025-04-28 12:00 ` Christoph Heiss
2025-04-29 11:30 ` Michael Köppl
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 4/6] run rustfmt Michael Köppl
2025-04-23 11:56 ` Christoph Heiss
2025-04-25 12:22 ` Michael Köppl
2025-04-22 16:27 ` [pve-devel] [PATCH pve-installer 5/6] common: add more descriptive errors for invalid network configs Michael Köppl
2025-04-28 12:20 ` Christoph Heiss
2025-04-22 16:27 ` [pve-devel] [RFC PATCH pve-installer 6/6] closes #5757: common: add checks for valid IPv4 address within subnet Michael Köppl
2025-04-28 10:22 ` Christoph Heiss
2025-04-28 14:20 ` Michael Köppl
2025-04-28 12:25 ` [pve-devel] [PATCH installer 0/6] add early disk and network sanity checks Christoph Heiss
2025-04-29 14:14 ` Michael Köppl