public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH storage v2 0/3] disks: add checks, allow add_storage on other nodes
@ 2022-07-15 11:58 Aaron Lauterer
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths Aaron Lauterer
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Aaron Lauterer @ 2022-07-15 11:58 UTC (permalink / raw)
  To: pve-devel

This series deals mainly with two things. First, it adds more checks
prior to actually setting up a new local storage, to avoid leaving
behind half-provisioned disks in case a storage (LVM, ZFS, ...) of the
same name already exists.
Secondly, it changes the behavior regarding the "Add Storage" flag. The
flag can now stay enabled on additional nodes: the local storage is
created and the node is added to the nodes list of the PVE storage
config if a matching entry already exists in the cluster.

The first patch is a prerequisite for checking whether something is
already mounted at the path a new Directory storage would use.
The second patch implements the actual checks.
The third patch adds the changed behavior for the "Add Storage" part.

More in the actual patches.
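
As an illustration of the intended end result (storage ID and node names below
are made up), the entry in /etc/pve/storage.cfg would roughly look like this
after running the create call with "Add Storage" enabled on a second node:

    zfspool: local-zfs
        pool local-zfs
        content rootdir,images
        nodes nodeA,nodeB

i.e. instead of failing because the storage ID already exists, the second node
is appended to the nodes list.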

changes since v1:
- applied changes recommended by Dominik & Fabian E
- some lines were in the wrong patches due to mistakes made while
reordering the patches

Aaron Lauterer (3):
  diskmanage: add mounted_paths
  disks: die if storage name is already in use
  disks: allow add_storage for already configured local storage

 PVE/API2/Disks/Directory.pm | 19 +++++---
 PVE/API2/Disks/LVM.pm       | 11 +++--
 PVE/API2/Disks/LVMThin.pm   | 11 +++--
 PVE/API2/Disks/ZFS.pm       | 86 +++++++++++++++++++++----------------
 PVE/API2/Storage/Config.pm  | 22 ++++++++++
 PVE/Diskmanage.pm           | 12 ++++++
 PVE/Storage.pm              | 27 ++++++++++++
 7 files changed, 138 insertions(+), 50 deletions(-)

-- 
2.30.2

* [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths
  2022-07-15 11:58 [pve-devel] [PATCH storage v2 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
@ 2022-07-15 11:58 ` Aaron Lauterer
  2022-08-17 11:35   ` Fabian Grünbichler
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use Aaron Lauterer
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
  2 siblings, 1 reply; 13+ messages in thread
From: Aaron Lauterer @ 2022-07-15 11:58 UTC (permalink / raw)
  To: pve-devel

returns a list of mounted paths with the backing devices

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v1: dropped limit to /dev path, returning all mounted
paths now
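
As a sketch of the returned data (example paths and devices are made up), the
hashref maps absolute mountpoint paths to their backing device:

    {
        '/' => '/dev/mapper/pve-root',
        '/mnt/pve/backup' => '/dev/sdb1',
    }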

 PVE/Diskmanage.pm | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 8ed7a8b..b149685 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -499,6 +499,18 @@ sub mounted_blockdevs {
     return $mounted;
 }
 
+sub mounted_paths {
+    my $mounted = {};
+
+    my $mounts = PVE::ProcFSTools::parse_proc_mounts();
+
+    foreach my $mount (@$mounts) {
+	$mounted->{abs_path($mount->[1])} = $mount->[0];
+    };
+
+    return $mounted;
+}
+
 sub get_disks {
     my ($disks, $nosmart, $include_partitions) = @_;
     my $disklist = {};
-- 
2.30.2

* [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
  2022-07-15 11:58 [pve-devel] [PATCH storage v2 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths Aaron Lauterer
@ 2022-07-15 11:58 ` Aaron Lauterer
  2022-08-17 11:42   ` Fabian Grünbichler
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
  2 siblings, 1 reply; 13+ messages in thread
From: Aaron Lauterer @ 2022-07-15 11:58 UTC (permalink / raw)
  To: pve-devel

If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE Storage config for it, it is possible that
the creation will fail midway due to checks done by the underlying
storage layer itself. This in turn can lead to disks that are already
partitioned. Users would need to clean this up themselves.

By adding checks early on, not only checking against the PVE storage
config, but against the actual storage type itself, we can die early
enough, before we touch any disk.

For ZFS, the logic to gather pool data is moved into its own function to
be called from the index API endpoint and the check in the create
endpoint.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---

changes since v1:
- moved zfs pool gathering from index api endpoint into its own function
to easily call it for the check as well
- zfs pool check is now using perl grep
- removed the check whether the mounted path is a /dev one
- added check if a mount unit exists
    - moved definitions of $path, $mountunitname and $mountunitpath out
    of the worker
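
For context, the new get_pool_data() helper parses the output of
`zpool list -HpPLo name,size,alloc,free,frag,dedup,health`, which prints one
tab-separated line per pool, roughly like this (values are made up):

    tank	10737418240	24576	10737393664	0	1.00	ONLINE

so the create endpoint can detect an existing pool of the same name before any
disk is touched.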

 PVE/API2/Disks/Directory.pm | 11 ++++--
 PVE/API2/Disks/LVM.pm       |  3 ++
 PVE/API2/Disks/LVMThin.pm   |  3 ++
 PVE/API2/Disks/ZFS.pm       | 77 +++++++++++++++++++++----------------
 4 files changed, 56 insertions(+), 38 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index df63ba9..07190de 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -203,16 +203,19 @@ __PACKAGE__->register_method ({
 	my $dev = $param->{device};
 	my $node = $param->{node};
 	my $type = $param->{filesystem} // 'ext4';
+	my $path = "/mnt/pve/$name";
+	my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
+	my $mountunitpath = "/etc/systemd/system/$mountunitname";
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
-	my $worker = sub {
-	    my $path = "/mnt/pve/$name";
-	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
-	    my $mountunitpath = "/etc/systemd/system/$mountunitname";
+	my $mounted = PVE::Diskmanage::mounted_paths();
+	die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};
+	die "a systemd mount unit already exists: ${mountunitpath}\n" if -e $mountunitpath;
 
+	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
 
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index 6e4331a..a27afe2 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	die "volume group with name '${name}' already exists\n"
+	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
+
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 58ecb37..690c183 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -110,6 +110,9 @@ __PACKAGE__->register_method ({
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	die "volume group with name '${name}' already exists\n"
+	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
+
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index eeb9f48..51380c4 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -18,6 +18,43 @@ use base qw(PVE::RESTHandler);
 my $ZPOOL = '/sbin/zpool';
 my $ZFS = '/sbin/zfs';
 
+sub get_pool_data {
+    if (!-f $ZPOOL) {
+	die "zfsutils-linux not installed\n";
+    }
+
+    my $propnames = [qw(name size alloc free frag dedup health)];
+    my $numbers = {
+	size => 1,
+	alloc => 1,
+	free => 1,
+	frag => 1,
+	dedup => 1,
+    };
+
+    my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
+
+    my $pools = [];
+
+    run_command($cmd, outfunc => sub {
+	my ($line) = @_;
+
+	    my @props = split('\s+', trim($line));
+	    my $pool = {};
+	    for (my $i = 0; $i < scalar(@$propnames); $i++) {
+		if ($numbers->{$propnames->[$i]}) {
+		    $pool->{$propnames->[$i]} = $props[$i] + 0;
+		} else {
+		    $pool->{$propnames->[$i]} = $props[$i];
+		}
+	    }
+
+	    push @$pools, $pool;
+    });
+
+    return $pools;
+}
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
@@ -74,40 +111,7 @@ __PACKAGE__->register_method ({
     code => sub {
 	my ($param) = @_;
 
-	if (!-f $ZPOOL) {
-	    die "zfsutils-linux not installed\n";
-	}
-
-	my $propnames = [qw(name size alloc free frag dedup health)];
-	my $numbers = {
-	    size => 1,
-	    alloc => 1,
-	    free => 1,
-	    frag => 1,
-	    dedup => 1,
-	};
-
-	my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
-
-	my $pools = [];
-
-	run_command($cmd, outfunc => sub {
-	    my ($line) = @_;
-
-		my @props = split('\s+', trim($line));
-		my $pool = {};
-		for (my $i = 0; $i < scalar(@$propnames); $i++) {
-		    if ($numbers->{$propnames->[$i]}) {
-			$pool->{$propnames->[$i]} = $props[$i] + 0;
-		    } else {
-			$pool->{$propnames->[$i]} = $props[$i];
-		    }
-		}
-
-		push @$pools, $pool;
-	});
-
-	return $pools;
+	return get_pool_data();
     }});
 
 sub preparetree {
@@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
 	my $user = $rpcenv->get_user();
 
 	my $name = $param->{name};
+	my $node = $param->{node};
 	my $devs = [PVE::Tools::split_list($param->{devices})];
 	my $raidlevel = $param->{raidlevel};
 	my $compression = $param->{compression} // 'on';
@@ -346,6 +351,10 @@ __PACKAGE__->register_method ({
 	}
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	my $pools = get_pool_data();
+	die "pool '${name}' already exists on node '$node'\n"
+	    if grep { $_->{name} eq $name } @{$pools};
+
 	my $numdisks = scalar(@$devs);
 	my $mindisks = {
 	    single => 1,
-- 
2.30.2

* [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage
  2022-07-15 11:58 [pve-devel] [PATCH storage v2 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths Aaron Lauterer
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use Aaron Lauterer
@ 2022-07-15 11:58 ` Aaron Lauterer
  2022-08-17 11:53   ` Fabian Grünbichler
  2 siblings, 1 reply; 13+ messages in thread
From: Aaron Lauterer @ 2022-07-15 11:58 UTC (permalink / raw)
  To: pve-devel

One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, one can only leave the "Add Storage" option enabled the first
time.

On any following node, this option had to be disabled and the new node
manually added to the list of nodes for that storage.

This patch changes that behavior. If a storage of the same name already
exists, it verifies that the necessary parameters match the already
existing one.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
If there is no nodes list, nothing else needs to be done, and the GUI
will stop showing the question mark for the local storage that was
configured but, until now, did not exist on this node.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v1:
- restructured the logic a bit. The checks still need to be done the same
    way, but the create-or-update logic is now in its own
    API2::Storage::Config->create_or_update function, which then either
    calls the create or the update API directly. This reduces a lot of code
    repetition.
- Storage::assert_sid_usable_on_node additionally checks whether the
parameter is configured in the storage and fails if not. We would have
gotten warnings about using an undef in a string concat due to how the
other error was presented.
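
For comparison, the manual workaround this replaces was roughly the following
(hypothetical storage ID and node names):

    pvesm set local-zfs --nodes nodeA,nodeB

or editing the 'nodes' line of the corresponding entry in /etc/pve/storage.cfg
by hand.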

 PVE/API2/Disks/Directory.pm |  8 +++++---
 PVE/API2/Disks/LVM.pm       |  8 +++++---
 PVE/API2/Disks/LVMThin.pm   |  8 +++++---
 PVE/API2/Disks/ZFS.pm       |  9 ++++++---
 PVE/API2/Storage/Config.pm  | 22 ++++++++++++++++++++++
 PVE/Storage.pm              | 27 +++++++++++++++++++++++++++
 6 files changed, 70 insertions(+), 12 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index 07190de..581e970 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -209,7 +209,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { path => $path };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'dir', $verify_params)
+	    if $param->{add_storage};
 
 	my $mounted = PVE::Diskmanage::mounted_paths();
 	die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};
@@ -293,8 +296,7 @@ __PACKAGE__->register_method ({
 			path => $path,
 			nodes => $node,
 		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index a27afe2..e9b3f25 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -150,7 +150,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { vgname => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvm', $verify_params)
+	    if $param->{add_storage};
 
 	die "volume group with name '${name}' already exists\n"
 	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
@@ -177,8 +180,7 @@ __PACKAGE__->register_method ({
 			shared => 0,
 			nodes => $node,
 		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 690c183..fb61489 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -108,7 +108,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { vgname => $name, thinpool => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvmthin', $verify_params)
+	    if $param->{add_storage};
 
 	die "volume group with name '${name}' already exists\n"
 	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
@@ -155,8 +158,7 @@ __PACKAGE__->register_method ({
 			content => 'rootdir,images',
 			nodes => $node,
 		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index 51380c4..e1c4b59 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -342,6 +342,7 @@ __PACKAGE__->register_method ({
 	my $name = $param->{name};
 	my $node = $param->{node};
 	my $devs = [PVE::Tools::split_list($param->{devices})];
+	my $node = $param->{node};
 	my $raidlevel = $param->{raidlevel};
 	my $compression = $param->{compression} // 'on';
 
@@ -349,7 +350,10 @@ __PACKAGE__->register_method ({
 	    $dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	    PVE::Diskmanage::assert_disk_unused($dev);
 	}
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { pool => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'zfspool', $verify_params)
+	    if $param->{add_storage};
 
 	my $pools = get_pool_data();
 	die "pool '${name}' already exists on node '$node'\n"
@@ -439,8 +443,7 @@ __PACKAGE__->register_method ({
 		    content => 'rootdir,images',
 		    nodes => $param->{node},
 		};
-
-		PVE::API2::Storage::Config->create($storage_params);
+		PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
 	    }
 	};
 
diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 6bd770e..2ea91b7 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -65,6 +65,28 @@ sub cleanup_storages_for_node {
     }
 }
 
+# Decides if a storage needs to be created or updated. An update is needed, if
+# the storage has a node list configured, then the current node will be added.
+# Check before if this is a sensible action, e.g. storage type and other params
+# that need to match
+sub create_or_update {
+    my ($self, $sid, $node, $storage_params) = @_;
+
+    my $cfg = PVE::Storage::config();
+    if (my $scfg = PVE::Storage::storage_config($cfg, $sid, 1)) {
+	if ($scfg->{nodes}) {
+	    $scfg->{nodes}->{$node} = 1;
+	    $self->update({
+		nodes => join(',', sort keys $scfg->{nodes}->%*),
+		storage => $sid,
+	    });
+	    print "Added '${node}' to nodes for storage '${sid}'\n";
+	}
+    } else {
+	$self->create($storage_params);
+    }
+}
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b9c53a1..21d575c 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -2127,6 +2127,33 @@ sub assert_sid_unused {
     return undef;
 }
 
+# Checks if storage id can be used on the node, intended for local storage types
+# Compares the type and verify_params hash against a potentially configured
+# storage. Use it for storage parameters that need to be the same throughout
+# the cluster. For example, pool for ZFS, vg_name for LVM, ...
+sub assert_sid_usable_on_node {
+    my ($sid, $node, $type, $verify_params) = @_;
+
+    my $cfg = config();
+    if (my $scfg = storage_config($cfg, $sid, 1)) {
+	$node = PVE::INotify::nodename() if !$node || ($node eq 'localhost');
+	die "Storage ID '${sid}' already exists on node ${node}\n"
+	    if defined($scfg->{nodes}) && $scfg->{nodes}->{$node};
+
+	$verify_params->{type} = $type;
+	for my $key (keys %$verify_params) {
+	    if (!defined($scfg->{$key})) {
+		die "Option '${key}' is not configured for storage '$sid', "
+		    ."expected it to be '$verify_params->{$key}'";
+	    }
+	    if ($verify_params->{$key} ne $scfg->{$key}) {
+		die "Option '${key}' ($verify_params->{$key}) does not match "
+		    ."existing storage configuration '$scfg->{$key}'\n";
+	    }
+	}
+    }
+}
+
 # removes leading/trailing spaces and (back)slashes completely
 # substitutes every non-ASCII-alphanumerical char with '_', except '_.-'
 sub normalize_content_filename {
-- 
2.30.2

* Re: [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths Aaron Lauterer
@ 2022-08-17 11:35   ` Fabian Grünbichler
  0 siblings, 0 replies; 13+ messages in thread
From: Fabian Grünbichler @ 2022-08-17 11:35 UTC (permalink / raw)
  To: Proxmox VE development discussion

On July 15, 2022 1:58 pm, Aaron Lauterer wrote:
> returns a list of mounted paths with the backing devices
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> changes since v1: dropped limit to /dev path, returning all mounted
> paths now
> 
>  PVE/Diskmanage.pm | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
> index 8ed7a8b..b149685 100644
> --- a/PVE/Diskmanage.pm
> +++ b/PVE/Diskmanage.pm
> @@ -499,6 +499,18 @@ sub mounted_blockdevs {
>      return $mounted;
>  }
>  

nit: a short comment here telling us about the returned data might be 
helpful - else I have to look at parse_proc_mounts or guess what the key 
and what the value represent..
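
something along these lines would already be enough (sketch):

    # returns a hashref mapping absolute mountpoint paths to the mounted
    # (backing) device, e.g. '/mnt/pve/foo' => '/dev/sdb1'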

> +sub mounted_paths {
> +    my $mounted = {};
> +
> +    my $mounts = PVE::ProcFSTools::parse_proc_mounts();
> +
> +    foreach my $mount (@$mounts) {
> +	$mounted->{abs_path($mount->[1])} = $mount->[0];
> +    };
> +
> +    return $mounted;
> +}
> +
>  sub get_disks {
>      my ($disks, $nosmart, $include_partitions) = @_;
>      my $disklist = {};
> -- 
> 2.30.2
> 
> 
> 

* Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use Aaron Lauterer
@ 2022-08-17 11:42   ` Fabian Grünbichler
  2022-08-18 15:22     ` Aaron Lauterer
  0 siblings, 1 reply; 13+ messages in thread
From: Fabian Grünbichler @ 2022-08-17 11:42 UTC (permalink / raw)
  To: Proxmox VE development discussion

On July 15, 2022 1:58 pm, Aaron Lauterer wrote:
> If a storage of that type and name already exists (LVM, zpool, ...) but
> we do not have a Proxmox VE Storage config for it, it is possible that
> the creation will fail midway due to checks done by the underlying
> storage layer itself. This in turn can lead to disks that are already
> partitioned. Users would need to clean this up themselves.
> 
> By adding checks early on, not only checking against the PVE storage
> config, but against the actual storage type itself, we can die early
> enough, before we touch any disk.
> 
> For ZFS, the logic to gather pool data is moved into its own function to
> be called from the index API endpoint and the check in the create
> endpoint.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> 
> changes since v1:
> - moved zfs pool gathering from index api endpoint into its own function
> to easily call it for the check as well
> - zfs pool check is now using perl grep
> - removed the check whether the mounted path is a /dev one
> - added check if a mount unit exists
>     - moved definitions of $path, $mountunitname and $mountunitpath out
>     of the worker
> 
>  PVE/API2/Disks/Directory.pm | 11 ++++--
>  PVE/API2/Disks/LVM.pm       |  3 ++
>  PVE/API2/Disks/LVMThin.pm   |  3 ++
>  PVE/API2/Disks/ZFS.pm       | 77 +++++++++++++++++++++----------------
>  4 files changed, 56 insertions(+), 38 deletions(-)
> 
> diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
> index df63ba9..07190de 100644
> --- a/PVE/API2/Disks/Directory.pm
> +++ b/PVE/API2/Disks/Directory.pm
> @@ -203,16 +203,19 @@ __PACKAGE__->register_method ({
>  	my $dev = $param->{device};
>  	my $node = $param->{node};
>  	my $type = $param->{filesystem} // 'ext4';
> +	my $path = "/mnt/pve/$name";
> +	my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
> +	my $mountunitpath = "/etc/systemd/system/$mountunitname";
>  
>  	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
>  	PVE::Diskmanage::assert_disk_unused($dev);
>  	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>  
> -	my $worker = sub {
> -	    my $path = "/mnt/pve/$name";
> -	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
> -	    my $mountunitpath = "/etc/systemd/system/$mountunitname";
> +	my $mounted = PVE::Diskmanage::mounted_paths();
> +	die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};

nit: the mountpoint existing is not the issue (something already being 
mounted there is!). also, where does the $1 come from?

> +	die "a systemd mount unit already exists: ${mountunitpath}\n" if -e $mountunitpath;

could check if it's identical to the one we'd generate (in the spirit of 
patch #3 ;))

>  
> +	my $worker = sub {
>  	    PVE::Diskmanage::locked_disk_action(sub {
>  		PVE::Diskmanage::assert_disk_unused($dev);
>  
> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
> index 6e4331a..a27afe2 100644
> --- a/PVE/API2/Disks/LVM.pm
> +++ b/PVE/API2/Disks/LVM.pm
> @@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
>  	PVE::Diskmanage::assert_disk_unused($dev);
>  	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>  
> +	die "volume group with name '${name}' already exists\n"
> +	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};

probably better off inside the worker, since `vgs` might take a while 
(although we also use it in the index API call in this module..)
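
i.e. roughly the following (sketch only, the rest of the worker stays unchanged):

    my $worker = sub {
        PVE::Diskmanage::locked_disk_action(sub {
            PVE::Diskmanage::assert_disk_unused($dev);

            die "volume group with name '${name}' already exists\n"
                if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};

            # ... existing creation steps ...
        });
    };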

> +
>  	my $worker = sub {
>  	    PVE::Diskmanage::locked_disk_action(sub {
>  		PVE::Diskmanage::assert_disk_unused($dev);
> diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
> index 58ecb37..690c183 100644
> --- a/PVE/API2/Disks/LVMThin.pm
> +++ b/PVE/API2/Disks/LVMThin.pm
> @@ -110,6 +110,9 @@ __PACKAGE__->register_method ({
>  	PVE::Diskmanage::assert_disk_unused($dev);
>  	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>  
> +	die "volume group with name '${name}' already exists\n"
> +	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> +

same here

>  	my $worker = sub {
>  	    PVE::Diskmanage::locked_disk_action(sub {
>  		PVE::Diskmanage::assert_disk_unused($dev);
> diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
> index eeb9f48..51380c4 100644
> --- a/PVE/API2/Disks/ZFS.pm
> +++ b/PVE/API2/Disks/ZFS.pm
> @@ -18,6 +18,43 @@ use base qw(PVE::RESTHandler);
>  my $ZPOOL = '/sbin/zpool';
>  my $ZFS = '/sbin/zfs';
>  
> +sub get_pool_data {
> +    if (!-f $ZPOOL) {
> +	die "zfsutils-linux not installed\n";
> +    }
> +
> +    my $propnames = [qw(name size alloc free frag dedup health)];
> +    my $numbers = {
> +	size => 1,
> +	alloc => 1,
> +	free => 1,
> +	frag => 1,
> +	dedup => 1,
> +    };
> +
> +    my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
> +
> +    my $pools = [];
> +
> +    run_command($cmd, outfunc => sub {
> +	my ($line) = @_;
> +
> +	    my @props = split('\s+', trim($line));
> +	    my $pool = {};
> +	    for (my $i = 0; $i < scalar(@$propnames); $i++) {
> +		if ($numbers->{$propnames->[$i]}) {
> +		    $pool->{$propnames->[$i]} = $props[$i] + 0;
> +		} else {
> +		    $pool->{$propnames->[$i]} = $props[$i];
> +		}
> +	    }
> +
> +	    push @$pools, $pool;
> +    });
> +
> +    return $pools;
> +}
> +
>  __PACKAGE__->register_method ({
>      name => 'index',
>      path => '',
> @@ -74,40 +111,7 @@ __PACKAGE__->register_method ({
>      code => sub {
>  	my ($param) = @_;
>  
> -	if (!-f $ZPOOL) {
> -	    die "zfsutils-linux not installed\n";
> -	}
> -
> -	my $propnames = [qw(name size alloc free frag dedup health)];
> -	my $numbers = {
> -	    size => 1,
> -	    alloc => 1,
> -	    free => 1,
> -	    frag => 1,
> -	    dedup => 1,
> -	};
> -
> -	my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
> -
> -	my $pools = [];
> -
> -	run_command($cmd, outfunc => sub {
> -	    my ($line) = @_;
> -
> -		my @props = split('\s+', trim($line));
> -		my $pool = {};
> -		for (my $i = 0; $i < scalar(@$propnames); $i++) {
> -		    if ($numbers->{$propnames->[$i]}) {
> -			$pool->{$propnames->[$i]} = $props[$i] + 0;
> -		    } else {
> -			$pool->{$propnames->[$i]} = $props[$i];
> -		    }
> -		}
> -
> -		push @$pools, $pool;
> -	});
> -
> -	return $pools;
> +	return get_pool_data();
>      }});
>  
>  sub preparetree {
> @@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
>  	my $user = $rpcenv->get_user();
>  
>  	my $name = $param->{name};
> +	my $node = $param->{node};

nit: please also change the usage further below in this API handler if 
you do this

>  	my $devs = [PVE::Tools::split_list($param->{devices})];
>  	my $raidlevel = $param->{raidlevel};
>  	my $compression = $param->{compression} // 'on';
> @@ -346,6 +351,10 @@ __PACKAGE__->register_method ({
>  	}
>  	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>  
> +	my $pools = get_pool_data();

and also here

> +	die "pool '${name}' already exists on node '$node'\n"
> +	    if grep { $_->{name} eq $name } @{$pools};
> +
>  	my $numdisks = scalar(@$devs);
>  	my $mindisks = {
>  	    single => 1,
> -- 
> 2.30.2
> 
> 
> 

* Re: [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage
  2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
@ 2022-08-17 11:53   ` Fabian Grünbichler
  0 siblings, 0 replies; 13+ messages in thread
From: Fabian Grünbichler @ 2022-08-17 11:53 UTC (permalink / raw)
  To: Proxmox VE development discussion

On July 15, 2022 1:58 pm, Aaron Lauterer wrote:
> One of the smaller annoyances, especially for less experienced users, is
> the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
> cluster, one can only leave the "Add Storage" option enabled the first
> time.
> 
> On any following node, this option had to be disabled and the new node
> manually added to the list of nodes for that storage.
> 
> This patch changes that behavior. If a storage of the same name already
> exists, it verifies that the necessary parameters match the already
> existing one.
> Then, if the 'nodes' parameter is set, it adds the current node and
> updates the storage config.
> If there is no nodes list, nothing else needs to be done, and the GUI
> will stop showing the question mark for the local storage that was
> configured but, until now, did not exist on this node.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> changes since v1:
> - restructured the logic a bit. The checks still need to be done the same
>     way, but the create-or-update logic is now in its own
>     API2::Storage::Config->create_or_update function, which then either
>     calls the create or the update API directly. This reduces a lot of code
>     repetition.
> - Storage::assert_sid_usable_on_node additionally checks whether the
> parameter is configured in the storage and fails if not. We would have
> gotten warnings about using an undef in a string concat due to how the
> other error was presented.
> 
>  PVE/API2/Disks/Directory.pm |  8 +++++---
>  PVE/API2/Disks/LVM.pm       |  8 +++++---
>  PVE/API2/Disks/LVMThin.pm   |  8 +++++---
>  PVE/API2/Disks/ZFS.pm       |  9 ++++++---
>  PVE/API2/Storage/Config.pm  | 22 ++++++++++++++++++++++
>  PVE/Storage.pm              | 27 +++++++++++++++++++++++++++
>  6 files changed, 70 insertions(+), 12 deletions(-)
> 
> diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
> index 07190de..581e970 100644
> --- a/PVE/API2/Disks/Directory.pm
> +++ b/PVE/API2/Disks/Directory.pm
> @@ -209,7 +209,10 @@ __PACKAGE__->register_method ({
>  
>  	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
>  	PVE::Diskmanage::assert_disk_unused($dev);
> -	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> +
> +	my $verify_params = { path => $path };
> +	PVE::Storage::assert_sid_usable_on_node($name, $node, 'dir', $verify_params)
> +	    if $param->{add_storage};
>  
>  	my $mounted = PVE::Diskmanage::mounted_paths();
>  	die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};
> @@ -293,8 +296,7 @@ __PACKAGE__->register_method ({
>  			path => $path,
>  			nodes => $node,
>  		    };
> -
> -		    PVE::API2::Storage::Config->create($storage_params);
> +		    PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
>  		}
>  	    });
>  	};
> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
> index a27afe2..e9b3f25 100644
> --- a/PVE/API2/Disks/LVM.pm
> +++ b/PVE/API2/Disks/LVM.pm
> @@ -150,7 +150,10 @@ __PACKAGE__->register_method ({
>  
>  	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
>  	PVE::Diskmanage::assert_disk_unused($dev);
> -	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> +
> +	my $verify_params = { vgname => $name };
> +	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvm', $verify_params)
> +	    if $param->{add_storage};
>  
>  	die "volume group with name '${name}' already exists\n"
>  	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> @@ -177,8 +180,7 @@ __PACKAGE__->register_method ({
>  			shared => 0,
>  			nodes => $node,
>  		    };
> -
> -		    PVE::API2::Storage::Config->create($storage_params);
> +		    PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
>  		}
>  	    });
>  	};
> diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
> index 690c183..fb61489 100644
> --- a/PVE/API2/Disks/LVMThin.pm
> +++ b/PVE/API2/Disks/LVMThin.pm
> @@ -108,7 +108,10 @@ __PACKAGE__->register_method ({
>  
>  	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
>  	PVE::Diskmanage::assert_disk_unused($dev);
> -	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> +
> +	my $verify_params = { vgname => $name, thinpool => $name };
> +	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvmthin', $verify_params)
> +	    if $param->{add_storage};
>  
>  	die "volume group with name '${name}' already exists\n"
>  	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> @@ -155,8 +158,7 @@ __PACKAGE__->register_method ({
>  			content => 'rootdir,images',
>  			nodes => $node,
>  		    };
> -
> -		    PVE::API2::Storage::Config->create($storage_params);
> +		    PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
>  		}
>  	    });
>  	};
> diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
> index 51380c4..e1c4b59 100644
> --- a/PVE/API2/Disks/ZFS.pm
> +++ b/PVE/API2/Disks/ZFS.pm
> @@ -342,6 +342,7 @@ __PACKAGE__->register_method ({
>  	my $name = $param->{name};
>  	my $node = $param->{node};
>  	my $devs = [PVE::Tools::split_list($param->{devices})];
> +	my $node = $param->{node};
>  	my $raidlevel = $param->{raidlevel};
>  	my $compression = $param->{compression} // 'on';
>  
> @@ -349,7 +350,10 @@ __PACKAGE__->register_method ({
>  	    $dev = PVE::Diskmanage::verify_blockdev_path($dev);
>  	    PVE::Diskmanage::assert_disk_unused($dev);
>  	}
> -	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> +
> +	my $verify_params = { pool => $name };
> +	PVE::Storage::assert_sid_usable_on_node($name, $node, 'zfspool', $verify_params)
> +	    if $param->{add_storage};
>  
>  	my $pools = get_pool_data();
>  	die "pool '${name}' already exists on node '$node'\n"
> @@ -439,8 +443,7 @@ __PACKAGE__->register_method ({
>  		    content => 'rootdir,images',
>  		    nodes => $param->{node},
>  		};
> -
> -		PVE::API2::Storage::Config->create($storage_params);
> +		PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
>  	    }
>  	};
>  
> diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
> index 6bd770e..2ea91b7 100755
> --- a/PVE/API2/Storage/Config.pm
> +++ b/PVE/API2/Storage/Config.pm
> @@ -65,6 +65,28 @@ sub cleanup_storages_for_node {
>      }
>  }
>  
> +# Decides if a storage needs to be created or updated. An update is needed, if
> +# the storage has a node list configured, then the current node will be added.
> +# Check before if this is a sensible action, e.g. storage type and other params
> +# that need to match
> +sub create_or_update {
> +    my ($self, $sid, $node, $storage_params) = @_;
> +
> +    my $cfg = PVE::Storage::config();
> +    if (my $scfg = PVE::Storage::storage_config($cfg, $sid, 1)) {
> +	if ($scfg->{nodes}) {
> +	    $scfg->{nodes}->{$node} = 1;
> +	    $self->update({
> +		nodes => join(',', sort keys $scfg->{nodes}->%*),
> +		storage => $sid,
> +	    });
> +	    print "Added '${node}' to nodes for storage '${sid}'\n";
> +	}
> +    } else {
> +	$self->create($storage_params);
> +    }
> +}
> +
>  __PACKAGE__->register_method ({
>      name => 'index',
>      path => '',
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index b9c53a1..21d575c 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -2127,6 +2127,33 @@ sub assert_sid_unused {
>      return undef;
>  }
>  
> +# Checks if storage id can be used on the node, intended for local storage types
> +# Compares the type and verify_params hash against a potentially configured
> +# storage. Use it for storage parameters that need to be the same throughout
> +# the cluster. For example, pool for ZFS, vg_name for LVM, ...
> +sub assert_sid_usable_on_node {
> +    my ($sid, $node, $type, $verify_params) = @_;

couldn't this be folded into create_or_update, e.g. by adding a 
check/dry_run flag that skips the actual update/creation? then we could 
call it once at the beginning (with dry_run set) and once for actual 
adding/updating, and we don't risk forgetting about some parameter later 
on when changing the code.

$node is always the local node (and passed to create_or_update via 
$storage_params->{nodes}), $type is passed to create_or_update (via 
$storage_params), and $verify_params is just a subset of $storage_params 
(where we actually want to check that everything matches anyway, except 
for nodes which already is special-cased for the update part), so we 
might as well pass the full thing ;) or am I missing something here?
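
a rough sketch of such a folded-in variant (untested, verification details
omitted) could look like:

    sub create_or_update {
	my ($self, $sid, $node, $storage_params, $dry_run) = @_;

	my $cfg = PVE::Storage::config();
	if (my $scfg = PVE::Storage::storage_config($cfg, $sid, 1)) {
	    # verify type and the other relevant $storage_params against $scfg,
	    # die on mismatch (same checks as assert_sid_usable_on_node)
	    return if $dry_run;
	    if ($scfg->{nodes} && !$scfg->{nodes}->{$node}) {
		$scfg->{nodes}->{$node} = 1;
		$self->update({
		    nodes => join(',', sort keys $scfg->{nodes}->%*),
		    storage => $sid,
		});
	    }
	} else {
	    return if $dry_run;
	    $self->create($storage_params);
	}
    }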

> +
> +    my $cfg = config();
> +    if (my $scfg = storage_config($cfg, $sid, 1)) {
> +	$node = PVE::INotify::nodename() if !$node || ($node eq 'localhost');
> +	die "Storage ID '${sid}' already exists on node ${node}\n"
> +	    if defined($scfg->{nodes}) && $scfg->{nodes}->{$node};
> +
> +	$verify_params->{type} = $type;
> +	for my $key (keys %$verify_params) {
> +	    if (!defined($scfg->{$key})) {
> +		die "Option '${key}' is not configured for storage '$sid', "
> +		    ."expected it to be '$verify_params->{$key}'";
> +	    }
> +	    if ($verify_params->{$key} ne $scfg->{$key}) {
> +		die "Option '${key}' ($verify_params->{$key}) does not match "
> +		    ."existing storage configuration '$scfg->{$key}'\n";
> +	    }
> +	}
> +    }
> +}
> +
>  # removes leading/trailing spaces and (back)slashes completely
>  # substitutes every non-ASCII-alphanumerical char with '_', except '_.-'
>  sub normalize_content_filename {
> -- 
> 2.30.2
> 
> 
> 

* Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
  2022-08-17 11:42   ` Fabian Grünbichler
@ 2022-08-18 15:22     ` Aaron Lauterer
  2022-08-18 15:31       ` Aaron Lauterer
  2022-08-19  8:20       ` Fabian Grünbichler
  0 siblings, 2 replies; 13+ messages in thread
From: Aaron Lauterer @ 2022-08-18 15:22 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler



On 8/17/22 13:42, Fabian Grünbichler wrote:
> On July 15, 2022 1:58 pm, Aaron Lauterer wrote:
[...]
>> -	my $worker = sub {
>> -	    my $path = "/mnt/pve/$name";
>> -	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
>> -	    my $mountunitpath = "/etc/systemd/system/$mountunitname";
>> +	my $mounted = PVE::Diskmanage::mounted_paths();
>> +	die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};
> 
> nit: the mountpoint existing is not the issue (something already being
> mounted there is!). also, where does the $1 come from?

good point and thanks for catching the $1
> 
>> +	die "a systemd mount unit already exists: ${mountunitpath}\n" if -e $mountunitpath;
> 
> could check if it's identical to the one we'd generate (in the spirit of
> patch #3 ;))

I looked into it, depending on how hard we want to match the mount unit, this 
could be a bit hard. It contains the /dev/disk/by-uuid/... path which will not 
be the same as it changes with each FS creation (IIUC).
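
For reference, a sketch of such a generated mount unit (exact wording from
memory, UUID elided) - only the What= line changes between runs:

    [Unit]
    Description=Mount storage 'foo' under /mnt/pve

    [Mount]
    What=/dev/disk/by-uuid/...
    Where=/mnt/pve/foo
    Type=ext4
    Options=defaults

    [Install]
    WantedBy=multi-user.target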

> 
>>   
>> +	my $worker = sub {
>>   	    PVE::Diskmanage::locked_disk_action(sub {
>>   		PVE::Diskmanage::assert_disk_unused($dev);
>>   
>> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
>> index 6e4331a..a27afe2 100644
>> --- a/PVE/API2/Disks/LVM.pm
>> +++ b/PVE/API2/Disks/LVM.pm
>> @@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
>>   	PVE::Diskmanage::assert_disk_unused($dev);
>>   	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>>   
>> +	die "volume group with name '${name}' already exists\n"
>> +	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> 
> probably better off inside the worker, since `vgs` might take a while
> (although we also use it in the index API call in this module..)

From a GUI perspective: putting it in a worker would mean the user hits okay and
only then sees the failed task, right? Keeping it as is will result in an error
popping up when clicking ok/create, and the user can edit the name instead of
starting all over again. Though, if `vgs` really needs a bit, that error popping
up could take a moment or two.

[...]

>>   sub preparetree {
>> @@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
>>   	my $user = $rpcenv->get_user();
>>   
>>   	my $name = $param->{name};
>> +	my $node = $param->{node};
> 
> nit: please also change the usage further below in this API handler if
> you do this

what exactly do you mean? defining local variables for $param->{..} ones?

> 
>>   	my $devs = [PVE::Tools::split_list($param->{devices})];
>>   	my $raidlevel = $param->{raidlevel};
>>   	my $compression = $param->{compression} // 'on';
>> @@ -346,6 +351,10 @@ __PACKAGE__->register_method ({
>>   	}
>>   	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>>   
>> +	my $pools = get_pool_data();
> 
> and also here
> 
>> +	die "pool '${name}' already exists on node '$node'\n"
>> +	    if grep { $_->{name} eq $name } @{$pools};
>> +
>>   	my $numdisks = scalar(@$devs);
>>   	my $mindisks = {
>>   	    single => 1,
>> -- 
>> 2.30.2
>>
>>
>>

* Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
  2022-08-18 15:22     ` Aaron Lauterer
@ 2022-08-18 15:31       ` Aaron Lauterer
  2022-08-19  8:21         ` Fabian Grünbichler
  2022-08-19  8:20       ` Fabian Grünbichler
  1 sibling, 1 reply; 13+ messages in thread
From: Aaron Lauterer @ 2022-08-18 15:31 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler



On 8/18/22 17:22, Aaron Lauterer wrote:
> 
> 
> On 8/17/22 13:42, Fabian Grünbichler wrote:[..]
>>> +    die "a systemd mount unit already exists: ${mountunitpath}\n" if -e 
>>> $mountunitpath;
>>
>> could check if it's identical to the one we'd generate (in the spirit of
>> patch #3 ;))
> 
> I looked into it, depending on how hard we want to match the mount unit, this 
> could be a bit hard. It contains the /dev/disk/by-uuid/... path which will not 
> be the same as it changes with each FS creation (IIUC).

The question is whether it is a good idea to have the check, since there is no
easy way for the user to remedy the problem without doing a manual `rm
/etc/systemd/system/foo.mount`.

Putting more work into improving the whole storage mgmt situation is of course 
also something we could do.

[...]
> 
>>>   sub preparetree {
>>> @@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
>>>       my $user = $rpcenv->get_user();
>>>       my $name = $param->{name};
>>> +    my $node = $param->{node};
>>
>> nit: please also change the usage further below in this API handler if
>> you do this
> 
> what exactly do you mean? defining local variables for $param->{..} ones?
> 

okay I think I got it. was confused by looking at another part of the code.


* Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
  2022-08-18 15:22     ` Aaron Lauterer
  2022-08-18 15:31       ` Aaron Lauterer
@ 2022-08-19  8:20       ` Fabian Grünbichler
  2022-08-19  9:28         ` Aaron Lauterer
  1 sibling, 1 reply; 13+ messages in thread
From: Fabian Grünbichler @ 2022-08-19  8:20 UTC (permalink / raw)
  To: Aaron Lauterer, Proxmox VE development discussion

On August 18, 2022 5:22 pm, Aaron Lauterer wrote:
> 
> 
> On 8/17/22 13:42, Fabian Grünbichler wrote:
>> On July 15, 2022 1:58 pm, Aaron Lauterer wrote:
> [...]
>>> -	my $worker = sub {
>>> -	    my $path = "/mnt/pve/$name";
>>> -	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
>>> -	    my $mountunitpath = "/etc/systemd/system/$mountunitname";
>>> +	my $mounted = PVE::Diskmanage::mounted_paths();
>>> +	die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};
>> 
>> nit: the mountpoint existing is not the issue (something already being
>> mounted there is!). also, where does the $1 come from?
> 
> good point and thanks for catching the $1
>> 
>>> +	die "a systemd mount unit already exists: ${mountunitpath}\n" if -e $mountunitpath;
>> 
>> could check if it's identical to the one we'd generate (in the spirit of
>> patch #3 ;))
> 
> I looked into it, depending on how hard we want to match the mount unit, this 
> could be a bit hard. It contains the /dev/disk/by-uuid/... path which will not 
> be the same as it changes with each FS creation (IIUC).
> 

makes sense, and no, we definitely don't want any fancy heuristics for 
"close enough to ignore it already exists" ;) so this can stay as it is.

>> 
>>>   
>>> +	my $worker = sub {
>>>   	    PVE::Diskmanage::locked_disk_action(sub {
>>>   		PVE::Diskmanage::assert_disk_unused($dev);
>>>   
>>> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
>>> index 6e4331a..a27afe2 100644
>>> --- a/PVE/API2/Disks/LVM.pm
>>> +++ b/PVE/API2/Disks/LVM.pm
>>> @@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
>>>   	PVE::Diskmanage::assert_disk_unused($dev);
>>>   	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>>>   
>>> +	die "volume group with name '${name}' already exists\n"
>>> +	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
>> 
>> probably better off inside the worker, since `vgs` might take a while
>> (although we also use it in the index API call in this module..)
> 
> From a GUI perspective: putting it in a worker would mean the user hits okay and
> only then sees the failed task, right? Keeping it as is will result in an error
> popping up when clicking ok/create, and the user can edit the name instead of
> starting all over again. Though, if `vgs` really needs a bit, that error popping
> up could take a moment or two.

yes, putting it in the worker means no early-failure. but not putting it 
in the worker potentially means this API endpoint cannot be called on 
systems with busy LVM at all (30s timeout for API requests!). so 
early-failure checks should almost always only do "logical" things that 
are cheap (like reading a conf file and checking invariants), and 
nothing that can reasonably block for some time (like storage 
operations, starting guests, ..). I know we are not 100% consistent 
there (the worst offender is probably the storage content API endpoint 
;)), but we should try to not introduce more problems of that fashion 
but rather work on removing the existing ones.

> 
> [...]
> 
>>>   sub preparetree {
>>> @@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
>>>   	my $user = $rpcenv->get_user();
>>>   
>>>   	my $name = $param->{name};
>>> +	my $node = $param->{node};
>> 
>> nit: please also change the usage further below in this API handler if
>> you do this
> 
> what exactly do you mean? defining local variables for $param->{..} ones?

the usage of $param->{node} below in this handler should (also) switch 
to $node - we don't want two places with the same information, that can 
cause subtle breakage in the future when only one of them gets 
modified/..

> 
>> 
>>>   	my $devs = [PVE::Tools::split_list($param->{devices})];
>>>   	my $raidlevel = $param->{raidlevel};
>>>   	my $compression = $param->{compression} // 'on';
>>> @@ -346,6 +351,10 @@ __PACKAGE__->register_method ({
>>>   	}
>>>   	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>>>   
>>> +	my $pools = get_pool_data();
>> 
>> and also here
>> 
>>> +	die "pool '${name}' already exists on node '$node'\n"
>>> +	    if grep { $_->{name} eq $name } @{$pools};
>>> +
>>>   	my $numdisks = scalar(@$devs);
>>>   	my $mindisks = {
>>>   	    single => 1,
>>> -- 
>>> 2.30.2


* Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
  2022-08-18 15:31       ` Aaron Lauterer
@ 2022-08-19  8:21         ` Fabian Grünbichler
  2022-08-19  9:29           ` Aaron Lauterer
  0 siblings, 1 reply; 13+ messages in thread
From: Fabian Grünbichler @ 2022-08-19  8:21 UTC (permalink / raw)
  To: Proxmox VE development discussion

On August 18, 2022 5:31 pm, Aaron Lauterer wrote:
> 
> 
> On 8/18/22 17:22, Aaron Lauterer wrote:
>> 
>> 
>> On 8/17/22 13:42, Fabian Grünbichler wrote:[..]
>>>> +    die "a systemd mount unit already exists: ${mountunitpath}\n" if -e 
>>>> $mountunitpath;
>>>
>>> could check if it's identical to the one we'd generate (in the spirit of
>>> patch #3 ;))
>> 
>> I looked into it, depending on how hard we want to match the mount unit, this 
>> could be a bit hard. It contains the /dev/disk/by-uuid/... path which will not 
>> be the same as it changes with each FS creation (IIUC).
> 
> The question is whether it is a good idea to have the check, since there is no
> easy way for the user to remedy the problem without doing a manual `rm
> /etc/systemd/system/foo.mount`.

*could* be solved by having a force parameter I guess? not sure that's a 
good idea, just throwing it out there ;)

> Putting more work into improving the whole storage mgmt situation is of course 
> also something we could do.
> 
> [...]
>> 
>>>>   sub preparetree {
>>>> @@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
>>>>       my $user = $rpcenv->get_user();
>>>>       my $name = $param->{name};
>>>> +    my $node = $param->{node};
>>>
>>> nit: please also change the usage further below in this API handler if
>>> you do this
>> 
>> what exactly do you mean? defining local variables for $param->{..} ones?
>> 
> 
> okay I think I got it. was confused by looking at another part of the code.
> 


* Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
  2022-08-19  8:20       ` Fabian Grünbichler
@ 2022-08-19  9:28         ` Aaron Lauterer
  0 siblings, 0 replies; 13+ messages in thread
From: Aaron Lauterer @ 2022-08-19  9:28 UTC (permalink / raw)
  To: Fabian Grünbichler, Proxmox VE development discussion



On 8/19/22 10:20, Fabian Grünbichler wrote:
> On August 18, 2022 5:22 pm, Aaron Lauterer wrote:
>>
>>>>    
>>>> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
>>>> index 6e4331a..a27afe2 100644
>>>> --- a/PVE/API2/Disks/LVM.pm
>>>> +++ b/PVE/API2/Disks/LVM.pm
>>>> @@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
>>>>    	PVE::Diskmanage::assert_disk_unused($dev);
>>>>    	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>>>>    
>>>> +	die "volume group with name '${name}' already exists\n"
>>>> +	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
>>>
>>> probably better off inside the worker, since `vgs` might take a while
>>> (although we also use it in the index API call in this module..)
>>
>> From a GUI perspective: putting it in a worker would mean the user hits okay and
>> only then sees the failed task, right? Keeping it as is will result in an error
>> popping up when clicking ok/create, and the user can edit the name instead of
>> starting all over again. Though, if `vgs` really needs a bit, that error popping
>> up could take a moment or two.
> 
> yes, putting it in the worker means no early-failure. but not putting it
> in the worker potentially means this API endpoint cannot be called on
> systems with busy LVM at all (30s timeout for API requests!). so
> early-failure checks should almost always only do "logical" things that
> are cheap (like reading a conf file and checking invariants), and
> nothing that can reasonably block for some time (like storage
> operations, starting guests, ..). I know we are not 100% consistent
> there (the worst offender is probably the storage content API endpoint
> ;)), but we should try to not introduce more problems of that fashion
> but rather work on removing the existing ones.

good point. I'll put the check inside the worker.


* Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
  2022-08-19  8:21         ` Fabian Grünbichler
@ 2022-08-19  9:29           ` Aaron Lauterer
  0 siblings, 0 replies; 13+ messages in thread
From: Aaron Lauterer @ 2022-08-19  9:29 UTC (permalink / raw)
  To: pve-devel



On 8/19/22 10:21, Fabian Grünbichler wrote:
> On August 18, 2022 5:31 pm, Aaron Lauterer wrote:
>>
>>
>> On 8/18/22 17:22, Aaron Lauterer wrote:
>>>
>>>
>>> On 8/17/22 13:42, Fabian Grünbichler wrote:[..]
>>>>> +    die "a systemd mount unit already exists: ${mountunitpath}\n" if -e
>>>>> $mountunitpath;
>>>>
>>>> could check if it's identical to the one we'd generate (in the spirit of
>>>> patch #3 ;))
>>>
>>> I looked into it, depending on how hard we want to match the mount unit, this
>>> could be a bit hard. It contains the /dev/disk/by-uuid/... path which will not
>>> be the same as it changes with each FS creation (IIUC).
>>
>> The question is, if it is a good idea to have the check since there is no easy
>> way for the user to remedy the problem without doing a manual `rm
>> /etc/systemd/system/foo.mount`.
> 
> *could* be solved by having a force parameter I guess? not sure that's a
> good idea, just throwing it out there ;)
> 

A short off-list discussion brought up that we can indeed already remove
dangling mount units, so keeping the check is okay, since users can quite
easily remove it themselves.


Thread overview: 13+ messages
2022-07-15 11:58 [pve-devel] [PATCH storage v2 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths Aaron Lauterer
2022-08-17 11:35   ` Fabian Grünbichler
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use Aaron Lauterer
2022-08-17 11:42   ` Fabian Grünbichler
2022-08-18 15:22     ` Aaron Lauterer
2022-08-18 15:31       ` Aaron Lauterer
2022-08-19  8:21         ` Fabian Grünbichler
2022-08-19  9:29           ` Aaron Lauterer
2022-08-19  8:20       ` Fabian Grünbichler
2022-08-19  9:28         ` Aaron Lauterer
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
2022-08-17 11:53   ` Fabian Grünbichler
