public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes
@ 2022-08-19 15:01 Aaron Lauterer
  2022-08-19 15:01 ` [pve-devel] [PATCH storage v3 1/3] diskmanage: add mounted_paths Aaron Lauterer
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Aaron Lauterer @ 2022-08-19 15:01 UTC (permalink / raw)
  To: pve-devel

This series deals mainly with two things. The first is adding more
checks before actually setting up a new local storage, to avoid leaving
behind half-provisioned disks in case a storage (LVM, ZFS, ...) of the
same name already exists.
The second is changing the behavior of the "Add Storage" flag. It can
now stay enabled, and if a storage of the same name already exists in
the cluster, the local storage is created and the node is added to the
nodes list of the PVE storage config.

The first patch is a prerequisite to be able to check if a mount path
for the Directory storage type is already mounted.
The second patch implements the actual checks.
The third patch adds the changed behavior for the "Add Storage" part.

More in the actual patches.
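
As a usage sketch (node, pool and disk names below are made up), with
this series applied, the following now also works on a second node:

    # first node, as before
    pvesh create /nodes/pve1/disks/zfs --name tank --devices /dev/sdb \
        --raidlevel single --add_storage 1

    # second node: previously, "Add Storage" had to be disabled here,
    # since the storage ID was already defined
    pvesh create /nodes/pve2/disks/zfs --name tank --devices /dev/sdb \
        --raidlevel single --add_storage 1

The second call now extends the 'nodes' list of the existing 'tank'
storage entry instead of erroring out.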

changes since
v2:
- add comments about return value
- moved VGs checks into worker
- moved storage parameter checks into create_or_update with dry run
option

v1:
- incorporated changes recommended by Dominik & Fabian E
- fixed lines that ended up in the wrong patches due to mistakes while
reordering the patches

Aaron Lauterer (3):
  diskmanage: add mounted_paths
  disks: die if storage name is already in use
  disks: allow add_storage for already configured local storage

 PVE/API2/Disks/Directory.pm |  49 +++++++++++-----
 PVE/API2/Disks/LVM.pm       |  39 +++++++++----
 PVE/API2/Disks/LVMThin.pm   |  40 +++++++++----
 PVE/API2/Disks/ZFS.pm       | 112 ++++++++++++++++++++++--------
 PVE/API2/Storage/Config.pm  |  51 ++++++++++++++++
 PVE/Diskmanage.pm           |  13 +++++
 6 files changed, 223 insertions(+), 81 deletions(-)

-- 
2.30.2






* [pve-devel] [PATCH storage v3 1/3] diskmanage: add mounted_paths
  2022-08-19 15:01 [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
@ 2022-08-19 15:01 ` Aaron Lauterer
  2022-08-19 15:01 ` [pve-devel] [PATCH storage v3 2/3] disks: die if storage name is already in use Aaron Lauterer
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Aaron Lauterer @ 2022-08-19 15:01 UTC (permalink / raw)
  To: pve-devel

returns a hash map of mounted paths and their backing devices
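
For illustration, the returned hash map might look like this (paths and
devices are examples, not actual output):

    {
        '/' => '/dev/mapper/pve-root',
        '/mnt/pve/backup' => '/dev/sdb1',
    }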

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since
v2: added comment about return value
v1: dropped limit to /dev path, returning all mounted paths now

 PVE/Diskmanage.pm | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 8ed7a8b..81df67f 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -499,6 +499,19 @@ sub mounted_blockdevs {
     return $mounted;
 }
 
+# returns hashmap of abs mount path -> first part of /proc/mounts (what)
+sub mounted_paths {
+    my $mounted = {};
+
+    my $mounts = PVE::ProcFSTools::parse_proc_mounts();
+
+    foreach my $mount (@$mounts) {
+	$mounted->{abs_path($mount->[1])} = $mount->[0];
+    }
+
+    return $mounted;
+}
+
 sub get_disks {
     my ($disks, $nosmart, $include_partitions) = @_;
     my $disklist = {};
-- 
2.30.2






* [pve-devel] [PATCH storage v3 2/3] disks: die if storage name is already in use
  2022-08-19 15:01 [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
  2022-08-19 15:01 ` [pve-devel] [PATCH storage v3 1/3] diskmanage: add mounted_paths Aaron Lauterer
@ 2022-08-19 15:01 ` Aaron Lauterer
  2022-08-19 15:01 ` [pve-devel] [PATCH storage v3 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
  2022-09-05  9:50 ` [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes Dominik Csapak
  3 siblings, 0 replies; 6+ messages in thread
From: Aaron Lauterer @ 2022-08-19 15:01 UTC (permalink / raw)
  To: pve-devel

If a storage of that type and name already exists (LVM, zpool, ...) but
there is no Proxmox VE storage config for it, the creation can fail
midway due to checks done by the underlying storage layer itself. This
in turn can leave behind disks that are already partitioned, which
users would need to clean up themselves.

By adding checks early on, not only against the PVE storage config, but
also against the actual storage layer itself, we can die before we
touch any disk.

For ZFS, the logic to gather pool data is moved into its own function
so it can be called from both the index API endpoint and the new check
in the create endpoint.
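
To illustrate (with made-up storage and node names), the new checks
produce errors like the following before any disk is touched:

    volume group with name 'foo' already exists on node 'pve1'
    pool 'tank' already exists on node 'pve1'
    the path for 'backup' is already mounted: /mnt/pve/backup (/dev/sdb1)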

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since
v2:
- moved checking against existing VGs into worker as it could be a longer
operation
- improved error msg if dir path is already mounted
- smaller code style improvements
v1:
- moved zfs pool gathering from index api endpoint into its own function
to easily call it for the check as well
- zfs pool check is now using perl grep
- removed check whether the mounted path is a /dev one
- added check if a mount unit exists
- moved definitions of $path, $mountunitname and $mountunitpath out of
the worker

 PVE/API2/Disks/Directory.pm | 12 ++++--
 PVE/API2/Disks/LVM.pm       |  2 +
 PVE/API2/Disks/LVMThin.pm   |  3 ++
 PVE/API2/Disks/ZFS.pm       | 79 +++++++++++++++++++++----------------
 4 files changed, 57 insertions(+), 39 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index df63ba9..e2272fb 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -203,16 +203,20 @@ __PACKAGE__->register_method ({
 	my $dev = $param->{device};
 	my $node = $param->{node};
 	my $type = $param->{filesystem} // 'ext4';
+	my $path = "/mnt/pve/$name";
+	my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
+	my $mountunitpath = "/etc/systemd/system/$mountunitname";
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
-	my $worker = sub {
-	    my $path = "/mnt/pve/$name";
-	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
-	    my $mountunitpath = "/etc/systemd/system/$mountunitname";
+	my $mounted = PVE::Diskmanage::mounted_paths();
+	die "the path for '${name}' is already mounted: ${path} ($mounted->{$path})\n"
+	    if $mounted->{$path};
+	die "a systemd mount unit already exists: ${mountunitpath}\n" if -e $mountunitpath;
 
+	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
 
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index 6e4331a..ef341d1 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -155,6 +155,8 @@ __PACKAGE__->register_method ({
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
+		die "volume group with name '${name}' already exists on node '${node}'\n"
+		    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
 
 		if (PVE::Diskmanage::is_partition($dev)) {
 		    eval { PVE::Diskmanage::change_parttype($dev, '8E00'); };
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 58ecb37..9a25e7c 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -114,6 +114,9 @@ __PACKAGE__->register_method ({
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
 
+		die "volume group with name '${name}' already exists on node '${node}'\n"
+		    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
+
 		if (PVE::Diskmanage::is_partition($dev)) {
 		    eval { PVE::Diskmanage::change_parttype($dev, '8E00'); };
 		    warn $@ if $@;
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index eeb9f48..27873cc 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -18,6 +18,43 @@ use base qw(PVE::RESTHandler);
 my $ZPOOL = '/sbin/zpool';
 my $ZFS = '/sbin/zfs';
 
+sub get_pool_data {
+    if (!-f $ZPOOL) {
+	die "zfsutils-linux not installed\n";
+    }
+
+    my $propnames = [qw(name size alloc free frag dedup health)];
+    my $numbers = {
+	size => 1,
+	alloc => 1,
+	free => 1,
+	frag => 1,
+	dedup => 1,
+    };
+
+    my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
+
+    my $pools = [];
+
+    run_command($cmd, outfunc => sub {
+	my ($line) = @_;
+
+	    my @props = split('\s+', trim($line));
+	    my $pool = {};
+	    for (my $i = 0; $i < scalar(@$propnames); $i++) {
+		if ($numbers->{$propnames->[$i]}) {
+		    $pool->{$propnames->[$i]} = $props[$i] + 0;
+		} else {
+		    $pool->{$propnames->[$i]} = $props[$i];
+		}
+	    }
+
+	    push @$pools, $pool;
+    });
+
+    return $pools;
+}
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
@@ -74,40 +111,7 @@ __PACKAGE__->register_method ({
     code => sub {
 	my ($param) = @_;
 
-	if (!-f $ZPOOL) {
-	    die "zfsutils-linux not installed\n";
-	}
-
-	my $propnames = [qw(name size alloc free frag dedup health)];
-	my $numbers = {
-	    size => 1,
-	    alloc => 1,
-	    free => 1,
-	    frag => 1,
-	    dedup => 1,
-	};
-
-	my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
-
-	my $pools = [];
-
-	run_command($cmd, outfunc => sub {
-	    my ($line) = @_;
-
-		my @props = split('\s+', trim($line));
-		my $pool = {};
-		for (my $i = 0; $i < scalar(@$propnames); $i++) {
-		    if ($numbers->{$propnames->[$i]}) {
-			$pool->{$propnames->[$i]} = $props[$i] + 0;
-		    } else {
-			$pool->{$propnames->[$i]} = $props[$i];
-		    }
-		}
-
-		push @$pools, $pool;
-	});
-
-	return $pools;
+	return get_pool_data();
     }});
 
 sub preparetree {
@@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
 	my $user = $rpcenv->get_user();
 
 	my $name = $param->{name};
+	my $node = $param->{node};
 	my $devs = [PVE::Tools::split_list($param->{devices})];
 	my $raidlevel = $param->{raidlevel};
 	my $compression = $param->{compression} // 'on';
@@ -346,6 +351,10 @@ __PACKAGE__->register_method ({
 	}
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	my $pools = get_pool_data();
+	die "pool '${name}' already exists on node '${node}'\n"
+	    if grep { $_->{name} eq $name } @{$pools};
+
 	my $numdisks = scalar(@$devs);
 	my $mindisks = {
 	    single => 1,
@@ -428,7 +437,7 @@ __PACKAGE__->register_method ({
 		    pool => $name,
 		    storage => $name,
 		    content => 'rootdir,images',
-		    nodes => $param->{node},
+		    nodes => $node,
 		};
 
 		PVE::API2::Storage::Config->create($storage_params);
-- 
2.30.2






* [pve-devel] [PATCH storage v3 3/3] disks: allow add_storage for already configured local storage
  2022-08-19 15:01 [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
  2022-08-19 15:01 ` [pve-devel] [PATCH storage v3 1/3] diskmanage: add mounted_paths Aaron Lauterer
  2022-08-19 15:01 ` [pve-devel] [PATCH storage v3 2/3] disks: die if storage name is already in use Aaron Lauterer
@ 2022-08-19 15:01 ` Aaron Lauterer
  2022-09-05  9:50 ` [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes Dominik Csapak
  3 siblings, 0 replies; 6+ messages in thread
From: Aaron Lauterer @ 2022-08-19 15:01 UTC (permalink / raw)
  To: pve-devel

One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, one can only leave the "Add Storage" option enabled the first
time.

On any following node, this option had to be disabled and the new node
manually added to the list of nodes for that storage.

This patch changes the behavior. If a storage of the same name already
exists, it verifies that the necessary parameters match the already
existing one.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
If there is no nodes list, nothing else needs to be done, and the GUI
will stop showing the question mark for the storage that was configured,
but did not exist on this node until now.
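
As an illustration (storage and node names made up), with an existing
entry in /etc/pve/storage.cfg like

    zfspool: tank
        pool tank
        content rootdir,images
        nodes pve1

creating the pool 'tank' on node 'pve2' with "Add Storage" enabled now
updates the entry to 'nodes pve1,pve2' instead of failing.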

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since
v2:
- moved assert_sid_usable_on_node code into create_or_update
  this means we now pass a list of parameter names to verify and an
  optional dry run parameter, so we can run it early just for the checks
- reordered code, especially defining $storage_params early on
- improved detection of whether the storage is already defined for the
node by treating an empty node list as configured (on all nodes)

v1:
- restructured the logic a bit. The checks still need to be done the
    same, but the create or update logic is now in its own
    API2::Storage::Config->create_or_update function, which then calls
    either the create or the update API directly. This avoids a lot of
    code repetition.
- Storage::assert_sid_usable_on_node additionally checks whether the
parameter is configured in the storage and fails if not. Otherwise we
would have gotten warnings about using an undef in a string concat due
to how the other error was presented.
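
The resulting call pattern in the patches below is roughly:

    # early, before the worker: only run the checks (dry run)
    PVE::API2::Storage::Config->create_or_update(
        $name, $node, $storage_params, $verify_params, 1);

    # in the worker, once the storage was set up: create or update
    PVE::API2::Storage::Config->create_or_update(
        $name, $node, $storage_params, $verify_params);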

 PVE/API2/Disks/Directory.pm | 37 +++++++++++++++++++--------
 PVE/API2/Disks/LVM.pm       | 37 +++++++++++++++++++--------
 PVE/API2/Disks/LVMThin.pm   | 37 +++++++++++++++++++--------
 PVE/API2/Disks/ZFS.pm       | 35 +++++++++++++++++--------
 PVE/API2/Storage/Config.pm  | 51 +++++++++++++++++++++++++++++++++++++
 5 files changed, 154 insertions(+), 43 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index e2272fb..4fdb068 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -209,7 +209,26 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'dir',
+	    storage => $name,
+	    content => 'rootdir,images,iso,backup,vztmpl,snippets',
+	    is_mountpoint => 1,
+	    path => $path,
+	    nodes => $node,
+	};
+	my $verify_params = [qw(path)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $mounted = PVE::Diskmanage::mounted_paths();
 	die "the path for '${name}' is already mounted: ${path} ($mounted->{$path})\n"
@@ -286,16 +305,12 @@ __PACKAGE__->register_method ({
 		run_command(['systemctl', 'start', $mountunitname]);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'dir',
-			storage => $name,
-			content => 'rootdir,images,iso,backup,vztmpl,snippets',
-			is_mountpoint => 1,
-			path => $path,
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update(
+			$name,
+			$node,
+			$storage_params,
+			$verify_params,
+		    );
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index ef341d1..fe87545 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -150,7 +150,26 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'lvm',
+	    vgname => $name,
+	    storage => $name,
+	    content => 'rootdir,images',
+	    shared => 0,
+	    nodes => $node,
+	};
+	my $verify_params = [qw(vgname)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
@@ -168,16 +187,12 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvm',
-			vgname => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			shared => 0,
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update(
+			$name,
+			$node,
+			$storage_params,
+			$verify_params,
+		    );
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 9a25e7c..038310a 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -108,7 +108,26 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'lvmthin',
+	    vgname => $name,
+	    thinpool => $name,
+	    storage => $name,
+	    content => 'rootdir,images',
+	    nodes => $node,
+	};
+	my $verify_params = [qw(vgname thinpool)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
@@ -147,16 +166,12 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvmthin',
-			vgname => $name,
-			thinpool => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update(
+			$name,
+			$node,
+			$storage_params,
+			$verify_params,
+		    );
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index 27873cc..dd59375 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -342,14 +342,32 @@ __PACKAGE__->register_method ({
 	my $name = $param->{name};
 	my $node = $param->{node};
 	my $devs = [PVE::Tools::split_list($param->{devices})];
 	my $raidlevel = $param->{raidlevel};
 	my $compression = $param->{compression} // 'on';
 
 	for my $dev (@$devs) {
 	    $dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	    PVE::Diskmanage::assert_disk_unused($dev);
 	}
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'zfspool',
+	    pool => $name,
+	    storage => $name,
+	    content => 'rootdir,images',
+	    nodes => $node,
+	};
+	my $verify_params = [qw(pool)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $pools = get_pool_data();
 	die "pool '${name}' already exists on node '${node}'\n"
@@ -432,15 +450,12 @@ __PACKAGE__->register_method ({
 	    PVE::Diskmanage::udevadm_trigger($devs->@*);
 
 	    if ($param->{add_storage}) {
-		my $storage_params = {
-		    type => 'zfspool',
-		    pool => $name,
-		    storage => $name,
-		    content => 'rootdir,images',
-		    nodes => $node,
-		};
-
-		PVE::API2::Storage::Config->create($storage_params);
+		PVE::API2::Storage::Config->create_or_update(
+		    $name,
+		    $node,
+		    $storage_params,
+		    $verify_params,
+		);
 	    }
 	};
 
diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 6bd770e..821db21 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -65,6 +65,57 @@ sub cleanup_storages_for_node {
     }
 }
 
+# Decides if a storage needs to be created or updated. An update is needed if
+# the storage already has a node list configured; the current node is then
+# added to it. The verify_params parameter is an array of parameter names that
+# need to match if there already is a storage config of the same name present.
+# This is mainly intended for local storage types, as certain parameters need
+# to be the same. For example 'pool' for ZFS, 'vgname' for LVM, ...
+# Set the dryrun parameter to only verify the parameters without updating or
+# creating the storage.
+sub create_or_update {
+    my ($self, $sid, $node, $storage_params, $verify_params, $dryrun) = @_;
+
+    my $cfg = PVE::Storage::config();
+    my $scfg = PVE::Storage::storage_config($cfg, $sid, 1);
+
+    if ($scfg) {
+	die "storage config for '${sid}' exists but no parameters to verify were provided\n"
+	    if !$verify_params;
+
+	$node = PVE::INotify::nodename() if !$node || ($node eq 'localhost');
+	die "Storage ID '${sid}' already exists on node ${node}\n"
+	    if !defined($scfg->{nodes}) || $scfg->{nodes}->{$node};
+
+	push @$verify_params, 'type';
+	for my $key (@$verify_params) {
+	    if (!defined($scfg->{$key})) {
+		die "Option '${key}' is not configured for storage '$sid', "
+		    ."expected it to be '$storage_params->{$key}'\n";
+	    }
+	    if ($storage_params->{$key} ne $scfg->{$key}) {
+		die "Option '${key}' ($storage_params->{$key}) does not match "
+		    ."existing storage configuration '$scfg->{$key}'\n";
+	    }
+	}
+    }
+
+    if (!$dryrun) {
+	if ($scfg) {
+	    if ($scfg->{nodes}) {
+		$scfg->{nodes}->{$node} = 1;
+		$self->update({
+		    nodes => join(',', sort keys $scfg->{nodes}->%*),
+		    storage => $sid,
+		});
+		print "Added '${node}' to nodes for storage '${sid}'\n";
+	    }
+	} else {
+	    $self->create($storage_params);
+	}
+    }
+}
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
-- 
2.30.2






* Re: [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes
  2022-08-19 15:01 [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
                   ` (2 preceding siblings ...)
  2022-08-19 15:01 ` [pve-devel] [PATCH storage v3 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
@ 2022-09-05  9:50 ` Dominik Csapak
  2022-09-13  8:06   ` [pve-devel] applied-series: " Fabian Grünbichler
  3 siblings, 1 reply; 6+ messages in thread
From: Dominik Csapak @ 2022-09-05  9:50 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

Code LGTM and tested fine
consider the series:

Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>





* [pve-devel] applied-series: [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes
  2022-09-05  9:50 ` [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes Dominik Csapak
@ 2022-09-13  8:06   ` Fabian Grünbichler
  0 siblings, 0 replies; 6+ messages in thread
From: Fabian Grünbichler @ 2022-09-13  8:06 UTC (permalink / raw)
  To: Aaron Lauterer, Proxmox VE development discussion

with the following added - thanks :)

On September 5, 2022 11:50 am, Dominik Csapak wrote:
> Code LGTM and tested fine
> consider the series:
> 
> Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
> Tested-by: Dominik Csapak <d.csapak@proxmox.com>
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 




