public inbox for pve-devel@lists.proxmox.com
From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH storage v3 3/3] disks: allow add_storage for already configured local storage
Date: Fri, 19 Aug 2022 17:01:21 +0200
Message-ID: <20220819150121.1633021-4-a.lauterer@proxmox.com>
In-Reply-To: <20220819150121.1633021-1-a.lauterer@proxmox.com>

One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, one can only leave the "Add Storage" option enabled the first
time.

On any following node, the option had to be disabled and the new node
manually added to the list of nodes for that storage.

This patch changes the behavior. If a storage with the same name already
exists, it verifies that the necessary parameters match the existing
configuration. Then, if the 'nodes' parameter is set, it adds the
current node and updates the storage config. If there is no nodes list,
nothing else needs to be done, and the GUI will stop showing the
question mark for the storage that was configured but did not yet exist
on this node.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since
v2:
- moved the assert_sid_usable_on_node code into create_or_update; this
  means we now pass a list of parameter names to verify and an optional
  dry-run parameter, so we can call it early just for the checks
- reordered the code, especially defining $storage_params early on
- improved the detection of whether the storage is already defined for
  the node by treating an empty node list as configured on all nodes

v1:
- restructured the logic a bit. The checks still need to be done the
  same way, but the create-or-update logic now lives in its own
  API2::Storage::Config->create_or_update function, which then calls
  either the create or the update API directly. This reduces a lot of
  code repetition.
- Storage::assert_sid_usable_on_node now additionally checks whether a
  parameter is configured in the existing storage and fails if not.
  Otherwise we would have gotten warnings about using an undef in a
  string concat due to how the other error was presented.
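
As a quick illustration of the resulting flow, every disk-create
endpoint now drives the new helper twice. The following is a rough
sketch along the lines of the ZFS hunk below ($name, $node and $param
as in the endpoint code; worker setup and error handling trimmed):

    my $storage_params = {
        type    => 'zfspool',
        pool    => $name,
        storage => $name,
        content => 'rootdir,images',
        nodes   => $node,
    };
    my $verify_params = [qw(pool)];

    # early dry run: dies before any disk is touched if a storage with
    # this name already exists but does not match the given parameters
    PVE::API2::Storage::Config->create_or_update(
        $name, $node, $storage_params, $verify_params, 1,
    ) if $param->{add_storage};

    # ... create the actual pool/VG/thin pool/mount unit ...

    # real run: creates the storage config, or adds $node to the node
    # list of the existing storage
    PVE::API2::Storage::Config->create_or_update(
        $name, $node, $storage_params, $verify_params,
    ) if $param->{add_storage};

Doing the dry run first means a parameter mismatch aborts the API call
before anything is created on disk.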

 PVE/API2/Disks/Directory.pm | 37 +++++++++++++++++++--------
 PVE/API2/Disks/LVM.pm       | 37 +++++++++++++++++++--------
 PVE/API2/Disks/LVMThin.pm   | 37 +++++++++++++++++++--------
 PVE/API2/Disks/ZFS.pm       | 35 +++++++++++++++++--------
 PVE/API2/Storage/Config.pm  | 51 +++++++++++++++++++++++++++++++++++++
 5 files changed, 154 insertions(+), 43 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index e2272fb..4fdb068 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -209,7 +209,26 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'dir',
+	    storage => $name,
+	    content => 'rootdir,images,iso,backup,vztmpl,snippets',
+	    is_mountpoint => 1,
+	    path => $path,
+	    nodes => $node,
+	};
+	my $verify_params = [qw(path)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $mounted = PVE::Diskmanage::mounted_paths();
 	die "the path for '${name}' is already mounted: ${path} ($mounted->{$path})\n"
@@ -286,16 +305,12 @@ __PACKAGE__->register_method ({
 		run_command(['systemctl', 'start', $mountunitname]);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'dir',
-			storage => $name,
-			content => 'rootdir,images,iso,backup,vztmpl,snippets',
-			is_mountpoint => 1,
-			path => $path,
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update(
+			$name,
+			$node,
+			$storage_params,
+			$verify_params,
+		    );
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index ef341d1..fe87545 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -150,7 +150,26 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'lvm',
+	    vgname => $name,
+	    storage => $name,
+	    content => 'rootdir,images',
+	    shared => 0,
+	    nodes => $node,
+	};
+	my $verify_params = [qw(vgname)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
@@ -168,16 +187,12 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvm',
-			vgname => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			shared => 0,
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update(
+			$name,
+			$node,
+			$storage_params,
+			$verify_params,
+		    );
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 9a25e7c..038310a 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -108,7 +108,26 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'lvmthin',
+	    vgname => $name,
+	    thinpool => $name,
+	    storage => $name,
+	    content => 'rootdir,images',
+	    nodes => $node,
+	};
+	my $verify_params = [qw(vgname thinpool)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
@@ -147,16 +166,12 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvmthin',
-			vgname => $name,
-			thinpool => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update(
+			$name,
+			$node,
+			$storage_params,
+			$verify_params,
+		    );
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index 27873cc..dd59375 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -342,14 +342,32 @@ __PACKAGE__->register_method ({
 	my $name = $param->{name};
 	my $node = $param->{node};
 	my $devs = [PVE::Tools::split_list($param->{devices})];
 	my $raidlevel = $param->{raidlevel};
 	my $compression = $param->{compression} // 'on';
 
 	for my $dev (@$devs) {
 	    $dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	    PVE::Diskmanage::assert_disk_unused($dev);
 	}
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'zfspool',
+	    pool => $name,
+	    storage => $name,
+	    content => 'rootdir,images',
+	    nodes => $node,
+	};
+	my $verify_params = [qw(pool)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $pools = get_pool_data();
 	die "pool '${name}' already exists on node '${node}'\n"
@@ -432,15 +450,12 @@ __PACKAGE__->register_method ({
 	    PVE::Diskmanage::udevadm_trigger($devs->@*);
 
 	    if ($param->{add_storage}) {
-		my $storage_params = {
-		    type => 'zfspool',
-		    pool => $name,
-		    storage => $name,
-		    content => 'rootdir,images',
-		    nodes => $node,
-		};
-
-		PVE::API2::Storage::Config->create($storage_params);
+		PVE::API2::Storage::Config->create_or_update(
+		    $name,
+		    $node,
+		    $storage_params,
+		    $verify_params,
+		);
 	    }
 	};
 
diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 6bd770e..821db21 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -65,6 +65,57 @@ sub cleanup_storages_for_node {
     }
 }
 
+# Decides whether a storage needs to be created or updated. An update is
+# needed if the storage already exists and has a node list configured; in
+# that case the current node is added to it. The verify_params parameter
+# is an array of parameter names that need to match an already existing
+# storage config of the same name. This is mainly intended for local
+# storage types, as certain parameters need to be the same, for example
+# 'pool' for ZFS or 'vgname' for LVM. Set the dryrun parameter to only
+# verify the parameters without updating or creating the storage.
+sub create_or_update {
+    my ($self, $sid, $node, $storage_params, $verify_params, $dryrun) = @_;
+
+    my $cfg = PVE::Storage::config();
+    my $scfg = PVE::Storage::storage_config($cfg, $sid, 1);
+
+    if ($scfg) {
+	die "storage config for '${sid}' exists but no parameters to verify were provided\n"
+	    if !$verify_params;
+
+	$node = PVE::INotify::nodename() if !$node || ($node eq 'localhost');
+	die "Storage ID '${sid}' already exists on node ${node}\n"
+	    if !defined($scfg->{nodes}) || $scfg->{nodes}->{$node};
+
+	push @$verify_params, 'type';
+	for my $key (@$verify_params) {
+	    if (!defined($scfg->{$key})) {
+		die "Option '${key}' is not configured for storage '$sid', "
+		    ."expected it to be '$storage_params->{$key}'\n";
+	    }
+	    if ($storage_params->{$key} ne $scfg->{$key}) {
+		die "Option '${key}' ($storage_params->{$key}) does not match "
+		    ."existing storage configuration '$scfg->{$key}'\n";
+	    }
+	}
+    }
+
+    if (!$dryrun) {
+	if ($scfg) {
+	    if ($scfg->{nodes}) {
+		$scfg->{nodes}->{$node} = 1;
+		$self->update({
+		    nodes => join(',', sort keys $scfg->{nodes}->%*),
+		    storage => $sid,
+		});
+		print "Added '${node}' to nodes for storage '${sid}'\n";
+	    }
+	} else {
+	    $self->create($storage_params);
+	}
+    }
+}
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
-- 
2.30.2

Thread overview: 6 messages
2022-08-19 15:01 [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
2022-08-19 15:01 ` [pve-devel] [PATCH storage v3 1/3] diskmanage: add mounted_paths Aaron Lauterer
2022-08-19 15:01 ` [pve-devel] [PATCH storage v3 2/3] disks: die if storage name is already in use Aaron Lauterer
2022-08-19 15:01 ` Aaron Lauterer [this message]
2022-09-05  9:50 ` [pve-devel] [PATCH storage v3 0/3] disks: add checks, allow add_storage on other nodes Dominik Csapak
2022-09-13  8:06   ` [pve-devel] applied-series: " Fabian Grünbichler
