From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage
Date: Fri, 15 Jul 2022 13:58:08 +0200
Message-ID: <20220715115808.1385388-4-a.lauterer@proxmox.com>
In-Reply-To: <20220715115808.1385388-1-a.lauterer@proxmox.com>

One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, the "Add Storage" option can only be left enabled the first
time.

On any following node, this option had to be disabled and the new node
added manually to the list of nodes for that storage.

This patch changes the behavior. If a storage of the same name already
exists, it verifies that the necessary parameters match the existing
configuration.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
If there is no 'nodes' list, nothing else needs to be done, and the GUI
will stop showing the question mark for the local storage that was
configured but, until now, did not exist on this node.
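
For illustration (hypothetical storage and node names): creating an
LVM-thin storage 'local-lvm2' with "Add Storage" enabled first on node
'pve1' and then again on 'pve2' would now result in a single
storage.cfg entry roughly like this, instead of failing on the second
node:

    lvmthin: local-lvm2
        thinpool local-lvm2
        vgname local-lvm2
        content rootdir,images
        nodes pve1,pve2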

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v1:
- restructured the logic a bit. The checks still need to be done the
    same way, but the create-or-update logic now lives in its own
    API2::Storage::Config->create_or_update function, which then calls
    either the create or the update API directly. This reduces a lot of
    code repetition; a condensed sketch of the resulting call pattern
    follows below.
- Storage::assert_sid_usable_on_node additionally checks whether the
    parameter is configured for the storage at all and fails if it is
    not. Otherwise we would have gotten warnings about using an undef
    in a string concatenation due to how the other error was reported.
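
As a rough sketch of the call pattern now shared by the four disk
endpoints (condensed, not the literal code from the diff below):

    # before the worker: refuse early if an existing storage entry is incompatible
    my $verify_params = { vgname => $name, thinpool => $name };
    PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvmthin', $verify_params)
	if $param->{add_storage};

    # inside the worker, after the volume group/thin pool has been created
    if ($param->{add_storage}) {
	my $storage_params = {
	    # type-specific options (vgname, thinpool, content, ...) go here
	    nodes => $node,
	};
	PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
    }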

 PVE/API2/Disks/Directory.pm |  8 +++++---
 PVE/API2/Disks/LVM.pm       |  8 +++++---
 PVE/API2/Disks/LVMThin.pm   |  8 +++++---
 PVE/API2/Disks/ZFS.pm       |  8 +++++---
 PVE/API2/Storage/Config.pm  | 22 ++++++++++++++++++++++
 PVE/Storage.pm              | 27 +++++++++++++++++++++++++++
 6 files changed, 69 insertions(+), 12 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index 07190de..581e970 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -209,7 +209,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { path => $path };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'dir', $verify_params)
+	    if $param->{add_storage};
 
 	my $mounted = PVE::Diskmanage::mounted_paths();
 	die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};
@@ -293,8 +296,7 @@ __PACKAGE__->register_method ({
 			path => $path,
 			nodes => $node,
 		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index a27afe2..e9b3f25 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -150,7 +150,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { vgname => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvm', $verify_params)
+	    if $param->{add_storage};
 
 	die "volume group with name '${name}' already exists\n"
 	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
@@ -177,8 +180,7 @@ __PACKAGE__->register_method ({
 			shared => 0,
 			nodes => $node,
 		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 690c183..fb61489 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -108,7 +108,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { vgname => $name, thinpool => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvmthin', $verify_params)
+	    if $param->{add_storage};
 
 	die "volume group with name '${name}' already exists\n"
 	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
@@ -155,8 +158,7 @@ __PACKAGE__->register_method ({
 			content => 'rootdir,images',
 			nodes => $node,
 		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index 51380c4..e1c4b59 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -349,7 +349,10 @@ __PACKAGE__->register_method ({
 	    $dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	    PVE::Diskmanage::assert_disk_unused($dev);
 	}
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { pool => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'zfspool', $verify_params)
+	    if $param->{add_storage};
 
 	my $pools = get_pool_data();
 	die "pool '${name}' already exists on node '$node'\n"
@@ -439,8 +442,7 @@ __PACKAGE__->register_method ({
 		    content => 'rootdir,images',
 		    nodes => $param->{node},
 		};
-
-		PVE::API2::Storage::Config->create($storage_params);
+		PVE::API2::Storage::Config->create_or_update($name, $node, $storage_params);
 	    }
 	};
 
diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 6bd770e..2ea91b7 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -65,6 +65,28 @@ sub cleanup_storages_for_node {
     }
 }
 
+# Decides whether a storage needs to be created or updated. An update is
+# needed if the storage already exists and has a node list configured; in
+# that case the current node is added to it. Check beforehand that this is
+# a sensible action, e.g. that the storage type and other params match.
+sub create_or_update {
+    my ($self, $sid, $node, $storage_params) = @_;
+
+    my $cfg = PVE::Storage::config();
+    if (my $scfg = PVE::Storage::storage_config($cfg, $sid, 1)) {
+	if ($scfg->{nodes}) {
+	    $scfg->{nodes}->{$node} = 1;
+	    $self->update({
+		nodes => join(',', sort keys $scfg->{nodes}->%*),
+		storage => $sid,
+	    });
+	    print "Added '${node}' to nodes for storage '${sid}'\n";
+	}
+    } else {
+	$self->create($storage_params);
+    }
+}
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b9c53a1..21d575c 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -2127,6 +2127,33 @@ sub assert_sid_unused {
     return undef;
 }
 
+# Checks if a storage ID can be used on the node, intended for local storage
+# types. Compares the type and the verify_params hash against a potentially
+# configured storage. Use it for storage parameters that need to be the same
+# throughout the cluster, for example, pool for ZFS, vgname for LVM, ...
+sub assert_sid_usable_on_node {
+    my ($sid, $node, $type, $verify_params) = @_;
+
+    my $cfg = config();
+    if (my $scfg = storage_config($cfg, $sid, 1)) {
+	$node = PVE::INotify::nodename() if !$node || ($node eq 'localhost');
+	die "Storage ID '${sid}' already exists on node ${node}\n"
+	    if defined($scfg->{nodes}) && $scfg->{nodes}->{$node};
+
+	$verify_params->{type} = $type;
+	for my $key (keys %$verify_params) {
+	    if (!defined($scfg->{$key})) {
+		die "Option '${key}' is not configured for storage '${sid}', "
+		    ."expected it to be '$verify_params->{$key}'\n";
+	    }
+	    if ($verify_params->{$key} ne $scfg->{$key}) {
+		die "Option '${key}' ($verify_params->{$key}) does not match "
+		    ."existing storage configuration '$scfg->{$key}'\n";
+	    }
+	}
+    }
+}
+
 # removes leading/trailing spaces and (back)slashes completely
 # substitutes every non-ASCII-alphanumerical char with '_', except '_.-'
 sub normalize_content_filename {
-- 
2.30.2

Thread overview: 13+ messages
2022-07-15 11:58 [pve-devel] [PATCH storage v2 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths Aaron Lauterer
2022-08-17 11:35   ` Fabian Grünbichler
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use Aaron Lauterer
2022-08-17 11:42   ` Fabian Grünbichler
2022-08-18 15:22     ` Aaron Lauterer
2022-08-18 15:31       ` Aaron Lauterer
2022-08-19  8:21         ` Fabian Grünbichler
2022-08-19  9:29           ` Aaron Lauterer
2022-08-19  8:20       ` Fabian Grünbichler
2022-08-19  9:28         ` Aaron Lauterer
2022-07-15 11:58 ` Aaron Lauterer [this message]
2022-08-17 11:53   ` [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage Fabian Grünbichler
