public inbox for pve-devel@lists.proxmox.com
From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH storage 3/3] disks: allow add_storage for already configured local storage
Date: Wed, 13 Jul 2022 12:47:58 +0200
Message-ID: <20220713104758.651614-4-a.lauterer@proxmox.com>
In-Reply-To: <20220713104758.651614-1-a.lauterer@proxmox.com>

One of the smaller annoyances, especially for less experienced users, is
that when creating a local storage (ZFS, LVM (thin), dir) in a cluster,
the "Add Storage" option can only be left enabled the first time.

On any following node, this option had to be disabled and the new node
added manually to the list of nodes for that storage.

This patch changes the behavior. If a storage of the same name already
exists, it verifies that the necessary parameters match the existing
configuration.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
If there is no 'nodes' list, nothing else needs to be done, and the GUI
will stop showing the question mark for the storage that was configured
but did not yet exist on the node.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---

I decided to make the storage type a parameter of the
'assert_sid_usable_on_node' function and not part of the $verify_params,
so that it cannot be forgotten when calling it.
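
The decision logic of the new 'create_or_update_storage' helper can be
sketched roughly like this (a Python sketch for illustration only, not the
Perl implementation below; the function and key names here are mine):

```python
# Illustrative sketch of the create-or-update decision: given the existing
# storage config, decide whether the storage must be created, updated with
# an extended node list, or left alone.
def plan_storage_action(existing_cfg, sid, node):
    """Return what to do for storage 'sid' when add_storage runs on 'node'."""
    scfg = existing_cfg.get(sid)
    if scfg is None:
        # Storage is not configured yet: create it from scratch.
        return {"create": True}
    status = {"create": False}
    nodes = scfg.get("nodes")
    if nodes:
        # Restricted to a node list: merge in the current node and return
        # the parameters needed for the config update call.
        merged = sorted(set(nodes) | {node})
        status["update_params"] = {"storage": sid, "nodes": ",".join(merged)}
    # No node list means the storage already applies to all nodes: nothing
    # further to do.
    return status
```

For example, a storage limited to 'pve1' gets its node list extended to
'pve1,pve2' when add_storage runs on 'pve2', while an unrestricted storage
triggers neither a create nor an update.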

 PVE/API2/Disks/Directory.pm | 32 ++++++++++++++++++-----------
 PVE/API2/Disks/LVM.pm       | 30 +++++++++++++++++----------
 PVE/API2/Disks/LVMThin.pm   | 30 +++++++++++++++++----------
 PVE/API2/Disks/ZFS.pm       | 29 +++++++++++++++++---------
 PVE/Storage.pm              | 41 +++++++++++++++++++++++++++++++++++++
 5 files changed, 118 insertions(+), 44 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index 8e03229..00b9a37 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -203,10 +203,14 @@ __PACKAGE__->register_method ({
 	my $dev = $param->{device};
 	my $node = $param->{node};
 	my $type = $param->{filesystem} // 'ext4';
+	my $path = "/mnt/pve/$name";
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { path => $path };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'dir', $verify_params)
+	    if $param->{add_storage};
 
 	my $mounted = PVE::Diskmanage::mounted_paths();
 	if ($mounted->{$path} =~ /^(\/dev\/.+)$/ ) {
@@ -214,7 +218,6 @@ __PACKAGE__->register_method ({
 	}
 
 	my $worker = sub {
-	    my $path = "/mnt/pve/$name";
 	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
 	    my $mountunitpath = "/etc/systemd/system/$mountunitname";
 
@@ -287,16 +290,21 @@ __PACKAGE__->register_method ({
 		run_command(['systemctl', 'start', $mountunitname]);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'dir',
-			storage => $name,
-			content => 'rootdir,images,iso,backup,vztmpl,snippets',
-			is_mountpoint => 1,
-			path => $path,
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    my $status = PVE::Storage::create_or_update_storage($name, $node);
+		    if ($status->{create}) {
+			my $storage_params = {
+			    type => 'dir',
+			    storage => $name,
+			    content => 'rootdir,images,iso,backup,vztmpl,snippets',
+			    is_mountpoint => 1,
+			    path => $path,
+			    nodes => $node,
+			};
+			PVE::API2::Storage::Config->create($storage_params);
+		    } elsif ($status->{update_params}) {
+			print "Adding '${node}' to nodes for storage '${name}'\n";
+			PVE::API2::Storage::Config->update($status->{update_params});
+		    } # no action needed if storage exists but is not restricted to nodes
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index a27afe2..88b8d0b 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -150,7 +150,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { vgname => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvm', $verify_params)
+	    if $param->{add_storage};
 
 	die "volume group with name '${name}' already exists\n"
 	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
@@ -169,16 +172,21 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvm',
-			vgname => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			shared => 0,
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    my $status = PVE::Storage::create_or_update_storage($name, $node);
+		    if ($status->{create}) {
+			my $storage_params = {
+			    type => 'lvm',
+			    vgname => $name,
+			    storage => $name,
+			    content => 'rootdir,images',
+			    shared => 0,
+			    nodes => $node,
+			};
+			PVE::API2::Storage::Config->create($storage_params);
+		    } elsif ($status->{update_params}) {
+			print "Adding '${node}' to nodes for storage '${name}'\n";
+			PVE::API2::Storage::Config->update($status->{update_params});
+		    } # no action needed if storage exists but is not restricted to nodes
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 690c183..d880154 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -108,7 +108,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { vgname => $name, thinpool => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvmthin', $verify_params)
+	    if $param->{add_storage};
 
 	die "volume group with name '${name}' already exists\n"
 	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
@@ -147,16 +150,21 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvmthin',
-			vgname => $name,
-			thinpool => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    my $status = PVE::Storage::create_or_update_storage($name, $node);
+		    if ($status->{create}) {
+			my $storage_params = {
+			    type => 'lvmthin',
+			    vgname => $name,
+			    thinpool => $name,
+			    storage => $name,
+			    content => 'rootdir,images',
+			    nodes => $node,
+			};
+			PVE::API2::Storage::Config->create($storage_params);
+		    } elsif ($status->{update_params}) {
+			print "Adding '${node}' to nodes for storage '${name}'\n";
+			PVE::API2::Storage::Config->update($status->{update_params});
+		    } # no action needed if storage exists but is not restricted to nodes
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index ceb0212..793bc14 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -337,6 +337,7 @@ __PACKAGE__->register_method ({
 
 	my $name = $param->{name};
 	my $devs = [PVE::Tools::split_list($param->{devices})];
+	my $node = $param->{node};
 	my $raidlevel = $param->{raidlevel};
 	my $compression = $param->{compression} // 'on';
 
@@ -344,7 +345,10 @@ __PACKAGE__->register_method ({
 	    $dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	    PVE::Diskmanage::assert_disk_unused($dev);
 	}
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { pool => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'zfspool', $verify_params)
+	    if $param->{add_storage};
 
 	my $pools = PVE::API2::Disks::ZFS->index({ node => $param->{node} });
 	my $poollist = { map { $_->{name} => 1 } @{$pools} };
@@ -427,15 +431,20 @@ __PACKAGE__->register_method ({
 	    PVE::Diskmanage::udevadm_trigger($devs->@*);
 
 	    if ($param->{add_storage}) {
-		my $storage_params = {
-		    type => 'zfspool',
-		    pool => $name,
-		    storage => $name,
-		    content => 'rootdir,images',
-		    nodes => $param->{node},
-		};
-
-		PVE::API2::Storage::Config->create($storage_params);
+		my $status = PVE::Storage::create_or_update_storage($name, $node);
+		if ($status->{create}) {
+		    my $storage_params = {
+			type => 'zfspool',
+			pool => $name,
+			storage => $name,
+			content => 'rootdir,images',
+			nodes => $param->{node},
+		    };
+		    PVE::API2::Storage::Config->create($storage_params);
+		} elsif ($status->{update_params}) {
+		    print "Adding '${node}' to nodes for storage '${name}'\n";
+		    PVE::API2::Storage::Config->update($status->{update_params});
+		} # no action needed if storage exists but is not restricted to nodes
 	    }
 	};
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b9c53a1..6dfcc3c 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -2127,6 +2127,47 @@ sub assert_sid_unused {
     return undef;
 }
 
+# Checks if a storage ID can be used on the node; intended for local storage types.
+# Compares the type and the verify_params hash against a potentially configured
+# storage. Use it for storage parameters that need to be the same throughout
+# the cluster, for example, 'pool' for ZFS or 'vgname' for LVM.
+sub assert_sid_usable_on_node {
+    my ($sid, $node, $type, $verify_params) = @_;
+
+    my $cfg = config();
+    if (my $scfg = storage_config($cfg, $sid, 1)) {
+	$node = PVE::INotify::nodename() if !$node || ($node eq 'localhost');
+	die "storage ID '${sid}' already exists on node ${node}\n"
+	    if defined($scfg->{nodes}) && $scfg->{nodes}->{$node};
+
+	$verify_params->{type} = $type;
+	for my $key (keys %$verify_params) {
+	    die "option '${key}' ($verify_params->{$key}) does not match ".
+		"existing storage configuration: $scfg->{$key}\n"
+		if $verify_params->{$key} ne $scfg->{$key};
+	}
+    }
+}
+
+# Returns whether the storage needs to be created or updated. In the update
+# case, it checks for a node list and returns the parameters needed to update
+# the storage config with the new node list.
+sub create_or_update_storage {
+    my ($sid, $node) = @_;
+
+    my $cfg = config();
+    my $status = { create => 1 };
+    if (my $scfg = storage_config($cfg, $sid, 1)) {
+	$status->{create} = 0;
+	if ($scfg->{nodes}) {
+	    $scfg->{nodes}->{$node} = 1;
+	    $status->{update_params}->{nodes} = join(',', sort keys $scfg->{nodes}->%*);
+	    $status->{update_params}->{storage} = $sid;
+	}
+    }
+    return $status;
+}
+
 # removes leading/trailing spaces and (back)slashes completely
 # substitutes every non-ASCII-alphanumerical char with '_', except '_.-'
 sub normalize_content_filename {
-- 
2.30.2
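
For reference, the parameter check that 'assert_sid_usable_on_node' performs
can be sketched as follows (a hypothetical Python rendering, not the Perl
code from the patch; the dict-based config is an assumption for the sketch):

```python
# Illustrative sketch: verify a would-be storage against an existing entry.
# Raises if the node already uses the storage or if any cluster-wide
# parameter (including the type) differs from the existing configuration.
def assert_sid_usable_on_node(storage_cfg, sid, node, stype, verify_params):
    scfg = storage_cfg.get(sid)
    if scfg is None:
        return  # no existing entry, nothing to verify
    nodes = scfg.get("nodes")
    if nodes and node in nodes:
        raise ValueError(f"storage ID '{sid}' already exists on node {node}")
    # The type is checked alongside the caller-supplied parameters, so it
    # cannot be forgotten by the caller.
    checks = dict(verify_params, type=stype)
    for key, want in checks.items():
        have = scfg.get(key)
        if want != have:
            raise ValueError(
                f"option '{key}' ({want}) does not match "
                f"existing storage configuration: {have}")
```

So a matching ZFS storage on a new node passes, while a clashing type or a
node that already carries the storage is rejected with a descriptive error.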





Thread overview: 10+ messages
2022-07-13 10:47 [pve-devel] [PATCH storage 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
2022-07-13 10:47 ` [pve-devel] [PATCH storage 1/3] diskmanage: add mounted_paths Aaron Lauterer
2022-07-14 11:13   ` Dominik Csapak
2022-07-14 11:37     ` Aaron Lauterer
2022-07-13 10:47 ` [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use Aaron Lauterer
2022-07-14 11:13   ` Dominik Csapak
2022-07-14 12:12     ` Fabian Ebner
2022-07-14 12:30       ` Dominik Csapak
2022-07-13 10:47 ` Aaron Lauterer [this message]
2022-07-14 11:23   ` [pve-devel] [PATCH storage 3/3] disks: allow add_storage for already configured local storage Dominik Csapak
