From mboxrd@z Thu Jan 1 00:00:00 1970
From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Fri, 19 Aug 2022 17:01:21 +0200
Message-Id: <20220819150121.1633021-4-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220819150121.1633021-1-a.lauterer@proxmox.com>
References: <20220819150121.1633021-1-a.lauterer@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH storage v3 3/3] disks: allow add_storage for already configured local storage
List-Id: Proxmox VE development discussion

One of the smaller annoyances, especially for less experienced users, is the
fact that when creating a local storage (ZFS, LVM (thin), dir) in a cluster,
the "Add Storage" option can only be left enabled the first time. On any
following node, this option has to be disabled and the new node added manually
to the list of nodes for that storage.

This patch changes the behavior: if a storage of the same name already exists,
it verifies that the necessary parameters match the already existing one.
Then, if the 'nodes' parameter is set, it adds the current node and updates
the storage config. If there is no nodes list, nothing else needs to be done,
and the GUI will stop showing the question mark for the storage that was
configured but, until then, did not exist on that node.
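To illustrate the effect (a sketch with hypothetical storage and node names,
not taken from the patch): after creating a ZFS pool 'tank' with "Add Storage"
enabled on node 'pve1', running the same creation on 'pve2' would now extend
the nodes list of the existing /etc/pve/storage.cfg entry instead of refusing,
roughly:

    zfspool: tank
            pool tank
            content rootdir,images
            nodes pve1,pve2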
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v2:
- moved the assert_sid_usable_on_node code into create_or_update. This
  means we now pass a list of parameter names to verify, plus an
  optional dry-run parameter, so we can run it early just for the checks
- reordered the code, especially defining $storage_params early on
- improved the detection of whether the storage is already defined for
  the node, by treating an empty node list as configured

v1:
- restructured the logic a bit. The checks still need to be done the
  same way, but the create-or-update logic is now in its own
  API2::Storage::Config->create_or_update function, which then calls
  either the create or the update API directly. This reduces a lot of
  code repetition.
- Storage::assert_sid_usable_on_node additionally checks whether the
  parameter is configured in the storage and fails if not. We would
  have gotten warnings about using an undef in a string concat due to
  how the other error was presented.

 PVE/API2/Disks/Directory.pm | 37 +++++++++++++++++++--------
 PVE/API2/Disks/LVM.pm       | 37 +++++++++++++++++++--------
 PVE/API2/Disks/LVMThin.pm   | 37 +++++++++++++++++++--------
 PVE/API2/Disks/ZFS.pm       | 36 ++++++++++++++++++--------
 PVE/API2/Storage/Config.pm  | 51 +++++++++++++++++++++++++++++++++++++
 5 files changed, 155 insertions(+), 43 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index e2272fb..4fdb068 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -209,7 +209,26 @@ __PACKAGE__->register_method ({
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'dir',
+	    storage => $name,
+	    content => 'rootdir,images,iso,backup,vztmpl,snippets',
+	    is_mountpoint => 1,
+	    path => $path,
+	    nodes => $node,
+	};
+	my $verify_params = [qw(path)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $mounted = PVE::Diskmanage::mounted_paths();
 	die "the path for '${name}' is already mounted: ${path} ($mounted->{$path})\n"
@@ -286,16 +305,12 @@ __PACKAGE__->register_method ({
 	    run_command(['systemctl', 'start', $mountunitname]);
 
 	    if ($param->{add_storage}) {
-		my $storage_params = {
-		    type => 'dir',
-		    storage => $name,
-		    content => 'rootdir,images,iso,backup,vztmpl,snippets',
-		    is_mountpoint => 1,
-		    path => $path,
-		    nodes => $node,
-		};
-
-		PVE::API2::Storage::Config->create($storage_params);
+		PVE::API2::Storage::Config->create_or_update(
+		    $name,
+		    $node,
+		    $storage_params,
+		    $verify_params,
+		);
 	    }
 	});
 };

diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index ef341d1..fe87545 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -150,7 +150,26 @@ __PACKAGE__->register_method ({
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'lvm',
+	    vgname => $name,
+	    storage => $name,
+	    content => 'rootdir,images',
+	    shared => 0,
+	    nodes => $node,
+	};
+	my $verify_params = [qw(vgname)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
@@ -168,16 +187,12 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvm',
-			vgname => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			shared => 0,
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update(
+			$name,
+			$node,
+			$storage_params,
+			$verify_params,
+		    );
 		}
 	    });
 	};

diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 9a25e7c..038310a 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -108,7 +108,26 @@ __PACKAGE__->register_method ({
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $storage_params = {
+	    type => 'lvmthin',
+	    vgname => $name,
+	    thinpool => $name,
+	    storage => $name,
+	    content => 'rootdir,images',
+	    nodes => $node,
+	};
+	my $verify_params = [qw(vgname thinpool)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
 
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
@@ -147,16 +166,12 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvmthin',
-			vgname => $name,
-			thinpool => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    PVE::API2::Storage::Config->create_or_update(
+			$name,
+			$node,
+			$storage_params,
+			$verify_params,
+		    );
 		}
 	    });
 	};

diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index 27873cc..dd59375 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -342,14 +342,33 @@ __PACKAGE__->register_method ({
 	my $name = $param->{name};
 	my $node = $param->{node};
 	my $devs = [PVE::Tools::split_list($param->{devices})];
 	my $raidlevel = $param->{raidlevel};
 	my $compression = $param->{compression} // 'on';
 
 	for my $dev (@$devs) {
 	    $dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	    PVE::Diskmanage::assert_disk_unused($dev);
 	}
+
+	my $storage_params = {
+	    type => 'zfspool',
+	    pool => $name,
+	    storage => $name,
+	    content => 'rootdir,images',
+	    nodes => $node,
+	};
+	my $verify_params = [qw(pool)];
+
+	if ($param->{add_storage}) {
+	    PVE::API2::Storage::Config->create_or_update(
+		$name,
+		$node,
+		$storage_params,
+		$verify_params,
+		1,
+	    );
+	}
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
 	my $pools = get_pool_data();
 	die "pool '${name}' already exists on node '${node}'\n"
@@ -432,15 +451,12 @@ __PACKAGE__->register_method ({
 	    PVE::Diskmanage::udevadm_trigger($devs->@*);
 
 	    if ($param->{add_storage}) {
-		my $storage_params = {
-		    type => 'zfspool',
-		    pool => $name,
-		    storage => $name,
-		    content => 'rootdir,images',
-		    nodes => $node,
-		};
-
-		PVE::API2::Storage::Config->create($storage_params);
+		PVE::API2::Storage::Config->create_or_update(
+		    $name,
+		    $node,
+		    $storage_params,
+		    $verify_params,
+		);
 	    }
 	};

diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 6bd770e..821db21 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -65,6 +65,57 @@ sub cleanup_storages_for_node {
     }
 }
 
+# Decides if a storage needs to be created or updated. An update is needed if
+# the storage already exists and has a node list configured; in that case, the
+# current node will be added to it.
+# The verify_params parameter is an array of parameter names that need to
+# match if there already is a storage config of the same name present.
+# This is mainly intended for local storage types, as certain parameters need
+# to be the same. For example 'pool' for ZFS, 'vgname' for LVM, ...
+# Set the dryrun parameter to only verify the parameters without updating or
+# creating the storage.
+sub create_or_update {
+    my ($self, $sid, $node, $storage_params, $verify_params, $dryrun) = @_;
+
+    my $cfg = PVE::Storage::config();
+    my $scfg = PVE::Storage::storage_config($cfg, $sid, 1);
+
+    if ($scfg) {
+	die "storage config for '${sid}' exists but no parameters to verify were provided\n"
+	    if !$verify_params;
+
+	$node = PVE::INotify::nodename() if !$node || ($node eq 'localhost');
+	die "Storage ID '${sid}' already exists on node ${node}\n"
+	    if !defined($scfg->{nodes}) || $scfg->{nodes}->{$node};
+
+	push @$verify_params, 'type';
+	for my $key (@$verify_params) {
+	    if (!defined($scfg->{$key})) {
+		die "Option '${key}' is not configured for storage '$sid', "
+		    ."expected it to be '$storage_params->{$key}'\n";
+	    }
+	    if ($storage_params->{$key} ne $scfg->{$key}) {
+		die "Option '${key}' ($storage_params->{$key}) does not match "
+		    ."existing storage configuration '$scfg->{$key}'\n";
+	    }
+	}
+    }
+
+    if (!$dryrun) {
+	if ($scfg) {
+	    if ($scfg->{nodes}) {
+		$scfg->{nodes}->{$node} = 1;
+		$self->update({
+		    nodes => join(',', sort keys $scfg->{nodes}->%*),
+		    storage => $sid,
+		});
+		print "Added '${node}' to nodes for storage '${sid}'\n";
+	    }
+	} else {
+	    $self->create($storage_params);
+	}
+    }
+}
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
-- 
2.30.2
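The calling pattern shared by all four disk API endpoints above condenses to
the following sketch (abbreviated, not a literal excerpt; the surrounding
endpoint code is omitted):

    # early, before any disk is touched: checks only, dry-run flag set,
    # so a conflicting existing storage config makes the request die early
    PVE::API2::Storage::Config->create_or_update(
        $name, $node, $storage_params, $verify_params, 1,
    ) if $param->{add_storage};

    # ... set up the VG / thin pool / zpool / mount unit ...

    # in the worker, once the disk setup succeeded: create the storage,
    # or add the current node to the existing entry's nodes list
    PVE::API2::Storage::Config->create_or_update(
        $name, $node, $storage_params, $verify_params,
    ) if $param->{add_storage};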