From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Fri, 15 Jul 2022 13:58:07 +0200
Message-Id: <20220715115808.1385388-3-a.lauterer@proxmox.com>
In-Reply-To: <20220715115808.1385388-1-a.lauterer@proxmox.com>
References: <20220715115808.1385388-1-a.lauterer@proxmox.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
List-Id: Proxmox VE development discussion

If a storage of that type and name already exists (LVM, zpool, ...) but we
do not have a Proxmox VE storage config for it, the creation can fail midway
due to checks done by the underlying storage layer itself. This in turn can
leave behind disks that are already partitioned, which users would then need
to clean up themselves.

By checking early on, not only against the PVE storage config but also
against the actual storage layer itself, we can die before we touch any
disk.

For ZFS, the logic that gathers pool data is moved into its own function so
it can be called from both the index API endpoint and the new check in the
create endpoint.
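To make the "check before touching any disk" idea concrete outside of the
API plumbing, here is a minimal standalone sketch. It is not part of this
patch: the helper name zpool_name_in_use and the plain `zpool list -H -o name`
call are illustrative assumptions, not the PVE::Diskmanage/PVE::Storage
helpers used in the actual code below.

    #!/usr/bin/perl
    # Minimal sketch (illustration only): refuse to create a ZFS pool if a
    # pool with the same name already exists, before any disk is touched.
    use strict;
    use warnings;

    sub zpool_name_in_use {
        my ($name) = @_;
        # `zpool list -H -o name` prints one existing pool name per line.
        my $out = qx(zpool list -H -o name 2>/dev/null);
        return grep { $_ eq $name } split /\n/, $out;
    }

    my $name = shift // 'tank';
    die "pool '${name}' already exists\n" if zpool_name_in_use($name);
    print "pool name '${name}' is free, safe to create it\n";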
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
changes since v1:
- moved ZFS pool gathering from the index API endpoint into its own function
  so it can easily be called for the check as well
- the ZFS pool check now uses perl grep
- removed the check whether the mounted path is a /dev one
- added a check whether a mount unit already exists
- moved the definitions of $path, $mountunitname and $mountunitpath out of
  the worker

 PVE/API2/Disks/Directory.pm | 11 ++++--
 PVE/API2/Disks/LVM.pm       |  3 ++
 PVE/API2/Disks/LVMThin.pm   |  3 ++
 PVE/API2/Disks/ZFS.pm       | 77 +++++++++++++++++++++----------------
 4 files changed, 56 insertions(+), 38 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index df63ba9..07190de 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -203,16 +203,19 @@ __PACKAGE__->register_method ({
         my $dev = $param->{device};
         my $node = $param->{node};
         my $type = $param->{filesystem} // 'ext4';
+        my $path = "/mnt/pve/$name";
+        my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
+        my $mountunitpath = "/etc/systemd/system/$mountunitname";
 
         $dev = PVE::Diskmanage::verify_blockdev_path($dev);
         PVE::Diskmanage::assert_disk_unused($dev);
         PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
-        my $worker = sub {
-            my $path = "/mnt/pve/$name";
-            my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
-            my $mountunitpath = "/etc/systemd/system/$mountunitname";
+        my $mounted = PVE::Diskmanage::mounted_paths();
+        die "a mountpoint for '${name}' already exists: ${path}\n" if $mounted->{$path};
+        die "a systemd mount unit already exists: ${mountunitpath}\n" if -e $mountunitpath;
 
+        my $worker = sub {
             PVE::Diskmanage::locked_disk_action(sub {
                 PVE::Diskmanage::assert_disk_unused($dev);
 
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index 6e4331a..a27afe2 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
         PVE::Diskmanage::assert_disk_unused($dev);
         PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+        die "volume group with name '${name}' already exists\n"
+            if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
+
         my $worker = sub {
             PVE::Diskmanage::locked_disk_action(sub {
                 PVE::Diskmanage::assert_disk_unused($dev);
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 58ecb37..690c183 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -110,6 +110,9 @@ __PACKAGE__->register_method ({
         PVE::Diskmanage::assert_disk_unused($dev);
         PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+        die "volume group with name '${name}' already exists\n"
+            if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
+
         my $worker = sub {
             PVE::Diskmanage::locked_disk_action(sub {
                 PVE::Diskmanage::assert_disk_unused($dev);
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index eeb9f48..51380c4 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -18,6 +18,43 @@ use base qw(PVE::RESTHandler);
 my $ZPOOL = '/sbin/zpool';
 my $ZFS = '/sbin/zfs';
 
+sub get_pool_data {
+    if (!-f $ZPOOL) {
+        die "zfsutils-linux not installed\n";
+    }
+
+    my $propnames = [qw(name size alloc free frag dedup health)];
+    my $numbers = {
+        size => 1,
+        alloc => 1,
+        free => 1,
+        frag => 1,
+        dedup => 1,
+    };
+
+    my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
+
+    my $pools = [];
+
+    run_command($cmd, outfunc => sub {
+        my ($line) = @_;
+
+        my @props = split('\s+', trim($line));
+        my $pool = {};
+        for (my $i = 0; $i < scalar(@$propnames); $i++) {
+            if ($numbers->{$propnames->[$i]}) {
+                $pool->{$propnames->[$i]} = $props[$i] + 0;
+            } else {
+                $pool->{$propnames->[$i]} = $props[$i];
+            }
+        }
+
+        push @$pools, $pool;
+    });
+
+    return $pools;
+}
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
@@ -74,40 +111,7 @@ __PACKAGE__->register_method ({
     code => sub {
         my ($param) = @_;
 
-        if (!-f $ZPOOL) {
-            die "zfsutils-linux not installed\n";
-        }
-
-        my $propnames = [qw(name size alloc free frag dedup health)];
-        my $numbers = {
-            size => 1,
-            alloc => 1,
-            free => 1,
-            frag => 1,
-            dedup => 1,
-        };
-
-        my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
-
-        my $pools = [];
-
-        run_command($cmd, outfunc => sub {
-            my ($line) = @_;
-
-            my @props = split('\s+', trim($line));
-            my $pool = {};
-            for (my $i = 0; $i < scalar(@$propnames); $i++) {
-                if ($numbers->{$propnames->[$i]}) {
-                    $pool->{$propnames->[$i]} = $props[$i] + 0;
-                } else {
-                    $pool->{$propnames->[$i]} = $props[$i];
-                }
-            }
-
-            push @$pools, $pool;
-        });
-
-        return $pools;
+        return get_pool_data();
     }});
 
 sub preparetree {
@@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
         my $user = $rpcenv->get_user();
 
         my $name = $param->{name};
+        my $node = $param->{node};
         my $devs = [PVE::Tools::split_list($param->{devices})];
         my $raidlevel = $param->{raidlevel};
         my $compression = $param->{compression} // 'on';
@@ -346,6 +351,10 @@ __PACKAGE__->register_method ({
         }
         PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+        my $pools = get_pool_data();
+        die "pool '${name}' already exists on node '$node'\n"
+            if grep { $_->{name} eq $name } @{$pools};
+
         my $numdisks = scalar(@$devs);
         my $mindisks = {
             single => 1,
-- 
2.30.2
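As a rough illustration of the Directory.pm check in isolation: the mount
unit name can also be derived with the systemd-escape CLI, which should
match what PVE::Systemd::escape_unit($path, 1) . ".mount" produces for a
path (an assumption, not verified against the PVE helper here). The script
below is a standalone sketch, not code from this series.

    #!/usr/bin/perl
    # Sketch: bail out early if a systemd mount unit for /mnt/pve/<name>
    # already exists, mirroring the check added to Directory.pm above.
    use strict;
    use warnings;

    my $name = shift // 'mydir';
    my $path = "/mnt/pve/$name";

    # e.g. /mnt/pve/mydir -> mnt-pve-mydir.mount
    chomp(my $unit = qx(systemd-escape --path --suffix=mount $path));
    die "systemd-escape failed\n" if !$unit;

    my $unitpath = "/etc/systemd/system/$unit";
    die "a systemd mount unit already exists: $unitpath\n" if -e $unitpath;
    print "no mount unit for '$path' yet, safe to continue\n";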