Date: Wed, 17 Aug 2022 13:42:41 +0200
From: Fabian Grünbichler
To: Proxmox VE development discussion
References: <20220715115808.1385388-1-a.lauterer@proxmox.com>
 <20220715115808.1385388-3-a.lauterer@proxmox.com>
In-Reply-To: <20220715115808.1385388-3-a.lauterer@proxmox.com>
Message-Id: <1660736115.7grkvr18dc.astroid@nora.none>
Subject: Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use

On July 15, 2022 1:58 pm, Aaron Lauterer wrote:
> If a storage of that type and name already exists (LVM, zpool, ...) but
> we do not have a Proxmox VE Storage config for it, it is possible that
> the creation will fail midway due to checks done by the underlying
> storage layer itself. This in turn can lead to disks that are already
> partitioned. Users would need to clean this up themselves.
> 
> By adding checks early on, not only checking against the PVE storage
> config, but against the actual storage type itself, we can die early
> enough, before we touch any disk.
> 
> For ZFS, the logic to gather pool data is moved into its own function to
> be called from the index API endpoint and the check in the create
> endpoint.
> 
> Signed-off-by: Aaron Lauterer
> ---
> 
> changes since v1:
> - moved zfs pool gathering from index api endpoint into its own function
>   to easily call it for the check as well
> - zfs pool check is now using perl grep
> - removed check if mounted path if a /dev one
> - added check if a mount unit exists
> - moved definitions of $path, $mountunitname and $mountunitpath out
>   of the worker
> 
>  PVE/API2/Disks/Directory.pm | 11 ++++--
>  PVE/API2/Disks/LVM.pm       |  3 ++
>  PVE/API2/Disks/LVMThin.pm   |  3 ++
>  PVE/API2/Disks/ZFS.pm       | 77 +++++++++++++++++++++----------------
>  4 files changed, 56 insertions(+), 38 deletions(-)
> 
> diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
> index df63ba9..07190de 100644
> --- a/PVE/API2/Disks/Directory.pm
> +++ b/PVE/API2/Disks/Directory.pm
> @@ -203,16 +203,19 @@ __PACKAGE__->register_method ({
>      my $dev = $param->{device};
>      my $node = $param->{node};
>      my $type = $param->{filesystem} // 'ext4';
> +    my $path = "/mnt/pve/$name";
> +    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
> +    my $mountunitpath = "/etc/systemd/system/$mountunitname";
> 
>      $dev = PVE::Diskmanage::verify_blockdev_path($dev);
>      PVE::Diskmanage::assert_disk_unused($dev);
>      PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> 
> -    my $worker = sub {
> -        my $path = "/mnt/pve/$name";
> -        my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
> -        my $mountunitpath = "/etc/systemd/system/$mountunitname";
> +    my $mounted = PVE::Diskmanage::mounted_paths();
> +    die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};

nit: the mountpoint existing is not the issue (something already being
mounted there is!). also, where does the $1 come from?

> +    die "a systemd mount unit already exists: ${mountunitpath}\n" if -e $mountunitpath;

could check if it's identical to the one we'd generate (in the spirit of
patch #3 ;)) - rough sketch of what I mean at the end of this mail.

> 
> +    my $worker = sub {
>          PVE::Diskmanage::locked_disk_action(sub {
>              PVE::Diskmanage::assert_disk_unused($dev);
> 
> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
> index 6e4331a..a27afe2 100644
> --- a/PVE/API2/Disks/LVM.pm
> +++ b/PVE/API2/Disks/LVM.pm
> @@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
>      PVE::Diskmanage::assert_disk_unused($dev);
>      PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> 
> +    die "volume group with name '${name}' already exists\n"
> +        if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};

probably better off inside the worker, since `vgs` might take a while
(although we also use it in the index API call in this module..)
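
untested sketch of what I mean, i.e. just moving the check into the worker
sub (the same would apply to LVMThin.pm below):

    my $worker = sub {
	# the potentially slow `vgs` scan now happens in the task worker,
	# so the synchronous part of the API call stays fast
	die "volume group with name '${name}' already exists\n"
	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};

	PVE::Diskmanage::locked_disk_action(sub {
	    PVE::Diskmanage::assert_disk_unused($dev);
	    # ... rest of the existing worker stays unchanged
	});
    };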
> +
>      my $worker = sub {
>          PVE::Diskmanage::locked_disk_action(sub {
>              PVE::Diskmanage::assert_disk_unused($dev);
> diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
> index 58ecb37..690c183 100644
> --- a/PVE/API2/Disks/LVMThin.pm
> +++ b/PVE/API2/Disks/LVMThin.pm
> @@ -110,6 +110,9 @@ __PACKAGE__->register_method ({
>      PVE::Diskmanage::assert_disk_unused($dev);
>      PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> 
> +    die "volume group with name '${name}' already exists\n"
> +        if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> +

same here

>      my $worker = sub {
>          PVE::Diskmanage::locked_disk_action(sub {
>              PVE::Diskmanage::assert_disk_unused($dev);
> diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
> index eeb9f48..51380c4 100644
> --- a/PVE/API2/Disks/ZFS.pm
> +++ b/PVE/API2/Disks/ZFS.pm
> @@ -18,6 +18,43 @@ use base qw(PVE::RESTHandler);
>  my $ZPOOL = '/sbin/zpool';
>  my $ZFS = '/sbin/zfs';
> 
> +sub get_pool_data {
> +    if (!-f $ZPOOL) {
> +        die "zfsutils-linux not installed\n";
> +    }
> +
> +    my $propnames = [qw(name size alloc free frag dedup health)];
> +    my $numbers = {
> +        size => 1,
> +        alloc => 1,
> +        free => 1,
> +        frag => 1,
> +        dedup => 1,
> +    };
> +
> +    my $cmd = [$ZPOOL, 'list', '-HpPLo', join(',', @$propnames)];
> +
> +    my $pools = [];
> +
> +    run_command($cmd, outfunc => sub {
> +        my ($line) = @_;
> +
> +        my @props = split('\s+', trim($line));
> +        my $pool = {};
> +        for (my $i = 0; $i < scalar(@$propnames); $i++) {
> +            if ($numbers->{$propnames->[$i]}) {
> +                $pool->{$propnames->[$i]} = $props[$i] + 0;
> +            } else {
> +                $pool->{$propnames->[$i]} = $props[$i];
> +            }
> +        }
> +
> +        push @$pools, $pool;
> +    });
> +
> +    return $pools;
> +}
> +
>  __PACKAGE__->register_method ({
>      name => 'index',
>      path => '',
> @@ -74,40 +111,7 @@ __PACKAGE__->register_method ({
>      code => sub {
>          my ($param) = @_;
> 
> -        if (!-f $ZPOOL) {
> -            die "zfsutils-linux not installed\n";
> -        }
> -
> -        my $propnames = [qw(name size alloc free frag dedup health)];
> -        my $numbers = {
> -            size => 1,
> -            alloc => 1,
> -            free => 1,
> -            frag => 1,
> -            dedup => 1,
> -        };
> -
> -        my $cmd = [$ZPOOL, 'list', '-HpPLo', join(',', @$propnames)];
> -
> -        my $pools = [];
> -
> -        run_command($cmd, outfunc => sub {
> -            my ($line) = @_;
> -
> -            my @props = split('\s+', trim($line));
> -            my $pool = {};
> -            for (my $i = 0; $i < scalar(@$propnames); $i++) {
> -                if ($numbers->{$propnames->[$i]}) {
> -                    $pool->{$propnames->[$i]} = $props[$i] + 0;
> -                } else {
> -                    $pool->{$propnames->[$i]} = $props[$i];
> -                }
> -            }
> -
> -            push @$pools, $pool;
> -        });
> -
> -        return $pools;
> +        return get_pool_data();
>      }});
> 
>  sub preparetree {
> @@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
>      my $user = $rpcenv->get_user();
> 
>      my $name = $param->{name};
> +    my $node = $param->{node};

nit: please also change the usage further below in this API handler if
you do this

>      my $devs = [PVE::Tools::split_list($param->{devices})];
>      my $raidlevel = $param->{raidlevel};
>      my $compression = $param->{compression} // 'on';
> @@ -346,6 +351,10 @@ __PACKAGE__->register_method ({
>      }
>      PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> 
> +    my $pools = get_pool_data();

and also here

> +    die "pool '${name}' already exists on node '$node'\n"
> +        if grep { $_->{name} eq $name } @{$pools};
> +
>      my $numdisks = scalar(@$devs);
>      my $mindisks = {
>          single => 1,
> -- 
> 2.30.2
> 
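
regarding the mount unit check in Directory.pm above - very rough and
untested sketch of what I had in mind. it assumes the code that currently
writes the unit file in the worker gets factored out into a helper
(`render_mount_unit` is a made-up name here, not something that exists):

    if (-e $mountunitpath) {
	# an identical, previously generated unit is harmless - only die if
	# the existing unit differs from what we would write ourselves
	my $existing = PVE::Tools::file_get_contents($mountunitpath);
	my $expected = render_mount_unit($path, $dev); # hypothetical helper
	die "a different systemd mount unit already exists: ${mountunitpath}\n"
	    if $existing ne $expected;
    }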