public inbox for pve-devel@lists.proxmox.com
From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use
Date: Wed, 17 Aug 2022 13:42:41 +0200	[thread overview]
Message-ID: <1660736115.7grkvr18dc.astroid@nora.none> (raw)
In-Reply-To: <20220715115808.1385388-3-a.lauterer@proxmox.com>

On July 15, 2022 1:58 pm, Aaron Lauterer wrote:
> If a storage of that type and name already exists (LVM, zpool, ...) but
> we do not have a Proxmox VE Storage config for it, it is possible that
> the creation will fail midway due to checks done by the underlying
> storage layer itself. This in turn can lead to disks that are already
> partitioned. Users would need to clean this up themselves.
> 
> By adding checks early on, not only checking against the PVE storage
> config, but against the actual storage type itself, we can die early
> enough, before we touch any disk.
> 
> For ZFS, the logic to gather pool data is moved into its own function to
> be called from the index API endpoint and the check in the create
> endpoint.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> 
> changes since v1:
> - moved zfs pool gathering from index api endpoint into its own function
> to easily call it for the check as well
> - zfs pool check is now using perl grep
> - removed check if mounted path if a /dev one
> - added check if a mount unit exists
>     - moved definitions of $path, $mountunitname and $mountunitpath out
>     of the worker
> 
>  PVE/API2/Disks/Directory.pm | 11 ++++--
>  PVE/API2/Disks/LVM.pm       |  3 ++
>  PVE/API2/Disks/LVMThin.pm   |  3 ++
>  PVE/API2/Disks/ZFS.pm       | 77 +++++++++++++++++++++----------------
>  4 files changed, 56 insertions(+), 38 deletions(-)
> 
> diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
> index df63ba9..07190de 100644
> --- a/PVE/API2/Disks/Directory.pm
> +++ b/PVE/API2/Disks/Directory.pm
> @@ -203,16 +203,19 @@ __PACKAGE__->register_method ({
>  	my $dev = $param->{device};
>  	my $node = $param->{node};
>  	my $type = $param->{filesystem} // 'ext4';
> +	my $path = "/mnt/pve/$name";
> +	my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
> +	my $mountunitpath = "/etc/systemd/system/$mountunitname";
>  
>  	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
>  	PVE::Diskmanage::assert_disk_unused($dev);
>  	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>  
> -	my $worker = sub {
> -	    my $path = "/mnt/pve/$name";
> -	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
> -	    my $mountunitpath = "/etc/systemd/system/$mountunitname";
> +	my $mounted = PVE::Diskmanage::mounted_paths();
> +	die "a mountpoint for '${name}' already exists: ${path} ($1)\n" if $mounted->{$path};

nit: the mountpoint existing is not the issue (something already being 
mounted there is!). also, where does the $1 come from?
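
something along these lines might read better (untested sketch, with example
values just for illustration):

```perl
my $name = 'foo';              # example storage name
my $path = "/mnt/pve/$name";
my $mounted = { $path => 1 };  # shape as returned by mounted_paths()

# the check fires because something is already mounted at $path, so say so;
# the stray $1 from the original is never set in this scope and would
# interpolate garbage into the message
die "something is already mounted at '${path}' (needed for '${name}')\n"
    if $mounted->{$path};
```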

> +	die "a systemd mount unit already exists: ${mountunitpath}\n" if -e $mountunitpath;

could check if it's identical to the one we'd generate (in the spirit of 
patch #3 ;))
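
e.g. roughly like this (untested; `$expected_content` is hypothetical and
would hold the unit file content we'd generate later in the worker):

```perl
if (-e $mountunitpath) {
    my $existing = PVE::Tools::file_get_contents($mountunitpath);
    # only fatal if the existing unit differs from what we'd write anyway
    die "a conflicting systemd mount unit already exists: ${mountunitpath}\n"
        if $existing ne $expected_content;
}
```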

>  
> +	my $worker = sub {
>  	    PVE::Diskmanage::locked_disk_action(sub {
>  		PVE::Diskmanage::assert_disk_unused($dev);
>  
> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
> index 6e4331a..a27afe2 100644
> --- a/PVE/API2/Disks/LVM.pm
> +++ b/PVE/API2/Disks/LVM.pm
> @@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
>  	PVE::Diskmanage::assert_disk_unused($dev);
>  	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>  
> +	die "volume group with name '${name}' already exists\n"
> +	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};

probably better off inside the worker, since `vgs` might take a while 
(although we also use it in the index API call in this module..)
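
i.e. roughly (untested):

```perl
my $worker = sub {
    PVE::Diskmanage::locked_disk_action(sub {
        PVE::Diskmanage::assert_disk_unused($dev);

        # doing the check here keeps the potentially slow `vgs` call out of
        # the synchronous API path
        die "volume group with name '${name}' already exists\n"
            if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};

        ...
    });
};
```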

> +
>  	my $worker = sub {
>  	    PVE::Diskmanage::locked_disk_action(sub {
>  		PVE::Diskmanage::assert_disk_unused($dev);
> diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
> index 58ecb37..690c183 100644
> --- a/PVE/API2/Disks/LVMThin.pm
> +++ b/PVE/API2/Disks/LVMThin.pm
> @@ -110,6 +110,9 @@ __PACKAGE__->register_method ({
>  	PVE::Diskmanage::assert_disk_unused($dev);
>  	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>  
> +	die "volume group with name '${name}' already exists\n"
> +	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> +

same here

>  	my $worker = sub {
>  	    PVE::Diskmanage::locked_disk_action(sub {
>  		PVE::Diskmanage::assert_disk_unused($dev);
> diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
> index eeb9f48..51380c4 100644
> --- a/PVE/API2/Disks/ZFS.pm
> +++ b/PVE/API2/Disks/ZFS.pm
> @@ -18,6 +18,43 @@ use base qw(PVE::RESTHandler);
>  my $ZPOOL = '/sbin/zpool';
>  my $ZFS = '/sbin/zfs';
>  
> +sub get_pool_data {
> +    if (!-f $ZPOOL) {
> +	die "zfsutils-linux not installed\n";
> +    }
> +
> +    my $propnames = [qw(name size alloc free frag dedup health)];
> +    my $numbers = {
> +	size => 1,
> +	alloc => 1,
> +	free => 1,
> +	frag => 1,
> +	dedup => 1,
> +    };
> +
> +    my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
> +
> +    my $pools = [];
> +
> +    run_command($cmd, outfunc => sub {
> +	my ($line) = @_;
> +
> +	    my @props = split('\s+', trim($line));
> +	    my $pool = {};
> +	    for (my $i = 0; $i < scalar(@$propnames); $i++) {
> +		if ($numbers->{$propnames->[$i]}) {
> +		    $pool->{$propnames->[$i]} = $props[$i] + 0;
> +		} else {
> +		    $pool->{$propnames->[$i]} = $props[$i];
> +		}
> +	    }
> +
> +	    push @$pools, $pool;
> +    });
> +
> +    return $pools;
> +}
> +
>  __PACKAGE__->register_method ({
>      name => 'index',
>      path => '',
> @@ -74,40 +111,7 @@ __PACKAGE__->register_method ({
>      code => sub {
>  	my ($param) = @_;
>  
> -	if (!-f $ZPOOL) {
> -	    die "zfsutils-linux not installed\n";
> -	}
> -
> -	my $propnames = [qw(name size alloc free frag dedup health)];
> -	my $numbers = {
> -	    size => 1,
> -	    alloc => 1,
> -	    free => 1,
> -	    frag => 1,
> -	    dedup => 1,
> -	};
> -
> -	my $cmd = [$ZPOOL,'list', '-HpPLo', join(',', @$propnames)];
> -
> -	my $pools = [];
> -
> -	run_command($cmd, outfunc => sub {
> -	    my ($line) = @_;
> -
> -		my @props = split('\s+', trim($line));
> -		my $pool = {};
> -		for (my $i = 0; $i < scalar(@$propnames); $i++) {
> -		    if ($numbers->{$propnames->[$i]}) {
> -			$pool->{$propnames->[$i]} = $props[$i] + 0;
> -		    } else {
> -			$pool->{$propnames->[$i]} = $props[$i];
> -		    }
> -		}
> -
> -		push @$pools, $pool;
> -	});
> -
> -	return $pools;
> +	return get_pool_data();
>      }});
>  
>  sub preparetree {
> @@ -336,6 +340,7 @@ __PACKAGE__->register_method ({
>  	my $user = $rpcenv->get_user();
>  
>  	my $name = $param->{name};
> +	my $node = $param->{node};

nit: please also change the usage further below in this API handler if 
you do this

>  	my $devs = [PVE::Tools::split_list($param->{devices})];
>  	my $raidlevel = $param->{raidlevel};
>  	my $compression = $param->{compression} // 'on';
> @@ -346,6 +351,10 @@ __PACKAGE__->register_method ({
>  	}
>  	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>  
> +	my $pools = get_pool_data();

and also here
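
e.g. (sketch only, same reasoning as for the LVM variant above - `zpool list`
shells out as well):

```perl
my $worker = sub {
    my $pools = get_pool_data();
    die "pool '${name}' already exists on node '${node}'\n"
        if grep { $_->{name} eq $name } @{$pools};

    ...
};
```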

> +	die "pool '${name}' already exists on node '$node'\n"
> +	    if grep { $_->{name} eq $name } @{$pools};
> +
>  	my $numdisks = scalar(@$devs);
>  	my $mindisks = {
>  	    single => 1,
> -- 
> 2.30.2
> 





Thread overview: 13+ messages
2022-07-15 11:58 [pve-devel] [PATCH storage v2 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 1/3] diskmanage: add mounted_paths Aaron Lauterer
2022-08-17 11:35   ` Fabian Grünbichler
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 2/3] disks: die if storage name is already in use Aaron Lauterer
2022-08-17 11:42   ` Fabian Grünbichler [this message]
2022-08-18 15:22     ` Aaron Lauterer
2022-08-18 15:31       ` Aaron Lauterer
2022-08-19  8:21         ` Fabian Grünbichler
2022-08-19  9:29           ` Aaron Lauterer
2022-08-19  8:20       ` Fabian Grünbichler
2022-08-19  9:28         ` Aaron Lauterer
2022-07-15 11:58 ` [pve-devel] [PATCH storage v2 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
2022-08-17 11:53   ` Fabian Grünbichler
