public inbox for pve-devel@lists.proxmox.com
From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use
Date: Wed, 13 Jul 2022 12:47:57 +0200	[thread overview]
Message-ID: <20220713104758.651614-3-a.lauterer@proxmox.com> (raw)
In-Reply-To: <20220713104758.651614-1-a.lauterer@proxmox.com>

If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE storage config for it, the creation will
likely fail midway due to checks done by the underlying storage layer
itself. This can leave behind disks that are already partitioned, which
users then need to clean up themselves.

By adding checks early on, not only against the PVE storage config but
also against the actual storage layer itself, we can die early, before
we touch any disk.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---

A somewhat sensible way I found for directory storages was to check if
the path is already in use / mounted. Maybe there are additional ways?

For zpools we don't have anything in ZFSPoolPlugin.pm, in contrast to
LVM, where the storage plugin provides easily callable methods to get a
list of VGs.
I therefore chose to call the zpool index API to get the list of ZFS
pools. Not sure if I should refactor that logic into a separate
function right away, or wait until we need it in more places?
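If that refactoring happens, a helper in ZFSPoolPlugin.pm could look
roughly like the sketch below. This is only an illustration of the
parsing step; `parse_zpool_names` is a hypothetical name, and the real
helper would wrap run_command() around `zpool list -H -o name` instead
of taking the output as a string:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: turn the output of `zpool list -H -o name`
# into a hash of existing pool names. The output is passed in as a
# string here so the parsing logic stands on its own.
sub parse_zpool_names {
    my ($output) = @_;
    my $pools = {};
    for my $line (split /\n/, $output // '') {
	$line =~ s/^\s+|\s+$//g;    # trim surrounding whitespace
	next if $line eq '';
	$pools->{$line} = 1;
    }
    return $pools;
}

# simulated `zpool list -H -o name` output
my $pools = parse_zpool_names("rpool\ntank\n");
print "rpool exists\n" if $pools->{rpool};
print "newpool free\n" if !$pools->{newpool};
```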

 PVE/API2/Disks/Directory.pm | 5 +++++
 PVE/API2/Disks/LVM.pm       | 3 +++
 PVE/API2/Disks/LVMThin.pm   | 3 +++
 PVE/API2/Disks/ZFS.pm       | 4 ++++
 4 files changed, 15 insertions(+)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index df63ba9..8e03229 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -208,6 +208,11 @@ __PACKAGE__->register_method ({
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	my $path = "/mnt/pve/$name";
+	if ((PVE::Diskmanage::mounted_paths()->{$path} // '') =~ /^(\/dev\/.+)$/) {
+	    die "a mountpoint for '${name}' already exists: ${path} ($1)\n";
+	}
+
 	my $worker = sub {
 	    my $path = "/mnt/pve/$name";
 	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index 6e4331a..a27afe2 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	die "volume group with name '${name}' already exists\n"
+	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
+
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 58ecb37..690c183 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -110,6 +110,9 @@ __PACKAGE__->register_method ({
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	die "volume group with name '${name}' already exists\n"
+	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
+
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index eeb9f48..ceb0212 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -346,6 +346,10 @@ __PACKAGE__->register_method ({
 	}
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	my $pools = PVE::API2::Disks::ZFS->index({ node => $param->{node} });
+	my $poollist = { map { $_->{name} => 1 } @{$pools} };
+	die "pool '${name}' already exists on node '$param->{node}'\n" if $poollist->{$name};
+
 	my $numdisks = scalar(@$devs);
 	my $mindisks = {
 	    single => 1,
-- 
2.30.2


Thread overview: 10+ messages
2022-07-13 10:47 [pve-devel] [PATCH storage 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
2022-07-13 10:47 ` [pve-devel] [PATCH storage 1/3] diskmanage: add mounted_paths Aaron Lauterer
2022-07-14 11:13   ` Dominik Csapak
2022-07-14 11:37     ` Aaron Lauterer
2022-07-13 10:47 ` Aaron Lauterer [this message]
2022-07-14 11:13   ` [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use Dominik Csapak
2022-07-14 12:12     ` Fabian Ebner
2022-07-14 12:30       ` Dominik Csapak
2022-07-13 10:47 ` [pve-devel] [PATCH storage 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
2022-07-14 11:23   ` Dominik Csapak
