public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH storage 0/3] disks: add checks, allow add_storage on other nodes
@ 2022-07-13 10:47 Aaron Lauterer
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 1/3] diskmanage: add mounted_paths Aaron Lauterer
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Aaron Lauterer @ 2022-07-13 10:47 UTC (permalink / raw)
  To: pve-devel

This series deals mainly with two things. First, it adds more checks
prior to actually setting up a new local storage, to avoid leaving
behind half-provisioned disks in case a storage (LVM, ZFS, ...) of the
same name already exists.
Secondly, it changes the behavior of the "Add Storage" flag. The flag
can now be kept enabled; if a matching storage already exists in the
cluster, the local storage is created and the node is added to the
nodes list of the PVE storage config.

The first patch is a prerequisite to be able to check if a mount path
for the Directory storage type is already mounted.
The second patch implements the actual checks.
The third patch adds the changed behavior for the "Add Storage" part.

More details in the actual patches.

Aaron Lauterer (3):
  diskmanage: add mounted_paths
  disks: die if storage name is already in use
  disks: allow add_storage for already configured local storage

 PVE/API2/Disks/Directory.pm | 37 ++++++++++++++++++++++-----------
 PVE/API2/Disks/LVM.pm       | 33 +++++++++++++++++++----------
 PVE/API2/Disks/LVMThin.pm   | 33 +++++++++++++++++++----------
 PVE/API2/Disks/ZFS.pm       | 33 ++++++++++++++++++++---------
 PVE/Diskmanage.pm           | 13 ++++++++++++
 PVE/Storage.pm              | 41 +++++++++++++++++++++++++++++++++++++
 6 files changed, 146 insertions(+), 44 deletions(-)

-- 
2.30.2

* [pve-devel] [PATCH storage 1/3] diskmanage: add mounted_paths
  2022-07-13 10:47 [pve-devel] [PATCH storage 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
@ 2022-07-13 10:47 ` Aaron Lauterer
  2022-07-14 11:13   ` Dominik Csapak
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use Aaron Lauterer
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
  2 siblings, 1 reply; 10+ messages in thread
From: Aaron Lauterer @ 2022-07-13 10:47 UTC (permalink / raw)
  To: pve-devel

returns similar values to mounted_blockdevs, but uses the mounted path
as the key and the blockdev path as the value

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
used for the Directory check in patch 2
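
For context, a minimal usage sketch (the example paths are made up):

----
my $mounted = PVE::Diskmanage::mounted_paths();
# e.g. { '/' => '/dev/sda3', '/mnt/pve/mydir' => '/dev/sdb1', ... }
die "path is already mounted\n" if defined($mounted->{'/mnt/pve/mydir'});
----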

 PVE/Diskmanage.pm | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 8ed7a8b..c5c20de 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -499,6 +499,19 @@ sub mounted_blockdevs {
     return $mounted;
 }
 
+sub mounted_paths {
+    my $mounted = {};
+
+    my $mounts = PVE::ProcFSTools::parse_proc_mounts();
+
+    foreach my $mount (@$mounts) {
+	next if $mount->[0] !~ m|^/dev/|;
+	$mounted->{abs_path($mount->[1])} = $mount->[0];
+    };
+
+    return $mounted;
+}
+
 sub get_disks {
     my ($disks, $nosmart, $include_partitions) = @_;
     my $disklist = {};
-- 
2.30.2

* [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use
  2022-07-13 10:47 [pve-devel] [PATCH storage 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 1/3] diskmanage: add mounted_paths Aaron Lauterer
@ 2022-07-13 10:47 ` Aaron Lauterer
  2022-07-14 11:13   ` Dominik Csapak
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
  2 siblings, 1 reply; 10+ messages in thread
From: Aaron Lauterer @ 2022-07-13 10:47 UTC (permalink / raw)
  To: pve-devel

If a storage of that type and name already exists (LVM, zpool, ...) but
we do not have a Proxmox VE storage config for it, it is likely that the
creation will fail midway due to checks done by the underlying storage
layer itself. This in turn can leave behind disks that are already
partitioned, which users would need to clean up themselves.

By adding checks early on, not only against the PVE storage config but
also against the actual storage layer itself, we can die early enough,
before we touch any disk.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---

A somewhat sensible way I found for Directory storages was to check if
the path is already in use / mounted. Maybe there are additional ways?

For zpools we don't have anything in the ZFSPoolPlugin.pm, in contrast
to LVM, where the storage plugin provides easily callable methods to get
a list of VGs.
I therefore chose to call the zpool index API to get the list of ZFS
pools. Not sure if I should refactor that logic into a separate function
right away, or wait until we might need it in more and different places?
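
If we do refactor, a rough sketch of what such a helper could look like
(name and placement are assumptions, untested):

----
sub zfs_pool_exists {
    my ($node, $name) = @_;
    # reuse the existing index API call to list the ZFS pools on the node
    my $pools = PVE::API2::Disks::ZFS->index({ node => $node });
    return scalar(grep { $_->{name} eq $name } @$pools);
}
----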

 PVE/API2/Disks/Directory.pm | 5 +++++
 PVE/API2/Disks/LVM.pm       | 3 +++
 PVE/API2/Disks/LVMThin.pm   | 3 +++
 PVE/API2/Disks/ZFS.pm       | 4 ++++
 4 files changed, 15 insertions(+)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index df63ba9..8e03229 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -208,6 +208,11 @@ __PACKAGE__->register_method ({
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	my $mounted = PVE::Diskmanage::mounted_paths();
+	if ($mounted->{$path} =~ /^(\/dev\/.+)$/ ) {
+	    die "a mountpoint for '${name}' already exists: ${path} ($1)\n";
+	}
+
 	my $worker = sub {
 	    my $path = "/mnt/pve/$name";
 	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index 6e4331a..a27afe2 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	die "volume group with name '${name}' already exists\n"
+	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
+
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 58ecb37..690c183 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -110,6 +110,9 @@ __PACKAGE__->register_method ({
 	PVE::Diskmanage::assert_disk_unused($dev);
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	die "volume group with name '${name}' already exists\n"
+	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
+
 	my $worker = sub {
 	    PVE::Diskmanage::locked_disk_action(sub {
 		PVE::Diskmanage::assert_disk_unused($dev);
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index eeb9f48..ceb0212 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -346,6 +346,10 @@ __PACKAGE__->register_method ({
 	}
 	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
 
+	my $pools = PVE::API2::Disks::ZFS->index({ node => $param->{node} });
+	my $poollist = { map { $_->{name} => 1 } @{$pools} };
+	die "pool '${name}' already exists on node '$node'\n" if $poollist->{$name};
+
 	my $numdisks = scalar(@$devs);
 	my $mindisks = {
 	    single => 1,
-- 
2.30.2

* [pve-devel] [PATCH storage 3/3] disks: allow add_storage for already configured local storage
  2022-07-13 10:47 [pve-devel] [PATCH storage 0/3] disks: add checks, allow add_storage on other nodes Aaron Lauterer
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 1/3] diskmanage: add mounted_paths Aaron Lauterer
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use Aaron Lauterer
@ 2022-07-13 10:47 ` Aaron Lauterer
  2022-07-14 11:23   ` Dominik Csapak
  2 siblings, 1 reply; 10+ messages in thread
From: Aaron Lauterer @ 2022-07-13 10:47 UTC (permalink / raw)
  To: pve-devel

One of the smaller annoyances, especially for less experienced users, is
the fact that when creating a local storage (ZFS, LVM (thin), dir) in a
cluster, one can only leave the "Add Storage" option enabled the first
time.

On any following node, this option had to be disabled and the new node
manually added to the list of nodes for that storage.

This patch changes the behavior. If a storage of the same name already
exists, it verifies that the necessary parameters match the already
existing one.
Then, if the 'nodes' parameter is set, it adds the current node and
updates the storage config.
If there is no nodes list, nothing else needs to be done, and the GUI
will stop showing the question mark for the configured, but until now
nonexistent, local storage.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---

I decided to make the storage type a parameter of the
'assert_sid_usable_on_node' function and not part of the $verify_params,
so that it cannot be forgotten when calling it.

 PVE/API2/Disks/Directory.pm | 32 ++++++++++++++++++-----------
 PVE/API2/Disks/LVM.pm       | 30 +++++++++++++++++----------
 PVE/API2/Disks/LVMThin.pm   | 30 +++++++++++++++++----------
 PVE/API2/Disks/ZFS.pm       | 29 +++++++++++++++++---------
 PVE/Storage.pm              | 41 +++++++++++++++++++++++++++++++++++++
 5 files changed, 118 insertions(+), 44 deletions(-)

diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index 8e03229..00b9a37 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -203,10 +203,14 @@ __PACKAGE__->register_method ({
 	my $dev = $param->{device};
 	my $node = $param->{node};
 	my $type = $param->{filesystem} // 'ext4';
+	my $path = "/mnt/pve/$name";
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { path => $path };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'dir', $verify_params)
+	    if $param->{add_storage};
 
 	my $mounted = PVE::Diskmanage::mounted_paths();
 	if ($mounted->{$path} =~ /^(\/dev\/.+)$/ ) {
@@ -214,7 +218,6 @@ __PACKAGE__->register_method ({
 	}
 
 	my $worker = sub {
-	    my $path = "/mnt/pve/$name";
 	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
 	    my $mountunitpath = "/etc/systemd/system/$mountunitname";
 
@@ -287,16 +290,21 @@ __PACKAGE__->register_method ({
 		run_command(['systemctl', 'start', $mountunitname]);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'dir',
-			storage => $name,
-			content => 'rootdir,images,iso,backup,vztmpl,snippets',
-			is_mountpoint => 1,
-			path => $path,
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    my $status = PVE::Storage::create_or_update_storage($name, $node);
+		    if ($status->{create}) {
+			my $storage_params = {
+			    type => 'dir',
+			    storage => $name,
+			    content => 'rootdir,images,iso,backup,vztmpl,snippets',
+			    is_mountpoint => 1,
+			    path => $path,
+			    nodes => $node,
+			};
+			PVE::API2::Storage::Config->create($storage_params);
+		    } elsif ($status->{update_params}) {
+			print "Adding '${node}' to nodes for storage '${name}'\n";
+			PVE::API2::Storage::Config->update($status->{update_params});
+		    } # no action needed if storage exists, but not limited to nodes
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index a27afe2..88b8d0b 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -150,7 +150,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { vgname => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvm', $verify_params)
+	    if $param->{add_storage};
 
 	die "volume group with name '${name}' already exists\n"
 	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
@@ -169,16 +172,21 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvm',
-			vgname => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			shared => 0,
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    my $status = PVE::Storage::create_or_update_storage($name, $node);
+		    if ($status->{create}) {
+			my $storage_params = {
+			    type => 'lvm',
+			    vgname => $name,
+			    storage => $name,
+			    content => 'rootdir,images',
+			    shared => 0,
+			    nodes => $node,
+			};
+			PVE::API2::Storage::Config->create($storage_params);
+		    } elsif ($status->{update_params}) {
+			print "Adding '${node}' to nodes for storage '${name}'\n";
+			PVE::API2::Storage::Config->update($status->{update_params});
+		    } # no action needed if storage exists, but not limited to nodes
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 690c183..d880154 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -108,7 +108,10 @@ __PACKAGE__->register_method ({
 
 	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	PVE::Diskmanage::assert_disk_unused($dev);
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { vgname => $name, thinpool => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvmthin', $verify_params)
+	    if $param->{add_storage};
 
 	die "volume group with name '${name}' already exists\n"
 	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
@@ -147,16 +150,21 @@ __PACKAGE__->register_method ({
 		PVE::Diskmanage::udevadm_trigger($dev);
 
 		if ($param->{add_storage}) {
-		    my $storage_params = {
-			type => 'lvmthin',
-			vgname => $name,
-			thinpool => $name,
-			storage => $name,
-			content => 'rootdir,images',
-			nodes => $node,
-		    };
-
-		    PVE::API2::Storage::Config->create($storage_params);
+		    my $status = PVE::Storage::create_or_update_storage($name, $node);
+		    if ($status->{create}) {
+			my $storage_params = {
+			    type => 'lvmthin',
+			    vgname => $name,
+			    thinpool => $name,
+			    storage => $name,
+			    content => 'rootdir,images',
+			    nodes => $node,
+			};
+			PVE::API2::Storage::Config->create($storage_params);
+		    } elsif ($status->{update_params}) {
+			print "Adding '${node}' to nodes for storage '${name}'\n";
+			PVE::API2::Storage::Config->update($status->{update_params});
+		    } # no action needed if storage exists, but not limited to nodes
 		}
 	    });
 	};
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index ceb0212..793bc14 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -337,6 +337,7 @@ __PACKAGE__->register_method ({
 
 	my $name = $param->{name};
 	my $devs = [PVE::Tools::split_list($param->{devices})];
+	my $node = $param->{node};
 	my $raidlevel = $param->{raidlevel};
 	my $compression = $param->{compression} // 'on';
 
@@ -344,7 +345,10 @@ __PACKAGE__->register_method ({
 	    $dev = PVE::Diskmanage::verify_blockdev_path($dev);
 	    PVE::Diskmanage::assert_disk_unused($dev);
 	}
-	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
+
+	my $verify_params = { pool => $name };
+	PVE::Storage::assert_sid_usable_on_node($name, $node, 'zfspool', $verify_params)
+	    if $param->{add_storage};
 
 	my $pools = PVE::API2::Disks::ZFS->index({ node => $param->{node} });
 	my $poollist = { map { $_->{name} => 1 } @{$pools} };
@@ -427,15 +431,20 @@ __PACKAGE__->register_method ({
 	    PVE::Diskmanage::udevadm_trigger($devs->@*);
 
 	    if ($param->{add_storage}) {
-		my $storage_params = {
-		    type => 'zfspool',
-		    pool => $name,
-		    storage => $name,
-		    content => 'rootdir,images',
-		    nodes => $param->{node},
-		};
-
-		PVE::API2::Storage::Config->create($storage_params);
+		my $status = PVE::Storage::create_or_update_storage($name, $node);
+		if ($status->{create}) {
+		    my $storage_params = {
+			type => 'zfspool',
+			pool => $name,
+			storage => $name,
+			content => 'rootdir,images',
+			nodes => $param->{node},
+		    };
+		    PVE::API2::Storage::Config->create($storage_params);
+		} elsif ($status->{update_params}) {
+		    print "Adding '${node}' to nodes for storage '${name}'\n";
+		    PVE::API2::Storage::Config->update($status->{update_params});
+		} # no action needed if storage exists, but not limited to nodes
 	    }
 	};
 
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b9c53a1..6dfcc3c 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -2127,6 +2127,47 @@ sub assert_sid_unused {
     return undef;
 }
 
+# Checks if storage id can be used on the node, intended for local storage types
+# Compares the type and verify_params hash against a potentially configured
+# storage. Use it for storage parameters that need to be the same throughout
+# the cluster. For example, pool for ZFS, vg_name for LVM, ...
+sub assert_sid_usable_on_node {
+    my ($sid, $node, $type, $verify_params) = @_;
+
+    my $cfg = config();
+    if (my $scfg = storage_config($cfg, $sid, 1)) {
+	$node = PVE::INotify::nodename() if !$node || ($node eq 'localhost');
+	die "storage ID '${sid}' already exists on node ${node}\n"
+	    if defined($scfg->{nodes}) && $scfg->{nodes}->{$node};
+
+	$verify_params->{type} = $type;
+	for my $key (keys %$verify_params) {
+	    die "option '${key}' ($verify_params->{$key}) does not match ".
+		"existing storage configuration: $scfg->{$key}\n"
+		if $verify_params->{$key} ne $scfg->{$key};
+	}
+    }
+}
+
+# returns if storage needs to be created or updated. In the update case it
+# checks for a node list and returns the needed parameters to update the
+# storage with the new node list
+sub create_or_update_storage {
+    my ($sid, $node) = @_;
+
+    my $cfg = config();
+    my $status = { create => 1 };
+    if (my $scfg = storage_config($cfg, $sid, 1)) {
+	$status->{create} = 0;
+	if ($scfg->{nodes}) {
+	    $scfg->{nodes}->{$node} = 1;
+	    $status->{update_params}->{nodes} = join(',', sort keys $scfg->{nodes}->%*),
+	    $status->{update_params}->{storage} = $sid;
+	}
+    }
+    return $status;
+}
+
 # removes leading/trailing spaces and (back)slashes completely
 # substitutes every non-ASCII-alphanumerical char with '_', except '_.-'
 sub normalize_content_filename {
-- 
2.30.2

* Re: [pve-devel] [PATCH storage 1/3] diskmanage: add mounted_paths
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 1/3] diskmanage: add mounted_paths Aaron Lauterer
@ 2022-07-14 11:13   ` Dominik Csapak
  2022-07-14 11:37     ` Aaron Lauterer
  0 siblings, 1 reply; 10+ messages in thread
From: Dominik Csapak @ 2022-07-14 11:13 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

comment inline

On 7/13/22 12:47, Aaron Lauterer wrote:
> returns similar values as mounted_blockdevs, but uses the mounted path
> as key and the blockdev path as value
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> used for the Directory check in patch 2
> 
>   PVE/Diskmanage.pm | 13 +++++++++++++
>   1 file changed, 13 insertions(+)
> 
> diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
> index 8ed7a8b..c5c20de 100644
> --- a/PVE/Diskmanage.pm
> +++ b/PVE/Diskmanage.pm
> @@ -499,6 +499,19 @@ sub mounted_blockdevs {
>       return $mounted;
>   }
>   
> +sub mounted_paths {
> +    my $mounted = {};
> +
> +    my $mounts = PVE::ProcFSTools::parse_proc_mounts();
> +
> +    foreach my $mount (@$mounts) {
> +	next if $mount->[0] !~ m|^/dev/|;

does it really make sense here to filter by /dev/?
for 'mounted_blockdevs' it makes sense, since we want to have
the mounted 'block devices', but here we talk about 'paths',
and /sys, /proc, etc. are paths too, so you could simply omit that check

> +	$mounted->{abs_path($mount->[1])} = $mount->[0];
> +    };
> +
> +    return $mounted;
> +}
> +
>   sub get_disks {
>       my ($disks, $nosmart, $include_partitions) = @_;
>       my $disklist = {};

* Re: [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use Aaron Lauterer
@ 2022-07-14 11:13   ` Dominik Csapak
  2022-07-14 12:12     ` Fabian Ebner
  0 siblings, 1 reply; 10+ messages in thread
From: Dominik Csapak @ 2022-07-14 11:13 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

comments inline

On 7/13/22 12:47, Aaron Lauterer wrote:
> If a storage of that type and name already exists (LVM, zpool, ...) but
> we do not have a Proxmox VE Storage config for it, it is likely that the
> creation will fail mid way due to checks done by the underlying storage
> layer itself. This in turn can lead to disks that are already
> partitioned. Users would need to clean this up themselves.
> 
> By adding checks early on, not only checking against the PVE storage
> config, but against the actual storage type itself, we can die early
> enough, before we touch any disk.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> 
> A somewhat sensible way I found for Directory storages was to check if the
> path is already in use / mounted. Maybe there are additional ways?
> 
> For zpools we don't have anything in the ZFSPoolPlugin.pm, in contrast
> to LVM where the storage plugins provides easily callable methods to get
> a list of VGs.
> I therefore chose to call the zpool index API to get the list of ZFS
> pools. Not sure if I should refactor that logic into a separate function
> right away or wait until we might need it at more and different places?
> 
>   PVE/API2/Disks/Directory.pm | 5 +++++
>   PVE/API2/Disks/LVM.pm       | 3 +++
>   PVE/API2/Disks/LVMThin.pm   | 3 +++
>   PVE/API2/Disks/ZFS.pm       | 4 ++++
>   4 files changed, 15 insertions(+)
> 
> diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
> index df63ba9..8e03229 100644
> --- a/PVE/API2/Disks/Directory.pm
> +++ b/PVE/API2/Disks/Directory.pm
> @@ -208,6 +208,11 @@ __PACKAGE__->register_method ({
>   	PVE::Diskmanage::assert_disk_unused($dev);
>   	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>   
> +	my $mounted = PVE::Diskmanage::mounted_paths();
> +	if ($mounted->{$path} =~ /^(\/dev\/.+)$/ ) {

same reasoning as in the last patch: we don't need to actually filter
by '/dev/' here, since we don't want *anything* mounted there.

but even if we do want to filter, there's no need imo to do it
both in 'mounted_paths' and here, one place should be enough

aside from that, there are other checks that we could do here too,
e.g. checking if the mount unit already exists, because the dir does not
have to be currently mounted

(we even have that variable below, we'd just have to pull it out
of the worker and do a '-e $mountunitpath')
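
a rough sketch of that idea, assuming $path is hoisted out of the
worker as well (untested):

----
my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
my $mountunitpath = "/etc/systemd/system/$mountunitname";
die "a systemd mount unit already exists: ${mountunitpath}\n"
    if -e $mountunitpath;
----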

> +	    die "a mountpoint for '${name}' already exists: ${path} ($1)\n";
> +	}
> +
>   	my $worker = sub {
>   	    my $path = "/mnt/pve/$name";
>   	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
> index 6e4331a..a27afe2 100644
> --- a/PVE/API2/Disks/LVM.pm
> +++ b/PVE/API2/Disks/LVM.pm
> @@ -152,6 +152,9 @@ __PACKAGE__->register_method ({
>   	PVE::Diskmanage::assert_disk_unused($dev);
>   	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>   
> +	die "volume group with name '${name}' already exists\n"
> +	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> +
>   	my $worker = sub {
>   	    PVE::Diskmanage::locked_disk_action(sub {
>   		PVE::Diskmanage::assert_disk_unused($dev);
> diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
> index 58ecb37..690c183 100644
> --- a/PVE/API2/Disks/LVMThin.pm
> +++ b/PVE/API2/Disks/LVMThin.pm
> @@ -110,6 +110,9 @@ __PACKAGE__->register_method ({
>   	PVE::Diskmanage::assert_disk_unused($dev);
>   	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>   
> +	die "volume group with name '${name}' already exists\n"
> +	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> +
>   	my $worker = sub {
>   	    PVE::Diskmanage::locked_disk_action(sub {
>   		PVE::Diskmanage::assert_disk_unused($dev);
> diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
> index eeb9f48..ceb0212 100644
> --- a/PVE/API2/Disks/ZFS.pm
> +++ b/PVE/API2/Disks/ZFS.pm
> @@ -346,6 +346,10 @@ __PACKAGE__->register_method ({
>   	}
>   	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
>   
> +	my $pools = PVE::API2::Disks::ZFS->index({ node => $param->{node} });

personally i'd refactor that out, especially since we're in the same
file, so there's not much headache regarding public functions/scope

> +	my $poollist = { map { $_->{name} => 1 } @{$pools} };

does that really make sense here? would it not be easier to just 
iterate? e.g.

----
for my $pool (@$pools) {
     die "..." if $pool->{name} eq $name;
}
----

(i admit, it's 1 line longer, but a bit more readable?)

> +	die "pool '${name}' already exists on node '$node'\n" if $poollist->{$name};
> +
>   	my $numdisks = scalar(@$devs);
>   	my $mindisks = {
>   	    single => 1,

* Re: [pve-devel] [PATCH storage 3/3] disks: allow add_storage for already configured local storage
  2022-07-13 10:47 ` [pve-devel] [PATCH storage 3/3] disks: allow add_storage for already configured local storage Aaron Lauterer
@ 2022-07-14 11:23   ` Dominik Csapak
  0 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2022-07-14 11:23 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

comments inline

On 7/13/22 12:47, Aaron Lauterer wrote:
> One of the smaller annoyances, especially for less experienced users, is
> the fact, that when creating a local storage (ZFS, LVM (thin), dir) in a
> cluster, one can only leave the "Add Storage" option enabled the first
> time.
> 
> On any following node, this option needed to be disabled and the new
> node manually added to the list of nodes for that storage.
> 
> This patch changes the behavior. If a storage of the same name already
> exists, it will verify that necessary parameters match the already
> existing one.
> Then, if the 'nodes' parameter is set, it adds the current node and
> updates the storage config.
> In case there is no nodes list, nothing else needs to be done, and the
> GUI will stop showing the question mark for the configured, but until
> then, not existing local storage.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> 
> I decided to make the storage type a parameter of the
> 'assert_sid_usable_on_node' function and not part of the $verify_params,
> so that it cannot be forgotten when calling it.
> 
>   PVE/API2/Disks/Directory.pm | 32 ++++++++++++++++++-----------
>   PVE/API2/Disks/LVM.pm       | 30 +++++++++++++++++----------
>   PVE/API2/Disks/LVMThin.pm   | 30 +++++++++++++++++----------
>   PVE/API2/Disks/ZFS.pm       | 29 +++++++++++++++++---------
>   PVE/Storage.pm              | 41 +++++++++++++++++++++++++++++++++++++
>   5 files changed, 118 insertions(+), 44 deletions(-)
> 
> diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
> index 8e03229..00b9a37 100644
> --- a/PVE/API2/Disks/Directory.pm
> +++ b/PVE/API2/Disks/Directory.pm
> @@ -203,10 +203,14 @@ __PACKAGE__->register_method ({
>   	my $dev = $param->{device};
>   	my $node = $param->{node};
>   	my $type = $param->{filesystem} // 'ext4';
> +	my $path = "/mnt/pve/$name";
>   
>   	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
>   	PVE::Diskmanage::assert_disk_unused($dev);
> -	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> +
> +	my $verify_params = { path => $path };
> +	PVE::Storage::assert_sid_usable_on_node($name, $node, 'dir', $verify_params)
> +	    if $param->{add_storage};
>   
>   	my $mounted = PVE::Diskmanage::mounted_paths();
>   	if ($mounted->{$path} =~ /^(\/dev\/.+)$/ ) {
> @@ -214,7 +218,6 @@ __PACKAGE__->register_method ({
>   	}
>   
>   	my $worker = sub {
> -	    my $path = "/mnt/pve/$name";
>   	    my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
>   	    my $mountunitpath = "/etc/systemd/system/$mountunitname";
>   
> @@ -287,16 +290,21 @@ __PACKAGE__->register_method ({
>   		run_command(['systemctl', 'start', $mountunitname]);
>   
>   		if ($param->{add_storage}) {
> -		    my $storage_params = {
> -			type => 'dir',
> -			storage => $name,
> -			content => 'rootdir,images,iso,backup,vztmpl,snippets',
> -			is_mountpoint => 1,
> -			path => $path,
> -			nodes => $node,
> -		    };
> -
> -		    PVE::API2::Storage::Config->create($storage_params);
> +		    my $status = PVE::Storage::create_or_update_storage($name, $node);
> +		    if ($status->{create}) {
> +			my $storage_params = {
> +			    type => 'dir',
> +			    storage => $name,
> +			    content => 'rootdir,images,iso,backup,vztmpl,snippets',
> +			    is_mountpoint => 1,
> +			    path => $path,
> +			    nodes => $node,
> +			};
> +			PVE::API2::Storage::Config->create($storage_params);
> +		    } elsif ($status->{update_params}) {
> +			print "Adding '${node}' to nodes for storage '${name}'\n";
> +			PVE::API2::Storage::Config->update($status->{update_params});
> +		    } # no action needed if storage exists, but not limited to nodes
>   		}
>   	    });
>   	};
> diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
> index a27afe2..88b8d0b 100644
> --- a/PVE/API2/Disks/LVM.pm
> +++ b/PVE/API2/Disks/LVM.pm
> @@ -150,7 +150,10 @@ __PACKAGE__->register_method ({
>   
>   	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
>   	PVE::Diskmanage::assert_disk_unused($dev);
> -	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> +
> +	my $verify_params = { vgname => $name };
> +	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvm', $verify_params)
> +	    if $param->{add_storage};
>   
>   	die "volume group with name '${name}' already exists\n"
>   	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> @@ -169,16 +172,21 @@ __PACKAGE__->register_method ({
>   		PVE::Diskmanage::udevadm_trigger($dev);
>   
>   		if ($param->{add_storage}) {
> -		    my $storage_params = {
> -			type => 'lvm',
> -			vgname => $name,
> -			storage => $name,
> -			content => 'rootdir,images',
> -			shared => 0,
> -			nodes => $node,
> -		    };
> -
> -		    PVE::API2::Storage::Config->create($storage_params);
> +		    my $status = PVE::Storage::create_or_update_storage($name, $node);
> +		    if ($status->{create}) {
> +			my $storage_params = {
> +			    type => 'lvm',
> +			    vgname => $name,
> +			    storage => $name,
> +			    content => 'rootdir,images',
> +			    shared => 0,
> +			    nodes => $node,
> +			};
> +			PVE::API2::Storage::Config->create($storage_params);
> +		    } elsif ($status->{update_params}) {
> +			print "Adding '${node}' to nodes for storage '${name}'\n";
> +			PVE::API2::Storage::Config->update($status->{update_params});
> +		    } # no action needed if storage exists, but not limited to nodes
>   		}
>   	    });
>   	};
> diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
> index 690c183..d880154 100644
> --- a/PVE/API2/Disks/LVMThin.pm
> +++ b/PVE/API2/Disks/LVMThin.pm
> @@ -108,7 +108,10 @@ __PACKAGE__->register_method ({
>   
>   	$dev = PVE::Diskmanage::verify_blockdev_path($dev);
>   	PVE::Diskmanage::assert_disk_unused($dev);
> -	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> +
> +	my $verify_params = { vgname => $name, thinpool => $name };
> +	PVE::Storage::assert_sid_usable_on_node($name, $node, 'lvmthin', $verify_params)
> +	    if $param->{add_storage};
>   
>   	die "volume group with name '${name}' already exists\n"
>   	    if PVE::Storage::LVMPlugin::lvm_vgs()->{$name};
> @@ -147,16 +150,21 @@ __PACKAGE__->register_method ({
>   		PVE::Diskmanage::udevadm_trigger($dev);
>   
>   		if ($param->{add_storage}) {
> -		    my $storage_params = {
> -			type => 'lvmthin',
> -			vgname => $name,
> -			thinpool => $name,
> -			storage => $name,
> -			content => 'rootdir,images',
> -			nodes => $node,
> -		    };
> -
> -		    PVE::API2::Storage::Config->create($storage_params);
> +		    my $status = PVE::Storage::create_or_update_storage($name, $node);
> +		    if ($status->{create}) {
> +			my $storage_params = {
> +			    type => 'lvmthin',
> +			    vgname => $name,
> +			    thinpool => $name,
> +			    storage => $name,
> +			    content => 'rootdir,images',
> +			    nodes => $node,
> +			};
> +			PVE::API2::Storage::Config->create($storage_params);
> +		    } elsif ($status->{update_params}) {
> +			print "Adding '${node}' to nodes for storage '${name}'\n";
> +			PVE::API2::Storage::Config->update($status->{update_params});
> +		    } # no action needed if storage exists, but not limited to nodes
>   		}
>   	    });
>   	};
> diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
> index ceb0212..793bc14 100644
> --- a/PVE/API2/Disks/ZFS.pm
> +++ b/PVE/API2/Disks/ZFS.pm
> @@ -337,6 +337,7 @@ __PACKAGE__->register_method ({
>   
>   	my $name = $param->{name};
>   	my $devs = [PVE::Tools::split_list($param->{devices})];
> +	my $node = $param->{node};
>   	my $raidlevel = $param->{raidlevel};
>   	my $compression = $param->{compression} // 'on';
>   
> @@ -344,7 +345,10 @@ __PACKAGE__->register_method ({
>   	    $dev = PVE::Diskmanage::verify_blockdev_path($dev);
>   	    PVE::Diskmanage::assert_disk_unused($dev);
>   	}
> -	PVE::Storage::assert_sid_unused($name) if $param->{add_storage};
> +
> +	my $verify_params = { pool => $name };
> +	PVE::Storage::assert_sid_usable_on_node($name, $node, 'zfspool', $verify_params)
> +	    if $param->{add_storage};
>   
>   	my $pools = PVE::API2::Disks::ZFS->index({ node => $param->{node} });
>   	my $poollist = { map { $_->{name} => 1 } @{$pools} };
> @@ -427,15 +431,20 @@ __PACKAGE__->register_method ({
>   	    PVE::Diskmanage::udevadm_trigger($devs->@*);
>   
>   	    if ($param->{add_storage}) {
> -		my $storage_params = {
> -		    type => 'zfspool',
> -		    pool => $name,
> -		    storage => $name,
> -		    content => 'rootdir,images',
> -		    nodes => $param->{node},
> -		};
> -
> -		PVE::API2::Storage::Config->create($storage_params);
> +		my $status = PVE::Storage::create_or_update_storage($name, $node);
> +		if ($status->{create}) {
> +		    my $storage_params = {
> +			type => 'zfspool',
> +			pool => $name,
> +			storage => $name,
> +			content => 'rootdir,images',
> +			nodes => $param->{node},
> +		    };
> +		    PVE::API2::Storage::Config->create($storage_params);
> +		} elsif ($status->{update_params}) {
> +		    print "Adding '${node}' to nodes for storage '${name}'\n";
> +		    PVE::API2::Storage::Config->update($status->{update_params});
> +		} # no action needed if storage exists, but not limited to nodes

i've read that chunk 4 times now (apart from the params assembly ofc),
and imho this could belong in the 'create_or_update_storage'
function itself, at least we wouldn't have the same

if (create) {
     create()
} else {
     update()
}

code multiple times

if you're concerned about the dependency of PVE::Storage on
PVE::API2::Storage::Config, i'd just move it there, then there is no
real problem
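
a rough sketch of how that consolidation could look (signature and
details are assumptions, untested):

----
sub create_or_update_storage {
    my ($sid, $node, $storage_params) = @_;

    my $cfg = config();
    if (my $scfg = storage_config($cfg, $sid, 1)) {
	# nothing to do if the storage exists but is not limited to nodes
	return if !$scfg->{nodes};
	$scfg->{nodes}->{$node} = 1;
	print "Adding '${node}' to nodes for storage '${sid}'\n";
	PVE::API2::Storage::Config->update({
	    storage => $sid,
	    nodes => join(',', sort keys $scfg->{nodes}->%*),
	});
    } else {
	PVE::API2::Storage::Config->create($storage_params);
    }
}
----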

>   	    }
>   	};
>   
> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
> index b9c53a1..6dfcc3c 100755
> --- a/PVE/Storage.pm
> +++ b/PVE/Storage.pm
> @@ -2127,6 +2127,47 @@ sub assert_sid_unused {
>       return undef;
>   }
>   
> +# Checks if storage id can be used on the node, intended for local storage types
> +# Compares the type and verify_params hash against a potentially configured
> +# storage. Use it for storage parameters that need to be the same throughout
> +# the cluster. For example, pool for ZFS, vg_name for LVM, ...
> +sub assert_sid_usable_on_node {
> +    my ($sid, $node, $type, $verify_params) = @_;
> +
> +    my $cfg = config();
> +    if (my $scfg = storage_config($cfg, $sid, 1)) {
> +	$node = PVE::INotify::nodename() if !$node || ($node eq 'localhost');
> +	die "storage ID '${sid}' already exists on node ${node}\n"
> +	    if defined($scfg->{nodes}) && $scfg->{nodes}->{$node};
> +
> +	$verify_params->{type} = $type;
> +	for my $key (keys %$verify_params) {
> +	    die "option '${key}' ($verify_params->{$key}) does not match ".
> +		"existing storage configuration: $scfg->{$key}\n"
> +		if $verify_params->{$key} ne $scfg->{$key};

that'll log a warning if $scfg->{$key} is undef, so a "// ''" could help
here (or an explicit definedness check)
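
e.g., roughly:

----
die "option '${key}' does not match existing storage configuration\n"
    if ($verify_params->{$key} // '') ne ($scfg->{$key} // '');
----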

> +	}
> +    }
> +}
> +
> +# returns if storage needs to be created or updated. In the update case it
> +# checks for a node list and returns the needed parameters to update the
> +# storage with the new node list
> +sub create_or_update_storage {
> +    my ($sid, $node) = @_;
> +
> +    my $cfg = config();
> +    my $status = { create => 1 };
> +    if (my $scfg = storage_config($cfg, $sid, 1)) {
> +	$status->{create} = 0;
> +	if ($scfg->{nodes}) {
> +	    $scfg->{nodes}->{$node} = 1;
> +	    $status->{update_params}->{nodes} = join(',', sort keys $scfg->{nodes}->%*),
> +	    $status->{update_params}->{storage} = $sid;
> +	}
> +    }
> +    return $status;
> +}
> +
>   # removes leading/trailing spaces and (back)slashes completely
>   # substitutes every non-ASCII-alphanumerical char with '_', except '_.-'
>   sub normalize_content_filename {

* Re: [pve-devel] [PATCH storage 1/3] diskmanage: add mounted_paths
  2022-07-14 11:13   ` Dominik Csapak
@ 2022-07-14 11:37     ` Aaron Lauterer
  0 siblings, 0 replies; 10+ messages in thread
From: Aaron Lauterer @ 2022-07-14 11:37 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion



On 7/14/22 13:13, Dominik Csapak wrote:
> comment inline
> 
> On 7/13/22 12:47, Aaron Lauterer wrote:
>> returns similar values as mounted_blockdevs, but uses the mounted path
>> as key and the blockdev path as value
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
>> ---
>> used for the Directory check in patch 2
>>
>>   PVE/Diskmanage.pm | 13 +++++++++++++
>>   1 file changed, 13 insertions(+)
>>
>> diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
>> index 8ed7a8b..c5c20de 100644
>> --- a/PVE/Diskmanage.pm
>> +++ b/PVE/Diskmanage.pm
>> @@ -499,6 +499,19 @@ sub mounted_blockdevs {
>>       return $mounted;
>>   }
>> +sub mounted_paths {
>> +    my $mounted = {};
>> +
>> +    my $mounts = PVE::ProcFSTools::parse_proc_mounts();
>> +
>> +    foreach my $mount (@$mounts) {
>> +    next if $mount->[0] !~ m|^/dev/|;
> 
> does it really make sense here to filter by /dev/ ?
> for 'mounted_blockdevs' it makes sense since we want to have
> the mounted 'block devices' but here we talk about 'paths'
> and /sys,/proc, etc are paths too, so you could simply omit that check

Good point.
> 
>> +    $mounted->{abs_path($mount->[1])} = $mount->[0];
>> +    };
>> +
>> +    return $mounted;
>> +}
>> +
>>   sub get_disks {
>>       my ($disks, $nosmart, $include_partitions) = @_;
>>       my $disklist = {};

* Re: [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use
  2022-07-14 11:13   ` Dominik Csapak
@ 2022-07-14 12:12     ` Fabian Ebner
  2022-07-14 12:30       ` Dominik Csapak
  0 siblings, 1 reply; 10+ messages in thread
From: Fabian Ebner @ 2022-07-14 12:12 UTC (permalink / raw)
  To: pve-devel, Aaron Lauterer

On 14.07.22 at 13:13, Dominik Csapak wrote:
> On 7/13/22 12:47, Aaron Lauterer wrote:
>> +    my $poollist = { map { $_->{name} => 1 } @{$pools} };
> 
> does that really make sense here? would it not be easier to just
> iterate? e.g.
> 
> ----
> for my $pool (@$pools) {
>     die "..." if $pool->{name} eq $name;
> }
> ----
> 
> (i admit, it's 1 line longer, but a bit more readable?)
> 
>> +    die "pool '${name}' already exists on node '$node'\n" if

Or just use grep ;)
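
i.e., something like:

----
die "pool '${name}' already exists on node '${node}'\n"
    if grep { $_->{name} eq $name } @$pools;
----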

* Re: [pve-devel] [PATCH storage 2/3] disks: die if storage name is already in use
  2022-07-14 12:12     ` Fabian Ebner
@ 2022-07-14 12:30       ` Dominik Csapak
  0 siblings, 0 replies; 10+ messages in thread
From: Dominik Csapak @ 2022-07-14 12:30 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Ebner, Aaron Lauterer



On 7/14/22 14:12, Fabian Ebner wrote:
> On 14.07.22 at 13:13, Dominik Csapak wrote:
>> On 7/13/22 12:47, Aaron Lauterer wrote:
>>> +    my $poollist = { map { $_->{name} => 1 } @{$pools} };
>>
>> does that really make sense here? would it not be easier to just
>> iterate? e.g.
>>
>> ----
>> for my $pool (@$pools) {
>>      die "..." if $pool->{name} eq $name;
>> }
>> ----
>>
>> (i admit, it's 1 line longer, but a bit more readable?)
>>
>>> +    die "pool '${name}' already exists on node '$node'\n" if
> 
> Or just use grep ;)
> 

even better :) (i always forget about perl's grep)
