* [pve-devel] [PATCH-SERIES storage/manager] disk-level storage removal for directory, LVM, LVM-thin, ZFS
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
So that disks can easily be repurposed when a storage is no longer
needed.
Dependency bump pve-manager -> pve-storage is needed.
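As a rough illustration of the resulting flow (node and storage names
below are made up), a disk-level removal can then be triggered e.g. via
pvesh, with the cleanup flags added by the later patches of the series:

    pvesh delete /nodes/node1/disks/lvm/myvg --cleanup-disks 1 --cleanup-config 1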
pve-storage:
Fabian Ebner (6):
LVM: add lvm_destroy_volume_group
api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs
api: list thin pools: add volume group to properties
diskmanage: add helper for udev workaround
api: disks: delete: add flag for wiping disks
api: disks: delete: add flag for cleaning up storage config
PVE/API2/Disks.pm | 7 +--
PVE/API2/Disks/Directory.pm | 96 +++++++++++++++++++++++++++++--
PVE/API2/Disks/LVM.pm | 84 +++++++++++++++++++++++++--
PVE/API2/Disks/LVMThin.pm | 98 ++++++++++++++++++++++++++++++--
PVE/API2/Disks/ZFS.pm | 107 +++++++++++++++++++++++++++++++++--
PVE/API2/Storage/Config.pm | 27 +++++++++
PVE/Diskmanage.pm | 11 ++++
PVE/Storage/LVMPlugin.pm | 11 ++++
PVE/Storage/LvmThinPlugin.pm | 1 +
9 files changed, 416 insertions(+), 26 deletions(-)
pve-manager:
Fabian Ebner (6):
ui: node: directory: use gettext for 'Directory'
ui: node: lvmthin: add volume group to columns
ui: utils: add task descriptions for disk removal
ui: node: add destroy menu for directory, lvm, lvmthin, zfs
ui: node: storage removal: add checkbox for cleaning up disks
ui: node: storage removal: add checkbox for cleaning up storage config
www/manager6/Makefile | 1 +
www/manager6/Utils.js | 3 +
www/manager6/node/Directory.js | 83 ++++++++++++++++++-
www/manager6/node/LVM.js | 82 +++++++++++++++++++
www/manager6/node/LVMThin.js | 97 +++++++++++++++++++++++
www/manager6/node/ZFS.js | 80 ++++++++++++++++++-
www/manager6/window/SafeDestroyStorage.js | 39 +++++++++
7 files changed, 382 insertions(+), 3 deletions(-)
create mode 100644 www/manager6/window/SafeDestroyStorage.js
--
2.30.2
* [pve-devel] [PATCH storage 1/6] LVM: add lvm_destroy_volume_group
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
PVE/Storage/LVMPlugin.pm | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/PVE/Storage/LVMPlugin.pm b/PVE/Storage/LVMPlugin.pm
index 139d391..9743ee8 100644
--- a/PVE/Storage/LVMPlugin.pm
+++ b/PVE/Storage/LVMPlugin.pm
@@ -97,6 +97,17 @@ sub lvm_create_volume_group {
run_command($cmd, errmsg => "vgcreate $vgname $device error", errfunc => $ignore_no_medium_warnings, outfunc => $ignore_no_medium_warnings);
}
+sub lvm_destroy_volume_group {
+ my ($vgname) = @_;
+
+ run_command(
+ ['vgremove', '-y', $vgname],
+ errmsg => "unable to remove volume group $vgname",
+ errfunc => $ignore_no_medium_warnings,
+ outfunc => $ignore_no_medium_warnings,
+ );
+}
+
sub lvm_vgs {
my ($includepvs) = @_;
--
2.30.2
* [pve-devel] [PATCH storage 2/6] api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
PVE/API2/Disks/Directory.pm | 44 +++++++++++++++++++++++++++++++++++++
PVE/API2/Disks/LVM.pm | 35 +++++++++++++++++++++++++++++
PVE/API2/Disks/LVMThin.pm | 42 +++++++++++++++++++++++++++++++++++
PVE/API2/Disks/ZFS.pm | 40 +++++++++++++++++++++++++++++++++
4 files changed, 161 insertions(+)
diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index 3a90a2e..36cebbc 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -3,6 +3,8 @@ package PVE::API2::Disks::Directory;
use strict;
use warnings;
+use POSIX;
+
use PVE::Diskmanage;
use PVE::JSONSchema qw(get_standard_option);
use PVE::RESTHandler;
@@ -301,4 +303,46 @@ __PACKAGE__->register_method ({
return $rpcenv->fork_worker('dircreate', $name, $user, $worker);
}});
+__PACKAGE__->register_method ({
+ name => 'delete',
+ path => '{name}',
+ method => 'DELETE',
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', ['Sys.Modify', 'Datastore.Allocate']],
+ },
+ description => "Unmounts the storage and removes the mount unit.",
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ name => get_standard_option('pve-storage-id'),
+ },
+ },
+ returns => { type => 'string' },
+ code => sub {
+ my ($param) = @_;
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $user = $rpcenv->get_user();
+
+ my $name = $param->{name};
+
+ my $worker = sub {
+ my $path = "/mnt/pve/$name";
+ my $mountunitname = PVE::Systemd::escape_unit($path, 1) . ".mount";
+ my $mountunitpath = "/etc/systemd/system/$mountunitname";
+
+ PVE::Diskmanage::locked_disk_action(sub {
+ run_command(['systemctl', 'stop', $mountunitname]);
+ run_command(['systemctl', 'disable', $mountunitname]);
+
+ unlink $mountunitpath or $! == ENOENT or die "cannot remove $mountunitpath - $!\n";
+ });
+ };
+
+ return $rpcenv->fork_worker('dirremove', $name, $user, $worker);
+ }});
+
1;
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index 885e02b..ee9e282 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -187,4 +187,39 @@ __PACKAGE__->register_method ({
return $rpcenv->fork_worker('lvmcreate', $name, $user, $worker);
}});
+__PACKAGE__->register_method ({
+ name => 'delete',
+ path => '{name}',
+ method => 'DELETE',
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', ['Sys.Modify', 'Datastore.Allocate']],
+ },
+ description => "Remove an LVM Volume Group.",
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ name => get_standard_option('pve-storage-id'),
+ },
+ },
+ returns => { type => 'string' },
+ code => sub {
+ my ($param) = @_;
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $user = $rpcenv->get_user();
+
+ my $name = $param->{name};
+
+ my $worker = sub {
+ PVE::Diskmanage::locked_disk_action(sub {
+ PVE::Storage::LVMPlugin::lvm_destroy_volume_group($name);
+ });
+ };
+
+ return $rpcenv->fork_worker('lvmremove', $name, $user, $worker);
+ }});
+
1;
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 83ebc46..6c0a458 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -161,4 +161,46 @@ __PACKAGE__->register_method ({
return $rpcenv->fork_worker('lvmthincreate', $name, $user, $worker);
}});
+__PACKAGE__->register_method ({
+ name => 'delete',
+ path => '{name}',
+ method => 'DELETE',
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', ['Sys.Modify', 'Datastore.Allocate']],
+ },
+ description => "Remove an LVM thin pool.",
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ name => get_standard_option('pve-storage-id'),
+ 'volume-group' => get_standard_option('pve-storage-id'),
+ },
+ },
+ returns => { type => 'string' },
+ code => sub {
+ my ($param) = @_;
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $user = $rpcenv->get_user();
+
+ my $vg = $param->{'volume-group'};
+ my $lv = $param->{name};
+
+ my $worker = sub {
+ PVE::Diskmanage::locked_disk_action(sub {
+ my $thinpools = PVE::Storage::LvmThinPlugin::list_thinpools();
+
+ die "no such thin pool ${vg}/${lv}\n"
+ if !grep { $_->{lv} eq $lv && $_->{vg} eq $vg } $thinpools->@*;
+
+ run_command(['lvremove', '-y', "${vg}/${lv}"]);
+ });
+ };
+
+ return $rpcenv->fork_worker('lvmthinremove', "${vg}-${lv}", $user, $worker);
+ }});
+
1;
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index 7f96bb7..e8d5e7c 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -449,4 +449,44 @@ __PACKAGE__->register_method ({
return $rpcenv->fork_worker('zfscreate', $name, $user, $worker);
}});
+__PACKAGE__->register_method ({
+ name => 'delete',
+ path => '{name}',
+ method => 'DELETE',
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', ['Sys.Modify', 'Datastore.Allocate']],
+ },
+ description => "Destroy a ZFS pool.",
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ name => get_standard_option('pve-storage-id'),
+ },
+ },
+ returns => { type => 'string' },
+ code => sub {
+ my ($param) = @_;
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $user = $rpcenv->get_user();
+
+ my $name = $param->{name};
+
+ my $worker = sub {
+ PVE::Diskmanage::locked_disk_action(sub {
+ if (-e '/lib/systemd/system/zfs-import@.service') {
+ my $importunit = 'zfs-import@' . PVE::Systemd::escape_unit($name) . '.service';
+ run_command(['systemctl', 'disable', $importunit]);
+ }
+
+ run_command(['zpool', 'destroy', $name]);
+ });
+ };
+
+ return $rpcenv->fork_worker('zfsremove', $name, $user, $worker);
+ }});
+
1;
--
2.30.2
* [pve-devel] [PATCH storage 3/6] api: list thin pools: add volume group to properties
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
So that DELETE can be called using only information from GET.
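For example (made-up names), a client can then chain the two calls
without any additional lookup:

    pvesh get /nodes/node1/disks/lvmthin
    # each entry now contains both 'lv' and 'vg'
    pvesh delete /nodes/node1/disks/lvmthin/data --volume-group pve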
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
PVE/API2/Disks/LVMThin.pm | 4 ++++
PVE/Storage/LvmThinPlugin.pm | 1 +
2 files changed, 5 insertions(+)
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 6c0a458..52f3062 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -40,6 +40,10 @@ __PACKAGE__->register_method ({
type => 'string',
description => 'The name of the thinpool.',
},
+ vg => {
+ type => 'string',
+ description => 'The associated volume group.',
+ },
lv_size => {
type => 'integer',
description => 'The size of the thinpool in bytes.',
diff --git a/PVE/Storage/LvmThinPlugin.pm b/PVE/Storage/LvmThinPlugin.pm
index 4ba6f90..e5ffe44 100644
--- a/PVE/Storage/LvmThinPlugin.pm
+++ b/PVE/Storage/LvmThinPlugin.pm
@@ -184,6 +184,7 @@ sub list_thinpools {
next if $lvs->{$vg}->{$lvname}->{lv_type} ne 't';
my $lv = $lvs->{$vg}->{$lvname};
$lv->{lv} = $lvname;
+ $lv->{vg} = $vg;
push @$thinpools, $lv;
}
}
--
2.30.2
* [pve-devel] [PATCH storage 4/6] diskmanage: add helper for udev workaround
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
to avoid duplication. Current callers always pass at least one device,
but anticipate future callers that might pass an empty list: do nothing
in that case, rather than triggering all devices.
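A minimal sketch of such a hypothetical future caller (%touched and
@devices are made up):

    # only trigger devices that were actually modified - may be empty
    my @changed = grep { $touched{$_} } @devices;
    PVE::Diskmanage::udevadm_trigger(@changed); # no-op for the empty list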
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
PVE/API2/Disks.pm | 7 +------
PVE/API2/Disks/Directory.pm | 6 +-----
PVE/API2/Disks/LVM.pm | 6 +-----
PVE/API2/Disks/LVMThin.pm | 6 +-----
PVE/API2/Disks/ZFS.pm | 6 +-----
PVE/Diskmanage.pm | 11 +++++++++++
6 files changed, 16 insertions(+), 26 deletions(-)
diff --git a/PVE/API2/Disks.pm b/PVE/API2/Disks.pm
index 25c9ded..b618057 100644
--- a/PVE/API2/Disks.pm
+++ b/PVE/API2/Disks.pm
@@ -306,12 +306,7 @@ __PACKAGE__->register_method ({
my $worker = sub {
PVE::Diskmanage::wipe_blockdev($disk);
-
- # FIXME: Remove once we depend on systemd >= v249.
- # Work around udev bug https://github.com/systemd/systemd/issues/18525 to ensure the
- # udev database is updated.
- eval { run_command(['udevadm', 'trigger', $disk]); };
- warn $@ if $@;
+ PVE::Diskmanage::udevadm_trigger($disk);
};
my $basename = basename($disk); # avoid '/' in the ID
diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index 36cebbc..e9b05be 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -275,11 +275,7 @@ __PACKAGE__->register_method ({
$write_ini->($ini, $mountunitpath);
- # FIXME: Remove once we depend on systemd >= v249.
- # Work around udev bug https://github.com/systemd/systemd/issues/18525 to ensure the
- # udev database is updated and the $uuid_path symlink is actually created!
- eval { run_command(['udevadm', 'trigger', $part]); };
- warn $@ if $@;
+ PVE::Diskmanage::udevadm_trigger($part);
run_command(['systemctl', 'daemon-reload']);
run_command(['systemctl', 'enable', $mountunitname]);
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index ee9e282..1b88af2 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -163,11 +163,7 @@ __PACKAGE__->register_method ({
PVE::Storage::LVMPlugin::lvm_create_volume_group($dev, $name);
- # FIXME: Remove once we depend on systemd >= v249.
- # Work around udev bug https://github.com/systemd/systemd/issues/18525 to ensure the
- # udev database is updated.
- eval { run_command(['udevadm', 'trigger', $dev]); };
- warn $@ if $@;
+ PVE::Diskmanage::udevadm_trigger($dev);
if ($param->{add_storage}) {
my $storage_params = {
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 52f3062..23f262a 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -141,11 +141,7 @@ __PACKAGE__->register_method ({
$name
]);
- # FIXME: Remove once we depend on systemd >= v249.
- # Work around udev bug https://github.com/systemd/systemd/issues/18525 to ensure the
- # udev database is updated.
- eval { run_command(['udevadm', 'trigger', $dev]); };
- warn $@ if $@;
+ PVE::Diskmanage::udevadm_trigger($dev);
if ($param->{add_storage}) {
my $storage_params = {
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index e8d5e7c..e892712 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -426,11 +426,7 @@ __PACKAGE__->register_method ({
run_command($cmd);
}
- # FIXME: Remove once we depend on systemd >= v249.
- # Work around udev bug https://github.com/systemd/systemd/issues/18525 to ensure the
- # udev database is updated.
- eval { run_command(['udevadm', 'trigger', $devs->@*]); };
- warn $@ if $@;
+ PVE::Diskmanage::udevadm_trigger($devs->@*);
if ($param->{add_storage}) {
my $storage_params = {
diff --git a/PVE/Diskmanage.pm b/PVE/Diskmanage.pm
index 18459f9..d67cc6b 100644
--- a/PVE/Diskmanage.pm
+++ b/PVE/Diskmanage.pm
@@ -966,4 +966,15 @@ sub wipe_blockdev {
}
}
+# FIXME: Remove once we depend on systemd >= v249.
+# Work around udev bug https://github.com/systemd/systemd/issues/18525 to ensure the database is updated.
+sub udevadm_trigger {
+ my @devs = @_;
+
+ return if scalar(@devs) == 0;
+
+ eval { run_command(['udevadm', 'trigger', @devs]); };
+ warn $@ if $@;
+}
+
1;
--
2.30.2
* [pve-devel] [PATCH storage 5/6] api: disks: delete: add flag for wiping disks
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
For ZFS and directory storages, wipe the whole disk, rather than just
the partition, when the partition layout matches the usual one, to
avoid left-overs.
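E.g. for ZFS on a whole disk (illustrative 'sdb' naming), the usual
layout is the data partition plus the small ninth partition that ZFS
reserves:

    sdb1  zfs member (data)
    sdb9  'ZFS reserved'

Only if get_disks() reports exactly the parent disk and these
partitions is the whole disk wiped instead of the data partition alone.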
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
PVE/API2/Disks/Directory.pm | 26 +++++++++++++++++++++++
PVE/API2/Disks/LVM.pm | 23 +++++++++++++++++++++
PVE/API2/Disks/LVMThin.pm | 25 ++++++++++++++++++++++
PVE/API2/Disks/ZFS.pm | 41 +++++++++++++++++++++++++++++++++++++
4 files changed, 115 insertions(+)
diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index e9b05be..c9dcb52 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -314,6 +314,12 @@ __PACKAGE__->register_method ({
properties => {
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
+ 'cleanup-disks' => {
+ description => "Also wipe disk so it can be repurposed afterwards.",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
},
},
returns => { type => 'string' },
@@ -331,10 +337,30 @@ __PACKAGE__->register_method ({
my $mountunitpath = "/etc/systemd/system/$mountunitname";
PVE::Diskmanage::locked_disk_action(sub {
+ my $to_wipe;
+ if ($param->{'cleanup-disks'}) {
+ my $unit = $read_ini->($mountunitpath);
+
+ my $dev = PVE::Diskmanage::verify_blockdev_path($unit->{'Mount'}->{'What'});
+ $to_wipe = $dev;
+
+ # clean up whole device if this is the only partition
+ $dev =~ s|^/dev/||;
+ my $info = PVE::Diskmanage::get_disks($dev, 1, 1);
+ die "unable to obtain information for disk '$dev'\n" if !$info->{$dev};
+ $to_wipe = $info->{$dev}->{parent}
+ if $info->{$dev}->{parent} && scalar(keys $info->%*) == 2;
+ }
+
run_command(['systemctl', 'stop', $mountunitname]);
run_command(['systemctl', 'disable', $mountunitname]);
unlink $mountunitpath or $! == ENOENT or die "cannot remove $mountunitpath - $!\n";
+
+ if ($to_wipe) {
+ PVE::Diskmanage::wipe_blockdev($to_wipe);
+ PVE::Diskmanage::udevadm_trigger($to_wipe);
+ }
});
};
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index 1b88af2..1af3d43 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -198,6 +198,12 @@ __PACKAGE__->register_method ({
properties => {
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
+ 'cleanup-disks' => {
+ description => "Also wipe disks so they can be repurposed afterwards.",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
},
},
returns => { type => 'string' },
@@ -211,7 +217,24 @@ __PACKAGE__->register_method ({
my $worker = sub {
PVE::Diskmanage::locked_disk_action(sub {
+ my $vgs = PVE::Storage::LVMPlugin::lvm_vgs(1);
+ die "no such volume group '$name'\n" if !$vgs->{$name};
+
PVE::Storage::LVMPlugin::lvm_destroy_volume_group($name);
+
+ if ($param->{'cleanup-disks'}) {
+ my $wiped = [];
+ eval {
+ for my $pv ($vgs->{$name}->{pvs}->@*) {
+ my $dev = PVE::Diskmanage::verify_blockdev_path($pv->{name});
+ PVE::Diskmanage::wipe_blockdev($dev);
+ push $wiped->@*, $dev;
+ }
+ };
+ my $err = $@;
+ PVE::Diskmanage::udevadm_trigger($wiped->@*);
+ die "cleanup failed - $err" if $err;
+ }
});
};
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index 23f262a..ea36ce2 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -177,6 +177,12 @@ __PACKAGE__->register_method ({
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
'volume-group' => get_standard_option('pve-storage-id'),
+ 'cleanup-disks' => {
+ description => "Also wipe disks so they can be repurposed afterwards.",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
},
},
returns => { type => 'string' },
@@ -197,6 +203,25 @@ __PACKAGE__->register_method ({
if !grep { $_->{lv} eq $lv && $_->{vg} eq $vg } $thinpools->@*;
run_command(['lvremove', '-y', "${vg}/${lv}"]);
+
+ if ($param->{'cleanup-disks'}) {
+ my $vgs = PVE::Storage::LVMPlugin::lvm_vgs(1);
+
+ die "no such volume group '$vg'\n" if !$vgs->{$vg};
+ die "volume group '$vg' still in use\n" if $vgs->{$vg}->{lvcount} > 0;
+
+ my $wiped = [];
+ eval {
+ for my $pv ($vgs->{$vg}->{pvs}->@*) {
+ my $dev = PVE::Diskmanage::verify_blockdev_path($pv->{name});
+ PVE::Diskmanage::wipe_blockdev($dev);
+ push $wiped->@*, $dev;
+ }
+ };
+ my $err = $@;
+ PVE::Diskmanage::udevadm_trigger($wiped->@*);
+ die "cleanup failed - $err" if $err;
+ }
});
};
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index e892712..10b73a5 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -460,6 +460,12 @@ __PACKAGE__->register_method ({
properties => {
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
+ 'cleanup-disks' => {
+ description => "Also wipe disks so they can be repurposed afterwards.",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
},
},
returns => { type => 'string' },
@@ -473,12 +479,47 @@ __PACKAGE__->register_method ({
my $worker = sub {
PVE::Diskmanage::locked_disk_action(sub {
+ my $to_wipe = [];
+ if ($param->{'cleanup-disks'}) {
+ # Note: even with '-o name', the output contains more than just the name when -v is used.
+ run_command(['zpool', 'list', '-vHPL', $name], outfunc => sub {
+ my ($line) = @_;
+
+ my ($name) = PVE::Tools::split_list($line);
+ return if $name !~ m|^/dev/.+|;
+
+ my $dev = PVE::Diskmanage::verify_blockdev_path($name);
+ my $wipe = $dev;
+
+ $dev =~ s|^/dev/||;
+ my $info = PVE::Diskmanage::get_disks($dev, 1, 1);
+ die "unable to obtain information for disk '$dev'\n" if !$info->{$dev};
+
+ # Wipe whole disk if usual ZFS layout with partition 9 as ZFS reserved.
+ my $parent = $info->{$dev}->{parent};
+ if ($parent && scalar(keys $info->%*) == 3) {
+ $parent =~ s|^/dev/||;
+ my $info9 = $info->{"${parent}9"};
+
+ $wipe = $info->{$dev}->{parent} # need leading /dev/
+ if $info9 && $info9->{used} && $info9->{used} =~ m/^ZFS reserved/;
+ }
+
+ push $to_wipe->@*, $wipe;
+ });
+ }
+
if (-e '/lib/systemd/system/zfs-import@.service') {
my $importunit = 'zfs-import@' . PVE::Systemd::escape_unit($name) . '.service';
run_command(['systemctl', 'disable', $importunit]);
}
run_command(['zpool', 'destroy', $name]);
+
+ eval { PVE::Diskmanage::wipe_blockdev($_) for $to_wipe->@*; };
+ my $err = $@;
+ PVE::Diskmanage::udevadm_trigger($to_wipe->@*);
+ die "cleanup failed - $err" if $err;
});
};
--
2.30.2
* [pve-devel] [PATCH storage 6/6] api: disks: delete: add flag for cleaning up storage config
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
Update node restrictions to reflect that the storage is not available
anymore on the particular node. If the storage was only configured for
that node, remove it altogether.
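Illustrated on a made-up storage.cfg entry, cleaning up on node1 turns

    dir: mydir
        path /mnt/pve/mydir
        nodes node1,node2

into the same entry with 'nodes node2' only, while an entry restricted
to just node1 is removed from the configuration entirely. Entries
without a node restriction count as configured on all cluster nodes.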
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
PVE/API2/Disks/Directory.pm | 20 ++++++++++++++++++++
PVE/API2/Disks/LVM.pm | 20 ++++++++++++++++++++
PVE/API2/Disks/LVMThin.pm | 21 +++++++++++++++++++++
PVE/API2/Disks/ZFS.pm | 20 ++++++++++++++++++++
PVE/API2/Storage/Config.pm | 27 +++++++++++++++++++++++++++
5 files changed, 108 insertions(+)
diff --git a/PVE/API2/Disks/Directory.pm b/PVE/API2/Disks/Directory.pm
index c9dcb52..df63ba9 100644
--- a/PVE/API2/Disks/Directory.pm
+++ b/PVE/API2/Disks/Directory.pm
@@ -314,6 +314,13 @@ __PACKAGE__->register_method ({
properties => {
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
+ 'cleanup-config' => {
+ description => "Marks associated storage(s) as not available on this node anymore ".
+ "or removes them from the configuration (if configured for this node only).",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
'cleanup-disks' => {
description => "Also wipe disk so it can be repurposed afterwards.",
type => 'boolean',
@@ -330,6 +337,7 @@ __PACKAGE__->register_method ({
my $user = $rpcenv->get_user();
my $name = $param->{name};
+ my $node = $param->{node};
my $worker = sub {
my $path = "/mnt/pve/$name";
@@ -357,10 +365,22 @@ __PACKAGE__->register_method ({
unlink $mountunitpath or $! == ENOENT or die "cannot remove $mountunitpath - $!\n";
+ my $config_err;
+ if ($param->{'cleanup-config'}) {
+ my $match = sub {
+ my ($scfg) = @_;
+ return $scfg->{type} eq 'dir' && $scfg->{path} eq $path;
+ };
+ eval { PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node); };
+ warn $config_err = $@ if $@;
+ }
+
if ($to_wipe) {
PVE::Diskmanage::wipe_blockdev($to_wipe);
PVE::Diskmanage::udevadm_trigger($to_wipe);
}
+
+ die "config cleanup failed - $config_err" if $config_err;
});
};
diff --git a/PVE/API2/Disks/LVM.pm b/PVE/API2/Disks/LVM.pm
index 1af3d43..6e4331a 100644
--- a/PVE/API2/Disks/LVM.pm
+++ b/PVE/API2/Disks/LVM.pm
@@ -198,6 +198,13 @@ __PACKAGE__->register_method ({
properties => {
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
+ 'cleanup-config' => {
+ description => "Marks associated storage(s) as not available on this node anymore ".
+ "or removes them from the configuration (if configured for this node only).",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
'cleanup-disks' => {
description => "Also wipe disks so they can be repurposed afterwards.",
type => 'boolean',
@@ -214,6 +221,7 @@ __PACKAGE__->register_method ({
my $user = $rpcenv->get_user();
my $name = $param->{name};
+ my $node = $param->{node};
my $worker = sub {
PVE::Diskmanage::locked_disk_action(sub {
@@ -222,6 +230,16 @@ __PACKAGE__->register_method ({
PVE::Storage::LVMPlugin::lvm_destroy_volume_group($name);
+ my $config_err;
+ if ($param->{'cleanup-config'}) {
+ my $match = sub {
+ my ($scfg) = @_;
+ return $scfg->{type} eq 'lvm' && $scfg->{vgname} eq $name;
+ };
+ eval { PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node); };
+ warn $config_err = $@ if $@;
+ }
+
if ($param->{'cleanup-disks'}) {
my $wiped = [];
eval {
@@ -235,6 +253,8 @@ __PACKAGE__->register_method ({
PVE::Diskmanage::udevadm_trigger($wiped->@*);
die "cleanup failed - $err" if $err;
}
+
+ die "config cleanup failed - $config_err" if $config_err;
});
};
diff --git a/PVE/API2/Disks/LVMThin.pm b/PVE/API2/Disks/LVMThin.pm
index ea36ce2..a82ab15 100644
--- a/PVE/API2/Disks/LVMThin.pm
+++ b/PVE/API2/Disks/LVMThin.pm
@@ -177,6 +177,13 @@ __PACKAGE__->register_method ({
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
'volume-group' => get_standard_option('pve-storage-id'),
+ 'cleanup-config' => {
+ description => "Marks associated storage(s) as not available on this node anymore ".
+ "or removes them from the configuration (if configured for this node only).",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
'cleanup-disks' => {
description => "Also wipe disks so they can be repurposed afterwards.",
type => 'boolean',
@@ -194,6 +201,7 @@ __PACKAGE__->register_method ({
my $vg = $param->{'volume-group'};
my $lv = $param->{name};
+ my $node = $param->{node};
my $worker = sub {
PVE::Diskmanage::locked_disk_action(sub {
@@ -204,6 +212,17 @@ __PACKAGE__->register_method ({
run_command(['lvremove', '-y', "${vg}/${lv}"]);
+ my $config_err;
+ if ($param->{'cleanup-config'}) {
+ my $match = sub {
+ my ($scfg) = @_;
+ return if $scfg->{type} ne 'lvmthin';
+ return $scfg->{vgname} eq $vg && $scfg->{thinpool} eq $lv;
+ };
+ eval { PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node); };
+ warn $config_err = $@ if $@;
+ }
+
if ($param->{'cleanup-disks'}) {
my $vgs = PVE::Storage::LVMPlugin::lvm_vgs(1);
@@ -222,6 +241,8 @@ __PACKAGE__->register_method ({
PVE::Diskmanage::udevadm_trigger($wiped->@*);
die "cleanup failed - $err" if $err;
}
+
+ die "config cleanup failed - $config_err" if $config_err;
});
};
diff --git a/PVE/API2/Disks/ZFS.pm b/PVE/API2/Disks/ZFS.pm
index 10b73a5..63bc435 100644
--- a/PVE/API2/Disks/ZFS.pm
+++ b/PVE/API2/Disks/ZFS.pm
@@ -460,6 +460,13 @@ __PACKAGE__->register_method ({
properties => {
node => get_standard_option('pve-node'),
name => get_standard_option('pve-storage-id'),
+ 'cleanup-config' => {
+ description => "Marks associated storage(s) as not available on this node anymore ".
+ "or removes them from the configuration (if configured for this node only).",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
'cleanup-disks' => {
description => "Also wipe disks so they can be repurposed afterwards.",
type => 'boolean',
@@ -476,6 +483,7 @@ __PACKAGE__->register_method ({
my $user = $rpcenv->get_user();
my $name = $param->{name};
+ my $node = $param->{node};
my $worker = sub {
PVE::Diskmanage::locked_disk_action(sub {
@@ -516,10 +524,22 @@ __PACKAGE__->register_method ({
run_command(['zpool', 'destroy', $name]);
+ my $config_err;
+ if ($param->{'cleanup-config'}) {
+ my $match = sub {
+ my ($scfg) = @_;
+ return $scfg->{type} eq 'zfspool' && $scfg->{pool} eq $name;
+ };
+ eval { PVE::API2::Storage::Config->cleanup_storages_for_node($match, $node); };
+ warn $config_err = $@ if $@;
+ }
+
eval { PVE::Diskmanage::wipe_blockdev($_) for $to_wipe->@*; };
my $err = $@;
PVE::Diskmanage::udevadm_trigger($to_wipe->@*);
die "cleanup failed - $err" if $err;
+
+ die "config cleanup failed - $config_err" if $config_err;
});
};
diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index bf38df3..6bd770e 100755
--- a/PVE/API2/Storage/Config.pm
+++ b/PVE/API2/Storage/Config.pm
@@ -38,6 +38,33 @@ my $api_storage_config = sub {
return $scfg;
};
+# For storages that $match->($scfg), update node restrictions to not include $node anymore and
+# in case no node remains, remove the storage altogether.
+sub cleanup_storages_for_node {
+ my ($self, $match, $node) = @_;
+
+ my $config = PVE::Storage::config();
+ my $cluster_nodes = PVE::Cluster::get_nodelist();
+
+ for my $storeid (keys $config->{ids}->%*) {
+ my $scfg = PVE::Storage::storage_config($config, $storeid);
+ next if !$match->($scfg);
+
+ my $nodes = $scfg->{nodes} || { map { $_ => 1 } $cluster_nodes->@* };
+ next if !$nodes->{$node}; # not configured on $node, so nothing to do
+ delete $nodes->{$node};
+
+ if (scalar(keys $nodes->%*) > 0) {
+ $self->update({
+ nodes => join(',', sort keys $nodes->%*),
+ storage => $storeid,
+ });
+ } else {
+ $self->delete({storage => $storeid});
+ }
+ }
+}
+
__PACKAGE__->register_method ({
name => 'index',
path => '',
--
2.30.2
* [pve-devel] [PATCH manager 1/6] ui: node: directory: use gettext for 'Directory'
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
It is also localised elsewhere, for example when adding a storage.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
www/manager6/node/Directory.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/node/Directory.js b/www/manager6/node/Directory.js
index 50c8d677..67f56870 100644
--- a/www/manager6/node/Directory.js
+++ b/www/manager6/node/Directory.js
@@ -105,7 +105,7 @@ Ext.define('PVE.node.Directorylist', {
},
},
{
- text: gettext('Create') + ': Directory',
+ text: `${gettext('Create')}: ${gettext('Directory')}`,
handler: function() {
let view = this.up('panel');
Ext.create('PVE.node.CreateDirectory', {
--
2.30.2
* [pve-devel] [PATCH manager 2/6] ui: node: lvmthin: add volume group to columns
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
to be able to identify thin pools even if they have the same name in
different volume groups.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
www/manager6/node/LVMThin.js | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/www/manager6/node/LVMThin.js b/www/manager6/node/LVMThin.js
index db9ea249..4a0b21ba 100644
--- a/www/manager6/node/LVMThin.js
+++ b/www/manager6/node/LVMThin.js
@@ -64,6 +64,11 @@ Ext.define('PVE.node.LVMThinList', {
dataIndex: 'lv',
flex: 1,
},
+ {
+ header: 'Volume Group',
+ width: 110,
+ dataIndex: 'vg',
+ },
{
header: gettext('Usage'),
width: 110,
--
2.30.2
* [pve-devel] [PATCH manager 3/6] ui: utils: add task descriptions for disk removal
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
For 'dirremove', it already exists.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
www/manager6/Utils.js | 3 +++
1 file changed, 3 insertions(+)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index ee92cd43..e4be46c7 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1845,7 +1845,9 @@ Ext.define('PVE.Utils', {
imgcopy: ['', gettext('Copy data')],
imgdel: ['', gettext('Erase data')],
lvmcreate: [gettext('LVM Storage'), gettext('Create')],
+ lvmremove: ['Volume Group', gettext('Remove')],
lvmthincreate: [gettext('LVM-Thin Storage'), gettext('Create')],
+ lvmthinremove: ['Thinpool', gettext('Remove')],
migrateall: ['', gettext('Migrate all VMs and Containers')],
'move_volume': ['CT', gettext('Move Volume')],
'pbs-download': ['VM/CT', gettext('File Restore Download')],
@@ -1897,6 +1899,7 @@ Ext.define('PVE.Utils', {
vztemplate: ['CT', gettext('Convert to template')],
vzumount: ['CT', gettext('Unmount')],
zfscreate: [gettext('ZFS Storage'), gettext('Create')],
+ zfsremove: ['ZFS Pool', gettext('Remove')],
});
},
--
2.30.2
* [pve-devel] [PATCH manager 4/6] ui: node: add destroy menu for directory, lvm, lvmthin, zfs
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
Dependency bump for pve-storage needed.
www/manager6/node/Directory.js | 82 ++++++++++++++++++++++++++++++
www/manager6/node/LVM.js | 83 ++++++++++++++++++++++++++++++
www/manager6/node/LVMThin.js | 93 ++++++++++++++++++++++++++++++++++
www/manager6/node/ZFS.js | 81 ++++++++++++++++++++++++++++-
4 files changed, 337 insertions(+), 2 deletions(-)
diff --git a/www/manager6/node/Directory.js b/www/manager6/node/Directory.js
index 67f56870..c3dba2ef 100644
--- a/www/manager6/node/Directory.js
+++ b/www/manager6/node/Directory.js
@@ -63,6 +63,43 @@ Ext.define('PVE.node.Directorylist', {
extend: 'Ext.grid.Panel',
xtype: 'pveDirectoryList',
+ viewModel: {
+ data: {
+ path: '',
+ },
+ formulas: {
+ dirName: (get) => get('path')?.replace('/mnt/pve/', '') || undefined,
+ },
+ },
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ destroyDirectory: function() {
+ let me = this;
+ let vm = me.getViewModel();
+ let view = me.getView();
+
+ const dirName = vm.get('dirName');
+
+ if (!view.nodename) {
+ throw "no node name specified";
+ }
+
+ if (!dirName) {
+ throw "no directory name specified";
+ }
+
+ Ext.create('Proxmox.window.SafeDestroy', {
+ url: `/nodes/${view.nodename}/disks/directory/${dirName}`,
+ item: { id: dirName },
+ showProgress: true,
+ taskName: 'dirremove',
+ taskDone: () => { view.reload(); },
+ }).show();
+ },
+ },
+
stateful: true,
stateId: 'grid-node-directory',
columns: [
@@ -117,6 +154,45 @@ Ext.define('PVE.node.Directorylist', {
});
},
},
+ '->',
+ {
+ xtype: 'tbtext',
+ data: {
+ dirName: undefined,
+ },
+ bind: {
+ data: {
+ dirName: "{dirName}",
+ },
+ },
+ tpl: [
+ '<tpl if="dirName">',
+ gettext('Directory') + ' {dirName}:',
+ '<tpl else>',
+ Ext.String.format(gettext('No {0} selected'), gettext('directory')),
+ '</tpl>',
+ ],
+ },
+ {
+ text: gettext('More'),
+ iconCls: 'fa fa-bars',
+ disabled: true,
+ bind: {
+ disabled: '{!dirName}',
+ },
+ menu: [
+ {
+ text: gettext('Destroy'),
+ itemId: 'remove',
+ iconCls: 'fa fa-fw fa-trash-o',
+ handler: 'destroyDirectory',
+ disabled: true,
+ bind: {
+ disabled: '{!dirName}',
+ },
+ },
+ ],
+ },
],
reload: function() {
@@ -129,6 +205,12 @@ Ext.define('PVE.node.Directorylist', {
activate: function() {
this.reload();
},
+ selectionchange: function(model, selected) {
+ let me = this;
+ let vm = me.getViewModel();
+
+ vm.set('path', selected[0]?.data.path || '');
+ },
},
initComponent: function() {
diff --git a/www/manager6/node/LVM.js b/www/manager6/node/LVM.js
index 4b5225d8..70ddf451 100644
--- a/www/manager6/node/LVM.js
+++ b/www/manager6/node/LVM.js
@@ -52,6 +52,40 @@ Ext.define('PVE.node.LVMList', {
extend: 'Ext.tree.Panel',
xtype: 'pveLVMList',
+ viewModel: {
+ data: {
+ volumeGroup: '',
+ },
+ },
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ destroyVolumeGroup: function() {
+ let me = this;
+ let vm = me.getViewModel();
+ let view = me.getView();
+
+ const volumeGroup = vm.get('volumeGroup');
+
+ if (!view.nodename) {
+ throw "no node name specified";
+ }
+
+ if (!volumeGroup) {
+ throw "no volume group specified";
+ }
+
+ Ext.create('Proxmox.window.SafeDestroy', {
+ url: `/nodes/${view.nodename}/disks/lvm/${volumeGroup}`,
+ item: { id: volumeGroup },
+ showProgress: true,
+ taskName: 'lvmremove',
+ taskDone: () => { view.reload(); },
+ }).show();
+ },
+ },
+
emptyText: gettext('No Volume Groups found'),
stateful: true,
@@ -120,6 +154,45 @@ Ext.define('PVE.node.LVMList', {
});
},
},
+ '->',
+ {
+ xtype: 'tbtext',
+ data: {
+ volumeGroup: undefined,
+ },
+ bind: {
+ data: {
+ volumeGroup: "{volumeGroup}",
+ },
+ },
+ tpl: [
+ '<tpl if="volumeGroup">',
+ 'Volume group {volumeGroup}:',
+ '<tpl else>',
+ Ext.String.format(gettext('No {0} selected'), 'volume group'),
+ '</tpl>',
+ ],
+ },
+ {
+ text: gettext('More'),
+ iconCls: 'fa fa-bars',
+ disabled: true,
+ bind: {
+ disabled: '{!volumeGroup}',
+ },
+ menu: [
+ {
+ text: gettext('Destroy'),
+ itemId: 'remove',
+ iconCls: 'fa fa-fw fa-trash-o',
+ handler: 'destroyVolumeGroup',
+ disabled: true,
+ bind: {
+ disabled: '{!volumeGroup}',
+ },
+ },
+ ],
+ },
],
reload: function() {
@@ -142,6 +215,16 @@ Ext.define('PVE.node.LVMList', {
activate: function() {
this.reload();
},
+ selectionchange: function(model, selected) {
+ let me = this;
+ let vm = me.getViewModel();
+
+ if (selected.length < 1 || selected[0].data.parentId !== 'root') {
+ vm.set('volumeGroup', '');
+ } else {
+ vm.set('volumeGroup', selected[0].data.name);
+ }
+ },
},
selModel: 'treemodel',
diff --git a/www/manager6/node/LVMThin.js b/www/manager6/node/LVMThin.js
index 4a0b21ba..ca32bb3b 100644
--- a/www/manager6/node/LVMThin.js
+++ b/www/manager6/node/LVMThin.js
@@ -50,6 +50,47 @@ Ext.define('PVE.node.LVMThinList', {
extend: 'Ext.grid.Panel',
xtype: 'pveLVMThinList',
+ viewModel: {
+ data: {
+ thinPool: '',
+ volumeGroup: '',
+ },
+ },
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ destroyThinPool: function() {
+ let me = this;
+ let vm = me.getViewModel();
+ let view = me.getView();
+
+ const thinPool = vm.get('thinPool');
+ const volumeGroup = vm.get('volumeGroup');
+
+ if (!view.nodename) {
+ throw "no node name specified";
+ }
+
+ if (!thinPool) {
+ throw "no thin pool specified";
+ }
+
+ if (!volumeGroup) {
+ throw "no volume group specified";
+ }
+
+ Ext.create('Proxmox.window.SafeDestroy', {
+ url: `/nodes/${view.nodename}/disks/lvmthin/${thinPool}`,
+ params: { 'volume-group': volumeGroup },
+ item: { id: `${volumeGroup}/${thinPool}` },
+ showProgress: true,
+ taskName: 'lvmthinremove',
+ taskDone: () => { view.reload(); },
+ }).show();
+ },
+ },
+
emptyText: gettext('No thinpools found'),
stateful: true,
@@ -142,6 +183,51 @@ Ext.define('PVE.node.LVMThinList', {
});
},
},
+ '->',
+ {
+ xtype: 'tbtext',
+ data: {
+ thinPool: undefined,
+ volumeGroup: undefined,
+ },
+ bind: {
+ data: {
+ thinPool: "{thinPool}",
+ volumeGroup: "{volumeGroup}",
+ },
+ },
+ tpl: [
+ '<tpl if="thinPool">',
+ '<tpl if="volumeGroup">',
+ 'Thinpool {volumeGroup}/{thinPool}:',
+ '<tpl else>', // volumeGroup
+ 'Missing volume group (node running old version?)',
+ '</tpl>',
+ '<tpl else>', // thinPool
+ Ext.String.format(gettext('No {0} selected'), 'thinpool'),
+ '</tpl>',
+ ],
+ },
+ {
+ text: gettext('More'),
+ iconCls: 'fa fa-bars',
+ disabled: true,
+ bind: {
+ disabled: '{!volumeGroup || !thinPool}',
+ },
+ menu: [
+ {
+ text: gettext('Destroy'),
+ itemId: 'remove',
+ iconCls: 'fa fa-fw fa-trash-o',
+ handler: 'destroyThinPool',
+ disabled: true,
+ bind: {
+ disabled: '{!volumeGroup || !thinPool}',
+ },
+ },
+ ],
+ },
],
reload: function() {
@@ -154,6 +240,13 @@ Ext.define('PVE.node.LVMThinList', {
activate: function() {
this.reload();
},
+ selectionchange: function(model, selected) {
+ let me = this;
+ let vm = me.getViewModel();
+
+ vm.set('volumeGroup', selected[0]?.data.vg || '');
+ vm.set('thinPool', selected[0]?.data.lv || '');
+ },
},
initComponent: function() {
diff --git a/www/manager6/node/ZFS.js b/www/manager6/node/ZFS.js
index 8ea364bf..c5c5aac8 100644
--- a/www/manager6/node/ZFS.js
+++ b/www/manager6/node/ZFS.js
@@ -297,6 +297,40 @@ Ext.define('PVE.node.ZFSList', {
extend: 'Ext.grid.Panel',
xtype: 'pveZFSList',
+ viewModel: {
+ data: {
+ pool: '',
+ },
+ },
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ destroyPool: function() {
+ let me = this;
+ let vm = me.getViewModel();
+ let view = me.getView();
+
+ const pool = vm.get('pool');
+
+ if (!view.nodename) {
+ throw "no node name specified";
+ }
+
+ if (!pool) {
+ throw "no pool specified";
+ }
+
+ Ext.create('Proxmox.window.SafeDestroy', {
+ url: `/nodes/${view.nodename}/disks/zfs/${pool}`,
+ item: { id: pool },
+ showProgress: true,
+ taskName: 'zfsremove',
+ taskDone: () => { view.reload(); },
+ }).show();
+ },
+ },
+
stateful: true,
stateId: 'grid-node-zfs',
columns: [
@@ -378,6 +412,45 @@ Ext.define('PVE.node.ZFSList', {
}
},
},
+ '->',
+ {
+ xtype: 'tbtext',
+ data: {
+ pool: undefined,
+ },
+ bind: {
+ data: {
+ pool: "{pool}",
+ },
+ },
+ tpl: [
+ '<tpl if="pool">',
+ 'Pool {pool}:',
+ '<tpl else>',
+ Ext.String.format(gettext('No {0} selected'), 'pool'),
+ '</tpl>',
+ ],
+ },
+ {
+ text: gettext('More'),
+ iconCls: 'fa fa-bars',
+ disabled: true,
+ bind: {
+ disabled: '{!pool}',
+ },
+ menu: [
+ {
+ text: gettext('Destroy'),
+ itemId: 'remove',
+ iconCls: 'fa fa-fw fa-trash-o',
+ handler: 'destroyPool',
+ disabled: true,
+ bind: {
+ disabled: '{!pool}',
+ },
+ },
+ ],
+ },
],
show_detail: function(zpool) {
@@ -445,8 +518,12 @@ Ext.define('PVE.node.ZFSList', {
activate: function() {
this.reload();
},
- selectionchange: function() {
- this.down('#detailbtn').setDisabled(this.getSelection().length === 0);
+ selectionchange: function(model, selected) {
+ let me = this;
+ let vm = me.getViewModel();
+
+ me.down('#detailbtn').setDisabled(selected.length === 0);
+ vm.set('pool', selected[0]?.data.name || '');
},
itemdblclick: function(grid, record) {
this.show_detail(record.get('name'));
--
2.30.2
* [pve-devel] [PATCH manager 5/6] ui: node: storage removal: add checkbox for cleaning up disks
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
and factor out a SafeDestroyStorage sub-class to avoid duplication.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/node/Directory.js | 3 +--
www/manager6/node/LVM.js | 3 +--
www/manager6/node/LVMThin.js | 3 +--
www/manager6/node/ZFS.js | 3 +--
www/manager6/window/SafeDestroyStorage.js | 31 +++++++++++++++++++++++
6 files changed, 36 insertions(+), 8 deletions(-)
create mode 100644 www/manager6/window/SafeDestroyStorage.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 3d1778c2..ecff8216 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -103,6 +103,7 @@ JSSRC= \
window/Prune.js \
window/Restore.js \
window/SafeDestroyGuest.js \
+ window/SafeDestroyStorage.js \
window/Settings.js \
window/Snapshot.js \
window/StartupEdit.js \
diff --git a/www/manager6/node/Directory.js b/www/manager6/node/Directory.js
index c3dba2ef..cab3d28b 100644
--- a/www/manager6/node/Directory.js
+++ b/www/manager6/node/Directory.js
@@ -90,10 +90,9 @@ Ext.define('PVE.node.Directorylist', {
throw "no directory name specified";
}
- Ext.create('Proxmox.window.SafeDestroy', {
+ Ext.create('PVE.window.SafeDestroyStorage', {
url: `/nodes/${view.nodename}/disks/directory/${dirName}`,
item: { id: dirName },
- showProgress: true,
taskName: 'dirremove',
taskDone: () => { view.reload(); },
}).show();
diff --git a/www/manager6/node/LVM.js b/www/manager6/node/LVM.js
index 70ddf451..d4024de1 100644
--- a/www/manager6/node/LVM.js
+++ b/www/manager6/node/LVM.js
@@ -76,10 +76,9 @@ Ext.define('PVE.node.LVMList', {
throw "no volume group specified";
}
- Ext.create('Proxmox.window.SafeDestroy', {
+ Ext.create('PVE.window.SafeDestroyStorage', {
url: `/nodes/${view.nodename}/disks/lvm/${volumeGroup}`,
item: { id: volumeGroup },
- showProgress: true,
taskName: 'lvmremove',
taskDone: () => { view.reload(); },
}).show();
diff --git a/www/manager6/node/LVMThin.js b/www/manager6/node/LVMThin.js
index ca32bb3b..ebd83c54 100644
--- a/www/manager6/node/LVMThin.js
+++ b/www/manager6/node/LVMThin.js
@@ -80,11 +80,10 @@ Ext.define('PVE.node.LVMThinList', {
throw "no volume group specified";
}
- Ext.create('Proxmox.window.SafeDestroy', {
+ Ext.create('PVE.window.SafeDestroyStorage', {
url: `/nodes/${view.nodename}/disks/lvmthin/${thinPool}`,
params: { 'volume-group': volumeGroup },
item: { id: `${volumeGroup}/${thinPool}` },
- showProgress: true,
taskName: 'lvmthinremove',
taskDone: () => { view.reload(); },
}).show();
diff --git a/www/manager6/node/ZFS.js b/www/manager6/node/ZFS.js
index c5c5aac8..01c56e40 100644
--- a/www/manager6/node/ZFS.js
+++ b/www/manager6/node/ZFS.js
@@ -321,10 +321,9 @@ Ext.define('PVE.node.ZFSList', {
throw "no pool specified";
}
- Ext.create('Proxmox.window.SafeDestroy', {
+ Ext.create('PVE.window.SafeDestroyStorage', {
url: `/nodes/${view.nodename}/disks/zfs/${pool}`,
item: { id: pool },
- showProgress: true,
taskName: 'zfsremove',
taskDone: () => { view.reload(); },
}).show();
diff --git a/www/manager6/window/SafeDestroyStorage.js b/www/manager6/window/SafeDestroyStorage.js
new file mode 100644
index 00000000..62882f37
--- /dev/null
+++ b/www/manager6/window/SafeDestroyStorage.js
@@ -0,0 +1,31 @@
+/*
+ * SafeDestroy window with additional checkboxes for removing a storage on the disk level.
+ */
+Ext.define('PVE.window.SafeDestroyStorage', {
+ extend: 'Proxmox.window.SafeDestroy',
+ alias: 'widget.pveSafeDestroyStorage',
+
+ showProgress: true,
+
+ additionalItems: [
+ {
+ xtype: 'proxmoxcheckbox',
+ name: 'wipeDisks',
+ reference: 'wipeDisksCheckbox',
+ boxLabel: gettext('Cleanup Disks'),
+ checked: true,
+ autoEl: {
+ tag: 'div',
+ 'data-qtip': gettext('Wipe labels and other left-overs'),
+ },
+ },
+ ],
+
+ getParams: function() {
+ let me = this;
+
+ me.params['cleanup-disks'] = me.lookupReference('wipeDisksCheckbox').checked ? 1 : 0;
+
+ return me.callParent();
+ },
+});
--
2.30.2
* [pve-devel] [PATCH manager 6/6] ui: node: storage removal: add checkbox for cleaning up storage config
From: Fabian Ebner @ 2021-10-25 13:47 UTC
To: pve-devel
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
---
www/manager6/window/SafeDestroyStorage.js | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/www/manager6/window/SafeDestroyStorage.js b/www/manager6/window/SafeDestroyStorage.js
index 62882f37..660fd35e 100644
--- a/www/manager6/window/SafeDestroyStorage.js
+++ b/www/manager6/window/SafeDestroyStorage.js
@@ -19,12 +19,20 @@ Ext.define('PVE.window.SafeDestroyStorage', {
'data-qtip': gettext('Wipe labels and other left-overs'),
},
},
+ {
+ xtype: 'proxmoxcheckbox',
+ name: 'cleanupConfig',
+ reference: 'cleanupConfigCheckbox',
+ boxLabel: gettext('Cleanup Storage Configuration'),
+ checked: true,
+ },
],
getParams: function() {
let me = this;
me.params['cleanup-disks'] = me.lookupReference('wipeDisksCheckbox').checked ? 1 : 0;
+ me.params['cleanup-config'] = me.lookupReference('cleanupConfigCheckbox').checked ? 1 : 0;
return me.callParent();
},
--
2.30.2
* [pve-devel] applied-series: [PATCH-SERIES storage/manager] disk-level storage removal for directory, LVM, LVM-thin, ZFS
From: Fabian Grünbichler @ 2021-11-10 13:30 UTC
To: Proxmox VE development discussion
with a reordering of patches (storage #3 before storage #2) and a slight
cosmetic fixup in the last patch.
On October 25, 2021 3:47 pm, Fabian Ebner wrote:
> So that disks can easily be repurposed when a storage is no longer
> needed.
>
>
> Dependency bump pve-manager -> pve-storage is needed.
>
>
> pve-storage:
>
> Fabian Ebner (6):
> LVM: add lvm_destroy_volume_group
> api: disks: add DELETE endpoint for directory, lvm, lvmthin, zfs
> api: list thin pools: add volume group to properties
> diskmanage: add helper for udev workaround
> api: disks: delete: add flag for wiping disks
> api: disks: delete: add flag for cleaning up storage config
>
> PVE/API2/Disks.pm | 7 +--
> PVE/API2/Disks/Directory.pm | 96 +++++++++++++++++++++++++++++--
> PVE/API2/Disks/LVM.pm | 84 +++++++++++++++++++++++++--
> PVE/API2/Disks/LVMThin.pm | 98 ++++++++++++++++++++++++++++++--
> PVE/API2/Disks/ZFS.pm | 107 +++++++++++++++++++++++++++++++++--
> PVE/API2/Storage/Config.pm | 27 +++++++++
> PVE/Diskmanage.pm | 11 ++++
> PVE/Storage/LVMPlugin.pm | 11 ++++
> PVE/Storage/LvmThinPlugin.pm | 1 +
> 9 files changed, 416 insertions(+), 26 deletions(-)
>
>
> pve-manager:
>
> Fabian Ebner (6):
> ui: node: directory: use gettext for 'Directory'
> ui: node: lvmthin: add volume group to columns
> ui: utils: add task descriptions for disk removal
> ui: node: add destroy menu for directory, lvm, lvmthin, zfs
> ui: node: storage removal: add checkbox for cleaning up disks
> ui: node: storage removal: add checkbox for cleaning up storage config
>
> www/manager6/Makefile | 1 +
> www/manager6/Utils.js | 3 +
> www/manager6/node/Directory.js | 83 ++++++++++++++++++-
> www/manager6/node/LVM.js | 82 +++++++++++++++++++
> www/manager6/node/LVMThin.js | 97 +++++++++++++++++++++++
> www/manager6/node/ZFS.js | 80 ++++++++++++++++++-
> www/manager6/window/SafeDestroyStorage.js | 39 +++++++++
> 7 files changed, 382 insertions(+), 3 deletions(-)
> create mode 100644 www/manager6/window/SafeDestroyStorage.js
>
> --
> 2.30.2