* [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed
@ 2021-01-12 10:21 Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 01/10] api: ceph: subclass pools Alwin Antreich
` (9 more replies)
0 siblings, 10 replies; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
This set allows editing pools via GUI & CLI. This should make it easier
for users to adjust pool settings, since they no longer have to resort
to the ceph tooling directly.
v1 -> v2:
- move pools endpoint to a subclass
- add pg autoscale status and settings
- rework and flatten the grid view of ceph pools
- rework the create input panel
- add an edit button using the reworked input panel
- fix broken add_storages
- extend the set_pool function to avoid a race condition
- remove the pg_autoscale_mode default to allow Ceph's setting to
take precedence
v2 -> v3:
- incorporate suggestions and comments from Dominik
- drop 'fix broken add_storages', a similar patch was applied beforehand
- drop 'remove default pg_autoscale_mode', kept at 'warn' - see the
forum thread [0].
- add adjustment of pg_num_min, a tuning knob for the pg_autoscaler
[0] https://forum.proxmox.com/threads/ceph-octopus-upgrade-notes-think-twice-before-enabling-auto-scale.80105
Alwin Antreich (10):
api: ceph: subclass pools
ceph: setpool, use parameter extraction instead
ceph: add titles to ceph_pool_common_options
ceph: add get api call for single pool
ceph: add autoscale_status to api calls
ceph: gui: add autoscale & flatten pool view
ceph: set allowed minimal pg_num down to 1
ceph: gui: rework pool input panel
ceph: gui: add min num of PG
fix: ceph: always set pool size first
PVE/API2/Ceph/Makefile | 1 +
PVE/API2/Ceph.pm | 378 +------------------------
PVE/API2/Ceph/Pools.pm | 573 ++++++++++++++++++++++++++++++++++++++
PVE/CLI/pveceph.pm | 16 +-
PVE/Ceph/Tools.pm | 61 ++--
www/manager6/ceph/Pool.js | 403 +++++++++++++++++++--------
6 files changed, 922 insertions(+), 510 deletions(-)
create mode 100644 PVE/API2/Ceph/Pools.pm
--
2.29.2
* [pve-devel] [PATCH manager v3 01/10] api: ceph: subclass pools
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
2021-02-06 13:28 ` [pve-devel] applied: " Thomas Lamprecht
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 02/10] ceph: setpool, use parameter extraction instead Alwin Antreich
` (8 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
for better handling, and because the pool endpoints gained more entries.
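For illustration, the delegation pattern looks roughly like this (a
minimal sketch with hypothetical package names; the public API paths
below /nodes/{node}/ceph/pools stay the same):

    package PVE::API2::Example;
    use base qw(PVE::RESTHandler);

    # the parent mounts the subclass at the 'pools' sub-path
    __PACKAGE__->register_method ({
        subclass => "PVE::API2::Example::Pools", # hypothetical subclass
        path => 'pools',
    });

    # in the subclass, methods register paths relative to the mount point:
    # path => ''       -> .../pools
    # path => '{name}' -> .../pools/{name}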
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
PVE/API2/Ceph/Makefile | 1 +
PVE/API2/Ceph.pm | 378 +--------------------------------------
PVE/API2/Ceph/Pools.pm | 395 +++++++++++++++++++++++++++++++++++++++++
PVE/CLI/pveceph.pm | 8 +-
4 files changed, 406 insertions(+), 376 deletions(-)
create mode 100644 PVE/API2/Ceph/Pools.pm
diff --git a/PVE/API2/Ceph/Makefile b/PVE/API2/Ceph/Makefile
index 5b6493d5..45daafda 100644
--- a/PVE/API2/Ceph/Makefile
+++ b/PVE/API2/Ceph/Makefile
@@ -5,6 +5,7 @@ PERLSOURCE= \
MON.pm \
OSD.pm \
FS.pm \
+ Pools.pm \
MDS.pm
all:
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 0c647489..ad300b12 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -23,6 +23,7 @@ use PVE::API2::Ceph::FS;
use PVE::API2::Ceph::MDS;
use PVE::API2::Ceph::MGR;
use PVE::API2::Ceph::MON;
+use PVE::API2::Ceph::Pools;
use PVE::API2::Storage::Config;
use base qw(PVE::RESTHandler);
@@ -54,6 +55,11 @@ __PACKAGE__->register_method ({
path => 'fs',
});
+__PACKAGE__->register_method ({
+ subclass => "PVE::API2::Ceph::Pools",
+ path => 'pools',
+});
+
__PACKAGE__->register_method ({
name => 'index',
path => '',
@@ -239,35 +245,6 @@ __PACKAGE__->register_method ({
return $res;
}});
-my $add_storage = sub {
- my ($pool, $storeid) = @_;
-
- my $storage_params = {
- type => 'rbd',
- pool => $pool,
- storage => $storeid,
- krbd => 0,
- content => 'rootdir,images',
- };
-
- PVE::API2::Storage::Config->create($storage_params);
-};
-
-my $get_storages = sub {
- my ($pool) = @_;
-
- my $cfg = PVE::Storage::config();
-
- my $storages = $cfg->{ids};
- my $res = {};
- foreach my $storeid (keys %$storages) {
- my $curr = $storages->{$storeid};
- $res->{$storeid} = $storages->{$storeid}
- if $curr->{type} eq 'rbd' && $pool eq $curr->{pool};
- }
-
- return $res;
-};
__PACKAGE__->register_method ({
name => 'init',
@@ -583,224 +560,6 @@ __PACKAGE__->register_method ({
return PVE::Ceph::Tools::ceph_cluster_status();
}});
-__PACKAGE__->register_method ({
- name => 'lspools',
- path => 'pools',
- method => 'GET',
- description => "List all pools.",
- proxyto => 'node',
- protected => 1,
- permissions => {
- check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
- },
- parameters => {
- additionalProperties => 0,
- properties => {
- node => get_standard_option('pve-node'),
- },
- },
- returns => {
- type => 'array',
- items => {
- type => "object",
- properties => {
- pool => { type => 'integer', title => 'ID' },
- pool_name => { type => 'string', title => 'Name' },
- size => { type => 'integer', title => 'Size' },
- min_size => { type => 'integer', title => 'Min Size' },
- pg_num => { type => 'integer', title => 'PG Num' },
- pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
- crush_rule => { type => 'integer', title => 'Crush Rule' },
- crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
- percent_used => { type => 'number', title => '%-Used' },
- bytes_used => { type => 'integer', title => 'Used' },
- },
- },
- links => [ { rel => 'child', href => "{pool_name}" } ],
- },
- code => sub {
- my ($param) = @_;
-
- PVE::Ceph::Tools::check_ceph_inited();
-
- my $rados = PVE::RADOS->new();
-
- my $stats = {};
- my $res = $rados->mon_command({ prefix => 'df' });
-
- foreach my $d (@{$res->{pools}}) {
- next if !$d->{stats};
- next if !defined($d->{id});
- $stats->{$d->{id}} = $d->{stats};
- }
-
- $res = $rados->mon_command({ prefix => 'osd dump' });
- my $rulestmp = $rados->mon_command({ prefix => 'osd crush rule dump'});
-
- my $rules = {};
- for my $rule (@$rulestmp) {
- $rules->{$rule->{rule_id}} = $rule->{rule_name};
- }
-
- my $data = [];
- my $attr_list = [
- 'pool',
- 'pool_name',
- 'size',
- 'min_size',
- 'pg_num',
- 'crush_rule',
- 'pg_autoscale_mode',
- ];
-
- foreach my $e (@{$res->{pools}}) {
- my $d = {};
- foreach my $attr (@$attr_list) {
- $d->{$attr} = $e->{$attr} if defined($e->{$attr});
- }
-
- if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
- $d->{crush_rule_name} = $rules->{$d->{crush_rule}};
- }
-
- if (my $s = $stats->{$d->{pool}}) {
- $d->{bytes_used} = $s->{bytes_used};
- $d->{percent_used} = $s->{percent_used};
- }
- push @$data, $d;
- }
-
-
- return $data;
- }});
-
-
-my $ceph_pool_common_options = sub {
- my ($nodefault) = shift;
- my $options = {
- name => {
- description => "The name of the pool. It must be unique.",
- type => 'string',
- },
- size => {
- description => 'Number of replicas per object',
- type => 'integer',
- default => 3,
- optional => 1,
- minimum => 1,
- maximum => 7,
- },
- min_size => {
- description => 'Minimum number of replicas per object',
- type => 'integer',
- default => 2,
- optional => 1,
- minimum => 1,
- maximum => 7,
- },
- pg_num => {
- description => "Number of placement groups.",
- type => 'integer',
- default => 128,
- optional => 1,
- minimum => 8,
- maximum => 32768,
- },
- crush_rule => {
- description => "The rule to use for mapping object placement in the cluster.",
- type => 'string',
- optional => 1,
- },
- application => {
- description => "The application of the pool.",
- default => 'rbd',
- type => 'string',
- enum => ['rbd', 'cephfs', 'rgw'],
- optional => 1,
- },
- pg_autoscale_mode => {
- description => "The automatic PG scaling mode of the pool.",
- type => 'string',
- enum => ['on', 'off', 'warn'],
- default => 'warn',
- optional => 1,
- },
- };
-
- if ($nodefault) {
- delete $options->{$_}->{default} for keys %$options;
- }
- return $options;
-};
-
-
-__PACKAGE__->register_method ({
- name => 'createpool',
- path => 'pools',
- method => 'POST',
- description => "Create POOL",
- proxyto => 'node',
- protected => 1,
- permissions => {
- check => ['perm', '/', [ 'Sys.Modify' ]],
- },
- parameters => {
- additionalProperties => 0,
- properties => {
- node => get_standard_option('pve-node'),
- add_storages => {
- description => "Configure VM and CT storage using the new pool.",
- type => 'boolean',
- optional => 1,
- },
- %{ $ceph_pool_common_options->() },
- },
- },
- returns => { type => 'string' },
- code => sub {
- my ($param) = @_;
-
- PVE::Cluster::check_cfs_quorum();
- PVE::Ceph::Tools::check_ceph_configured();
-
- my $pool = extract_param($param, 'name');
- my $node = extract_param($param, 'node');
- my $add_storages = extract_param($param, 'add_storages');
-
- my $rpcenv = PVE::RPCEnvironment::get();
- my $user = $rpcenv->get_user();
-
- if ($add_storages) {
- $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
- die "pool name contains characters which are illegal for storage naming\n"
- if !PVE::JSONSchema::parse_storage_id($pool);
- }
-
- # pool defaults
- $param->{pg_num} //= 128;
- $param->{size} //= 3;
- $param->{min_size} //= 2;
- $param->{application} //= 'rbd';
- $param->{pg_autoscale_mode} //= 'warn';
-
- my $worker = sub {
-
- PVE::Ceph::Tools::create_pool($pool, $param);
-
- if ($add_storages) {
- my $err;
- eval { $add_storage->($pool, "${pool}"); };
- if ($@) {
- warn "failed to add storage: $@";
- $err = 1;
- }
- die "adding storage for pool '$pool' failed, check log and add manually!\n"
- if $err;
- }
- };
-
- return $rpcenv->fork_worker('cephcreatepool', $pool, $user, $worker);
- }});
my $possible_flags = PVE::Ceph::Tools::get_possible_osd_flags();
my $possible_flags_list = [ sort keys %$possible_flags ];
@@ -910,131 +669,6 @@ __PACKAGE__->register_method ({
return undef;
}});
-__PACKAGE__->register_method ({
- name => 'destroypool',
- path => 'pools/{name}',
- method => 'DELETE',
- description => "Destroy pool",
- proxyto => 'node',
- protected => 1,
- permissions => {
- check => ['perm', '/', [ 'Sys.Modify' ]],
- },
- parameters => {
- additionalProperties => 0,
- properties => {
- node => get_standard_option('pve-node'),
- name => {
- description => "The name of the pool. It must be unique.",
- type => 'string',
- },
- force => {
- description => "If true, destroys pool even if in use",
- type => 'boolean',
- optional => 1,
- default => 0,
- },
- remove_storages => {
- description => "Remove all pveceph-managed storages configured for this pool",
- type => 'boolean',
- optional => 1,
- default => 0,
- },
- },
- },
- returns => { type => 'string' },
- code => sub {
- my ($param) = @_;
-
- PVE::Ceph::Tools::check_ceph_inited();
-
- my $rpcenv = PVE::RPCEnvironment::get();
- my $user = $rpcenv->get_user();
- $rpcenv->check($user, '/storage', ['Datastore.Allocate'])
- if $param->{remove_storages};
-
- my $pool = $param->{name};
-
- my $worker = sub {
- my $storages = $get_storages->($pool);
-
- # if not forced, destroy ceph pool only when no
- # vm disks are on it anymore
- if (!$param->{force}) {
- my $storagecfg = PVE::Storage::config();
- foreach my $storeid (keys %$storages) {
- my $storage = $storages->{$storeid};
-
- # check if any vm disks are on the pool
- print "checking storage '$storeid' for RBD images..\n";
- my $res = PVE::Storage::vdisk_list($storagecfg, $storeid);
- die "ceph pool '$pool' still in use by storage '$storeid'\n"
- if @{$res->{$storeid}} != 0;
- }
- }
-
- PVE::Ceph::Tools::destroy_pool($pool);
-
- if ($param->{remove_storages}) {
- my $err;
- foreach my $storeid (keys %$storages) {
- # skip external clusters, not managed by pveceph
- next if $storages->{$storeid}->{monhost};
- eval { PVE::API2::Storage::Config->delete({storage => $storeid}) };
- if ($@) {
- warn "failed to remove storage '$storeid': $@\n";
- $err = 1;
- }
- }
- die "failed to remove (some) storages - check log and remove manually!\n"
- if $err;
- }
- };
- return $rpcenv->fork_worker('cephdestroypool', $pool, $user, $worker);
- }});
-
-
-__PACKAGE__->register_method ({
- name => 'setpool',
- path => 'pools/{name}',
- method => 'PUT',
- description => "Change POOL settings",
- proxyto => 'node',
- protected => 1,
- permissions => {
- check => ['perm', '/', [ 'Sys.Modify' ]],
- },
- parameters => {
- additionalProperties => 0,
- properties => {
- node => get_standard_option('pve-node'),
- %{ $ceph_pool_common_options->('nodefault') },
- },
- },
- returns => { type => 'string' },
- code => sub {
- my ($param) = @_;
-
- PVE::Ceph::Tools::check_ceph_configured();
-
- my $rpcenv = PVE::RPCEnvironment::get();
- my $authuser = $rpcenv->get_user();
-
- my $pool = $param->{name};
- my $ceph_param = \%$param;
- for my $item ('name', 'node') {
- # not ceph parameters
- delete $ceph_param->{$item};
- }
-
- my $worker = sub {
- PVE::Ceph::Tools::set_pool($pool, $ceph_param);
- };
-
- return $rpcenv->fork_worker('cephsetpool', $pool, $authuser, $worker);
- }});
-
-
__PACKAGE__->register_method ({
name => 'crush',
path => 'crush',
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
new file mode 100644
index 00000000..fac21301
--- /dev/null
+++ b/PVE/API2/Ceph/Pools.pm
@@ -0,0 +1,395 @@
+package PVE::API2::Ceph::Pools;
+
+use strict;
+use warnings;
+
+use PVE::Ceph::Tools;
+use PVE::Ceph::Services;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::RADOS;
+use PVE::RESTHandler;
+use PVE::RPCEnvironment;
+use PVE::Storage;
+use PVE::Tools qw(extract_param);
+
+use PVE::API2::Storage::Config;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+ name => 'lspools',
+ path => '',
+ method => 'GET',
+ description => "List all pools.",
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ },
+ },
+ returns => {
+ type => 'array',
+ items => {
+ type => "object",
+ properties => {
+ pool => { type => 'integer', title => 'ID' },
+ pool_name => { type => 'string', title => 'Name' },
+ size => { type => 'integer', title => 'Size' },
+ min_size => { type => 'integer', title => 'Min Size' },
+ pg_num => { type => 'integer', title => 'PG Num' },
+ pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
+ crush_rule => { type => 'integer', title => 'Crush Rule' },
+ crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
+ percent_used => { type => 'number', title => '%-Used' },
+ bytes_used => { type => 'integer', title => 'Used' },
+ },
+ },
+ links => [ { rel => 'child', href => "{pool_name}" } ],
+ },
+ code => sub {
+ my ($param) = @_;
+
+ PVE::Ceph::Tools::check_ceph_inited();
+
+ my $rados = PVE::RADOS->new();
+
+ my $stats = {};
+ my $res = $rados->mon_command({ prefix => 'df' });
+
+ foreach my $d (@{$res->{pools}}) {
+ next if !$d->{stats};
+ next if !defined($d->{id});
+ $stats->{$d->{id}} = $d->{stats};
+ }
+
+ $res = $rados->mon_command({ prefix => 'osd dump' });
+ my $rulestmp = $rados->mon_command({ prefix => 'osd crush rule dump'});
+
+ my $rules = {};
+ for my $rule (@$rulestmp) {
+ $rules->{$rule->{rule_id}} = $rule->{rule_name};
+ }
+
+ my $data = [];
+ my $attr_list = [
+ 'pool',
+ 'pool_name',
+ 'size',
+ 'min_size',
+ 'pg_num',
+ 'crush_rule',
+ 'pg_autoscale_mode',
+ ];
+
+ foreach my $e (@{$res->{pools}}) {
+ my $d = {};
+ foreach my $attr (@$attr_list) {
+ $d->{$attr} = $e->{$attr} if defined($e->{$attr});
+ }
+
+ if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
+ $d->{crush_rule_name} = $rules->{$d->{crush_rule}};
+ }
+
+ if (my $s = $stats->{$d->{pool}}) {
+ $d->{bytes_used} = $s->{bytes_used};
+ $d->{percent_used} = $s->{percent_used};
+ }
+ push @$data, $d;
+ }
+
+
+ return $data;
+ }});
+
+
+my $ceph_pool_common_options = sub {
+ my ($nodefault) = shift;
+ my $options = {
+ name => {
+ description => "The name of the pool. It must be unique.",
+ type => 'string',
+ },
+ size => {
+ description => 'Number of replicas per object',
+ type => 'integer',
+ default => 3,
+ optional => 1,
+ minimum => 1,
+ maximum => 7,
+ },
+ min_size => {
+ description => 'Minimum number of replicas per object',
+ type => 'integer',
+ default => 2,
+ optional => 1,
+ minimum => 1,
+ maximum => 7,
+ },
+ pg_num => {
+ description => "Number of placement groups.",
+ type => 'integer',
+ default => 128,
+ optional => 1,
+ minimum => 8,
+ maximum => 32768,
+ },
+ crush_rule => {
+ description => "The rule to use for mapping object placement in the cluster.",
+ type => 'string',
+ optional => 1,
+ },
+ application => {
+ description => "The application of the pool.",
+ default => 'rbd',
+ type => 'string',
+ enum => ['rbd', 'cephfs', 'rgw'],
+ optional => 1,
+ },
+ pg_autoscale_mode => {
+ description => "The automatic PG scaling mode of the pool.",
+ type => 'string',
+ enum => ['on', 'off', 'warn'],
+ default => 'warn',
+ optional => 1,
+ },
+ };
+
+ if ($nodefault) {
+ delete $options->{$_}->{default} for keys %$options;
+ }
+ return $options;
+};
+
+
+my $add_storage = sub {
+ my ($pool, $storeid) = @_;
+
+ my $storage_params = {
+ type => 'rbd',
+ pool => $pool,
+ storage => $storeid,
+ krbd => 0,
+ content => 'rootdir,images',
+ };
+
+ PVE::API2::Storage::Config->create($storage_params);
+};
+
+my $get_storages = sub {
+ my ($pool) = @_;
+
+ my $cfg = PVE::Storage::config();
+
+ my $storages = $cfg->{ids};
+ my $res = {};
+ foreach my $storeid (keys %$storages) {
+ my $curr = $storages->{$storeid};
+ $res->{$storeid} = $storages->{$storeid}
+ if $curr->{type} eq 'rbd' && $pool eq $curr->{pool};
+ }
+
+ return $res;
+};
+
+
+__PACKAGE__->register_method ({
+ name => 'createpool',
+ path => '',
+ method => 'POST',
+ description => "Create POOL",
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', [ 'Sys.Modify' ]],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ add_storages => {
+ description => "Configure VM and CT storage using the new pool.",
+ type => 'boolean',
+ optional => 1,
+ },
+ %{ $ceph_pool_common_options->() },
+ },
+ },
+ returns => { type => 'string' },
+ code => sub {
+ my ($param) = @_;
+
+ PVE::Cluster::check_cfs_quorum();
+ PVE::Ceph::Tools::check_ceph_configured();
+
+ my $pool = extract_param($param, 'name');
+ my $node = extract_param($param, 'node');
+ my $add_storages = extract_param($param, 'add_storages');
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $user = $rpcenv->get_user();
+
+ if ($add_storages) {
+ $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
+ die "pool name contains characters which are illegal for storage naming\n"
+ if !PVE::JSONSchema::parse_storage_id($pool);
+ }
+
+ # pool defaults
+ $param->{pg_num} //= 128;
+ $param->{size} //= 3;
+ $param->{min_size} //= 2;
+ $param->{application} //= 'rbd';
+ $param->{pg_autoscale_mode} //= 'warn';
+
+ my $worker = sub {
+
+ PVE::Ceph::Tools::create_pool($pool, $param);
+
+ if ($add_storages) {
+ my $err;
+ eval { $add_storage->($pool, "${pool}"); };
+ if ($@) {
+ warn "failed to add storage: $@";
+ $err = 1;
+ }
+ die "adding storage for pool '$pool' failed, check log and add manually!\n"
+ if $err;
+ }
+ };
+
+ return $rpcenv->fork_worker('cephcreatepool', $pool, $user, $worker);
+ }});
+
+
+__PACKAGE__->register_method ({
+ name => 'destroypool',
+ path => '{name}',
+ method => 'DELETE',
+ description => "Destroy pool",
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', [ 'Sys.Modify' ]],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ name => {
+ description => "The name of the pool. It must be unique.",
+ type => 'string',
+ },
+ force => {
+ description => "If true, destroys pool even if in use",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
+ remove_storages => {
+ description => "Remove all pveceph-managed storages configured for this pool",
+ type => 'boolean',
+ optional => 1,
+ default => 0,
+ },
+ },
+ },
+ returns => { type => 'string' },
+ code => sub {
+ my ($param) = @_;
+
+ PVE::Ceph::Tools::check_ceph_inited();
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $user = $rpcenv->get_user();
+ $rpcenv->check($user, '/storage', ['Datastore.Allocate'])
+ if $param->{remove_storages};
+
+ my $pool = $param->{name};
+
+ my $worker = sub {
+ my $storages = $get_storages->($pool);
+
+ # if not forced, destroy ceph pool only when no
+ # vm disks are on it anymore
+ if (!$param->{force}) {
+ my $storagecfg = PVE::Storage::config();
+ foreach my $storeid (keys %$storages) {
+ my $storage = $storages->{$storeid};
+
+ # check if any vm disks are on the pool
+ print "checking storage '$storeid' for RBD images..\n";
+ my $res = PVE::Storage::vdisk_list($storagecfg, $storeid);
+ die "ceph pool '$pool' still in use by storage '$storeid'\n"
+ if @{$res->{$storeid}} != 0;
+ }
+ }
+
+ PVE::Ceph::Tools::destroy_pool($pool);
+
+ if ($param->{remove_storages}) {
+ my $err;
+ foreach my $storeid (keys %$storages) {
+ # skip external clusters, not managed by pveceph
+ next if $storages->{$storeid}->{monhost};
+ eval { PVE::API2::Storage::Config->delete({storage => $storeid}) };
+ if ($@) {
+ warn "failed to remove storage '$storeid': $@\n";
+ $err = 1;
+ }
+ }
+ die "failed to remove (some) storages - check log and remove manually!\n"
+ if $err;
+ }
+ };
+ return $rpcenv->fork_worker('cephdestroypool', $pool, $user, $worker);
+ }});
+
+
+__PACKAGE__->register_method ({
+ name => 'setpool',
+ path => '{name}',
+ method => 'PUT',
+ description => "Change POOL settings",
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', [ 'Sys.Modify' ]],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ %{ $ceph_pool_common_options->('nodefault') },
+ },
+ },
+ returns => { type => 'string' },
+ code => sub {
+ my ($param) = @_;
+
+ PVE::Ceph::Tools::check_ceph_configured();
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+
+ my $pool = $param->{name};
+ my $ceph_param = \%$param;
+ for my $item ('name', 'node') {
+ # not ceph parameters
+ delete $ceph_param->{$item};
+ }
+
+ my $worker = sub {
+ PVE::Ceph::Tools::set_pool($pool, $ceph_param);
+ };
+
+ return $rpcenv->fork_worker('cephsetpool', $pool, $authuser, $worker);
+ }});
+
+
+1;
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index edcc7ded..4114df7e 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -199,7 +199,7 @@ __PACKAGE__->register_method ({
our $cmddef = {
init => [ 'PVE::API2::Ceph', 'init', [], { node => $nodename } ],
pool => {
- ls => [ 'PVE::API2::Ceph', 'lspools', [], { node => $nodename }, sub {
+ ls => [ 'PVE::API2::Ceph::Pools', 'lspools', [], { node => $nodename }, sub {
my ($data, $schema, $options) = @_;
PVE::CLIFormatter::print_api_result($data, $schema,
[
@@ -214,9 +214,9 @@ our $cmddef = {
],
$options);
}, $PVE::RESTHandler::standard_output_options],
- create => [ 'PVE::API2::Ceph', 'createpool', ['name'], { node => $nodename }],
- destroy => [ 'PVE::API2::Ceph', 'destroypool', ['name'], { node => $nodename } ],
- set => [ 'PVE::API2::Ceph', 'setpool', ['name'], { node => $nodename } ],
+ create => [ 'PVE::API2::Ceph::Pools', 'createpool', ['name'], { node => $nodename }],
+ destroy => [ 'PVE::API2::Ceph::Pools', 'destroypool', ['name'], { node => $nodename } ],
+ set => [ 'PVE::API2::Ceph::Pools', 'setpool', ['name'], { node => $nodename } ],
},
lspools => { alias => 'pool ls' },
createpool => { alias => 'pool create' },
--
2.29.2
* [pve-devel] [PATCH manager v3 02/10] ceph: setpool, use parameter extraction instead
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 01/10] api: ceph: subclass pools Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
2021-02-06 13:29 ` [pve-devel] applied: " Thomas Lamprecht
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 03/10] ceph: add titles to ceph_pool_common_options Alwin Antreich
` (7 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
of the unneeded ref copy of the parameter hash.
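For reference, extract_param() from PVE::Tools removes the key from the
hash and returns its value, so the remaining hash can be handed to Ceph
as-is. A small sketch with made-up values:

    use PVE::Tools qw(extract_param);

    my $param = { name => 'testpool', node => 'pve1', size => 3 };

    my $pool = extract_param($param, 'name'); # 'testpool'
    my $node = extract_param($param, 'node'); # 'pve1'
    # $param is now { size => 3 }, only the Ceph-side settings remain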
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
PVE/API2/Ceph/Pools.pm | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index fac21301..b9e295f5 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -377,15 +377,11 @@ __PACKAGE__->register_method ({
my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();
- my $pool = $param->{name};
- my $ceph_param = \%$param;
- for my $item ('name', 'node') {
- # not ceph parameters
- delete $ceph_param->{$item};
- }
+ my $pool = extract_param($param, 'name');
+ my $node = extract_param($param, 'node');
my $worker = sub {
- PVE::Ceph::Tools::set_pool($pool, $ceph_param);
+ PVE::Ceph::Tools::set_pool($pool, $param);
};
return $rpcenv->fork_worker('cephsetpool', $pool, $authuser, $worker);
--
2.29.2
* [pve-devel] [PATCH manager v3 03/10] ceph: add titles to ceph_pool_common_options
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 01/10] api: ceph: subclass pools Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 02/10] ceph: setpool, use parameter extraction instead Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
2021-02-06 13:29 ` [pve-devel] applied: " Thomas Lamprecht
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 04/10] ceph: add get api call for single pool Alwin Antreich
` (6 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
PVE/API2/Ceph/Pools.pm | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index b9e295f5..24562456 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -112,10 +112,12 @@ my $ceph_pool_common_options = sub {
my ($nodefault) = shift;
my $options = {
name => {
+ title => 'Name',
description => "The name of the pool. It must be unique.",
type => 'string',
},
size => {
+ title => 'Size',
description => 'Number of replicas per object',
type => 'integer',
default => 3,
@@ -124,6 +126,7 @@ my $ceph_pool_common_options = sub {
maximum => 7,
},
min_size => {
+ title => 'Min Size',
description => 'Minimum number of replicas per object',
type => 'integer',
default => 2,
@@ -132,6 +135,7 @@ my $ceph_pool_common_options = sub {
maximum => 7,
},
pg_num => {
+ title => 'PG Num',
description => "Number of placement groups.",
type => 'integer',
default => 128,
@@ -140,11 +144,13 @@ my $ceph_pool_common_options = sub {
maximum => 32768,
},
crush_rule => {
+ title => 'Crush Rule Name',
description => "The rule to use for mapping object placement in the cluster.",
type => 'string',
optional => 1,
},
application => {
+ title => 'Application',
description => "The application of the pool.",
default => 'rbd',
type => 'string',
@@ -152,6 +158,7 @@ my $ceph_pool_common_options = sub {
optional => 1,
},
pg_autoscale_mode => {
+ title => 'PG Autoscale Mode',
description => "The automatic PG scaling mode of the pool.",
type => 'string',
enum => ['on', 'off', 'warn'],
--
2.29.2
* [pve-devel] [PATCH manager v3 04/10] ceph: add get api call for single pool
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
` (2 preceding siblings ...)
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 03/10] ceph: add titles to ceph_pool_common_options Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
2021-01-14 16:49 ` Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 05/10] ceph: add autoscale_status to api calls Alwin Antreich
` (5 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
Information about a single pool can now be queried.
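The endpoint builds on the 'osd pool get' monitor command with var set
to 'all', roughly like this (a minimal sketch, mirroring the diff below):

    use PVE::RADOS;

    my $pool = 'testpool'; # example pool name

    my $rados = PVE::RADOS->new();
    my $res = $rados->mon_command({
        prefix => 'osd pool get',
        pool => "$pool",
        var => 'all', # fetch all pool settings at once
    });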
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
PVE/API2/Ceph/Pools.pm | 99 ++++++++++++++++++++++++++++++++++++++++++
PVE/CLI/pveceph.pm | 4 ++
2 files changed, 103 insertions(+)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 24562456..01c11100 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -395,4 +395,103 @@ __PACKAGE__->register_method ({
}});
+__PACKAGE__->register_method ({
+ name => 'getpool',
+ path => '{name}',
+ method => 'GET',
+ description => "List pool settings.",
+ proxyto => 'node',
+ protected => 1,
+ permissions => {
+ check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ node => get_standard_option('pve-node'),
+ name => {
+ description => "The name of the pool. It must be unique.",
+ type => 'string',
+ },
+ verbose => {
+ type => 'boolean',
+ default => 0,
+ optional => 1,
+ description => "If enabled, will display additional data".
+ "(eg. statistics).",
+ },
+ },
+ },
+ returns => {
+ type => "object",
+ properties => {
+ id => { type => 'integer', title => 'ID' },
+ pgp_num => { type => 'integer', title => 'PGP num' },
+ noscrub => { type => 'boolean', title => 'noscrub' },
+ 'nodeep-scrub' => { type => 'boolean', title => 'nodeep-scrub' },
+ nodelete => { type => 'boolean', title => 'nodelete' },
+ nopgchange => { type => 'boolean', title => 'nopgchange' },
+ nosizechange => { type => 'boolean', title => 'nosizechange' },
+ write_fadvise_dontneed => { type => 'boolean', title => 'write_fadvise_dontneed' },
+ hashpspool => { type => 'boolean', title => 'hashpspool' },
+ use_gmt_hitset => { type => 'boolean', title => 'use_gmt_hitset' },
+ fast_read => { type => 'boolean', title => 'Fast Read' },
+ application_list => { type => 'array', title => 'Application', optional => 1 },
+ statistics => { type => 'object', title => 'Statistics', optional => 1 },
+ %{ $ceph_pool_common_options->() },
+ },
+ },
+ code => sub {
+ my ($param) = @_;
+
+ PVE::Ceph::Tools::check_ceph_inited();
+
+ my $verbose = $param->{verbose};
+ my $pool = $param->{name};
+
+ my $rados = PVE::RADOS->new();
+ my $res = $rados->mon_command({
+ prefix => 'osd pool get',
+ pool => "$pool",
+ var => 'all',
+ });
+
+ my $data = {
+ id => $res->{pool_id},
+ name => $pool,
+ size => $res->{size},
+ min_size => $res->{min_size},
+ pg_num => $res->{pg_num},
+ pgp_num => $res->{pgp_num},
+ crush_rule => $res->{crush_rule},
+ pg_autoscale_mode => $res->{pg_autoscale_mode},
+ noscrub => "$res->{noscrub}",
+ 'nodeep-scrub' => "$res->{'nodeep-scrub'}",
+ nodelete => "$res->{nodelete}",
+ nopgchange => "$res->{nopgchange}",
+ nosizechange => "$res->{nosizechange}",
+ write_fadvise_dontneed => "$res->{write_fadvise_dontneed}",
+ hashpspool => "$res->{hashpspool}",
+ use_gmt_hitset => "$res->{use_gmt_hitset}",
+ fast_read => "$res->{fast_read}",
+ };
+
+ if ($verbose) {
+ my $stats;
+ my $res = $rados->mon_command({ prefix => 'df' });
+
+ foreach my $d (@{$res->{pools}}) {
+ next if !$d->{stats};
+ next if !defined($d->{name}) && !$d->{name} ne "$pool";
+ $data->{statistics} = $d->{stats};
+ }
+
+ my $apps = $rados->mon_command({ prefix => "osd pool application get", pool => "$pool", });
+ $data->{application_list} = [ keys %$apps ];
+ }
+
+ return $data;
+ }});
+
+
1;
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 4114df7e..ba5067b1 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -217,6 +217,10 @@ our $cmddef = {
create => [ 'PVE::API2::Ceph::Pools', 'createpool', ['name'], { node => $nodename }],
destroy => [ 'PVE::API2::Ceph::Pools', 'destroypool', ['name'], { node => $nodename } ],
set => [ 'PVE::API2::Ceph::Pools', 'setpool', ['name'], { node => $nodename } ],
+ get => [ 'PVE::API2::Ceph::Pools', 'getpool', ['name'], { node => $nodename }, sub {
+ my ($data, $schema, $options) = @_;
+ PVE::CLIFormatter::print_api_result($data, $schema, undef, $options);
+ }, $PVE::RESTHandler::standard_output_options],
},
lspools => { alias => 'pool ls' },
createpool => { alias => 'pool create' },
--
2.29.2
* [pve-devel] [PATCH manager v3 05/10] ceph: add autoscale_status to api calls
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
` (3 preceding siblings ...)
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 04/10] ceph: add get api call for single pool Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 06/10] ceph: gui: add autoscale & flatten pool view Alwin Antreich
` (4 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
The properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows the autoscale settings & status, including the new
(optimal) target PG count, to make it easier for new users to get and
set the correct number of PGs.
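Since Ceph itself expects target_size_bytes, the API accepts a
human-readable target_size and converts it with
PVE::JSONSchema::parse_size before calling set_pool. A small sketch with
an assumed value:

    use PVE::JSONSchema;

    # '100G' matches the schema pattern ^(\d+(\.\d+)?)([KMGT])?$
    my $target_sizestr = '100G';
    my $bytes = PVE::JSONSchema::parse_size($target_sizestr);
    # $bytes == 107374182400 (base-1024 multipliers)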
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
PVE/API2/Ceph/Pools.pm | 96 +++++++++++++++++++++++++++++++++++++-----
PVE/CLI/pveceph.pm | 4 ++
2 files changed, 90 insertions(+), 10 deletions(-)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 01c11100..014e6be7 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -16,6 +16,24 @@ use PVE::API2::Storage::Config;
use base qw(PVE::RESTHandler);
+my $get_autoscale_status = sub {
+ my ($rados) = shift;
+
+ $rados = PVE::RADOS->new() if !defined($rados);
+
+ my $autoscale = $rados->mon_command({
+ prefix => 'osd pool autoscale-status'});
+
+ my $data;
+ foreach my $p (@$autoscale) {
+ $p->{would_adjust} = "$p->{would_adjust}"; # boolean
+ $data->{$p->{pool_name}} = $p;
+ }
+
+ return $data;
+};
+
+
__PACKAGE__->register_method ({
name => 'lspools',
path => '',
@@ -37,16 +55,21 @@ __PACKAGE__->register_method ({
items => {
type => "object",
properties => {
- pool => { type => 'integer', title => 'ID' },
- pool_name => { type => 'string', title => 'Name' },
- size => { type => 'integer', title => 'Size' },
- min_size => { type => 'integer', title => 'Min Size' },
- pg_num => { type => 'integer', title => 'PG Num' },
- pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
- crush_rule => { type => 'integer', title => 'Crush Rule' },
- crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
- percent_used => { type => 'number', title => '%-Used' },
- bytes_used => { type => 'integer', title => 'Used' },
+ pool => { type => 'integer', title => 'ID' },
+ pool_name => { type => 'string', title => 'Name' },
+ size => { type => 'integer', title => 'Size' },
+ min_size => { type => 'integer', title => 'Min Size' },
+ pg_num => { type => 'integer', title => 'PG Num' },
+ pg_num_min => { type => 'integer', title => 'min. PG Num', optional => 1, },
+ pg_num_final => { type => 'integer', title => 'Optimal PG Num', optional => 1, },
+ pg_autoscale_mode => { type => 'string', title => 'PG Autoscale Mode', optional => 1, },
+ crush_rule => { type => 'integer', title => 'Crush Rule' },
+ crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
+ percent_used => { type => 'number', title => '%-Used' },
+ bytes_used => { type => 'integer', title => 'Used' },
+ target_size => { type => 'integer', title => 'PG Autoscale Target Size', optional => 1 },
+ target_size_ratio => { type => 'number', title => 'PG Autoscale Target Ratio',optional => 1, },
+ autoscale_status => { type => 'object', title => 'Autoscale Status', optional => 1 },
},
},
links => [ { rel => 'child', href => "{pool_name}" } ],
@@ -86,12 +109,24 @@ __PACKAGE__->register_method ({
'pg_autoscale_mode',
];
+ # pg_autoscaler module is not enabled in Nautilus
+ my $autoscale = eval { $get_autoscale_status->($rados) };
+
foreach my $e (@{$res->{pools}}) {
my $d = {};
foreach my $attr (@$attr_list) {
$d->{$attr} = $e->{$attr} if defined($e->{$attr});
}
+ if ($autoscale) {
+ $d->{autoscale_status} = $autoscale->{$d->{pool_name}};
+ $d->{pg_num_final} = $d->{autoscale_status}->{pg_num_final};
+ # some info is nested under options instead
+ $d->{pg_num_min} = $e->{options}->{pg_num_min};
+ $d->{target_size} = $e->{options}->{target_size_bytes};
+ $d->{target_size_ratio} = $e->{options}->{target_size_ratio};
+ }
+
if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
$d->{crush_rule_name} = $rules->{$d->{crush_rule}};
}
@@ -143,6 +178,13 @@ my $ceph_pool_common_options = sub {
minimum => 8,
maximum => 32768,
},
+ pg_num_min => {
+ title => 'min. PG Num',
+ description => "Minimal number of placement groups.",
+ type => 'integer',
+ optional => 1,
+ maximum => 32768,
+ },
crush_rule => {
title => 'Crush Rule Name',
description => "The rule to use for mapping object placement in the cluster.",
@@ -165,6 +207,19 @@ my $ceph_pool_common_options = sub {
default => 'warn',
optional => 1,
},
+ target_size => {
+ description => "The estimated target size of the pool for the PG autoscaler.",
+ title => 'PG Autoscale Target Size',
+ type => 'string',
+ pattern => '^(\d+(\.\d+)?)([KMGT])?$',
+ optional => 1,
+ },
+ target_size_ratio => {
+ description => "The estimated target ratio of the pool for the PG autoscaler.",
+ title => 'PG Autoscale Target Ratio',
+ type => 'number',
+ optional => 1,
+ },
};
if ($nodefault) {
@@ -241,6 +296,12 @@ __PACKAGE__->register_method ({
my $rpcenv = PVE::RPCEnvironment::get();
my $user = $rpcenv->get_user();
+ # Ceph uses target_size_bytes
+ if (defined($param->{'target_size'})) {
+ my $target_sizestr = extract_param($param, 'target_size');
+ $param->{target_size_bytes} = PVE::JSONSchema::parse_size($target_sizestr);
+ }
+
if ($add_storages) {
$rpcenv->check($user, '/storage', ['Datastore.Allocate']);
die "pool name contains characters which are illegal for storage naming\n"
@@ -387,6 +448,12 @@ __PACKAGE__->register_method ({
my $pool = extract_param($param, 'name');
my $node = extract_param($param, 'node');
+ # Ceph uses target_size_bytes
+ if (defined($param->{'target_size'})) {
+ my $target_sizestr = extract_param($param, 'target_size');
+ $param->{target_size_bytes} = PVE::JSONSchema::parse_size($target_sizestr);
+ }
+
my $worker = sub {
PVE::Ceph::Tools::set_pool($pool, $param);
};
@@ -438,6 +505,7 @@ __PACKAGE__->register_method ({
fast_read => { type => 'boolean', title => 'Fast Read' },
application_list => { type => 'array', title => 'Application', optional => 1 },
statistics => { type => 'object', title => 'Statistics', optional => 1 },
+ autoscale_status => { type => 'object', title => 'Autoscale Status', optional => 1 },
%{ $ceph_pool_common_options->() },
},
},
@@ -462,6 +530,7 @@ __PACKAGE__->register_method ({
size => $res->{size},
min_size => $res->{min_size},
pg_num => $res->{pg_num},
+ pg_num_min => $res->{pg_num_min},
pgp_num => $res->{pgp_num},
crush_rule => $res->{crush_rule},
pg_autoscale_mode => $res->{pg_autoscale_mode},
@@ -474,12 +543,19 @@ __PACKAGE__->register_method ({
hashpspool => "$res->{hashpspool}",
use_gmt_hitset => "$res->{use_gmt_hitset}",
fast_read => "$res->{fast_read}",
+ target_size => $res->{target_size_bytes},
+ target_size_ratio => $res->{target_size_ratio},
};
if ($verbose) {
my $stats;
my $res = $rados->mon_command({ prefix => 'df' });
+ # pg_autoscaler module is not enabled in Nautilus
+ # avoid partial read further down, use new rados instance
+ my $autoscale_status = eval { $get_autoscale_status->() };
+ $data->{autoscale_status} = $autoscale_status->{$pool};
+
foreach my $d (@{$res->{pools}}) {
next if !$d->{stats};
next if !defined($d->{name}) && !$d->{name} ne "$pool";
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index ba5067b1..4c000881 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -207,7 +207,11 @@ our $cmddef = {
'size',
'min_size',
'pg_num',
+ 'pg_num_min',
+ 'pg_num_final',
'pg_autoscale_mode',
+ 'target_size',
+ 'target_size_ratio',
'crush_rule_name',
'percent_used',
'bytes_used',
--
2.29.2
* [pve-devel] [PATCH manager v3 06/10] ceph: gui: add autoscale & flatten pool view
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
` (4 preceding siblings ...)
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 05/10] ceph: add autoscale_status to api calls Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 07/10] ceph: set allowed minimal pg_num down to 1 Alwin Antreich
` (3 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
Letting the columns flex requires a flat column header structure.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
www/manager6/ceph/Pool.js | 138 ++++++++++++++++++++++----------------
1 file changed, 82 insertions(+), 56 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 271dcc3c..75c95fce 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -105,14 +105,16 @@ Ext.define('PVE.node.CephPoolList', {
columns: [
{
- header: gettext('Name'),
- width: 120,
+ text: gettext('Name'),
+ minWidth: 120,
+ flex: 2,
sortable: true,
dataIndex: 'pool_name'
},
{
- header: gettext('Size') + '/min',
- width: 100,
+ text: gettext('Size') + '/min',
+ minWidth: 100,
+ flex: 1,
align: 'right',
renderer: function(v, meta, rec) {
return v + '/' + rec.data.min_size;
@@ -120,62 +122,82 @@ Ext.define('PVE.node.CephPoolList', {
dataIndex: 'size'
},
{
- text: 'Placement Groups',
- columns: [
- {
- text: '# of PGs', // pg_num',
- width: 150,
- align: 'right',
- dataIndex: 'pg_num'
- },
- {
- text: gettext('Autoscale'),
- width: 140,
- align: 'right',
- dataIndex: 'pg_autoscale_mode'
- },
- ]
+ text: '# of Placement Groups',
+ flex: 1,
+ minWidth: 150,
+ align: 'right',
+ dataIndex: 'pg_num'
},
{
- text: 'CRUSH Rule',
- columns: [
- {
- text: 'ID',
- align: 'right',
- width: 50,
- dataIndex: 'crush_rule'
- },
- {
- text: gettext('Name'),
- width: 150,
- dataIndex: 'crush_rule_name',
- },
- ]
+ text: gettext('Optimal # of PGs'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'pg_num_final',
+ renderer: function(value, metaData) {
+ if (!value) {
+ value = '<i class="fa fa-info-circle faded"></i> n/a';
+ metaData.tdAttr = 'data-qtip="Needs pg_autoscaler module enabled."';
+ }
+ return value;
+ },
},
{
- text: gettext('Used'),
- columns: [
- {
- text: '%',
- width: 100,
- sortable: true,
- align: 'right',
- renderer: function(val) {
- return Ext.util.Format.percent(val, '0.00');
- },
- dataIndex: 'percent_used',
- },
- {
- text: gettext('Total'),
- width: 100,
- sortable: true,
- renderer: PVE.Utils.render_size,
- align: 'right',
- dataIndex: 'bytes_used',
- summaryType: 'sum',
- summaryRenderer: PVE.Utils.render_size
+ text: gettext('Target Size Ratio'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'target_size_ratio',
+ renderer: Ext.util.Format.numberRenderer('0.0000'),
+ hidden: true,
+ },
+ {
+ text: gettext('Target Size'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'target_size',
+ hidden: true,
+ renderer: function(v, metaData, rec) {
+ let value = PVE.Utils.render_size(v);
+ if (rec.data.target_size_ratio > 0) {
+ value = '<i class="fa fa-info-circle faded"></i> ' + value;
+ metaData.tdAttr = 'data-qtip="Target Size Ratio takes precedence over Target Size."';
}
- ]
+ return value;
+ },
+ },
+ {
+ text: gettext('Autoscale Mode'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'pg_autoscale_mode',
+ },
+ {
+ text: 'CRUSH Rule (ID)',
+ flex: 1,
+ align: 'right',
+ minWidth: 150,
+ renderer: function(v, meta, rec) {
+ return v + ' (' + rec.data.crush_rule + ')';
+ },
+ dataIndex: 'crush_rule_name',
+ },
+ {
+ text: gettext('Used') + ' (%)',
+ flex: 1,
+ minWidth: 180,
+ sortable: true,
+ align: 'right',
+ dataIndex: 'bytes_used',
+ summaryType: 'sum',
+ summaryRenderer: PVE.Utils.render_size,
+ renderer: function(v, meta, rec) {
+ let percentage = Ext.util.Format.percent(rec.data.percent_used, '0.00');
+ let used = PVE.Utils.render_size(v);
+ return used + ' (' + percentage + ')';
+ },
}
],
initComponent: function() {
@@ -276,7 +298,11 @@ Ext.define('PVE.node.CephPoolList', {
{ name: 'bytes_used', type: 'integer'},
{ name: 'percent_used', type: 'number'},
{ name: 'crush_rule', type: 'integer'},
- { name: 'crush_rule_name', type: 'string'}
+ { name: 'crush_rule_name', type: 'string'},
+ { name: 'pg_autoscale_mode', type: 'string'},
+ { name: 'pg_num_final', type: 'integer'},
+ { name: 'target_size_ratio', type: 'number'},
+ { name: 'target_size_bytes', type: 'integer'},
],
idProperty: 'pool_name'
});
--
2.29.2
* [pve-devel] [PATCH manager v3 07/10] ceph: set allowed minimal pg_num down to 1
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
` (5 preceding siblings ...)
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 06/10] ceph: gui: add autoscale & flatten pool view Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 08/10] ceph: gui: rework pool input panel Alwin Antreich
` (2 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
In Ceph Octopus the device_health_metrics pool is auto-created with one
PG. Since Ceph can split and merge PGs, hitting the wrong PG count at
creation is less of an issue anyhow.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
PVE/API2/Ceph/Pools.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 014e6be7..939a1f8a 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -175,7 +175,7 @@ my $ceph_pool_common_options = sub {
type => 'integer',
default => 128,
optional => 1,
- minimum => 8,
+ minimum => 1,
maximum => 32768,
},
pg_num_min => {
--
2.29.2
* [pve-devel] [PATCH manager v3 08/10] ceph: gui: rework pool input panel
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
` (6 preceding siblings ...)
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 07/10] ceph: set allowed minimal pg_num down to 1 Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 09/10] ceph: gui: add min num of PG Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 10/10] fix: ceph: always set pool size first Alwin Antreich
9 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
* add the ability to edit an existing pool
* allow adjustment of the autoscale settings
* warn if the user specifies min_size 1
* disallow min_size 1 on pool create
* derive the min_size replica count from size (see the sketch below)
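The derivation mirrors what recent Ceph releases do themselves
(min_size = round(size / 2)); a minimal sketch with a hypothetical
helper:

    sub derived_min_size {
        my ($size) = @_;
        return int($size / 2 + 0.5); # round half up, like Math.round()
    }

    # derived_min_size(3) == 2, derived_min_size(4) == 2,
    # derived_min_size(5) == 3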
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
www/manager6/ceph/Pool.js | 249 +++++++++++++++++++++++++++++---------
1 file changed, 191 insertions(+), 58 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 75c95fce..bd395956 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -1,17 +1,21 @@
-Ext.define('PVE.CephCreatePool', {
- extend: 'Proxmox.window.Edit',
- alias: 'widget.pveCephCreatePool',
+Ext.define('PVE.CephPoolInputPanel', {
+ extend: 'Proxmox.panel.InputPanel',
+ xtype: 'pveCephPoolInputPanel',
+ mixins: ['Proxmox.Mixin.CBind'],
showProgress: true,
onlineHelp: 'pve_ceph_pools',
subject: 'Ceph Pool',
- isCreate: true,
- method: 'POST',
- items: [
+ column1: [
{
- xtype: 'textfield',
+ xtype: 'pmxDisplayEditField',
fieldLabel: gettext('Name'),
+ cbind: {
+ editable: '{isCreate}',
+ value: '{pool_name}',
+ disabled: '{!isCreate}'
+ },
name: 'name',
allowBlank: false
},
@@ -20,75 +24,180 @@ Ext.define('PVE.CephCreatePool', {
fieldLabel: gettext('Size'),
name: 'size',
value: 3,
- minValue: 1,
+ minValue: 2,
maxValue: 7,
- allowBlank: false
+ allowBlank: false,
+ listeners: {
+ change: function(field, val) {
+ let size = Math.round(val / 2);
+ if (size > 1) {
+ field.up('inputpanel').down('field[name=min_size]').setValue(size);
+ }
+ },
+ },
},
+ ],
+ column2: [
+ {
+ xtype: 'proxmoxKVComboBox',
+ fieldLabel: 'PG Autoscale Mode',
+ name: 'pg_autoscale_mode',
+ comboItems: [
+ ['warn', 'warn'],
+ ['on', 'on'],
+ ['off', 'off'],
+ ],
+ value: 'warn',
+ allowBlank: false,
+ autoSelect: false,
+ labelWidth: 140,
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Add as Storage'),
+ cbind: {
+ value: '{isCreate}',
+ hidden: '{!isCreate}',
+ },
+ name: 'add_storages',
+ labelWidth: 140,
+ autoEl: {
+ tag: 'div',
+ 'data-qtip': gettext('Add the new pool to the cluster storage configuration.'),
+ },
+ },
+ ],
+ advancedColumn1: [
{
xtype: 'proxmoxintegerfield',
fieldLabel: gettext('Min. Size'),
name: 'min_size',
value: 2,
- minValue: 1,
+ cbind: {
+ minValue: (get) => get('isCreate') ? 2 : 1,
+ },
maxValue: 7,
- allowBlank: false
+ allowBlank: false,
+ listeners: {
+ change: function(field, val) {
+ let warn = true;
+ let warn_text = gettext('Min. Size');
+
+ if (val < 2) {
+ warn = false;
+ warn_text = gettext('Min. Size') + ' <i class="fa fa-exclamation-triangle warning"></i>';
+ }
+
+ field.up().down('field[name=min_size-warning]').setHidden(warn);
+ field.setFieldLabel(warn_text);
+ }
+ },
+ },
+ {
+ xtype: 'displayfield',
+ name: 'min_size-warning',
+ userCls: 'pmx-hint',
+ value: 'A pool with min_size=1 could lead to data loss, incomplete PGs or unfound objects.',
+ hidden: true,
},
{
xtype: 'pveCephRuleSelector',
fieldLabel: 'Crush Rule', // do not localize
+ cbind: { nodename: '{nodename}' },
name: 'crush_rule',
allowBlank: false
},
- {
- xtype: 'proxmoxKVComboBox',
- fieldLabel: 'PG Autoscale Mode', // do not localize
- name: 'pg_autoscale_mode',
- comboItems: [
- ['warn', 'warn'],
- ['on', 'on'],
- ['off', 'off'],
- ],
- value: 'warn',
- allowBlank: false,
- autoSelect: false,
- },
{
xtype: 'proxmoxintegerfield',
- fieldLabel: 'pg_num',
+ fieldLabel: '# of PGs',
name: 'pg_num',
value: 128,
- minValue: 8,
+ minValue: 1,
maxValue: 32768,
+ allowBlank: false,
+ emptyText: 128,
+ },
+ ],
+ advancedColumn2: [
+ {
+ xtype: 'numberfield',
+ fieldLabel: gettext('Target Size Ratio'),
+ name: 'target_size_ratio',
+ labelWidth: 140,
+ minValue: 0,
+ decimalPrecision: 3,
allowBlank: true,
- emptyText: gettext('Autoscale'),
+ emptyText: '0.0',
},
{
- xtype: 'proxmoxcheckbox',
- fieldLabel: gettext('Add as Storage'),
- value: true,
- name: 'add_storages',
- autoEl: {
- tag: 'div',
- 'data-qtip': gettext('Add the new pool to the cluster storage configuration.'),
- },
- }
+ xtype: 'numberfield',
+ fieldLabel: gettext('Target Size') + ' (GiB)',
+ name: 'target_size',
+ labelWidth: 140,
+ minValue: 0,
+ allowBlank: true,
+ emptyText: '0',
+ },
+ {
+ xtype: 'displayfield',
+ name: 'min_size-warning',
+ userCls: 'pmx-hint',
+ value: 'Target Size Ratio takes precedence.',
+ },
],
- initComponent : function() {
- var me = this;
- if (!me.nodename) {
- throw "no node name specified";
+ onGetValues: function(values) {
+ Object.keys(values || {}).forEach(function(name) {
+ if (values[name] === '') {
+ delete values[name];
+ }
+ });
+
+ if (values['target_size'] && values['target_size'] !== 0) {
+ values['target_size'] = values['target_size']*1024*1024*1024;
}
+ return values;
+ },
- Ext.apply(me, {
- url: "/nodes/" + me.nodename + "/ceph/pools",
- defaults: {
- nodename: me.nodename
- }
- });
+ setValues: function(values) {
+ if (values['target_size'] && values['target_size'] !== 0) {
+ values['target_size'] = values['target_size']/1024/1024/1024;
+ }
- me.callParent();
- }
+ this.callParent([values]);
+ },
+
+});
+
+Ext.define('PVE.CephPoolEdit', {
+ extend: 'Proxmox.window.Edit',
+ alias: 'widget.pveCephPoolEdit',
+ xtype: 'pveCephPoolEdit',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ cbindData: {
+ pool_name: '',
+ isCreate: (cfg) => !cfg.pool_name,
+ },
+
+ cbind: {
+ autoLoad: get => !get('isCreate'),
+ url: get => get('isCreate') ?
+ `/nodes/${get('nodename')}/ceph/pools` :
+ `/nodes/${get('nodename')}/ceph/pools/${get('pool_name')}`,
+ method: get => get('isCreate') ? 'POST' : 'PUT',
+ },
+
+ subject: gettext('Ceph Pool'),
+
+ items: [{
+ xtype: 'pveCephPoolInputPanel',
+ cbind: {
+ nodename: '{nodename}',
+ pool_name: '{pool_name}',
+ isCreate: '{isCreate}',
+ },
+ }],
});
Ext.define('PVE.node.CephPoolList', {
@@ -221,6 +330,9 @@ Ext.define('PVE.node.CephPoolList', {
});
var store = Ext.create('Proxmox.data.DiffStore', { rstore: rstore });
+ var reload = function() {
+ rstore.load();
+ };
var regex = new RegExp("not (installed|initialized)", "i");
PVE.Utils.handleStoreErrorOrMask(me, rstore, regex, function(me, error){
@@ -237,14 +349,36 @@ Ext.define('PVE.node.CephPoolList', {
var create_btn = new Ext.Button({
text: gettext('Create'),
handler: function() {
- var win = Ext.create('PVE.CephCreatePool', {
- nodename: nodename
+ var win = Ext.create('PVE.CephPoolEdit', {
+ title: gettext('Create') + ': Ceph Pool',
+ isCreate: true,
+ nodename: nodename,
});
win.show();
- win.on('destroy', function() {
- rstore.load();
- });
+ win.on('destroy', reload);
+ }
+ });
+
+ var run_editor = function() {
+ var rec = sm.getSelection()[0];
+ if (!rec) {
+ return;
}
+
+ var win = Ext.create('PVE.CephPoolEdit', {
+ title: gettext('Edit') + ': Ceph Pool',
+ nodename: nodename,
+ pool_name: rec.data.pool_name,
+ });
+ win.on('destroy', reload);
+ win.show();
+ };
+
+ var edit_btn = new Proxmox.button.Button({
+ text: gettext('Edit'),
+ disabled: true,
+ selModel: sm,
+ handler: run_editor,
});
var destroy_btn = Ext.create('Proxmox.button.Button', {
@@ -268,19 +402,18 @@ Ext.define('PVE.node.CephPoolList', {
},
item: { type: 'CephPool', id: rec.data.pool_name }
}).show();
- win.on('destroy', function() {
- rstore.load();
- });
+ win.on('destroy', reload);
}
});
Ext.apply(me, {
store: store,
selModel: sm,
- tbar: [ create_btn, destroy_btn ],
+ tbar: [ create_btn, edit_btn, destroy_btn ],
listeners: {
activate: () => rstore.startUpdate(),
destroy: () => rstore.stopUpdate(),
+ itemdblclick: run_editor,
}
});
@@ -302,7 +435,7 @@ Ext.define('PVE.node.CephPoolList', {
{ name: 'pg_autoscale_mode', type: 'string'},
{ name: 'pg_num_final', type: 'integer'},
{ name: 'target_size_ratio', type: 'number'},
- { name: 'target_size_bytes', type: 'integer'},
+ { name: 'target_size', type: 'integer'},
],
idProperty: 'pool_name'
});
--
2.29.2
* [pve-devel] [PATCH manager v3 09/10] ceph: gui: add min num of PG
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
` (7 preceding siblings ...)
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 08/10] ceph: gui: rework pool input panel Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 10/10] fix: ceph: always set pool size first Alwin Antreich
9 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC
To: pve-devel
This is used to fine-tune the Ceph pg_autoscaler.
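On the backend this ends up as a regular pool setting; set_pool() in
PVE/Ceph/Tools.pm issues roughly the following monitor command (pool
name and value assumed):

    use PVE::RADOS;

    my $rados = PVE::RADOS->new();
    $rados->mon_command({
        prefix => 'osd pool set',
        pool => 'testpool',
        var => 'pg_num_min',
        val => '16',
        format => 'plain',
    });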
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
www/manager6/ceph/Pool.js | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index bd395956..9b8b68dd 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -144,6 +144,15 @@ Ext.define('PVE.CephPoolInputPanel', {
userCls: 'pmx-hint',
value: 'Target Size Ratio takes precedence.',
},
+ {
+ xtype: 'proxmoxintegerfield',
+ fieldLabel: 'Min. # of PGs',
+ name: 'pg_num_min',
+ labelWidth: 140,
+ minValue: 0,
+ allowBlank: true,
+ emptyText: '0',
+ },
],
onGetValues: function(values) {
@@ -251,6 +260,14 @@ Ext.define('PVE.node.CephPoolList', {
return value;
},
},
+ {
+ text: gettext('Min. # of PGs'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'pg_num_min',
+ hidden: true,
+ },
{
text: gettext('Target Size Ratio'),
flex: 1,
@@ -428,6 +445,7 @@ Ext.define('PVE.node.CephPoolList', {
{ name: 'size', type: 'integer'},
{ name: 'min_size', type: 'integer'},
{ name: 'pg_num', type: 'integer'},
+ { name: 'pg_num_min', type: 'integer'},
{ name: 'bytes_used', type: 'integer'},
{ name: 'percent_used', type: 'number'},
{ name: 'crush_rule', type: 'integer'},
--
2.29.2
* [pve-devel] [PATCH manager v3 10/10] fix: ceph: always set pool size first
2021-01-12 10:21 [pve-devel] [PATCH manager v3 00/10] ceph: allow pools settings to be changed Alwin Antreich
` (8 preceding siblings ...)
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 09/10] ceph: gui: add min num of PG Alwin Antreich
@ 2021-01-12 10:21 ` Alwin Antreich
9 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2021-01-12 10:21 UTC (permalink / raw)
To: pve-devel
Since Ceph Nautilus 14.2.10 and Octopus 15.2.2, the min_size of a pool is
calculated from its size (round(size / 2)). When size is applied to the
pool after min_size, the manually specified min_size is overwritten.
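A minimal sketch of why the ordering matters (pool name and values are
illustrative):

    use PVE::Ceph::Tools;

    # Without the reordering, hash iteration could apply 'size' after a
    # manual 'min_size'; Ceph would then recalculate min_size as
    # round(size / 2) = 2 and silently discard the requested 3. Handling
    # 'size' first (and removing it from the params) keeps the manual
    # min_size intact.
    PVE::Ceph::Tools::set_pool('testpool', {
        size     => 4,
        min_size => 3,
    });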
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
PVE/Ceph/Tools.pm | 61 +++++++++++++++++++++++++++++++----------------
1 file changed, 40 insertions(+), 21 deletions(-)
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index ab38f7bc..9d4d595f 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -200,33 +200,52 @@ sub check_ceph_enabled {
return 1;
}
+my $set_pool_setting = sub {
+ my ($pool, $setting, $value) = @_;
+
+ my $command;
+ if ($setting eq 'application') {
+ $command = {
+ prefix => "osd pool application enable",
+ pool => "$pool",
+ app => "$value",
+ };
+ } else {
+ $command = {
+ prefix => "osd pool set",
+ pool => "$pool",
+ var => "$setting",
+ val => "$value",
+ format => 'plain',
+ };
+ }
+
+ my $rados = PVE::RADOS->new();
+ eval { $rados->mon_command($command); };
+ return $@ ? $@ : undef;
+};
+
sub set_pool {
my ($pool, $param) = @_;
- foreach my $setting (keys %$param) {
- my $value = $param->{$setting};
-
- my $command;
- if ($setting eq 'application') {
- $command = {
- prefix => "osd pool application enable",
- pool => "$pool",
- app => "$value",
- };
+ # Since Nautilus 14.2.10 / Octopus 15.2.2, setting the pool size also
+ # recalculates min_size; apply size first (and drop it from the params)
+ # so that a manually specified min_size is not overwritten.
+ # https://tracker.ceph.com/issues/44862
+ if ($param->{size}) {
+ my $value = $param->{size};
+ if (my $err = $set_pool_setting->($pool, 'size', $value)) {
+ print "$err";
} else {
- $command = {
- prefix => "osd pool set",
- pool => "$pool",
- var => "$setting",
- val => "$value",
- format => 'plain',
- };
+ delete $param->{size};
}
+ }
+
+ foreach my $setting (keys %$param) {
+ my $value = $param->{$setting};
+ next if $setting eq 'size';
- my $rados = PVE::RADOS->new();
- eval { $rados->mon_command($command); };
- if ($@) {
- print "$@";
+ if (my $err = $set_pool_setting->($pool, $setting, $value)) {
+ print "$err";
} else {
delete $param->{$setting};
}
--
2.29.2
* Re: [pve-devel] [PATCH manager v3 04/10] ceph: add get api call for single pool
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 04/10] ceph: add get api call for single pool Alwin Antreich
@ 2021-01-14 16:49 ` Alwin Antreich
0 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2021-01-14 16:49 UTC (permalink / raw)
To: pve-devel
On Tue, Jan 12, 2021 at 11:21:47AM +0100, Alwin Antreich wrote:
> Information of a single pool can be queried.
>
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
> PVE/API2/Ceph/Pools.pm | 99 ++++++++++++++++++++++++++++++++++++++++++
> PVE/CLI/pveceph.pm | 4 ++
> 2 files changed, 103 insertions(+)
>
> diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
> index 24562456..01c11100 100644
> --- a/PVE/API2/Ceph/Pools.pm
> +++ b/PVE/API2/Ceph/Pools.pm
> @@ -395,4 +395,103 @@ __PACKAGE__->register_method ({
> }});
>
>
> +__PACKAGE__->register_method ({
> + name => 'getpool',
> + path => '{name}',
> + method => 'GET',
> + description => "List pool settings.",
> + proxyto => 'node',
> + protected => 1,
> + permissions => {
> + check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
> + },
> + parameters => {
> + additionalProperties => 0,
> + properties => {
> + node => get_standard_option('pve-node'),
> + name => {
> + description => "The name of the pool. It must be unique.",
> + type => 'string',
> + },
> + verbose => {
> + type => 'boolean',
> + default => 0,
> + optional => 1,
> + description => "If enabled, will display additional data ".
> + "(e.g. statistics).",
> + },
> + },
> + },
> + returns => {
> + type => "object",
> + properties => {
> + id => { type => 'integer', title => 'ID' },
> + pgp_num => { type => 'integer', title => 'PGP num' },
> + noscrub => { type => 'boolean', title => 'noscrub' },
> + 'nodeep-scrub' => { type => 'boolean', title => 'nodeep-scrub' },
> + nodelete => { type => 'boolean', title => 'nodelete' },
> + nopgchange => { type => 'boolean', title => 'nopgchange' },
> + nosizechange => { type => 'boolean', title => 'nosizechange' },
> + write_fadvise_dontneed => { type => 'boolean', title => 'write_fadvise_dontneed' },
> + hashpspool => { type => 'boolean', title => 'hashpspool' },
> + use_gmt_hitset => { type => 'boolean', title => 'use_gmt_hitset' },
> + fast_read => { type => 'boolean', title => 'Fast Read' },
> + application_list => { type => 'array', title => 'Application', optional => 1 },
> + statistics => { type => 'object', title => 'Statistics', optional => 1 },
> + %{ $ceph_pool_common_options->() },
> + },
> + },
> + code => sub {
> + my ($param) = @_;
> +
> + PVE::Ceph::Tools::check_ceph_inited();
> +
> + my $verbose = $param->{verbose};
> + my $pool = $param->{name};
> +
> + my $rados = PVE::RADOS->new();
> + my $res = $rados->mon_command({
> + prefix => 'osd pool get',
> + pool => "$pool",
> + var => 'all',
> + });
> +
> + my $data = {
> + id => $res->{pool_id},
> + name => $pool,
> + size => $res->{size},
> + min_size => $res->{min_size},
> + pg_num => $res->{pg_num},
> + pgp_num => $res->{pgp_num},
> + crush_rule => $res->{crush_rule},
> + pg_autoscale_mode => $res->{pg_autoscale_mode},
> + noscrub => "$res->{noscrub}",
> + 'nodeep-scrub' => "$res->{'nodeep-scrub'}",
> + nodelete => "$res->{nodelete}",
> + nopgchange => "$res->{nopgchange}",
> + nosizechange => "$res->{nosizechange}",
> + write_fadvise_dontneed => "$res->{write_fadvise_dontneed}",
> + hashpspool => "$res->{hashpspool}",
> + use_gmt_hitset => "$res->{use_gmt_hitset}",
> + fast_read => "$res->{fast_read}",
> + };
> +
> + if ($verbose) {
> + my $stats;
> + my $res = $rados->mon_command({ prefix => 'df' });
> +
> + foreach my $d (@{$res->{pools}}) {
> + next if !$d->{stats};
> + next if !defined($d->{name}) || $d->{name} ne "$pool";
> + $data->{statistics} = $d->{stats};
> + }
> +
> + my $apps = $rados->mon_command({ prefix => "osd pool application get", pool => "$pool", });
> + $data->{application_list} = [ keys %$apps ];
> + }
> +
> + return $data;
> + }});
> +
> +
> 1;
> diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
> index 4114df7e..ba5067b1 100755
> --- a/PVE/CLI/pveceph.pm
> +++ b/PVE/CLI/pveceph.pm
> @@ -217,6 +217,10 @@ our $cmddef = {
> create => [ 'PVE::API2::Ceph::Pools', 'createpool', ['name'], { node => $nodename }],
> destroy => [ 'PVE::API2::Ceph::Pools', 'destroypool', ['name'], { node => $nodename } ],
> set => [ 'PVE::API2::Ceph::Pools', 'setpool', ['name'], { node => $nodename } ],
> + get => [ 'PVE::API2::Ceph::Pools', 'getpool', ['name'], { node => $nodename }, sub {
> + my ($data, $schema, $options) = @_;
> + PVE::CLIFormatter::print_api_result($data, $schema, undef, $options);
> + }, $PVE::RESTHandler::standard_output_options],
> },
> lspools => { alias => 'pool ls' },
> createpool => { alias => 'pool create' },
> --
> 2.29.2
Note:
I noticed that the crush_rule selection doesn't work. The value returned
by the GET request isn't applied to the field, which uses the
pveCephRuleSelector.
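For reference, the underlying query boils down to a single mon command,
which the new GET endpoint repackages; a minimal standalone sketch
(pool name illustrative, assuming an initialized cluster):

    use PVE::RADOS;

    my $rados = PVE::RADOS->new();
    # 'osd pool get <pool> all' returns all settable options at once.
    my $res = $rados->mon_command({
        prefix => 'osd pool get',
        pool   => 'testpool',
        var    => 'all',
    });
    print "size=$res->{size} min_size=$res->{min_size} pg_num=$res->{pg_num}\n";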
* [pve-devel] applied: [PATCH manager v3 01/10] api: ceph: subclass pools
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 01/10] api: ceph: subclass pools Alwin Antreich
@ 2021-02-06 13:28 ` Thomas Lamprecht
0 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2021-02-06 13:28 UTC (permalink / raw)
To: Proxmox VE development discussion, Alwin Antreich
On 12.01.21 11:21, Alwin Antreich wrote:
for better handling, and since the pool endpoint gained more entries.
>
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
> PVE/API2/Ceph/Makefile | 1 +
> PVE/API2/Ceph.pm | 378 +--------------------------------------
> PVE/API2/Ceph/Pools.pm | 395 +++++++++++++++++++++++++++++++++++++++++
> PVE/CLI/pveceph.pm | 8 +-
> 4 files changed, 406 insertions(+), 376 deletions(-)
> create mode 100644 PVE/API2/Ceph/Pools.pm
>
>
applied, thanks!
* [pve-devel] applied: [PATCH manager v3 02/10] ceph: setpool, use parameter extraction instead
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 02/10] ceph: setpool, use parameter extraction instead Alwin Antreich
@ 2021-02-06 13:29 ` Thomas Lamprecht
0 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2021-02-06 13:29 UTC (permalink / raw)
To: Proxmox VE development discussion, Alwin Antreich
On 12.01.21 11:21, Alwin Antreich wrote:
> of the unneeded ref copy for params.
>
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
> PVE/API2/Ceph/Pools.pm | 10 +++-------
> 1 file changed, 3 insertions(+), 7 deletions(-)
>
>
applied, thanks!
* [pve-devel] applied: [PATCH manager v3 03/10] ceph: add titles to ceph_pool_common_options
2021-01-12 10:21 ` [pve-devel] [PATCH manager v3 03/10] ceph: add titles to ceph_pool_common_options Alwin Antreich
@ 2021-02-06 13:29 ` Thomas Lamprecht
0 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2021-02-06 13:29 UTC (permalink / raw)
To: Proxmox VE development discussion, Alwin Antreich
On 12.01.21 11:21, Alwin Antreich wrote:
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
> PVE/API2/Ceph/Pools.pm | 7 +++++++
> 1 file changed, 7 insertions(+)
>
>
applied, thanks!