* [pve-devel] [PATCH manager v4 0/9] ceph: allow pools settings to be changed
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel

Originally from Alwin Antreich.

Mostly a rebase onto master, a few eslint fixes (squashed into Alwin's
commits) and 3 small fixups.
Alwin Antreich (6):
ceph: add autoscale_status to api calls
ceph: gui: add autoscale & flatten pool view
ceph: set allowed minimal pg_num down to 1
ceph: gui: rework pool input panel
ceph: gui: add min num of PG
fix: ceph: always set pool size first
Dominik Csapak (3):
API2/Ceph/Pools: remove unnecessary boolean conversion
ui: ceph/Pools: improve number checking for target_size
ui: ceph/Pool: show progress on pool edit/create
PVE/API2/Ceph/Pools.pm | 97 +++++++--
PVE/CLI/pveceph.pm | 4 +
PVE/Ceph/Tools.pm | 61 ++++--
www/manager6/ceph/Pool.js | 401 +++++++++++++++++++++++++++-----------
4 files changed, 422 insertions(+), 141 deletions(-)
--
2.20.1
* [pve-devel] [PATCH manager v4 1/9] ceph: add autoscale_status to api calls
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel; +Cc: Alwin Antreich

From: Alwin Antreich <a.antreich@proxmox.com>
The properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows the autoscale settings & status, including the new
(optimal) target PG count, to make it easier for new users to get and
set the correct number of PGs.
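
For context, a minimal sketch of the lookup this patch wraps into
$get_autoscale_status (the fields shown, pool_name, pg_num_final and
would_adjust, are the ones this series relies on; the exact set
returned by 'osd pool autoscale-status' depends on the Ceph release):

    use PVE::RADOS;

    # ask the monitors for the autoscaler's view of all pools and index
    # the result by pool name, so each pool entry can be annotated cheaply
    my $rados = PVE::RADOS->new();
    my $autoscale = $rados->mon_command({
        prefix => 'osd pool autoscale-status'});

    my $by_pool = {};
    $by_pool->{ $_->{pool_name} } = $_ for @$autoscale;

    # e.g. $by_pool->{rbd}->{pg_num_final} is the PG count the autoscaler
    # considers optimal for a pool named 'rbd'; available only when the
    # pg_autoscaler mgr module is enabled, hence the eval guard in the
    # diff below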
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Ceph/Pools.pm | 96 +++++++++++++++++++++++++++++++++++++-----
PVE/CLI/pveceph.pm | 4 ++
2 files changed, 90 insertions(+), 10 deletions(-)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 01c11100..014e6be7 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -16,6 +16,24 @@ use PVE::API2::Storage::Config;
use base qw(PVE::RESTHandler);
+my $get_autoscale_status = sub {
+ my ($rados) = shift;
+
+ $rados = PVE::RADOS->new() if !defined($rados);
+
+ my $autoscale = $rados->mon_command({
+ prefix => 'osd pool autoscale-status'});
+
+ my $data;
+ foreach my $p (@$autoscale) {
+ $p->{would_adjust} = "$p->{would_adjust}"; # boolean
+ $data->{$p->{pool_name}} = $p;
+ }
+
+ return $data;
+};
+
+
__PACKAGE__->register_method ({
name => 'lspools',
path => '',
@@ -37,16 +55,21 @@ __PACKAGE__->register_method ({
items => {
type => "object",
properties => {
- pool => { type => 'integer', title => 'ID' },
- pool_name => { type => 'string', title => 'Name' },
- size => { type => 'integer', title => 'Size' },
- min_size => { type => 'integer', title => 'Min Size' },
- pg_num => { type => 'integer', title => 'PG Num' },
- pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
- crush_rule => { type => 'integer', title => 'Crush Rule' },
- crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
- percent_used => { type => 'number', title => '%-Used' },
- bytes_used => { type => 'integer', title => 'Used' },
+ pool => { type => 'integer', title => 'ID' },
+ pool_name => { type => 'string', title => 'Name' },
+ size => { type => 'integer', title => 'Size' },
+ min_size => { type => 'integer', title => 'Min Size' },
+ pg_num => { type => 'integer', title => 'PG Num' },
+ pg_num_min => { type => 'integer', title => 'min. PG Num', optional => 1, },
+ pg_num_final => { type => 'integer', title => 'Optimal PG Num', optional => 1, },
+ pg_autoscale_mode => { type => 'string', title => 'PG Autoscale Mode', optional => 1, },
+ crush_rule => { type => 'integer', title => 'Crush Rule' },
+ crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
+ percent_used => { type => 'number', title => '%-Used' },
+ bytes_used => { type => 'integer', title => 'Used' },
+ target_size => { type => 'integer', title => 'PG Autoscale Target Size', optional => 1 },
+ target_size_ratio => { type => 'number', title => 'PG Autoscale Target Ratio',optional => 1, },
+ autoscale_status => { type => 'object', title => 'Autoscale Status', optional => 1 },
},
},
links => [ { rel => 'child', href => "{pool_name}" } ],
@@ -86,12 +109,24 @@ __PACKAGE__->register_method ({
'pg_autoscale_mode',
];
+ # pg_autoscaler module is not enabled in Nautilus
+ my $autoscale = eval { $get_autoscale_status->($rados) };
+
foreach my $e (@{$res->{pools}}) {
my $d = {};
foreach my $attr (@$attr_list) {
$d->{$attr} = $e->{$attr} if defined($e->{$attr});
}
+ if ($autoscale) {
+ $d->{autoscale_status} = $autoscale->{$d->{pool_name}};
+ $d->{pg_num_final} = $d->{autoscale_status}->{pg_num_final};
+ # some info is nested under options instead
+ $d->{pg_num_min} = $e->{options}->{pg_num_min};
+ $d->{target_size} = $e->{options}->{target_size_bytes};
+ $d->{target_size_ratio} = $e->{options}->{target_size_ratio};
+ }
+
if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
$d->{crush_rule_name} = $rules->{$d->{crush_rule}};
}
@@ -143,6 +178,13 @@ my $ceph_pool_common_options = sub {
minimum => 8,
maximum => 32768,
},
+ pg_num_min => {
+ title => 'min. PG Num',
+ description => "Minimal number of placement groups.",
+ type => 'integer',
+ optional => 1,
+ maximum => 32768,
+ },
crush_rule => {
title => 'Crush Rule Name',
description => "The rule to use for mapping object placement in the cluster.",
@@ -165,6 +207,19 @@ my $ceph_pool_common_options = sub {
default => 'warn',
optional => 1,
},
+ target_size => {
+ description => "The estimated target size of the pool for the PG autoscaler.",
+ title => 'PG Autoscale Target Size',
+ type => 'string',
+ pattern => '^(\d+(\.\d+)?)([KMGT])?$',
+ optional => 1,
+ },
+ target_size_ratio => {
+ description => "The estimated target ratio of the pool for the PG autoscaler.",
+ title => 'PG Autoscale Target Ratio',
+ type => 'number',
+ optional => 1,
+ },
};
if ($nodefault) {
@@ -241,6 +296,12 @@ __PACKAGE__->register_method ({
my $rpcenv = PVE::RPCEnvironment::get();
my $user = $rpcenv->get_user();
+ # Ceph uses target_size_bytes
+ if (defined($param->{'target_size'})) {
+ my $target_sizestr = extract_param($param, 'target_size');
+ $param->{target_size_bytes} = PVE::JSONSchema::parse_size($target_sizestr);
+ }
+
if ($add_storages) {
$rpcenv->check($user, '/storage', ['Datastore.Allocate']);
die "pool name contains characters which are illegal for storage naming\n"
@@ -387,6 +448,12 @@ __PACKAGE__->register_method ({
my $pool = extract_param($param, 'name');
my $node = extract_param($param, 'node');
+ # Ceph uses target_size_bytes
+ if (defined($param->{'target_size'})) {
+ my $target_sizestr = extract_param($param, 'target_size');
+ $param->{target_size_bytes} = PVE::JSONSchema::parse_size($target_sizestr);
+ }
+
my $worker = sub {
PVE::Ceph::Tools::set_pool($pool, $param);
};
@@ -438,6 +505,7 @@ __PACKAGE__->register_method ({
fast_read => { type => 'boolean', title => 'Fast Read' },
application_list => { type => 'array', title => 'Application', optional => 1 },
statistics => { type => 'object', title => 'Statistics', optional => 1 },
+ autoscale_status => { type => 'object', title => 'Autoscale Status', optional => 1 },
%{ $ceph_pool_common_options->() },
},
},
@@ -462,6 +530,7 @@ __PACKAGE__->register_method ({
size => $res->{size},
min_size => $res->{min_size},
pg_num => $res->{pg_num},
+ pg_num_min => $res->{pg_num_min},
pgp_num => $res->{pgp_num},
crush_rule => $res->{crush_rule},
pg_autoscale_mode => $res->{pg_autoscale_mode},
@@ -474,12 +543,19 @@ __PACKAGE__->register_method ({
hashpspool => "$res->{hashpspool}",
use_gmt_hitset => "$res->{use_gmt_hitset}",
fast_read => "$res->{fast_read}",
+ target_size => $res->{target_size_bytes},
+ target_size_ratio => $res->{target_size_ratio},
};
if ($verbose) {
my $stats;
my $res = $rados->mon_command({ prefix => 'df' });
+ # pg_autoscaler module is not enabled in Nautilus
+ # avoid partial read further down, use new rados instance
+ my $autoscale_status = eval { $get_autoscale_status->() };
+ $data->{autoscale_status} = $autoscale_status->{$pool};
+
foreach my $d (@{$res->{pools}}) {
next if !$d->{stats};
next if !defined($d->{name}) && !$d->{name} ne "$pool";
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index ba5067b1..4c000881 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -207,7 +207,11 @@ our $cmddef = {
'size',
'min_size',
'pg_num',
+ 'pg_num_min',
+ 'pg_num_final',
'pg_autoscale_mode',
+ 'target_size',
+ 'target_size_ratio',
'crush_rule_name',
'percent_used',
'bytes_used',
--
2.20.1
* [pve-devel] [PATCH manager v4 2/9] ceph: gui: add autoscale & flatten pool view
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel; +Cc: Alwin Antreich

From: Alwin Antreich <a.antreich@proxmox.com>
Letting the columns flex requires a flat column header structure.
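
In short (column definitions condensed from the diff below), grouped
header columns give way to flat, flexing top-level columns:

    // before: fixed-width child columns under a grouped header
    { text: 'Placement Groups', columns: [
        { text: '# of PGs', width: 150, align: 'right', dataIndex: 'pg_num' },
    ]},

    // after: a flat column that shares leftover grid width via flex
    {
        text: '# of Placement Groups',
        dataIndex: 'pg_num',
        minWidth: 150, // never collapse below a readable width
        flex: 1,       // otherwise grow/shrink with the grid
        align: 'right',
    },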
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/ceph/Pool.js | 138 ++++++++++++++++++++++----------------
1 file changed, 82 insertions(+), 56 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 5dabd4e6..7f341ce8 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -105,14 +105,16 @@ Ext.define('PVE.node.CephPoolList', {
columns: [
{
- header: gettext('Name'),
- width: 120,
+ text: gettext('Name'),
+ minWidth: 120,
+ flex: 2,
sortable: true,
dataIndex: 'pool_name',
},
{
- header: gettext('Size') + '/min',
- width: 100,
+ text: gettext('Size') + '/min',
+ minWidth: 100,
+ flex: 1,
align: 'right',
renderer: function(v, meta, rec) {
return v + '/' + rec.data.min_size;
@@ -120,62 +122,82 @@ Ext.define('PVE.node.CephPoolList', {
dataIndex: 'size',
},
{
- text: 'Placement Groups',
- columns: [
- {
- text: '# of PGs', // pg_num',
- width: 150,
- align: 'right',
- dataIndex: 'pg_num',
- },
- {
- text: gettext('Autoscale'),
- width: 140,
- align: 'right',
- dataIndex: 'pg_autoscale_mode',
- },
- ],
+ text: '# of Placement Groups',
+ flex: 1,
+ minWidth: 150,
+ align: 'right',
+ dataIndex: 'pg_num',
},
{
- text: 'CRUSH Rule',
- columns: [
- {
- text: 'ID',
- align: 'right',
- width: 50,
- dataIndex: 'crush_rule',
- },
- {
- text: gettext('Name'),
- width: 150,
- dataIndex: 'crush_rule_name',
- },
- ],
+ text: gettext('Optimal # of PGs'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'pg_num_final',
+ renderer: function(value, metaData) {
+ if (!value) {
+ value = '<i class="fa fa-info-circle faded"></i> n/a';
+ metaData.tdAttr = 'data-qtip="Needs pg_autoscaler module enabled."';
+ }
+ return value;
+ },
},
{
- text: gettext('Used'),
- columns: [
- {
- text: '%',
- width: 100,
- sortable: true,
- align: 'right',
- renderer: function(val) {
- return Ext.util.Format.percent(val, '0.00');
- },
- dataIndex: 'percent_used',
- },
- {
- text: gettext('Total'),
- width: 100,
- sortable: true,
- renderer: PVE.Utils.render_size,
- align: 'right',
- dataIndex: 'bytes_used',
- summaryType: 'sum',
- summaryRenderer: PVE.Utils.render_size,
- },
- ],
+ text: gettext('Target Size Ratio'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'target_size_ratio',
+ renderer: Ext.util.Format.numberRenderer('0.0000'),
+ hidden: true,
+ },
+ {
+ text: gettext('Target Size'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'target_size',
+ hidden: true,
+ renderer: function(v, metaData, rec) {
+ let value = PVE.Utils.render_size(v);
+ if (rec.data.target_size_ratio > 0) {
+ value = '<i class="fa fa-info-circle faded"></i> ' + value;
+ metaData.tdAttr = 'data-qtip="Target Size Ratio takes precedence over Target Size."';
+ }
+ return value;
+ },
+ },
+ {
+ text: gettext('Autoscale Mode'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'pg_autoscale_mode',
+ },
+ {
+ text: 'CRUSH Rule (ID)',
+ flex: 1,
+ align: 'right',
+ minWidth: 150,
+ renderer: function(v, meta, rec) {
+ return v + ' (' + rec.data.crush_rule + ')';
+ },
+ dataIndex: 'crush_rule_name',
+ },
+ {
+ text: gettext('Used') + ' (%)',
+ flex: 1,
+ minWidth: 180,
+ sortable: true,
+ align: 'right',
+ dataIndex: 'bytes_used',
+ summaryType: 'sum',
+ summaryRenderer: PVE.Utils.render_size,
+ renderer: function(v, meta, rec) {
+ let percentage = Ext.util.Format.percent(rec.data.percent_used, '0.00');
+ let used = PVE.Utils.render_size(v);
+ return used + ' (' + percentage + ')';
+ },
},
],
initComponent: function() {
@@ -276,6 +298,10 @@ Ext.define('PVE.node.CephPoolList', {
{ name: 'percent_used', type: 'number' },
{ name: 'crush_rule', type: 'integer' },
{ name: 'crush_rule_name', type: 'string' },
+ { name: 'pg_autoscale_mode', type: 'string'},
+ { name: 'pg_num_final', type: 'integer'},
+ { name: 'target_size_ratio', type: 'number'},
+ { name: 'target_size_bytes', type: 'integer'},
],
idProperty: 'pool_name',
});
--
2.20.1
* [pve-devel] [PATCH manager v4 3/9] ceph: set allowed minimal pg_num down to 1
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel; +Cc: Alwin Antreich

From: Alwin Antreich <a.antreich@proxmox.com>
In Ceph Octopus, the device_health_metrics pool is auto-created with 1
PG. Since Ceph can split and merge PGs, hitting the wrong PG count
initially is now less of an issue anyway.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Ceph/Pools.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 014e6be7..939a1f8a 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -175,7 +175,7 @@ my $ceph_pool_common_options = sub {
type => 'integer',
default => 128,
optional => 1,
- minimum => 8,
+ minimum => 1,
maximum => 32768,
},
pg_num_min => {
--
2.20.1
* [pve-devel] [PATCH manager v4 4/9] ceph: gui: rework pool input panel
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel; +Cc: Alwin Antreich

From: Alwin Antreich <a.antreich@proxmox.com>
* add the ability to edit an existing pool
* allow adjustment of autoscale settings
* warn if the user specifies min_size 1
* disallow min_size 1 on pool create
* calculate the min_size replica count from the size (see the sketch below)
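
A minimal sketch of the last point, mirroring the size-change listener
in the diff below: min_size follows round(size / 2), and results that
would drop below 2 are not applied automatically:

    // derive a default min_size from the replica size
    function defaultMinSize(size) {
        let min = Math.round(size / 2);
        return min > 1 ? min : undefined; // never auto-suggest min_size 1
    }
    // defaultMinSize(3) === 2, defaultMinSize(4) === 2, defaultMinSize(7) === 4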
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/ceph/Pool.js | 246 +++++++++++++++++++++++++++++---------
1 file changed, 189 insertions(+), 57 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 7f341ce8..e19f8beb 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -1,17 +1,21 @@
-Ext.define('PVE.CephCreatePool', {
- extend: 'Proxmox.window.Edit',
- alias: 'widget.pveCephCreatePool',
+Ext.define('PVE.CephPoolInputPanel', {
+ extend: 'Proxmox.panel.InputPanel',
+ xtype: 'pveCephPoolInputPanel',
+ mixins: ['Proxmox.Mixin.CBind'],
showProgress: true,
onlineHelp: 'pve_ceph_pools',
subject: 'Ceph Pool',
- isCreate: true,
- method: 'POST',
- items: [
+ column1: [
{
- xtype: 'textfield',
+ xtype: 'pmxDisplayEditField',
fieldLabel: gettext('Name'),
+ cbind: {
+ editable: '{isCreate}',
+ value: '{pool_name}',
+ disabled: '{!isCreate}',
+ },
name: 'name',
allowBlank: false,
},
@@ -20,75 +24,179 @@ Ext.define('PVE.CephCreatePool', {
fieldLabel: gettext('Size'),
name: 'size',
value: 3,
- minValue: 1,
+ minValue: 2,
maxValue: 7,
allowBlank: false,
+ listeners: {
+ change: function(field, val) {
+ let size = Math.round(val / 2);
+ if (size > 1) {
+ field.up('inputpanel').down('field[name=min_size]').setValue(size);
+ }
+ },
+ },
+ },
+ ],
+ column2: [
+ {
+ xtype: 'proxmoxKVComboBox',
+ fieldLabel: 'PG Autoscale Mode',
+ name: 'pg_autoscale_mode',
+ comboItems: [
+ ['warn', 'warn'],
+ ['on', 'on'],
+ ['off', 'off'],
+ ],
+ value: 'warn',
+ allowBlank: false,
+ autoSelect: false,
+ labelWidth: 140,
},
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Add as Storage'),
+ cbind: {
+ value: '{isCreate}',
+ hidden: '{!isCreate}',
+ },
+ name: 'add_storages',
+ labelWidth: 140,
+ autoEl: {
+ tag: 'div',
+ 'data-qtip': gettext('Add the new pool to the cluster storage configuration.'),
+ },
+ },
+ ],
+ advancedColumn1: [
{
xtype: 'proxmoxintegerfield',
fieldLabel: gettext('Min. Size'),
name: 'min_size',
value: 2,
- minValue: 1,
+ cbind: {
+ minValue: (get) => get('isCreate') ? 2 : 1,
+ },
maxValue: 7,
allowBlank: false,
+ listeners: {
+ change: function(field, val) {
+ let warn = true;
+ let warn_text = gettext('Min. Size');
+
+ if (val < 2) {
+ warn = false;
+ warn_text = gettext('Min. Size') + ' <i class="fa fa-exclamation-triangle warning"></i>';
+ }
+
+ field.up().down('field[name=min_size-warning]').setHidden(warn);
+ field.setFieldLabel(warn_text);
+ },
+ },
+ },
+ {
+ xtype: 'displayfield',
+ name: 'min_size-warning',
+ userCls: 'pmx-hint',
+ value: 'A pool with min_size=1 could lead to data loss, incomplete PGs or unfound objects.',
+ hidden: true,
},
{
xtype: 'pveCephRuleSelector',
fieldLabel: 'Crush Rule', // do not localize
+ cbind: { nodename: '{nodename}' },
name: 'crush_rule',
allowBlank: false,
},
- {
- xtype: 'proxmoxKVComboBox',
- fieldLabel: 'PG Autoscale Mode', // do not localize
- name: 'pg_autoscale_mode',
- comboItems: [
- ['warn', 'warn'],
- ['on', 'on'],
- ['off', 'off'],
- ],
- value: 'warn',
- allowBlank: false,
- autoSelect: false,
- },
{
xtype: 'proxmoxintegerfield',
- fieldLabel: 'pg_num',
+ fieldLabel: '# of PGs',
name: 'pg_num',
value: 128,
- minValue: 8,
+ minValue: 1,
maxValue: 32768,
+ allowBlank: false,
+ emptyText: 128,
+ },
+ ],
+ advancedColumn2: [
+ {
+ xtype: 'numberfield',
+ fieldLabel: gettext('Target Size Ratio'),
+ name: 'target_size_ratio',
+ labelWidth: 140,
+ minValue: 0,
+ decimalPrecision: 3,
allowBlank: true,
- emptyText: gettext('Autoscale'),
+ emptyText: '0.0',
},
{
- xtype: 'proxmoxcheckbox',
- fieldLabel: gettext('Add as Storage'),
- value: true,
- name: 'add_storages',
- autoEl: {
- tag: 'div',
- 'data-qtip': gettext('Add the new pool to the cluster storage configuration.'),
- },
+ xtype: 'numberfield',
+ fieldLabel: gettext('Target Size') + ' (GiB)',
+ name: 'target_size',
+ labelWidth: 140,
+ minValue: 0,
+ allowBlank: true,
+ emptyText: '0',
+ },
+ {
+ xtype: 'displayfield',
+ userCls: 'pmx-hint',
+ value: 'Target Size Ratio takes precedence.',
},
],
- initComponent: function() {
- var me = this;
- if (!me.nodename) {
- throw "no node name specified";
+ onGetValues: function(values) {
+ Object.keys(values || {}).forEach(function(name) {
+ if (values[name] === '') {
+ delete values[name];
+ }
+ });
+
+ if (Ext.isNumber(values.target_size) && values.target_size !== 0) {
+ values.target_size = values.target_size*1024*1024*1024;
}
+ return values;
+ },
- Ext.apply(me, {
- url: "/nodes/" + me.nodename + "/ceph/pools",
- defaults: {
- nodename: me.nodename,
- },
- });
+ setValues: function(values) {
+ if (Ext.isNumber(values.target_size) && values.target_size !== 0) {
+ values.target_size = values.target_size/1024/1024/1024;
+ }
- me.callParent();
+ this.callParent([values]);
},
+
+});
+
+Ext.define('PVE.CephPoolEdit', {
+ extend: 'Proxmox.window.Edit',
+ alias: 'widget.pveCephPoolEdit',
+ xtype: 'pveCephPoolEdit',
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ cbindData: {
+ pool_name: '',
+ isCreate: (cfg) => !cfg.pool_name,
+ },
+
+ cbind: {
+ autoLoad: get => !get('isCreate'),
+ url: get => get('isCreate')
+ ? `/nodes/${get('nodename')}/ceph/pools`
+ : `/nodes/${get('nodename')}/ceph/pools/${get('pool_name')}`,
+ method: get => get('isCreate') ? 'POST' : 'PUT',
+ },
+
+ subject: gettext('Ceph Pool'),
+
+ items: [{
+ xtype: 'pveCephPoolInputPanel',
+ cbind: {
+ nodename: '{nodename}',
+ pool_name: '{pool_name}',
+ isCreate: '{isCreate}',
+ },
+ }],
});
Ext.define('PVE.node.CephPoolList', {
@@ -221,6 +329,9 @@ Ext.define('PVE.node.CephPoolList', {
});
var store = Ext.create('Proxmox.data.DiffStore', { rstore: rstore });
+ var reload = function() {
+ rstore.load();
+ };
var regex = new RegExp("not (installed|initialized)", "i");
PVE.Utils.handleStoreErrorOrMask(me, rstore, regex, function(me, error) {
@@ -237,16 +348,38 @@ Ext.define('PVE.node.CephPoolList', {
var create_btn = new Ext.Button({
text: gettext('Create'),
handler: function() {
- var win = Ext.create('PVE.CephCreatePool', {
- nodename: nodename,
+ var win = Ext.create('PVE.CephPoolEdit', {
+ title: gettext('Create') + ': Ceph Pool',
+ isCreate: true,
+ nodename: nodename,
});
win.show();
- win.on('destroy', function() {
- rstore.load();
- });
+ win.on('destroy', reload);
},
});
+ var run_editor = function() {
+ var rec = sm.getSelection()[0];
+ if (!rec) {
+ return;
+ }
+
+ var win = Ext.create('PVE.CephPoolEdit', {
+ title: gettext('Edit') + ': Ceph Pool',
+ nodename: nodename,
+ pool_name: rec.data.pool_name,
+ });
+ win.on('destroy', reload);
+ win.show();
+ };
+
+ var edit_btn = new Proxmox.button.Button({
+ text: gettext('Edit'),
+ disabled: true,
+ selModel: sm,
+ handler: run_editor,
+ });
+
var destroy_btn = Ext.create('Proxmox.button.Button', {
text: gettext('Destroy'),
selModel: sm,
@@ -268,19 +401,18 @@ Ext.define('PVE.node.CephPoolList', {
},
item: { type: 'CephPool', id: rec.data.pool_name },
}).show();
- win.on('destroy', function() {
- rstore.load();
- });
+ win.on('destroy', reload);
},
});
Ext.apply(me, {
store: store,
selModel: sm,
- tbar: [create_btn, destroy_btn],
+ tbar: [create_btn, edit_btn, destroy_btn],
listeners: {
activate: () => rstore.startUpdate(),
destroy: () => rstore.stopUpdate(),
+ itemdblclick: run_editor,
},
});
@@ -298,10 +430,10 @@ Ext.define('PVE.node.CephPoolList', {
{ name: 'percent_used', type: 'number' },
{ name: 'crush_rule', type: 'integer' },
{ name: 'crush_rule_name', type: 'string' },
- { name: 'pg_autoscale_mode', type: 'string'},
- { name: 'pg_num_final', type: 'integer'},
- { name: 'target_size_ratio', type: 'number'},
- { name: 'target_size_bytes', type: 'integer'},
+ { name: 'pg_autoscale_mode', type: 'string' },
+ { name: 'pg_num_final', type: 'integer' },
+ { name: 'target_size_ratio', type: 'number' },
+ { name: 'target_size', type: 'integer' },
],
idProperty: 'pool_name',
});
--
2.20.1
* [pve-devel] [PATCH manager v4 5/9] ceph: gui: add min num of PG
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel; +Cc: Alwin Antreich

From: Alwin Antreich <a.antreich@proxmox.com>
This is used to fine-tune the Ceph pg_autoscaler: pg_num_min sets a
floor below which the autoscaler will not shrink a pool's PG count.
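
On the Ceph side this ends up as an ordinary pool setting. As a hedged
illustration (same command shape as the $set_pool_setting helper in
patch 6/9; the pool name and value here are made up):

    use PVE::RADOS;

    # pin a floor of 16 PGs for a hypothetical pool 'rbd'; the autoscaler
    # will then never shrink the pool's pg_num below this value
    my $rados = PVE::RADOS->new();
    $rados->mon_command({
        prefix => 'osd pool set',
        pool => 'rbd',
        var => 'pg_num_min',
        val => '16',
        format => 'plain',
    });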
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/ceph/Pool.js | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index e19f8beb..236ed0bc 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -143,6 +143,15 @@ Ext.define('PVE.CephPoolInputPanel', {
userCls: 'pmx-hint',
value: 'Target Size Ratio takes precedence.',
},
+ {
+ xtype: 'proxmoxintegerfield',
+ fieldLabel: 'Min. # of PGs',
+ name: 'pg_num_min',
+ labelWidth: 140,
+ minValue: 0,
+ allowBlank: true,
+ emptyText: '0',
+ },
],
onGetValues: function(values) {
@@ -250,6 +259,14 @@ Ext.define('PVE.node.CephPoolList', {
return value;
},
},
+ {
+ text: gettext('Min. # of PGs'),
+ flex: 1,
+ minWidth: 140,
+ align: 'right',
+ dataIndex: 'pg_num_min',
+ hidden: true,
+ },
{
text: gettext('Target Size Ratio'),
flex: 1,
@@ -426,6 +443,7 @@ Ext.define('PVE.node.CephPoolList', {
{ name: 'size', type: 'integer' },
{ name: 'min_size', type: 'integer' },
{ name: 'pg_num', type: 'integer' },
+ { name: 'pg_num_min', type: 'integer' },
{ name: 'bytes_used', type: 'integer' },
{ name: 'percent_used', type: 'number' },
{ name: 'crush_rule', type: 'integer' },
--
2.20.1
* [pve-devel] [PATCH manager v4 6/9] fix: ceph: always set pool size first
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel; +Cc: Alwin Antreich

From: Alwin Antreich <a.antreich@proxmox.com>
Since Ceph Nautilus 14.2.10 and Octopus 15.2.2, the min_size of a pool
is calculated from its size (round(size / 2)). If size is applied to a
pool after min_size, the manually specified min_size will be
overwritten.
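
A minimal sketch of the resulting ordering rule (using the
$set_pool_setting helper the diff below introduces): apply 'size' first,
then everything else, so Ceph's size-based recalculation cannot clobber
an explicitly requested min_size:

    # e.g. $param = { size => 4, min_size => 3 }: setting min_size first
    # would be undone once size is applied and min_size is recalculated
    my @ordered = ('size', grep { $_ ne 'size' } sort keys %$param);
    for my $setting (@ordered) {
        my $value = $param->{$setting};
        next if !defined($value);
        if (my $err = $set_pool_setting->($pool, $setting, $value)) {
            print $err; # keep the failed setting in $param
        } else {
            delete $param->{$setting};
        }
    }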
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/Ceph/Tools.pm | 61 +++++++++++++++++++++++++++++++----------------
1 file changed, 40 insertions(+), 21 deletions(-)
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index ab38f7bc..9d4d595f 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -200,33 +200,52 @@ sub check_ceph_enabled {
return 1;
}
+my $set_pool_setting = sub {
+ my ($pool, $setting, $value) = @_;
+
+ my $command;
+ if ($setting eq 'application') {
+ $command = {
+ prefix => "osd pool application enable",
+ pool => "$pool",
+ app => "$value",
+ };
+ } else {
+ $command = {
+ prefix => "osd pool set",
+ pool => "$pool",
+ var => "$setting",
+ val => "$value",
+ format => 'plain',
+ };
+ }
+
+ my $rados = PVE::RADOS->new();
+ eval { $rados->mon_command($command); };
+ return $@ ? $@ : undef;
+};
+
sub set_pool {
my ($pool, $param) = @_;
- foreach my $setting (keys %$param) {
- my $value = $param->{$setting};
-
- my $command;
- if ($setting eq 'application') {
- $command = {
- prefix => "osd pool application enable",
- pool => "$pool",
- app => "$value",
- };
+ # by default, pool size always sets min_size,
+ # set it and forget it, as first item
+ # https://tracker.ceph.com/issues/44862
+ if ($param->{size}) {
+ my $value = $param->{size};
+ if (my $err = $set_pool_setting->($pool, 'size', $value)) {
+ print "$err";
} else {
- $command = {
- prefix => "osd pool set",
- pool => "$pool",
- var => "$setting",
- val => "$value",
- format => 'plain',
- };
+ delete $param->{size};
}
+ }
+
+ foreach my $setting (keys %$param) {
+ my $value = $param->{$setting};
+ next if $setting eq 'size';
- my $rados = PVE::RADOS->new();
- eval { $rados->mon_command($command); };
- if ($@) {
- print "$@";
+ if (my $err = $set_pool_setting->($pool, $setting, $value)) {
+ print "$err";
} else {
delete $param->{$setting};
}
--
2.20.1
* [pve-devel] [PATCH manager v4 7/9] API2/Ceph/Pools: remove unnecessary boolean conversion
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel

We do nothing with that field, so leave it as it is.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
PVE/API2/Ceph/Pools.pm | 1 -
1 file changed, 1 deletion(-)
diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 939a1f8a..45f0c47c 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -26,7 +26,6 @@ my $get_autoscale_status = sub {
my $data;
foreach my $p (@$autoscale) {
- $p->{would_adjust} = "$p->{would_adjust}"; # boolean
$data->{$p->{pool_name}} = $p;
}
--
2.20.1
* [pve-devel] [PATCH manager v4 8/9] ui: ceph/Pools: improve number checking for target_size
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel

The field gives us a string, so the second condition could never be
true; parse the value to a float instead.
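
For illustration, the behaviour that made the old check dead code
(Ext.isNumber() only accepts actual number primitives):

    Ext.isNumber('2.5');                 // false: form values are strings,
                                         // so the old guard never matched
    let target_size = Number.parseFloat('2.5');
    Ext.isNumber(target_size);           // true

    // what onGetValues now sends to the API: bytes, as an integer string
    (target_size * 1024 * 1024 * 1024).toFixed(0); // '2684354560'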
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/ceph/Pool.js | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 236ed0bc..45333f4d 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -161,15 +161,20 @@ Ext.define('PVE.CephPoolInputPanel', {
}
});
- if (Ext.isNumber(values.target_size) && values.target_size !== 0) {
- values.target_size = values.target_size*1024*1024*1024;
+ let target_size = Number.parseFloat(values.target_size);
+
+ if (Ext.isNumber(target_size) && target_size !== 0) {
+ values.target_size = (target_size*1024*1024*1024).toFixed(0);
}
+
return values;
},
setValues: function(values) {
- if (Ext.isNumber(values.target_size) && values.target_size !== 0) {
- values.target_size = values.target_size/1024/1024/1024;
+ let target_size = Number.parseFloat(values.target_size);
+
+ if (Ext.isNumber(target_size) && target_size !== 0) {
+ values.target_size = target_size/1024/1024/1024;
}
this.callParent([values]);
--
2.20.1
* [pve-devel] [PATCH manager v4 9/9] ui: ceph/Pool: show progress on pool edit/create
From: Dominik Csapak @ 2021-04-20 8:15 UTC
To: pve-devel

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/manager6/ceph/Pool.js | 2 ++
1 file changed, 2 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 45333f4d..430decbb 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -201,6 +201,8 @@ Ext.define('PVE.CephPoolEdit', {
method: get => get('isCreate') ? 'POST' : 'PUT',
},
+ showProgress: true,
+
subject: gettext('Ceph Pool'),
items: [{
--
2.20.1
* [pve-devel] applied-series: [PATCH manager v4 0/9] ceph: allow pools settings to be changed
From: Thomas Lamprecht @ 2021-04-21 14:20 UTC
To: Proxmox VE development discussion, Dominik Csapak; +Cc: Alwin Antreich

On 20.04.21 10:15, Dominik Csapak wrote:
> originally from Alwin Antreich
>
> mostly rebase on master, a few eslint fixes (squashed into alwins
> commits) and 3 small fixups
>
> Alwin Antreich (6):
> ceph: add autoscale_status to api calls
> ceph: gui: add autoscale & flatten pool view
> ceph: set allowed minimal pg_num down to 1
> ceph: gui: rework pool input panel
> ceph: gui: add min num of PG
> fix: ceph: always set pool size first
>
> Dominik Csapak (3):
> API2/Ceph/Pools: remove unnecessary boolean conversion
> ui: ceph/Pools: improve number checking for target_size
> ui: ceph/Pool: show progress on pool edit/create
>
> PVE/API2/Ceph/Pools.pm | 97 +++++++--
> PVE/CLI/pveceph.pm | 4 +
> PVE/Ceph/Tools.pm | 61 ++++--
> www/manager6/ceph/Pool.js | 401 +++++++++++++++++++++++++++-----------
> 4 files changed, 422 insertions(+), 141 deletions(-)
>
applied, thanks to both of you!
Made some follow-ups on top; besides some minor code style stuff it was
mostly:
* avoid horizontal scrolling due to column widths on 720p screens
* make the min_size auto calculation (size + 1) / 2, so that size 4 gets
  min_size 3 (see the sketch below)
* use the pveSizeField (adapted from the pveBandwidthField) for the
  target size, to avoid the back-and-forth *1024*1024*1024 conversion in
  the panel's get/setValues
* print pool settings as they are applied, which makes for a slightly
  nicer task log
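
For the min_size point, one way to express the changed rounding
(illustrative only; the actual follow-up commit may differ):

    // old: Math.round(size / 2)       -> size 4 suggests min_size 2
    // new: Math.round((size + 1) / 2) -> size 4 suggests min_size 3
    [2, 3, 4, 5, 7].map(size => Math.round((size + 1) / 2)); // [2, 2, 3, 3, 4]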