From: Dominik Csapak <d.csapak@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
Alwin Antreich <a.antreich@proxmox.com>
Subject: Re: [pve-devel] [PATCH manager v2 3/8] ceph: add autoscale_status to api calls
Date: Tue, 24 Nov 2020 14:53:25 +0100 [thread overview]
Message-ID: <74b531b8-7697-3c3f-7bdc-5d519a708f1e@proxmox.com> (raw)
In-Reply-To: <20201124105811.1416723-4-a.antreich@proxmox.com>
comments inline
On 11/24/20 11:58 AM, Alwin Antreich wrote:
> the properties target_size_ratio and target_size_bytes are the only two
> options by ceph to set on a pool. The updated pool list shows now
> autoscale settings & status. Including the new target PGs. To make it
> easier for new users to get/set the correct amount of PGs.
>
> And use parameter extraction instead of the unneeded ref copy for params.
i'd rather have this as its own patch than bundled in here, but ok...
>
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
> PVE/API2/Ceph/POOLS.pm | 131 +++++++++++++++++++++++++++++++----------
> PVE/CLI/pveceph.pm | 3 +
> PVE/Ceph/Tools.pm | 21 +++++++
> 3 files changed, 123 insertions(+), 32 deletions(-)
>
> diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
> index 19fc1b7e..324a1f79 100644
> --- a/PVE/API2/Ceph/POOLS.pm
> +++ b/PVE/API2/Ceph/POOLS.pm
> @@ -3,6 +3,7 @@ package PVE::API2::Ceph::POOLS;
> use strict;
> use warnings;
>
> +use PVE::Tools qw(extract_param);
> use PVE::Ceph::Tools;
> use PVE::Ceph::Services;
> use PVE::JSONSchema qw(get_standard_option);
> @@ -67,6 +68,19 @@ my $ceph_pool_common_options = sub {
> default => 'warn',
> optional => 1,
> },
> + target_size => {
> + description => "The estimated target size of the pool for the PG autoscaler.",
> + title => 'PG Autoscale Target Size',
> + type => 'string',
> + pattern => '(\d+(?:\.\d+)?)([KMGT])?',
> + optional => 1,
> + },
> + target_size_ratio => {
> + description => "The estimated target ratio of the pool for the PG autoscaler.",
> + title => 'PG Autoscale Target Ratio',
> + type => 'number',
> + optional => 1,
> + },
> };
>
> if ($nodefault) {
> @@ -105,6 +119,30 @@ my $get_storages = sub {
> return $res;
> };
>
> +my $get_autoscale_status = sub {
> + my ($rados, $pool) = @_;
> +
> + $rados = PVE::RADOS->new() if !defined($rados);
> +
> + my $autoscale = $rados->mon_command({
> + prefix => 'osd pool autoscale-status'});
> +
> + my $data;
> + foreach my $p (@$autoscale) {
> + $p->{would_adjust} = "$p->{would_adjust}"; # boolean
> + push @$data, $p;
> + }
> +
> + if ($pool) {
> + foreach my $p (@$data) {
> + next if $p->{pool_name} ne $pool;
> + $data = $p;
> + }
> + }
> +
> + return $data;
> +
would it not be better to return a hash mapping
poolname -> data_for_that_pool?
this way you only need to iterate once, and you save the
extra iterations below...
> +};
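something like this is what i have in mind (untested sketch, just to
illustrate the shape):

```perl
my $get_autoscale_status = sub {
    my ($rados) = @_;

    $rados = PVE::RADOS->new() if !defined($rados);

    my $autoscale = $rados->mon_command({
	prefix => 'osd pool autoscale-status'});

    # index by pool name so callers get an O(1) lookup instead of
    # scanning the whole list for every pool
    my $data = {};
    for my $p (@$autoscale) {
	$p->{would_adjust} = "$p->{would_adjust}"; # boolean
	$data->{$p->{pool_name}} = $p;
    }

    return $data;
};
```

the single-pool case then falls out for free as `$data->{$pool}`.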
> +
>
> __PACKAGE__->register_method ({
> name => 'lspools',
> @@ -127,16 +165,20 @@ __PACKAGE__->register_method ({
> items => {
> type => "object",
> properties => {
> - pool => { type => 'integer', title => 'ID' },
> - pool_name => { type => 'string', title => 'Name' },
> - size => { type => 'integer', title => 'Size' },
> - min_size => { type => 'integer', title => 'Min Size' },
> - pg_num => { type => 'integer', title => 'PG Num' },
> - pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
> - crush_rule => { type => 'integer', title => 'Crush Rule' },
> - crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
> - percent_used => { type => 'number', title => '%-Used' },
> - bytes_used => { type => 'integer', title => 'Used' },
> + pool => { type => 'integer', title => 'ID' },
> + pool_name => { type => 'string', title => 'Name' },
> + size => { type => 'integer', title => 'Size' },
> + min_size => { type => 'integer', title => 'Min Size' },
> + pg_num => { type => 'integer', title => 'PG Num' },
> + pg_num_final => { type => 'integer', title => 'Optimal # of PGs' },
> + pg_autoscale_mode => { type => 'string', title => 'PG Autoscale Mode', optional => 1 },
> + crush_rule => { type => 'integer', title => 'Crush Rule' },
> + crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
> + percent_used => { type => 'number', title => '%-Used' },
> + bytes_used => { type => 'integer', title => 'Used' },
> + target_size => { type => 'integer', title => 'PG Autoscale Target Size', optional => 1 },
> + target_size_ratio => { type => 'number', title => 'PG Autoscale Target Ratio',optional => 1, },
> + autoscale_status => { type => 'object', title => 'Autoscale Status', optional => 1 },
> },
> },
> links => [ { rel => 'child', href => "{pool_name}" } ],
> @@ -176,12 +218,18 @@ __PACKAGE__->register_method ({
> 'pg_autoscale_mode',
> ];
>
> + my $autoscale = $get_autoscale_status->($rados);
> +
> foreach my $e (@{$res->{pools}}) {
> my $d = {};
> foreach my $attr (@$attr_list) {
> $d->{$attr} = $e->{$attr} if defined($e->{$attr});
> }
>
> + # some info is nested under options instead
> + $d->{target_size} = $e->{options}->{target_size_bytes};
> + $d->{target_size_ratio} = $e->{options}->{target_size_ratio};
> +
> if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
> $d->{crush_rule_name} = $rules->{$d->{crush_rule}};
> }
> @@ -190,10 +238,17 @@ __PACKAGE__->register_method ({
> $d->{bytes_used} = $s->{bytes_used};
> $d->{percent_used} = $s->{percent_used};
> }
> +
> + foreach my $p (@$autoscale) {
> + next if ($p->{pool_id} ne $e->{pool});
> + $d->{autoscale_status} = $p;
> + # pick some info directly from the autoscale_status
> + $d->{pg_num_final} = $p->{pg_num_final};
> + }
namely here.
you could then do
$d->{autoscale_status} = $autoscale->{$e->{pool}}
or something like that, instead of iterating over the list
every time (this is O(n^2) in the number of pools)
> +
> push @$data, $d;
> }
>
> -
> return $data;
> }});
>
> @@ -233,34 +288,37 @@ __PACKAGE__->register_method ({
> PVE::Cluster::check_cfs_quorum();
> PVE::Ceph::Tools::check_ceph_configured();
>
> - my $pool = $param->{name};
> + my $pool = extract_param($param, 'name');
> + my $add_storages = extract_param($param, 'add_storages');
> + delete $param->{node}; # not a ceph option
> +
> my $rpcenv = PVE::RPCEnvironment::get();
> my $user = $rpcenv->get_user();
>
> - if ($param->{add_storages}) {
> + # Ceph uses target_size_bytes
> + if (defined($param->{'target_size'})) {
> + my $target_sizestr = extract_param($param, 'target_size');
> + $param->{target_size_bytes} = PVE::Ceph::Tools::convert_to_bytes($target_sizestr);
> + }
> +
> + if (defined($add_storages)) {
this is wrong, what if i give '--add_storages 0'?
defined() is also true for a false value, so this would
still try to add the storage.
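to illustrate the difference:

```perl
my $add_storages = 0;  # user explicitly passed --add_storages 0

# defined() only checks that the parameter was given at all,
# not that it is true:
print "defined\n" if defined($add_storages);  # prints: 0 is defined
print "truthy\n" if $add_storages;            # does not print: 0 is false
```

so a plain truthiness check (`if ($add_storages) { ... }`) is what is
wanted here.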
> $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
> die "pool name contains characters which are illegal for storage naming\n"
> if !PVE::JSONSchema::parse_storage_id($pool);
> }
>
> - my $ceph_param = \%$param;
> - for my $item ('add_storages', 'name', 'node') {
> - # not ceph parameters
> - delete $ceph_param->{$item};
> - }
> -
> # pool defaults
> - $ceph_param->{pg_num} //= 128;
> - $ceph_param->{size} //= 3;
> - $ceph_param->{min_size} //= 2;
> - $ceph_param->{application} //= 'rbd';
> - $ceph_param->{pg_autoscale_mode} //= 'warn';
> + $param->{pg_num} //= 128;
> + $param->{size} //= 3;
> + $param->{min_size} //= 2;
> + $param->{application} //= 'rbd';
> + $param->{pg_autoscale_mode} //= 'warn';
>
> my $worker = sub {
>
> - PVE::Ceph::Tools::create_pool($pool, $ceph_param);
> + PVE::Ceph::Tools::create_pool($pool, $param);
>
> - if ($param->{add_storages}) {
> + if (defined($add_storages)) {
same here
> my $err;
> eval { $add_storage->($pool, "${pool}"); };
> if ($@) {
> @@ -391,15 +449,17 @@ __PACKAGE__->register_method ({
> my $rpcenv = PVE::RPCEnvironment::get();
> my $authuser = $rpcenv->get_user();
>
> - my $pool = $param->{name};
> - my $ceph_param = \%$param;
> - for my $item ('name', 'node') {
> - # not ceph parameters
> - delete $ceph_param->{$item};
> + my $pool = extract_param($param, 'name');
> + delete $param->{node};
> +
> + # Ceph uses target_size_bytes
> + if (defined($param->{'target_size'})) {
> + my $target_sizestr = extract_param($param, 'target_size');
> + $param->{target_size_bytes} = PVE::Ceph::Tools::convert_to_bytes($target_sizestr);
> }
>
> my $worker = sub {
> - PVE::Ceph::Tools::set_pool($pool, $ceph_param);
> + PVE::Ceph::Tools::set_pool($pool, $param);
> };
>
> return $rpcenv->fork_worker('cephsetpool', $pool, $authuser, $worker);
> @@ -448,6 +508,7 @@ __PACKAGE__->register_method ({
> use_gmt_hitset => { type => 'boolean', title => 'use_gmt_hitset' },
> fast_read => { type => 'boolean', title => 'Fast Read' },
> statistics => { type => 'object', title => 'Statistics', optional => 1 },
> + autoscale_status => { type => 'object', title => 'Autoscale Status', optional => 1 },
> %{ $ceph_pool_common_options->() },
> },
> },
> @@ -483,8 +544,12 @@ __PACKAGE__->register_method ({
> hashpspool => "$res->{hashpspool}",
> use_gmt_hitset => "$res->{use_gmt_hitset}",
> fast_read => "$res->{fast_read}",
> + # is only avialable if set
s/avialable/available/
> + target_size => $res->{target_size_bytes} // undef,
> + target_size_ratio => $res->{target_size_ratio} // undef,
> };
>
> +
> if ($verbose) {
> my $stats;
> my $res = $rados->mon_command({ prefix => 'df' });
> @@ -498,6 +563,8 @@ __PACKAGE__->register_method ({
> my $apps = $rados->mon_command({ prefix => "osd pool application get", pool => "$pool", });
> my @application = keys %$apps;
> $data->{application} = $application[0];
> +
> + $data->{autoscale_status} = $get_autoscale_status->($rados, $pool),
> }
>
> return $data;
> diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
> index 69421ca6..b7b68540 100755
> --- a/PVE/CLI/pveceph.pm
> +++ b/PVE/CLI/pveceph.pm
> @@ -187,7 +187,10 @@ our $cmddef = {
> 'size',
> 'min_size',
> 'pg_num',
> + 'pg_num_final',
> 'pg_autoscale_mode',
> + 'target_size',
> + 'target_size_ratio',
> 'crush_rule_name',
> 'percent_used',
> 'bytes_used',
> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
> index 12d309be..d95e8676 100644
> --- a/PVE/Ceph/Tools.pm
> +++ b/PVE/Ceph/Tools.pm
> @@ -487,4 +487,25 @@ sub ceph_cluster_status {
> return $status;
> }
>
> +sub convert_to_bytes {
> + my ($sizestr) = shift;
> +
> + die "internal error" if $sizestr !~ m/^(\d+(?:\.\d+)?)([KMGT])?$/;
> + my ($newsize, $unit) = ($1, $2);
> +
> + if ($unit) {
> + if ($unit eq 'K') {
> + $newsize = $newsize * 1024;
> + } elsif ($unit eq 'M') {
> + $newsize = $newsize * 1024 * 1024;
> + } elsif ($unit eq 'G') {
> + $newsize = $newsize * 1024 * 1024 * 1024;
> + } elsif ($unit eq 'T') {
> + $newsize = $newsize * 1024 * 1024 * 1024 * 1024;
> + }
> + }
> +
> + return int($newsize);
> +}
don't we have something like this in PVE::Tools?
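if not, a table-driven version would at least avoid the if/elsif chain
(untested sketch):

```perl
my %size_factor = (K => 1024, M => 1024**2, G => 1024**3, T => 1024**4);

sub convert_to_bytes {
    my ($sizestr) = @_;

    die "internal error" if $sizestr !~ m/^(\d+(?:\.\d+)?)([KMGT])?$/;
    my ($size, $unit) = ($1, $2);

    # scale by the unit's factor, if a unit suffix was given
    $size *= $size_factor{$unit} if defined($unit);

    return int($size);
}
```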
> +
> 1;
>