Date: Mon, 20 Mar 2023 09:31:50 +0100
From: Fabian Grünbichler
To: Proxmox VE development discussion
References: <20221209125844.1490407-1-a.lauterer@proxmox.com>
In-Reply-To: <20221209125844.1490407-1-a.lauterer@proxmox.com>
Message-Id: <1679300899.a87qibzw9i.astroid@yuna.none>
Subject: Re: [pve-devel] [PATCH manager 1/2] api: ceph: deprecate pools in favor of pool

On December 9, 2022 1:58 pm, Aaron Lauterer wrote:
> /nodes/{node}/ceph/pools/{pool} returns the pool details right away on a
> GET. This makes it bad practice to add additional sub API endpoints.
>
> By deprecating it and replacing it with /nodes/{node}/ceph/pool/{pool}
> (singular instead of plural) we can turn that into an index GET
> response, making it possible to expand it more in the future.
>
> The GET call returning the pool details is moved into
> /nodes/{node}/ceph/pool/{pool}/status
>
> The code in the new Pool.pm is basically a copy of Pools.pm, to avoid
> a close coupling with the old code, as it is possible that it will diverge
> until we can entirely remove the old code.
>
> Signed-off-by: Aaron Lauterer

high level: pveceph should also be switched to the new endpoints ;)

two small nits inline..

> ---
> The next step is to add a pool/{name}/namespace API so that we can list
> available namespaces and maybe also manage them via the API/UI in the
> future.
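(As an aside for readers following the thread: the restructuring described in the commit message can be illustrated with a minimal sketch. Python is used here purely for brevity and all names are hypothetical; the actual implementation is the Perl patch below. The deprecated plural endpoint answers a GET with the pool details directly, while the new singular endpoint answers with an index of child endpoints, so further children, such as the proposed `namespace` API, can be added later without breaking the schema.)

```python
# Minimal sketch of the API layout change (hypothetical names, not PVE code).

def pools_get(pool):
    """Deprecated GET /nodes/{node}/ceph/pools/{pool}:
    returns the pool details directly, leaving no room for sub-endpoints."""
    return {"pool_name": pool, "size": 3, "min_size": 2}

def pool_index(pool):
    """New GET /nodes/{node}/ceph/pool/{pool}: an index of child endpoints."""
    # more children, e.g. {"name": "namespace"}, can be appended later
    return [{"name": "status"}]

def pool_status(pool):
    """New GET /nodes/{node}/ceph/pool/{pool}/status: the moved details call."""
    return pools_get(pool)
```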
>
>  PVE/API2/Ceph.pm       |   7 +
>  PVE/API2/Ceph/Makefile |   1 +
>  PVE/API2/Ceph/Pool.pm  | 801 +++++++++++++++++++++++++++++++++++++++++
>  PVE/API2/Ceph/Pools.pm |  11 +-
>  4 files changed, 815 insertions(+), 5 deletions(-)
>  create mode 100644 PVE/API2/Ceph/Pool.pm
>
> diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
> index f3442408..946aebd3 100644
> --- a/PVE/API2/Ceph.pm
> +++ b/PVE/API2/Ceph.pm
> @@ -23,6 +23,7 @@ use PVE::API2::Ceph::FS;
>  use PVE::API2::Ceph::MDS;
>  use PVE::API2::Ceph::MGR;
>  use PVE::API2::Ceph::MON;
> +use PVE::API2::Ceph::Pool;
>  use PVE::API2::Ceph::Pools;
>  use PVE::API2::Storage::Config;
>
> @@ -55,6 +56,12 @@ __PACKAGE__->register_method ({
>      path => 'fs',
>  });
>
> +__PACKAGE__->register_method ({
> +    subclass => "PVE::API2::Ceph::Pool",
> +    path => 'pool',
> +});
> +
> +# TODO: deprecated, remove with PVE 8
>  __PACKAGE__->register_method ({
>      subclass => "PVE::API2::Ceph::Pools",
>      path => 'pools',
> diff --git a/PVE/API2/Ceph/Makefile b/PVE/API2/Ceph/Makefile
> index 45daafda..5d6f642b 100644
> --- a/PVE/API2/Ceph/Makefile
> +++ b/PVE/API2/Ceph/Makefile
> @@ -5,6 +5,7 @@ PERLSOURCE= \
>      MON.pm \
>      OSD.pm \
>      FS.pm \
> +    Pool.pm \
>      Pools.pm \
>      MDS.pm
>
> diff --git a/PVE/API2/Ceph/Pool.pm b/PVE/API2/Ceph/Pool.pm
> new file mode 100644
> index 00000000..cd46311b
> --- /dev/null
> +++ b/PVE/API2/Ceph/Pool.pm
> @@ -0,0 +1,801 @@
> +package PVE::API2::Ceph::Pool;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Ceph::Tools;
> +use PVE::Ceph::Services;
> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
> +use PVE::RADOS;
> +use PVE::RESTHandler;
> +use PVE::RPCEnvironment;
> +use PVE::Storage;
> +use PVE::Tools qw(extract_param);
> +
> +use PVE::API2::Storage::Config;
> +
> +use base qw(PVE::RESTHandler);
> +
> +my $get_autoscale_status = sub {
> +    my ($rados) = shift;
> +
> +    $rados = PVE::RADOS->new() if !defined($rados);
> +
> +    my $autoscale = $rados->mon_command({ prefix => 'osd pool autoscale-status'});
> +
> +    my $data;
> +    foreach my $p (@$autoscale) {
> +        $data->{$p->{pool_name}} = $p;
> +    }
> +
> +    return $data;
> +};
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'lspools',
> +    path => '',
> +    method => 'GET',
> +    description => "List all pools.",

and their settings (which are settable by the POST/PUT endpoints).

> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +        check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +        properties => {
> +            node => get_standard_option('pve-node'),
> +        },
> +    },
> +    returns => {
> +        type => 'array',
> +        items => {
> +            type => "object",
> +            properties => {
> +                pool => {
> +                    type => 'integer',
> +                    title => 'ID',
> +                },
> +                pool_name => {
> +                    type => 'string',
> +                    title => 'Name',
> +                },
> +                size => {
> +                    type => 'integer',
> +                    title => 'Size',
> +                },
> +                type => {
> +                    type => 'string',
> +                    title => 'Type',
> +                    enum => ['replicated', 'erasure', 'unknown'],
> +                },
> +                min_size => {
> +                    type => 'integer',
> +                    title => 'Min Size',
> +                },
> +                pg_num => {
> +                    type => 'integer',
> +                    title => 'PG Num',
> +                },
> +                pg_num_min => {
> +                    type => 'integer',
> +                    title => 'min. PG Num',
> +                    optional => 1,
> +                },
> +                pg_num_final => {
> +                    type => 'integer',
> +                    title => 'Optimal PG Num',
> +                    optional => 1,
> +                },
> +                pg_autoscale_mode => {
> +                    type => 'string',
> +                    title => 'PG Autoscale Mode',
> +                    optional => 1,
> +                },
> +                crush_rule => {
> +                    type => 'integer',
> +                    title => 'Crush Rule',
> +                },
> +                crush_rule_name => {
> +                    type => 'string',
> +                    title => 'Crush Rule Name',
> +                },
> +                percent_used => {
> +                    type => 'number',
> +                    title => '%-Used',
> +                },
> +                bytes_used => {
> +                    type => 'integer',
> +                    title => 'Used',
> +                },
> +                target_size => {
> +                    type => 'integer',
> +                    title => 'PG Autoscale Target Size',
> +                    optional => 1,
> +                },
> +                target_size_ratio => {
> +                    type => 'number',
> +                    title => 'PG Autoscale Target Ratio',
> +                    optional => 1,
> +                },
> +                autoscale_status => {
> +                    type => 'object',
> +                    title => 'Autoscale Status',
> +                    optional => 1,
> +                },
> +                application_metadata => {
> +                    type => 'object',
> +                    title => 'Associated Applications',
> +                    optional => 1,
> +                },
> +            },
> +        },
> +        links => [ { rel => 'child', href => "{pool_name}" } ],
> +    },
> +    code => sub {
> +        my ($param) = @_;
> +
> +        PVE::Ceph::Tools::check_ceph_inited();
> +
> +        my $rados = PVE::RADOS->new();
> +
> +        my $stats = {};
> +        my $res = $rados->mon_command({ prefix => 'df' });
> +
> +        foreach my $d (@{$res->{pools}}) {
> +            next if !$d->{stats};
> +            next if !defined($d->{id});
> +            $stats->{$d->{id}} = $d->{stats};
> +        }
> +
> +        $res = $rados->mon_command({ prefix => 'osd dump' });
> +        my $rulestmp = $rados->mon_command({ prefix => 'osd crush rule dump'});
> +
> +        my $rules = {};
> +        for my $rule (@$rulestmp) {
> +            $rules->{$rule->{rule_id}} = $rule->{rule_name};
> +        }
> +
> +        my $data = [];
> +        my $attr_list = [
> +            'pool',
> +            'pool_name',
> +            'size',
> +            'min_size',
> +            'pg_num',
> +            'crush_rule',
> +            'pg_autoscale_mode',
> +            'application_metadata',
> +        ];
> +
> +        # pg_autoscaler module is not enabled in Nautilus
> +        my $autoscale = eval { $get_autoscale_status->($rados) };
> +
> +        foreach my $e (@{$res->{pools}}) {
> +            my $d = {};
> +            foreach my $attr (@$attr_list) {
> +                $d->{$attr} = $e->{$attr} if defined($e->{$attr});
> +            }
> +
> +            if ($autoscale) {
> +                $d->{autoscale_status} = $autoscale->{$d->{pool_name}};
> +                $d->{pg_num_final} = $d->{autoscale_status}->{pg_num_final};
> +                # some info is nested under options instead
> +                $d->{pg_num_min} = $e->{options}->{pg_num_min};
> +                $d->{target_size} = $e->{options}->{target_size_bytes};
> +                $d->{target_size_ratio} = $e->{options}->{target_size_ratio};
> +            }
> +
> +            if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
> +                $d->{crush_rule_name} = $rules->{$d->{crush_rule}};
> +            }
> +
> +            if (my $s = $stats->{$d->{pool}}) {
> +                $d->{bytes_used} = $s->{bytes_used};
> +                $d->{percent_used} = $s->{percent_used};
> +            }
> +
> +            # Ceph's numerical pool types are barely documented. Found the following in the Ceph
> +            # codebase: https://github.com/ceph/ceph/blob/ff144995a849407c258bcb763daa3e03cfce5059/src/osd/osd_types.h#L1221-L1233
> +            if ($e->{type} == 1) {
> +                $d->{type} = 'replicated';
> +            } elsif ($e->{type} == 3) {
> +                $d->{type} = 'erasure';
> +            } else {
> +                # we should never get here, but better be safe
> +                $d->{type} = 'unknown';
> +            }
> +            push @$data, $d;
> +        }
> +
> +
> +        return $data;
> +    }});
> +
> +
> +my $ceph_pool_common_options = sub {
> +    my ($nodefault) = shift;
> +    my $options = {
> +        name => {
> +            title => 'Name',
> +            description => "The name of the pool. It must be unique.",
> +            type => 'string',
> +        },
> +        size => {
> +            title => 'Size',
> +            description => 'Number of replicas per object',
> +            type => 'integer',
> +            default => 3,
> +            optional => 1,
> +            minimum => 1,
> +            maximum => 7,
> +        },
> +        min_size => {
> +            title => 'Min Size',
> +            description => 'Minimum number of replicas per object',
> +            type => 'integer',
> +            default => 2,
> +            optional => 1,
> +            minimum => 1,
> +            maximum => 7,
> +        },
> +        pg_num => {
> +            title => 'PG Num',
> +            description => "Number of placement groups.",
> +            type => 'integer',
> +            default => 128,
> +            optional => 1,
> +            minimum => 1,
> +            maximum => 32768,
> +        },
> +        pg_num_min => {
> +            title => 'min. PG Num',
> +            description => "Minimal number of placement groups.",
> +            type => 'integer',
> +            optional => 1,
> +            maximum => 32768,
> +        },
> +        crush_rule => {
> +            title => 'Crush Rule Name',
> +            description => "The rule to use for mapping object placement in the cluster.",
> +            type => 'string',
> +            optional => 1,
> +        },
> +        application => {
> +            title => 'Application',
> +            description => "The application of the pool.",
> +            default => 'rbd',
> +            type => 'string',
> +            enum => ['rbd', 'cephfs', 'rgw'],
> +            optional => 1,
> +        },
> +        pg_autoscale_mode => {
> +            title => 'PG Autoscale Mode',
> +            description => "The automatic PG scaling mode of the pool.",
> +            type => 'string',
> +            enum => ['on', 'off', 'warn'],
> +            default => 'warn',
> +            optional => 1,
> +        },
> +        target_size => {
> +            description => "The estimated target size of the pool for the PG autoscaler.",
> +            title => 'PG Autoscale Target Size',
> +            type => 'string',
> +            pattern => '^(\d+(\.\d+)?)([KMGT])?$',
> +            optional => 1,
> +        },
> +        target_size_ratio => {
> +            description => "The estimated target ratio of the pool for the PG autoscaler.",
> +            title => 'PG Autoscale Target Ratio',
> +            type => 'number',
> +            optional => 1,
> +        },
> +    };
> +
> +    if ($nodefault) {
> +        delete $options->{$_}->{default} for keys %$options;
> +    }
> +    return $options;
> +};
> +
> +
> +my $add_storage = sub {
> +    my ($pool, $storeid, $ec_data_pool) = @_;
> +
> +    my $storage_params = {
> +        type => 'rbd',
> +        pool => $pool,
> +        storage => $storeid,
> +        krbd => 0,
> +        content => 'rootdir,images',
> +    };
> +
> +    $storage_params->{'data-pool'} = $ec_data_pool if $ec_data_pool;
> +
> +    PVE::API2::Storage::Config->create($storage_params);
> +};
> +
> +my $get_storages = sub {
> +    my ($pool) = @_;
> +
> +    my $cfg = PVE::Storage::config();
> +
> +    my $storages = $cfg->{ids};
> +    my $res = {};
> +    foreach my $storeid (keys %$storages) {
> +        my $curr = $storages->{$storeid};
> +        next if $curr->{type} ne 'rbd';
> +        $curr->{pool} = 'rbd' if !defined $curr->{pool}; # set default
> +        if (
> +            $pool eq $curr->{pool} ||
> +            (defined $curr->{'data-pool'} && $pool eq $curr->{'data-pool'})
> +        ) {
> +            $res->{$storeid} = $storages->{$storeid};
> +        }
> +    }
> +
> +    return $res;
> +};
> +
> +my $ec_format = {
> +    k => {
> +        type => 'integer',
> +        description => "Number of data chunks. Will create an erasure coded pool plus a"
> +            ." replicated pool for metadata.",
> +        minimum => 2,
> +    },
> +    m => {
> +        type => 'integer',
> +        description => "Number of coding chunks. Will create an erasure coded pool plus a"
> +            ." replicated pool for metadata.",
> +        minimum => 1,
> +    },
> +    'failure-domain' => {
> +        type => 'string',
> +        description => "CRUSH failure domain. Default is 'host'. Will create an erasure"
> +            ." coded pool plus a replicated pool for metadata.",
> +        format_description => 'domain',
> +        optional => 1,
> +        default => 'host',
> +    },
> +    'device-class' => {
> +        type => 'string',
> +        description => "CRUSH device class. Will create an erasure coded pool plus a"
> +            ." replicated pool for metadata.",
> +        format_description => 'class',
> +        optional => 1,
> +    },
> +    profile => {
> +        description => "Override the erasure code (EC) profile to use. Will create an"
> +            ." erasure coded pool plus a replicated pool for metadata.",
> +        type => 'string',
> +        format_description => 'profile',
> +        optional => 1,
> +    },
> +};
> +
> +sub ec_parse_and_check {
> +    my ($property, $rados) = @_;
> +    return if !$property;
> +
> +    my $ec = parse_property_string($ec_format, $property);
> +
> +    die "Erasure code profile '$ec->{profile}' does not exist.\n"
> +        if $ec->{profile} && !PVE::Ceph::Tools::ecprofile_exists($ec->{profile}, $rados);
> +
> +    return $ec;
> +}
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'createpool',
> +    path => '',
> +    method => 'POST',
> +    description => "Create Ceph pool",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +        check => ['perm', '/', [ 'Sys.Modify' ]],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +        properties => {
> +            node => get_standard_option('pve-node'),
> +            add_storages => {
> +                description => "Configure VM and CT storage using the new pool.",
> +                type => 'boolean',
> +                optional => 1,
> +                default => "0; for erasure coded pools: 1",
> +            },
> +            'erasure-coding' => {
> +                description => "Create an erasure coded pool for RBD with an accompanying"
> +                    ." replicated pool for metadata storage. With EC, the common ceph options 'size',"
> +                    ." 'min_size' and 'crush_rule' parameters will be applied to the metadata pool.",
> +                type => 'string',
> +                format => $ec_format,
> +                optional => 1,
> +            },
> +            %{ $ceph_pool_common_options->() },
> +        },
> +    },
> +    returns => { type => 'string' },
> +    code => sub {
> +        my ($param) = @_;
> +
> +        PVE::Cluster::check_cfs_quorum();
> +        PVE::Ceph::Tools::check_ceph_configured();
> +
> +        my $pool = my $name = extract_param($param, 'name');
> +        my $node = extract_param($param, 'node');
> +        my $add_storages = extract_param($param, 'add_storages');
> +
> +        my $rpcenv = PVE::RPCEnvironment::get();
> +        my $user = $rpcenv->get_user();
> +        # Ceph uses target_size_bytes
> +        if (defined($param->{'target_size'})) {
> +            my $target_sizestr = extract_param($param, 'target_size');
> +            $param->{target_size_bytes} = PVE::JSONSchema::parse_size($target_sizestr);
> +        }
> +
> +        my $rados = PVE::RADOS->new();
> +        my $ec = ec_parse_and_check(extract_param($param, 'erasure-coding'), $rados);
> +        $add_storages = 1 if $ec && !defined($add_storages);
> +
> +        if ($add_storages) {
> +            $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
> +            die "pool name contains characters which are illegal for storage naming\n"
> +                if !PVE::JSONSchema::parse_storage_id($pool);
> +        }
> +
> +        # pool defaults
> +        $param->{pg_num} //= 128;
> +        $param->{size} //= 3;
> +        $param->{min_size} //= 2;
> +        $param->{application} //= 'rbd';
> +        $param->{pg_autoscale_mode} //= 'warn';
> +
> +        my $worker = sub {
> +            # reopen with longer timeout
> +            $rados = PVE::RADOS->new(timeout => PVE::Ceph::Tools::get_config('long_rados_timeout'));
> +
> +            if ($ec) {
> +                if (!$ec->{profile}) {
> +                    $ec->{profile} = PVE::Ceph::Tools::get_ecprofile_name($pool, $rados);
> +                    eval {
> +                        PVE::Ceph::Tools::create_ecprofile(
> +                            $ec->@{'profile', 'k', 'm', 'failure-domain', 'device-class'},
> +                            $rados,
> +                        );
> +                    };
> +                    die "could not create erasure code profile '$ec->{profile}': $@\n" if $@;
> +                    print "created new erasure code profile '$ec->{profile}'\n";
> +                }
> +
> +                my $ec_data_param = {};
> +                # copy all params, should be a flat hash
> +                $ec_data_param = { map { $_ => $param->{$_} } keys %$param };
> +
> +                $ec_data_param->{pool_type} = 'erasure';
> +                $ec_data_param->{allow_ec_overwrites} = 'true';
> +                $ec_data_param->{erasure_code_profile} = $ec->{profile};
> +                delete $ec_data_param->{size};
> +                delete $ec_data_param->{min_size};
> +                delete $ec_data_param->{crush_rule};
> +
> +                # metadata pool should be ok with 32 PGs
> +                $param->{pg_num} = 32;
> +
> +                $pool = "${name}-metadata";
> +                $ec->{data_pool} = "${name}-data";
> +
> +                PVE::Ceph::Tools::create_pool($ec->{data_pool}, $ec_data_param, $rados);
> +            }
> +
> +            PVE::Ceph::Tools::create_pool($pool, $param, $rados);
> +
> +            if ($add_storages) {
> +                eval { $add_storage->($pool, "${name}", $ec->{data_pool}) };
> +                die "adding PVE storage for ceph pool '$name' failed: $@\n" if $@;
> +            }
> +        };
> +
> +        return $rpcenv->fork_worker('cephcreatepool', $pool, $user, $worker);
> +    }});
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'destroypool',
> +    path => '{name}',
> +    method => 'DELETE',
> +    description => "Destroy pool",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +        check => ['perm', '/', [ 'Sys.Modify' ]],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +        properties => {
> +            node => get_standard_option('pve-node'),
> +            name => {
> +                description => "The name of the pool. It must be unique.",
> +                type => 'string',
> +            },
> +            force => {
> +                description => "If true, destroys pool even if in use",
> +                type => 'boolean',
> +                optional => 1,
> +                default => 0,
> +            },
> +            remove_storages => {
> +                description => "Remove all pveceph-managed storages configured for this pool",
> +                type => 'boolean',
> +                optional => 1,
> +                default => 0,
> +            },
> +            remove_ecprofile => {
> +                description => "Remove the erasure code profile. Defaults to true, if applicable.",
> +                type => 'boolean',
> +                optional => 1,
> +                default => 1,
> +            },
> +        },
> +    },
> +    returns => { type => 'string' },
> +    code => sub {
> +        my ($param) = @_;
> +
> +        PVE::Ceph::Tools::check_ceph_inited();
> +
> +        my $rpcenv = PVE::RPCEnvironment::get();
> +        my $user = $rpcenv->get_user();
> +        $rpcenv->check($user, '/storage', ['Datastore.Allocate'])
> +            if $param->{remove_storages};
> +
> +        my $pool = $param->{name};
> +
> +        my $worker = sub {
> +            my $storages = $get_storages->($pool);
> +
> +            # if not forced, destroy ceph pool only when no
> +            # vm disks are on it anymore
> +            if (!$param->{force}) {
> +                my $storagecfg = PVE::Storage::config();
> +                foreach my $storeid (keys %$storages) {
> +                    my $storage = $storages->{$storeid};
> +
> +                    # check if any vm disks are on the pool
> +                    print "checking storage '$storeid' for RBD images..\n";
> +                    my $res = PVE::Storage::vdisk_list($storagecfg, $storeid);
> +                    die "ceph pool '$pool' still in use by storage '$storeid'\n"
> +                        if @{$res->{$storeid}} != 0;
> +                }
> +            }
> +            my $rados = PVE::RADOS->new();
> +
> +            my $pool_properties = PVE::Ceph::Tools::get_pool_properties($pool, $rados);
> +
> +            PVE::Ceph::Tools::destroy_pool($pool, $rados);
> +
> +            if (my $ecprofile = $pool_properties->{erasure_code_profile}) {
> +                print "found erasure coded profile '$ecprofile', destroying its CRUSH rule\n";
> +                my $crush_rule = $pool_properties->{crush_rule};
> +                eval { PVE::Ceph::Tools::destroy_crush_rule($crush_rule, $rados); };
> +                warn "removing crush rule '${crush_rule}' failed: $@\n" if $@;
> +
> +                if ($param->{remove_ecprofile} // 1) {
> +                    print "destroying erasure coded profile '$ecprofile'\n";
> +                    eval { PVE::Ceph::Tools::destroy_ecprofile($ecprofile, $rados) };
> +                    warn "removing EC profile '${ecprofile}' failed: $@\n" if $@;
> +                }
> +            }
> +
> +            if ($param->{remove_storages}) {
> +                my $err;
> +                foreach my $storeid (keys %$storages) {
> +                    # skip external clusters, not managed by pveceph
> +                    next if $storages->{$storeid}->{monhost};
> +                    eval { PVE::API2::Storage::Config->delete({storage => $storeid}) };
> +                    if ($@) {
> +                        warn "failed to remove storage '$storeid': $@\n";
> +                        $err = 1;
> +                    }
> +                }
> +                die "failed to remove (some) storages - check log and remove manually!\n"
> +                    if $err;
> +            }
> +        };
> +        return $rpcenv->fork_worker('cephdestroypool', $pool, $user, $worker);
> +    }});
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'setpool',
> +    path => '{name}',
> +    method => 'PUT',
> +    description => "Change POOL settings",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +        check => ['perm', '/', [ 'Sys.Modify' ]],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +        properties => {
> +            node => get_standard_option('pve-node'),
> +            %{ $ceph_pool_common_options->('nodefault') },
> +        },
> +    },
> +    returns => { type => 'string' },
> +    code => sub {
> +        my ($param) = @_;
> +
> +        PVE::Ceph::Tools::check_ceph_configured();
> +
> +        my $rpcenv = PVE::RPCEnvironment::get();
> +        my $authuser = $rpcenv->get_user();
> +
> +        my $pool = extract_param($param, 'name');
> +        my $node = extract_param($param, 'node');
> +
> +        # Ceph uses target_size_bytes
> +        if (defined($param->{'target_size'})) {
> +            my $target_sizestr = extract_param($param, 'target_size');
> +            $param->{target_size_bytes} = PVE::JSONSchema::parse_size($target_sizestr);
> +        }
> +
> +        my $worker = sub {
> +            PVE::Ceph::Tools::set_pool($pool, $param);
> +        };
> +
> +        return $rpcenv->fork_worker('cephsetpool', $pool, $authuser, $worker);
> +    }});
> +
> +__PACKAGE__->register_method ({
> +    name => 'poolindex',
> +    path => '{name}',
> +    method => 'GET',
> +    permissions => {
> +        check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
> +    },
> +    description => "Pool index.",
> +    parameters => {
> +        additionalProperties => 0,
> +        properties => {
> +            node => get_standard_option('pve-node'),
> +            name => {
> +                description => 'The name of the pool.',
> +                type => 'string',
> +            },
> +        },
> +    },
> +    returns => {
> +        type => 'array',
> +        items => {
> +            type => "object",
> +            properties => {},
> +        },
> +        links => [ { rel => 'child', href => "{name}" } ],
> +    },
> +    code => sub {
> +        my ($param) = @_;
> +
> +        my $result = [
> +            { name => 'status' },
> +        ];
> +
> +        return $result;
> +    }});
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'getpool',
> +    path => '{name}/status',
> +    method => 'GET',
> +    description => "List pool settings.",

whereas this actually returns *much more* than just the settings, and is
therefore rightly named "status", so maybe the description should also make
that clear ;)

> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +        check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +        properties => {
> +            node => get_standard_option('pve-node'),
> +            name => {
> +                description => "The name of the pool. It must be unique.",
> +                type => 'string',
> +            },
> +            verbose => {
> +                type => 'boolean',
> +                default => 0,
> +                optional => 1,
> +                description => "If enabled, will display additional data "
> +                    ."(e.g. statistics).",
> +            },
> +        },
> +    },
> +    returns => {
> +        type => "object",
> +        properties => {
> +            id                     => { type => 'integer', title => 'ID' },
> +            pgp_num                => { type => 'integer', title => 'PGP num' },
> +            noscrub                => { type => 'boolean', title => 'noscrub' },
> +            'nodeep-scrub'         => { type => 'boolean', title => 'nodeep-scrub' },
> +            nodelete               => { type => 'boolean', title => 'nodelete' },
> +            nopgchange             => { type => 'boolean', title => 'nopgchange' },
> +            nosizechange           => { type => 'boolean', title => 'nosizechange' },
> +            write_fadvise_dontneed => { type => 'boolean', title => 'write_fadvise_dontneed' },
> +            hashpspool             => { type => 'boolean', title => 'hashpspool' },
> +            use_gmt_hitset         => { type => 'boolean', title => 'use_gmt_hitset' },
> +            fast_read              => { type => 'boolean', title => 'Fast Read' },
> +            application_list       => { type => 'array', title => 'Application', optional => 1 },
> +            statistics             => { type => 'object', title => 'Statistics', optional => 1 },
> +            autoscale_status       => { type => 'object', title => 'Autoscale Status', optional => 1 },
> +            %{ $ceph_pool_common_options->() },
> +        },
> +    },
> +    code => sub {
> +        my ($param) = @_;
> +
> +        PVE::Ceph::Tools::check_ceph_inited();
> +
> +        my $verbose = $param->{verbose};
> +        my $pool = $param->{name};
> +
> +        my $rados = PVE::RADOS->new();
> +        my $res = $rados->mon_command({
> +            prefix => 'osd pool get',
> +            pool => "$pool",
> +            var => 'all',
> +        });
> +
> +        my $data = {
> +            id                     => $res->{pool_id},
> +            name                   => $pool,
> +            size                   => $res->{size},
> +            min_size               => $res->{min_size},
> +            pg_num                 => $res->{pg_num},
> +            pg_num_min             => $res->{pg_num_min},
> +            pgp_num                => $res->{pgp_num},
> +            crush_rule             => $res->{crush_rule},
> +            pg_autoscale_mode      => $res->{pg_autoscale_mode},
> +            noscrub                => "$res->{noscrub}",
> +            'nodeep-scrub'         => "$res->{'nodeep-scrub'}",
> +            nodelete               => "$res->{nodelete}",
> +            nopgchange             => "$res->{nopgchange}",
> +            nosizechange           => "$res->{nosizechange}",
> +            write_fadvise_dontneed => "$res->{write_fadvise_dontneed}",
> +            hashpspool             => "$res->{hashpspool}",
> +            use_gmt_hitset         => "$res->{use_gmt_hitset}",
> +            fast_read              => "$res->{fast_read}",
> +            target_size            => $res->{target_size_bytes},
> +            target_size_ratio      => $res->{target_size_ratio},
> +        };
> +
> +        if ($verbose) {
> +            my $stats;
> +            my $res = $rados->mon_command({ prefix => 'df' });
> +
> +            # pg_autoscaler module is not enabled in Nautilus
> +            # avoid partial read further down, use new rados instance
> +            my $autoscale_status = eval { $get_autoscale_status->() };
> +            $data->{autoscale_status} = $autoscale_status->{$pool};
> +
> +            foreach my $d (@{$res->{pools}}) {
> +                next if !$d->{stats};
> +                next if !defined($d->{name}) && !$d->{name} ne "$pool";
> +                $data->{statistics} = $d->{stats};
> +            }
> +
> +            my $apps = $rados->mon_command({ prefix => "osd pool application get", pool => "$pool", });
> +            $data->{application_list} = [ keys %$apps ];
> +        }
> +
> +        return $data;
> +    }});
> +
> +
> +1;
> diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
> index fce56787..ffae73b9 100644
> --- a/PVE/API2/Ceph/Pools.pm
> +++ b/PVE/API2/Ceph/Pools.pm
> @@ -1,4 +1,5 @@
>  package PVE::API2::Ceph::Pools;
> +# TODO: Deprecated, drop with PVE 8.0! PVE::API2::Ceph::Pool is the replacement
>
>  use strict;
>  use warnings;
> @@ -37,7 +38,7 @@ __PACKAGE__->register_method ({
>      name => 'lspools',
>      path => '',
>      method => 'GET',
> -    description => "List all pools.",
> +    description => "List all pools. Deprecated, please use `/nodes/{node}/ceph/pool`.",
>      proxyto => 'node',
>      protected => 1,
>      permissions => {
> @@ -393,7 +394,7 @@ __PACKAGE__->register_method ({
>      name => 'createpool',
>      path => '',
>      method => 'POST',
> -    description => "Create Ceph pool",
> +    description => "Create Ceph pool. Deprecated, please use `/nodes/{node}/ceph/pool`.",
>      proxyto => 'node',
>      protected => 1,
>      permissions => {
> @@ -509,7 +510,7 @@ __PACKAGE__->register_method ({
>      name => 'destroypool',
>      path => '{name}',
>      method => 'DELETE',
> -    description => "Destroy pool",
> +    description => "Destroy pool. Deprecated, please use `/nodes/{node}/ceph/pool/{name}`.",
>      proxyto => 'node',
>      protected => 1,
>      permissions => {
> @@ -615,7 +616,7 @@ __PACKAGE__->register_method ({
>      name => 'setpool',
>      path => '{name}',
>      method => 'PUT',
> -    description => "Change POOL settings",
> +    description => "Change POOL settings. Deprecated, please use `/nodes/{node}/ceph/pool/{name}`.",
>      proxyto => 'node',
>      protected => 1,
>      permissions => {
> @@ -658,7 +659,7 @@ __PACKAGE__->register_method ({
>      name => 'getpool',
>      path => '{name}',
>      method => 'GET',
> -    description => "List pool settings.",
> +    description => "List pool settings. Deprecated, please use `/nodes/{node}/ceph/pool/{pool}/status`.",
>      proxyto => 'node',
>      protected => 1,
>      permissions => {
> --
> 2.30.2
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel