From: Aaron Lauterer <a.lauterer@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH manager 4/4] ceph pools: allow to create erasure code pools
Date: Fri,  8 Apr 2022 12:14:16 +0200	[thread overview]
Message-ID: <20220408101416.165312-5-a.lauterer@proxmox.com> (raw)
In-Reply-To: <20220408101416.165312-1-a.lauterer@proxmox.com>

When using erasure coded (EC) pools for RBD storages, the main use case
of this patch, we need a replicated pool that will hold the RBD omap and
other metadata, as EC pools cannot store omap data. The EC pool itself
will only hold the data objects.

The coupling happens when an RBD image is created by adding the
--data-pool parameter. This is why we have the 'data-pool' parameter in
the storage configuration.
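
As a rough illustration (storage ID and pool names made up for this
example), the resulting storage.cfg entry for such a setup would then
look something like the following, with 'data-pool' pointing at the EC
pool:

    rbd: ec-rbd
        pool foo-metadata
        data-pool foo-data
        content rootdir,images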

To follow the already established semantics, once the 'ecprofile'
parameter is provided, we create a '<name>-metadata' and a '<name>-data'
pool. The storage configuration is always added in that case, as it is
the only thing that links the two pools together (besides the naming
scheme).

A lower pg_num default (32) is chosen for the replicated metadata pool,
as it will not hold a lot of data.
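
For illustration, with a pool name 'foo' and an EC profile 'myprofile'
(both made up), the two pool creations then roughly boil down to the
following mon commands, sketched here directly via PVE::RADOS; the
actual calls go through PVE::Ceph::Tools::create_pool() as in the diff
below:

    use PVE::RADOS;

    my $rados = PVE::RADOS->new();

    # replicated metadata pool with the lowered pg_num default of 32
    $rados->mon_command({
        prefix => "osd pool create",
        pool => 'foo-metadata',
        pg_num => 32,
        format => 'plain',
    });

    # erasure coded data pool with the regular pg_num default of 128
    $rados->mon_command({
        prefix => "osd pool create",
        pool => 'foo-data',
        pg_num => 128,
        pool_type => 'erasure',
        erasure_code_profile => 'myprofile',
        format => 'plain',
    });

    # remaining params (application, allow_ec_overwrites, ...) are
    # applied via set_pool() afterwards, as before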

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
At first I thought that we should add another API endpoint just to
create EC pools, but that brings the problem that we would need a new
(sub)path for the new POST endpoint.

Since we do not actually change that much in the existing endpoint to
support EC pools, I went with extending it for now. We do need to copy
over the pool params for the EC pool and adjust the defaults a bit for
the metadata and data pools.


 PVE/API2/Ceph/Pools.pm | 46 ++++++++++++++++++++++++++++++++++++++----
 PVE/Ceph/Tools.pm      | 11 +++++++---
 2 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 05855e15..1a6a346b 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -280,7 +280,7 @@ my $ceph_pool_common_options = sub {
 
 
 my $add_storage = sub {
-    my ($pool, $storeid) = @_;
+    my ($pool, $storeid, $data_pool) = @_;
 
     my $storage_params = {
 	type => 'rbd',
@@ -290,6 +290,8 @@ my $add_storage = sub {
 	content => 'rootdir,images',
     };
 
+    $storage_params->{'data-pool'} = $data_pool if $data_pool;
+
     PVE::API2::Storage::Config->create($storage_params);
 };
 
@@ -334,6 +336,13 @@ __PACKAGE__->register_method ({
 		type => 'boolean',
 		optional => 1,
 	    },
+	    ecprofile => {
+		description => "Erasure code profile to use. This will create a replicated ".
+			       "metadata pool, an erasure coded data pool and the storage ".
+			       "configuration.",
+		type => 'string',
+		optional => 1,
+	    },
 	    %{ $ceph_pool_common_options->() },
 	},
     },
@@ -344,10 +353,17 @@ __PACKAGE__->register_method ({
 	PVE::Cluster::check_cfs_quorum();
 	PVE::Ceph::Tools::check_ceph_configured();
 
-	my $pool = extract_param($param, 'name');
+	my $name = extract_param($param, 'name');
+	my $pool = $name;
 	my $node = extract_param($param, 'node');
 	my $add_storages = extract_param($param, 'add_storages');
 
+	my $ecprofile = extract_param($param, 'ecprofile');
+	die "Erasure code profile '$ecprofile' does not exist.\n"
+	    if $ecprofile && !PVE::Ceph::Tools::ecprofile_exists($ecprofile);
+
+	$add_storages = 1 if $ecprofile;
+
 	my $rpcenv = PVE::RPCEnvironment::get();
 	my $user = $rpcenv->get_user();
 
@@ -370,13 +386,35 @@ __PACKAGE__->register_method ({
 	$param->{application} //= 'rbd';
 	$param->{pg_autoscale_mode} //= 'warn';
 
+	my $data_param = {};
+	my $data_pool = '';
+
+	if ($ecprofile) {
+	    # copy all params, should be a flat hash
+	    $data_param = { map { $_ => $param->{$_} } keys %$param };
+
+	    $data_param->{pool_type} = 'erasure';
+	    $data_param->{allow_ec_overwrites} = 'true';
+	    $data_param->{erasure_code_profile} = $ecprofile;
+	    delete $data_param->{size};
+	    delete $data_param->{min_size};
+
+	    # metadata pool should be ok with 32 PGs
+	    $param->{pg_num} = 32;
+
+	    $pool = "${name}-metadata";
+	    $data_pool = "${name}-data";
+	}
+
 	my $worker = sub {
 
 	    PVE::Ceph::Tools::create_pool($pool, $param);
 
+	    PVE::Ceph::Tools::create_pool($data_pool, $data_param) if $ecprofile;
+
 	    if ($add_storages) {
-		eval { $add_storage->($pool, "${pool}") };
-		die "adding PVE storage for ceph pool '$pool' failed: $@\n" if $@;
+		eval { $add_storage->($pool, "${name}", $data_pool) };
+		die "adding PVE storage for ceph pool '$name' failed: $@\n" if $@;
 	    }
 	};
 
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 91aa6ce5..18051e06 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -8,7 +8,7 @@ use File::Basename;
 use IO::File;
 use JSON;
 
-use PVE::Tools qw(run_command dir_glob_foreach);
+use PVE::Tools qw(run_command dir_glob_foreach extract_param);
 use PVE::Cluster qw(cfs_read_file);
 use PVE::RADOS;
 use PVE::Ceph::Services;
@@ -264,12 +264,17 @@ sub create_pool {
 
     my $pg_num = $param->{pg_num} || 128;
 
-    $rados->mon_command({
+    my $mon_params = {
 	prefix => "osd pool create",
 	pool => $pool,
 	pg_num => int($pg_num),
 	format => 'plain',
-    });
+    };
+    $mon_params->{pool_type} = extract_param($param, 'pool_type') if $param->{pool_type};
+    $mon_params->{erasure_code_profile} = extract_param($param, 'erasure_code_profile')
+	if $param->{erasure_code_profile};
+
+    $rados->mon_command($mon_params);
 
     set_pool($pool, $param);
 
-- 
2.30.2

Thread overview: 9+ messages
2022-04-08 10:14 [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
2022-04-08 10:14 ` [pve-devel] [PATCH manager 1/4] api: ceph: $get_storages check if data-pool too Aaron Lauterer
2022-04-08 10:14 ` [pve-devel] [RFC manager 2/4] pveceph: add management for erasure code rules Aaron Lauterer
2022-04-27 13:32   ` Dominik Csapak
2022-04-08 10:14 ` [pve-devel] [RFC manager 3/4] ceph tools: add check if erasure code profile exists Aaron Lauterer
2022-04-08 10:14 ` Aaron Lauterer [this message]
2022-04-27 13:32   ` [pve-devel] [PATCH manager 4/4] ceph pools: allow to create erasure code pools Dominik Csapak
2022-04-08 11:13 ` [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
2022-04-27 13:37 ` Dominik Csapak
