public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support
@ 2022-04-08 10:14 Aaron Lauterer
  2022-04-08 10:14 ` [pve-devel] [PATCH manager 1/4] api: ceph: $get_storages check if data-pool too Aaron Lauterer
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Aaron Lauterer @ 2022-04-08 10:14 UTC (permalink / raw)
  To: pve-devel

This RFC series adds basic support to create erasure coded (EC) pools
with the PVE tooling.

We first need to manage EC profiles, as they are the essential part when
someone wants to use EC pools: they define how the data is split up and
how much coding/parity one wants.
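
As a purely illustrative aside (not part of this series, function name made
up), the trade-off that 'k' and 'm' encode can be sketched in a few lines of
Python:

```python
def ec_profile_stats(k, m):
    """Return (total_chunks, usable_fraction) for a k+m EC profile."""
    total = k + m            # every object is split into k data + m coding chunks
    return total, k / total  # only k of the k+m chunks hold actual data

# e.g. k=4, m=2: six chunks per object, 2/3 of the raw capacity is usable,
# and the pool tolerates the loss of any m=2 chunks.
```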

The actual creation of the EC pools follows the same approach we use for
CephFS pools: one replicated metadata pool and one erasure coded data
pool. More details in the actual patches.

The first patch is one that we should have added when we added basic
support for ec pools [0].

I am sending this as an RFC mainly to get some feedback, especially
regarding the CLI interface for the profile management, and whether the
approach of creating EC pools by adding an optional 'ecprofile' parameter
to 'pool create' is one we are okay with from an interface POV.

More details can be found in the individual patches.

[0] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=ef2afce74aba01f2ab698a5477f5e396fa4d3725

Aaron Lauterer (4):
  api: ceph: $get_storages check if data-pool too
  pveceph: add management for erasure code rules
  ceph tools: add check if erasure code profile exists
  ceph pools: allow to create erasure code pools

 PVE/API2/Ceph.pm            |   6 +
 PVE/API2/Ceph/ECProfiles.pm | 249 ++++++++++++++++++++++++++++++++++++
 PVE/API2/Ceph/Makefile      |   1 +
 PVE/API2/Ceph/Pools.pm      |  55 +++++++-
 PVE/CLI/pveceph.pm          |  12 ++
 PVE/Ceph/Tools.pm           |  21 ++-
 6 files changed, 335 insertions(+), 9 deletions(-)
 create mode 100644 PVE/API2/Ceph/ECProfiles.pm

-- 
2.30.2





^ permalink raw reply	[flat|nested] 9+ messages in thread

* [pve-devel] [PATCH manager 1/4] api: ceph: $get_storages check if data-pool too
  2022-04-08 10:14 [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
@ 2022-04-08 10:14 ` Aaron Lauterer
  2022-04-08 10:14 ` [pve-devel] [RFC manager 2/4] pveceph: add management for erasure code rules Aaron Lauterer
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Aaron Lauterer @ 2022-04-08 10:14 UTC (permalink / raw)
  To: pve-devel

When removing a pool, we check against any storage that might have that
pool configured. We also need to check whether that pool is used as a
'data-pool'.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
This check should have been added when we added basic support for EC
pools on the storage side [0].

[0] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=ef2afce74aba01f2ab698a5477f5e396fa4d3725
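
For illustration, the matching logic this patch implements could be sketched
in Python like this (hypothetical names; the real implementation is the Perl
hunk below):

```python
# An RBD storage references a Ceph pool either via its 'pool' setting or,
# since EC support was added, via its optional 'data-pool' setting.
def storages_using_pool(storages, pool):
    res = {}
    for storeid, cfg in storages.items():
        if cfg.get("type") != "rbd":
            continue  # only RBD storages reference Ceph pools here
        if pool in (cfg.get("pool"), cfg.get("data-pool")):
            res[storeid] = cfg
    return res
```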

 PVE/API2/Ceph/Pools.pm | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 002f7893..05855e15 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -302,8 +302,13 @@ my $get_storages = sub {
     my $res = {};
     foreach my $storeid (keys %$storages) {
 	my $curr = $storages->{$storeid};
-	$res->{$storeid} = $storages->{$storeid}
-	    if $curr->{type} eq 'rbd' && $pool eq $curr->{pool};
+	next if $curr->{type} ne 'rbd';
+	if (
+	    $pool eq $curr->{pool} ||
+	    (defined $curr->{'data-pool'} && $pool eq $curr->{'data-pool'})
+	) {
+	    $res->{$storeid} = $storages->{$storeid};
+	}
     }
 
     return $res;
-- 
2.30.2






* [pve-devel] [RFC manager 2/4] pveceph: add management for erasure code rules
  2022-04-08 10:14 [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
  2022-04-08 10:14 ` [pve-devel] [PATCH manager 1/4] api: ceph: $get_storages check if data-pool too Aaron Lauterer
@ 2022-04-08 10:14 ` Aaron Lauterer
  2022-04-27 13:32   ` Dominik Csapak
  2022-04-08 10:14 ` [pve-devel] [RFC manager 3/4] ceph tools: add check if erasure code profile exists Aaron Lauterer
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: Aaron Lauterer @ 2022-04-08 10:14 UTC (permalink / raw)
  To: pve-devel

Allow setting 'k' and 'm' values and optionally the device class and
failure domain.
Implemented in a way that also exposes the functionality via the API.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/API2/Ceph.pm            |   6 +
 PVE/API2/Ceph/ECProfiles.pm | 249 ++++++++++++++++++++++++++++++++++++
 PVE/API2/Ceph/Makefile      |   1 +
 PVE/CLI/pveceph.pm          |  12 ++
 4 files changed, 268 insertions(+)
 create mode 100644 PVE/API2/Ceph/ECProfiles.pm

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 3bbcfe4c..06bd2e2e 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -24,6 +24,7 @@ use PVE::API2::Ceph::MDS;
 use PVE::API2::Ceph::MGR;
 use PVE::API2::Ceph::MON;
 use PVE::API2::Ceph::Pools;
+use PVE::API2::Ceph::ECProfiles;
 use PVE::API2::Storage::Config;
 
 use base qw(PVE::RESTHandler);
@@ -60,6 +61,11 @@ __PACKAGE__->register_method ({
     path => 'pools',
 });
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Ceph::ECProfiles",
+    path => 'ecprofiles',
+});
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
diff --git a/PVE/API2/Ceph/ECProfiles.pm b/PVE/API2/Ceph/ECProfiles.pm
new file mode 100644
index 00000000..f7c51845
--- /dev/null
+++ b/PVE/API2/Ceph/ECProfiles.pm
@@ -0,0 +1,249 @@
+package PVE::API2::Ceph::ECProfiles;
+
+use strict;
+use warnings;
+
+use PVE::Ceph::Tools;
+use PVE::Ceph::Services;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::RADOS;
+use PVE::RESTHandler;
+use PVE::RPCEnvironment;
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+    name => 'lsecprofile',
+    path => '',
+    method => 'GET',
+    description => "List erasure coding (EC) profiles",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	},
+    },
+    returns => {
+	type => 'array',
+	items => {
+	    type => "object",
+	    properties => {
+		name => {
+		    type => 'string',
+		    title => 'Name',
+		},
+	    },
+	},
+    },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_configured();
+
+	my $rados = PVE::RADOS->new();
+	my $profiles = $rados->mon_command({ prefix => 'osd erasure-code-profile ls' });
+	my $res = [];
+	foreach my $profile (@$profiles) {
+	    push @$res, { name => $profile };
+	}
+	return $res;
+    }});
+
+
+__PACKAGE__->register_method ({
+    name => 'getecprofile',
+    path => '{name}',
+    method => 'GET',
+    description => "Get erasure coding (EC) profile details",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    name => {
+		type => 'string',
+		description => "The name of the erasure code profile.",
+	    },
+	},
+    },
+    returns => {
+	type => 'object',
+	properties => {
+	    name			    => { type => 'string', title => 'Name' },
+	    m 				    => { type => 'integer', title => 'm' },
+	    k 				    => { type => 'integer', title => 'k' },
+	    plugin			    => { type => 'string', title => 'plugin' },
+	    technique			    => { type => 'string', title => 'Technique' },
+	    w				    => { type => 'integer', title => 'w', optional => 1 },
+	    'crush-root'		    => {
+		type => 'string',
+		title => 'Crush root',
+		optional => 1,
+	    },
+	    'crush-device-class'	    => {
+		type => 'string',
+		title => 'Device Class',
+		optional => 1,
+	    },
+	    'crush-failure-domain'	    => {
+		type => 'string',
+		title => 'Failure Domain',
+		optional => 1,
+	    },
+	    'jerasure-per-chunk-alignment'  => {
+		type => 'string',
+		title => 'jerasure-per-chunk-alignment',
+		optional => 1,
+	    },
+	},
+    },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_configured();
+
+	my $rados = PVE::RADOS->new();
+	my $res = $rados->mon_command({
+		prefix => 'osd erasure-code-profile get',
+		name => $param->{name},
+	    });
+
+	my $data = {
+	    name			    => $param->{name},
+	    'crush-root'		    => $res->{'crush-root'},
+	    w				    => $res->{w},
+	    m 				    => $res->{m},
+	    k 				    => $res->{k},
+	    'crush-device-class'	    => $res->{'crush-device-class'},
+	    'crush-failure-domain'	    => $res->{'crush-failure-domain'},
+	    plugin			    => $res->{'plugin'},
+	    'jerasure-per-chunk-alignment'  => $res->{'jerasure-per-chunk-alignment'},
+	    technique			    => $res->{'technique'},
+	};
+	return $data;
+    }});
+
+
+__PACKAGE__->register_method ({
+    name => 'createecprofile',
+    path => '',
+    method => 'POST',
+    description => "Create erasure code profile",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Modify' ]],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    name => {
+		description => 'The name of the erasure code profile. Must be unique.',
+		type => 'string',
+	    },
+	    k => {
+		type => 'integer',
+		description => 'Number of data chunks.',
+	    },
+	    m => {
+		type => 'integer',
+		description => 'Number of coding chunks.',
+	    },
+	    'failure-domain' => {
+		type => 'string',
+		optional => 1,
+		description => "CRUSH failure domain. Default is 'host'",
+	    },
+	    'device-class' => {
+		type => 'string',
+		optional => 1,
+		description => "CRUSH device class.",
+	    },
+	},
+    },
+    returns => { type => 'string' },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_configured();
+
+	my $failuredomain = $param->{'failure-domain'} // 'host';
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $user = $rpcenv->get_user();
+
+	my $profile = [
+	    "crush-failure-domain=${failuredomain}",
+	    "k=$param->{k}",
+	    "m=$param->{m}",
+	];
+
+	push(@$profile, "crush-device-class=$param->{'device-class'}") if $param->{'device-class'};
+
+	my $worker = sub {
+	    my $rados = PVE::RADOS->new();
+	    $rados->mon_command({
+		prefix => 'osd erasure-code-profile set',
+		name => $param->{name},
+		profile => $profile,
+	    });
+	};
+
+	return $rpcenv->fork_worker('cephcreateecprofile', $param->{name}, $user, $worker);
+    }});
+
+
+__PACKAGE__->register_method ({
+    name => 'destroyecprofile',
+    path => '{name}',
+    method => 'DELETE',
+    description => "Destroy erasure code profile",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Modify' ]],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    name => {
+		description => 'The name of the erasure code profile.',
+		type => 'string',
+	    },
+	},
+    },
+    returns => { type => 'string' },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_configured();
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $user = $rpcenv->get_user();
+
+	my $worker = sub {
+	    my $rados = PVE::RADOS->new();
+	    $rados->mon_command({
+		prefix => 'osd erasure-code-profile rm',
+		name => $param->{name},
+		format => 'plain',
+	    });
+	};
+
+	return $rpcenv->fork_worker('cephdestroyecprofile', $param->{name}, $user, $worker);
+    }});
+
+
+1;
diff --git a/PVE/API2/Ceph/Makefile b/PVE/API2/Ceph/Makefile
index 45daafda..c0d8135a 100644
--- a/PVE/API2/Ceph/Makefile
+++ b/PVE/API2/Ceph/Makefile
@@ -6,6 +6,7 @@ PERLSOURCE= 			\
 	OSD.pm			\
 	FS.pm			\
 	Pools.pm		\
+	ECProfiles.pm		\
 	MDS.pm
 
 all:
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 995cfcd5..839df9a3 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -370,6 +370,18 @@ our $cmddef = {
 	    PVE::CLIFormatter::print_api_result($data, $schema, undef, $options);
 	}, $PVE::RESTHandler::standard_output_options],
     },
+    ecprofile => {
+	ls => ['PVE::API2::Ceph::ECProfiles', 'lsecprofile', [], {node => $nodename}, sub {
+	    my ($data, $schema, $options) = @_;
+	    PVE::CLIFormatter::print_api_result($data, $schema,[ 'name' ], $options);
+	}, $PVE::RESTHandler::standard_output_options],
+	create => ['PVE::API2::Ceph::ECProfiles', 'createecprofile', ['name', 'k', 'm'], { node => $nodename } ],
+	destroy => [ 'PVE::API2::Ceph::ECProfiles', 'destroyecprofile', ['name'], { node => $nodename } ],
+	get => [ 'PVE::API2::Ceph::ECProfiles', 'getecprofile', ['name'], { node => $nodename }, sub {
+	    my ($data, $schema, $options) = @_;
+	    PVE::CLIFormatter::print_api_result($data, $schema, undef, $options);
+	}, $PVE::RESTHandler::standard_output_options],
+    },
     lspools => { alias => 'pool ls' },
     createpool => { alias => 'pool create' },
     destroypool => { alias => 'pool destroy' },
-- 
2.30.2






* [pve-devel] [RFC manager 3/4] ceph tools: add check if erasure code profile exists
  2022-04-08 10:14 [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
  2022-04-08 10:14 ` [pve-devel] [PATCH manager 1/4] api: ceph: $get_storages check if data-pool too Aaron Lauterer
  2022-04-08 10:14 ` [pve-devel] [RFC manager 2/4] pveceph: add management for erasure code rules Aaron Lauterer
@ 2022-04-08 10:14 ` Aaron Lauterer
  2022-04-08 10:14 ` [pve-devel] [PATCH manager 4/4] ceph pools: allow to create erasure code pools Aaron Lauterer
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Aaron Lauterer @ 2022-04-08 10:14 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/Ceph/Tools.pm | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 36d7788a..91aa6ce5 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -531,4 +531,14 @@ sub ceph_cluster_status {
     return $status;
 }
 
+sub ecprofile_exists {
+    my ($name) = @_;
+
+    my $rados = PVE::RADOS->new();
+    my $res = $rados->mon_command({ prefix => 'osd erasure-code-profile ls' });
+
+    my $profiles = { map { $_ => 1 } @$res };
+    return $profiles->{$name};
+};
+
 1;
-- 
2.30.2






* [pve-devel] [PATCH manager 4/4] ceph pools: allow to create erasure code pools
  2022-04-08 10:14 [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
                   ` (2 preceding siblings ...)
  2022-04-08 10:14 ` [pve-devel] [RFC manager 3/4] ceph tools: add check if erasure code profile exists Aaron Lauterer
@ 2022-04-08 10:14 ` Aaron Lauterer
  2022-04-27 13:32   ` Dominik Csapak
  2022-04-08 11:13 ` [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
  2022-04-27 13:37 ` Dominik Csapak
  5 siblings, 1 reply; 9+ messages in thread
From: Aaron Lauterer @ 2022-04-08 10:14 UTC (permalink / raw)
  To: pve-devel

When using erasure coded pools for RBD storages, the main use case in
this patch, we need a replicated pool that will hold the RBD omap and
other metadata. The EC pool itself will only hold the data objects.

The coupling happens when an RBD image is created by adding the
--data-pool parameter. This is why we have the 'data-pool' parameter in
the storage configuration.

To follow already established semantics, once the 'ecprofile' parameter
is provided, we will create a 'X-metadata' and 'X-data' pool. The
storage configuration is always added as it is the only thing that links
the two together (besides naming schemes).
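
For illustration, assuming a pool named 'ecpool' (the name and the exact
option set shown are assumptions, not taken verbatim from the patch), the
result would look roughly like:

```
# pools created: ecpool-metadata (replicated), ecpool-data (erasure coded)
# sketch of the resulting /etc/pve/storage.cfg entry:
rbd: ecpool
	pool ecpool-metadata
	data-pool ecpool-data
	content rootdir,images
```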

Different pg_num defaults are chosen for the replicated metadata pool as
it will not hold a lot of data.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
At first I thought that we should add another API endpoint just to
create EC pools, but that brings with it the problem that we would need
a new (sub)path for the new POST endpoint.

Since we do not actually change that much in the existing one to support
EC pools, I went with that for now. We do need to copy over the pool
params for the EC pool and change the defaults a bit for the metadata
and data pool.
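
The parameter handling for the data pool can be sketched in Python
(hypothetical helper mirroring the Perl hash copy in the patch below):

```python
def make_data_pool_params(params, ecprofile):
    """Derive the EC data pool's creation params from the user's params."""
    data = dict(params)  # flat hash, so a shallow copy is enough
    data["pool_type"] = "erasure"
    data["allow_ec_overwrites"] = "true"
    data["erasure_code_profile"] = ecprofile
    # size/min_size follow from the EC profile (k+m / k), not from the user
    data.pop("size", None)
    data.pop("min_size", None)
    return data
```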


 PVE/API2/Ceph/Pools.pm | 46 ++++++++++++++++++++++++++++++++++++++----
 PVE/Ceph/Tools.pm      | 11 +++++++---
 2 files changed, 50 insertions(+), 7 deletions(-)

diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 05855e15..1a6a346b 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -280,7 +280,7 @@ my $ceph_pool_common_options = sub {
 
 
 my $add_storage = sub {
-    my ($pool, $storeid) = @_;
+    my ($pool, $storeid, $data_pool) = @_;
 
     my $storage_params = {
 	type => 'rbd',
@@ -290,6 +290,8 @@ my $add_storage = sub {
 	content => 'rootdir,images',
     };
 
+    $storage_params->{'data-pool'} = $data_pool if $data_pool;
+
     PVE::API2::Storage::Config->create($storage_params);
 };
 
@@ -334,6 +336,13 @@ __PACKAGE__->register_method ({
 		type => 'boolean',
 		optional => 1,
 	    },
+	    ecprofile => {
+		description => "Erasure code profile to use. This will create a replicated ".
+			       "metadata pool, an erasure coded data pool and the storage ".
+			       "configuration.",
+		type => 'string',
+		optional => 1,
+	    },
 	    %{ $ceph_pool_common_options->() },
 	},
     },
@@ -344,10 +353,17 @@ __PACKAGE__->register_method ({
 	PVE::Cluster::check_cfs_quorum();
 	PVE::Ceph::Tools::check_ceph_configured();
 
-	my $pool = extract_param($param, 'name');
+	my $name = extract_param($param, 'name');
+	my $pool = $name;
 	my $node = extract_param($param, 'node');
 	my $add_storages = extract_param($param, 'add_storages');
 
+	my $ecprofile = extract_param($param, 'ecprofile');
+	die "Erasure code profile '$ecprofile' does not exist.\n"
+	    if $ecprofile && !PVE::Ceph::Tools::ecprofile_exists($ecprofile);
+
+	$add_storages = 1 if $ecprofile;
+
 	my $rpcenv = PVE::RPCEnvironment::get();
 	my $user = $rpcenv->get_user();
 
@@ -370,13 +386,35 @@ __PACKAGE__->register_method ({
 	$param->{application} //= 'rbd';
 	$param->{pg_autoscale_mode} //= 'warn';
 
+	my $data_param = {};
+	my $data_pool = '';
+
+	if ($ecprofile) {
+	    # copy all params, should be a flat hash
+	    $data_param = { map { $_ => $param->{$_} } keys %$param };
+
+	    $data_param->{pool_type} = 'erasure';
+	    $data_param->{allow_ec_overwrites} = 'true';
+	    $data_param->{erasure_code_profile} = $ecprofile;
+	    delete $data_param->{size};
+	    delete $data_param->{min_size};
+
+	    # metadata pool should be ok with 32 PGs
+	    $param->{pg_num} = 32;
+
+	    $pool = "${name}-metadata";
+	    $data_pool = "${name}-data";
+	}
+
 	my $worker = sub {
 
 	    PVE::Ceph::Tools::create_pool($pool, $param);
 
+	    PVE::Ceph::Tools::create_pool($data_pool, $data_param) if $ecprofile;
+
 	    if ($add_storages) {
-		eval { $add_storage->($pool, "${pool}") };
-		die "adding PVE storage for ceph pool '$pool' failed: $@\n" if $@;
+		eval { $add_storage->($pool, "${name}", $data_pool) };
+		die "adding PVE storage for ceph pool '$name' failed: $@\n" if $@;
 	    }
 	};
 
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 91aa6ce5..18051e06 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -8,7 +8,7 @@ use File::Basename;
 use IO::File;
 use JSON;
 
-use PVE::Tools qw(run_command dir_glob_foreach);
+use PVE::Tools qw(run_command dir_glob_foreach extract_param);
 use PVE::Cluster qw(cfs_read_file);
 use PVE::RADOS;
 use PVE::Ceph::Services;
@@ -264,12 +264,17 @@ sub create_pool {
 
     my $pg_num = $param->{pg_num} || 128;
 
-    $rados->mon_command({
+    my $mon_params = {
 	prefix => "osd pool create",
 	pool => $pool,
 	pg_num => int($pg_num),
 	format => 'plain',
-    });
+    };
+    $mon_params->{pool_type} = extract_param($param, 'pool_type') if $param->{pool_type};
+    $mon_params->{erasure_code_profile} = extract_param($param, 'erasure_code_profile')
+	if $param->{erasure_code_profile};
+
+    $rados->mon_command($mon_params);
 
     set_pool($pool, $param);
 
-- 
2.30.2






* Re: [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support
  2022-04-08 10:14 [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
                   ` (3 preceding siblings ...)
  2022-04-08 10:14 ` [pve-devel] [PATCH manager 4/4] ceph pools: allow to create erasure code pools Aaron Lauterer
@ 2022-04-08 11:13 ` Aaron Lauterer
  2022-04-27 13:37 ` Dominik Csapak
  5 siblings, 0 replies; 9+ messages in thread
From: Aaron Lauterer @ 2022-04-08 11:13 UTC (permalink / raw)
  To: pve-devel

One thing I forgot to mention: since quite a few $rados->mon_command calls are introduced here, rebases will be needed once we apply the changes to librados2-perl, similar to this patch [0] of that other series.

[0] https://lists.proxmox.com/pipermail/pve-devel/2022-March/052290.html

On 4/8/22 12:14, Aaron Lauterer wrote:
> This RFC series adds basic support to create erasure coded (EC) pools
> with the PVE tooling.
> 
> We first need to manage EC profiles, as they are the essential part when
> someone wants to use EC pools: they define how the data is split up and
> how much coding/parity one wants.
> 
> The actual creation of the EC pools follows the same approach we use for
> CephFS pools: one replicated metadata pool and one erasure coded data
> pool. More details in the actual patches.
> 
> The first patch is one that we should have added when we added basic
> support for ec pools [0].
> 
> I am sending this as an RFC mainly to get some feedback, especially
> regarding the CLI interface for the profile management, and whether the
> approach of creating EC pools by adding an optional 'ecprofile' parameter
> to 'pool create' is one we are okay with from an interface POV.
> 
> More details can be found in the individual patches.
> 
> [0] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=ef2afce74aba01f2ab698a5477f5e396fa4d3725
> 
> Aaron Lauterer (4):
>    api: ceph: $get_storages check if data-pool too
>    pveceph: add management for erasure code rules
>    ceph tools: add check if erasure code profile exists
>    ceph pools: allow to create erasure code pools
> 
>   PVE/API2/Ceph.pm            |   6 +
>   PVE/API2/Ceph/ECProfiles.pm | 249 ++++++++++++++++++++++++++++++++++++
>   PVE/API2/Ceph/Makefile      |   1 +
>   PVE/API2/Ceph/Pools.pm      |  55 +++++++-
>   PVE/CLI/pveceph.pm          |  12 ++
>   PVE/Ceph/Tools.pm           |  21 ++-
>   6 files changed, 335 insertions(+), 9 deletions(-)
>   create mode 100644 PVE/API2/Ceph/ECProfiles.pm
> 





* Re: [pve-devel] [RFC manager 2/4] pveceph: add management for erasure code rules
  2022-04-08 10:14 ` [pve-devel] [RFC manager 2/4] pveceph: add management for erasure code rules Aaron Lauterer
@ 2022-04-27 13:32   ` Dominik Csapak
  0 siblings, 0 replies; 9+ messages in thread
From: Dominik Csapak @ 2022-04-27 13:32 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

some comments inline

On 4/8/22 12:14, Aaron Lauterer wrote:
> Allow setting 'k' and 'm' values and optionally the device class and
> failure domain.
> Implemented in a way that also exposes the functionality via the API.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>   PVE/API2/Ceph.pm            |   6 +
>   PVE/API2/Ceph/ECProfiles.pm | 249 ++++++++++++++++++++++++++++++++++++
>   PVE/API2/Ceph/Makefile      |   1 +
>   PVE/CLI/pveceph.pm          |  12 ++
>   4 files changed, 268 insertions(+)
>   create mode 100644 PVE/API2/Ceph/ECProfiles.pm
> 
> diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
> index 3bbcfe4c..06bd2e2e 100644
> --- a/PVE/API2/Ceph.pm
> +++ b/PVE/API2/Ceph.pm
> @@ -24,6 +24,7 @@ use PVE::API2::Ceph::MDS;
>   use PVE::API2::Ceph::MGR;
>   use PVE::API2::Ceph::MON;
>   use PVE::API2::Ceph::Pools;
> +use PVE::API2::Ceph::ECProfiles;
>   use PVE::API2::Storage::Config;
>   
>   use base qw(PVE::RESTHandler);
> @@ -60,6 +61,11 @@ __PACKAGE__->register_method ({
>       path => 'pools',
>   });
>   
> +__PACKAGE__->register_method ({
> +    subclass => "PVE::API2::Ceph::ECProfiles",
> +    path => 'ecprofiles',
> +});
> +
>   __PACKAGE__->register_method ({
>       name => 'index',
>       path => '',
> diff --git a/PVE/API2/Ceph/ECProfiles.pm b/PVE/API2/Ceph/ECProfiles.pm
> new file mode 100644
> index 00000000..f7c51845
> --- /dev/null
> +++ b/PVE/API2/Ceph/ECProfiles.pm
> @@ -0,0 +1,249 @@
> +package PVE::API2::Ceph::ECProfiles;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Ceph::Tools;
> +use PVE::Ceph::Services;
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::RADOS;
> +use PVE::RESTHandler;
> +use PVE::RPCEnvironment;
> +use PVE::Tools qw(extract_param);
> +
> +use base qw(PVE::RESTHandler);
> +
> +__PACKAGE__->register_method ({
> +    name => 'lsecprofile',
> +    path => '',
> +    method => 'GET',
> +    description => "List erasure coding (EC) profiles",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	},
> +    },
> +    returns => {
> +	type => 'array',
> +	items => {
> +	    type => "object",
> +	    properties => {
> +		name => {
> +		    type => 'string',
> +		    title => 'Name',
> +		},
> +	    },
> +	},
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	PVE::Ceph::Tools::check_ceph_configured();
> +
> +	my $rados = PVE::RADOS->new();
> +	my $profiles = $rados->mon_command({ prefix => 'osd erasure-code-profile ls' });
> +	my $res = [];
> +	foreach my $profile (@$profiles) {
> +	    push @$res, { name => $profile };
> +	}

would that not be a simple

map { { name => $_ } } @$profiles;

?

> +	return $res;
> +    }});
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'getecprofile',
> +    path => '{name}',
> +    method => 'GET',
> +	description => "Get erasure coding (EC) profile details",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    name => {
> +		type => 'string',
> +		description => "The name of the erasure code profile.",
> +	    },
> +	},
> +    },
> +    returns => {
> +	type => 'object',
> +	properties => {
> +	    name			    => { type => 'string', title => 'Name' },
> +	    m 				    => { type => 'integer', title => 'm' },
> +	    k 				    => { type => 'integer', title => 'k' },
> +	    plugin			    => { type => 'string', title => 'plugin' },
> +	    technique			    => { type => 'string', title => 'Technique' },
> +	    w				    => { type => 'integer', title => 'w', optional => 1 },
> +	    'crush-root'		    => {
> +		type => 'string',
> +		title => 'Crush root',
> +		optional => 1,
> +	    },
> +	    'crush-device-class'	    => {
> +		type => 'string',
> +		title => 'Device Class',
> +		optional => 1,
> +	    },
> +	    'crush-failure-domain'	    => {
> +		type => 'string',
> +		title => 'Failure Domain',
> +		optional => 1,
> +	    },
> +	    'jerasure-per-chunk-alignment'  => {
> +		type => 'string',
> +		title => 'jerasure-per-chunk-alignment',
> +		optional => 1,
> +	    },

nit: i am not really a fan of the indentation/aligning here... especially the mixing of the inlining

> +	},
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	PVE::Ceph::Tools::check_ceph_configured();
> +
> +	my $rados = PVE::RADOS->new();
> +	my $res = $rados->mon_command({
> +		prefix => 'osd erasure-code-profile get',
> +		name => $param->{name},
> +	    });
> +
> +	my $data = {
> +	    name			    => $param->{name},
> +	    'crush-root'		    => $res->{'crush-root'},
> +	    w				    => $res->{w},
> +	    m 				    => $res->{m},
> +	    k 				    => $res->{k},
> +	    'crush-device-class'	    => $res->{'crush-device-class'},
> +	    'crush-failure-domain'	    => $res->{'crush-failure-domain'},
> +	    plugin			    => $res->{'plugin'},
> +	    'jerasure-per-chunk-alignment'  => $res->{'jerasure-per-chunk-alignment'},
> +	    technique			    => $res->{'technique'},
> +	};

i don't think the aligning here makes it much more readable..
aside from that, why not simply use the $res?

is there more here that we may want?
if not, how about having a list of 'whitelisted' props and doing

for my $prop (qw(name crush-root w m k ...)) {
    $data->{$prop} = $res->{$prop};
}

we could even factor out the return schema and reuse that ?

for my $prop (keys %$return_schema) {
...
}

> +	return $data;
> +    }});
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'createecprofile',
> +    path => '',
> +    method => 'POST',
> +    description => "Create erasure code profile",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +	check => ['perm', '/', [ 'Sys.Modify' ]],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    name => {
> +		description => 'The name of the erasure code profile. Must be unique.',
> +		type => 'string',
> +	    },
> +	    k => {
> +		type => 'integer',
> +		description => 'Number of data chunks.',
> +	    },
> +	    m => {
> +		type => 'integer',
> +		description => 'Number of coding chunks.',
> +	    },
> +	    'failure-domain' => {
> +		type => 'string',
> +		optional => 1,
> +		description => "CRUSH failure domain. Default is 'host'",
> +	    },
> +	    'device-class' => {
> +		type => 'string',
> +		optional => 1,
> +		description => "CRUSH device class.",
> +	    },
> +	},
> +    },


i guess this is not complete because of the rfc?
there are no formats defined and no limits whatsoever (k/m should have a minimum i guess)
also the strings should be limited somewhat and the default (for failure-domain)
can be in the schema itself too

because of this i can create an ecprofile '0' that i cannot use ;)

> +    returns => { type => 'string' },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	PVE::Ceph::Tools::check_ceph_configured();
> +
> +	my $failuredomain = $param->{'failure-domain'} // 'host';
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $user = $rpcenv->get_user();
> +
> +	my $profile = [
> +	    "crush-failure-domain=${failuredomain}",
> +	    "k=$param->{k}",
> +	    "m=$param->{m}",
> +	];
> +
> +	push(@$profile, "crush-device-class=$param->{'device-class'}") if $param->{'device-class'};
> +
> +	my $worker = sub {
> +	    my $rados = PVE::RADOS->new();
> +	    $rados->mon_command({
> +		prefix => 'osd erasure-code-profile set',
> +		name => $param->{name},
> +		profile => $profile,
> +	    });
> +	};
> +
> +	return $rpcenv->fork_worker('cephcreateecprofile', $param->{name}, $user, $worker);
> +    }});
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'destroyecprofile',
> +    path => '{name}',
> +    method => 'DELETE',
> +    description => "Destroy erasure code profile",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +	check => ['perm', '/', [ 'Sys.Modify' ]],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    name => {
> +		description => 'The name of the erasure code profile.',
> +		type => 'string',
> +	    },
> +	},
> +    },
> +    returns => { type => 'string' },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	PVE::Ceph::Tools::check_ceph_configured();
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $user = $rpcenv->get_user();
> +
> +	my $worker = sub {
> +	    my $rados = PVE::RADOS->new();
> +	    $rados->mon_command({
> +		prefix => 'osd erasure-code-profile rm',
> +		name => $param->{name},
> +		format => 'plain',
> +	    });
> +	};

should there not be a check whether some pool is still using that profile?
or do we want to fall back to the ceph error (if there is one)?
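e.g. something like this could work (untested sketch; iirc the 'osd dump' JSON lists an 'erasure_code_profile' per pool, but that would need double-checking):

```perl
# sketch: refuse to remove a profile that a pool still references
my $osddump = $rados->mon_command({ prefix => 'osd dump', format => 'json' });
for my $pool (@{$osddump->{pools}}) {
    die "profile '$param->{name}' is still in use by pool '$pool->{pool_name}'\n"
	if ($pool->{erasure_code_profile} // '') eq $param->{name};
}
```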

> +
> +	return $rpcenv->fork_worker('cephdestroyecprofile', $param->{name}, $user, $worker);
> +    }});
> +
> +
> +1;
> diff --git a/PVE/API2/Ceph/Makefile b/PVE/API2/Ceph/Makefile
> index 45daafda..c0d8135a 100644
> --- a/PVE/API2/Ceph/Makefile
> +++ b/PVE/API2/Ceph/Makefile
> @@ -6,6 +6,7 @@ PERLSOURCE= 			\
>   	OSD.pm			\
>   	FS.pm			\
>   	Pools.pm		\
> +	ECProfiles.pm		\
>   	MDS.pm
>   
>   all:
> diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
> index 995cfcd5..839df9a3 100755
> --- a/PVE/CLI/pveceph.pm
> +++ b/PVE/CLI/pveceph.pm
> @@ -370,6 +370,18 @@ our $cmddef = {
>   	    PVE::CLIFormatter::print_api_result($data, $schema, undef, $options);
>   	}, $PVE::RESTHandler::standard_output_options],
>       },
> +    ecprofile => {
> +	ls => ['PVE::API2::Ceph::ECProfiles', 'lsecprofile', [], {node => $nodename}, sub {
> +	    my ($data, $schema, $options) = @_;
> +	    PVE::CLIFormatter::print_api_result($data, $schema,[ 'name' ], $options);
> +	}, $PVE::RESTHandler::standard_output_options],
> +	create => ['PVE::API2::Ceph::ECProfiles', 'createecprofile', ['name', 'k', 'm'], { node => $nodename } ],
> +	destroy => [ 'PVE::API2::Ceph::ECProfiles', 'destroyecprofile', ['name'], { node => $nodename } ],
> +	get => [ 'PVE::API2::Ceph::ECProfiles', 'getecprofile', ['name'], { node => $nodename }, sub {
> +	    my ($data, $schema, $options) = @_;
> +	    PVE::CLIFormatter::print_api_result($data, $schema, undef, $options);
> +	}, $PVE::RESTHandler::standard_output_options],
> +    },
>       lspools => { alias => 'pool ls' },
>       createpool => { alias => 'pool create' },
>       destroypool => { alias => 'pool destroy' },





^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [pve-devel] [PATCH manager 4/4] ceph pools: allow to create erasure code pools
  2022-04-08 10:14 ` [pve-devel] [PATCH manager 4/4] ceph pools: allow to create erasure code pools Aaron Lauterer
@ 2022-04-27 13:32   ` Dominik Csapak
  0 siblings, 0 replies; 9+ messages in thread
From: Dominik Csapak @ 2022-04-27 13:32 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

some comments inline

aside from those, i think there are still some parts missing.
in my tests, i always got an error that 'size' could not be set for
the ecpool when i wanted to edit it (e.g. toggling autoscale on/off),
so there seem to be some checks missing (can we know beforehand
whether a pool is an ecpool?)

On 4/8/22 12:14, Aaron Lauterer wrote:
> When using erasure coded pools for RBD storages, the main use case in
> this patch, we need a replicated pool that will hold the RBD omap and
> other metadata. The EC pool itself will only hold the data objects.
> 
> The coupling happens when an RBD image is created by adding the
> --data-pool parameter. This is why we have the 'data-pool' parameter in
> the storage configuration.
> 
> To follow already established semantics, once the 'ecprofile' parameter
> is provided, we will create a 'X-metadata' and 'X-data' pool. The
> storage configuration is always added as it is the only thing that links
> the two together (besides naming schemes).
> 
> Different pg_num defaults are chosen for the replicated metadata pool as
> it will not hold a lot of data.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> At first I thought that we should add another API endpoint just to create
> EC pools, but that then brings the problem with it, that we need a new
> (sub)path for the new POST endpoint.
> 
> Since we do not actually change that much in the existing one to support
> ec pools, I went for that now. We do need to copy over the pool params
> for the ec pool and change defaults a bit for the meta and data pool.
> 
> 
>   PVE/API2/Ceph/Pools.pm | 46 ++++++++++++++++++++++++++++++++++++++----
>   PVE/Ceph/Tools.pm      | 11 +++++++---
>   2 files changed, 50 insertions(+), 7 deletions(-)
> 
> diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
> index 05855e15..1a6a346b 100644
> --- a/PVE/API2/Ceph/Pools.pm
> +++ b/PVE/API2/Ceph/Pools.pm
> @@ -280,7 +280,7 @@ my $ceph_pool_common_options = sub {
>   
>   
>   my $add_storage = sub {
> -    my ($pool, $storeid) = @_;
> +    my ($pool, $storeid, $data_pool) = @_;
>   
>       my $storage_params = {
>   	type => 'rbd',
> @@ -290,6 +290,8 @@ my $add_storage = sub {
>   	content => 'rootdir,images',
>       };
>   
> +    $storage_params->{'data-pool'} = $data_pool if $data_pool;
> +
>       PVE::API2::Storage::Config->create($storage_params);
>   };
>   
> @@ -334,6 +336,13 @@ __PACKAGE__->register_method ({
>   		type => 'boolean',
>   		optional => 1,
>   	    },
> +	    ecprofile => {
> +		description => "Erasure code profile to use. This will create a replicated ".
> +			       "metadata pool, an erasure coded data pool and the storage ".
> +			       "configuration.",
> +		type => 'string',
> +		optional => 1,
> +	    },
>   	    %{ $ceph_pool_common_options->() },
>   	},
>       },
> @@ -344,10 +353,17 @@ __PACKAGE__->register_method ({
>   	PVE::Cluster::check_cfs_quorum();
>   	PVE::Ceph::Tools::check_ceph_configured();
>   
> -	my $pool = extract_param($param, 'name');
> +	my $name = extract_param($param, 'name');
> +	my $pool = $name;

nit: this could be done in one line:

my $pool = my $name = extract_param();

>   	my $node = extract_param($param, 'node');
>   	my $add_storages = extract_param($param, 'add_storages');
>   
> +	my $ecprofile = extract_param($param, 'ecprofile');
> +	die "Erasure code profile '$ecprofile' does not exist.\n"
> +	    if $ecprofile && !PVE::Ceph::Tools::ecprofile_exists($ecprofile);
> +
> +	$add_storages = 1 if $ecprofile;
> +

does it really make sense to override what the user (maybe) wanted?
i get that we cannot add such pools via the gui yet, but
we can also not add ec pools via the gui for now,
so the user should be able to do a 'pvesm set' later if they just wanted to create an ecpool...

if we really want to do this, i'd update the description of 'add_storages' too
to reflect that
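if we keep the override, maybe at least reject an explicit add_storages=0 instead of silently ignoring it, e.g. (sketch only, assumes raise_param_exc from PVE::Exception is imported):

```perl
# sketch: reject conflicting input instead of silently overriding it
raise_param_exc({ add_storages => "cannot be disabled for erasure coded pools" })
    if $ecprofile && defined($add_storages) && !$add_storages;
$add_storages //= 1 if $ecprofile;
```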

>   	my $rpcenv = PVE::RPCEnvironment::get();
>   	my $user = $rpcenv->get_user();
>   
> @@ -370,13 +386,35 @@ __PACKAGE__->register_method ({
>   	$param->{application} //= 'rbd';
>   	$param->{pg_autoscale_mode} //= 'warn';
>   
> +	my $data_param = {};
> +	my $data_pool = '';
> +
> +	if ($ecprofile) {
> +	    # copy all params, should be a flat hash
> +	    $data_param = { map { $_ => $param->{$_} } keys %$param };
> +
> +	    $data_param->{pool_type} = 'erasure';
> +	    $data_param->{allow_ec_overwrites} = 'true';
> +	    $data_param->{erasure_code_profile} = $ecprofile;
> +	    delete $data_param->{size};
> +	    delete $data_param->{min_size};
> +
> +	    # metadata pool should be ok with 32 PGs
> +	    $param->{pg_num} = 32;
> +
> +	    $pool = "${name}-metadata";
> +	    $data_pool = "${name}-data";
> +	}
> +
>   	my $worker = sub {
>   
>   	    PVE::Ceph::Tools::create_pool($pool, $param);
>   
> +	    PVE::Ceph::Tools::create_pool($data_pool, $data_param) if $ecprofile;
> +
>   	    if ($add_storages) {
> -		eval { $add_storage->($pool, "${pool}") };
> -		die "adding PVE storage for ceph pool '$pool' failed: $@\n" if $@;
> +		eval { $add_storage->($pool, "${name}", $data_pool) };
> +		die "adding PVE storage for ceph pool '$name' failed: $@\n" if $@;
>   	    }
>   	};
>   
> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
> index 91aa6ce5..18051e06 100644
> --- a/PVE/Ceph/Tools.pm
> +++ b/PVE/Ceph/Tools.pm
> @@ -8,7 +8,7 @@ use File::Basename;
>   use IO::File;
>   use JSON;
>   
> -use PVE::Tools qw(run_command dir_glob_foreach);
> +use PVE::Tools qw(run_command dir_glob_foreach extract_param);
>   use PVE::Cluster qw(cfs_read_file);
>   use PVE::RADOS;
>   use PVE::Ceph::Services;
> @@ -264,12 +264,17 @@ sub create_pool {
>   
>       my $pg_num = $param->{pg_num} || 128;
>   
> -    $rados->mon_command({
> +    my $mon_params = {
>   	prefix => "osd pool create",
>   	pool => $pool,
>   	pg_num => int($pg_num),
>   	format => 'plain',
> -    });
> +    };
> +    $mon_params->{pool_type} = extract_param($param, 'pool_type') if $param->{pool_type};
> +    $mon_params->{erasure_code_profile} = extract_param($param, 'erasure_code_profile')
> +	if $param->{erasure_code_profile};
> +
> +    $rados->mon_command($mon_params);
>   
>       set_pool($pool, $param);
>   






* Re: [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support
  2022-04-08 10:14 [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
                   ` (4 preceding siblings ...)
  2022-04-08 11:13 ` [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
@ 2022-04-27 13:37 ` Dominik Csapak
  5 siblings, 0 replies; 9+ messages in thread
From: Dominik Csapak @ 2022-04-27 13:37 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

sent some replies to the relevant parts.

all in all it seems to work ok (nothing major functionally).
regarding the cli, 'ecprofile' is fine imo, i don't think we have to write out
'erasure-code-profile' in our cli (should be clear from context)

the only thing we might want to do is to (optionally?) create an ec profile on the fly
when creating a pool, by giving k/m directly there?

i guess you did not do that because we must expose the profile management anyway,
since profiles do not get cleaned up automatically? (though could we do that on pool delete?)

we could skip the whole explicit ec profile management by always creating a new profile
for each ecpool we create, so that the cleanup 'should' work on pool delete
(that will run into an error anyway if the profile is still in use)
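roughly what i mean (the helper names here are made up, nothing like that exists yet):

```perl
# in pool create: derive the profile from the pool name and k/m params
my $profilename = "pve-ec-${name}";
PVE::Ceph::Tools::create_ecprofile($profilename, $k, $m, $failuredomain); # hypothetical helper
$data_param->{erasure_code_profile} = $profilename;

# in pool destroy: try to remove the profile again; a profile that is
# still referenced by another pool will make ceph error out anyway
eval { PVE::Ceph::Tools::destroy_ecprofile($rados, $profilename) }; # hypothetical helper
warn "removing EC profile '$profilename' failed: $@\n" if $@;
```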





end of thread, other threads:[~2022-04-27 13:37 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-08 10:14 [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
2022-04-08 10:14 ` [pve-devel] [PATCH manager 1/4] api: ceph: $get_storages check if data-pool too Aaron Lauterer
2022-04-08 10:14 ` [pve-devel] [RFC manager 2/4] pveceph: add management for erasure code rules Aaron Lauterer
2022-04-27 13:32   ` Dominik Csapak
2022-04-08 10:14 ` [pve-devel] [RFC manager 3/4] ceph tools: add check if erasure code profile exists Aaron Lauterer
2022-04-08 10:14 ` [pve-devel] [PATCH manager 4/4] ceph pools: allow to create erasure code pools Aaron Lauterer
2022-04-27 13:32   ` Dominik Csapak
2022-04-08 11:13 ` [pve-devel] [RFC manager 0/4] Ceph add basic erasure code pool mgmt support Aaron Lauterer
2022-04-27 13:37 ` Dominik Csapak
