public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support
@ 2022-04-28 11:58 Aaron Lauterer
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 1/5] api: ceph: $get_storages check if data-pool too Aaron Lauterer
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Aaron Lauterer @ 2022-04-28 11:58 UTC (permalink / raw)
  To: pve-devel

The first patch is one that we should have added when we added basic
support for ec pools [0].

The rest adds basic support to manage erasure coded (EC) pools. When
adding an EC pool we need to define how many k and m chunks we want for
the ec profile. Optionally one can define the failure domain and device
class.
Or if there is already another ec profile that should be used, it can be
specified as well.

Since EC pools need a replicated pool that will hold metadata, we follow
the same approach we already have for cephfs pools and have one metadata
and one data pool.
A few prerequisites were added to Ceph::Tools.

More details in the actual patches.
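
For a rough idea of how the pieces fit together, here is a hedged sketch of the
flow that patch 5/5 implements for an EC pool (simplified, error handling and
parameter extraction omitted; helper names as introduced in this series):

    # simplified sketch of the pool-create flow for an erasure coded pool
    my $ecprofile = PVE::Ceph::Tools::get_ecprofile_name($name);    # "pve_ec_<name>"
    PVE::Ceph::Tools::create_ecprofile($ecprofile, $k, $m, $failure_domain, $device_class);

    # replicated metadata pool (pg_num forced to 32) plus the EC data pool
    PVE::Ceph::Tools::create_pool("${name}-metadata", $param);
    PVE::Ceph::Tools::create_pool("${name}-data", {
        %$param,
        pool_type => 'erasure',
        allow_ec_overwrites => 'true',
        erasure_code_profile => $ecprofile,
    });

    # the storage config couples the two pools via its 'data-pool' option
    $add_storage->("${name}-metadata", $name, "${name}-data");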


changes since rfc:

change the approach by handling ec profiles automatically on pool
creation and cleaning up ec profiles and crush rules on pool
destruction. The whole ec profile mgmt CLI has been dropped for now.

Aaron Lauterer (5):
  api: ceph: $get_storages check if data-pool too
  ceph tools: add erasure code management functions
  ceph tools: add function to get pool properties
  ceph tools: add destroy crush rule function
  ceph pools: allow to create erasure code pools

 PVE/API2/Ceph/Pools.pm | 129 ++++++++++++++++++++++++++++++++++++++---
 PVE/Ceph/Tools.pm      |  83 +++++++++++++++++++++++++-
 2 files changed, 201 insertions(+), 11 deletions(-)

-- 
2.30.2






* [pve-devel] [PATCH manager 1/5] api: ceph: $get_storages check if data-pool too
  2022-04-28 11:58 [pve-devel] [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Aaron Lauterer
@ 2022-04-28 11:58 ` Aaron Lauterer
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 2/5] ceph tools: add erasure code management functions Aaron Lauterer
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2022-04-28 11:58 UTC (permalink / raw)
  To: pve-devel

When removing a pool, we check against any storage that might have that
pool configured.
We also need to check whether that pool is used as a data-pool.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
This check should have been added when we added basic support for EC
pools on the storage side [0].

[0] https://git.proxmox.com/?p=pve-storage.git;a=commit;h=ef2afce74aba01f2ab698a5477f5e396fa4d3725
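
For illustration, a hypothetical storage.cfg entry that is now also matched by
$get_storages (the 'data-pool' option is the one added by the pve-storage
commit above; the storage and pool names are made up):

    rbd: ec-rbd
        pool testpool-metadata
        data-pool testpool-data
        content images,rootdir
        krbd 0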

 PVE/API2/Ceph/Pools.pm | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 002f7893..05855e15 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -302,8 +302,13 @@ my $get_storages = sub {
     my $res = {};
     foreach my $storeid (keys %$storages) {
 	my $curr = $storages->{$storeid};
-	$res->{$storeid} = $storages->{$storeid}
-	    if $curr->{type} eq 'rbd' && $pool eq $curr->{pool};
+	next if $curr->{type} ne 'rbd';
+	if (
+	    $pool eq $curr->{pool} ||
+	    (defined $curr->{'data-pool'} && $pool eq $curr->{'data-pool'})
+	) {
+	    $res->{$storeid} = $storages->{$storeid};
+	}
     }
 
     return $res;
-- 
2.30.2






* [pve-devel] [PATCH manager 2/5] ceph tools: add erasure code management functions
  2022-04-28 11:58 [pve-devel] [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Aaron Lauterer
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 1/5] api: ceph: $get_storages check if data-pool too Aaron Lauterer
@ 2022-04-28 11:58 ` Aaron Lauterer
  2022-04-28 14:19   ` Dominik Csapak
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 3/5] ceph tools: add function to get pool properties Aaron Lauterer
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2022-04-28 11:58 UTC (permalink / raw)
  To: pve-devel

Functions to manage erasure code (EC) profiles:
* add
* remove
* check if exists
* get default prefixed name

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
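A hedged usage sketch of the new helpers (signatures as introduced in this
patch; each helper opens its own PVE::RADOS connection internally, and the
pool name is made up):

    my $profile = PVE::Ceph::Tools::get_ecprofile_name('mypool');    # "pve_ec_mypool"
    if (!PVE::Ceph::Tools::ecprofile_exists($profile)) {
        # k=2 data chunks, m=1 coding chunk, failure domain defaults to 'host'
        PVE::Ceph::Tools::create_ecprofile($profile, 2, 1, undef, undef);
    }
    # ... and on cleanup:
    PVE::Ceph::Tools::destroy_ecprofile($profile);
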
 PVE/Ceph/Tools.pm | 49 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 36d7788a..0c75df0e 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -531,4 +531,53 @@ sub ceph_cluster_status {
     return $status;
 }
 
+sub ecprofile_exists {
+    my ($name) = @_;
+
+    my $rados = PVE::RADOS->new();
+    my $res = $rados->mon_command({ prefix => 'osd erasure-code-profile ls' });
+
+    my $profiles = { map { $_ => 1 } @$res };
+    return $profiles->{$name};
+}
+
+sub create_ecprofile {
+    my ($name, $k, $m, $failure_domain, $device_class) = @_;
+
+    $failure_domain = 'host' if !$failure_domain;
+
+    my $profile = [
+	"crush-failure-domain=${failure_domain}",
+	"k=${k}",
+	"m=${m}",
+    ];
+
+    push(@$profile, "crush-device-class=${device_class}") if $device_class;
+
+    my $rados = PVE::RADOS->new();
+    $rados->mon_command({
+	prefix => 'osd erasure-code-profile set',
+	name => $name,
+	profile => $profile,
+    });
+}
+
+sub destroy_ecprofile {
+    my ($profile) = @_;
+
+    my $rados = PVE::RADOS->new();
+    my $command = {
+	prefix => 'osd erasure-code-profile rm',
+	name => $profile,
+	format => 'plain',
+    };
+    return $rados->mon_command($command);
+}
+
+sub get_ecprofile_name {
+    my ($name) = @_;
+    return "pve_ec_${name}";
+}
+
+
 1;
-- 
2.30.2






* [pve-devel] [PATCH manager 3/5] ceph tools: add function to get pool properties
  2022-04-28 11:58 [pve-devel] [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Aaron Lauterer
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 1/5] api: ceph: $get_storages check if data-pool too Aaron Lauterer
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 2/5] ceph tools: add erasure code management functions Aaron Lauterer
@ 2022-04-28 11:58 ` Aaron Lauterer
  2022-04-28 14:20   ` Dominik Csapak
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 4/5] ceph tools: add destroy crush rule function Aaron Lauterer
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2022-04-28 11:58 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
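A hedged usage sketch (the keys shown are the ones patch 5/5 relies on when
destroying a pool; the helper returns the parsed output of
'ceph osd pool get <pool> all', and the pool name is made up):

    my $props = PVE::Ceph::Tools::get_pool_properties('mypool-data');
    my $ecprofile  = $props->{erasure_code_profile};    # only present for EC pools
    my $crush_rule = $props->{crush_rule};
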
 PVE/Ceph/Tools.pm | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 0c75df0e..b5e60783 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -255,6 +255,19 @@ sub set_pool {
 
 }
 
+sub get_pool_properties {
+    my ($pool) = @_;
+    my $command = {
+	prefix => "osd pool get",
+	pool   => "$pool",
+	var    => "all",
+	format => 'json',
+    };
+
+    my $rados = PVE::RADOS->new();
+    return $rados->mon_command($command);
+}
+
 sub create_pool {
     my ($pool, $param, $rados) = @_;
 
-- 
2.30.2






* [pve-devel] [PATCH manager 4/5] ceph tools: add destroy crush rule function
  2022-04-28 11:58 [pve-devel] [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Aaron Lauterer
                   ` (2 preceding siblings ...)
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 3/5] ceph tools: add function to get pool properties Aaron Lauterer
@ 2022-04-28 11:58 ` Aaron Lauterer
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools Aaron Lauterer
  2022-04-28 18:33 ` [pve-devel] applied: [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Thomas Lamprecht
  5 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2022-04-28 11:58 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
 PVE/Ceph/Tools.pm | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index b5e60783..7e94ee58 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -592,5 +592,15 @@ sub get_ecprofile_name {
     return "pve_ec_${name}";
 }
 
+sub destroy_crush_rule {
+    my ($rule) = @_;
+    my $rados = PVE::RADOS->new();
+    my $command = {
+	prefix => 'osd crush rule rm',
+	name => $rule,
+	format => 'plain',
+    };
+    return $rados->mon_command($command);
+}
 
 1;
-- 
2.30.2






* [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools
  2022-04-28 11:58 [pve-devel] [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Aaron Lauterer
                   ` (3 preceding siblings ...)
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 4/5] ceph tools: add destroy crush rule function Aaron Lauterer
@ 2022-04-28 11:58 ` Aaron Lauterer
  2022-04-28 14:32   ` Dominik Csapak
  2022-04-28 18:31   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-28 18:33 ` [pve-devel] applied: [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Thomas Lamprecht
  5 siblings, 2 replies; 14+ messages in thread
From: Aaron Lauterer @ 2022-04-28 11:58 UTC (permalink / raw)
  To: pve-devel

To use erasure coded (EC) pools for RBD storages, we need two pools. One
regular replicated pool that will hold the RBD omap and other metadata
and the EC pool which will hold the image data.

The coupling happens when an RBD image is created by adding the
--data-pool parameter. This is why we have the 'data-pool' parameter in
the storage configuration.

To follow already established semantics, we will create a 'X-metadata'
and 'X-data' pool. The storage configuration is always added as it is
the only thing that links the two together (besides naming schemes).

Different pg_num defaults are chosen for the replicated metadata pool as
it will not hold a lot of data.

Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
---
we need some more follow-ups, as setting pool parameters on ec pools will
currently fail when trying to set the size. In that case, we most likely
should adapt the gui as well and make non-changeable settings, such as
size, read-only.

changes since rfc:

ec profile parameters need to be configured during pool creation now, or
alternatively the name of an ec profile can be provided.

cleanup of ec profile & crush rule on pool destruction
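
For context, a hedged illustration of the coupling described above: when a
storage has 'data-pool' set, image creation passes --data-pool to rbd, so the
image and its omap metadata live in the replicated pool while the data objects
go to the EC pool. A simplified sketch (not the actual pve-storage plugin code;
names are made up):

    # hypothetical, simplified version of the rbd invocation done by the storage layer
    my @cmd = ('/usr/bin/rbd', '-p', "${name}-metadata", 'create',
        '--data-pool', "${name}-data",    # couples the metadata pool and the EC data pool
        '--size', '4096', 'vm-100-disk-0');
    PVE::Tools::run_command(\@cmd);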

 PVE/API2/Ceph/Pools.pm | 120 ++++++++++++++++++++++++++++++++++++++---
 PVE/Ceph/Tools.pm      |  11 ++--
 2 files changed, 122 insertions(+), 9 deletions(-)

diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
index 05855e15..69cf0d1a 100644
--- a/PVE/API2/Ceph/Pools.pm
+++ b/PVE/API2/Ceph/Pools.pm
@@ -280,7 +280,7 @@ my $ceph_pool_common_options = sub {
 
 
 my $add_storage = sub {
-    my ($pool, $storeid) = @_;
+    my ($pool, $storeid, $data_pool) = @_;
 
     my $storage_params = {
 	type => 'rbd',
@@ -290,6 +290,8 @@ my $add_storage = sub {
 	content => 'rootdir,images',
     };
 
+    $storage_params->{'data-pool'} = $data_pool if $data_pool;
+
     PVE::API2::Storage::Config->create($storage_params);
 };
 
@@ -330,10 +332,41 @@ __PACKAGE__->register_method ({
 	properties => {
 	    node => get_standard_option('pve-node'),
 	    add_storages => {
-		description => "Configure VM and CT storage using the new pool.",
+		description => "Configure VM and CT storage using the new pool. ".
+				"Always enabled for erasure coded pools.",
 		type => 'boolean',
 		optional => 1,
 	    },
+	    k => {
+		type => 'integer',
+		description => "Number of data chunks. Will create an erasure coded pool plus a ".
+				"replicated pool for metadata.",
+		optional => 1,
+	    },
+	    m => {
+		type => 'integer',
+		description => "Number of coding chunks. Will create an erasure coded pool plus a ".
+				"replicated pool for metadata.",
+		optional => 1,
+	    },
+	    'failure-domain' => {
+		type => 'string',
+		description => "CRUSH failure domain. Default is 'host'. Will create an erasure ".
+				"coded pool plus a replicated pool for metadata.",
+		optional => 1,
+	    },
+	    'device-class' => {
+		type => 'string',
+		description => "CRUSH device class. Will create an erasure coded pool plus a ".
+				"replicated pool for metadata.",
+		optional => 1,
+	    },
+	    ecprofile => {
+		description => "Override the erasure code (EC) profile to use. Will create an ".
+				"erasure coded pool plus a replicated pool for metadata.",
+		type => 'string',
+		optional => 1,
+	    },
 	    %{ $ceph_pool_common_options->() },
 	},
     },
@@ -344,10 +377,31 @@ __PACKAGE__->register_method ({
 	PVE::Cluster::check_cfs_quorum();
 	PVE::Ceph::Tools::check_ceph_configured();
 
-	my $pool = extract_param($param, 'name');
+	my $pool = my $name = extract_param($param, 'name');
 	my $node = extract_param($param, 'node');
 	my $add_storages = extract_param($param, 'add_storages');
 
+	my $ec_k = extract_param($param, 'k');
+	my $ec_m = extract_param($param, 'm');
+	my $ec_failure_domain = extract_param($param, 'failure-domain');
+	my $ec_device_class = extract_param($param, 'device-class');
+
+	my $is_ec = 0;
+
+	my $ecprofile = extract_param($param, 'ecprofile');
+	die "Erasure code profile '$ecprofile' does not exist.\n"
+	    if $ecprofile && !PVE::Ceph::Tools::ecprofile_exists($ecprofile);
+
+	if ($ec_k || $ec_m || $ec_failure_domain || $ec_device_class) {
+	    die "'k' and 'm' parameters are needed for an erasure coded pool\n"
+		if !$ec_k || !$ec_m;
+
+	    $is_ec = 1;
+	}
+
+	$is_ec = 1 if $ecprofile;
+	$add_storages = 1 if $is_ec;
+
 	my $rpcenv = PVE::RPCEnvironment::get();
 	my $user = $rpcenv->get_user();
 
@@ -370,13 +424,47 @@ __PACKAGE__->register_method ({
 	$param->{application} //= 'rbd';
 	$param->{pg_autoscale_mode} //= 'warn';
 
-	my $worker = sub {
+	my $data_param = {};
+	my $data_pool = '';
+	if (!$ecprofile) {
+	    $ecprofile = PVE::Ceph::Tools::get_ecprofile_name($pool);
+	    eval {
+		PVE::Ceph::Tools::create_ecprofile(
+		    $ecprofile,
+		    $ec_k,
+		    $ec_m,
+		    $ec_failure_domain,
+		    $ec_device_class,
+		);
+	    };
+	    die "could not create erasure code profile '$ecprofile': $@\n" if $@;
+	}
+
+	if ($is_ec) {
+	    # copy all params, should be a flat hash
+	    $data_param = { map { $_ => $param->{$_} } keys %$param };
 
+	    $data_param->{pool_type} = 'erasure';
+	    $data_param->{allow_ec_overwrites} = 'true';
+	    $data_param->{erasure_code_profile} = $ecprofile;
+	    delete $data_param->{size};
+	    delete $data_param->{min_size};
+
+	    # metadata pool should be ok with 32 PGs
+	    $param->{pg_num} = 32;
+
+	    $pool = "${name}-metadata";
+	    $data_pool = "${name}-data";
+	}
+
+	my $worker = sub {
 	    PVE::Ceph::Tools::create_pool($pool, $param);
 
+	    PVE::Ceph::Tools::create_pool($data_pool, $data_param) if $is_ec;
+
 	    if ($add_storages) {
-		eval { $add_storage->($pool, "${pool}") };
-		die "adding PVE storage for ceph pool '$pool' failed: $@\n" if $@;
+		eval { $add_storage->($pool, "${name}", $data_pool) };
+		die "adding PVE storage for ceph pool '$name' failed: $@\n" if $@;
 	    }
 	};
 
@@ -414,6 +502,12 @@ __PACKAGE__->register_method ({
 		optional => 1,
 		default => 0,
 	    },
+	    remove_ecprofile => {
+		description => "Remove the erasure code profile. Used for erasure code pools. Default is true",
+		type => 'boolean',
+		optional => 1,
+		default => 1,
+	    },
 	},
     },
     returns => { type => 'string' },
@@ -428,6 +522,7 @@ __PACKAGE__->register_method ({
 	    if $param->{remove_storages};
 
 	my $pool = $param->{name};
+	my $remove_ecprofile = $param->{remove_ecprofile} // 1;
 
 	my $worker = sub {
 	    my $storages = $get_storages->($pool);
@@ -447,8 +542,21 @@ __PACKAGE__->register_method ({
 		}
 	    }
 
+	    my $pool_properties = PVE::Ceph::Tools::get_pool_properties($pool);
+
 	    PVE::Ceph::Tools::destroy_pool($pool);
 
+	    if (my $ecprofile = $pool_properties->{erasure_code_profile}) {
+		my $crush_rule = $pool_properties->{crush_rule};
+		eval { PVE::Ceph::Tools::destroy_crush_rule($crush_rule); };
+		warn "removing crush rule '${crush_rule}' failed: $@\n" if $@;
+
+		if ($remove_ecprofile) {
+		    eval { PVE::Ceph::Tools::destroy_ecprofile($ecprofile) };
+		    warn "removing EC profile '${ecprofile}' failed: $@\n" if $@;
+		}
+	    }
+
 	    if ($param->{remove_storages}) {
 		my $err;
 		foreach my $storeid (keys %$storages) {
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 7e94ee58..f50cec67 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -8,7 +8,7 @@ use File::Basename;
 use IO::File;
 use JSON;
 
-use PVE::Tools qw(run_command dir_glob_foreach);
+use PVE::Tools qw(run_command dir_glob_foreach extract_param);
 use PVE::Cluster qw(cfs_read_file);
 use PVE::RADOS;
 use PVE::Ceph::Services;
@@ -277,12 +277,17 @@ sub create_pool {
 
     my $pg_num = $param->{pg_num} || 128;
 
-    $rados->mon_command({
+    my $mon_params = {
 	prefix => "osd pool create",
 	pool => $pool,
 	pg_num => int($pg_num),
 	format => 'plain',
-    });
+    };
+    $mon_params->{pool_type} = extract_param($param, 'pool_type') if $param->{pool_type};
+    $mon_params->{erasure_code_profile} = extract_param($param, 'erasure_code_profile')
+	if $param->{erasure_code_profile};
+
+    $rados->mon_command($mon_params);
 
     set_pool($pool, $param);
 
-- 
2.30.2






* Re: [pve-devel] [PATCH manager 2/5] ceph tools: add erasure code management functions
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 2/5] ceph tools: add erasure code management functions Aaron Lauterer
@ 2022-04-28 14:19   ` Dominik Csapak
  0 siblings, 0 replies; 14+ messages in thread
From: Dominik Csapak @ 2022-04-28 14:19 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

imho all functions that use a PVE::RADOS object here should take one optionally
(like the rest of the functions in this file, e.g. create_pool)

On 4/28/22 13:58, Aaron Lauterer wrote:
> Functions to manage erasure code (EC) profiles:
> * add
> * remove
> * check if exists
> * get default prefixed name
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>   PVE/Ceph/Tools.pm | 49 +++++++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 49 insertions(+)
> 
> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
> index 36d7788a..0c75df0e 100644
> --- a/PVE/Ceph/Tools.pm
> +++ b/PVE/Ceph/Tools.pm
> @@ -531,4 +531,53 @@ sub ceph_cluster_status {
>       return $status;
>   }
>   
> +sub ecprofile_exists {
> +    my ($name) = @_;
> +
> +    my $rados = PVE::RADOS->new();
> +    my $res = $rados->mon_command({ prefix => 'osd erasure-code-profile ls' });
> +
> +    my $profiles = { map { $_ => 1 } @$res };
> +    return $profiles->{$name};
> +}
> +
> +sub create_ecprofile {
> +    my ($name, $k, $m, $failure_domain, $device_class) = @_;
> +
> +    $failure_domain = 'host' if !$failure_domain;
> +
> +    my $profile = [
> +	"crush-failure-domain=${failure_domain}",
> +	"k=${k}",
> +	"m=${m}",
> +    ];
> +
> +    push(@$profile, "crush-device-class=${device_class}") if $device_class;
> +
> +    my $rados = PVE::RADOS->new();
> +    $rados->mon_command({
> +	prefix => 'osd erasure-code-profile set',
> +	name => $name,
> +	profile => $profile,
> +    });
> +}
> +
> +sub destroy_ecprofile {
> +    my ($profile) = @_;
> +
> +    my $rados = PVE::RADOS->new();
> +    my $command = {
> +	prefix => 'osd erasure-code-profile rm',
> +	name => $profile,
> +	format => 'plain',
> +    };
> +    return $rados->mon_command($command);
> +}
> +
> +sub get_ecprofile_name {
> +    my ($name) = @_;
> +    return "pve_ec_${name}";
> +}
> +
> +
>   1;






* Re: [pve-devel] [PATCH manager 3/5] ceph tools: add function to get pool properties
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 3/5] ceph tools: add function to get pool properties Aaron Lauterer
@ 2022-04-28 14:20   ` Dominik Csapak
  0 siblings, 0 replies; 14+ messages in thread
From: Dominik Csapak @ 2022-04-28 14:20 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

same here with the PVE::RADOS object (also in the next patch)

On 4/28/22 13:58, Aaron Lauterer wrote:
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
>   PVE/Ceph/Tools.pm | 13 +++++++++++++
>   1 file changed, 13 insertions(+)
> 
> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
> index 0c75df0e..b5e60783 100644
> --- a/PVE/Ceph/Tools.pm
> +++ b/PVE/Ceph/Tools.pm
> @@ -255,6 +255,19 @@ sub set_pool {
>   
>   }
>   
> +sub get_pool_properties {
> +    my ($pool) = @_;
> +    my $command = {
> +	prefix => "osd pool get",
> +	pool   => "$pool",
> +	var    => "all",
> +	format => 'json',
> +    };
> +
> +    my $rados = PVE::RADOS->new();
> +    return $rados->mon_command($command);
> +}
> +
>   sub create_pool {
>       my ($pool, $param, $rados) = @_;
>   






* Re: [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools Aaron Lauterer
@ 2022-04-28 14:32   ` Dominik Csapak
  2022-04-29  7:33     ` Aaron Lauterer
  2022-04-28 18:31   ` [pve-devel] applied: " Thomas Lamprecht
  1 sibling, 1 reply; 14+ messages in thread
From: Dominik Csapak @ 2022-04-28 14:32 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

is there a specific reason why you still force the autocreation
of the storage? i don't mind really, but giving a reason
would be nice. if there is no reason, i'd drop that since
the user can specify it anyway and adding one manually
should also not be too hard..

some nits/comments inline
(mostly minor issues imho that could be fixed up)
On 4/28/22 13:58, Aaron Lauterer wrote:
> To use erasure coded (EC) pools for RBD storages, we need two pools. One
> regular replicated pool that will hold the RBD omap and other metadata
> and the EC pool which will hold the image data.
> 
> The coupling happens when an RBD image is created by adding the
> --data-pool parameter. This is why we have the 'data-pool' parameter in
> the storage configuration.
> 
> To follow already established semantics, we will create a 'X-metadata'
> and 'X-data' pool. The storage configuration is always added as it is
> the only thing that links the two together (besides naming schemes).
> 
> Different pg_num defaults are chosen for the replicated metadata pool as
> it will not hold a lot of data.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> we need some more follow-ups, as setting pool parameters on ec pools will
> currently fail when trying to set the size. In that case, we most likely
> should adapt the gui as well and make non-changeable settings, such as
> size, read-only.
> 
> changes since rfc:
> 
> ec profile parameters need to be configured during pool creation now, or
> alternatively the name of an ec profile can be provided.
> 
> cleanup of ec profile & crush rule on pool destruction
> 
>   PVE/API2/Ceph/Pools.pm | 120 ++++++++++++++++++++++++++++++++++++++---
>   PVE/Ceph/Tools.pm      |  11 ++--
>   2 files changed, 122 insertions(+), 9 deletions(-)
> 
> diff --git a/PVE/API2/Ceph/Pools.pm b/PVE/API2/Ceph/Pools.pm
> index 05855e15..69cf0d1a 100644
> --- a/PVE/API2/Ceph/Pools.pm
> +++ b/PVE/API2/Ceph/Pools.pm
> @@ -280,7 +280,7 @@ my $ceph_pool_common_options = sub {
>   
>   
>   my $add_storage = sub {
> -    my ($pool, $storeid) = @_;
> +    my ($pool, $storeid, $data_pool) = @_;
>   
>       my $storage_params = {
>   	type => 'rbd',
> @@ -290,6 +290,8 @@ my $add_storage = sub {
>   	content => 'rootdir,images',
>       };
>   
> +    $storage_params->{'data-pool'} = $data_pool if $data_pool;
> +
>       PVE::API2::Storage::Config->create($storage_params);
>   };
>   
> @@ -330,10 +332,41 @@ __PACKAGE__->register_method ({
>   	properties => {
>   	    node => get_standard_option('pve-node'),
>   	    add_storages => {
> -		description => "Configure VM and CT storage using the new pool.",
> +		description => "Configure VM and CT storage using the new pool. ".
> +				"Always enabled for erasure coded pools.",
>   		type => 'boolean',
>   		optional => 1,
>   	    },
> +	    k => {
> +		type => 'integer',
> +		description => "Number of data chunks. Will create an erasure coded pool plus a ".
> +				"replicated pool for metadata.",
> +		optional => 1,
> +	    },
> +	    m => {
> +		type => 'integer',
> +		description => "Number of coding chunks. Will create an erasure coded pool plus a ".
> +				"replicated pool for metadata.",
> +		optional => 1,
> +	    },
> +	    'failure-domain' => {
> +		type => 'string',
> +		description => "CRUSH failure domain. Default is 'host'. Will create an erasure ".
> +				"coded pool plus a replicated pool for metadata.",
> +		optional => 1,
> +	    },
> +	    'device-class' => {
> +		type => 'string',
> +		description => "CRUSH device class. Will create an erasure coded pool plus a ".
> +				"replicated pool for metadata.",
> +		optional => 1,
> +	    },
> +	    ecprofile => {
> +		description => "Override the erasure code (EC) profile to use. Will create an ".
> +				"erasure coded pool plus a replicated pool for metadata.",
> +		type => 'string',
> +		optional => 1,
> +	    },

i'm still missing some limitations here for the parameters.
at least let's limit the length of the names and set k/m to a minimum of 1?

>   	    %{ $ceph_pool_common_options->() },
>   	},
>       },
> @@ -344,10 +377,31 @@ __PACKAGE__->register_method ({
>   	PVE::Cluster::check_cfs_quorum();
>   	PVE::Ceph::Tools::check_ceph_configured();
>   
> -	my $pool = extract_param($param, 'name');
> +	my $pool = my $name = extract_param($param, 'name');
>   	my $node = extract_param($param, 'node');
>   	my $add_storages = extract_param($param, 'add_storages');
>   
> +	my $ec_k = extract_param($param, 'k');
> +	my $ec_m = extract_param($param, 'm');
> +	my $ec_failure_domain = extract_param($param, 'failure-domain');
> +	my $ec_device_class = extract_param($param, 'device-class');
> +
> +	my $is_ec = 0;
> +
> +	my $ecprofile = extract_param($param, 'ecprofile');
> +	die "Erasure code profile '$ecprofile' does not exist.\n"
> +	    if $ecprofile && !PVE::Ceph::Tools::ecprofile_exists($ecprofile);
> +
> +	if ($ec_k || $ec_m || $ec_failure_domain || $ec_device_class) {
> +	    die "'k' and 'm' parameters are needed for an erasure coded pool\n"
> +		if !$ec_k || !$ec_m;

nit: we could raise a param_exc here (technically only relevant for the gui, so not necessary)

> +
> +	    $is_ec = 1;
> +	}
> +
> +	$is_ec = 1 if $ecprofile;
> +	$add_storages = 1 if $is_ec;
> +
>   	my $rpcenv = PVE::RPCEnvironment::get();
>   	my $user = $rpcenv->get_user();
>   
> @@ -370,13 +424,47 @@ __PACKAGE__->register_method ({
>   	$param->{application} //= 'rbd';
>   	$param->{pg_autoscale_mode} //= 'warn';
>   
> -	my $worker = sub {
> +	my $data_param = {};
> +	my $data_pool = '';
> +	if (!$ecprofile) {
> +	    $ecprofile = PVE::Ceph::Tools::get_ecprofile_name($pool);
> +	    eval {
> +		PVE::Ceph::Tools::create_ecprofile(
> +		    $ecprofile,
> +		    $ec_k,
> +		    $ec_m,
> +		    $ec_failure_domain,
> +		    $ec_device_class,
> +		);
> +	    };

is creating an ec profile so fast that we can do it outside the worker?
(iow. can it hang? i guess so if the ceph cluster is not healthy?)

> +	    die "could not create erasure code profile '$ecprofile': $@\n" if $@;
> +	}
> +
> +	if ($is_ec) {
> +	    # copy all params, should be a flat hash
> +	    $data_param = { map { $_ => $param->{$_} } keys %$param };
>   
> +	    $data_param->{pool_type} = 'erasure';
> +	    $data_param->{allow_ec_overwrites} = 'true';
> +	    $data_param->{erasure_code_profile} = $ecprofile;
> +	    delete $data_param->{size};
> +	    delete $data_param->{min_size};
> +
> +	    # metadata pool should be ok with 32 PGs
> +	    $param->{pg_num} = 32;
> +
> +	    $pool = "${name}-metadata";
> +	    $data_pool = "${name}-data";
> +	}
> +
> +	my $worker = sub {
>   	    PVE::Ceph::Tools::create_pool($pool, $param);
>   
> +	    PVE::Ceph::Tools::create_pool($data_pool, $data_param) if $is_ec;

nit: both of these take an optional rados object, so we could create one beforehand
and pass it to both

also, ideally we would clean up if there was an error here
so if e.g. data pool creation fails, clean up the auto-created metadata pool and ec profile

> +
>   	    if ($add_storages) {
> -		eval { $add_storage->($pool, "${pool}") };
> -		die "adding PVE storage for ceph pool '$pool' failed: $@\n" if $@;
> +		eval { $add_storage->($pool, "${name}", $data_pool) };
> +		die "adding PVE storage for ceph pool '$name' failed: $@\n" if $@;
>   	    }
>   	};
>   
> @@ -414,6 +502,12 @@ __PACKAGE__->register_method ({
>   		optional => 1,
>   		default => 0,
>   	    },
> +	    remove_ecprofile => {
> +		description => "Remove the erasure code profile. Used for erasure code pools. Default is true",
> +		type => 'boolean',
> +		optional => 1,
> +		default => 1,
> +	    },
>   	},
>       },
>       returns => { type => 'string' },
> @@ -428,6 +522,7 @@ __PACKAGE__->register_method ({
>   	    if $param->{remove_storages};
>   
>   	my $pool = $param->{name};
> +	my $remove_ecprofile = $param->{remove_ecprofile} // 1;
>   
>   	my $worker = sub {
>   	    my $storages = $get_storages->($pool);
> @@ -447,8 +542,21 @@ __PACKAGE__->register_method ({
>   		}
>   	    }
>   
> +	    my $pool_properties = PVE::Ceph::Tools::get_pool_properties($pool);
> +
>   	    PVE::Ceph::Tools::destroy_pool($pool);
>   
> +	    if (my $ecprofile = $pool_properties->{erasure_code_profile}) {
> +		my $crush_rule = $pool_properties->{crush_rule};
> +		eval { PVE::Ceph::Tools::destroy_crush_rule($crush_rule); };
> +		warn "removing crush rule '${crush_rule}' failed: $@\n" if $@;
> +
> +		if ($remove_ecprofile) {
> +		    eval { PVE::Ceph::Tools::destroy_ecprofile($ecprofile) };
> +		    warn "removing EC profile '${ecprofile}' failed: $@\n" if $@;
> +		}

same here for the rados objects (if the subs accept one)
we do create 4 Rados objects here:

get_pool_properties
destroy_pool
destroy_crush_rule
destroy_ecprofile
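
For illustration, a hedged sketch of the suggested reuse (assuming the helpers
grow an optional $rados parameter, as proposed):

    my $rados = PVE::RADOS->new();
    my $props = PVE::Ceph::Tools::get_pool_properties($pool, $rados);
    PVE::Ceph::Tools::destroy_pool($pool, $rados);
    PVE::Ceph::Tools::destroy_crush_rule($props->{crush_rule}, $rados);
    PVE::Ceph::Tools::destroy_ecprofile($props->{erasure_code_profile}, $rados)
        if $remove_ecprofile && $props->{erasure_code_profile};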


> +	    }
> +
>   	    if ($param->{remove_storages}) {
>   		my $err;
>   		foreach my $storeid (keys %$storages) {
> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
> index 7e94ee58..f50cec67 100644
> --- a/PVE/Ceph/Tools.pm
> +++ b/PVE/Ceph/Tools.pm
> @@ -8,7 +8,7 @@ use File::Basename;
>   use IO::File;
>   use JSON;
>   
> -use PVE::Tools qw(run_command dir_glob_foreach);
> +use PVE::Tools qw(run_command dir_glob_foreach extract_param);
>   use PVE::Cluster qw(cfs_read_file);
>   use PVE::RADOS;
>   use PVE::Ceph::Services;
> @@ -277,12 +277,17 @@ sub create_pool {
>   
>       my $pg_num = $param->{pg_num} || 128;
>   
> -    $rados->mon_command({
> +    my $mon_params = {
>   	prefix => "osd pool create",
>   	pool => $pool,
>   	pg_num => int($pg_num),
>   	format => 'plain',
> -    });
> +    };
> +    $mon_params->{pool_type} = extract_param($param, 'pool_type') if $param->{pool_type};
> +    $mon_params->{erasure_code_profile} = extract_param($param, 'erasure_code_profile')
> +	if $param->{erasure_code_profile};
> +
> +    $rados->mon_command($mon_params);
>   
>       set_pool($pool, $param);
>   






* [pve-devel] applied: [PATCH manager 5/5] ceph pools: allow to create erasure code pools
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools Aaron Lauterer
  2022-04-28 14:32   ` Dominik Csapak
@ 2022-04-28 18:31   ` Thomas Lamprecht
  1 sibling, 0 replies; 14+ messages in thread
From: Thomas Lamprecht @ 2022-04-28 18:31 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

On 28.04.22 13:58, Aaron Lauterer wrote:
> To use erasure coded (EC) pools for RBD storages, we need two pools. One
> regular replicated pool that will hold the RBD omap and other metadata
> and the EC pool which will hold the image data.
> 
> The coupling happens when an RBD image is created by adding the
> --data-pool parameter. This is why we have the 'data-pool' parameter in
> the storage configuration.
> 
> To follow already established semantics, we will create a 'X-metadata'
> and 'X-data' pool. The storage configuration is always added as it is
> the only thing that links the two together (besides naming schemes).
> 
> Different pg_num defaults are chosen for the replicated metadata pool as
> it will not hold a lot of data.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> we need some more follow ups as setting pool parameters on ec pools will
> currently fail when trying to set the size. In that case, we most likely
> should adapt the gui as well and make inon changeable settings, such as
> size, read only.
> 
> changes since rfc:
> 
> ec profile parameters need to be configured during pool creation now. or
> alternatively the name of an ec profile can be provided.
> 
> cleanup of ec profile & crush rule on pool destruction
> 
>  PVE/API2/Ceph/Pools.pm | 120 ++++++++++++++++++++++++++++++++++++++---
>  PVE/Ceph/Tools.pm      |  11 ++--
>  2 files changed, 122 insertions(+), 9 deletions(-)
> 

applied, but with a bit of API interface rework; mostly tested the non-ec case for not getting
broken, so you may want to recheck that I didn't do anything too stupid for the EC case.

>  
> @@ -330,10 +332,41 @@ __PACKAGE__->register_method ({
>  	properties => {
>  	    node => get_standard_option('pve-node'),
>  	    add_storages => {
> -		description => "Configure VM and CT storage using the new pool.",
> +		description => "Configure VM and CT storage using the new pool. ".
> +				"Always enabled for erasure coded pools.",
>  		type => 'boolean',
>  		optional => 1,
>  	    },
> +	    k => {
> +		type => 'integer',
> +		description => "Number of data chunks. Will create an erasure coded pool plus a ".
> +				"replicated pool for metadata.",
> +		optional => 1,
> +	    },
> +	    m => {
> +		type => 'integer',
> +		description => "Number of coding chunks. Will create an erasure coded pool plus a ".
> +				"replicated pool for metadata.",
> +		optional => 1,
> +	    },
> +	    'failure-domain' => {
> +		type => 'string',
> +		description => "CRUSH failure domain. Default is 'host'. Will create an erasure ".
> +				"coded pool plus a replicated pool for metadata.",
> +		optional => 1,
> +	    },
> +	    'device-class' => {
> +		type => 'string',
> +		description => "CRUSH device class. Will create an erasure coded pool plus a ".
> +				"replicated pool for metadata.",
> +		optional => 1,
> +	    },
> +	    ecprofile => {
> +		description => "Override the erasure code (EC) profile to use. Will create an ".
> +				"erasure coded pool plus a replicated pool for metadata.",
> +		type => 'string',
> +		optional => 1,
> +	    },

moved to a format string 'erasure-coded', which also allows dropping most of the param
existence checking, as we can set the correct optional'ness in there.
Also avoids bloating the API too much for just this.
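
For readers not familiar with PVE format strings, a hedged sketch of the general
pattern (illustrative only; the property and sub-key names are assumptions, not
necessarily the exact code that was applied):

    my $ec_format = {
        k => { type => 'integer', minimum => 2 },
        m => { type => 'integer', minimum => 1 },
        'failure-domain' => { type => 'string', optional => 1 },
        'device-class' => { type => 'string', optional => 1 },
        profile => { type => 'string', optional => 1 },
    };

    my $properties = {
        # one API property whose string value is parsed against the format above,
        # instead of five separate top-level parameters
        'erasure-coding' => {
            type => 'string',
            format => $ec_format,
            optional => 1,
        },
    };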

>  	    %{ $ceph_pool_common_options->() },
>  	},
>      },
> @@ -344,10 +377,31 @@ __PACKAGE__->register_method ({
>  	PVE::Cluster::check_cfs_quorum();
>  	PVE::Ceph::Tools::check_ceph_configured();
>  
> -	my $pool = extract_param($param, 'name');
> +	my $pool = my $name = extract_param($param, 'name');
>  	my $node = extract_param($param, 'node');
>  	my $add_storages = extract_param($param, 'add_storages');
>  
> +	my $ec_k = extract_param($param, 'k');
> +	my $ec_m = extract_param($param, 'm');
> +	my $ec_failure_domain = extract_param($param, 'failure-domain');
> +	my $ec_device_class = extract_param($param, 'device-class');
> +
> +	my $is_ec = 0;
> +
> +	my $ecprofile = extract_param($param, 'ecprofile');
> +	die "Erasure code profile '$ecprofile' does not exist.\n"
> +	    if $ecprofile && !PVE::Ceph::Tools::ecprofile_exists($ecprofile);
> +
> +	if ($ec_k || $ec_m || $ec_failure_domain || $ec_device_class) {
> +	    die "'k' and 'm' parameters are needed for an erasure coded pool\n"
> +		if !$ec_k || !$ec_m;
> +
> +	    $is_ec = 1;
> +	}
> +
> +	$is_ec = 1 if $ecprofile;
> +	$add_storages = 1 if $is_ec;
> +
>  	my $rpcenv = PVE::RPCEnvironment::get();
>  	my $user = $rpcenv->get_user();
>  
> @@ -370,13 +424,47 @@ __PACKAGE__->register_method ({
>  	$param->{application} //= 'rbd';
>  	$param->{pg_autoscale_mode} //= 'warn';
>  
> -	my $worker = sub {
> +	my $data_param = {};
> +	my $data_pool = '';
> +	if (!$ecprofile) {
> +	    $ecprofile = PVE::Ceph::Tools::get_ecprofile_name($pool);
> +	    eval {
> +		PVE::Ceph::Tools::create_ecprofile(
> +		    $ecprofile,
> +		    $ec_k,
> +		    $ec_m,
> +		    $ec_failure_domain,
> +		    $ec_device_class,
> +		);

Dominik is right, this can block and should happen in the worker (moved)

> +	    };
> +	    die "could not create erasure code profile '$ecprofile': $@\n" if $@;
> +	}
> +
> +	if ($is_ec) {
> +	    # copy all params, should be a flat hash
> +	    $data_param = { map { $_ => $param->{$_} } keys %$param };
>  
> +	    $data_param->{pool_type} = 'erasure';
> +	    $data_param->{allow_ec_overwrites} = 'true';
> +	    $data_param->{erasure_code_profile} = $ecprofile;
> +	    delete $data_param->{size};
> +	    delete $data_param->{min_size};
> +
> +	    # metadata pool should be ok with 32 PGs
> +	    $param->{pg_num} = 32;
> +
> +	    $pool = "${name}-metadata";
> +	    $data_pool = "${name}-data";
> +	}
> +
> +	my $worker = sub {
>  	    PVE::Ceph::Tools::create_pool($pool, $param);
>  
> +	    PVE::Ceph::Tools::create_pool($data_pool, $data_param) if $is_ec;
> +
>  	    if ($add_storages) {
> -		eval { $add_storage->($pool, "${pool}") };
> -		die "adding PVE storage for ceph pool '$pool' failed: $@\n" if $@;
> +		eval { $add_storage->($pool, "${name}", $data_pool) };
> +		die "adding PVE storage for ceph pool '$name' failed: $@\n" if $@;
>  	    }
>  	};
>  
> @@ -414,6 +502,12 @@ __PACKAGE__->register_method ({
>  		optional => 1,
>  		default => 0,
>  	    },
> +	    remove_ecprofile => {
> +		description => "Remove the erasure code profile. Used for erasure code pools. Default is true",

"Used for erasure code pools" is a bit redundant, I dropped that and mentioned that the default is true
"if applicable".

> +		type => 'boolean',
> +		optional => 1,
> +		default => 1,
> +	    },
>  	},
>      },
>      returns => { type => 'string' },
> @@ -428,6 +522,7 @@ __PACKAGE__->register_method ({
>  	    if $param->{remove_storages};
>  
>  	my $pool = $param->{name};
> +	my $remove_ecprofile = $param->{remove_ecprofile} // 1;
>  
>  	my $worker = sub {
>  	    my $storages = $get_storages->($pool);
> @@ -447,8 +542,21 @@ __PACKAGE__->register_method ({
>  		}
>  	    }
>  

added $rados reuse to all helpers and used it here for all calls.

> +	    my $pool_properties = PVE::Ceph::Tools::get_pool_properties($pool);
> +
>  	    PVE::Ceph::Tools::destroy_pool($pool);
>  
> +	    if (my $ecprofile = $pool_properties->{erasure_code_profile}) {
> +		my $crush_rule = $pool_properties->{crush_rule};
> +		eval { PVE::Ceph::Tools::destroy_crush_rule($crush_rule); };
> +		warn "removing crush rule '${crush_rule}' failed: $@\n" if $@;
> +
> +		if ($remove_ecprofile) {
> +		    eval { PVE::Ceph::Tools::destroy_ecprofile($ecprofile) };
> +		    warn "removing EC profile '${ecprofile}' failed: $@\n" if $@;
> +		}
> +	    }
> +
>  	    if ($param->{remove_storages}) {
>  		my $err;
>  		foreach my $storeid (keys %$storages) {
> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
> index 7e94ee58..f50cec67 100644
> --- a/PVE/Ceph/Tools.pm
> +++ b/PVE/Ceph/Tools.pm
> @@ -8,7 +8,7 @@ use File::Basename;
>  use IO::File;
>  use JSON;
>  
> -use PVE::Tools qw(run_command dir_glob_foreach);
> +use PVE::Tools qw(run_command dir_glob_foreach extract_param);
>  use PVE::Cluster qw(cfs_read_file);
>  use PVE::RADOS;
>  use PVE::Ceph::Services;
> @@ -277,12 +277,17 @@ sub create_pool {
>  
>      my $pg_num = $param->{pg_num} || 128;
>  
> -    $rados->mon_command({
> +    my $mon_params = {
>  	prefix => "osd pool create",
>  	pool => $pool,
>  	pg_num => int($pg_num),
>  	format => 'plain',
> -    });
> +    };
> +    $mon_params->{pool_type} = extract_param($param, 'pool_type') if $param->{pool_type};
> +    $mon_params->{erasure_code_profile} = extract_param($param, 'erasure_code_profile')
> +	if $param->{erasure_code_profile};
> +
> +    $rados->mon_command($mon_params);
>  
>      set_pool($pool, $param);
>  






* [pve-devel] applied: [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support
  2022-04-28 11:58 [pve-devel] [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Aaron Lauterer
                   ` (4 preceding siblings ...)
  2022-04-28 11:58 ` [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools Aaron Lauterer
@ 2022-04-28 18:33 ` Thomas Lamprecht
  5 siblings, 0 replies; 14+ messages in thread
From: Thomas Lamprecht @ 2022-04-28 18:33 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer

On 28.04.22 13:58, Aaron Lauterer wrote:
> The first patch is one that we should have added when we added basic
> support for ec pools [0].
> 
> The rest adds basic support to manage erasure coded (EC) pools. When
> adding an EC pool we need to define how many k and m chunks we want for
> the ec profile. Optionally one can define the failure domain and device
> class.
> Or if there is already another ec profile that should be used, it can be
> specified as well.
> 
> Since EC pools need a replicated pool that will hold metadata, we follow
> the same approach we already have for cephfs pools and have one metadata
> and one data pool.
> A few prerequisites were added to Ceph::Tools.
> 
> More details in the actual patches.
> 
> 
> changes since rfc:
> 
> change the approach by handling ec profiles automatically on pool
> creation and cleaning up ec profiles and crush rules on pool
> destruction. The whole ec profile mgmt CLI has been dropped for now.
> 
> Aaron Lauterer (5):
>   api: ceph: $get_storages check if data-pool too
>   ceph tools: add erasure code management functions
>   ceph tools: add function to get pool properties
>   ceph tools: add destroy crush rule function
>   ceph pools: allow to create erasure code pools
> 
>  PVE/API2/Ceph/Pools.pm | 129 ++++++++++++++++++++++++++++++++++++++---
>  PVE/Ceph/Tools.pm      |  83 +++++++++++++++++++++++++-
>  2 files changed, 201 insertions(+), 11 deletions(-)
> 


applied series, but squashed the three "add helper" patches in the middle into one,
and reworked the approach of the last one a bit (see reply there), thanks!





* Re: [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools
  2022-04-28 14:32   ` Dominik Csapak
@ 2022-04-29  7:33     ` Aaron Lauterer
  2022-04-29  7:40       ` Thomas Lamprecht
  0 siblings, 1 reply; 14+ messages in thread
From: Aaron Lauterer @ 2022-04-29  7:33 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion


On 4/28/22 16:32, Dominik Csapak wrote:
> is there a specific reason why you still force the autocreation
> of the storage? i don't mind really, but giving a reason
> would be nice. if there is no reason, i'd drop that since
> the user can specify it anyway and adding one manually
> should also not be too hard..

I guess I could have been more explicit in the commit message. I decided to always create the storage config to have the coupling of ec pool and replicated metadata pool in place. We currently only have the RBD use case for EC pools in PVE itself and otherwise people might forget about it and any RBD image created without the data-pool parameter configured will be placed in the replicated pool.





* Re: [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools
  2022-04-29  7:33     ` Aaron Lauterer
@ 2022-04-29  7:40       ` Thomas Lamprecht
  2022-04-29  9:27         ` Aaron Lauterer
  0 siblings, 1 reply; 14+ messages in thread
From: Thomas Lamprecht @ 2022-04-29  7:40 UTC (permalink / raw)
  To: Proxmox VE development discussion, Aaron Lauterer, Dominik Csapak

On 29.04.22 09:33, Aaron Lauterer wrote:
> 
> On 4/28/22 16:32, Dominik Csapak wrote:
>> is there a specific reason why you still force the autocreation
>> of the storage? i don't mind really, but giving a reason
>> would be nice. if there is no reason, i'd drop that since
>> the user can specify it anyway and adding one manually
>> should also not be too hard..
> 
> I guess I could have been more explicit in the commit message. I decided to always create the storage config to have the coupling of ec pool and replicated metadata pool in place. We currently only have the RBD use case for EC pools in PVE itself and otherwise people might forget about it and any RBD image created without the data-pool parameter configured will be placed in the replicated pool.
> 

IMO still confusing to enforce it, default to true can be Ok, but that's
something different than hard-enforcing it always, would surely lead to a bug
report sooner or later.

Oh, and for the CLI I'd like to see this as separate command, i.e.:

pveceph ec-pool create
pveceph ec-pool destroy

That can then have the expanded options format, which can be nicer to use on CLI,
and also default to add-storage there already.


small ot/general nit: can you please use a text-width of 80 (or even 100) characters on
the mailing list; my MTA is configured to show text as-is to ensure patches are
seen correctly, so long replies require horizontal scrolling






* Re: [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools
  2022-04-29  7:40       ` Thomas Lamprecht
@ 2022-04-29  9:27         ` Aaron Lauterer
  0 siblings, 0 replies; 14+ messages in thread
From: Aaron Lauterer @ 2022-04-29  9:27 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion, Dominik Csapak



On 4/29/22 09:40, Thomas Lamprecht wrote:
> On 29.04.22 09:33, Aaron Lauterer wrote:
>>
>> On 4/28/22 16:32, Dominik Csapak wrote:
>>> is there a specific reason why you still force the autocreation
>>> of the storage? i don't mind really, but giving a reason
>>> would be nice. if there is no reason, i'd drop that since
>>> the user can specify it anyway and adding one manually
>>> should also not be too hard..
>>
>> I guess I could have been more explicit in the commit message. I decided to always create the storage config to have the coupling of ec pool and replicated metadata pool in place. We currently only have the RBD use case for EC pools in PVE itself and otherwise people might forget about it and any RBD image created without the data-pool parameter configured will be placed in the replicated pool.
>>
> 
> IMO still confusing to enforce it, default to true can be Ok, but that's
> something different than hard-enforcing it always, would surely lead to a bug
> report sooner or later.

I am sending a follow-up which makes it a default that can be explicitly set.
> 
> Oh, and for the CLI I'd like to see this as separate command, i.e.:
> 
> pveceph ec-pool create
> pveceph ec-pool destroy
> 
> That can then have the expanded options format, which can be nicer to use on CLI,
> and also default to add-storage there already.

Okay. One question on how to handle the destroy part. Since we do have 2 pools, 
depending on what the user provides, should we search for the storage name and 
any of the possible pool names and then figure out the missing pools from there 
and destroy both pools?
This way, the user could provide us with any of the 3 possibilities. On the 
other hand, this could be unexpected behavior, and we may rather let the user 
destroy 2 pools.

Or should it be the regular pool destroy, but set the parameters to destroy ec 
profiles?

> 
> 
> small ot/general nit: can you please use a text-width of 80 (or even 100) characters on
> the mailing list; my MTA is configured to show text as-is to ensure patches are
> seen correctly, so long replies require horizontal scrolling
> 

Sorry for that. Should be fixed now :)





Thread overview: 14+ messages
2022-04-28 11:58 [pve-devel] [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Aaron Lauterer
2022-04-28 11:58 ` [pve-devel] [PATCH manager 1/5] api: ceph: $get_storages check if data-pool too Aaron Lauterer
2022-04-28 11:58 ` [pve-devel] [PATCH manager 2/5] ceph tools: add erasure code management functions Aaron Lauterer
2022-04-28 14:19   ` Dominik Csapak
2022-04-28 11:58 ` [pve-devel] [PATCH manager 3/5] ceph tools: add function to get pool properties Aaron Lauterer
2022-04-28 14:20   ` Dominik Csapak
2022-04-28 11:58 ` [pve-devel] [PATCH manager 4/5] ceph tools: add destroy crush rule function Aaron Lauterer
2022-04-28 11:58 ` [pve-devel] [PATCH manager 5/5] ceph pools: allow to create erasure code pools Aaron Lauterer
2022-04-28 14:32   ` Dominik Csapak
2022-04-29  7:33     ` Aaron Lauterer
2022-04-29  7:40       ` Thomas Lamprecht
2022-04-29  9:27         ` Aaron Lauterer
2022-04-28 18:31   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-28 18:33 ` [pve-devel] applied: [PATCH manager 0/5] Ceph add basic erasure code pool mgmt support Thomas Lamprecht
