public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager v2 1/5] ceph: split out pool set into own method
@ 2020-10-19 10:39 Alwin Antreich
  2020-10-19 10:39 ` [pve-devel] [PATCH manager v2 2/5] ceph: allow to alter pool settings Alwin Antreich
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Alwin Antreich @ 2020-10-19 10:39 UTC (permalink / raw)
  To: pve-devel

to reduce code duplication and make it easier to add more options for
pool commands.

Use a new rados object for each 'osd pool set', as each command can set
an option independently of the previous command's success or failure. On
failure, a new rados object would need to be created, which would
confuse the task tracking of the REST environment.
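
For illustration, the new helper takes the pool name plus a hash of the
options to apply (sketch only; pool name and option values here are made
up):

    PVE::Ceph::Tools::set_pool('testpool', {
        size       => 3,
        min_size   => 2,
        crush_rule => 'replicated_rule',
    });

Each key is sent as its own 'osd pool set' (or 'osd pool application
enable') mon command and is removed from the hash on success, so the
final die lists only the options that could not be applied.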

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
Note:
    v1 -> v2:
	* reorder patches, since pool create & set share common pool
	  options.
	* include new setpool API

 PVE/API2/Ceph.pm  | 17 +++++++----
 PVE/Ceph/Tools.pm | 74 +++++++++++++++++++++++++----------------------
 2 files changed, 52 insertions(+), 39 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index d7e5892c..48d0484f 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -751,14 +751,21 @@ __PACKAGE__->register_method ({
 		if !PVE::JSONSchema::parse_storage_id($pool);
 	}
 
-	my $pg_num = $param->{pg_num} || 128;
-	my $size = $param->{size} || 3;
-	my $min_size = $param->{min_size} || 2;
-	my $application = $param->{application} // 'rbd';
+	my $ceph_param = \%$param;
+	for my $item ('add_storages', 'name', 'node') {
+	    # not ceph parameters
+	    delete $ceph_param->{$item};
+	}
+
+	# pool defaults
+	$ceph_param->{pg_num} //= 128;
+	$ceph_param->{size} //= 3;
+	$ceph_param->{min_size} //= 2;
+	$ceph_param->{application} //= 'rbd';
 
 	my $worker = sub {
 
-	    PVE::Ceph::Tools::create_pool($pool, $param);
+	    PVE::Ceph::Tools::create_pool($pool, $ceph_param);
 
 	    if ($param->{add_storages}) {
 		my $err;
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 2d8a4c1d..cc4238c9 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -202,6 +202,45 @@ sub check_ceph_enabled {
     return 1;
 }
 
+sub set_pool {
+    my ($pool, $param) = @_;
+
+    foreach my $setting (keys %$param) {
+	my $value = $param->{$setting};
+
+	my $command;
+	if ($setting eq 'application') {
+	    $command = {
+		prefix => "osd pool application enable",
+		pool   => "$pool",
+		app    => "$value",
+	    };
+	} else {
+	    $command = {
+		prefix => "osd pool set",
+		pool   => "$pool",
+		var    => "$setting",
+		val    => "$value",
+		format => 'plain',
+	    };
+	}
+
+	my $rados = PVE::RADOS->new();
+	eval { $rados->mon_command($command); };
+	if ($@) {
+	    print "$@";
+	} else {
+	    delete $param->{$setting};
+	}
+    }
+
+    if ((keys %$param) > 0) {
+	my @missing = join(', ', keys %$param );
+	die "Could not set: @missing\n";
+    }
+
+}
+
 sub create_pool {
     my ($pool, $param, $rados) = @_;
 
@@ -210,9 +249,6 @@ sub create_pool {
     }
 
     my $pg_num = $param->{pg_num} || 128;
-    my $size = $param->{size} || 3;
-    my $min_size = $param->{min_size} || 2;
-    my $application = $param->{application} // 'rbd';
 
     $rados->mon_command({
 	prefix => "osd pool create",
@@ -221,37 +257,7 @@ sub create_pool {
 	format => 'plain',
     });
 
-    $rados->mon_command({
-	prefix => "osd pool set",
-	pool => $pool,
-	var => 'min_size',
-	val => "$min_size",
-	format => 'plain',
-    });
-
-    $rados->mon_command({
-	prefix => "osd pool set",
-	pool => $pool,
-	var => 'size',
-	val => "$size",
-	format => 'plain',
-    });
-
-    if (defined($param->{crush_rule})) {
-	$rados->mon_command({
-	    prefix => "osd pool set",
-	    pool => $pool,
-	    var => 'crush_rule',
-	    val => $param->{crush_rule},
-	    format => 'plain',
-	});
-    }
-
-    $rados->mon_command({
-	prefix => "osd pool application enable",
-	pool => $pool,
-	app => $application,
-    });
+    set_pool($pool, $param);
 
 }
 
-- 
2.27.0

* [pve-devel] [PATCH manager v2 2/5] ceph: allow to alter pool settings
  2020-10-19 10:39 [pve-devel] [PATCH manager v2 1/5] ceph: split out pool set into own method Alwin Antreich
@ 2020-10-19 10:39 ` Alwin Antreich
  2020-10-19 10:39 ` [pve-devel] [PATCH manager v2 3/5] ceph: use pool common options pool create Alwin Antreich
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Alwin Antreich @ 2020-10-19 10:39 UTC (permalink / raw)
  To: pve-devel

after creation, so that users don't need to go the Ceph tooling route.
Separate out the common pool options so they can be reused in other places.
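
A pool can then be adjusted after creation through the new setpool API
endpoint or via the CLI, for example (pool name and values are
illustrative):

    pveceph pool set testpool --size 3 --min_size 2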

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 PVE/API2/Ceph.pm   | 98 ++++++++++++++++++++++++++++++++++++++++++++++
 PVE/CLI/pveceph.pm |  1 +
 2 files changed, 99 insertions(+)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 48d0484f..7cdbdccd 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -674,6 +674,63 @@ __PACKAGE__->register_method ({
 	return $data;
     }});
 
+
+my $ceph_pool_common_options = sub {
+    my ($nodefault) = shift;
+    my $options = {
+	name => {
+	    description => "The name of the pool. It must be unique.",
+	    type => 'string',
+	},
+	size => {
+	    description => 'Number of replicas per object',
+	    type => 'integer',
+	    default => 3,
+	    optional => 1,
+	    minimum => 1,
+	    maximum => 7,
+	},
+	min_size => {
+	    description => 'Minimum number of replicas per object',
+	    type => 'integer',
+	    default => 2,
+	    optional => 1,
+	    minimum => 1,
+	    maximum => 7,
+	},
+	pg_num => {
+	    description => "Number of placement groups.",
+	    type => 'integer',
+	    default => 128,
+	    optional => 1,
+	    minimum => 8,
+	    maximum => 32768,
+	},
+	crush_rule => {
+	    description => "The rule to use for mapping object placement in the cluster.",
+	    type => 'string',
+	    optional => 1,
+	},
+	application => {
+	    description => "The application of the pool.",
+	    default => 'rbd',
+	    type => 'string',
+	    enum => ['rbd', 'cephfs', 'rgw'],
+	    optional => 1,
+	},
+    };
+
+    if (!$nodefault) {
+	return $options;
+    } else {
+	foreach my $key (keys %$options) {
+	    delete $options->{$key}->{default} if defined($options->{$key}->{default});
+	}
+	return $options;
+    }
+};
+
+
 __PACKAGE__->register_method ({
     name => 'createpool',
     path => 'pools',
@@ -974,6 +1031,47 @@ __PACKAGE__->register_method ({
     }});
 
 
+__PACKAGE__->register_method ({
+    name => 'setpool',
+    path => 'pools/{name}',
+    method => 'PUT',
+    description => "Change POOL settings",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Modify' ]],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    %{ $ceph_pool_common_options->('nodefault') },
+	},
+    },
+    returns => { type => 'string' },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_configured();
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $pool = $param->{name};
+	my $ceph_param = \%$param;
+	for my $item ('name', 'node') {
+	    # not ceph parameters
+	    delete $ceph_param->{$item};
+	}
+
+	my $worker = sub {
+	    PVE::Ceph::Tools::set_pool($pool, $ceph_param);
+	};
+
+	return $rpcenv->fork_worker('cephsetpool', $pool,  $authuser, $worker);
+    }});
+
+
 __PACKAGE__->register_method ({
     name => 'crush',
     path => 'crush',
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index af14cbc0..b8420a1d 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -198,6 +198,7 @@ our $cmddef = {
 	}, $PVE::RESTHandler::standard_output_options],
 	create => [ 'PVE::API2::Ceph', 'createpool', ['name'], { node => $nodename }],
 	destroy => [ 'PVE::API2::Ceph', 'destroypool', ['name'], { node => $nodename } ],
+	set => [ 'PVE::API2::Ceph', 'setpool', ['name'], { node => $nodename } ],
     },
     lspools => { alias => 'pool ls' },
     createpool => { alias => 'pool create' },
-- 
2.27.0

* [pve-devel] [PATCH manager v2 3/5] ceph: use pool common options pool create
  2020-10-19 10:39 [pve-devel] [PATCH manager v2 1/5] ceph: split out pool set into own method Alwin Antreich
  2020-10-19 10:39 ` [pve-devel] [PATCH manager v2 2/5] ceph: allow to alter pool settings Alwin Antreich
@ 2020-10-19 10:39 ` Alwin Antreich
  2020-10-19 10:39 ` [pve-devel] [PATCH manager v2 4/5] ceph: add pg_autoscale_mode to " Alwin Antreich
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Alwin Antreich @ 2020-10-19 10:39 UTC (permalink / raw)
  To: pve-devel

to keep the pool create & set options in sync.
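
Both endpoints now build their parameter schema from the same helper,
roughly (condensed from the two call sites):

    # pool create: option defaults included
    properties => {
        node => get_standard_option('pve-node'),
        add_storages => { ... },
        %{ $ceph_pool_common_options->() },
    },

    # pool set: defaults stripped via the 'nodefault' argument
    properties => {
        node => get_standard_option('pve-node'),
        %{ $ceph_pool_common_options->('nodefault') },
    },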

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 PVE/API2/Ceph.pm | 40 +---------------------------------------
 1 file changed, 1 insertion(+), 39 deletions(-)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 7cdbdccd..0aeb5075 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -745,50 +745,12 @@ __PACKAGE__->register_method ({
 	additionalProperties => 0,
 	properties => {
 	    node => get_standard_option('pve-node'),
-	    name => {
-		description => "The name of the pool. It must be unique.",
-		type => 'string',
-	    },
-	    size => {
-		description => 'Number of replicas per object',
-		type => 'integer',
-		default => 3,
-		optional => 1,
-		minimum => 1,
-		maximum => 7,
-	    },
-	    min_size => {
-		description => 'Minimum number of replicas per object',
-		type => 'integer',
-		default => 2,
-		optional => 1,
-		minimum => 1,
-		maximum => 7,
-	    },
-	    pg_num => {
-		description => "Number of placement groups.",
-		type => 'integer',
-		default => 128,
-		optional => 1,
-		minimum => 8,
-		maximum => 32768,
-	    },
-	    crush_rule => {
-		description => "The rule to use for mapping object placement in the cluster.",
-		type => 'string',
-		optional => 1,
-	    },
-	    application => {
-		description => "The application of the pool, 'rbd' by default.",
-		type => 'string',
-		enum => ['rbd', 'cephfs', 'rgw'],
-		optional => 1,
-	    },
 	    add_storages => {
 		description => "Configure VM and CT storage using the new pool.",
 		type => 'boolean',
 		optional => 1,
 	    },
+	    %{ $ceph_pool_common_options->() },
 	},
     },
     returns => { type => 'string' },
-- 
2.27.0

* [pve-devel] [PATCH manager v2 4/5] ceph: add pg_autoscale_mode to pool create
  2020-10-19 10:39 [pve-devel] [PATCH manager v2 1/5] ceph: split out pool set into own method Alwin Antreich
  2020-10-19 10:39 ` [pve-devel] [PATCH manager v2 2/5] ceph: allow to alter pool settings Alwin Antreich
  2020-10-19 10:39 ` [pve-devel] [PATCH manager v2 3/5] ceph: use pool common options pool create Alwin Antreich
@ 2020-10-19 10:39 ` Alwin Antreich
  2020-10-19 10:39 ` [pve-devel] [PATCH manager v2 5/5] ceph: gui: add autoscale mode " Alwin Antreich
  2020-10-22 16:52 ` [pve-devel] applied-series: [PATCH manager v2 1/5] ceph: split out pool set into own method Thomas Lamprecht
  4 siblings, 0 replies; 6+ messages in thread
From: Alwin Antreich @ 2020-10-19 10:39 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 PVE/API2/Ceph.pm | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 0aeb5075..69fe3d6d 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -718,6 +718,13 @@ my $ceph_pool_common_options = sub {
 	    enum => ['rbd', 'cephfs', 'rgw'],
 	    optional => 1,
 	},
+	pg_autoscale_mode => {
+	    description => "The automatic PG scaling mode of the pool.",
+	    type => 'string',
+	    enum => ['on', 'off', 'warn'],
+	    default => 'warn',
+	    optional => 1,
+	},
     };
 
     if (!$nodefault) {
@@ -781,6 +788,7 @@ __PACKAGE__->register_method ({
 	$ceph_param->{size} //= 3;
 	$ceph_param->{min_size} //= 2;
 	$ceph_param->{application} //= 'rbd';
+	$ceph_param->{pg_autoscale_mode} //= 'warn';
 
 	my $worker = sub {
 
-- 
2.27.0

* [pve-devel] [PATCH manager v2 5/5] ceph: gui: add autoscale mode to pool create
  2020-10-19 10:39 [pve-devel] [PATCH manager v2 1/5] ceph: split out pool set into own method Alwin Antreich
                   ` (2 preceding siblings ...)
  2020-10-19 10:39 ` [pve-devel] [PATCH manager v2 4/5] ceph: add pg_autoscale_mode to " Alwin Antreich
@ 2020-10-19 10:39 ` Alwin Antreich
  2020-10-22 16:52 ` [pve-devel] applied-series: [PATCH manager v2 1/5] ceph: split out pool set into own method Thomas Lamprecht
  4 siblings, 0 replies; 6+ messages in thread
From: Alwin Antreich @ 2020-10-19 10:39 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 www/manager6/ceph/Pool.js | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 19eb01e9..11bcf9d5 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -39,6 +39,19 @@ Ext.define('PVE.CephCreatePool', {
 	    name: 'crush_rule',
 	    allowBlank: false
 	},
+	{
+	    xtype: 'proxmoxKVComboBox',
+	    fieldLabel: 'PG Autoscale Mode', // do not localize
+	    name: 'pg_autoscale_mode',
+	    comboItems: [
+		['warn', 'warn'],
+		['on', 'on'],
+		['off', 'off'],
+	    ],
+	    value: 'warn',
+	    allowBlank: false,
+	    autoSelect: false,
+	},
 	{
 	    xtype: 'proxmoxintegerfield',
 	    fieldLabel: 'pg_num',
-- 
2.27.0

* [pve-devel] applied-series: [PATCH manager v2 1/5] ceph: split out pool set into own method
  2020-10-19 10:39 [pve-devel] [PATCH manager v2 1/5] ceph: split out pool set into own method Alwin Antreich
                   ` (3 preceding siblings ...)
  2020-10-19 10:39 ` [pve-devel] [PATCH manager v2 5/5] ceph: gui: add autoscale mode " Alwin Antreich
@ 2020-10-22 16:52 ` Thomas Lamprecht
  4 siblings, 0 replies; 6+ messages in thread
From: Thomas Lamprecht @ 2020-10-22 16:52 UTC (permalink / raw)
  To: Proxmox VE development discussion, Alwin Antreich

On 19.10.20 12:39, Alwin Antreich wrote:
> to reduce code duplication and make it easier to add more options for
> pool commands.
> 
> Use a new rados object for each 'osd pool set', as each command can set
> an option independently of the previous command's success or failure. On
> failure, a new rados object would need to be created, which would
> confuse the task tracking of the REST environment.
> 
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
> Note:
>     v1 -> v2:
> 	* reorder patches, since pool create & set share common pool
> 	  options.
> 	* include new setpool API
> 
>  PVE/API2/Ceph.pm  | 17 +++++++----
>  PVE/Ceph/Tools.pm | 74 +++++++++++++++++++++++++----------------------
>  2 files changed, 52 insertions(+), 39 deletions(-)
> 
>

applied series, thanks! FYI: Did a followup refactoring to the part where the defaults
are deleted conditionally in the common option method.
