public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed
@ 2020-11-24 10:58 Alwin Antreich
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 1/8] api: ceph: subclass pools Alwin Antreich
                   ` (8 more replies)
  0 siblings, 9 replies; 15+ messages in thread
From: Alwin Antreich @ 2020-11-24 10:58 UTC (permalink / raw)
  To: pve-devel

This series allows pools to be edited via GUI & CLI. This should make it
easier for users to adjust pool settings, since they no longer have to go
through the ceph tooling directly.

v1 -> v2:
    - move pools endpoint to a subclass
    - add pg autoscale status and settings
    - rework and flatten the grid view of ceph pools
    - rework the create input panel
    - add an edit button using the reworked input panel
    - fix broken add_storages
    - extend the set_pool function to avoid a race condition
    - remove the pg_autoscale_mode default to allow Ceph's setting to
      take precedence


Alwin Antreich (8):
  api: ceph: subclass pools
  ceph: add get api call for single pool
  ceph: add autoscale_status to api calls
  ceph: gui: add autoscale & flatten pool view
  ceph: gui: rework pool input panel
  ceph: schema: change min. required PG count to 1
  ceph: remove the pg_autoscale_mode default
  fix: ceph: always set pool size first

 PVE/API2/Ceph/Makefile    |   1 +
 PVE/API2/Ceph.pm          | 380 +------------------------
 PVE/API2/Ceph/POOLS.pm    | 572 ++++++++++++++++++++++++++++++++++++++
 PVE/CLI/pveceph.pm        |  12 +-
 PVE/Ceph/Tools.pm         |  70 ++++-
 www/manager6/ceph/Pool.js | 405 +++++++++++++++++++--------
 6 files changed, 950 insertions(+), 490 deletions(-)
 create mode 100644 PVE/API2/Ceph/POOLS.pm

-- 
2.27.0






* [pve-devel] [PATCH manager v2 1/8] api: ceph: subclass pools
  2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
@ 2020-11-24 10:58 ` Alwin Antreich
  2020-11-24 13:53   ` Dominik Csapak
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 2/8] ceph: add get api call for single pool Alwin Antreich
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Alwin Antreich @ 2020-11-24 10:58 UTC (permalink / raw)
  To: pve-devel

for better handling, and since the pool endpoints have gained more entries.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
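A minimal sketch of the subclassing pattern used here, for reviewers less
familiar with PVE::RESTHandler (the package name PVE::API2::Ceph::Example
is hypothetical, purely for illustration):

    package PVE::API2::Ceph::Example;

    use strict;
    use warnings;

    use PVE::JSONSchema qw(get_standard_option);

    use base qw(PVE::RESTHandler);

    # The parent module registers the subclass once:
    #   __PACKAGE__->register_method ({
    #       subclass => "PVE::API2::Ceph::Example",
    #       path => 'example',
    #   });
    # After that, a method with path => '' answers /nodes/{node}/ceph/example
    # and a method with path => '{name}' answers .../ceph/example/{name}.
    __PACKAGE__->register_method ({
        name => 'index',
        path => '',
        method => 'GET',
        description => "Directory index.",
        permissions => { user => 'all' },
        parameters => {
            additionalProperties => 0,
            properties => {
                node => get_standard_option('pve-node'),
            },
        },
        returns => { type => 'array', items => { type => 'object' } },
        code => sub { return []; },
    });

    1;
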
 PVE/API2/Ceph/Makefile |   1 +
 PVE/API2/Ceph.pm       | 380 +-------------------------------------
 PVE/API2/Ceph/POOLS.pm | 404 +++++++++++++++++++++++++++++++++++++++++
 PVE/CLI/pveceph.pm     |   9 +-
 4 files changed, 416 insertions(+), 378 deletions(-)
 create mode 100644 PVE/API2/Ceph/POOLS.pm

diff --git a/PVE/API2/Ceph/Makefile b/PVE/API2/Ceph/Makefile
index 5b6493d5..65c7b862 100644
--- a/PVE/API2/Ceph/Makefile
+++ b/PVE/API2/Ceph/Makefile
@@ -5,6 +5,7 @@ PERLSOURCE= 			\
 	MON.pm			\
 	OSD.pm			\
 	FS.pm			\
+	POOLS.pm		\
 	MDS.pm
 
 all:
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index c3a3091d..8e7e525e 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -20,6 +20,7 @@ use PVE::Tools qw(run_command file_get_contents file_set_contents);
 
 use PVE::API2::Ceph::OSD;
 use PVE::API2::Ceph::FS;
+use PVE::API2::Ceph::POOLS;
 use PVE::API2::Ceph::MDS;
 use PVE::API2::Ceph::MGR;
 use PVE::API2::Ceph::MON;
@@ -54,6 +55,11 @@ __PACKAGE__->register_method ({
     path => 'fs',
 });
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Ceph::POOLS",
+    path => 'pools',
+});
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
@@ -239,35 +245,6 @@ __PACKAGE__->register_method ({
 	return $res;
     }});
 
-my $add_storage = sub {
-    my ($pool, $storeid) = @_;
-
-    my $storage_params = {
-	type => 'rbd',
-	pool => $pool,
-	storage => $storeid,
-	krbd => 0,
-	content => 'rootdir,images',
-    };
-
-    PVE::API2::Storage::Config->create($storage_params);
-};
-
-my $get_storages = sub {
-    my ($pool) = @_;
-
-    my $cfg = PVE::Storage::config();
-
-    my $storages = $cfg->{ids};
-    my $res = {};
-    foreach my $storeid (keys %$storages) {
-	my $curr = $storages->{$storeid};
-	$res->{$storeid} = $storages->{$storeid}
-	    if $curr->{type} eq 'rbd' && $pool eq $curr->{pool};
-    }
-
-    return $res;
-};
 
 __PACKAGE__->register_method ({
     name => 'init',
@@ -583,227 +560,6 @@ __PACKAGE__->register_method ({
 	return PVE::Ceph::Tools::ceph_cluster_status();
     }});
 
-__PACKAGE__->register_method ({
-    name => 'lspools',
-    path => 'pools',
-    method => 'GET',
-    description => "List all pools.",
-    proxyto => 'node',
-    protected => 1,
-    permissions => {
-	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
-    },
-    parameters => {
-	additionalProperties => 0,
-	properties => {
-	    node => get_standard_option('pve-node'),
-	},
-    },
-    returns => {
-	type => 'array',
-	items => {
-	    type => "object",
-	    properties => {
-		pool => { type => 'integer', title => 'ID' },
-		pool_name => { type => 'string', title => 'Name' },
-		size => { type => 'integer', title => 'Size' },
-		min_size => { type => 'integer', title => 'Min Size' },
-		pg_num => { type => 'integer', title => 'PG Num' },
-		pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
-		crush_rule => { type => 'integer', title => 'Crush Rule' },
-		crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
-		percent_used => { type => 'number', title => '%-Used' },
-		bytes_used => { type => 'integer', title => 'Used' },
-	    },
-	},
-	links => [ { rel => 'child', href => "{pool_name}" } ],
-    },
-    code => sub {
-	my ($param) = @_;
-
-	PVE::Ceph::Tools::check_ceph_inited();
-
-	my $rados = PVE::RADOS->new();
-
-	my $stats = {};
-	my $res = $rados->mon_command({ prefix => 'df' });
-
-	foreach my $d (@{$res->{pools}}) {
-	    next if !$d->{stats};
-	    next if !defined($d->{id});
-	    $stats->{$d->{id}} = $d->{stats};
-	}
-
-	$res = $rados->mon_command({ prefix => 'osd dump' });
-	my $rulestmp = $rados->mon_command({ prefix => 'osd crush rule dump'});
-
-	my $rules = {};
-	for my $rule (@$rulestmp) {
-	    $rules->{$rule->{rule_id}} = $rule->{rule_name};
-	}
-
-	my $data = [];
-	my $attr_list = [
-	    'pool',
-	    'pool_name',
-	    'size',
-	    'min_size',
-	    'pg_num',
-	    'crush_rule',
-	    'pg_autoscale_mode',
-	];
-
-	foreach my $e (@{$res->{pools}}) {
-	    my $d = {};
-	    foreach my $attr (@$attr_list) {
-		$d->{$attr} = $e->{$attr} if defined($e->{$attr});
-	    }
-
-	    if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
-		$d->{crush_rule_name} = $rules->{$d->{crush_rule}};
-	    }
-
-	    if (my $s = $stats->{$d->{pool}}) {
-		$d->{bytes_used} = $s->{bytes_used};
-		$d->{percent_used} = $s->{percent_used};
-	    }
-	    push @$data, $d;
-	}
-
-
-	return $data;
-    }});
-
-
-my $ceph_pool_common_options = sub {
-    my ($nodefault) = shift;
-    my $options = {
-	name => {
-	    description => "The name of the pool. It must be unique.",
-	    type => 'string',
-	},
-	size => {
-	    description => 'Number of replicas per object',
-	    type => 'integer',
-	    default => 3,
-	    optional => 1,
-	    minimum => 1,
-	    maximum => 7,
-	},
-	min_size => {
-	    description => 'Minimum number of replicas per object',
-	    type => 'integer',
-	    default => 2,
-	    optional => 1,
-	    minimum => 1,
-	    maximum => 7,
-	},
-	pg_num => {
-	    description => "Number of placement groups.",
-	    type => 'integer',
-	    default => 128,
-	    optional => 1,
-	    minimum => 8,
-	    maximum => 32768,
-	},
-	crush_rule => {
-	    description => "The rule to use for mapping object placement in the cluster.",
-	    type => 'string',
-	    optional => 1,
-	},
-	application => {
-	    description => "The application of the pool.",
-	    default => 'rbd',
-	    type => 'string',
-	    enum => ['rbd', 'cephfs', 'rgw'],
-	    optional => 1,
-	},
-	pg_autoscale_mode => {
-	    description => "The automatic PG scaling mode of the pool.",
-	    type => 'string',
-	    enum => ['on', 'off', 'warn'],
-	    default => 'warn',
-	    optional => 1,
-	},
-    };
-
-    if ($nodefault) {
-	delete $options->{$_}->{default} for keys %$options;
-    }
-    return $options;
-};
-
-
-__PACKAGE__->register_method ({
-    name => 'createpool',
-    path => 'pools',
-    method => 'POST',
-    description => "Create POOL",
-    proxyto => 'node',
-    protected => 1,
-    permissions => {
-	check => ['perm', '/', [ 'Sys.Modify' ]],
-    },
-    parameters => {
-	additionalProperties => 0,
-	properties => {
-	    node => get_standard_option('pve-node'),
-	    add_storages => {
-		description => "Configure VM and CT storage using the new pool.",
-		type => 'boolean',
-		optional => 1,
-	    },
-	    %{ $ceph_pool_common_options->() },
-	},
-    },
-    returns => { type => 'string' },
-    code => sub {
-	my ($param) = @_;
-
-	PVE::Cluster::check_cfs_quorum();
-	PVE::Ceph::Tools::check_ceph_configured();
-
-	my $pool = $param->{name};
-	my $rpcenv = PVE::RPCEnvironment::get();
-	my $user = $rpcenv->get_user();
-
-	if ($param->{add_storages}) {
-	    $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
-	    die "pool name contains characters which are illegal for storage naming\n"
-		if !PVE::JSONSchema::parse_storage_id($pool);
-	}
-
-	my $ceph_param = \%$param;
-	for my $item ('add_storages', 'name', 'node') {
-	    # not ceph parameters
-	    delete $ceph_param->{$item};
-	}
-
-	# pool defaults
-	$ceph_param->{pg_num} //= 128;
-	$ceph_param->{size} //= 3;
-	$ceph_param->{min_size} //= 2;
-	$ceph_param->{application} //= 'rbd';
-	$ceph_param->{pg_autoscale_mode} //= 'warn';
-
-	my $worker = sub {
-
-	    PVE::Ceph::Tools::create_pool($pool, $ceph_param);
-
-	    if ($param->{add_storages}) {
-		my $err;
-		eval { $add_storage->($pool, "${pool}"); };
-		if ($@) {
-		    warn "failed to add storage: $@";
-		    $err = 1;
-		}
-		die "adding storage for pool '$pool' failed, check log and add manually!\n"
-		    if $err;
-	    }
-	};
-
-	return $rpcenv->fork_worker('cephcreatepool', $pool,  $user, $worker);
-    }});
 
 my $possible_flags = PVE::Ceph::Tools::get_possible_osd_flags();
 my $possible_flags_list = [ sort keys %$possible_flags ];
@@ -913,130 +669,6 @@ __PACKAGE__->register_method ({
 	return undef;
     }});
 
-__PACKAGE__->register_method ({
-    name => 'destroypool',
-    path => 'pools/{name}',
-    method => 'DELETE',
-    description => "Destroy pool",
-    proxyto => 'node',
-    protected => 1,
-    permissions => {
-	check => ['perm', '/', [ 'Sys.Modify' ]],
-    },
-    parameters => {
-	additionalProperties => 0,
-	properties => {
-	    node => get_standard_option('pve-node'),
-	    name => {
-		description => "The name of the pool. It must be unique.",
-		type => 'string',
-	    },
-	    force => {
-		description => "If true, destroys pool even if in use",
-		type => 'boolean',
-		optional => 1,
-		default => 0,
-	    },
-	    remove_storages => {
-		description => "Remove all pveceph-managed storages configured for this pool",
-		type => 'boolean',
-		optional => 1,
-		default => 0,
-	    },
-	},
-    },
-    returns => { type => 'string' },
-    code => sub {
-	my ($param) = @_;
-
-	PVE::Ceph::Tools::check_ceph_inited();
-
-	my $rpcenv = PVE::RPCEnvironment::get();
-	my $user = $rpcenv->get_user();
-	$rpcenv->check($user, '/storage', ['Datastore.Allocate'])
-	    if $param->{remove_storages};
-
-	my $pool = $param->{name};
-
-	my $worker = sub {
-	    my $storages = $get_storages->($pool);
-
-	    # if not forced, destroy ceph pool only when no
-	    # vm disks are on it anymore
-	    if (!$param->{force}) {
-		my $storagecfg = PVE::Storage::config();
-		foreach my $storeid (keys %$storages) {
-		    my $storage = $storages->{$storeid};
-
-		    # check if any vm disks are on the pool
-		    print "checking storage '$storeid' for RBD images..\n";
-		    my $res = PVE::Storage::vdisk_list($storagecfg, $storeid);
-		    die "ceph pool '$pool' still in use by storage '$storeid'\n"
-			if @{$res->{$storeid}} != 0;
-		}
-	    }
-
-	    PVE::Ceph::Tools::destroy_pool($pool);
-
-	    if ($param->{remove_storages}) {
-		my $err;
-		foreach my $storeid (keys %$storages) {
-		    # skip external clusters, not managed by pveceph
-		    next if $storages->{$storeid}->{monhost};
-		    eval { PVE::API2::Storage::Config->delete({storage => $storeid}) };
-		    if ($@) {
-			warn "failed to remove storage '$storeid': $@\n";
-			$err = 1;
-		    }
-		}
-		die "failed to remove (some) storages - check log and remove manually!\n"
-		    if $err;
-	    }
-	};
-	return $rpcenv->fork_worker('cephdestroypool', $pool,  $user, $worker);
-    }});
-
-
-__PACKAGE__->register_method ({
-    name => 'setpool',
-    path => 'pools/{name}',
-    method => 'PUT',
-    description => "Change POOL settings",
-    proxyto => 'node',
-    protected => 1,
-    permissions => {
-	check => ['perm', '/', [ 'Sys.Modify' ]],
-    },
-    parameters => {
-	additionalProperties => 0,
-	properties => {
-	    node => get_standard_option('pve-node'),
-	    %{ $ceph_pool_common_options->('nodefault') },
-	},
-    },
-    returns => { type => 'string' },
-    code => sub {
-	my ($param) = @_;
-
-	PVE::Ceph::Tools::check_ceph_configured();
-
-	my $rpcenv = PVE::RPCEnvironment::get();
-	my $authuser = $rpcenv->get_user();
-
-	my $pool = $param->{name};
-	my $ceph_param = \%$param;
-	for my $item ('name', 'node') {
-	    # not ceph parameters
-	    delete $ceph_param->{$item};
-	}
-
-	my $worker = sub {
-	    PVE::Ceph::Tools::set_pool($pool, $ceph_param);
-	};
-
-	return $rpcenv->fork_worker('cephsetpool', $pool,  $authuser, $worker);
-    }});
-
 
 __PACKAGE__->register_method ({
     name => 'crush',
diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
new file mode 100644
index 00000000..744f2bce
--- /dev/null
+++ b/PVE/API2/Ceph/POOLS.pm
@@ -0,0 +1,404 @@
+package PVE::API2::Ceph::POOLS;
+
+use strict;
+use warnings;
+
+use PVE::Ceph::Tools;
+use PVE::Ceph::Services;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::RADOS;
+use PVE::RESTHandler;
+use PVE::RPCEnvironment;
+use PVE::Storage;
+
+use PVE::API2::Storage::Config;
+
+use base qw(PVE::RESTHandler);
+
+my $ceph_pool_common_options = sub {
+    my ($nodefault) = shift;
+    my $options = {
+	name => {
+	    title => 'Name',
+	    description => "The name of the pool. It must be unique.",
+	    type => 'string',
+	},
+	size => {
+	    description => 'Number of replicas per object',
+	    title => 'Size',
+	    type => 'integer',
+	    default => 3,
+	    optional => 1,
+	    minimum => 1,
+	    maximum => 7,
+	},
+	min_size => {
+	    description => 'Minimum number of replicas per object',
+	    title => 'Min Size',
+	    type => 'integer',
+	    default => 2,
+	    optional => 1,
+	    minimum => 1,
+	    maximum => 7,
+	},
+	pg_num => {
+	    description => "Number of placement groups.",
+	    title => 'PG Num',
+	    type => 'integer',
+	    default => 128,
+	    optional => 1,
+	    minimum => 8,
+	    maximum => 32768,
+	},
+	crush_rule => {
+	    description => "The rule to use for mapping object placement in the cluster.",
+	    title => 'Crush Rule Name',
+	    type => 'string',
+	    optional => 1,
+	},
+	application => {
+	    description => "The application of the pool.",
+	    title => 'Application',
+	    default => 'rbd',
+	    type => 'string',
+	    enum => ['rbd', 'cephfs', 'rgw'],
+	    optional => 1,
+	},
+	pg_autoscale_mode => {
+	    description => "The automatic PG scaling mode of the pool.",
+	    title => 'PG Autoscale Mode',
+	    type => 'string',
+	    enum => ['on', 'off', 'warn'],
+	    default => 'warn',
+	    optional => 1,
+	},
+    };
+
+    if ($nodefault) {
+	delete $options->{$_}->{default} for keys %$options;
+    }
+    return $options;
+};
+
+my $add_storage = sub {
+    my ($pool, $storeid) = @_;
+
+    my $storage_params = {
+	type => 'rbd',
+	pool => $pool,
+	storage => $storeid,
+	krbd => 0,
+	content => 'rootdir,images',
+    };
+
+    PVE::API2::Storage::Config->create($storage_params);
+};
+
+my $get_storages = sub {
+    my ($pool) = @_;
+
+    my $cfg = PVE::Storage::config();
+
+    my $storages = $cfg->{ids};
+    my $res = {};
+    foreach my $storeid (keys %$storages) {
+	my $curr = $storages->{$storeid};
+	$res->{$storeid} = $storages->{$storeid}
+	    if $curr->{type} eq 'rbd' && $pool eq $curr->{pool};
+    }
+
+    return $res;
+};
+
+
+__PACKAGE__->register_method ({
+    name => 'lspools',
+    path => '',
+    method => 'GET',
+    description => "List all pools.",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	},
+    },
+    returns => {
+	type => 'array',
+	items => {
+	    type => "object",
+	    properties => {
+		pool => { type => 'integer', title => 'ID' },
+		pool_name => { type => 'string', title => 'Name' },
+		size => { type => 'integer', title => 'Size' },
+		min_size => { type => 'integer', title => 'Min Size' },
+		pg_num => { type => 'integer', title => 'PG Num' },
+		pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
+		crush_rule => { type => 'integer', title => 'Crush Rule' },
+		crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
+		percent_used => { type => 'number', title => '%-Used' },
+		bytes_used => { type => 'integer', title => 'Used' },
+	    },
+	},
+	links => [ { rel => 'child', href => "{pool_name}" } ],
+    },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_inited();
+
+	my $rados = PVE::RADOS->new();
+
+	my $stats = {};
+	my $res = $rados->mon_command({ prefix => 'df' });
+
+	foreach my $d (@{$res->{pools}}) {
+	    next if !$d->{stats};
+	    next if !defined($d->{id});
+	    $stats->{$d->{id}} = $d->{stats};
+	}
+
+	$res = $rados->mon_command({ prefix => 'osd dump' });
+	my $rulestmp = $rados->mon_command({ prefix => 'osd crush rule dump'});
+
+	my $rules = {};
+	for my $rule (@$rulestmp) {
+	    $rules->{$rule->{rule_id}} = $rule->{rule_name};
+	}
+
+	my $data = [];
+	my $attr_list = [
+	    'pool',
+	    'pool_name',
+	    'size',
+	    'min_size',
+	    'pg_num',
+	    'crush_rule',
+	    'pg_autoscale_mode',
+	];
+
+	foreach my $e (@{$res->{pools}}) {
+	    my $d = {};
+	    foreach my $attr (@$attr_list) {
+		$d->{$attr} = $e->{$attr} if defined($e->{$attr});
+	    }
+
+	    if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
+		$d->{crush_rule_name} = $rules->{$d->{crush_rule}};
+	    }
+
+	    if (my $s = $stats->{$d->{pool}}) {
+		$d->{bytes_used} = $s->{bytes_used};
+		$d->{percent_used} = $s->{percent_used};
+	    }
+	    push @$data, $d;
+	}
+
+
+	return $data;
+    }});
+
+
+# FIXME: use pools/{pool_name} with PVE 7.0
+__PACKAGE__->register_method ({
+    name => 'createpool',
+    path => '',
+    method => 'POST',
+    description => "Create POOL",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Modify' ]],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    add_storages => {
+		description => "Configure VM and CT storage using the new pool.",
+		type => 'boolean',
+		optional => 1,
+	    },
+	    %{ $ceph_pool_common_options->() },
+	},
+    },
+    returns => { type => 'string' },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Cluster::check_cfs_quorum();
+	PVE::Ceph::Tools::check_ceph_configured();
+
+	my $pool = $param->{name};
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $user = $rpcenv->get_user();
+
+	if ($param->{add_storages}) {
+	    $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
+	    die "pool name contains characters which are illegal for storage naming\n"
+		if !PVE::JSONSchema::parse_storage_id($pool);
+	}
+
+	my $ceph_param = \%$param;
+	for my $item ('add_storages', 'name', 'node') {
+	    # not ceph parameters
+	    delete $ceph_param->{$item};
+	}
+
+	# pool defaults
+	$ceph_param->{pg_num} //= 128;
+	$ceph_param->{size} //= 3;
+	$ceph_param->{min_size} //= 2;
+	$ceph_param->{application} //= 'rbd';
+	$ceph_param->{pg_autoscale_mode} //= 'warn';
+
+	my $worker = sub {
+
+	    PVE::Ceph::Tools::create_pool($pool, $ceph_param);
+
+	    if ($param->{add_storages}) {
+		my $err;
+		eval { $add_storage->($pool, "${pool}"); };
+		if ($@) {
+		    warn "failed to add storage: $@";
+		    $err = 1;
+		}
+		die "adding storage for pool '$pool' failed, check log and add manually!\n"
+		    if $err;
+	    }
+	};
+
+	return $rpcenv->fork_worker('cephcreatepool', $pool,  $user, $worker);
+    }});
+
+
+__PACKAGE__->register_method ({
+    name => 'destroypool',
+    path => '{name}',
+    method => 'DELETE',
+    description => "Destroy pool",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Modify' ]],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    name => {
+		description => "The name of the pool. It must be unique.",
+		type => 'string',
+	    },
+	    force => {
+		description => "If true, destroys pool even if in use",
+		type => 'boolean',
+		optional => 1,
+		default => 0,
+	    },
+	    remove_storages => {
+		description => "Remove all pveceph-managed storages configured for this pool",
+		type => 'boolean',
+		optional => 1,
+		default => 0,
+	    },
+	},
+    },
+    returns => { type => 'string' },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_inited();
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $user = $rpcenv->get_user();
+	$rpcenv->check($user, '/storage', ['Datastore.Allocate'])
+	    if $param->{remove_storages};
+
+	my $pool = $param->{name};
+
+	my $worker = sub {
+	    my $storages = $get_storages->($pool);
+
+	    # if not forced, destroy ceph pool only when no
+	    # vm disks are on it anymore
+	    if (!$param->{force}) {
+		my $storagecfg = PVE::Storage::config();
+		foreach my $storeid (keys %$storages) {
+		    my $storage = $storages->{$storeid};
+
+		    # check if any vm disks are on the pool
+		    print "checking storage '$storeid' for RBD images..\n";
+		    my $res = PVE::Storage::vdisk_list($storagecfg, $storeid);
+		    die "ceph pool '$pool' still in use by storage '$storeid'\n"
+			if @{$res->{$storeid}} != 0;
+		}
+	    }
+
+	    PVE::Ceph::Tools::destroy_pool($pool);
+
+	    if ($param->{remove_storages}) {
+		my $err;
+		foreach my $storeid (keys %$storages) {
+		    # skip external clusters, not managed by pveceph
+		    next if $storages->{$storeid}->{monhost};
+		    eval { PVE::API2::Storage::Config->delete({storage => $storeid}) };
+		    if ($@) {
+			warn "failed to remove storage '$storeid': $@\n";
+			$err = 1;
+		    }
+		}
+		die "failed to remove (some) storages - check log and remove manually!\n"
+		    if $err;
+	    }
+	};
+	return $rpcenv->fork_worker('cephdestroypool', $pool,  $user, $worker);
+    }});
+
+
+__PACKAGE__->register_method ({
+    name => 'setpool',
+    path => '{name}',
+    method => 'PUT',
+    description => "Change POOL settings",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Modify' ]],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    %{ $ceph_pool_common_options->('nodefault') },
+	},
+    },
+    returns => { type => 'string' },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_configured();
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $pool = $param->{name};
+	my $ceph_param = \%$param;
+	for my $item ('name', 'node') {
+	    # not ceph parameters
+	    delete $ceph_param->{$item};
+	}
+
+	my $worker = sub {
+	    PVE::Ceph::Tools::set_pool($pool, $ceph_param);
+	};
+
+	return $rpcenv->fork_worker('cephsetpool', $pool,  $authuser, $worker);
+    }});
+
+
+1;
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 3d7bf2b1..69421ca6 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -21,6 +21,7 @@ use PVE::Ceph::Tools;
 use PVE::Ceph::Services;
 use PVE::API2::Ceph;
 use PVE::API2::Ceph::FS;
+use PVE::API2::Ceph::POOLS;
 use PVE::API2::Ceph::MDS;
 use PVE::API2::Ceph::MGR;
 use PVE::API2::Ceph::MON;
@@ -178,7 +179,7 @@ __PACKAGE__->register_method ({
 our $cmddef = {
     init => [ 'PVE::API2::Ceph', 'init', [], { node => $nodename } ],
     pool => {
-	ls => [ 'PVE::API2::Ceph', 'lspools', [], { node => $nodename }, sub {
+	ls => [ 'PVE::API2::Ceph::POOLS', 'lspools', [], { node => $nodename }, sub {
 	    my ($data, $schema, $options) = @_;
 	    PVE::CLIFormatter::print_api_result($data, $schema,
 		[
@@ -193,9 +194,9 @@ our $cmddef = {
 		],
 		$options);
 	}, $PVE::RESTHandler::standard_output_options],
-	create => [ 'PVE::API2::Ceph', 'createpool', ['name'], { node => $nodename }],
-	destroy => [ 'PVE::API2::Ceph', 'destroypool', ['name'], { node => $nodename } ],
-	set => [ 'PVE::API2::Ceph', 'setpool', ['name'], { node => $nodename } ],
+	create => [ 'PVE::API2::Ceph::POOLS', 'createpool', ['name'], { node => $nodename }],
+	destroy => [ 'PVE::API2::Ceph::POOLS', 'destroypool', ['name'], { node => $nodename } ],
+	set => [ 'PVE::API2::Ceph::POOLS', 'setpool', ['name'], { node => $nodename } ],
     },
     lspools => { alias => 'pool ls' },
     createpool => { alias => 'pool create' },
-- 
2.27.0






* [pve-devel] [PATCH manager v2 2/8] ceph: add get api call for single pool
  2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 1/8] api: ceph: subclass pools Alwin Antreich
@ 2020-11-24 10:58 ` Alwin Antreich
  2020-11-24 13:53   ` Dominik Csapak
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 3/8] ceph: add autoscale_status to api calls Alwin Antreich
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Alwin Antreich @ 2020-11-24 10:58 UTC (permalink / raw)
  To: pve-devel

Information on a single pool can now be queried.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
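As a usage reference, the heavy lifting is a single mon command; a rough
sketch (the pool name 'rbd' is just an example):

    use PVE::RADOS;

    my $rados = PVE::RADOS->new();

    # 'osd pool get' with var => 'all' returns every queryable pool setting
    # (size, min_size, pg_num, pgp_num, crush_rule, pg_autoscale_mode, ...)
    my $res = $rados->mon_command({
        prefix => 'osd pool get',
        pool   => 'rbd',
        var    => 'all',
    });

    print "size=$res->{size} min_size=$res->{min_size} pg_num=$res->{pg_num}\n";
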
 PVE/API2/Ceph/POOLS.pm | 113 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 108 insertions(+), 5 deletions(-)

diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
index 744f2bce..19fc1b7e 100644
--- a/PVE/API2/Ceph/POOLS.pm
+++ b/PVE/API2/Ceph/POOLS.pm
@@ -18,11 +18,6 @@ use base qw(PVE::RESTHandler);
 my $ceph_pool_common_options = sub {
     my ($nodefault) = shift;
     my $options = {
-	name => {
-	    title => 'Name',
-	    description => "The name of the pool. It must be unique.",
-	    type => 'string',
-	},
 	size => {
 	    description => 'Number of replicas per object',
 	    title => 'Size',
@@ -218,6 +213,11 @@ __PACKAGE__->register_method ({
 	additionalProperties => 0,
 	properties => {
 	    node => get_standard_option('pve-node'),
+	    name => {
+		title => 'Name',
+		description => "The name of the pool. It must be unique.",
+		type => 'string',
+	    },
 	    add_storages => {
 		description => "Configure VM and CT storage using the new pool.",
 		type => 'boolean',
@@ -374,6 +374,11 @@ __PACKAGE__->register_method ({
 	additionalProperties => 0,
 	properties => {
 	    node => get_standard_option('pve-node'),
+	    name => {
+		title => 'Name',
+		description => "The name of the pool. It must be unique.",
+		type => 'string',
+	    },
 	    %{ $ceph_pool_common_options->('nodefault') },
 	},
     },
@@ -401,4 +406,102 @@ __PACKAGE__->register_method ({
     }});
 
 
+__PACKAGE__->register_method ({
+    name => 'getpool',
+    path => '{name}',
+    method => 'GET',
+    description => "List pool settings.",
+    proxyto => 'node',
+    protected => 1,
+    permissions => {
+	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    name => {
+		description => "The name of the pool. It must be unique.",
+		type => 'string',
+	    },
+	    verbose => {
+		type => 'boolean',
+		default => 0,
+		optional => 1,
+		description => "If enabled, will display additional data ".
+		    "(e.g. statistics).",
+	    },
+	},
+    },
+    returns => {
+	type => "object",
+	properties => {
+	    id                     => { type => 'integer', title => 'ID' },
+	    pgp_num                => { type => 'integer', title => 'PGP num' },
+	    noscrub                => { type => 'boolean', title => 'noscrub' },
+	    'nodeep-scrub'         => { type => 'boolean', title => 'nodeep-scrub' },
+	    nodelete               => { type => 'boolean', title => 'nodelete' },
+	    nopgchange             => { type => 'boolean', title => 'nopgchange' },
+	    nosizechange           => { type => 'boolean', title => 'nosizechange' },
+	    write_fadvise_dontneed => { type => 'boolean', title => 'write_fadvise_dontneed' },
+	    hashpspool             => { type => 'boolean', title => 'hashpspool' },
+	    use_gmt_hitset         => { type => 'boolean', title => 'use_gmt_hitset' },
+	    fast_read              => { type => 'boolean', title => 'Fast Read' },
+	    statistics             => { type => 'object', title => 'Statistics', optional => 1 },
+	    %{ $ceph_pool_common_options->() },
+	},
+    },
+    code => sub {
+	my ($param) = @_;
+
+	PVE::Ceph::Tools::check_ceph_inited();
+
+	my $verbose = $param->{verbose};
+	my $pool = $param->{name};
+
+	my $rados = PVE::RADOS->new();
+	my $res = $rados->mon_command({
+		prefix => 'osd pool get',
+		pool   => "$pool",
+		var    => 'all',
+	    });
+
+	my $data = {
+	    id                     => $res->{pool_id},
+	    size                   => $res->{size},
+	    min_size               => $res->{min_size},
+	    pg_num                 => $res->{pg_num},
+	    pgp_num                => $res->{pgp_num},
+	    crush_rule             => $res->{crush_rule},
+	    pg_autoscale_mode      => $res->{pg_autoscale_mode},
+	    noscrub                => "$res->{noscrub}",
+	    'nodeep-scrub'         => "$res->{'nodeep-scrub'}",
+	    nodelete               => "$res->{nodelete}",
+	    nopgchange             => "$res->{nopgchange}",
+	    nosizechange           => "$res->{nosizechange}",
+	    write_fadvise_dontneed => "$res->{write_fadvise_dontneed}",
+	    hashpspool             => "$res->{hashpspool}",
+	    use_gmt_hitset         => "$res->{use_gmt_hitset}",
+	    fast_read              => "$res->{fast_read}",
+	};
+
+	if ($verbose) {
+	    my $stats;
+	    my $res = $rados->mon_command({ prefix => 'df' });
+
+	    foreach my $d (@{$res->{pools}}) {
+		next if !$d->{stats};
+		next if !defined($d->{name}) || $d->{name} ne "$pool";
+		$data->{statistics} = $d->{stats};
+	    }
+
+	    my $apps = $rados->mon_command({ prefix => "osd pool application get", pool => "$pool", });
+	    my @application = keys %$apps;
+	    $data->{application} = $application[0];
+	}
+
+	return $data;
+    }});
+
+
 1;
-- 
2.27.0






* [pve-devel] [PATCH manager v2 3/8] ceph: add autoscale_status to api calls
  2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 1/8] api: ceph: subclass pools Alwin Antreich
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 2/8] ceph: add get api call for single pool Alwin Antreich
@ 2020-11-24 10:58 ` Alwin Antreich
  2020-11-24 13:53   ` Dominik Csapak
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 4/8] ceph: gui: add autoscale & flatten pool view Alwin Antreich
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Alwin Antreich @ 2020-11-24 10:58 UTC (permalink / raw)
  To: pve-devel

The properties target_size_ratio and target_size_bytes are the only two
autoscale options Ceph allows to be set on a pool. The updated pool list
now shows the autoscale settings & status, including the new target PGs,
to make it easier for new users to get/set the correct amount of PGs.

Also, use parameter extraction instead of the unneeded ref copy of the params.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
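A note on the unit handling: target_size accepts values matching
(\d+(?:\.\d+)?)([KMGT])? and is converted to target_size_bytes before being
handed to Ceph. A standalone sketch of what the new convert_to_bytes helper
does (not the module code itself):

    my %factor = (K => 1024, M => 1024**2, G => 1024**3, T => 1024**4);

    sub to_bytes {
        my ($sizestr) = @_;
        $sizestr =~ m/^(\d+(?:\.\d+)?)([KMGT])?$/
            or die "unable to parse size '$sizestr'\n";
        my ($num, $unit) = ($1, $2);
        return int($num * ($unit ? $factor{$unit} : 1));
    }

    print to_bytes('2.5G'), "\n";   # 2684354560
    print to_bytes('512'), "\n";    # 512
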
 PVE/API2/Ceph/POOLS.pm | 131 +++++++++++++++++++++++++++++++----------
 PVE/CLI/pveceph.pm     |   3 +
 PVE/Ceph/Tools.pm      |  21 +++++++
 3 files changed, 123 insertions(+), 32 deletions(-)

diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
index 19fc1b7e..324a1f79 100644
--- a/PVE/API2/Ceph/POOLS.pm
+++ b/PVE/API2/Ceph/POOLS.pm
@@ -3,6 +3,7 @@ package PVE::API2::Ceph::POOLS;
 use strict;
 use warnings;
 
+use PVE::Tools qw(extract_param);
 use PVE::Ceph::Tools;
 use PVE::Ceph::Services;
 use PVE::JSONSchema qw(get_standard_option);
@@ -67,6 +68,19 @@ my $ceph_pool_common_options = sub {
 	    default => 'warn',
 	    optional => 1,
 	},
+	target_size => {
+	    description => "The estimated target size of the pool for the PG autoscaler.",
+	    title => 'PG Autoscale Target Size',
+	    type => 'string',
+	    pattern => '(\d+(?:\.\d+)?)([KMGT])?',
+	    optional => 1,
+	},
+	target_size_ratio => {
+	    description => "The estimated target ratio of the pool for the PG autoscaler.",
+	    title => 'PG Autoscale Target Ratio',
+	    type => 'number',
+	    optional => 1,
+	},
     };
 
     if ($nodefault) {
@@ -105,6 +119,30 @@ my $get_storages = sub {
     return $res;
 };
 
+my $get_autoscale_status = sub {
+    my ($rados, $pool) = @_;
+
+    $rados = PVE::RADOS->new() if !defined($rados);
+
+    my $autoscale = $rados->mon_command({
+	    prefix => 'osd pool autoscale-status'});
+
+    my $data;
+    foreach my $p (@$autoscale) {
+	$p->{would_adjust} = "$p->{would_adjust}"; # boolean
+	push @$data, $p;
+    }
+
+    if ($pool) {
+	foreach my $p (@$data) {
+	    next if $p->{pool_name} ne $pool;
+	    $data = $p;
+	}
+    }
+
+    return $data;
+};
+
 
 __PACKAGE__->register_method ({
     name => 'lspools',
@@ -127,16 +165,20 @@ __PACKAGE__->register_method ({
 	items => {
 	    type => "object",
 	    properties => {
-		pool => { type => 'integer', title => 'ID' },
-		pool_name => { type => 'string', title => 'Name' },
-		size => { type => 'integer', title => 'Size' },
-		min_size => { type => 'integer', title => 'Min Size' },
-		pg_num => { type => 'integer', title => 'PG Num' },
-		pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
-		crush_rule => { type => 'integer', title => 'Crush Rule' },
-		crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
-		percent_used => { type => 'number', title => '%-Used' },
-		bytes_used => { type => 'integer', title => 'Used' },
+		pool              => { type => 'integer', title => 'ID' },
+		pool_name         => { type => 'string',  title => 'Name' },
+		size              => { type => 'integer', title => 'Size' },
+		min_size          => { type => 'integer', title => 'Min Size' },
+		pg_num            => { type => 'integer', title => 'PG Num' },
+		pg_num_final      => { type => 'integer', title => 'Optimal # of PGs' },
+		pg_autoscale_mode => { type => 'string',  title => 'PG Autoscale Mode', optional => 1 },
+		crush_rule        => { type => 'integer', title => 'Crush Rule' },
+		crush_rule_name   => { type => 'string',  title => 'Crush Rule Name' },
+		percent_used      => { type => 'number',  title => '%-Used' },
+		bytes_used        => { type => 'integer', title => 'Used' },
+		target_size       => { type => 'integer', title => 'PG Autoscale Target Size', optional => 1 },
+		target_size_ratio => { type => 'number',  title => 'PG Autoscale Target Ratio', optional => 1 },
+		autoscale_status  => { type => 'object',  title => 'Autoscale Status', optional => 1 },
 	    },
 	},
 	links => [ { rel => 'child', href => "{pool_name}" } ],
@@ -176,12 +218,18 @@ __PACKAGE__->register_method ({
 	    'pg_autoscale_mode',
 	];
 
+	my $autoscale = $get_autoscale_status->($rados);
+
 	foreach my $e (@{$res->{pools}}) {
 	    my $d = {};
 	    foreach my $attr (@$attr_list) {
 		$d->{$attr} = $e->{$attr} if defined($e->{$attr});
 	    }
 
+	    # some info is nested under options instead
+	    $d->{target_size} = $e->{options}->{target_size_bytes};
+	    $d->{target_size_ratio} = $e->{options}->{target_size_ratio};
+
 	    if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
 		$d->{crush_rule_name} = $rules->{$d->{crush_rule}};
 	    }
@@ -190,10 +238,17 @@ __PACKAGE__->register_method ({
 		$d->{bytes_used} = $s->{bytes_used};
 		$d->{percent_used} = $s->{percent_used};
 	    }
+
+	    foreach my $p (@$autoscale) {
+		next if ($p->{pool_id} ne $e->{pool});
+		$d->{autoscale_status} = $p;
+		# pick some info directly from the autoscale_status
+		$d->{pg_num_final} = $p->{pg_num_final};
+	    }
+
 	    push @$data, $d;
 	}
 
-
 	return $data;
     }});
 
@@ -233,34 +288,37 @@ __PACKAGE__->register_method ({
 	PVE::Cluster::check_cfs_quorum();
 	PVE::Ceph::Tools::check_ceph_configured();
 
-	my $pool = $param->{name};
+	my $pool = extract_param($param, 'name');
+	my $add_storages = extract_param($param, 'add_storages');
+	delete $param->{node}; # not a ceph option
+
 	my $rpcenv = PVE::RPCEnvironment::get();
 	my $user = $rpcenv->get_user();
 
-	if ($param->{add_storages}) {
+	# Ceph uses target_size_bytes
+	if (defined($param->{'target_size'})) {
+	    my $target_sizestr = extract_param($param, 'target_size');
+	    $param->{target_size_bytes} = PVE::Ceph::Tools::convert_to_bytes($target_sizestr);
+	}
+
+	if (defined($add_storages)) {
 	    $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
 	    die "pool name contains characters which are illegal for storage naming\n"
 		if !PVE::JSONSchema::parse_storage_id($pool);
 	}
 
-	my $ceph_param = \%$param;
-	for my $item ('add_storages', 'name', 'node') {
-	    # not ceph parameters
-	    delete $ceph_param->{$item};
-	}
-
 	# pool defaults
-	$ceph_param->{pg_num} //= 128;
-	$ceph_param->{size} //= 3;
-	$ceph_param->{min_size} //= 2;
-	$ceph_param->{application} //= 'rbd';
-	$ceph_param->{pg_autoscale_mode} //= 'warn';
+	$param->{pg_num} //= 128;
+	$param->{size} //= 3;
+	$param->{min_size} //= 2;
+	$param->{application} //= 'rbd';
+	$param->{pg_autoscale_mode} //= 'warn';
 
 	my $worker = sub {
 
-	    PVE::Ceph::Tools::create_pool($pool, $ceph_param);
+	    PVE::Ceph::Tools::create_pool($pool, $param);
 
-	    if ($param->{add_storages}) {
+	    if (defined($add_storages)) {
 		my $err;
 		eval { $add_storage->($pool, "${pool}"); };
 		if ($@) {
@@ -391,15 +449,17 @@ __PACKAGE__->register_method ({
 	my $rpcenv = PVE::RPCEnvironment::get();
 	my $authuser = $rpcenv->get_user();
 
-	my $pool = $param->{name};
-	my $ceph_param = \%$param;
-	for my $item ('name', 'node') {
-	    # not ceph parameters
-	    delete $ceph_param->{$item};
+	my $pool = extract_param($param, 'name');
+	delete $param->{node};
+
+	# Ceph uses target_size_bytes
+	if (defined($param->{'target_size'})) {
+	    my $target_sizestr = extract_param($param, 'target_size');
+	    $param->{target_size_bytes} = PVE::Ceph::Tools::convert_to_bytes($target_sizestr);
 	}
 
 	my $worker = sub {
-	    PVE::Ceph::Tools::set_pool($pool, $ceph_param);
+	    PVE::Ceph::Tools::set_pool($pool, $param);
 	};
 
 	return $rpcenv->fork_worker('cephsetpool', $pool,  $authuser, $worker);
@@ -448,6 +508,7 @@ __PACKAGE__->register_method ({
 	    use_gmt_hitset         => { type => 'boolean', title => 'use_gmt_hitset' },
 	    fast_read              => { type => 'boolean', title => 'Fast Read' },
 	    statistics             => { type => 'object', title => 'Statistics', optional => 1 },
+	    autoscale_status       => { type => 'object', title => 'Autoscale Status', optional => 1 },
 	    %{ $ceph_pool_common_options->() },
 	},
     },
@@ -483,8 +544,12 @@ __PACKAGE__->register_method ({
 	    hashpspool             => "$res->{hashpspool}",
 	    use_gmt_hitset         => "$res->{use_gmt_hitset}",
 	    fast_read              => "$res->{fast_read}",
+	    # is only available if set
+	    target_size            => $res->{target_size_bytes} // undef,
+	    target_size_ratio      => $res->{target_size_ratio} // undef,
 	};
 
+
 	if ($verbose) {
 	    my $stats;
 	    my $res = $rados->mon_command({ prefix => 'df' });
@@ -498,6 +563,8 @@ __PACKAGE__->register_method ({
 	    my $apps = $rados->mon_command({ prefix => "osd pool application get", pool => "$pool", });
 	    my @application = keys %$apps;
 	    $data->{application} = $application[0];
+
+	    $data->{autoscale_status} = $get_autoscale_status->($rados, $pool);
 	}
 
 	return $data;
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 69421ca6..b7b68540 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -187,7 +187,10 @@ our $cmddef = {
 		    'size',
 		    'min_size',
 		    'pg_num',
+		    'pg_num_final',
 		    'pg_autoscale_mode',
+		    'target_size',
+		    'target_size_ratio',
 		    'crush_rule_name',
 		    'percent_used',
 		    'bytes_used',
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 12d309be..d95e8676 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -487,4 +487,25 @@ sub ceph_cluster_status {
     return $status;
 }
 
+sub convert_to_bytes {
+    my ($sizestr) = shift;
+
+    die "internal error" if $sizestr !~ m/^(\d+(?:\.\d+)?)([KMGT])?$/;
+    my ($newsize, $unit) = ($1, $2);
+
+    if ($unit) {
+	if ($unit eq 'K') {
+	    $newsize = $newsize * 1024;
+	} elsif ($unit eq 'M') {
+	    $newsize = $newsize * 1024 * 1024;
+	} elsif ($unit eq 'G') {
+	    $newsize = $newsize * 1024 * 1024 * 1024;
+	} elsif ($unit eq 'T') {
+	    $newsize = $newsize * 1024 * 1024 * 1024 * 1024;
+	}
+    }
+
+    return int($newsize);
+}
+
 1;
-- 
2.27.0






* [pve-devel] [PATCH manager v2 4/8] ceph: gui: add autoscale & flatten pool view
  2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
                   ` (2 preceding siblings ...)
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 3/8] ceph: add autoscale_status to api calls Alwin Antreich
@ 2020-11-24 10:58 ` Alwin Antreich
  2020-11-24 13:53   ` Dominik Csapak
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 5/8] ceph: gui: rework pool input panel Alwin Antreich
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Alwin Antreich @ 2020-11-24 10:58 UTC (permalink / raw)
  To: pve-devel

Letting the columns flex requires a flat column header structure.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 www/manager6/ceph/Pool.js | 131 ++++++++++++++++++++++----------------
 1 file changed, 75 insertions(+), 56 deletions(-)

diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 271dcc3c..6febe1fc 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -105,14 +105,16 @@ Ext.define('PVE.node.CephPoolList', {
 
     columns: [
 	{
-	    header: gettext('Name'),
-	    width: 120,
+	    text: gettext('Name'),
+	    minWidth: 120,
+	    flex: 2,
 	    sortable: true,
 	    dataIndex: 'pool_name'
 	},
 	{
-	    header: gettext('Size') + '/min',
-	    width: 100,
+	    text: gettext('Size') + '/min',
+	    minWidth: 100,
+	    flex: 1,
 	    align: 'right',
 	    renderer: function(v, meta, rec) {
 		return v + '/' + rec.data.min_size;
@@ -120,62 +122,75 @@ Ext.define('PVE.node.CephPoolList', {
 	    dataIndex: 'size'
 	},
 	{
-	    text: 'Placement Groups',
-	    columns: [
-		{
-		    text: '# of PGs', // pg_num',
-		    width: 150,
-		    align: 'right',
-		    dataIndex: 'pg_num'
-		},
-		{
-		    text: gettext('Autoscale'),
-		    width: 140,
-		    align: 'right',
-		    dataIndex: 'pg_autoscale_mode'
-		},
-	    ]
+	    text: '# of Placement Groups',
+	    flex: 1,
+	    minWidth: 150,
+	    align: 'right',
+	    dataIndex: 'pg_num'
 	},
 	{
-	    text: 'CRUSH Rule',
-	    columns: [
-		{
-		    text: 'ID',
-		    align: 'right',
-		    width: 50,
-		    dataIndex: 'crush_rule'
-		},
-		{
-		    text: gettext('Name'),
-		    width: 150,
-		    dataIndex: 'crush_rule_name',
-		},
-	    ]
+	    text: gettext('Optimal # of PGs'),
+	    flex: 1,
+	    minWidth: 140,
+	    align: 'right',
+	    dataIndex: 'pg_num_final',
 	},
 	{
-	    text: gettext('Used'),
-	    columns: [
-		{
-		    text: '%',
-		    width: 100,
-		    sortable: true,
-		    align: 'right',
-		    renderer: function(val) {
-			return Ext.util.Format.percent(val, '0.00');
-		    },
-		    dataIndex: 'percent_used',
-		},
-		{
-		    text: gettext('Total'),
-		    width: 100,
-		    sortable: true,
-		    renderer: PVE.Utils.render_size,
-		    align: 'right',
-		    dataIndex: 'bytes_used',
-		    summaryType: 'sum',
-		    summaryRenderer: PVE.Utils.render_size
+	    text: gettext('Target Size Ratio'),
+	    flex: 1,
+	    minWidth: 140,
+	    align: 'right',
+	    dataIndex: 'target_size_ratio',
+	    renderer: Ext.util.Format.numberRenderer('0.0000'),
+	    hidden: true,
+	},
+	{
+	    text: gettext('Target Size'),
+	    flex: 1,
+	    minWidth: 140,
+	    align: 'right',
+	    dataIndex: 'target_size',
+	    hidden: true,
+	    renderer: function(v, metaData, rec) {
+		let value = PVE.Utils.render_size(v);
+		if (rec.data.target_size_ratio > 0) {
+		    value = '<i class="fa fa-info-circle faded"></i> ' + value;
+		    metaData.tdAttr = 'data-qtip="Target Size Ratio takes precedence over Target Size."';
 		}
-	    ]
+		return value;
+	    },
+	},
+	{
+	    text: gettext('Autoscale') + ' ' + gettext('Mode'),
+	    flex: 1,
+	    minWidth: 140,
+	    align: 'right',
+	    dataIndex: 'pg_autoscale_mode',
+	},
+	{
+	    text: 'CRUSH ' + gettext('Rule') + ' (ID)',
+	    flex: 1,
+	    align: 'right',
+	    minWidth: 150,
+	    renderer: function(v, meta, rec) {
+		return v + ' (' + rec.data.crush_rule + ')';
+	    },
+	    dataIndex: 'crush_rule_name',
+	},
+	{
+	    text: gettext('Used (%)'),
+	    flex: 1,
+	    minWidth: 180,
+	    sortable: true,
+	    align: 'right',
+	    dataIndex: 'bytes_used',
+	    summaryType: 'sum',
+	    summaryRenderer: PVE.Utils.render_size,
+	    renderer: function(v, meta, rec) {
+		let percentage = Ext.util.Format.percent(rec.data.percent_used, '0.00');
+		let used = PVE.Utils.render_size(v);
+		return used + ' (' + percentage + ')';
+	    },
 	}
     ],
     initComponent: function() {
@@ -276,7 +291,11 @@ Ext.define('PVE.node.CephPoolList', {
 		  { name: 'bytes_used', type: 'integer'},
 		  { name: 'percent_used', type: 'number'},
 		  { name: 'crush_rule', type: 'integer'},
-		  { name: 'crush_rule_name', type: 'string'}
+		  { name: 'crush_rule_name', type: 'string'},
+		  { name: 'pg_autoscale_mode', type: 'string'},
+		  { name: 'pg_num_final', type: 'integer'},
+		  { name: 'target_size_ratio', type: 'number'},
+		  { name: 'target_size_bytes', type: 'integer'},
 		],
 	idProperty: 'pool_name'
     });
-- 
2.27.0






* [pve-devel] [PATCH manager v2 5/8] ceph: gui: rework pool input panel
  2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
                   ` (3 preceding siblings ...)
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 4/8] ceph: gui: add autoscale & flatten pool view Alwin Antreich
@ 2020-11-24 10:58 ` Alwin Antreich
  2020-11-24 13:53   ` Dominik Csapak
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 6/8] ceph: schema: change min. required PG count Alwin Antreich
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Alwin Antreich @ 2020-11-24 10:58 UTC (permalink / raw)
  To: pve-devel

* add the ability to edit an existing pool
* allow adjustment of the autoscale settings
* warn if the user specifies min_size 1
* disallow min_size 1 on pool create
* calculate the min_size replica count from size (see the sketch below)

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
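The min_size suggestion mentioned above follows round(size / 2) and is only
applied when the result is above 1. A sketch of the formula in Perl (the
panel implements the same rule in ExtJS):

    sub suggested_min_size {
        my ($size) = @_;
        my $min = int($size / 2 + 0.5);   # round to nearest integer
        return $min > 1 ? $min : undef;   # below 2, leave the field untouched
    }

So size 3 suggests min_size 2, size 5 suggests 3, and size 7 suggests 4.
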
 www/manager6/ceph/Pool.js | 276 ++++++++++++++++++++++++++++++--------
 1 file changed, 221 insertions(+), 55 deletions(-)

diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 6febe1fc..35ecbf09 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -1,94 +1,237 @@
-Ext.define('PVE.CephCreatePool', {
-    extend: 'Proxmox.window.Edit',
-    alias: 'widget.pveCephCreatePool',
+Ext.define('PVE.CephPoolInputPanel', {
+    extend: 'Proxmox.panel.InputPanel',
+    xtype: 'pveCephPoolInputPanel',
+    mixins: ['Proxmox.Mixin.CBind'],
 
     showProgress: true,
     onlineHelp: 'pve_ceph_pools',
 
     subject: 'Ceph Pool',
-    isCreate: true,
-    method: 'POST',
-    items: [
+    column1: [
 	{
-	    xtype: 'textfield',
+	    xtype: 'pmxDisplayEditField',
 	    fieldLabel: gettext('Name'),
+	    cbind: {
+		editable: '{isCreate}',
+		value: '{pool_name}',
+		disabled: '{!isCreate}'
+	    },
 	    name: 'name',
+	    labelWidth: 80,
 	    allowBlank: false
 	},
 	{
 	    xtype: 'proxmoxintegerfield',
 	    fieldLabel: gettext('Size'),
 	    name: 'size',
+	    labelWidth: 80,
 	    value: 3,
-	    minValue: 1,
+	    minValue: 2,
 	    maxValue: 7,
-	    allowBlank: false
+	    allowBlank: false,
+	    listeners: {
+		change: function(field, val) {
+		    let size = Math.round(val / 2);
+		    if (size > 1) {
+			field.up('inputpanel').down('field[name=min_size]').setValue(size);
+		    }
+		},
+	    },
+	},
+    ],
+    column2: [
+	{
+	    xtype: 'proxmoxKVComboBox',
+	    fieldLabel: 'PG Autoscale ' + gettext('Mode'),
+	    name: 'pg_autoscale_mode',
+	    comboItems: [
+		['warn', 'warn'],
+		['on', 'on'],
+		['off', 'off'],
+	    ],
+	    value: 'warn',
+	    allowBlank: false,
+	    autoSelect: false,
+	    labelWidth: 140,
+	},
+	{
+	    xtype: 'proxmoxcheckbox',
+	    fieldLabel: gettext('Add as Storage'),
+	    cbind: {
+		value: '{isCreate}',
+		hidden: '{!isCreate}',
+	    },
+	    name: 'add_storages',
+	    labelWidth: 140,
+	    autoEl: {
+		tag: 'div',
+		'data-qtip': gettext('Add the new pool to the cluster storage configuration.'),
+	    },
 	},
+    ],
+    advancedColumn1: [
 	{
 	    xtype: 'proxmoxintegerfield',
 	    fieldLabel: gettext('Min. Size'),
 	    name: 'min_size',
+	    labelWidth: 80,
 	    value: 2,
-	    minValue: 1,
+	    cbind: {
+		minValue: (get) => get('isCreate') ? 2 : 1,
+	    },
 	    maxValue: 7,
-	    allowBlank: false
+	    allowBlank: false,
+	    listeners: {
+		change: function(field, val) {
+		    let warn = true;
+		    let warn_text = gettext('Min. Size');
+
+		    if (val < 2) {
+			warn = false;
+			warn_text = gettext('Min. Size') + ' <i class="fa fa-exclamation-triangle warning"></i>';
+		    }
+
+		    field.up().down('field[name=min_size-warning]').setHidden(warn);
+		    field.setFieldLabel(warn_text);
+		}
+	    },
+	},
+	{
+	    xtype: 'displayfield',
+	    name: 'min_size-warning',
+	    padding: 5,
+	    userCls: 'pmx-hint',
+	    value: 'A pool with min_size=1 could lead to data loss, incomplete PGs or unfound objects.',
+	    hidden: true,
 	},
 	{
 	    xtype: 'pveCephRuleSelector',
 	    fieldLabel: 'Crush Rule', // do not localize
+	    cbind: { nodename: '{nodename}' },
+	    labelWidth: 80,
 	    name: 'crush_rule',
 	    allowBlank: false
 	},
-	{
-	    xtype: 'proxmoxKVComboBox',
-	    fieldLabel: 'PG Autoscale Mode', // do not localize
-	    name: 'pg_autoscale_mode',
-	    comboItems: [
-		['warn', 'warn'],
-		['on', 'on'],
-		['off', 'off'],
-	    ],
-	    value: 'warn',
-	    allowBlank: false,
-	    autoSelect: false,
-	},
+    ],
+    advancedColumn2: [
 	{
 	    xtype: 'proxmoxintegerfield',
-	    fieldLabel: 'pg_num',
+	    fieldLabel: '# of PGs',
 	    name: 'pg_num',
+	    labelWidth: 120,
 	    value: 128,
-	    minValue: 8,
+	    minValue: 1,
 	    maxValue: 32768,
+	    allowBlank: false,
+	    emptyText: 128,
+	},
+	{
+	    xtype: 'numberfield',
+	    fieldLabel: gettext('Target Size') + ' (GiB)',
+	    name: 'target_size',
+	    labelWidth: 120,
+	    value: 0,
+	    minValue: 0,
 	    allowBlank: true,
-	    emptyText: gettext('Autoscale'),
+	    emptyText: '0.0',
+	    disabled: false,
+	    listeners: {
+		change: function(field, val) {
+		    let ratio = field.up().down('field[name=target_size_ratio]').getValue();
+		    if (ratio === null) {
+			field.up().down('field[name=target_size_ratio]').setDisabled(val);
+		    }
+		}
+	    },
 	},
 	{
-	    xtype: 'proxmoxcheckbox',
-	    fieldLabel: gettext('Add as Storage'),
-	    value: true,
-	    name: 'add_storages',
+	    xtype: 'numberfield',
+	    fieldLabel: gettext('Target Size Ratio'),
+	    name: 'target_size_ratio',
+	    labelWidth: 120,
+	    value: 0,
+	    minValue: 0,
+	    allowBlank: true,
+	    emptyText: '0.0',
+	    disabled: false,
+	    listeners: {
+		change: function(field, val) {
+		    field.up().down('field[name=target_size]').setDisabled(val);
+		}
+	    },
 	    autoEl: {
 		tag: 'div',
-		 'data-qtip': gettext('Add the new pool to the cluster storage configuration.'),
+		'data-qtip': gettext('If Target Size Ratio is set it takes precedence over Target Size.'),
 	    },
-	}
+	},
     ],
-    initComponent : function() {
-        var me = this;
 
-	if (!me.nodename) {
-	    throw "no node name specified";
+    onGetValues: function(values) {
+	Object.keys(values || {}).forEach(function(name) {
+	    if (values[name] === '') {
+		delete values[name];
+	    }
+	});
+
+	if (values['target_size'] && values['target_size'] !== 0) {
+	    values['target_size'] = values['target_size']*1024*1024*1024;
+	} else {
+	    // needs to be 0 to be cleared
+	    values['target_size'] = 0;
 	}
 
-        Ext.apply(me, {
-	    url: "/nodes/" + me.nodename + "/ceph/pools",
-	    defaults: {
-		nodename: me.nodename
-	    }
-        });
+	// needs to be 0 to be cleared
+	if (!values['target_size_ratio']) {
+	    values['target_size_ratio'] = 0;
+	}
 
-        me.callParent();
-    }
+	return values;
+    },
+
+    setValues: function(values) {
+	if (values['target_size'] && values['target_size'] !== 0) {
+	    values['target_size'] = values['target_size']/1024/1024/1024;
+	}
+
+	this.callParent([values]);
+    },
+
+    initComponent: function() {
+	let me = this;
+	me.callParent();
+    },
+
+});
+
+Ext.define('PVE.CephPoolEdit', {
+    extend: 'Proxmox.window.Edit',
+    alias: 'widget.pveCephPoolEdit',
+    xtype: 'pveCephPoolEdit',
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    cbindData: {
+	pool_name: '',
+	isCreate: (cfg) => !cfg.pool_name,
+    },
+
+    cbind: {
+	autoLoad: get => !get('isCreate'),
+	url: get => get('isCreate')
+	    ? `/nodes/${get('nodename')}/ceph/pools`
+	    : `/nodes/${get('nodename')}/ceph/pools/${get('pool_name')}`,
+	method: get => get('isCreate') ? 'POST' : 'PUT',
+    },
+
+    subject: gettext('Ceph Pool'),
+
+    items: [{
+	xtype: 'pveCephPoolInputPanel',
+	cbind: {
+	    nodename: '{nodename}',
+	    pool_name: '{pool_name}',
+	    isCreate: '{isCreate}',
+	},
+    }],
 });
 
 Ext.define('PVE.node.CephPoolList', {
@@ -214,6 +357,9 @@ Ext.define('PVE.node.CephPoolList', {
 	});
 
 	var store = Ext.create('Proxmox.data.DiffStore', { rstore: rstore });
+	var reload = function() {
+	    rstore.load();
+	};
 
 	var regex = new RegExp("not (installed|initialized)", "i");
 	PVE.Utils.handleStoreErrorOrMask(me, rstore, regex, function(me, error){
@@ -230,16 +376,37 @@ Ext.define('PVE.node.CephPoolList', {
 	var create_btn = new Ext.Button({
 	    text: gettext('Create'),
 	    handler: function() {
-		var win = Ext.create('PVE.CephCreatePool', {
-                    nodename: nodename
+		var win = Ext.create('PVE.CephPoolEdit', {
+		    title: gettext('Create') + ': Ceph Pool',
+		    nodename: nodename,
 		});
 		win.show();
-		win.on('destroy', function() {
-		    rstore.load();
-		});
+		win.on('destroy', reload);
 	    }
 	});
 
+	var run_editor = function() {
+	    var rec = sm.getSelection()[0];
+	    if (!rec) {
+		return;
+	    }
+
+	    var win = Ext.create('PVE.CephPoolEdit', {
+		title: gettext('Edit') + ': Ceph Pool',
+		nodename: nodename,
+		pool_name: rec.data.pool_name,
+	    });
+	    win.on('destroy', reload);
+	    win.show();
+	};
+
+	var edit_btn = new Proxmox.button.Button({
+	    text: gettext('Edit'),
+	    disabled: true,
+	    selModel: sm,
+	    handler: run_editor,
+	});
+
 	var destroy_btn = Ext.create('Proxmox.button.Button', {
 	    text: gettext('Destroy'),
 	    selModel: sm,
@@ -261,19 +428,18 @@ Ext.define('PVE.node.CephPoolList', {
 		    },
 		    item: { type: 'CephPool', id: rec.data.pool_name }
 		}).show();
-		win.on('destroy', function() {
-		    rstore.load();
-		});
+		win.on('destroy', reload);
 	    }
 	});
 
 	Ext.apply(me, {
 	    store: store,
 	    selModel: sm,
-	    tbar: [ create_btn, destroy_btn ],
+	    tbar: [ create_btn, edit_btn, destroy_btn ],
 	    listeners: {
 		activate: () => rstore.startUpdate(),
 		destroy: () => rstore.stopUpdate(),
+		itemdblclick: run_editor,
 	    }
 	});
 
@@ -295,7 +461,7 @@ Ext.define('PVE.node.CephPoolList', {
 		  { name: 'pg_autoscale_mode', type: 'string'},
 		  { name: 'pg_num_final', type: 'integer'},
 		  { name: 'target_size_ratio', type: 'number'},
-		  { name: 'target_size_bytes', type: 'integer'},
+		  { name: 'target_size', type: 'integer'},
 		],
 	idProperty: 'pool_name'
     });
-- 
2.27.0





^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH manager v2 6/8] ceph: schema: change min. required PG count
  2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
                   ` (4 preceding siblings ...)
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 5/8] ceph: gui: rework pool input panel Alwin Antreich
@ 2020-11-24 10:58 ` Alwin Antreich
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 7/8] ceph: remove the pg_autoscale_mode default Alwin Antreich
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2020-11-24 10:58 UTC (permalink / raw)
  To: pve-devel

to 1, since Ceph creates a pool with 1 PG for device health metrics, and
the autoscaler may adjust the PG count of a pool anyway.

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 PVE/API2/Ceph/POOLS.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
index 324a1f79..35d89643 100644
--- a/PVE/API2/Ceph/POOLS.pm
+++ b/PVE/API2/Ceph/POOLS.pm
@@ -43,7 +43,7 @@ my $ceph_pool_common_options = sub {
 	    type => 'integer',
 	    default => 128,
 	    optional => 1,
-	    minimum => 8,
+	    minimum => 1,
 	    maximum => 32768,
 	},
 	crush_rule => {
-- 
2.27.0





^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH manager v2 7/8] ceph: remove the pg_autoscale_mode default
  2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
                   ` (5 preceding siblings ...)
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 6/8] ceph: schema: change min. required PG count Alwin Antreich
@ 2020-11-24 10:58 ` Alwin Antreich
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 8/8] fix: ceph: always set pool size first Alwin Antreich
  2020-11-24 13:53 ` [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Dominik Csapak
  8 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2020-11-24 10:58 UTC (permalink / raw)
  To: pve-devel

The default pg_autoscale_mode can be configured in Ceph directly. On
Nautilus the default mode is warn, and with Octopus it changed to on.
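
For reference, the cluster-wide default can also be set via the monitors.
A rough sketch using the same PVE::RADOS interface as this series; the
option name osd_pool_default_pg_autoscale_mode is taken from the Ceph
docs, so double-check it for the release at hand:

    my $rados = PVE::RADOS->new();
    $rados->mon_command({
        prefix => 'config set',
        who    => 'global',
        name   => 'osd_pool_default_pg_autoscale_mode',
        value  => 'on',
    });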

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 PVE/API2/Ceph/POOLS.pm | 2 --
 1 file changed, 2 deletions(-)

diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
index 35d89643..8a4e75f9 100644
--- a/PVE/API2/Ceph/POOLS.pm
+++ b/PVE/API2/Ceph/POOLS.pm
@@ -65,7 +65,6 @@ my $ceph_pool_common_options = sub {
 	    title => 'PG Autoscale Mode',
 	    type => 'string',
 	    enum => ['on', 'off', 'warn'],
-	    default => 'warn',
 	    optional => 1,
 	},
 	target_size => {
@@ -312,7 +311,6 @@ __PACKAGE__->register_method ({
 	$param->{size} //= 3;
 	$param->{min_size} //= 2;
 	$param->{application} //= 'rbd';
-	$param->{pg_autoscale_mode} //= 'warn';
 
 	my $worker = sub {
 
-- 
2.27.0





^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH manager v2 8/8] fix: ceph: always set pool size first
  2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
                   ` (6 preceding siblings ...)
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 7/8] ceph: remove the pg_autoscale_mode default Alwin Antreich
@ 2020-11-24 10:58 ` Alwin Antreich
  2020-11-24 13:53 ` [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Dominik Csapak
  8 siblings, 0 replies; 15+ messages in thread
From: Alwin Antreich @ 2020-11-24 10:58 UTC (permalink / raw)
  To: pve-devel

Since Ceph Nautilus 14.2.10 and Octopus 15.2.2, the min_size of a pool is
derived from its size (round(size / 2)). When size is applied to the pool
after min_size, the manually specified min_size will be overwritten.

On top of that, a race condition can occur if a setting was issued but is
not active yet. Run an extra rados command to verify the current setting.
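
To illustrate with hypothetical values: both calls below succeed, but the
first one is silently undone by the second (a sketch mirroring the
mon_command style used in this patch):

    $rados->mon_command({ prefix => 'osd pool set', pool => "$pool",
        var => 'min_size', val => '1', format => 'plain' });
    $rados->mon_command({ prefix => 'osd pool set', pool => "$pool",
        var => 'size', val => '4', format => 'plain' });
    # applying size=4 re-derives min_size = round(4/2) = 2,
    # overriding the explicitly requested min_size of 1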

Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
---
 PVE/Ceph/Tools.pm | 49 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 47 insertions(+), 2 deletions(-)

diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index d95e8676..9505f0bf 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@ -203,16 +203,51 @@ sub check_ceph_enabled {
 sub set_pool {
     my ($pool, $param) = @_;
 
+    # by default pool size always sets min_size
+    # https://tracker.ceph.com/issues/44862
+    my $rados = PVE::RADOS->new();
+    if ($param->{size}) {
+	my $value = $param->{size};
+	eval { $rados->mon_command({
+		    prefix => "osd pool set",
+		    pool   => "$pool",
+		    var    => "size",
+		    val    => "$value",
+		    format => 'plain',
+		});
+	};
+	if ($@) {
+	    print "$@";
+	} else {
+	    my $result;
+	    eval { $result = $rados->mon_command({
+			prefix => "osd pool get",
+			pool   => "$pool",
+			var    => "size",
+		    });
+	    };
+	    if (!$@ && $result->{size} eq $value) {
+		delete $param->{size};
+	    }
+	}
+    }
+
     foreach my $setting (keys %$param) {
 	my $value = $param->{$setting};
+	next if $setting eq 'size';
 
 	my $command;
+	my $verify;
 	if ($setting eq 'application') {
 	    $command = {
 		prefix => "osd pool application enable",
 		pool   => "$pool",
 		app    => "$value",
 	    };
+	    $verify = {
+		prefix => "osd pool application get",
+		pool   => "$pool",
+	    };
 	} else {
 	    $command = {
 		prefix => "osd pool set",
@@ -221,14 +256,24 @@ sub set_pool {
 		val    => "$value",
 		format => 'plain',
 	    };
+
+	    $verify = {
+		prefix => "osd pool get",
+		pool   => "$pool",
+		var    => "$setting",
+	    };
 	}
 
-	my $rados = PVE::RADOS->new();
+	$rados = PVE::RADOS->new();
 	eval { $rados->mon_command($command); };
 	if ($@) {
 	    print "$@";
 	} else {
-	    delete $param->{$setting};
+	    my $result;
+	    eval { $result = $rados->mon_command($verify); };
+	    if (!$@ && $result->{$setting} eq $value) {
+		delete $param->{$setting};
+	    }
 	}
     }
 
-- 
2.27.0





^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed
  2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
                   ` (7 preceding siblings ...)
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 8/8] fix: ceph: always set pool size first Alwin Antreich
@ 2020-11-24 13:53 ` Dominik Csapak
  8 siblings, 0 replies; 15+ messages in thread
From: Dominik Csapak @ 2020-11-24 13:53 UTC (permalink / raw)
  To: Proxmox VE development discussion, Alwin Antreich

high-level comment

it seems the series depends on PG autoscaling being enabled, but on my
Nautilus installation it was not

so I'd say we have to handle that, either by not querying the autoscaling
settings if it is disabled, or by wrapping the mon call in an eval, else
the whole pool view does not work
(I get a 500)
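
a minimal sketch of the eval variant (assuming the $get_autoscale_status
closure from patch 3/8, which returns an array ref):

    my $autoscale = eval { $get_autoscale_status->($rados) } // [];
    warn "failed to query pg autoscale status - $@" if $@;

the per-pool loop then simply finds no matching entry and leaves
autoscale_status unset, instead of failing the whole listing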

On 11/24/20 11:58 AM, Alwin Antreich wrote:
> This set allows to edit pools via GUI & CLI. This should make it easier
> for users to adjust pool settings, since they don't have to go the ceph
> tool route.
> 
> v1 -> v2:
>      - move pools endpoint to a subclass
>      - add pg autsocale status and settings
>      - rework and flatten the grid view of ceph pools
>      - rework the create input panel
>      - add an edit button using the reworked input panel
>      - fix borken add_storages
>      - extend setp_pool function to avoid a race condition
>      - remove the pg_autoscale_mode default to allow Ceph's setting to
>        take precedence
> 
> 
> Alwin Antreich (8):
>    api: ceph: subclass pools
>    ceph: add get api call for single pool
>    ceph: add autoscale_status to api calls
>    ceph: gui: add autoscale & flatten pool view
>    ceph: gui: rework pool input panel
>    ceph: schema: change min. required PG count to 1
>    ceph: remove the pg_autoscale_mode default
>    fix: ceph: always set pool size first
> 
>   PVE/API2/Ceph/Makefile    |   1 +
>   PVE/API2/Ceph.pm          | 380 +------------------------
>   PVE/API2/Ceph/POOLS.pm    | 572 ++++++++++++++++++++++++++++++++++++++
>   PVE/CLI/pveceph.pm        |  12 +-
>   PVE/Ceph/Tools.pm         |  70 ++++-
>   www/manager6/ceph/Pool.js | 405 +++++++++++++++++++--------
>   6 files changed, 950 insertions(+), 490 deletions(-)
>   create mode 100644 PVE/API2/Ceph/POOLS.pm
> 





^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH manager v2 1/8] api: ceph: subclass pools
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 1/8] api: ceph: subclass pools Alwin Antreich
@ 2020-11-24 13:53   ` Dominik Csapak
  0 siblings, 0 replies; 15+ messages in thread
From: Dominik Csapak @ 2020-11-24 13:53 UTC (permalink / raw)
  To: Proxmox VE development discussion, Alwin Antreich

mhmm.. you did not simply move the code, you added at least the 'titles'.
it would be better to have a commit that really only moves the code
to a new file,

or at least mention it in the commit message

one comment inline

On 11/24/20 11:58 AM, Alwin Antreich wrote:
> for better handling and since the pool endpoints got more entries.
> 
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
>   PVE/API2/Ceph/Makefile |   1 +
>   PVE/API2/Ceph.pm       | 380 +-------------------------------------
>   PVE/API2/Ceph/POOLS.pm | 404 +++++++++++++++++++++++++++++++++++++++++
>   PVE/CLI/pveceph.pm     |   9 +-
>   4 files changed, 416 insertions(+), 378 deletions(-)
>   create mode 100644 PVE/API2/Ceph/POOLS.pm
> 
> diff --git a/PVE/API2/Ceph/Makefile b/PVE/API2/Ceph/Makefile
> index 5b6493d5..65c7b862 100644
> --- a/PVE/API2/Ceph/Makefile
> +++ b/PVE/API2/Ceph/Makefile
> @@ -5,6 +5,7 @@ PERLSOURCE= 			\
>   	MON.pm			\
>   	OSD.pm			\
>   	FS.pm			\
> +	POOLS.pm		\

why POOLS.pm and not Pools.pm?
the others are only capitalized because they are abbreviations

i'd prefer Pools.pm (but this is debatable...)

>   	MDS.pm
>   
>   all:
> diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
> index c3a3091d..8e7e525e 100644
> --- a/PVE/API2/Ceph.pm
> +++ b/PVE/API2/Ceph.pm
> @@ -20,6 +20,7 @@ use PVE::Tools qw(run_command file_get_contents file_set_contents);
>   
>   use PVE::API2::Ceph::OSD;
>   use PVE::API2::Ceph::FS;
> +use PVE::API2::Ceph::POOLS;
>   use PVE::API2::Ceph::MDS;
>   use PVE::API2::Ceph::MGR;
>   use PVE::API2::Ceph::MON;
> @@ -54,6 +55,11 @@ __PACKAGE__->register_method ({
>       path => 'fs',
>   });
>   
> +__PACKAGE__->register_method ({
> +    subclass => "PVE::API2::Ceph::POOLS",
> +    path => 'pools',
> +});
> +
>   __PACKAGE__->register_method ({
>       name => 'index',
>       path => '',
> @@ -239,35 +245,6 @@ __PACKAGE__->register_method ({
>   	return $res;
>       }});
>   
> -my $add_storage = sub {
> -    my ($pool, $storeid) = @_;
> -
> -    my $storage_params = {
> -	type => 'rbd',
> -	pool => $pool,
> -	storage => $storeid,
> -	krbd => 0,
> -	content => 'rootdir,images',
> -    };
> -
> -    PVE::API2::Storage::Config->create($storage_params);
> -};
> -
> -my $get_storages = sub {
> -    my ($pool) = @_;
> -
> -    my $cfg = PVE::Storage::config();
> -
> -    my $storages = $cfg->{ids};
> -    my $res = {};
> -    foreach my $storeid (keys %$storages) {
> -	my $curr = $storages->{$storeid};
> -	$res->{$storeid} = $storages->{$storeid}
> -	    if $curr->{type} eq 'rbd' && $pool eq $curr->{pool};
> -    }
> -
> -    return $res;
> -};
>   
>   __PACKAGE__->register_method ({
>       name => 'init',
> @@ -583,227 +560,6 @@ __PACKAGE__->register_method ({
>   	return PVE::Ceph::Tools::ceph_cluster_status();
>       }});
>   
> -__PACKAGE__->register_method ({
> -    name => 'lspools',
> -    path => 'pools',
> -    method => 'GET',
> -    description => "List all pools.",
> -    proxyto => 'node',
> -    protected => 1,
> -    permissions => {
> -	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
> -    },
> -    parameters => {
> -	additionalProperties => 0,
> -	properties => {
> -	    node => get_standard_option('pve-node'),
> -	},
> -    },
> -    returns => {
> -	type => 'array',
> -	items => {
> -	    type => "object",
> -	    properties => {
> -		pool => { type => 'integer', title => 'ID' },
> -		pool_name => { type => 'string', title => 'Name' },
> -		size => { type => 'integer', title => 'Size' },
> -		min_size => { type => 'integer', title => 'Min Size' },
> -		pg_num => { type => 'integer', title => 'PG Num' },
> -		pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
> -		crush_rule => { type => 'integer', title => 'Crush Rule' },
> -		crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
> -		percent_used => { type => 'number', title => '%-Used' },
> -		bytes_used => { type => 'integer', title => 'Used' },
> -	    },
> -	},
> -	links => [ { rel => 'child', href => "{pool_name}" } ],
> -    },
> -    code => sub {
> -	my ($param) = @_;
> -
> -	PVE::Ceph::Tools::check_ceph_inited();
> -
> -	my $rados = PVE::RADOS->new();
> -
> -	my $stats = {};
> -	my $res = $rados->mon_command({ prefix => 'df' });
> -
> -	foreach my $d (@{$res->{pools}}) {
> -	    next if !$d->{stats};
> -	    next if !defined($d->{id});
> -	    $stats->{$d->{id}} = $d->{stats};
> -	}
> -
> -	$res = $rados->mon_command({ prefix => 'osd dump' });
> -	my $rulestmp = $rados->mon_command({ prefix => 'osd crush rule dump'});
> -
> -	my $rules = {};
> -	for my $rule (@$rulestmp) {
> -	    $rules->{$rule->{rule_id}} = $rule->{rule_name};
> -	}
> -
> -	my $data = [];
> -	my $attr_list = [
> -	    'pool',
> -	    'pool_name',
> -	    'size',
> -	    'min_size',
> -	    'pg_num',
> -	    'crush_rule',
> -	    'pg_autoscale_mode',
> -	];
> -
> -	foreach my $e (@{$res->{pools}}) {
> -	    my $d = {};
> -	    foreach my $attr (@$attr_list) {
> -		$d->{$attr} = $e->{$attr} if defined($e->{$attr});
> -	    }
> -
> -	    if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
> -		$d->{crush_rule_name} = $rules->{$d->{crush_rule}};
> -	    }
> -
> -	    if (my $s = $stats->{$d->{pool}}) {
> -		$d->{bytes_used} = $s->{bytes_used};
> -		$d->{percent_used} = $s->{percent_used};
> -	    }
> -	    push @$data, $d;
> -	}
> -
> -
> -	return $data;
> -    }});
> -
> -
> -my $ceph_pool_common_options = sub {
> -    my ($nodefault) = shift;
> -    my $options = {
> -	name => {
> -	    description => "The name of the pool. It must be unique.",
> -	    type => 'string',
> -	},
> -	size => {
> -	    description => 'Number of replicas per object',
> -	    type => 'integer',
> -	    default => 3,
> -	    optional => 1,
> -	    minimum => 1,
> -	    maximum => 7,
> -	},
> -	min_size => {
> -	    description => 'Minimum number of replicas per object',
> -	    type => 'integer',
> -	    default => 2,
> -	    optional => 1,
> -	    minimum => 1,
> -	    maximum => 7,
> -	},
> -	pg_num => {
> -	    description => "Number of placement groups.",
> -	    type => 'integer',
> -	    default => 128,
> -	    optional => 1,
> -	    minimum => 8,
> -	    maximum => 32768,
> -	},
> -	crush_rule => {
> -	    description => "The rule to use for mapping object placement in the cluster.",
> -	    type => 'string',
> -	    optional => 1,
> -	},
> -	application => {
> -	    description => "The application of the pool.",
> -	    default => 'rbd',
> -	    type => 'string',
> -	    enum => ['rbd', 'cephfs', 'rgw'],
> -	    optional => 1,
> -	},
> -	pg_autoscale_mode => {
> -	    description => "The automatic PG scaling mode of the pool.",
> -	    type => 'string',
> -	    enum => ['on', 'off', 'warn'],
> -	    default => 'warn',
> -	    optional => 1,
> -	},
> -    };
> -
> -    if ($nodefault) {
> -	delete $options->{$_}->{default} for keys %$options;
> -    }
> -    return $options;
> -};
> -
> -
> -__PACKAGE__->register_method ({
> -    name => 'createpool',
> -    path => 'pools',
> -    method => 'POST',
> -    description => "Create POOL",
> -    proxyto => 'node',
> -    protected => 1,
> -    permissions => {
> -	check => ['perm', '/', [ 'Sys.Modify' ]],
> -    },
> -    parameters => {
> -	additionalProperties => 0,
> -	properties => {
> -	    node => get_standard_option('pve-node'),
> -	    add_storages => {
> -		description => "Configure VM and CT storage using the new pool.",
> -		type => 'boolean',
> -		optional => 1,
> -	    },
> -	    %{ $ceph_pool_common_options->() },
> -	},
> -    },
> -    returns => { type => 'string' },
> -    code => sub {
> -	my ($param) = @_;
> -
> -	PVE::Cluster::check_cfs_quorum();
> -	PVE::Ceph::Tools::check_ceph_configured();
> -
> -	my $pool = $param->{name};
> -	my $rpcenv = PVE::RPCEnvironment::get();
> -	my $user = $rpcenv->get_user();
> -
> -	if ($param->{add_storages}) {
> -	    $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
> -	    die "pool name contains characters which are illegal for storage naming\n"
> -		if !PVE::JSONSchema::parse_storage_id($pool);
> -	}
> -
> -	my $ceph_param = \%$param;
> -	for my $item ('add_storages', 'name', 'node') {
> -	    # not ceph parameters
> -	    delete $ceph_param->{$item};
> -	}
> -
> -	# pool defaults
> -	$ceph_param->{pg_num} //= 128;
> -	$ceph_param->{size} //= 3;
> -	$ceph_param->{min_size} //= 2;
> -	$ceph_param->{application} //= 'rbd';
> -	$ceph_param->{pg_autoscale_mode} //= 'warn';
> -
> -	my $worker = sub {
> -
> -	    PVE::Ceph::Tools::create_pool($pool, $ceph_param);
> -
> -	    if ($param->{add_storages}) {
> -		my $err;
> -		eval { $add_storage->($pool, "${pool}"); };
> -		if ($@) {
> -		    warn "failed to add storage: $@";
> -		    $err = 1;
> -		}
> -		die "adding storage for pool '$pool' failed, check log and add manually!\n"
> -		    if $err;
> -	    }
> -	};
> -
> -	return $rpcenv->fork_worker('cephcreatepool', $pool,  $user, $worker);
> -    }});
>   
>   my $possible_flags = PVE::Ceph::Tools::get_possible_osd_flags();
>   my $possible_flags_list = [ sort keys %$possible_flags ];
> @@ -913,130 +669,6 @@ __PACKAGE__->register_method ({
>   	return undef;
>       }});
>   
> -__PACKAGE__->register_method ({
> -    name => 'destroypool',
> -    path => 'pools/{name}',
> -    method => 'DELETE',
> -    description => "Destroy pool",
> -    proxyto => 'node',
> -    protected => 1,
> -    permissions => {
> -	check => ['perm', '/', [ 'Sys.Modify' ]],
> -    },
> -    parameters => {
> -	additionalProperties => 0,
> -	properties => {
> -	    node => get_standard_option('pve-node'),
> -	    name => {
> -		description => "The name of the pool. It must be unique.",
> -		type => 'string',
> -	    },
> -	    force => {
> -		description => "If true, destroys pool even if in use",
> -		type => 'boolean',
> -		optional => 1,
> -		default => 0,
> -	    },
> -	    remove_storages => {
> -		description => "Remove all pveceph-managed storages configured for this pool",
> -		type => 'boolean',
> -		optional => 1,
> -		default => 0,
> -	    },
> -	},
> -    },
> -    returns => { type => 'string' },
> -    code => sub {
> -	my ($param) = @_;
> -
> -	PVE::Ceph::Tools::check_ceph_inited();
> -
> -	my $rpcenv = PVE::RPCEnvironment::get();
> -	my $user = $rpcenv->get_user();
> -	$rpcenv->check($user, '/storage', ['Datastore.Allocate'])
> -	    if $param->{remove_storages};
> -
> -	my $pool = $param->{name};
> -
> -	my $worker = sub {
> -	    my $storages = $get_storages->($pool);
> -
> -	    # if not forced, destroy ceph pool only when no
> -	    # vm disks are on it anymore
> -	    if (!$param->{force}) {
> -		my $storagecfg = PVE::Storage::config();
> -		foreach my $storeid (keys %$storages) {
> -		    my $storage = $storages->{$storeid};
> -
> -		    # check if any vm disks are on the pool
> -		    print "checking storage '$storeid' for RBD images..\n";
> -		    my $res = PVE::Storage::vdisk_list($storagecfg, $storeid);
> -		    die "ceph pool '$pool' still in use by storage '$storeid'\n"
> -			if @{$res->{$storeid}} != 0;
> -		}
> -	    }
> -
> -	    PVE::Ceph::Tools::destroy_pool($pool);
> -
> -	    if ($param->{remove_storages}) {
> -		my $err;
> -		foreach my $storeid (keys %$storages) {
> -		    # skip external clusters, not managed by pveceph
> -		    next if $storages->{$storeid}->{monhost};
> -		    eval { PVE::API2::Storage::Config->delete({storage => $storeid}) };
> -		    if ($@) {
> -			warn "failed to remove storage '$storeid': $@\n";
> -			$err = 1;
> -		    }
> -		}
> -		die "failed to remove (some) storages - check log and remove manually!\n"
> -		    if $err;
> -	    }
> -	};
> -	return $rpcenv->fork_worker('cephdestroypool', $pool,  $user, $worker);
> -    }});
> -
> -
> -__PACKAGE__->register_method ({
> -    name => 'setpool',
> -    path => 'pools/{name}',
> -    method => 'PUT',
> -    description => "Change POOL settings",
> -    proxyto => 'node',
> -    protected => 1,
> -    permissions => {
> -	check => ['perm', '/', [ 'Sys.Modify' ]],
> -    },
> -    parameters => {
> -	additionalProperties => 0,
> -	properties => {
> -	    node => get_standard_option('pve-node'),
> -	    %{ $ceph_pool_common_options->('nodefault') },
> -	},
> -    },
> -    returns => { type => 'string' },
> -    code => sub {
> -	my ($param) = @_;
> -
> -	PVE::Ceph::Tools::check_ceph_configured();
> -
> -	my $rpcenv = PVE::RPCEnvironment::get();
> -	my $authuser = $rpcenv->get_user();
> -
> -	my $pool = $param->{name};
> -	my $ceph_param = \%$param;
> -	for my $item ('name', 'node') {
> -	    # not ceph parameters
> -	    delete $ceph_param->{$item};
> -	}
> -
> -	my $worker = sub {
> -	    PVE::Ceph::Tools::set_pool($pool, $ceph_param);
> -	};
> -
> -	return $rpcenv->fork_worker('cephsetpool', $pool,  $authuser, $worker);
> -    }});
> -
>   
>   __PACKAGE__->register_method ({
>       name => 'crush',
> diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
> new file mode 100644
> index 00000000..744f2bce
> --- /dev/null
> +++ b/PVE/API2/Ceph/POOLS.pm
> @@ -0,0 +1,404 @@
> +package PVE::API2::Ceph::POOLS;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Ceph::Tools;
> +use PVE::Ceph::Services;
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::RADOS;
> +use PVE::RESTHandler;
> +use PVE::RPCEnvironment;
> +use PVE::Storage;
> +
> +use PVE::API2::Storage::Config;
> +
> +use base qw(PVE::RESTHandler);
> +
> +my $ceph_pool_common_options = sub {
> +    my ($nodefault) = shift;
> +    my $options = {
> +	name => {
> +	    title => 'Name',
> +	    description => "The name of the pool. It must be unique.",
> +	    type => 'string',
> +	},
> +	size => {
> +	    description => 'Number of replicas per object',
> +	    title => 'Size',
> +	    type => 'integer',
> +	    default => 3,
> +	    optional => 1,
> +	    minimum => 1,
> +	    maximum => 7,
> +	},
> +	min_size => {
> +	    description => 'Minimum number of replicas per object',
> +	    title => 'Min Size',
> +	    type => 'integer',
> +	    default => 2,
> +	    optional => 1,
> +	    minimum => 1,
> +	    maximum => 7,
> +	},
> +	pg_num => {
> +	    description => "Number of placement groups.",
> +	    title => 'PG Num',
> +	    type => 'integer',
> +	    default => 128,
> +	    optional => 1,
> +	    minimum => 8,
> +	    maximum => 32768,
> +	},
> +	crush_rule => {
> +	    description => "The rule to use for mapping object placement in the cluster.",
> +	    title => 'Crush Rule Name',
> +	    type => 'string',
> +	    optional => 1,
> +	},
> +	application => {
> +	    description => "The application of the pool.",
> +	    title => 'Application',
> +	    default => 'rbd',
> +	    type => 'string',
> +	    enum => ['rbd', 'cephfs', 'rgw'],
> +	    optional => 1,
> +	},
> +	pg_autoscale_mode => {
> +	    description => "The automatic PG scaling mode of the pool.",
> +	    title => 'PG Autoscale Mode',
> +	    type => 'string',
> +	    enum => ['on', 'off', 'warn'],
> +	    default => 'warn',
> +	    optional => 1,
> +	},
> +    };
> +
> +    if ($nodefault) {
> +	delete $options->{$_}->{default} for keys %$options;
> +    }
> +    return $options;
> +};
> +
> +my $add_storage = sub {
> +    my ($pool, $storeid) = @_;
> +
> +    my $storage_params = {
> +	type => 'rbd',
> +	pool => $pool,
> +	storage => $storeid,
> +	krbd => 0,
> +	content => 'rootdir,images',
> +    };
> +
> +    PVE::API2::Storage::Config->create($storage_params);
> +};
> +
> +my $get_storages = sub {
> +    my ($pool) = @_;
> +
> +    my $cfg = PVE::Storage::config();
> +
> +    my $storages = $cfg->{ids};
> +    my $res = {};
> +    foreach my $storeid (keys %$storages) {
> +	my $curr = $storages->{$storeid};
> +	$res->{$storeid} = $storages->{$storeid}
> +	    if $curr->{type} eq 'rbd' && $pool eq $curr->{pool};
> +    }
> +
> +    return $res;
> +};
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'lspools',
> +    path => '',
> +    method => 'GET',
> +    description => "List all pools.",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	},
> +    },
> +    returns => {
> +	type => 'array',
> +	items => {
> +	    type => "object",
> +	    properties => {
> +		pool => { type => 'integer', title => 'ID' },
> +		pool_name => { type => 'string', title => 'Name' },
> +		size => { type => 'integer', title => 'Size' },
> +		min_size => { type => 'integer', title => 'Min Size' },
> +		pg_num => { type => 'integer', title => 'PG Num' },
> +		pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
> +		crush_rule => { type => 'integer', title => 'Crush Rule' },
> +		crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
> +		percent_used => { type => 'number', title => '%-Used' },
> +		bytes_used => { type => 'integer', title => 'Used' },
> +	    },
> +	},
> +	links => [ { rel => 'child', href => "{pool_name}" } ],
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	PVE::Ceph::Tools::check_ceph_inited();
> +
> +	my $rados = PVE::RADOS->new();
> +
> +	my $stats = {};
> +	my $res = $rados->mon_command({ prefix => 'df' });
> +
> +	foreach my $d (@{$res->{pools}}) {
> +	    next if !$d->{stats};
> +	    next if !defined($d->{id});
> +	    $stats->{$d->{id}} = $d->{stats};
> +	}
> +
> +	$res = $rados->mon_command({ prefix => 'osd dump' });
> +	my $rulestmp = $rados->mon_command({ prefix => 'osd crush rule dump'});
> +
> +	my $rules = {};
> +	for my $rule (@$rulestmp) {
> +	    $rules->{$rule->{rule_id}} = $rule->{rule_name};
> +	}
> +
> +	my $data = [];
> +	my $attr_list = [
> +	    'pool',
> +	    'pool_name',
> +	    'size',
> +	    'min_size',
> +	    'pg_num',
> +	    'crush_rule',
> +	    'pg_autoscale_mode',
> +	];
> +
> +	foreach my $e (@{$res->{pools}}) {
> +	    my $d = {};
> +	    foreach my $attr (@$attr_list) {
> +		$d->{$attr} = $e->{$attr} if defined($e->{$attr});
> +	    }
> +
> +	    if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
> +		$d->{crush_rule_name} = $rules->{$d->{crush_rule}};
> +	    }
> +
> +	    if (my $s = $stats->{$d->{pool}}) {
> +		$d->{bytes_used} = $s->{bytes_used};
> +		$d->{percent_used} = $s->{percent_used};
> +	    }
> +	    push @$data, $d;
> +	}
> +
> +
> +	return $data;
> +    }});
> +
> +
> +# FIXME: use pools/{pool_name} with PVE 7.0
> +__PACKAGE__->register_method ({
> +    name => 'createpool',
> +    path => '',
> +    method => 'POST',
> +    description => "Create POOL",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +	check => ['perm', '/', [ 'Sys.Modify' ]],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    add_storages => {
> +		description => "Configure VM and CT storage using the new pool.",
> +		type => 'boolean',
> +		optional => 1,
> +	    },
> +	    %{ $ceph_pool_common_options->() },
> +	},
> +    },
> +    returns => { type => 'string' },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	PVE::Cluster::check_cfs_quorum();
> +	PVE::Ceph::Tools::check_ceph_configured();
> +
> +	my $pool = $param->{name};
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $user = $rpcenv->get_user();
> +
> +	if ($param->{add_storages}) {
> +	    $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
> +	    die "pool name contains characters which are illegal for storage naming\n"
> +		if !PVE::JSONSchema::parse_storage_id($pool);
> +	}
> +
> +	my $ceph_param = \%$param;
> +	for my $item ('add_storages', 'name', 'node') {
> +	    # not ceph parameters
> +	    delete $ceph_param->{$item};
> +	}
> +
> +	# pool defaults
> +	$ceph_param->{pg_num} //= 128;
> +	$ceph_param->{size} //= 3;
> +	$ceph_param->{min_size} //= 2;
> +	$ceph_param->{application} //= 'rbd';
> +	$ceph_param->{pg_autoscale_mode} //= 'warn';
> +
> +	my $worker = sub {
> +
> +	    PVE::Ceph::Tools::create_pool($pool, $ceph_param);
> +
> +	    if ($param->{add_storages}) {
> +		my $err;
> +		eval { $add_storage->($pool, "${pool}"); };
> +		if ($@) {
> +		    warn "failed to add storage: $@";
> +		    $err = 1;
> +		}
> +		die "adding storage for pool '$pool' failed, check log and add manually!\n"
> +		    if $err;
> +	    }
> +	};
> +
> +	return $rpcenv->fork_worker('cephcreatepool', $pool,  $user, $worker);
> +    }});
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'destroypool',
> +    path => '{name}',
> +    method => 'DELETE',
> +    description => "Destroy pool",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +	check => ['perm', '/', [ 'Sys.Modify' ]],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    name => {
> +		description => "The name of the pool. It must be unique.",
> +		type => 'string',
> +	    },
> +	    force => {
> +		description => "If true, destroys pool even if in use",
> +		type => 'boolean',
> +		optional => 1,
> +		default => 0,
> +	    },
> +	    remove_storages => {
> +		description => "Remove all pveceph-managed storages configured for this pool",
> +		type => 'boolean',
> +		optional => 1,
> +		default => 0,
> +	    },
> +	},
> +    },
> +    returns => { type => 'string' },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	PVE::Ceph::Tools::check_ceph_inited();
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $user = $rpcenv->get_user();
> +	$rpcenv->check($user, '/storage', ['Datastore.Allocate'])
> +	    if $param->{remove_storages};
> +
> +	my $pool = $param->{name};
> +
> +	my $worker = sub {
> +	    my $storages = $get_storages->($pool);
> +
> +	    # if not forced, destroy ceph pool only when no
> +	    # vm disks are on it anymore
> +	    if (!$param->{force}) {
> +		my $storagecfg = PVE::Storage::config();
> +		foreach my $storeid (keys %$storages) {
> +		    my $storage = $storages->{$storeid};
> +
> +		    # check if any vm disks are on the pool
> +		    print "checking storage '$storeid' for RBD images..\n";
> +		    my $res = PVE::Storage::vdisk_list($storagecfg, $storeid);
> +		    die "ceph pool '$pool' still in use by storage '$storeid'\n"
> +			if @{$res->{$storeid}} != 0;
> +		}
> +	    }
> +
> +	    PVE::Ceph::Tools::destroy_pool($pool);
> +
> +	    if ($param->{remove_storages}) {
> +		my $err;
> +		foreach my $storeid (keys %$storages) {
> +		    # skip external clusters, not managed by pveceph
> +		    next if $storages->{$storeid}->{monhost};
> +		    eval { PVE::API2::Storage::Config->delete({storage => $storeid}) };
> +		    if ($@) {
> +			warn "failed to remove storage '$storeid': $@\n";
> +			$err = 1;
> +		    }
> +		}
> +		die "failed to remove (some) storages - check log and remove manually!\n"
> +		    if $err;
> +	    }
> +	};
> +	return $rpcenv->fork_worker('cephdestroypool', $pool,  $user, $worker);
> +    }});
> +
> +
> +__PACKAGE__->register_method ({
> +    name => 'setpool',
> +    path => '{name}',
> +    method => 'PUT',
> +    description => "Change POOL settings",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +	check => ['perm', '/', [ 'Sys.Modify' ]],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    %{ $ceph_pool_common_options->('nodefault') },
> +	},
> +    },
> +    returns => { type => 'string' },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	PVE::Ceph::Tools::check_ceph_configured();
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $authuser = $rpcenv->get_user();
> +
> +	my $pool = $param->{name};
> +	my $ceph_param = \%$param;
> +	for my $item ('name', 'node') {
> +	    # not ceph parameters
> +	    delete $ceph_param->{$item};
> +	}
> +
> +	my $worker = sub {
> +	    PVE::Ceph::Tools::set_pool($pool, $ceph_param);
> +	};
> +
> +	return $rpcenv->fork_worker('cephsetpool', $pool,  $authuser, $worker);
> +    }});
> +
> +
> +1;
> diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
> index 3d7bf2b1..69421ca6 100755
> --- a/PVE/CLI/pveceph.pm
> +++ b/PVE/CLI/pveceph.pm
> @@ -21,6 +21,7 @@ use PVE::Ceph::Tools;
>   use PVE::Ceph::Services;
>   use PVE::API2::Ceph;
>   use PVE::API2::Ceph::FS;
> +use PVE::API2::Ceph::POOLS;
>   use PVE::API2::Ceph::MDS;
>   use PVE::API2::Ceph::MGR;
>   use PVE::API2::Ceph::MON;
> @@ -178,7 +179,7 @@ __PACKAGE__->register_method ({
>   our $cmddef = {
>       init => [ 'PVE::API2::Ceph', 'init', [], { node => $nodename } ],
>       pool => {
> -	ls => [ 'PVE::API2::Ceph', 'lspools', [], { node => $nodename }, sub {
> +	ls => [ 'PVE::API2::Ceph::POOLS', 'lspools', [], { node => $nodename }, sub {
>   	    my ($data, $schema, $options) = @_;
>   	    PVE::CLIFormatter::print_api_result($data, $schema,
>   		[
> @@ -193,9 +194,9 @@ our $cmddef = {
>   		],
>   		$options);
>   	}, $PVE::RESTHandler::standard_output_options],
> -	create => [ 'PVE::API2::Ceph', 'createpool', ['name'], { node => $nodename }],
> -	destroy => [ 'PVE::API2::Ceph', 'destroypool', ['name'], { node => $nodename } ],
> -	set => [ 'PVE::API2::Ceph', 'setpool', ['name'], { node => $nodename } ],
> +	create => [ 'PVE::API2::Ceph::POOLS', 'createpool', ['name'], { node => $nodename }],
> +	destroy => [ 'PVE::API2::Ceph::POOLS', 'destroypool', ['name'], { node => $nodename } ],
> +	set => [ 'PVE::API2::Ceph::POOLS', 'setpool', ['name'], { node => $nodename } ],
>       },
>       lspools => { alias => 'pool ls' },
>       createpool => { alias => 'pool create' },
> 





^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH manager v2 2/8] ceph: add get api call for single pool
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 2/8] ceph: add get api call for single pool Alwin Antreich
@ 2020-11-24 13:53   ` Dominik Csapak
  0 siblings, 0 replies; 15+ messages in thread
From: Dominik Csapak @ 2020-11-24 13:53 UTC (permalink / raw)
  To: Proxmox VE development discussion, Alwin Antreich

comments inline

On 11/24/20 11:58 AM, Alwin Antreich wrote:
> Information of a single pool can be queried.
> 
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
>   PVE/API2/Ceph/POOLS.pm | 113 +++++++++++++++++++++++++++++++++++++++--
>   1 file changed, 108 insertions(+), 5 deletions(-)
> 
> diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
> index 744f2bce..19fc1b7e 100644
> --- a/PVE/API2/Ceph/POOLS.pm
> +++ b/PVE/API2/Ceph/POOLS.pm
> @@ -18,11 +18,6 @@ use base qw(PVE::RESTHandler);
>   my $ceph_pool_common_options = sub {
>       my ($nodefault) = shift;
>       my $options = {
> -	name => {
> -	    title => 'Name',
> -	    description => "The name of the pool. It must be unique.",
> -	    type => 'string',
> -	},

why do you take that info out here?
in every instance where you use ceph_pool_common_options you add it
again, apart from the one return value
(where you could remove it conditionally, like with 'nodefault')
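
a rough sketch of keeping 'name' in the common options and stripping it
at the one call site that does not want it (variable name made up):

    my $retopts = $ceph_pool_common_options->('nodefault');
    delete $retopts->{name};    # e.g. for the single-pool GET return schema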

>   	size => {
>   	    description => 'Number of replicas per object',
>   	    title => 'Size',
> @@ -218,6 +213,11 @@ __PACKAGE__->register_method ({
>   	additionalProperties => 0,
>   	properties => {
>   	    node => get_standard_option('pve-node'),
> +	    name => {
> +		title => 'Name',
> +		description => "The name of the pool. It must be unique.",
> +		type => 'string',
> +	    },
>   	    add_storages => {
>   		description => "Configure VM and CT storage using the new pool.",
>   		type => 'boolean',
> @@ -374,6 +374,11 @@ __PACKAGE__->register_method ({
>   	additionalProperties => 0,
>   	properties => {
>   	    node => get_standard_option('pve-node'),
> +	    name => {
> +		title => 'Name',
> +		description => "The name of the pool. It must be unique.",
> +		type => 'string',
> +	    },
>   	    %{ $ceph_pool_common_options->('nodefault') },
>   	},
>       },
> @@ -401,4 +406,102 @@ __PACKAGE__->register_method ({
>       }});
>   
>   
> +__PACKAGE__->register_method ({
> +    name => 'getpool',
> +    path => '{name}',
> +    method => 'GET',
> +    description => "List pool settings.",
> +    proxyto => 'node',
> +    protected => 1,
> +    permissions => {
> +	check => ['perm', '/', [ 'Sys.Audit', 'Datastore.Audit' ], any => 1],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    name => {
> +		description => "The name of the pool. It must be unique.",
> +		type => 'string',
> +	    },
> +	    verbose => {
> +		type => 'boolean',
> +		default => 0,
> +		optional => 1,
> +		description => "If enabled, will display additional data".
> +		    "(eg. statistics).",
> +	    },
> +	},
> +    },
> +    returns => {
> +	type => "object",
> +	properties => {
> +	    id                     => { type => 'integer', title => 'ID' },
> +	    pgp_num                => { type => 'integer', title => 'PGP num' },
> +	    noscrub                => { type => 'boolean', title => 'noscrub' },
> +	    'nodeep-scrub'         => { type => 'boolean', title => 'nodeep-scrub' },
> +	    nodelete               => { type => 'boolean', title => 'nodelete' },
> +	    nopgchange             => { type => 'boolean', title => 'nopgchange' },
> +	    nosizechange           => { type => 'boolean', title => 'nosizechange' },
> +	    write_fadvise_dontneed => { type => 'boolean', title => 'write_fadvise_dontneed' },
> +	    hashpspool             => { type => 'boolean', title => 'hashpspool' },
> +	    use_gmt_hitset         => { type => 'boolean', title => 'use_gmt_hitset' },
> +	    fast_read              => { type => 'boolean', title => 'Fast Read' },
> +	    statistics             => { type => 'object', title => 'Statistics', optional => 1 },
> +	    %{ $ceph_pool_common_options->() },
> +	},
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	PVE::Ceph::Tools::check_ceph_inited();
> +
> +	my $verbose = $param->{verbose};
> +	my $pool = $param->{name};
> +
> +	my $rados = PVE::RADOS->new();
> +	my $res = $rados->mon_command({
> +		prefix => 'osd pool get',
> +		pool   => "$pool",
> +		var    => 'all',
> +	    });
> +
> +	my $data = {
> +	    id                     => $res->{pool_id},
> +	    size                   => $res->{size},
> +	    min_size               => $res->{min_size},
> +	    pg_num                 => $res->{pg_num},
> +	    pgp_num                => $res->{pgp_num},
> +	    crush_rule             => $res->{crush_rule},
> +	    pg_autoscale_mode      => $res->{pg_autoscale_mode},
> +	    noscrub                => "$res->{noscrub}",
> +	    'nodeep-scrub'         => "$res->{'nodeep-scrub'}",
> +	    nodelete               => "$res->{nodelete}",
> +	    nopgchange             => "$res->{nopgchange}",
> +	    nosizechange           => "$res->{nosizechange}",
> +	    write_fadvise_dontneed => "$res->{write_fadvise_dontneed}",
> +	    hashpspool             => "$res->{hashpspool}",
> +	    use_gmt_hitset         => "$res->{use_gmt_hitset}",
> +	    fast_read              => "$res->{fast_read}",
> +	};
> +
> +	if ($verbose) {
> +	    my $stats;
> +	    my $res = $rados->mon_command({ prefix => 'df' });
> +
> +	    foreach my $d (@{$res->{pools}}) {
> +		next if !$d->{stats};
> +		next if !defined($d->{name}) && !$d->{name} ne "$pool";
> +		$data->{statistics} = $d->{stats};
> +	    }
> +
> +	    my $apps = $rados->mon_command({ prefix => "osd pool application get", pool => "$pool", });
> +	    my @application = keys %$apps;
> +	    $data->{application} = $application[0];

why only the first application? can there be more than one?
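
if there can be several, a sketch that would expose all of them (the
plural field name is a suggestion, not part of the patch):

    # expose every enabled application instead of only the first key
    $data->{applications} = [ sort keys %$apps ];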

> +	}
> +
> +	return $data;
> +    }});
> +
> +
>   1;
> 





^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH manager v2 3/8] ceph: add autoscale_status to api calls
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 3/8] ceph: add autoscale_status to api calls Alwin Antreich
@ 2020-11-24 13:53   ` Dominik Csapak
  0 siblings, 0 replies; 15+ messages in thread
From: Dominik Csapak @ 2020-11-24 13:53 UTC (permalink / raw)
  To: Proxmox VE development discussion, Alwin Antreich

comments inline

On 11/24/20 11:58 AM, Alwin Antreich wrote:
> the properties target_size_ratio and target_size_bytes are the only two
> options by ceph to set on a pool. The updated pool list shows now
> autoscale settings & status. Including the new target PGs. To make it
> easier for new users to get/set the correct amount of PGs.
> 
> And use parameter extraction instead of the unneeded ref copy for params.

I'd rather have this as its own patch than here, but ok...

> 
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
>   PVE/API2/Ceph/POOLS.pm | 131 +++++++++++++++++++++++++++++++----------
>   PVE/CLI/pveceph.pm     |   3 +
>   PVE/Ceph/Tools.pm      |  21 +++++++
>   3 files changed, 123 insertions(+), 32 deletions(-)
> 
> diff --git a/PVE/API2/Ceph/POOLS.pm b/PVE/API2/Ceph/POOLS.pm
> index 19fc1b7e..324a1f79 100644
> --- a/PVE/API2/Ceph/POOLS.pm
> +++ b/PVE/API2/Ceph/POOLS.pm
> @@ -3,6 +3,7 @@ package PVE::API2::Ceph::POOLS;
>   use strict;
>   use warnings;
>   
> +use PVE::Tools qw(extract_param);
>   use PVE::Ceph::Tools;
>   use PVE::Ceph::Services;
>   use PVE::JSONSchema qw(get_standard_option);
> @@ -67,6 +68,19 @@ my $ceph_pool_common_options = sub {
>   	    default => 'warn',
>   	    optional => 1,
>   	},
> +	target_size => {
> +	    description => "The estimated target size of the pool for the PG autoscaler.",
> +	    title => 'PG Autoscale Target Size',
> +	    type => 'string',
> +	    pattern => '(\d+(?:\.\d+)?)([KMGT])?',
> +	    optional => 1,
> +	},
> +	target_size_ratio => {
> +	    description => "The estimated target ratio of the pool for the PG autoscaler.",
> +	    title => 'PG Autoscale Target Ratio',
> +	    type => 'number',
> +	    optional => 1,
> +	},
>       };
>   
>       if ($nodefault) {
> @@ -105,6 +119,30 @@ my $get_storages = sub {
>       return $res;
>   };
>   
> +my $get_autoscale_status = sub {
> +    my ($rados, $pool) = @_;
> +
> +    $rados = PVE::RADOS->new() if !defined($rados);
> +
> +    my $autoscale = $rados->mon_command({
> +	    prefix => 'osd pool autoscale-status'});
> +
> +    my $data;
> +    foreach my $p (@$autoscale) {
> +	$p->{would_adjust} = "$p->{would_adjust}"; # boolean
> +	push @$data, $p;
> +    }
> +
> +    if ($pool) {
> +	foreach my $p (@$data) {
> +	    next if $p->{pool_name} ne $pool;
> +	    $data = $p;
> +	}
> +    }
> +
> +    return $data;
> +

would it not be better to return a hash mapping
pool name -> data_for_that_pool?

this way you only need to iterate once and save
iterations below....
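
a sketch of that (keyed by pool_id here, since the lookup further below
uses the numeric id; keying by pool_name would work the same way):

    my $get_autoscale_status = sub {
        my ($rados) = @_;
        $rados = PVE::RADOS->new() if !defined($rados);

        my $autoscale = $rados->mon_command({
            prefix => 'osd pool autoscale-status'});

        my $by_id = {};
        for my $p (@$autoscale) {
            $p->{would_adjust} = "$p->{would_adjust}"; # boolean
            $by_id->{$p->{pool_id}} = $p;
        }
        return $by_id;
    };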

> +};
> +
>   
>   __PACKAGE__->register_method ({
>       name => 'lspools',
> @@ -127,16 +165,20 @@ __PACKAGE__->register_method ({
>   	items => {
>   	    type => "object",
>   	    properties => {
> -		pool => { type => 'integer', title => 'ID' },
> -		pool_name => { type => 'string', title => 'Name' },
> -		size => { type => 'integer', title => 'Size' },
> -		min_size => { type => 'integer', title => 'Min Size' },
> -		pg_num => { type => 'integer', title => 'PG Num' },
> -		pg_autoscale_mode => { type => 'string', optional => 1, title => 'PG Autoscale Mode' },
> -		crush_rule => { type => 'integer', title => 'Crush Rule' },
> -		crush_rule_name => { type => 'string', title => 'Crush Rule Name' },
> -		percent_used => { type => 'number', title => '%-Used' },
> -		bytes_used => { type => 'integer', title => 'Used' },
> +		pool              => { type => 'integer', title => 'ID' },
> +		pool_name         => { type => 'string',  title => 'Name' },
> +		size              => { type => 'integer', title => 'Size' },
> +		min_size          => { type => 'integer', title => 'Min Size' },
> +		pg_num            => { type => 'integer', title => 'PG Num' },
> +		pg_num_final      => { type => 'integer', title => 'Optimal # of PGs' },
> +		pg_autoscale_mode => { type => 'string',  title => 'PG Autoscale Mode', optional => 1 },
> +		crush_rule        => { type => 'integer', title => 'Crush Rule' },
> +		crush_rule_name   => { type => 'string',  title => 'Crush Rule Name' },
> +		percent_used      => { type => 'number',  title => '%-Used' },
> +		bytes_used        => { type => 'integer', title => 'Used' },
> +		target_size       => { type => 'integer', title => 'PG Autoscale Target Size', optional => 1 },
> +		target_size_ratio => { type => 'number',  title => 'PG Autoscale Target Ratio',optional => 1, },
> +		autoscale_status  => { type => 'object',  title => 'Autoscale Status', optional => 1 },
>   	    },
>   	},
>   	links => [ { rel => 'child', href => "{pool_name}" } ],
> @@ -176,12 +218,18 @@ __PACKAGE__->register_method ({
>   	    'pg_autoscale_mode',
>   	];
>   
> +	my $autoscale = $get_autoscale_status->($rados);
> +
>   	foreach my $e (@{$res->{pools}}) {
>   	    my $d = {};
>   	    foreach my $attr (@$attr_list) {
>   		$d->{$attr} = $e->{$attr} if defined($e->{$attr});
>   	    }
>   
> +	    # some info is nested under options instead
> +	    $d->{target_size} = $e->{options}->{target_size_bytes};
> +	    $d->{target_size_ratio} = $e->{options}->{target_size_ratio};
> +
>   	    if (defined($d->{crush_rule}) && defined($rules->{$d->{crush_rule}})) {
>   		$d->{crush_rule_name} = $rules->{$d->{crush_rule}};
>   	    }
> @@ -190,10 +238,17 @@ __PACKAGE__->register_method ({
>   		$d->{bytes_used} = $s->{bytes_used};
>   		$d->{percent_used} = $s->{percent_used};
>   	    }
> +
> +	    foreach my $p (@$autoscale) {
> +		next if ($p->{pool_id} ne $e->{pool});
> +		$d->{autoscale_status} = $p;
> +		# pick some info directly from the autoscale_status
> +		$d->{pg_num_final} = $p->{pg_num_final};
> +	    }

namely here.

you could then do

$d->{autoscale_status} = $autoscale->{$e->{pool}}

or something like that, instead of iterating over the lists
every time (this is O(n^2) in the number of pools)
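
i.e. with the hash from above, the inner loop collapses to a direct
lookup (sketch):

    if (my $p = $autoscale->{$e->{pool}}) {
        $d->{autoscale_status} = $p;
        # pick some info directly from the autoscale status
        $d->{pg_num_final} = $p->{pg_num_final};
    }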

> +
>   	    push @$data, $d;
>   	}
>   
> -
>   	return $data;
>       }});
>   
> @@ -233,34 +288,37 @@ __PACKAGE__->register_method ({
>   	PVE::Cluster::check_cfs_quorum();
>   	PVE::Ceph::Tools::check_ceph_configured();
>   
> -	my $pool = $param->{name};
> +	my $pool = extract_param($param, 'name');
> +	my $add_storages = extract_param($param, 'add_storages');
> +	delete $param->{node}; # not a ceph option
> +
>   	my $rpcenv = PVE::RPCEnvironment::get();
>   	my $user = $rpcenv->get_user();
>   
> -	if ($param->{add_storages}) {
> +	# Ceph uses target_size_bytes
> +	if (defined($param->{'target_size'})) {
> +	    my $target_sizestr = extract_param($param, 'target_size');
> +	    $param->{target_size_bytes} = PVE::Ceph::Tools::convert_to_bytes($target_sizestr);
> +	}
> +
> +	if (defined($add_storages)) {

this is wrong, what if I give '--add_storages 0'?
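
(defined(0) is true in Perl, so an explicit 0 would still trigger the
permission check and storage creation; a minimal sketch of a plain
truthiness check instead:)

    if ($add_storages) {    # skips both undef and an explicit 0
        $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
        die "pool name contains characters which are illegal for storage naming\n"
            if !PVE::JSONSchema::parse_storage_id($pool);
    }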

>   	    $rpcenv->check($user, '/storage', ['Datastore.Allocate']);
>   	    die "pool name contains characters which are illegal for storage naming\n"
>   		if !PVE::JSONSchema::parse_storage_id($pool);
>   	}
>   
> -	my $ceph_param = \%$param;
> -	for my $item ('add_storages', 'name', 'node') {
> -	    # not ceph parameters
> -	    delete $ceph_param->{$item};
> -	}
> -
>   	# pool defaults
> -	$ceph_param->{pg_num} //= 128;
> -	$ceph_param->{size} //= 3;
> -	$ceph_param->{min_size} //= 2;
> -	$ceph_param->{application} //= 'rbd';
> -	$ceph_param->{pg_autoscale_mode} //= 'warn';
> +	$param->{pg_num} //= 128;
> +	$param->{size} //= 3;
> +	$param->{min_size} //= 2;
> +	$param->{application} //= 'rbd';
> +	$param->{pg_autoscale_mode} //= 'warn';
>   
>   	my $worker = sub {
>   
> -	    PVE::Ceph::Tools::create_pool($pool, $ceph_param);
> +	    PVE::Ceph::Tools::create_pool($pool, $param);
>   
> -	    if ($param->{add_storages}) {
> +	    if (defined($add_storages)) {

same here

>   		my $err;
>   		eval { $add_storage->($pool, "${pool}"); };
>   		if ($@) {
> @@ -391,15 +449,17 @@ __PACKAGE__->register_method ({
>   	my $rpcenv = PVE::RPCEnvironment::get();
>   	my $authuser = $rpcenv->get_user();
>   
> -	my $pool = $param->{name};
> -	my $ceph_param = \%$param;
> -	for my $item ('name', 'node') {
> -	    # not ceph parameters
> -	    delete $ceph_param->{$item};
> +	my $pool = extract_param($param, 'name');
> +	delete $param->{node};
> +
> +	# Ceph uses target_size_bytes
> +	if (defined($param->{'target_size'})) {
> +	    my $target_sizestr = extract_param($param, 'target_size');
> +	    $param->{target_size_bytes} = PVE::Ceph::Tools::convert_to_bytes($target_sizestr);
>   	}
>   
>   	my $worker = sub {
> -	    PVE::Ceph::Tools::set_pool($pool, $ceph_param);
> +	    PVE::Ceph::Tools::set_pool($pool, $param);
>   	};
>   
>   	return $rpcenv->fork_worker('cephsetpool', $pool,  $authuser, $worker);
> @@ -448,6 +508,7 @@ __PACKAGE__->register_method ({
>   	    use_gmt_hitset         => { type => 'boolean', title => 'use_gmt_hitset' },
>   	    fast_read              => { type => 'boolean', title => 'Fast Read' },
>   	    statistics             => { type => 'object', title => 'Statistics', optional => 1 },
> +	    autoscale_status       => { type => 'object', title => 'Autoscale Status', optional => 1 },
>   	    %{ $ceph_pool_common_options->() },
>   	},
>       },
> @@ -483,8 +544,12 @@ __PACKAGE__->register_method ({
>   	    hashpspool             => "$res->{hashpspool}",
>   	    use_gmt_hitset         => "$res->{use_gmt_hitset}",
>   	    fast_read              => "$res->{fast_read}",
> +	    # is only avialable if set

s/avialable/available/

> +	    target_size            => $res->{target_size_bytes} // undef,
> +	    target_size_ratio      => $res->{target_size_ratio} // undef,
>   	};
>   
> +
>   	if ($verbose) {
>   	    my $stats;
>   	    my $res = $rados->mon_command({ prefix => 'df' });
> @@ -498,6 +563,8 @@ __PACKAGE__->register_method ({
>   	    my $apps = $rados->mon_command({ prefix => "osd pool application get", pool => "$pool", });
>   	    my @application = keys %$apps;
>   	    $data->{application} = $application[0];
> +
> +	    $data->{autoscale_status} = $get_autoscale_status->($rados, $pool),
>   	}
>   
>   	return $data;
> diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
> index 69421ca6..b7b68540 100755
> --- a/PVE/CLI/pveceph.pm
> +++ b/PVE/CLI/pveceph.pm
> @@ -187,7 +187,10 @@ our $cmddef = {
>   		    'size',
>   		    'min_size',
>   		    'pg_num',
> +		    'pg_num_final',
>   		    'pg_autoscale_mode',
> +		    'target_size',
> +		    'target_size_ratio',
>   		    'crush_rule_name',
>   		    'percent_used',
>   		    'bytes_used',
> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
> index 12d309be..d95e8676 100644
> --- a/PVE/Ceph/Tools.pm
> +++ b/PVE/Ceph/Tools.pm
> @@ -487,4 +487,25 @@ sub ceph_cluster_status {
>       return $status;
>   }
>   
> +sub convert_to_bytes {
> +    my ($sizestr) = shift;
> +
> +    die "internal error" if $sizestr !~ m/^(\d+(?:\.\d+)?)([KMGT])?$/;
> +    my ($newsize, $unit) = ($1, $2);
> +
> +    if ($unit) {
> +	if ($unit eq 'K') {
> +	    $newsize = $newsize * 1024;
> +	} elsif ($unit eq 'M') {
> +	    $newsize = $newsize * 1024 * 1024;
> +	} elsif ($unit eq 'G') {
> +	    $newsize = $newsize * 1024 * 1024 * 1024;
> +	} elsif ($unit eq 'T') {
> +	    $newsize = $newsize * 1024 * 1024 * 1024 * 1024;
> +	}
> +    }
> +
> +    return int($newsize);
> +}

don't we have something like this in PVE::Tools?
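
if memory serves, PVE::JSONSchema::parse_size does almost exactly this
(worth double-checking the helper name before relying on it):

    use PVE::JSONSchema;

    # parses plain numbers and K/M/G/T suffixes into bytes
    my $bytes = PVE::JSONSchema::parse_size('2G');    # 2147483648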

> +
>   1;
> 





^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH manager v2 4/8] ceph: gui: add autoscale & flatten pool view
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 4/8] ceph: gui: add autoscale & flatten pool view Alwin Antreich
@ 2020-11-24 13:53   ` Dominik Csapak
  0 siblings, 0 replies; 15+ messages in thread
From: Dominik Csapak @ 2020-11-24 13:53 UTC (permalink / raw)
  To: Proxmox VE development discussion, Alwin Antreich

comments inline

On 11/24/20 11:58 AM, Alwin Antreich wrote:
> Letting the columns flex needs a flat column head structure.
> 
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
>   www/manager6/ceph/Pool.js | 131 ++++++++++++++++++++++----------------
>   1 file changed, 75 insertions(+), 56 deletions(-)
> 
> diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
> index 271dcc3c..6febe1fc 100644
> --- a/www/manager6/ceph/Pool.js
> +++ b/www/manager6/ceph/Pool.js
> @@ -105,14 +105,16 @@ Ext.define('PVE.node.CephPoolList', {
>   
>       columns: [
>   	{
> -	    header: gettext('Name'),
> -	    width: 120,
> +	    text: gettext('Name'),
> +	    minWidth: 120,
> +	    flex: 2,
>   	    sortable: true,
>   	    dataIndex: 'pool_name'
>   	},
>   	{
> -	    header: gettext('Size') + '/min',
> -	    width: 100,
> +	    text: gettext('Size') + '/min',
> +	    minWidth: 100,
> +	    flex: 1,
>   	    align: 'right',
>   	    renderer: function(v, meta, rec) {
>   		return v + '/' + rec.data.min_size;
> @@ -120,62 +122,75 @@ Ext.define('PVE.node.CephPoolList', {
>   	    dataIndex: 'size'
>   	},
>   	{
> -	    text: 'Placement Groups',
> -	    columns: [
> -		{
> -		    text: '# of PGs', // pg_num',
> -		    width: 150,
> -		    align: 'right',
> -		    dataIndex: 'pg_num'
> -		},
> -		{
> -		    text: gettext('Autoscale'),
> -		    width: 140,
> -		    align: 'right',
> -		    dataIndex: 'pg_autoscale_mode'
> -		},
> -	    ]
> +	    text: '# of Placement Groups',
> +	    flex: 1,
> +	    minWidth: 150,
> +	    align: 'right',
> +	    dataIndex: 'pg_num'
>   	},
>   	{
> -	    text: 'CRUSH Rule',
> -	    columns: [
> -		{
> -		    text: 'ID',
> -		    align: 'right',
> -		    width: 50,
> -		    dataIndex: 'crush_rule'
> -		},
> -		{
> -		    text: gettext('Name'),
> -		    width: 150,
> -		    dataIndex: 'crush_rule_name',
> -		},
> -	    ]
> +	    text: gettext('Optimal # of PGs'),
> +	    flex: 1,
> +	    minWidth: 140,
> +	    align: 'right',
> +	    dataIndex: 'pg_num_final',
>   	},
>   	{
> -	    text: gettext('Used'),
> -	    columns: [
> -		{
> -		    text: '%',
> -		    width: 100,
> -		    sortable: true,
> -		    align: 'right',
> -		    renderer: function(val) {
> -			return Ext.util.Format.percent(val, '0.00');
> -		    },
> -		    dataIndex: 'percent_used',
> -		},
> -		{
> -		    text: gettext('Total'),
> -		    width: 100,
> -		    sortable: true,
> -		    renderer: PVE.Utils.render_size,
> -		    align: 'right',
> -		    dataIndex: 'bytes_used',
> -		    summaryType: 'sum',
> -		    summaryRenderer: PVE.Utils.render_size
> +	    text: gettext('Target Size Ratio'),
> +	    flex: 1,
> +	    minWidth: 140,
> +	    align: 'right',
> +	    dataIndex: 'target_size_ratio',
> +	    renderer: Ext.util.Format.numberRenderer('0.0000'),
> +	    hidden: true,
> +	},
> +	{
> +	    text: gettext('Target Size'),
> +	    flex: 1,
> +	    minWidth: 140,
> +	    align: 'right',
> +	    dataIndex: 'target_size',
> +	    hidden: true,
> +	    renderer: function(v, metaData, rec) {
> +		let value = PVE.Utils.render_size(v);
> +		if (rec.data.target_size_ratio > 0) {
> +		    value = '<i class="fa fa-info-circle faded"></i> ' + value;
> +		    metaData.tdAttr = 'data-qtip="Target Size Ratio takes precedence over Target Size."';
>   		}
> -	    ]
> +		return value;
> +	    },
> +	},
> +	{
> +	    text: gettext('Autoscale') + ' ' + gettext('Mode'),

please do not concatenate gettext fragments like this, it does not
work in all languages

e.g. 'autoscaling mode' translates to 'modo de escalado automático' in
Spanish (courtesy of deepl.com)

simply write gettext('Autoscale Mode')
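
to illustrate (untested): each fragment becomes its own msgid, so the
translated word order can never be fixed up afterwards:

// two independent msgids, word order breaks in e.g. Spanish:
text: gettext('Autoscale') + ' ' + gettext('Mode'),

// one msgid, translators see the whole phrase:
text: gettext('Autoscale Mode'),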

> +	    flex: 1,
> +	    minWidth: 140,
> +	    align: 'right',
> +	    dataIndex: 'pg_autoscale_mode',
> +	},
> +	{
> +	    text: 'CRUSH ' + gettext('Rule') + ' (ID)',

also this:

we did not translate 'CRUSH Rule' so we can simply
use 'CRUSH Rule (ID)' here
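
i.e. (untested):

text: 'CRUSH Rule (ID)', // do not localize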

> +	    flex: 1,
> +	    align: 'right',
> +	    minWidth: 150,
> +	    renderer: function(v, meta, rec) {
> +		return v + ' (' + rec.data.crush_rule + ')';
> +	    },
> +	    dataIndex: 'crush_rule_name',
> +	},
> +	{
> +	    text: gettext('Used (%)'),
> +	    flex: 1,
> +	    minWidth: 180,
> +	    sortable: true,
> +	    align: 'right',
> +	    dataIndex: 'bytes_used',
> +	    summaryType: 'sum',
> +	    summaryRenderer: PVE.Utils.render_size,
> +	    renderer: function(v, meta, rec) {
> +		let percentage = Ext.util.Format.percent(rec.data.percent_used, '0.00');
> +		let used = PVE.Utils.render_size(v);
> +		return used + ' (' + percentage + ')';
> +	    },
>   	}
>       ],
>       initComponent: function() {
> @@ -276,7 +291,11 @@ Ext.define('PVE.node.CephPoolList', {
>   		  { name: 'bytes_used', type: 'integer'},
>   		  { name: 'percent_used', type: 'number'},
>   		  { name: 'crush_rule', type: 'integer'},
> -		  { name: 'crush_rule_name', type: 'string'}
> +		  { name: 'crush_rule_name', type: 'string'},
> +		  { name: 'pg_autoscale_mode', type: 'string'},
> +		  { name: 'pg_num_final', type: 'integer'},
> +		  { name: 'target_size_ratio', type: 'number'},
> +		  { name: 'target_size_bytes', type: 'integer'},
>   		],
>   	idProperty: 'pool_name'
>       });
> 





^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH manager v2 5/8] ceph: gui: rework pool input panel
  2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 5/8] ceph: gui: rework pool input panel Alwin Antreich
@ 2020-11-24 13:53   ` Dominik Csapak
  0 siblings, 0 replies; 15+ messages in thread
From: Dominik Csapak @ 2020-11-24 13:53 UTC (permalink / raw)
  To: Proxmox VE development discussion, Alwin Antreich

setting the target_size(_bytes) via the gui does not seem to work (cli works)

comments inline

On 11/24/20 11:58 AM, Alwin Antreich wrote:
> * add the ability to edit an existing pool
> * allow adjustment of autoscale settings
> * warn if user specifies min_size 1
> * disallow min_size 1 on pool create
> * calculate min_size replica by size
> 
> Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
> ---
>   www/manager6/ceph/Pool.js | 276 ++++++++++++++++++++++++++++++--------
>   1 file changed, 221 insertions(+), 55 deletions(-)
> 
> diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
> index 6febe1fc..35ecbf09 100644
> --- a/www/manager6/ceph/Pool.js
> +++ b/www/manager6/ceph/Pool.js
> @@ -1,94 +1,237 @@
> -Ext.define('PVE.CephCreatePool', {
> -    extend: 'Proxmox.window.Edit',
> -    alias: 'widget.pveCephCreatePool',
> +Ext.define('PVE.CephPoolInputPanel', {
> +    extend: 'Proxmox.panel.InputPanel',
> +    xtype: 'pveCephPoolInputPanel',
> +    mixins: ['Proxmox.Mixin.CBind'],
>   
>       showProgress: true,
>       onlineHelp: 'pve_ceph_pools',
>   
>       subject: 'Ceph Pool',
> -    isCreate: true,
> -    method: 'POST',
> -    items: [
> +    column1: [
>   	{
> -	    xtype: 'textfield',
> +	    xtype: 'pmxDisplayEditField',
>   	    fieldLabel: gettext('Name'),
> +	    cbind: {
> +		editable: '{isCreate}',
> +		value: '{pool_name}',
> +		disabled: '{!isCreate}'
> +	    },
>   	    name: 'name',
> +	    labelWidth: 80,
>   	    allowBlank: false
>   	},
>   	{
>   	    xtype: 'proxmoxintegerfield',
>   	    fieldLabel: gettext('Size'),
>   	    name: 'size',
> +	    labelWidth: 80,
>   	    value: 3,
> -	    minValue: 1,
> +	    minValue: 2,
>   	    maxValue: 7,
> -	    allowBlank: false
> +	    allowBlank: false,
> +	    listeners: {
> +		change: function(field, val) {
> +		    let size = Math.round(val / 2);
> +		    if (size > 1) {
> +			field.up('inputpanel').down('field[name=min_size]').setValue(size);
> +		    }
> +		},
> +	    },
> +	},
> +    ],
> +    column2: [
> +	{
> +	    xtype: 'proxmoxKVComboBox',
> +	    fieldLabel: 'PG Autoscale ' + gettext('Mode'),

like the previous patch, please combine the gettext (and ideally use
the same text as in the columns)

> +	    name: 'pg_autoscale_mode',
> +	    comboItems: [
> +		['warn', 'warn'],
> +		['on', 'on'],
> +		['off', 'off'],
> +	    ],
> +	    value: 'warn',
> +	    allowBlank: false,
> +	    autoSelect: false,
> +	    labelWidth: 140,
> +	},
> +	{
> +	    xtype: 'proxmoxcheckbox',
> +	    fieldLabel: gettext('Add as Storage'),
> +	    cbind: {
> +		value: '{isCreate}',
> +		hidden: '{!isCreate}',

i guess you'd need to disable it also on !isCreate so that the value 
does not get submitted.. (for safety)
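
untested sketch of what i mean:

cbind: {
	value: '{isCreate}',
	hidden: '{!isCreate}',
	disabled: '{!isCreate}', // disabled fields do not submit their value
},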

> +	    },
> +	    name: 'add_storages',
> +	    labelWidth: 140,
> +	    autoEl: {
> +		tag: 'div',
> +		'data-qtip': gettext('Add the new pool to the cluster storage configuration.'),
> +	    },
>   	},
> +    ],
> +    advancedColumn1: [
>   	{
>   	    xtype: 'proxmoxintegerfield',
>   	    fieldLabel: gettext('Min. Size'),
>   	    name: 'min_size',
> +	    labelWidth: 80,
>   	    value: 2,
> -	    minValue: 1,
> +	    cbind: {
> +		minValue: (get) => get('isCreate') ? 2 : 1,
> +	    },
>   	    maxValue: 7,
> -	    allowBlank: false
> +	    allowBlank: false,
> +	    listeners: {
> +		change: function(field, val) {
> +		    let warn = true;
> +		    let warn_text = gettext('Min. Size');
> +
> +		    if (val < 2) {
> +			warn = false;
> +			warn_text = gettext('Min. Size') + ' <i class="fa fa-exclamation-triangle warning"></i>';
> +		    }
> +
> +		    field.up().down('field[name=min_size-warning]').setHidden(warn);

please reverse the logic of 'warn' and use setHidden(!warn) or
use setVisible(warn), it's confusing otherwise
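
untested sketch:

listeners: {
	change: function(field, val) {
	    let warn = val < 2;
	    field.up().down('field[name=min_size-warning]').setVisible(warn);
	    let label = gettext('Min. Size');
	    if (warn) {
		label += ' <i class="fa fa-exclamation-triangle warning"></i>';
	    }
	    field.setFieldLabel(label);
	},
},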

> +		    field.setFieldLabel(warn_text);
> +		}
> +	    },
> +	},
> +	{
> +	    xtype: 'displayfield',
> +	    name: 'min_size-warning',
> +	    padding: 5,
> +	    userCls: 'pmx-hint',
> +	    value: 'A pool with min_size=1 could lead to data loss, incomplete PGs or unfound objects.',
> +	    hidden: true,
>   	},
>   	{
>   	    xtype: 'pveCephRuleSelector',
>   	    fieldLabel: 'Crush Rule', // do not localize
> +	    cbind: { nodename: '{nodename}' },
> +	    labelWidth: 80,
>   	    name: 'crush_rule',
>   	    allowBlank: false
>   	},
> -	{
> -	    xtype: 'proxmoxKVComboBox',
> -	    fieldLabel: 'PG Autoscale Mode', // do not localize
> -	    name: 'pg_autoscale_mode',
> -	    comboItems: [
> -		['warn', 'warn'],
> -		['on', 'on'],
> -		['off', 'off'],
> -	    ],
> -	    value: 'warn',
> -	    allowBlank: false,
> -	    autoSelect: false,
> -	},
> +    ],
> +    advancedColumn2: [
>   	{
>   	    xtype: 'proxmoxintegerfield',
> -	    fieldLabel: 'pg_num',
> +	    fieldLabel: '# of PGs',
>   	    name: 'pg_num',
> +	    labelWidth: 120,
>   	    value: 128,
> -	    minValue: 8,
> +	    minValue: 1,
>   	    maxValue: 32768,
> +	    allowBlank: false,
> +	    emptyText: 128,
> +	},
> +	{
> +	    xtype: 'numberfield',
> +	    fieldLabel: gettext('Target Size') + ' (GiB)',
> +	    name: 'target_size',
> +	    labelWidth: 120,
> +	    value: 0,
> +	    minValue: 0,
>   	    allowBlank: true,
> -	    emptyText: gettext('Autoscale'),
> +	    emptyText: '0.0',
> +	    disabled: false,
> +	    listeners: {
> +		change: function(field, val) {
> +		    let ratio = field.up().down('field[name=target_size_ratio]').getValue();
> +		    if ( ratio === null) {

spacing

> +			field.up().down('field[name=target_size_ratio]').setDisabled(val);
> +		    }
> +		}
> +	    },
>   	},
>   	{
> -	    xtype: 'proxmoxcheckbox',
> -	    fieldLabel: gettext('Add as Storage'),
> -	    value: true,
> -	    name: 'add_storages',
> +	    xtype: 'numberfield',
> +	    fieldLabel: gettext('Target Size Ratio'),
> +	    name: 'target_size_ratio',
> +	    labelWidth: 120,
> +	    value: 0,
> +	    minValue: 0,
> +	    allowBlank: true,
> +	    emptyText: '0.0',
> +	    disabled: false,
> +	    listeners: {
> +		change: function(field, val) {
> +		    field.up().down('field[name=target_size]').setDisabled(val);
> +		}

i find disabling the other field on change very weird ux
i'd rather use a selector or radio button for the mode...

also if you already have the emptytext you do not need to set
the default value, that makes the input even weirder
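
untested sketch of the selector idea (the 'target_size_mode' field and
its label are made up here, it would not be submitted as-is):

{
	xtype: 'proxmoxKVComboBox',
	fieldLabel: gettext('Autoscale Target'), // hypothetical label
	name: 'target_size_mode',
	comboItems: [
	    ['size', gettext('Target Size')],
	    ['ratio', gettext('Target Size Ratio')],
	],
	value: 'size',
	listeners: {
	    change: function(field, val) {
		let panel = field.up('inputpanel');
		panel.down('field[name=target_size]').setDisabled(val !== 'size');
		panel.down('field[name=target_size_ratio]').setDisabled(val !== 'ratio');
	    },
	},
},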

> +	    },
>   	    autoEl: {
>   		tag: 'div',
> -		 'data-qtip': gettext('Add the new pool to the cluster storage configuration.'),
> +		'data-qtip': gettext('If Target Size Ratio is set it takes precedence over Target Size.'),
>   	    },
> -	}
> +	},
>       ],
> -    initComponent : function() {
> -        var me = this;
>   
> -	if (!me.nodename) {
> -	    throw "no node name specified";
> +    onGetValues: function(values) {
> +	Object.keys(values || {}).forEach(function(name) {
> +	    if (values[name] === '') {
> +		delete values[name];
> +	    }
> +	});
> +
> +	if (values['target_size'] && values['target_size'] !== 0) {
> +	    values['target_size'] = values['target_size']*1024*1024*1024;
> +	} else {
> +	    // needs to be 0 to be cleared
> +	    values['target_size'] = 0;
>   	}
>   
> -        Ext.apply(me, {
> -	    url: "/nodes/" + me.nodename + "/ceph/pools",
> -	    defaults: {
> -		nodename: me.nodename
> -	    }
> -        });
> +	// needs to be 0 to be cleared
> +	if (!values['target_size_ratio']) {
> +	    values['target_size_ratio'] = 0;
> +	}

i think this is the reason it does not work (set to 0 instead of delete)
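
untested, but dropping the key instead of forcing 0 should let the
backend keep the current value when nothing was entered:

if (!values['target_size_ratio']) {
	delete values['target_size_ratio'];
}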

>   
> -        me.callParent();
> -    }
> +	return values;
> +    },
> +
> +    setValues: function(values) {
> +	if (values['target_size'] && values['target_size'] !== 0) {
> +	    values['target_size'] = values['target_size']/1024/1024/1024;
> +	}
> +
> +	this.callParent([values]);
> +    },
> +
> +    initComponent: function() {
> +	let me = this;
> +	me.callParent();
> +    },

you can drop the function if it only calls callParent

> +
> +});
> +
> +Ext.define('PVE.CephPoolEdit', {
> +    extend: 'Proxmox.window.Edit',
> +    alias: 'widget.pveCephPoolEdit',
> +    xtype: 'pveCephPoolEdit',
> +    mixins: ['Proxmox.Mixin.CBind'],
> +
> +    cbindData: {
> +	pool_name: '',
> +	isCreate: (cfg) => !cfg.pool_name,
> +    },
> +
> +    cbind: {
> +	autoLoad: get => !get('isCreate'),
> +	url: get => get('isCreate') ?
> +	`/nodes/${get('nodename')}/ceph/pools` :
> +	`/nodes/${get('nodename')}/ceph/pools/${get('pool_name')}`,
> +	method: get => get('isCreate') ? 'POST' : 'PUT',
> +    },
> +
> +    subject: gettext('Ceph Pool'),
> +
> +    items: [{
> +	xtype: 'pveCephPoolInputPanel',
> +	cbind: {
> +	    nodename: '{nodename}',
> +	    pool_name: '{pool_name}',
> +	    isCreate: '{isCreate}',
> +	},
> +    }],
>   });
>   
>   Ext.define('PVE.node.CephPoolList', {
> @@ -214,6 +357,9 @@ Ext.define('PVE.node.CephPoolList', {
>   	});
>   
>   	var store = Ext.create('Proxmox.data.DiffStore', { rstore: rstore });
> +	var reload = function() {
> +	    rstore.load();
> +	};
>   
>   	var regex = new RegExp("not (installed|initialized)", "i");
>   	PVE.Utils.handleStoreErrorOrMask(me, rstore, regex, function(me, error){
> @@ -230,16 +376,37 @@ Ext.define('PVE.node.CephPoolList', {
>   	var create_btn = new Ext.Button({
>   	    text: gettext('Create'),
>   	    handler: function() {
> -		var win = Ext.create('PVE.CephCreatePool', {
> -                    nodename: nodename
> +		var win = Ext.create('PVE.CephPoolEdit', {
> +		    title: gettext('Create') + ': Ceph Pool',
> +		    nodename: nodename,
>   		});
>   		win.show();
> -		win.on('destroy', function() {
> -		    rstore.load();
> -		});
> +		win.on('destroy', reload);
>   	    }
>   	});
>   
> +	var run_editor = function() {
> +	    var rec = sm.getSelection()[0];
> +	    if (!rec) {
> +		return;
> +	    }
> +
> +	    var win = Ext.create('PVE.CephPoolEdit', {
> +		title: gettext('Edit') + ': Ceph Pool',
> +		nodename: nodename,
> +		pool_name: rec.data.pool_name,
> +	    });
> +            win.on('destroy', reload);
> +            win.show();
> +	};
> +
> +	var edit_btn = new Proxmox.button.Button({
> +	    text: gettext('Edit'),
> +	    disabled: true,
> +	    selModel: sm,
> +	    handler: run_editor,
> +	});
> +
>   	var destroy_btn = Ext.create('Proxmox.button.Button', {
>   	    text: gettext('Destroy'),
>   	    selModel: sm,
> @@ -261,19 +428,18 @@ Ext.define('PVE.node.CephPoolList', {
>   		    },
>   		    item: { type: 'CephPool', id: rec.data.pool_name }
>   		}).show();
> -		win.on('destroy', function() {
> -		    rstore.load();
> -		});
> +		win.on('destroy', reload);
>   	    }
>   	});
>   
>   	Ext.apply(me, {
>   	    store: store,
>   	    selModel: sm,
> -	    tbar: [ create_btn, destroy_btn ],
> +	    tbar: [ create_btn, edit_btn, destroy_btn ],
>   	    listeners: {
>   		activate: () => rstore.startUpdate(),
>   		destroy: () => rstore.stopUpdate(),
> +		itemdblclick: run_editor,
>   	    }
>   	});
>   
> @@ -295,7 +461,7 @@ Ext.define('PVE.node.CephPoolList', {
>   		  { name: 'pg_autoscale_mode', type: 'string'},
>   		  { name: 'pg_num_final', type: 'integer'},
>   		  { name: 'target_size_ratio', type: 'number'},
> -		  { name: 'target_size_bytes', type: 'integer'},
> +		  { name: 'target_size', type: 'integer'},
>   		],
>   	idProperty: 'pool_name'
>       });
> 





^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2020-11-24 13:54 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-24 10:58 [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Alwin Antreich
2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 1/8] api: ceph: subclass pools Alwin Antreich
2020-11-24 13:53   ` Dominik Csapak
2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 2/8] ceph: add get api call for single pool Alwin Antreich
2020-11-24 13:53   ` Dominik Csapak
2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 3/8] ceph: add autoscale_status to api calls Alwin Antreich
2020-11-24 13:53   ` Dominik Csapak
2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 4/8] ceph: gui: add autoscale & flatten pool view Alwin Antreich
2020-11-24 13:53   ` Dominik Csapak
2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 5/8] ceph: gui: rework pool input panel Alwin Antreich
2020-11-24 13:53   ` Dominik Csapak
2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 6/8] ceph: schema: change min. required PG count Alwin Antreich
2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 7/8] ceph: remove the pg_autoscale_mode default Alwin Antreich
2020-11-24 10:58 ` [pve-devel] [PATCH manager v2 8/8] fix: ceph: always set pool size first Alwin Antreich
2020-11-24 13:53 ` [pve-devel] [PATCH manager v2 0/8] ceph: allow pools settings to be changed Dominik Csapak
