public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH access-control/manager] add realm sync jobs
@ 2022-04-04  8:54 Dominik Csapak
  2022-04-04  8:54 ` [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs Dominik Csapak
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Dominik Csapak @ 2022-04-04  8:54 UTC (permalink / raw)
  To: pve-devel

With this, users can now schedule realm sync jobs, instead of manually
pressing 'sync' or configuring a cron job for 'pveum realm sync'.
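
For reference, the manual approach this replaces would be a crontab
entry along these lines (sketch only; realm name and schedule made up):

    # sync the 'myldap' realm every night at 01:00
    0 1 * * * root pveum realm sync myldap --scope both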

This series requires my previous realm sync improvement series[0].
I could make it so that it doesn't, but this way we can safely
omit the legacy 'full' and 'purge' parameters from the beginning.

I am not really sure about the 'domainsyncjobs' API path, but I tried
to be consistent with the rest of our naming there.
(Maybe we should deprecate the 'domain' terminology and use 'realm'
everywhere? That would at least be consistent with the GUI.)

I know I have to rebase this to wherever the actual Perl modules land
after applying Hannes' previous series to move the Jobs and Plugin
modules[1], but I am sending it regardless, since the code itself would
not change, so it can be reviewed right now.

The access-control patch needs special care, since I try to synchronize
independent pve-scheduler calls across the cluster. In my tests here
it worked, but that does not mean I didn't overlook something.

As it stands now, pve-manager depends on the new access-control.

0: https://lists.proxmox.com/pipermail/pve-devel/2022-March/052319.html
1: https://lists.proxmox.com/pipermail/pve-devel/2022-March/052230.html

pve-access-control:

Dominik Csapak (1):
  add realmsync plugin for jobs and CRUD api for domainsync-jobs

 src/PVE/API2/AccessControl.pm |   6 +
 src/PVE/API2/Domainsync.pm    | 278 ++++++++++++++++++++++++++++++++++
 src/PVE/API2/Makefile         |   1 +
 src/PVE/Jobs/Makefile         |   6 +
 src/PVE/Jobs/RealmSync.pm     | 192 +++++++++++++++++++++++
 src/PVE/Makefile              |   1 +
 6 files changed, 484 insertions(+)
 create mode 100644 src/PVE/API2/Domainsync.pm
 create mode 100644 src/PVE/Jobs/Makefile
 create mode 100644 src/PVE/Jobs/RealmSync.pm

pve-manager:

Dominik Csapak (4):
  Jobs: provide id and schedule to the job
  Jobs/Plugin: remove 'vzdump' from id description
  Jobs: add RealmSync Plugin and register it
  ui: add Realm Sync panel

 PVE/Jobs.pm                     |   4 +-
 PVE/Jobs/Plugin.pm              |   2 +-
 www/manager6/Makefile           |   1 +
 www/manager6/dc/Config.js       |   7 +
 www/manager6/dc/RealmSyncJob.js | 315 ++++++++++++++++++++++++++++++++
 5 files changed, 327 insertions(+), 2 deletions(-)
 create mode 100644 www/manager6/dc/RealmSyncJob.js

-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs
  2022-04-04  8:54 [pve-devel] [PATCH access-control/manager] add realm sync jobs Dominik Csapak
@ 2022-04-04  8:54 ` Dominik Csapak
  2022-11-07 17:05   ` Thomas Lamprecht
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 1/4] Jobs: provide id and schedule to the job Dominik Csapak
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Dominik Csapak @ 2022-04-04  8:54 UTC (permalink / raw)
  To: pve-devel

to be able to define automated jobs that sync LDAP/AD realms

The jobs plugin contains special handling for when no node is given,
since we only want the job to run on a single node when it triggers.
For that, we save a state file in /etc/pve/priv/jobs/ which contains
the node/time/UPID of the node that runs the job. The first node that
is able to lock the realm (via cfs_lock_domain) "wins" and may
sync from the LDAP server.
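
For illustration, the state file is plain JSON and looks roughly like
one of the following, depending on whether the sync task was already
started (node name and timestamp made up):

    { "node": "pve1", "time": 1649061000 }
    { "node": "pve1", "upid": "UPID:pve1:..." }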

In case a specific node was selected, this is omitted, and the Jobs
handling will not let the job run on other nodes anyway.

The API part is our usual section-config CRUD API, but specialized
for this specific type of job.
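
E.g., creating and listing such a job could then look like this (id,
realm and schedule made up):

    pvesh create /access/domainsyncjobs/mysyncjob \
      --realm myldap --schedule 'mon..fri 02:00' --scope users
    pvesh get /access/domainsyncjobs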

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/PVE/API2/AccessControl.pm |   6 +
 src/PVE/API2/Domainsync.pm    | 278 ++++++++++++++++++++++++++++++++++
 src/PVE/API2/Makefile         |   1 +
 src/PVE/Jobs/Makefile         |   6 +
 src/PVE/Jobs/RealmSync.pm     | 192 +++++++++++++++++++++++
 src/PVE/Makefile              |   1 +
 6 files changed, 484 insertions(+)
 create mode 100644 src/PVE/API2/Domainsync.pm
 create mode 100644 src/PVE/Jobs/Makefile
 create mode 100644 src/PVE/Jobs/RealmSync.pm

diff --git a/src/PVE/API2/AccessControl.pm b/src/PVE/API2/AccessControl.pm
index 5d78c6f..6f32bf2 100644
--- a/src/PVE/API2/AccessControl.pm
+++ b/src/PVE/API2/AccessControl.pm
@@ -15,6 +15,7 @@ use PVE::RESTHandler;
 use PVE::AccessControl;
 use PVE::JSONSchema qw(get_standard_option);
 use PVE::API2::Domains;
+use PVE::API2::Domainsync;
 use PVE::API2::User;
 use PVE::API2::Group;
 use PVE::API2::Role;
@@ -57,6 +58,11 @@ __PACKAGE__->register_method ({
     path => 'domains',
 });
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Domainsync",
+    path => 'domainsyncjobs',
+});
+
 __PACKAGE__->register_method ({
     subclass => "PVE::API2::OpenId",
     path => 'openid',
diff --git a/src/PVE/API2/Domainsync.pm b/src/PVE/API2/Domainsync.pm
new file mode 100644
index 0000000..22333bf
--- /dev/null
+++ b/src/PVE/API2/Domainsync.pm
@@ -0,0 +1,278 @@
+package PVE::API2::Domainsync;
+
+use strict;
+use warnings;
+
+use JSON;
+
+use PVE::Exception qw(raise_param_exc);
+use PVE::Tools qw(extract_param);
+use PVE::Cluster qw (cfs_read_file cfs_write_file cfs_lock_file);
+use PVE::AccessControl;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Jobs::RealmSync;
+
+use PVE::SafeSyslog;
+use PVE::RESTHandler;
+use PVE::Auth::Plugin;
+
+use base qw(PVE::RESTHandler);
+
+my $get_cluster_last_run = sub {
+    my ($jobid) = @_;
+
+    my $raw = PVE::Tools::file_get_contents("/etc/pve/priv/jobs/realmsync-$jobid.json");
+    my $state = decode_json($raw);
+    if (my $upid = $state->{upid}) {
+	if (my $decoded = PVE::Tools::upid_decode($upid)) {
+	    return $decoded->{starttime};
+	}
+    } else {
+	return $state->{time};
+    }
+
+    return undef;
+};
+
+__PACKAGE__->register_method ({
+    name => 'syncjob_index',
+    path => '',
+    method => 'GET',
+    description => "List configured domain-sync-jobs.",
+    permissions => {
+	check => ['perm', '/', ['Sys.Audit']],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {},
+    },
+    returns => {
+	type => 'array',
+	items => {
+	    type => "object",
+	    properties => {
+		id => {
+		    description => "The ID of the entry.",
+		    type => 'string'
+		},
+		disable => {
+		    description => "Flag to disable the plugin.",
+		    type => 'boolean',
+		},
+		# TODO ADD missing properties
+	    },
+	},
+	links => [ { rel => 'child', href => "{id}" } ],
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $user = $rpcenv->get_user();
+
+	my $jobs_data = cfs_read_file('jobs.cfg');
+	my $order = $jobs_data->{order};
+	my $jobs = $jobs_data->{ids};
+
+	my $res = [];
+	foreach my $jobid (sort { $order->{$a} <=> $order->{$b} } keys %$jobs) {
+	    my $job = $jobs->{$jobid};
+	    next if $job->{type} ne 'realmsync';
+
+	    if (my $schedule = $job->{schedule}) {
+		$job->{'last-run'} = eval { $get_cluster_last_run->($jobid) };
+		my $last_run = $job->{'last-run'} // time(); # current time as fallback
+		my $calspec = Proxmox::RS::CalendarEvent->new($schedule);
+		my $next_run = $calspec->compute_next_event($last_run);
+		$job->{'next-run'} = $next_run if defined($next_run);
+	    }
+
+	    push @$res, $job;
+	}
+
+	return $res;
+    }});
+
+__PACKAGE__->register_method({
+    name => 'read_job',
+    path => '{id}',
+    method => 'GET',
+    description => "Read domain-sync job definition.",
+    permissions => {
+	check => ['perm', '/', ['Sys.Audit']],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	},
+    },
+    returns => {
+	type => 'object',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $jobs = cfs_read_file('jobs.cfg');
+	my $job = $jobs->{ids}->{$param->{id}};
+	return $job if $job && $job->{type} eq 'realmsync';
+
+	raise_param_exc({ id => "No such job '$param->{id}'" });
+
+    }});
+
+__PACKAGE__->register_method({
+    name => 'create_job',
+    path => '{id}',
+    method => 'POST',
+    protected => 1,
+    description => "Create new domain-sync job.",
+    permissions => {
+	description => "'Realm.AllocateUser' on '/access/realm/<realm>' and "
+	    ." 'User.Modify' permissions to '/access/groups/'.",
+	check => [ 'and',
+	    ['perm', '/access/realm/{realm}', ['Realm.AllocateUser']],
+	    ['perm', '/access/groups', ['User.Modify']],
+	],
+    },
+    parameters => PVE::Jobs::RealmSync->createSchema(),
+    returns => { type => 'null' },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = extract_param($param, 'id');
+
+	cfs_lock_file('jobs.cfg', undef, sub {
+	    my $data = cfs_read_file('jobs.cfg');
+
+	    die "Job '$id' already exists\n"
+		if $data->{ids}->{$id};
+
+	    my $plugin = PVE::Jobs::Plugin->lookup('realmsync');
+	    my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	    $data->{ids}->{$id} = $opts;
+
+	    PVE::Jobs::create_job($id, 'realmsync');
+
+	    cfs_write_file('jobs.cfg', $data);
+	});
+	die "$@" if ($@);
+
+	return undef;
+    }});
+
+__PACKAGE__->register_method({
+    name => 'update_job',
+    path => '{id}',
+    method => 'PUT',
+    protected => 1,
+    description => "Update domain-sync job definition.",
+    permissions => {
+	description => "'Realm.AllocateUser' on '/access/realm/<realm>' and "
+	    ." 'User.Modify' permissions to '/access/groups/'.",
+	check => [ 'and',
+	    ['perm', '/access/realm/{realm}', ['Realm.AllocateUser']],
+	    ['perm', '/access/groups', ['User.Modify']],
+	],
+    },
+    parameters => PVE::Jobs::RealmSync->updateSchema(),
+    returns => { type => 'null' },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = extract_param($param, 'id');
+	my $delete = extract_param($param, 'delete');
+	if ($delete) {
+	    $delete = [PVE::Tools::split_list($delete)];
+	}
+
+	my $update_job = sub {
+	    my $jobs = cfs_read_file('jobs.cfg');
+
+	    die "no options specified\n" if !scalar(keys %$param);
+
+	    my $plugin = PVE::Jobs::Plugin->lookup('realmsync');
+	    my $opts = $plugin->check_config($id, $param, 0, 1);
+
+	    my $job = $jobs->{ids}->{$id};
+	    die "no such realmsync job\n" if !$job || $job->{type} ne 'realmsync';
+
+	    my $options = $plugin->options();
+	    foreach my $k (@$delete) {
+		my $d = $options->{$k} || die "no such option '$k'\n";
+		die "unable to delete required option '$k'\n" if !$d->{optional};
+		die "unable to delete fixed option '$k'\n" if $d->{fixed};
+		die "cannot set and delete property '$k' at the same time!\n"
+			if defined($opts->{$k});
+		delete $job->{$k};
+	    }
+
+	    my $schedule_updated = 0;
+	    if ($param->{schedule} ne $job->{schedule}) {
+		$schedule_updated = 1;
+	    }
+
+	    foreach my $k (keys %$param) {
+		$job->{$k} = $param->{$k};
+	    }
+
+	    if ($schedule_updated) {
+		PVE::Jobs::updated_job_schedule($id, 'realmsync');
+	    }
+
+	    cfs_write_file('jobs.cfg', $jobs);
+	    return;
+	};
+	cfs_lock_file('jobs.cfg', undef, $update_job);
+	die "$@" if ($@);
+    }});
+
+
+__PACKAGE__->register_method({
+    name => 'delete_job',
+    path => '{id}',
+    method => 'DELETE',
+    description => "Delete domain-sync job definition.",
+    permissions => {
+	check => ['perm', '/', ['Sys.Modify']],
+    },
+    protected => 1,
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	},
+    },
+    returns => { type => 'null' },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = $param->{id};
+
+	my $delete_job = sub {
+	    my $jobs = cfs_read_file('jobs.cfg');
+
+	    if (!defined($jobs->{ids}->{$id})) {
+		raise_param_exc({ id => "No such job '$id'" });
+	    }
+	    delete $jobs->{ids}->{$id};
+
+	    PVE::Jobs::remove_job($id, 'realmsync');
+
+	    cfs_write_file('jobs.cfg', $jobs);
+	    unlink "/etc/pve/priv/jobs/realmsync-$id.json";
+	};
+
+	cfs_lock_file('jobs.cfg', undef, $delete_job);
+	die "$@" if $@;
+
+	return undef;
+    }});
+1;
diff --git a/src/PVE/API2/Makefile b/src/PVE/API2/Makefile
index 2817f48..9aeb047 100644
--- a/src/PVE/API2/Makefile
+++ b/src/PVE/API2/Makefile
@@ -2,6 +2,7 @@
 API2_SOURCES= 		 	\
 	AccessControl.pm 	\
 	Domains.pm	 	\
+	Domainsync.pm	 	\
 	ACL.pm		 	\
 	Role.pm		 	\
 	Group.pm	 	\
diff --git a/src/PVE/Jobs/Makefile b/src/PVE/Jobs/Makefile
new file mode 100644
index 0000000..9eed1b2
--- /dev/null
+++ b/src/PVE/Jobs/Makefile
@@ -0,0 +1,6 @@
+SOURCES=RealmSync.pm
+
+.PHONY: install
+install: ${SOURCES}
+	install -d -m 0755 ${DESTDIR}${PERLDIR}/PVE/Jobs
+	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Jobs/$$i; done
diff --git a/src/PVE/Jobs/RealmSync.pm b/src/PVE/Jobs/RealmSync.pm
new file mode 100644
index 0000000..f19f4a8
--- /dev/null
+++ b/src/PVE/Jobs/RealmSync.pm
@@ -0,0 +1,192 @@
+package PVE::Jobs::RealmSync;
+
+use strict;
+use warnings;
+use JSON;
+
+use PVE::API2::Domains;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Cluster;
+use PVE::CalendarEvent;
+use PVE::Tools;
+
+use base qw(PVE::Jobs::Plugin);
+
+sub type {
+    return 'realmsync';
+}
+
+my $props = get_standard_option('realm-sync-options', {
+    realm => get_standard_option('realm'),
+});
+
+sub properties {
+    return $props;
+}
+
+sub options {
+    my $options = {
+	enabled => { optional => 1 },
+	schedule => {},
+	comment => { optional => 1 },
+	node => { optional => 1 },
+	scope => {},
+    };
+    foreach my $opt (keys %$props) {
+	next if defined($options->{$opt});
+	if ($props->{$opt}->{optional}) {
+	    $options->{$opt} = { optional => 1 };
+	} else {
+	    $options->{$opt} = {};
+	}
+    }
+    $options->{realm}->{fixed} = 1;
+
+    return $options;
+}
+
+sub decode_value {
+    my ($class, $type, $key, $value) = @_;
+    return $value;
+}
+
+sub encode_value {
+    my ($class, $type, $key, $value) = @_;
+    return $value;
+}
+
+sub createSchema {
+    my ($class, $skip_type) = @_;
+    my $schema = $class->SUPER::createSchema($skip_type);
+
+    my $opts = $class->options();
+    for my $opt (keys $schema->{properties}->%*) {
+	next if defined($opts->{$opt}) || $opt eq 'id';
+	delete $schema->{properties}->{$opt};
+    }
+
+    # remove legacy props
+    delete $schema->{properties}->{full};
+    delete $schema->{properties}->{purge};
+
+    return $schema;
+}
+
+sub updateSchema {
+    my ($class, $skip_type) = @_;
+    my $schema = $class->SUPER::updateSchema($skip_type);
+
+    my $opts = $class->options();
+    for my $opt (keys $schema->{properties}->%*) {
+	next if defined($opts->{$opt});
+	next if $opt eq 'id' || $opt eq 'delete';
+	delete $schema->{properties}->{$opt};
+    }
+
+    # remove legacy props
+    delete $schema->{properties}->{full};
+    delete $schema->{properties}->{purge};
+
+    return $schema;
+}
+
+sub run {
+    my ($class, $conf, $id, $schedule) = @_;
+
+    my $node = $conf->{node};
+    foreach my $opt (keys %$conf) {
+	delete $conf->{$opt} if !defined($props->{$opt});
+    }
+
+    my $realm = $conf->{realm};
+    my $statedir = "/etc/pve/priv/jobs";
+    my $statefile = "$statedir/realmsync-$id.json";
+
+    if (!defined($node)) {
+	# cluster synced
+	my $curtime = time();
+	my $nodename = PVE::INotify::nodename();
+	PVE::Cluster::cfs_update();
+	# check statefile in pmxcfs if we should start
+	my $shouldrun = PVE::Cluster::cfs_lock_domain($realm, undef, sub {
+	    mkdir $statedir;
+	    my $raw = eval { PVE::Tools::file_get_contents($statefile) } // '';
+	    my $members = PVE::Cluster::get_members();
+
+	    my $state = ($raw =~ m/^(\{.*\})$/) ? decode_json($1) : {};
+	    my $lastnode = $state->{node} // $nodename;
+	    my $lastupid = $state->{upid};
+	    my $lasttime = $state->{time};
+
+	    my $node_online = 0;
+	    if ($lastnode ne $nodename) {
+		if (defined(my $member = $members->{$lastnode})) {
+		    $node_online = $member->{online};
+		}
+	    } else {
+		$node_online = 1;
+	    }
+
+	    if (defined($lastupid)) {
+		# first check if the next run is scheduled
+		if (my $parsed = PVE::Tools::upid_decode($lastupid, 1)) {
+		    my $calspec = PVE::CalendarEvent::parse_calendar_event($schedule);
+		    my $next_sync = PVE::CalendarEvent::compute_next_event($calspec, $parsed->{starttime});
+		    return 0 if !defined($next_sync) || $curtime < $next_sync; # not yet its (next) turn
+		}
+		# check if still running and node is online
+		my $tasks = PVE::Cluster::get_tasklist();
+		for my $task (@$tasks) {
+		    next if $task->{upid} ne $lastupid;
+		    last if defined($task->{endtime}); # it's already finished
+		    last if !$node_online; # it's not finished but the node is offline
+		    return 0; # not finished and online
+		}
+	    } elsif (defined($lasttime) && ($lasttime+60) > $curtime && $node_online) {
+		# another node started in the last 60 seconds and is online
+		return 0;
+	    }
+
+	    # any of the following coniditions should be true here:
+	    # * it was started on another node but that node is offline now
+	    # * it was started but either too long ago, or with an error
+	    # * the started task finished
+
+	    PVE::Tools::file_set_contents($statefile, encode_json({
+		node => $nodename,
+		time => $curtime,
+	    }));
+	    return 1;
+	});
+	die $@ if $@;
+
+	if ($shouldrun) {
+	    my $err = undef;
+	    my $upid = eval { PVE::API2::Domains->sync($conf) };
+	    $err = $@ if $@;
+	    PVE::Cluster::cfs_lock_domain($realm, undef, sub {
+		if ($err && !$upid) {
+		    PVE::Tools::file_set_contents($statefile, encode_json({
+			node => $nodename,
+			time => $curtime,
+			error => $err,
+		    }));
+		    die "$err\n";
+		}
+
+		PVE::Tools::file_set_contents($statefile, encode_json({
+		    node => $nodename,
+		    upid => $upid,
+		}));
+	    });
+	    die $@ if $@;
+	    return $upid;
+	} else {
+	    return "OK";
+	}
+    }
+
+    return PVE::API2::Domains->sync($conf);
+}
+
+1;
diff --git a/src/PVE/Makefile b/src/PVE/Makefile
index c839d8f..dfc5314 100644
--- a/src/PVE/Makefile
+++ b/src/PVE/Makefile
@@ -8,3 +8,4 @@ install:
 	install -D -m 0644 TokenConfig.pm ${DESTDIR}${PERLDIR}/PVE/TokenConfig.pm
 	make -C API2 install
 	make -C CLI install
+	make -C Jobs install
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pve-devel] [PATCH manager 1/4] Jobs: provide id and schedule to the job
  2022-04-04  8:54 [pve-devel] [PATCH access-control/manager] add realm sync jobs Dominik Csapak
  2022-04-04  8:54 ` [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs Dominik Csapak
@ 2022-04-04  8:54 ` Dominik Csapak
  2022-11-07 16:14   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 2/4] Jobs/Plugin: remove 'vzdump' from id description Dominik Csapak
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Dominik Csapak @ 2022-04-04  8:54 UTC (permalink / raw)
  To: pve-devel

we need that for realmsync jobs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/Jobs.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Jobs.pm b/PVE/Jobs.pm
index da648630..9bf3b7fa 100644
--- a/PVE/Jobs.pm
+++ b/PVE/Jobs.pm
@@ -238,7 +238,7 @@ sub run_jobs {
 
 	my $plugin = PVE::Jobs::Plugin->lookup($type);
 	if (starting_job($id, $type)) {
-	    my $upid = eval { $plugin->run($cfg) };
+	    my $upid = eval { $plugin->run($cfg, $id, $schedule) };
 	    if (my $err = $@) {
 		warn $@ if $@;
 		started_job($id, $type, undef, $err);
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pve-devel] [PATCH manager 2/4] Jobs/Plugin: remove 'vzdump' from id description
  2022-04-04  8:54 [pve-devel] [PATCH access-control/manager] add realm sync jobs Dominik Csapak
  2022-04-04  8:54 ` [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs Dominik Csapak
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 1/4] Jobs: provide id and schedule to the job Dominik Csapak
@ 2022-04-04  8:54 ` Dominik Csapak
  2022-11-07 16:14   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 3/4] Jobs: add RealmSync Plugin and register it Dominik Csapak
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 4/4] ui: add Realm Sync panel Dominik Csapak
  4 siblings, 1 reply; 14+ messages in thread
From: Dominik Csapak @ 2022-04-04  8:54 UTC (permalink / raw)
  To: pve-devel

it's not necessarily a vzdump job

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/Jobs/Plugin.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/Jobs/Plugin.pm b/PVE/Jobs/Plugin.pm
index 6098360b..541b6f43 100644
--- a/PVE/Jobs/Plugin.pm
+++ b/PVE/Jobs/Plugin.pm
@@ -17,7 +17,7 @@ my $defaultData = {
     propertyList => {
 	type => { description => "Section type." },
 	id => {
-	    description => "The ID of the VZDump job.",
+	    description => "The ID of the job.",
 	    type => 'string',
 	    format => 'pve-configid',
 	    maxLength => 64,
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pve-devel] [PATCH manager 3/4] Jobs: add RealmSync Plugin and register it
  2022-04-04  8:54 [pve-devel] [PATCH access-control/manager] add realm sync jobs Dominik Csapak
                   ` (2 preceding siblings ...)
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 2/4] Jobs/Plugin: remove 'vzdump' from id description Dominik Csapak
@ 2022-04-04  8:54 ` Dominik Csapak
  2022-11-07 16:14   ` [pve-devel] applied: " Thomas Lamprecht
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 4/4] ui: add Realm Sync panel Dominik Csapak
  4 siblings, 1 reply; 14+ messages in thread
From: Dominik Csapak @ 2022-04-04  8:54 UTC (permalink / raw)
  To: pve-devel

so that realmsync jobs get executed

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 PVE/Jobs.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/PVE/Jobs.pm b/PVE/Jobs.pm
index 9bf3b7fa..cc71b120 100644
--- a/PVE/Jobs.pm
+++ b/PVE/Jobs.pm
@@ -7,9 +7,11 @@ use JSON;
 use PVE::Cluster qw(cfs_read_file cfs_lock_file);
 use PVE::Jobs::Plugin;
 use PVE::Jobs::VZDump;
+use PVE::Jobs::RealmSync;
 use PVE::Tools;
 
 PVE::Jobs::VZDump->register();
+PVE::Jobs::RealmSync->register();
 PVE::Jobs::Plugin->init();
 
 my $state_dir = "/var/lib/pve-manager/jobs";
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pve-devel] [PATCH manager 4/4] ui: add Realm Sync panel
  2022-04-04  8:54 [pve-devel] [PATCH access-control/manager] add realm sync jobs Dominik Csapak
                   ` (3 preceding siblings ...)
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 3/4] Jobs: add RealmSync Plugin and register it Dominik Csapak
@ 2022-04-04  8:54 ` Dominik Csapak
  4 siblings, 0 replies; 14+ messages in thread
From: Dominik Csapak @ 2022-04-04  8:54 UTC (permalink / raw)
  To: pve-devel

a typical CRUD panel for adding/editing/removing realm sync jobs

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 www/manager6/Makefile           |   1 +
 www/manager6/dc/Config.js       |   7 +
 www/manager6/dc/RealmSyncJob.js | 315 ++++++++++++++++++++++++++++++++
 3 files changed, 323 insertions(+)
 create mode 100644 www/manager6/dc/RealmSyncJob.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index e6e01bd1..7fcdb563 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -159,6 +159,7 @@ JSSRC= 							\
 	dc/UserEdit.js					\
 	dc/UserView.js					\
 	dc/MetricServerView.js				\
+	dc/RealmSyncJob.js				\
 	lxc/CmdMenu.js					\
 	lxc/Config.js					\
 	lxc/CreateWizard.js				\
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index 9c54b19d..1f92cb0e 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -140,6 +140,13 @@ Ext.define('PVE.dc.Config', {
 		iconCls: 'fa fa-address-book-o',
 		itemId: 'domains',
 	    },
+	    {
+		xtype: 'pveRealmSyncJobView',
+		title: gettext('Realm Sync'),
+		groups: ['permissions'],
+		iconCls: 'fa fa-refresh',
+		itemId: 'realmsyncjobs',
+	    },
 	    {
 		xtype: 'pveHAStatus',
 		title: 'HA',
diff --git a/www/manager6/dc/RealmSyncJob.js b/www/manager6/dc/RealmSyncJob.js
new file mode 100644
index 00000000..4ac61745
--- /dev/null
+++ b/www/manager6/dc/RealmSyncJob.js
@@ -0,0 +1,315 @@
+Ext.define('PVE.dc.RealmSyncJobView', {
+    extend: 'Ext.grid.Panel',
+    alias: 'widget.pveRealmSyncJobView',
+
+    stateful: true,
+    stateId: 'grid-realmsyncjobs',
+
+    controller: {
+	xclass: 'Ext.app.ViewController',
+
+	addRealmSyncJob: function(button) {
+	    let me = this;
+	    Ext.create(`PVE.dc.RealmSyncJobEdit`, {
+		autoShow: true,
+		listeners: {
+		    destroy: () => me.reload(),
+		},
+	    });
+	},
+
+	editRealmSyncJob: function() {
+	    let me = this;
+	    let view = me.getView();
+	    let selection = view.getSelection();
+	    if (!selection || selection.length < 1) {
+		return;
+	    }
+
+	    Ext.create(`PVE.dc.RealmSyncJobEdit`, {
+		jobid: selection[0].data.id,
+		autoShow: true,
+		listeners: {
+		    destroy: () => me.reload(),
+		},
+	    });
+	},
+
+	reload: function() {
+	    this.getView().getStore().load();
+	},
+    },
+
+    store: {
+	autoLoad: true,
+	id: 'metricservers',
+	proxy: {
+	    type: 'proxmox',
+	    url: '/api2/json/access/domainsyncjobs',
+	},
+    },
+
+    columns: [
+	{
+	    header: gettext('Enabled'),
+	    width: 80,
+	    dataIndex: 'enabled',
+	    xtype: 'checkcolumn',
+	    sortable: true,
+	    disabled: true,
+	    disabledCls: 'x-item-enabled',
+	    stopSelection: false,
+	},
+	{
+	    text: gettext('Name'),
+	    flex: 1,
+	    dataIndex: 'id',
+	},
+	{
+	    header: gettext('Node'),
+	    width: 100,
+	    sortable: true,
+	    dataIndex: 'node',
+	    renderer: function(value) {
+		if (value) {
+		    return value;
+		}
+		return gettext('Any');
+	    },
+	},
+	{
+	    text: gettext('Realm'),
+	    width: 200,
+	    dataIndex: 'realm',
+	},
+	{
+	    header: gettext('Schedule'),
+	    width: 150,
+	    dataIndex: 'schedule',
+	},
+	{
+	    text: gettext('Next Run'),
+	    dataIndex: 'next-run',
+	    width: 150,
+	    renderer: PVE.Utils.render_next_event,
+	},
+	{
+	    header: gettext('Comment'),
+	    dataIndex: 'comment',
+	    renderer: Ext.htmlEncode,
+	    sorter: (a, b) => (a.data.comment || '').localeCompare(b.data.comment || ''),
+	    flex: 1,
+	},
+    ],
+
+    tbar: [
+	{
+	    text: gettext('Add'),
+	    handler: 'addRealmSyncJob',
+	},
+	{
+	    text: gettext('Edit'),
+	    xtype: 'proxmoxButton',
+	    handler: 'editRealmSyncJob',
+	    disabled: true,
+	},
+	{
+	    xtype: 'proxmoxStdRemoveButton',
+	    baseurl: `/api2/extjs/access/domainsyncjobs`,
+	    callback: 'reload',
+	},
+    ],
+
+    listeners: {
+	itemdblclick: 'editRealmSyncJob',
+    },
+
+    initComponent: function() {
+	var me = this;
+
+	me.callParent();
+
+	Proxmox.Utils.monStoreErrors(me, me.getStore());
+    },
+});
+
+Ext.define('PVE.dc.RealmSyncJobEdit', {
+    extend: 'Proxmox.window.Edit',
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    subject: gettext('Realm Sync Job'),
+    onlineHelp: 'pveum_ldap_sync',
+
+    // don't focus the schedule field on edit
+    defaultFocus: 'field[name=id]',
+
+    cbindData: function() {
+	let me = this;
+	me.isCreate = !me.jobid;
+	me.jobid = me.jobid || "";
+	me.url = `/api2/extjs/access/domainsyncjobs/${me.jobid}`;
+	me.method = me.isCreate ? 'POST' : 'PUT';
+	if (!me.isCreate) {
+	    me.subject = `${me.subject}: ${me.jobid}`;
+	}
+	return {};
+    },
+
+    submitUrl: function(url, values) {
+	return this.isCreate ? `${url}/${values.id}` : url;
+    },
+
+    items: [
+	{
+	    xtype: 'inputpanel',
+
+	    onGetValues: function(values) {
+		let me = this;
+
+		let vanished_opts = [];
+		['acl', 'entry', 'properties'].forEach((prop) => {
+		    if (values[`remove-vanished-${prop}`]) {
+			vanished_opts.push(prop);
+		    }
+		    delete values[`remove-vanished-${prop}`];
+		});
+
+		values['remove-vanished'] = vanished_opts.join(';');
+		PVE.Utils.delete_if_default(values, 'remove-vanished', '');
+
+		if (me.isCreate) {
+		    delete values.delete; // on create we cannot delete values
+		}
+
+		return values;
+	    },
+
+	    column1: [
+		{
+		    xtype: 'pmxDisplayEditField',
+		    name: 'id',
+		    fieldLabel: gettext('Name'),
+		    allowBlank: false,
+		    cbind: {
+			editable: '{isCreate}',
+			value: '{jobid}',
+		    },
+		},
+		{
+		    xtype: 'pveCalendarEvent',
+		    fieldLabel: gettext('Schedule'),
+		    allowBlank: false,
+		    name: 'schedule',
+		},
+		{
+		    xtype: 'proxmoxcheckbox',
+		    fieldLabel: gettext('Enable'),
+		    name: 'enabled',
+		    uncheckedValue: 0,
+		    defaultValue: 1,
+		    checked: true,
+		},
+	    ],
+
+	    column2: [
+		{
+		    xtype: 'pmxDisplayEditField',
+		    editConfig: {
+			xtype: 'pmxRealmComboBox',
+		    },
+		    cbind: {
+			editable: '{isCreate}',
+		    },
+		    fieldLabel: gettext('Realm'),
+		    name: 'realm',
+		},
+		{
+		    xtype: 'proxmoxKVComboBox',
+		    name: 'scope',
+		    fieldLabel: gettext('Scope'),
+		    value: '',
+		    emptyText: gettext('No default available'),
+		    deleteEmpty: false,
+		    allowBlank: false,
+		    comboItems: [
+			['users', gettext('Users')],
+			['groups', gettext('Groups')],
+			['both', gettext('Users and Groups')],
+		    ],
+		},
+		{
+		    xtype: 'proxmoxKVComboBox',
+		    value: '1',
+		    deleteEmpty: false,
+		    allowBlank: false,
+		    comboItems: [
+			['1', Proxmox.Utils.yesText],
+			['0', Proxmox.Utils.noText],
+		    ],
+		    name: 'enable-new',
+		    fieldLabel: gettext('Enable new'),
+		},
+	    ],
+
+	    columnB: [
+		{
+		    xtype: 'proxmoxtextfield',
+		    name: 'comment',
+		    fieldLabel: gettext('Comment'),
+		    cbind: {
+			deleteEmpty: '{!isCreate}',
+		    },
+		},
+		{
+		    xtype: 'displayfield',
+		    value: gettext('Remove Vanished'),
+		},
+		{
+		    xtype: 'proxmoxcheckbox',
+		    fieldLabel: gettext('ACL'),
+		    name: 'remove-vanished-acl',
+		    boxLabel: gettext('Remove ACLs of users and groups which are not in the sync response.'),
+		},
+		{
+		    xtype: 'proxmoxcheckbox',
+		    fieldLabel: gettext('Entry'),
+		    name: 'remove-vanished-entry',
+		    boxLabel: gettext('Remove users and groups that are not in the sync response.'),
+		},
+		{
+		    xtype: 'proxmoxcheckbox',
+		    fieldLabel: gettext('Properties'),
+		    name: 'remove-vanished-properties',
+		    boxLabel: gettext('Remove user-properties that are not in the sync response.'),
+		},
+		{
+		    xtype: 'displayfield',
+		    reference: 'defaulthint',
+		    value: gettext('Default sync options can be set by editing the realm.'),
+		    userCls: 'pmx-hint',
+		    hidden: true,
+		},
+	    ],
+	},
+    ],
+
+    initComponent: function() {
+	let me = this;
+	me.callParent();
+	if (me.jobid) {
+	    me.load({
+		success: function(response, options) {
+		    let values = response.result.data;
+
+		    if (values['remove-vanished']) {
+			let opts = values['remove-vanished'].split(';');
+			for (const opt of opts) {
+			    values[`remove-vanished-${opt}`] = 1;
+			}
+		    }
+		    me.down('inputpanel').setValues(values);
+		},
+	    });
+	}
+    },
+});
-- 
2.30.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pve-devel] applied: [PATCH manager 2/4] Jobs/Plugin: remove 'vzdump' from id description
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 2/4] Jobs/Plugin: remove 'vzdump' from id description Dominik Csapak
@ 2022-11-07 16:14   ` Thomas Lamprecht
  0 siblings, 0 replies; 14+ messages in thread
From: Thomas Lamprecht @ 2022-11-07 16:14 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

On 04/04/2022 at 10:54, Dominik Csapak wrote:
> it's not necessarily a vzdump job
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  PVE/Jobs/Plugin.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!




^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pve-devel] applied: [PATCH manager 3/4] Jobs: add RealmSync Plugin and register it
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 3/4] Jobs: add RealmSync Plugin and register it Dominik Csapak
@ 2022-11-07 16:14   ` Thomas Lamprecht
  2022-11-07 16:15     ` Thomas Lamprecht
  0 siblings, 1 reply; 14+ messages in thread
From: Thomas Lamprecht @ 2022-11-07 16:14 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

On 04/04/2022 at 10:54, Dominik Csapak wrote:
> so that realmsync jobs get executed
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  PVE/Jobs.pm | 2 ++
>  1 file changed, 2 insertions(+)
> 
>

applied, thanks!




^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pve-devel] applied: [PATCH manager 1/4] Jobs: provide id and schedule to the job
  2022-04-04  8:54 ` [pve-devel] [PATCH manager 1/4] Jobs: provide id and schedule to the job Dominik Csapak
@ 2022-11-07 16:14   ` Thomas Lamprecht
  0 siblings, 0 replies; 14+ messages in thread
From: Thomas Lamprecht @ 2022-11-07 16:14 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

On 04/04/2022 at 10:54, Dominik Csapak wrote:
> we need that for realmsync jobs
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  PVE/Jobs.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, thanks!




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pve-devel] applied: [PATCH manager 3/4] Jobs: add RealmSync Plugin and register it
  2022-11-07 16:14   ` [pve-devel] applied: " Thomas Lamprecht
@ 2022-11-07 16:15     ` Thomas Lamprecht
  0 siblings, 0 replies; 14+ messages in thread
From: Thomas Lamprecht @ 2022-11-07 16:15 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

On 07/11/2022 at 17:14, Thomas Lamprecht wrote:
> On 04/04/2022 at 10:54, Dominik Csapak wrote:
>> so that realmsync jobs get executed
>>
>> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
>> ---
>>  PVE/Jobs.pm | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>>
> 
> applied, thanks!


replied to the wrong one, did not apply this one as it requires the pve-access-control
stuff first.




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs
  2022-04-04  8:54 ` [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs Dominik Csapak
@ 2022-11-07 17:05   ` Thomas Lamprecht
  2022-11-08  8:20     ` Dominik Csapak
  0 siblings, 1 reply; 14+ messages in thread
From: Thomas Lamprecht @ 2022-11-07 17:05 UTC (permalink / raw)
  To: Proxmox VE development discussion, Dominik Csapak

mid- to high-level review, but it may be that some things are slightly
out of date, since this was sent ~7 months ago (sorry for the delay...)

On 04/04/2022 at 10:54, Dominik Csapak wrote:
> to be able to define automated jobs that sync LDAP/AD realms
> 
> The jobs plugin contains special handling for when no node is given,
> since we only want the job to run on a single node when it triggers.
> For that, we save a state file in /etc/pve/priv/jobs/ which contains
> the node/time/UPID of the node that runs the job. The first node that
> is able to lock the realm (via cfs_lock_domain) "wins" and may
> sync from the LDAP server.
> 
> In case a specific node was selected, this is omitted, and the Jobs
> handling will not let the job run on other nodes anyway.
> 
> The API part is our usual section-config CRUD API, but specialized
> for this specific type of job.
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/PVE/API2/AccessControl.pm |   6 +
>  src/PVE/API2/Domainsync.pm    | 278 ++++++++++++++++++++++++++++++++++
>  src/PVE/API2/Makefile         |   1 +
>  src/PVE/Jobs/Makefile         |   6 +
>  src/PVE/Jobs/RealmSync.pm     | 192 +++++++++++++++++++++++
>  src/PVE/Makefile              |   1 +
>  6 files changed, 484 insertions(+)
>  create mode 100644 src/PVE/API2/Domainsync.pm
>  create mode 100644 src/PVE/Jobs/Makefile
>  create mode 100644 src/PVE/Jobs/RealmSync.pm
> 
> diff --git a/src/PVE/API2/AccessControl.pm b/src/PVE/API2/AccessControl.pm
> index 5d78c6f..6f32bf2 100644
> --- a/src/PVE/API2/AccessControl.pm
> +++ b/src/PVE/API2/AccessControl.pm
> @@ -15,6 +15,7 @@ use PVE::RESTHandler;
>  use PVE::AccessControl;
>  use PVE::JSONSchema qw(get_standard_option);
>  use PVE::API2::Domains;
> +use PVE::API2::Domainsync;
>  use PVE::API2::User;
>  use PVE::API2::Group;
>  use PVE::API2::Role;
> @@ -57,6 +58,11 @@ __PACKAGE__->register_method ({
>      path => 'domains',
>  });
>  
> +__PACKAGE__->register_method ({
> +    subclass => "PVE::API2::Domainsync",
> +    path => 'domainsyncjobs',
> +});

I'd place this in a `jobs` sub-directory, IOW the above would then be:

/access/jobs/domain-sync

> +
>  __PACKAGE__->register_method ({
>      subclass => "PVE::API2::OpenId",
>      path => 'openid',
> diff --git a/src/PVE/API2/Domainsync.pm b/src/PVE/API2/Domainsync.pm
> new file mode 100644
> index 0000000..22333bf
> --- /dev/null
> +++ b/src/PVE/API2/Domainsync.pm
> @@ -0,0 +1,278 @@
> +package PVE::API2::Domainsync;


As "Domainsync" is rather generall and not so telling outside of the access
control context, I'd change module name and file location to:

PVE::API2::AccessControl::Job::DomainSync;

While maybe a bit unwieldy, we only need to refer to this about one
time, when registering it in the manager.

If you want, you could place PVE::API2::AccessControl::Job (which will
be just a short index endpoint and subdir registration) and this one in
the same file.


> +
> +use strict;
> +use warnings;
> +
> +use JSON;
> +
> +use PVE::Exception qw(raise_param_exc);
> +use PVE::Tools qw(extract_param);
> +use PVE::Cluster qw (cfs_read_file cfs_write_file cfs_lock_file);
> +use PVE::AccessControl;
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::Jobs::RealmSync;

please group by
- Perl modules
- Proxmox, but not in this repo
- in this repo
and sort each group, thx!

> +
> +use PVE::SafeSyslog;
> +use PVE::RESTHandler;
> +use PVE::Auth::Plugin;
> +
> +use base qw(PVE::RESTHandler);
> +
> +my $get_cluster_last_run = sub {
> +    my ($jobid) = @_;
> +
> +    my $raw = PVE::Tools::file_get_contents("/etc/pve/priv/jobs/realmsync-$jobid.json");
> +    my $state = decode_json($raw);

maybe wrap this in an eval and add some error context on $@, as
decode_json errors are pretty ugly and hard to relate to, as there's no
file name info or the like available.
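
e.g., something like (untested sketch):

    my $state = eval { decode_json($raw) };
    die "failed to parse state file of realmsync job '$jobid' - $@" if $@;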

> +    if (my $upid = $state->{upid}) {
> +	if (my $decoded = PVE::Tools::upid_decode($upid)) {
> +	    return $decoded->{starttime};
> +	}
> +    } else {
> +	return $state->{time};

a bit code-golfy, so no hard feelings, but it could be written as:

my $upid = $state->{upid} or return $state->{time};

> +    }
> +
> +    return undef;
> +};
> +
> +__PACKAGE__->register_method ({
> +    name => 'syncjob_index',
> +    path => '',
> +    method => 'GET',
> +    description => "List configured domain-sync-jobs.",
> +    permissions => {
> +	check => ['perm', '/', ['Sys.Audit']],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {},
> +    },
> +    returns => {
> +	type => 'array',
> +	items => {
> +	    type => "object",
> +	    properties => {
> +		id => {
> +		    description => "The ID of the entry.",
> +		    type => 'string'
> +		},
> +		disable => {
> +		    description => "Flag to disable the plugin.",
> +		    type => 'boolean',
> +		},
> +		# TODO ADD missing properties
> +	    },
> +	},
> +	links => [ { rel => 'child', href => "{id}" } ],
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $user = $rpcenv->get_user();
> +
> +	my $jobs_data = cfs_read_file('jobs.cfg');
> +	my $order = $jobs_data->{order};
> +	my $jobs = $jobs_data->{ids};
> +
> +	my $res = [];
> +	foreach my $jobid (sort { $order->{$a} <=> $order->{$b} } keys %$jobs) {

nit: we prefer the shorter `for` for new code; it has been the same as
`foreach` for a while now, but, well, shorter.

> +	    my $job = $jobs->{$jobid};
> +	    next if $job->{type} ne 'realmsync';
> +
> +	    if (my $schedule = $job->{schedule}) {
> +		$job->{'last-run'} = eval { $get_cluster_last_run->($jobid) };
> +		my $last_run = $job->{'last-run'} // time(); # current time as fallback
> +		my $calspec = Proxmox::RS::CalendarEvent->new($schedule);

mixed variable naming (snake case before and below, but none(?) here),

> +		my $next_run = $calspec->compute_next_event($last_run);
> +		$job->{'next-run'} = $next_run if defined($next_run);
> +	    }
> +
> +	    push @$res, $job;
> +	}
> +
> +	return $res;
> +    }});
> +
> +__PACKAGE__->register_method({
> +    name => 'read_job',
> +    path => '{id}',
> +    method => 'GET',
> +    description => "Read domain-sync job definition.",
> +    permissions => {
> +	check => ['perm', '/', ['Sys.Audit']],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    id => {
> +		type => 'string',
> +		format => 'pve-configid',
> +	    },
> +	},
> +    },
> +    returns => {
> +	type => 'object',
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $jobs = cfs_read_file('jobs.cfg');
> +	my $job = $jobs->{ids}->{$param->{id}};

code style nit: please avoid using hash access inside for hash keys.
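
i.e., rather something like (sketch):

    my $id = $param->{id};
    my $job = $jobs->{ids}->{$id};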

> +	return $job if $job && $job->{type} eq 'realmsync';

is this outdated due to being from april? or do we really share the
ID namespace between all plugin types?

> +
> +	raise_param_exc({ id => "No such job '$param->{id}'" });



> +
> +    }});
> +
> +__PACKAGE__->register_method({
> +    name => 'create_job',
> +    path => '{id}',
> +    method => 'POST',
> +    protected => 1,
> +    description => "Create new domain-sync job.",
> +    permissions => {
> +	description => "'Realm.AllocateUser' on '/access/realm/<realm>' and "
> +	    ." 'User.Modify' permissions to '/access/groups/'.",

results in double whitespace at "and  'User.Modify'"
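
i.e., one of the two spaces needs to go, e.g.:

    description => "'Realm.AllocateUser' on '/access/realm/<realm>' and"
	." 'User.Modify' permissions to '/access/groups/'.",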

> +	check => [ 'and',
> +	    ['perm', '/access/realm/{realm}', ['Realm.AllocateUser']],
> +	    ['perm', '/access/groups', ['User.Modify']],

hmm, is this the same as for a manual sync? I guess it's OK then, just
seems a bit lax.

> +	],
> +    },
> +    parameters => PVE::Jobs::RealmSync->createSchema(),
> +    returns => { type => 'null' },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $id = extract_param($param, 'id');
> +
> +	cfs_lock_file('jobs.cfg', undef, sub {
> +	    my $data = cfs_read_file('jobs.cfg');
> +
> +	    die "Job '$id' already exists\n"
> +		if $data->{ids}->{$id};
> +
> +	    my $plugin = PVE::Jobs::Plugin->lookup('realmsync');
> +	    my $opts = $plugin->check_config($id, $param, 1, 1);
> +
> +	    $data->{ids}->{$id} = $opts;
> +
> +	    PVE::Jobs::create_job($id, 'realmsync');
> +
> +	    cfs_write_file('jobs.cfg', $data);
> +	});
> +	die "$@" if ($@);
> +
> +	return undef;
> +    }});
> +
> +__PACKAGE__->register_method({
> +    name => 'update_job',
> +    path => '{id}',
> +    method => 'PUT',
> +    protected => 1,
> +    description => "Update domain-sync job definition.",
> +    permissions => {
> +	description => "'Realm.AllocateUser' on '/access/realm/<realm>' and "
> +	    ." 'User.Modify' permissions to '/access/groups/'.",
> +	check => [ 'and',
> +	    ['perm', '/access/realm/{realm}', ['Realm.AllocateUser']],
> +	    ['perm', '/access/groups', ['User.Modify']],
> +	],
> +    },
> +    parameters => PVE::Jobs::RealmSync->updateSchema(),
> +    returns => { type => 'null' },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $id = extract_param($param, 'id');
> +	my $delete = extract_param($param, 'delete');
> +	if ($delete) {
> +	    $delete = [PVE::Tools::split_list($delete)];
> +	}
> +
> +	my $update_job = sub {
> +	    my $jobs = cfs_read_file('jobs.cfg');
> +
> +	    die "no options specified\n" if !scalar(keys %$param);
> +
> +	    my $plugin = PVE::Jobs::Plugin->lookup('realmsync');
> +	    my $opts = $plugin->check_config($id, $param, 0, 1);
> +
> +	    my $job = $jobs->{ids}->{$id};
> +	    die "no such realmsync job\n" if !$job || $job->{type} ne 'realmsync';
> +
> +	    my $options = $plugin->options();
> +	    foreach my $k (@$delete) {
> +		my $d = $options->{$k} || die "no such option '$k'\n";
> +		die "unable to delete required option '$k'\n" if !$d->{optional};
> +		die "unable to delete fixed option '$k'\n" if $d->{fixed};
> +		die "cannot set and delete property '$k' at the same time!\n"
> +			if defined($opts->{$k});

don't we have that repeated a few times (didn't check actively, but seems to ring
some bells to me)? isn't this something some schema thingy w/could enforce?
(also @wolfgang for the latter)

> +		delete $job->{$k};
> +	    }
> +
> +	    my $schedule_updated = 0;
> +	    if ($param->{schedule} ne $job->{schedule}) {
> +		$schedule_updated = 1;
> +	    }

erm, why not?:

my $schedule_updated = $param->{schedule} ne $job->{schedule};

> +
> +	    foreach my $k (keys %$param) {

probably just a copy-paste "error", but please: s/foreach/for/, or even:

$job->{$_} = $param->{$_} for keys $param->%*;

> +		$job->{$k} = $param->{$k};
> +	    }
> +
> +	    if ($schedule_updated) {
> +		PVE::Jobs::updated_job_schedule($id, 'realmsync');
> +	    }
> +
> +	    cfs_write_file('jobs.cfg', $jobs);
> +	    return;
> +	};
> +	cfs_lock_file('jobs.cfg', undef, $update_job);

could use a direct sub {} instead of named $code_ref like the first
cfs_lock_file usage here does already.

> +	die "$@" if ($@);
> +    }});
> +
> +
> +__PACKAGE__->register_method({
> +    name => 'delete_job',
> +    path => '{id}',
> +    method => 'DELETE',
> +    description => "Delete domain-sync job definition.",
> +    permissions => {
> +	check => ['perm', '/', ['Sys.Modify']],
> +    },
> +    protected => 1,
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    id => {
> +		type => 'string',
> +		format => 'pve-configid',
> +	    },
> +	},
> +    },
> +    returns => { type => 'null' },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $id = $param->{id};
> +
> +	my $delete_job = sub {
> +	    my $jobs = cfs_read_file('jobs.cfg');
> +
> +	    if (!defined($jobs->{ids}->{$id})) {

why not check the job type here too, like on get/update?

> +		raise_param_exc({ id => "No such job '$id'" });
> +	    }
> +	    delete $jobs->{ids}->{$id};
> +
> +	    PVE::Jobs::remove_job($id, 'realmsync');
> +
> +	    cfs_write_file('jobs.cfg', $jobs);
> +	    unlink "/etc/pve/priv/jobs/realmsync-$id.json";
> +	};
> +
> +	cfs_lock_file('jobs.cfg', undef, $delete_job);

could use a direct sub {} instead of named $code_ref like the first
cfs_lock_file usage here does already.

> +	die "$@" if $@;
> +
> +	return undef;
> +    }});
> +1;
> diff --git a/src/PVE/API2/Makefile b/src/PVE/API2/Makefile
> index 2817f48..9aeb047 100644
> --- a/src/PVE/API2/Makefile
> +++ b/src/PVE/API2/Makefile
> @@ -2,6 +2,7 @@
>  API2_SOURCES= 		 	\
>  	AccessControl.pm 	\
>  	Domains.pm	 	\
> +	Domainsync.pm	 	\
>  	ACL.pm		 	\
>  	Role.pm		 	\
>  	Group.pm	 	\
> diff --git a/src/PVE/Jobs/Makefile b/src/PVE/Jobs/Makefile
> new file mode 100644
> index 0000000..9eed1b2
> --- /dev/null
> +++ b/src/PVE/Jobs/Makefile
> @@ -0,0 +1,6 @@
> +SOURCES=RealmSync.pm
> +
> +.PHONY: install
> +install: ${SOURCES}
> +	install -d -m 0755 ${DESTDIR}${PERLDIR}/PVE/Jobs
> +	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Jobs/$$i; done
> diff --git a/src/PVE/Jobs/RealmSync.pm b/src/PVE/Jobs/RealmSync.pm
> new file mode 100644
> index 0000000..f19f4a8
> --- /dev/null
> +++ b/src/PVE/Jobs/RealmSync.pm
> @@ -0,0 +1,192 @@
> +package PVE::Jobs::RealmSync;

Hmm, why not DomainSync? Or why is the other module called Domainsync?
No biggie, but slightly odd.

> +
> +use strict;
> +use warnings;

missing empty line between strict/warnings and the other use statements

> +use JSON;
> +
> +use PVE::API2::Domains;
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::Cluster;
> +use PVE::CalendarEvent;
> +use PVE::Tools;

pls sort

> +
> +use base qw(PVE::Jobs::Plugin);

isn't this in pve-manager? that would add a cyclic dependency and thus
add to our (well, most of the time my ;P) bootstrap PITA.

> +
> +sub type {
> +    return 'realmsync';
> +}
> +
> +my $props = get_standard_option('realm-sync-options', {
> +    realm => get_standard_option('realm'),
> +});
> +
> +sub properties {
> +    return $props;
> +}
> +
> +sub options {
> +    my $options = {
> +	enabled => { optional => 1 },
> +	schedule => {},
> +	comment => { optional => 1 },
> +	node => { optional => 1 },
> +	scope => {},
> +    };
> +    foreach my $opt (keys %$props) {

s/foreach/for/

> +	next if defined($options->{$opt});
> +	if ($props->{$opt}->{optional}) {
> +	    $options->{$opt} = { optional => 1 };
> +	} else {
> +	    $options->{$opt} = {};
> +	}
> +    }
> +    $options->{realm}->{fixed} = 1;
> +
> +    return $options;
> +}
> +
> +sub decode_value {
> +    my ($class, $type, $key, $value) = @_;
> +    return $value;
> +}
> +
> +sub encode_value {
> +    my ($class, $type, $key, $value) = @_;
> +    return $value;
> +}
> +
> +sub createSchema {
> +    my ($class, $skip_type) = @_;
> +    my $schema = $class->SUPER::createSchema($skip_type);
> +
> +    my $opts = $class->options();
> +    for my $opt (keys $schema->{properties}->%*) {
> +	next if defined($opts->{$opt}) || $opt eq 'id';
> +	delete $schema->{properties}->{$opt};
> +    }
> +
> +    # remove legacy props
> +    delete $schema->{properties}->{full};
> +    delete $schema->{properties}->{purge};
> +
> +    return $schema;
> +}
> +
> +sub updateSchema {
> +    my ($class, $skip_type) = @_;
> +    my $schema = $class->SUPER::updateSchema($skip_type);
> +
> +    my $opts = $class->options();
> +    for my $opt (keys $schema->{properties}->%*) {
> +	next if defined($opts->{$opt});
> +	next if $opt eq 'id' || $opt eq 'delete';
> +	delete $schema->{properties}->{$opt};
> +    }
> +
> +    # remove legacy props
> +    delete $schema->{properties}->{full};
> +    delete $schema->{properties}->{purge};
> +
> +    return $schema;
> +}
> +
> +sub run {
> +    my ($class, $conf, $id, $schedule) = @_;
> +
> +    my $node = $conf->{node};
> +    foreach my $opt (keys %$conf) {
> +	delete $conf->{$opt} if !defined($props->{$opt});
> +    }
> +
> +    my $realm = $conf->{realm};
> +    my $statedir = "/etc/pve/priv/jobs";
> +    my $statefile = "$statedir/realmsync-$id.json";
> +
> +    if (!defined($node)) {
> +	# cluster synced
> +	my $curtime = time();

use $now if you want it short; IMO a bit better than somewhat randomly
cut-off & directly concatenated words.

> +	my $nodename = PVE::INotify::nodename();
> +	PVE::Cluster::cfs_update();

doesn't the job infra do this for us?

> +	# check statefile in pmxcfs if we should start
> +	my $shouldrun = PVE::Cluster::cfs_lock_domain($realm, undef, sub {
> +	    mkdir $statedir;
> +	    my $raw = eval { PVE::Tools::file_get_contents($statefile) } // '';
> +	    my $members = PVE::Cluster::get_members();
> +
> +	    my $state = ($raw =~ m/^(\{.*\})$/) ? decode_json($1) : {};
> +	    my $lastnode = $state->{node} // $nodename;
> +	    my $lastupid = $state->{upid};
> +	    my $lasttime = $state->{time};
> +
> +	    my $node_online = 0;
> +	    if ($lastnode ne $nodename) {
> +		if (defined(my $member = $members->{$lastnode})) {
> +		    $node_online = $member->{online};
> +		}
> +	    } else {
> +		$node_online = 1;
> +	    }
> +
> +	    if (defined($lastupid)) {
> +		# first check if the next run is scheduled
> +		if (my $parsed = PVE::Tools::upid_decode($lastupid, 1)) {
> +		    my $calspec = PVE::CalendarEvent::parse_calendar_event($schedule);
> +		    my $next_sync = PVE::CalendarEvent::compute_next_event($calspec, $parsed->{starttime});
> +		    return 0 if !defined($next_sync) || $curtime < $next_sync; # not yet its (next) turn
> +		}
> +		# check if still running and node is online
> +		my $tasks = PVE::Cluster::get_tasklist();
> +		for my $task (@$tasks) {
> +		    next if $task->{upid} ne $lastupid;
> +		    last if defined($task->{endtime}); # it's already finished
> +		    last if !$node_online; # it's not finished but the node is offline
> +		    return 0; # not finished and online
> +		}
> +	    } elsif (defined($lasttime) && ($lasttime+60) > $curtime && $node_online) {
> +		# another node started in the last 60 seconds and is online
> +		return 0;
> +	    }
> +
> +	    # any of the following coniditions should be true here:

s/coniditions/conditions/

> +	    # * it was started on another node but that node is offline now
> +	    # * it was started but either too long ago, or with an error
> +	    # * the started task finished
> +
> +	    PVE::Tools::file_set_contents($statefile, encode_json({
> +		node => $nodename,
> +		time => $curtime,
> +	    }));
> +	    return 1;
> +	});
> +	die $@ if $@;
> +
> +	if ($shouldrun) {
> +	    my $err = undef;

drop that and..

> +	    my $upid = eval { PVE::API2::Domains->sync($conf) };
> +	    $err = $@ if $@;

just save the error with:

my $err = $@;

> +	    PVE::Cluster::cfs_lock_domain($realm, undef, sub {
> +		if ($err && !$upid) {
> +		    PVE::Tools::file_set_contents($statefile, encode_json({
> +			node => $nodename,
> +			time => $curtime,
> +			error => $err,
> +		    }));
> +		    die "$err\n";
> +		}
> +
> +		PVE::Tools::file_set_contents($statefile, encode_json({
> +		    node => $nodename,
> +		    upid => $upid,
> +		}));
> +	    });
> +	    die $@ if $@;
> +	    return $upid;
> +	} else {
> +	    return "OK";
> +	}
> +    }
> +
> +    return PVE::API2::Domains->sync($conf);
> +}
> +
> +1;
> diff --git a/src/PVE/Makefile b/src/PVE/Makefile
> index c839d8f..dfc5314 100644
> --- a/src/PVE/Makefile
> +++ b/src/PVE/Makefile
> @@ -8,3 +8,4 @@ install:
>  	install -D -m 0644 TokenConfig.pm ${DESTDIR}${PERLDIR}/PVE/TokenConfig.pm
>  	make -C API2 install
>  	make -C CLI install
> +	make -C Jobs install





^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs
  2022-11-07 17:05   ` Thomas Lamprecht
@ 2022-11-08  8:20     ` Dominik Csapak
  2022-11-08  9:24       ` Thomas Lamprecht
  0 siblings, 1 reply; 14+ messages in thread
From: Dominik Csapak @ 2022-11-08  8:20 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion

On 11/7/22 18:05, Thomas Lamprecht wrote:
[snip]
>> +    returns => {
>> +	type => 'object',
>> +    },
>> +    code => sub {
>> +	my ($param) = @_;
>> +
>> +	my $jobs = cfs_read_file('jobs.cfg');
>> +	my $job = $jobs->{ids}->{$param->{id}};
> 
> code style nit: please avoid using hash access inside for hash keys.
> 
>> +	return $job if $job && $job->{type} eq 'realmsync';
> 
> is this outdated due to being from april? or do we really share the
> ID namespace between all plugin types?

that's normal for section configs, but usually we don't notice it in
PVE, since we normally have only a single endpoint for listing all
objects, not one per 'type'
(e.g. you can't have 2 storages with the same ID but different types
either)
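
e.g., in jobs.cfg all job types share one ID namespace (sketch with
made-up entries):

    vzdump: daily-job
	    schedule daily

    realmsync: daily-job   <- not possible, 'daily-job' is already taken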

> 
>> +
>> +	raise_param_exc({ id => "No such job '$param->{id}'" });
> 
> 
> 
>> +
>> +    }});
>> +
>> +__PACKAGE__->register_method({
>> +    name => 'create_job',
>> +    path => '{id}',
>> +    method => 'POST',
>> +    protected => 1,
>> +    description => "Create new domain-sync job.",
>> +    permissions => {
>> +	description => "'Realm.AllocateUser' on '/access/realm/<realm>' and "
>> +	    ." 'User.Modify' permissions to '/access/groups/'.",
> 
> results in double whitespace at "and  'User.Modify'"
> 
>> +	check => [ 'and',
>> +	    ['perm', '/access/realm/{realm}', ['Realm.AllocateUser']],
>> +	    ['perm', '/access/groups', ['User.Modify']],
> 
> hmm, is this the same as for a manual sync? I guess it's OK then, just
> seems a bit lax.
> 

that was the idea, but I'll check again

>> +	],
>> +    },
>> +    parameters => PVE::Jobs::RealmSync->createSchema(),
>> +    returns => { type => 'null' },
>> +    code => sub {
>> +	my ($param) = @_;
>> +
>> +	my $id = extract_param($param, 'id');
>> +
>> +	cfs_lock_file('jobs.cfg', undef, sub {
>> +	    my $data = cfs_read_file('jobs.cfg');
>> +
>> +	    die "Job '$id' already exists\n"
>> +		if $data->{ids}->{$id};
>> +
>> +	    my $plugin = PVE::Jobs::Plugin->lookup('realmsync');
>> +	    my $opts = $plugin->check_config($id, $param, 1, 1);
>> +
>> +	    $data->{ids}->{$id} = $opts;
>> +
>> +	    PVE::Jobs::create_job($id, 'realmsync');
>> +
>> +	    cfs_write_file('jobs.cfg', $data);
>> +	});
>> +	die "$@" if ($@);
>> +
>> +	return undef;
>> +    }});
>> +
>> +__PACKAGE__->register_method({
>> +    name => 'update_job',
>> +    path => '{id}',
>> +    method => 'PUT',
>> +    protected => 1,
>> +    description => "Update domain-sync job definition.",
>> +    permissions => {
>> +	description => "'Realm.AllocateUser' on '/access/realm/<realm>' and "
>> +	    ." 'User.Modify' permissions to '/access/groups/'.",
>> +	check => [ 'and',
>> +	    ['perm', '/access/realm/{realm}', ['Realm.AllocateUser']],
>> +	    ['perm', '/access/groups', ['User.Modify']],
>> +	],
>> +    },
>> +    parameters => PVE::Jobs::RealmSync->updateSchema(),
>> +    returns => { type => 'null' },
>> +    code => sub {
>> +	my ($param) = @_;
>> +
>> +	my $id = extract_param($param, 'id');
>> +	my $delete = extract_param($param, 'delete');
>> +	if ($delete) {
>> +	    $delete = [PVE::Tools::split_list($delete)];
>> +	}
>> +
>> +	my $update_job = sub {
>> +	    my $jobs = cfs_read_file('jobs.cfg');
>> +
>> +	    die "no options specified\n" if !scalar(keys %$param);
>> +
>> +	    my $plugin = PVE::Jobs::Plugin->lookup('realmsync');
>> +	    my $opts = $plugin->check_config($id, $param, 0, 1);
>> +
>> +	    my $job = $jobs->{ids}->{$id};
>> +	    die "no such realmsync job\n" if !$job || $job->{type} ne 'realmsync';
>> +
>> +	    my $options = $plugin->options();
>> +	    foreach my $k (@$delete) {
>> +		my $d = $options->{$k} || die "no such option '$k'\n";
>> +		die "unable to delete required option '$k'\n" if !$d->{optional};
>> +		die "unable to delete fixed option '$k'\n" if $d->{fixed};
>> +		die "cannot set and delete property '$k' at the same time!\n"
>> +			if defined($opts->{$k});
> 
> don't we have that repeated a few times (didn't check actively, but it seems to
> ring some bells)? isn't this something some schema thingy would/could enforce?
> (also @wolfgang for the latter)

yes, AFAIR we do that (or some variation) for all section config CRUD api endpoints
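
a shared helper could probably fold most of that, rough sketch (name
made up):

sub assert_delete_ok {
    my ($options, $new_opts, $delete) = @_;
    for my $k (@$delete) {
        my $d = $options->{$k} or die "no such option '$k'\n";
        die "unable to delete required option '$k'\n" if !$d->{optional};
        die "unable to delete fixed option '$k'\n" if $d->{fixed};
        die "cannot set and delete property '$k' at the same time!\n"
            if defined($new_opts->{$k});
    }
}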

> 
>> +		delete $job->{$k};
>> +	    }
>> +
>> +	    my $schedule_updated = 0;
>> +	    if ($param->{schedule} ne $job->{schedule}) {
>> +		$schedule_updated = 1;
>> +	    }
> 
> erm, why not?:
> 
> my $schedule_updated = $param->{schedule} ne $job->{schedule};

good question ;)

> 
>> +
>> +	    foreach my $k (keys %$param) {
> 
> probably just a copy "error", but please: s/foreach/for/, or even:
> 
> $job->{$_} = $param->{$_} for keys $param->%*;

mhmm.. AFAIR i have not seen that pattern anywhere in our codebase yet, maybe we want
an example of that in our style guide? (for single-line loops i like it)

> 
>> +		$job->{$k} = $param->{$k};
>> +	    }
>> +
>> +	    if ($schedule_updated) {
>> +		PVE::Jobs::updated_job_schedule($id, 'realmsync');
>> +	    }
>> +
>> +	    cfs_write_file('jobs.cfg', $jobs);
>> +	    return;
>> +	};
>> +	cfs_lock_file('jobs.cfg', undef, $update_job);
> 
> could use a direct sub {} instead of named $code_ref like the first
> cfs_lock_file usage here does already.
> 
>> +	die "$@" if ($@);
>> +    }});
>> +
>> +
>> +__PACKAGE__->register_method({
>> +    name => 'delete_job',
>> +    path => '{id}',
>> +    method => 'DELETE',
>> +    description => "Delete domain-sync job definition.",
>> +    permissions => {
>> +	check => ['perm', '/', ['Sys.Modify']],
>> +    },
>> +    protected => 1,
>> +    parameters => {
>> +	additionalProperties => 0,
>> +	properties => {
>> +	    id => {
>> +		type => 'string',
>> +		format => 'pve-configid',
>> +	    },
>> +	},
>> +    },
>> +    returns => { type => 'null' },
>> +    code => sub {
>> +	my ($param) = @_;
>> +
>> +	my $id = $param->{id};
>> +
>> +	my $delete_job = sub {
>> +	    my $jobs = cfs_read_file('jobs.cfg');
>> +
>> +	    if (!defined($jobs->{ids}->{$id})) {
> 
> why not check the job type here (but on get/update)?

probably a copy&paste mistake
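
will change it to also check the type in a v2, roughly:

my $job = $jobs->{ids}->{$id};
raise_param_exc({ id => "No such job '$id'" })
    if !defined($job) || $job->{type} ne 'realmsync';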

> 
>> +		raise_param_exc({ id => "No such job '$id'" });
>> +	    }
>> +	    delete $jobs->{ids}->{$id};
>> +
>> +	    PVE::Jobs::remove_job($id, 'realmsync');
>> +
>> +	    cfs_write_file('jobs.cfg', $jobs);
>> +	    unlink "/etc/pve/priv/jobs/realmsync-$id.json";
>> +	};
>> +
>> +	cfs_lock_file('jobs.cfg', undef, $delete_job);
> 
> could use a direct sub {} instead of named $code_ref like the first
> cfs_lock_file usage here does already.
> 
>> +	die "$@" if $@;
>> +
>> +	return undef;
>> +    }});
>> +1;
>> diff --git a/src/PVE/API2/Makefile b/src/PVE/API2/Makefile
>> index 2817f48..9aeb047 100644
>> --- a/src/PVE/API2/Makefile
>> +++ b/src/PVE/API2/Makefile
>> @@ -2,6 +2,7 @@
>>   API2_SOURCES= 		 	\
>>   	AccessControl.pm 	\
>>   	Domains.pm	 	\
>> +	Domainsync.pm	 	\
>>   	ACL.pm		 	\
>>   	Role.pm		 	\
>>   	Group.pm	 	\
>> diff --git a/src/PVE/Jobs/Makefile b/src/PVE/Jobs/Makefile
>> new file mode 100644
>> index 0000000..9eed1b2
>> --- /dev/null
>> +++ b/src/PVE/Jobs/Makefile
>> @@ -0,0 +1,6 @@
>> +SOURCES=RealmSync.pm
>> +
>> +.PHONY: install
>> +install: ${SOURCES}
>> +	install -d -m 0755 ${DESTDIR}${PERLDIR}/PVE/Jobs
>> +	for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/Jobs/$$i; done
>> diff --git a/src/PVE/Jobs/RealmSync.pm b/src/PVE/Jobs/RealmSync.pm
>> new file mode 100644
>> index 0000000..f19f4a8
>> --- /dev/null
>> +++ b/src/PVE/Jobs/RealmSync.pm
>> @@ -0,0 +1,192 @@
>> +package PVE::Jobs::RealmSync;
> 
> Hmm, why not DomainSync, or why is the other module called Domainsync, no biggie but
> slightly odd

i can't really remember, but i guess i decided on a different name sometime during
development, and forgot to update all instances...

i'm not bound to any of these names, but i also would like to have it consistent
(so either all DomainSync or all RealmSync)

> 
>> +
>> +use strict;
>> +use warnings;
> 
> missing empty line between strict/warning and other use statements
> 
>> +use JSON;
>> +
>> +use PVE::API2::Domains;
>> +use PVE::JSONSchema qw(get_standard_option);
>> +use PVE::Cluster;
>> +use PVE::CalendarEvent;
>> +use PVE::Tools;
> 
> pls sort
> 
>> +
>> +use base qw(PVE::Jobs::Plugin);
> 
> isn't this in pve-manager? would add a cyclic dependency and so add to our (well,
> most of the time my ;P) bootstrap PITA.

we already talked off-list about that, so just for documentation purposes:
AFAIR @hannes wanted to move the jobs to common (or was it cluster?)
and i did not want to wait for that for this patch, so i sent it as is

i'd have to check how we'd have to move the packages around to avoid this
(easiest would probably be to have the basic plugin in common/cluster i guess)

> 
>> +
>> +sub type {
>> +    return 'realmsync';
>> +}
>> +
>> +my $props = get_standard_option('realm-sync-options', {
>> +    realm => get_standard_option('realm'),
>> +});
>> +
>> +sub properties {
>> +    return $props;
>> +}
>> +
>> +sub options {
>> +    my $options = {
>> +	enabled => { optional => 1 },
>> +	schedule => {},
>> +	comment => { optional => 1 },
>> +	node => { optional => 1 },
>> +	scope => {},
>> +    };
>> +    foreach my $opt (keys %$props) {
> 
> s/foreach/for/
> 
>> +	next if defined($options->{$opt});
>> +	if ($props->{$opt}->{optional}) {
>> +	    $options->{$opt} = { optional => 1 };
>> +	} else {
>> +	    $options->{$opt} = {};
>> +	}
>> +    }
>> +    $options->{realm}->{fixed} = 1;
>> +
>> +    return $options;
>> +}
>> +
>> +sub decode_value {
>> +    my ($class, $type, $key, $value) = @_;
>> +    return $value;
>> +}
>> +
>> +sub encode_value {
>> +    my ($class, $type, $key, $value) = @_;
>> +    return $value;
>> +}
>> +
>> +sub createSchema {
>> +    my ($class, $skip_type) = @_;
>> +    my $schema = $class->SUPER::createSchema($skip_type);
>> +
>> +    my $opts = $class->options();
>> +    for my $opt (keys $schema->{properties}->%*) {
>> +	next if defined($opts->{$opt}) || $opt eq 'id';
>> +	delete $schema->{properties}->{$opt};
>> +    }
>> +
>> +    # remove legacy props
>> +    delete $schema->{properties}->{full};
>> +    delete $schema->{properties}->{purge};
>> +
>> +    return $schema;
>> +}
>> +
>> +sub updateSchema {
>> +    my ($class, $skip_type) = @_;
>> +    my $schema = $class->SUPER::updateSchema($skip_type);
>> +
>> +    my $opts = $class->options();
>> +    for my $opt (keys $schema->{properties}->%*) {
>> +	next if defined($opts->{$opt});
>> +	next if $opt eq 'id' || $opt eq 'delete';
>> +	delete $schema->{properties}->{$opt};
>> +    }
>> +
>> +    # remove legacy props
>> +    delete $schema->{properties}->{full};
>> +    delete $schema->{properties}->{purge};
>> +
>> +    return $schema;
>> +}
>> +
>> +sub run {
>> +    my ($class, $conf, $id, $schedule) = @_;
>> +
>> +    my $node = $conf->{node};
>> +    foreach my $opt (keys %$conf) {
>> +	delete $conf->{$opt} if !defined($props->{$opt});
>> +    }
>> +
>> +    my $realm = $conf->{realm};
>> +    my $statedir = "/etc/pve/priv/jobs";
>> +    my $statefile = "$statedir/realmsync-$id.json";
>> +
>> +    if (!defined($node)) {
>> +	# cluster synced
>> +	my $curtime = time();
> 
> use $now if you want it short, IMO a bit better than somewhat randomly cut-off
> and directly concatenated words.
> 
>> +	my $nodename = PVE::INotify::nodename();
>> +	PVE::Cluster::cfs_update();
> 
> doesn't the job infra does this for us?

no, we also have it in the VZDump plugin, but you have a point:
putting it before 'run()' in PVE::Jobs probably makes more sense
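
i.e. roughly, where PVE::Jobs starts the job:

PVE::Cluster::cfs_update(); # refresh the cluster state once, centrally
my $upid = $plugin->run($cfg, $id, $schedule);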

> 
>> +	# check statefile in pmxcfs if we should start
>> +	my $shouldrun = PVE::Cluster::cfs_lock_domain($realm, undef, sub {
>> +	    mkdir $statedir;
>> +	    my $raw = eval { PVE::Tools::file_get_contents($statefile) } // '';
>> +	    my $members = PVE::Cluster::get_members();
>> +
>> +	    my $state = ($raw =~ m/^(\{.*\})$/) ? decode_json($1) : {};
>> +	    my $lastnode = $state->{node} // $nodename;
>> +	    my $lastupid = $state->{upid};
>> +	    my $lasttime = $state->{time};
>> +
>> +	    my $node_online = 0;
>> +	    if ($lastnode ne $nodename) {
>> +		if (defined(my $member = $members->{$lastnode})) {
>> +		    $node_online = $member->{online};
>> +		}
>> +	    } else {
>> +		$node_online = 1;
>> +	    }
>> +
>> +	    if (defined($lastupid)) {
>> +		# first check if the next run is scheduled
>> +		if (my $parsed = PVE::Tools::upid_decode($lastupid, 1)) {
>> +		    my $calspec = PVE::CalendarEvent::parse_calendar_event($schedule);
>> +		    my $next_sync = PVE::CalendarEvent::compute_next_event($calspec, $parsed->{starttime});
>> +		    return 0 if !defined($next_sync) || $curtime < $next_sync; # not yet its (next) turn
>> +		}
>> +		# check if still running and node is online
>> +		my $tasks = PVE::Cluster::get_tasklist();
>> +		for my $task (@$tasks) {
>> +		    next if $task->{upid} ne $lastupid;
>> +		    last if defined($task->{endtime}); # it's already finished
>> +		    last if !$node_online; # it's not finished but the node is offline
>> +		    return 0; # not finished and online
>> +		}
>> +	    } elsif (defined($lasttime) && ($lasttime+60) > $curtime && $node_online) {
>> +		# another node started in the last 60 seconds and is online
>> +		return 0;
>> +	    }
>> +
>> +	    # any of the following coniditions should be true here:
> 
> s/coniditions/conditions/
> 
>> +	    # * it was started on another node but that node is offline now
>> +	    # * it was started but either too long ago, or with an error
>> +	    # * the started task finished
>> +
>> +	    PVE::Tools::file_set_contents($statefile, encode_json({
>> +		node => $nodename,
>> +		time => $curtime,
>> +	    }));
>> +	    return 1;
>> +	});
>> +	die $@ if $@;
>> +
>> +	if ($shouldrun) {
>> +	    my $err = undef;
> 
> drop that and..
> 
>> +	    my $upid = eval { PVE::API2::Domains->sync($conf) };
>> +	    $err = $@ if $@;
> 
> just save the error with:
> 
> my $err = $@;
> 
>> +	    PVE::Cluster::cfs_lock_domain($realm, undef, sub {
>> +		if ($err && !$upid) {
>> +		    PVE::Tools::file_set_contents($statefile, encode_json({
>> +			node => $nodename,
>> +			time => $curtime,
>> +			error => $err,
>> +		    }));
>> +		    die "$err\n";
>> +		}
>> +
>> +		PVE::Tools::file_set_contents($statefile, encode_json({
>> +		    node => $nodename,
>> +		    upid => $upid,
>> +		}));
>> +	    });
>> +	    die $@ if $@;
>> +	    return $upid;
>> +	} else {
>> +	    return "OK";
>> +	}
>> +    }
>> +
>> +    return PVE::API2::Domains->sync($conf);
>> +}
>> +
>> +1;
>> diff --git a/src/PVE/Makefile b/src/PVE/Makefile
>> index c839d8f..dfc5314 100644
>> --- a/src/PVE/Makefile
>> +++ b/src/PVE/Makefile
>> @@ -8,3 +8,4 @@ install:
>>   	install -D -m 0644 TokenConfig.pm ${DESTDIR}${PERLDIR}/PVE/TokenConfig.pm
>>   	make -C API2 install
>>   	make -C CLI install
>> +	make -C Jobs install
> 





^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs
  2022-11-08  8:20     ` Dominik Csapak
@ 2022-11-08  9:24       ` Thomas Lamprecht
  2022-11-08 10:36         ` Wolfgang Bumiller
  0 siblings, 1 reply; 14+ messages in thread
From: Thomas Lamprecht @ 2022-11-08  9:24 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox VE development discussion; +Cc: Wolfgang Bumiller

Am 08/11/2022 um 09:20 schrieb Dominik Csapak:
>> is this outdated due to being from april? or do we really share the
>> ID namespace between all plugin types?
> 
> that's normal for section configs, but usually we don't notice it in pve
> since we normally only have a single endpoint for listing all objects
> and not one per 'type'
> (e.g. you can't have 2 storages with the same id but different types
> either)

@wolfgang didn't you improve on that somewhere by some xyz-id prefix or the
like?

>>> +        foreach my $k (keys %$param) {
>>
>> probably just a copy "error", but please: s/foreach/for/, or even:
>>
>> $job->{$_} = $param->{$_} for keys $param->%*;
> 
> mhmm.. AFAIR i have not seen that pattern anywhere in our codebase yet, maybe we want
> an example of that in our style guide? (for single-line loops i like it)

$ grep -r '\$_.*for ' /usr/share/perl5/PVE
/usr/share/perl5/PVE/API2.pm:   $res->{$_} = $version_info->{$_} for qw(version release repoid);
/usr/share/perl5/PVE/CLI/pve6to7.pm:    $total += $_ for values %$counters;
/usr/share/perl5/PVE/API2/Ceph/Pools.pm:        delete $options->{$_}->{default} for keys %$options;
/usr/share/perl5/PVE/API2/Ceph/MDS.pm:      $mds_hash->{$name}->{$_} = $d->{$_} for keys %$d;
/usr/share/perl5/PVE/API2/ClusterConfig.pm:             $err_hash->{"${type}$_"} = $arr[$_] for 0..$#arr;
/usr/share/perl5/PVE/API2/Qemu.pm:                  $conf->{$_} = $created_opts->{$_} for keys $created_opts->%*;
/usr/share/perl5/PVE/API2/Qemu.pm:                  $conf->{pending}->{$_} = $created_opts->{$_} for keys $created_opts->%*;
/usr/share/perl5/PVE/API2/Disks/ZFS.pm:         eval { PVE::Diskmanage::wipe_blockdev($_) for $to_wipe->@*; };
/usr/share/perl5/PVE/API2/Cluster.pm:       $conf->{$_} = $param->{$_} for keys $param->%*;
/usr/share/perl5/PVE/API2/Cluster.pm:       delete $conf->{$_} for PVE::Tools::split_list($delete);
/usr/share/perl5/PVE/API2/Network.pm:           $ifaces->{$_} = $vnets->{$_} for keys $vnets->%*
/usr/share/perl5/PVE/API2/LXC.pm:                       $rpcenv->warn($errors->{$_}) for keys $errors->%*;
/usr/share/perl5/PVE/Service/pvescheduler.pm:   $old_workers .= "$type:$_;" for keys $worker->%*;
/usr/share/perl5/PVE/VZDump.pm:delete $confdesc_for_defaults->{$_}->{requires} for qw(notes-template protected);
/usr/share/perl5/PVE/VZDump.pm: delete $opts->{$_} for qw(notes-template protected);
/usr/share/perl5/PVE/VZDump/LXC.pm:     $task->{size} += $_->{size} for @$res;
/usr/share/perl5/PVE/Diskmanage.pm:     $_ =~ s|cciss/|cciss!| for @$disks;
/usr/share/perl5/PVE/Diskmanage.pm:         $disklist->{$_} = $partitions->{$_} for keys %{$partitions};
/usr/share/perl5/PVE/INotify.pm:            $ifaces->{$_}->{autostart} = 1 for split (/\s+/, $2);
/usr/share/perl5/PVE/INotify.pm:            $ifaces->{$_}->{autostart} = 1 for split (/\s+/, $2);
/usr/share/perl5/PVE/INotify.pm:            $d->{"$_$suffix"} = $f->{$_} for keys $f->%*;
/usr/share/perl5/PVE/QemuServer/ImportDisk.pm:          warn "hotplugging imported disk '$_' failed: $errors->{$_}\n" for keys %$errors;
/usr/share/perl5/PVE/ACME.pm:    $self->{$_} = $data->{$_} for @SAVED_VALUES;
/usr/share/perl5/PVE/ACME.pm:   print("(meta): $_ : $meta->{$_}\n") for sort keys %$meta;
/usr/share/perl5/PVE/ACME.pm:    print("$_ : $methods->{$_}\n") for sort grep {$_ ne 'meta'} keys %$methods;
/usr/share/perl5/PVE/DAB.pm:    $exclude->{$_} = 1 for split(',', $opts->{exclude});
/usr/share/perl5/PVE/Tools.pm:    check_mail_addr($_) for $mailto->@*;
/usr/share/perl5/PVE/QemuMigrate.pm:    delete $conf->{$_} for keys %$target_drives;

>>> +package PVE::Jobs::RealmSync;
>>
>> Hmm, why not DomainSync, or why is the other module called Domainsync, no biggie but
>> slightly odd
> 
> i can't really remember, but i guess i decided on a different name sometime during
> development, and forgot to update all instances...
> 
> i'm not bound to any of these names, but i also would like to have it consistent
> (so either all DomainSync or all RealmSync)

let's do Realm, matches the frontend and in the PVE/internet world a bit less overused.






^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs
  2022-11-08  9:24       ` Thomas Lamprecht
@ 2022-11-08 10:36         ` Wolfgang Bumiller
  0 siblings, 0 replies; 14+ messages in thread
From: Wolfgang Bumiller @ 2022-11-08 10:36 UTC (permalink / raw)
  To: Thomas Lamprecht; +Cc: Dominik Csapak, Proxmox VE development discussion

On Tue, Nov 08, 2022 at 10:24:50AM +0100, Thomas Lamprecht wrote:
> Am 08/11/2022 um 09:20 schrieb Dominik Csapak:
> >> is this outdated due to being from april? or do we really share the
> >> ID namespace between all plugin types?
> > 
> > that's normal for section configs, but usually we don't notice it in pve
> > since we normally only have a single endpoint for listing all objects
> > and not one per 'type'
> > (e.g. you can't have 2 storages with the same id but different types
> > either)
> 
> @wolfgang didn't you improve on that somewhere by some xyz-id prefix or the
> like?

I don't think so, not for the jobs api anyway. There may have been
off-list discussions?

If we really want to keep a single `jobs.cfg` for multiple types, maybe
we should also forbid accessing the jobs directly via
`$jobs->{ids}->{$id}` and require going through accessors that take a
type parameter.
(Maybe bless the $jobs hash into a dedicated package for convenience, so
we can do $jobs->get($id, $type).)
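
Rough sketch of what I mean (package name made up):

package PVE::Jobs::Config;

sub get {
    my ($self, $id, $type) = @_;
    my $job = $self->{ids}->{$id};
    return undef if !defined($job) || $job->{type} ne $type;
    return $job;
}

# the reading side would then bless the parsed config, e.g.:
#   my $jobs = bless(cfs_read_file('jobs.cfg'), 'PVE::Jobs::Config');
#   my $job = $jobs->get($param->{id}, 'realmsync');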

> 
> >>> +        foreach my $k (keys %$param) {
> >>
> >> probably just a copy "error", but please: s/foreach/for/, or even:
> >>
> >> $job->{$_} = $param->{$_} for keys $param->%*;
> > 
> > mhmm.. AFAIR i have not seen that pattern anywhere in our codebase yet, maybe we want
> > an example of that in our style guide? (for single-line loops i like it)

The style guide actually already specifies a preference for `for` over
`foreach` ;-)




^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2022-11-08 10:36 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-04  8:54 [pve-devel] [PATCH access-control/manager] add realm sync jobs Dominik Csapak
2022-04-04  8:54 ` [pve-devel] [PATCH access-control 1/1] add realmsync plugin for jobs and CRUD api for domainsync-jobs Dominik Csapak
2022-11-07 17:05   ` Thomas Lamprecht
2022-11-08  8:20     ` Dominik Csapak
2022-11-08  9:24       ` Thomas Lamprecht
2022-11-08 10:36         ` Wolfgang Bumiller
2022-04-04  8:54 ` [pve-devel] [PATCH manager 1/4] Jobs: provide id and schedule to the job Dominik Csapak
2022-11-07 16:14   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-04  8:54 ` [pve-devel] [PATCH manager 2/4] Jobs/Plugin: remove 'vzdump' from id description Dominik Csapak
2022-11-07 16:14   ` [pve-devel] applied: " Thomas Lamprecht
2022-04-04  8:54 ` [pve-devel] [PATCH manager 3/4] Jobs: add RealmSync Plugin and register it Dominik Csapak
2022-11-07 16:14   ` [pve-devel] applied: " Thomas Lamprecht
2022-11-07 16:15     ` Thomas Lamprecht
2022-04-04  8:54 ` [pve-devel] [PATCH manager 4/4] ui: add Realm Sync panel Dominik Csapak
