public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH cluster/manager] add scheduling daemon for pvesr + vzdump (and more)
@ 2021-10-07  8:27 Dominik Csapak
  2021-10-07  8:27 ` [pve-devel] [PATCH cluster 1/1] add 'jobs.cfg' to observed files Dominik Csapak
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Dominik Csapak @ 2021-10-07  8:27 UTC (permalink / raw)
  To: pve-devel

With this series, we implement a new daemon (pvescheduler) that takes
over from pvesr's systemd timer (original patch from Thomas [0]) and
extends it with a generic job handling mechanism.
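
The core idea, a persistent daemon that periodically checks for due jobs
instead of a one-shot systemd timer, can be sketched roughly as follows.
This is purely illustrative Python, not code from the series; the real
daemon is Perl and all names here are made up:

```python
def run_due_jobs(jobs, now):
    """Start every job whose next scheduled run is due; return how many ran."""
    started = 0
    for job in jobs:
        if job["next_run"] <= now:
            job["run"]()                      # the real daemon forks a worker here
            job["next_run"] = now + job["interval"]
            started += 1
    return started

# tiny demo: one job due immediately, one due far in the future
ran = []
jobs = [
    {"next_run": 0,   "interval": 60, "run": lambda: ran.append("backup")},
    {"next_run": 999, "interval": 60, "run": lambda: ran.append("sync")},
]
run_due_jobs(jobs, now=10)
```

In the daemon this loop would sleep until the next minute boundary and
repeat, with each due job handed off to a forked worker.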

Then I convert the vzdump cron jobs to these jobs; the immediate
gain is that users can use calendar-event schedules instead of
dow + starttime.
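
To illustrate the schedule change: the old cron entries pair a day-of-week
list with a start time, while a calendar-event schedule expresses both in
one string. A hypothetical conversion helper (not from the series, and the
exact format handling in PVE may differ) could look like:

```python
def cron_to_calendar_event(dow, starttime):
    """Translate an old vzdump dow+starttime pair into a calendar-event
    style schedule string; an empty dow list means 'every day'."""
    return f"{dow} {starttime}" if dow else starttime
```

Calendar events are strictly more expressive, e.g. multiple times per day
or minute-level granularity, which dow + starttime cannot represent.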

For now, this 'jobs.cfg' only handles vzdump jobs, but it should be easily
extendable to other types of recurring jobs (auth realm sync, etc.).
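
The extensibility rests on a plugin mechanism: each job type registers an
implementation, and the scheduler dispatches by type. A minimal sketch of
that pattern (illustrative Python only; the series implements this in Perl
via PVE/Jobs/Plugin.pm, and these names are invented):

```python
PLUGINS = {}

def register(job_type):
    """Class decorator registering a job implementation under its type name."""
    def decorate(cls):
        PLUGINS[job_type] = cls
        return cls
    return decorate

@register("vzdump")
class VZDumpJob:
    def run(self, cfg):
        return f"would back up VMs on schedule {cfg['schedule']}"

def run_job(job_type, cfg):
    # look up the plugin for this job type and dispatch to it
    return PLUGINS[job_type]().run(cfg)
```

Adding a new recurring job type then only means registering another class,
without touching the scheduler itself.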

I also did not yet convert the replication jobs to this job system,
but that could probably be done without too much effort (though
I did not look too deeply into it).

Patch 2/7 in manager could probably be squashed into the first,
but since it does not only concern the new daemon, I left it
as a separate patch.

If some version of this gets applied, the further plan would be
to remove the vzdump.cron part completely with 8.0, but until then
we must at least list/parse it.

What's currently missing, but not too hard to add, is a calculated
'next-run' column in the GUI.

A few things that are probably worth discussing:
* Not sure if a state file per job is the way to go, or if we want
  to follow pvesr and use a single state file for all jobs.
  Since we only have a single entry point (most of the time),
  it shouldn't make much of a difference either way.
* The locking in general. I lock on every update of the state file,
  but cannot start the worker while locked, since those locks persist
  across the fork_worker call. I am sure there are ways around this, but
  I did not find an easy one. I also question whether we need that much
  locking, since we have that single starting point (we should
  still lock on create/update/delete).
* There is currently no way to handle scheduling on different nodes.
  Basically, each plugin is responsible for running on the correct node
  and doing nothing on the others (this works out for the vzdump API,
  since it gets the local VMs on each node).
* The auto-generation of IDs. It does not have to be a UUID, but it
  should prevent ID collisions when backup jobs are created in parallel
  (for the API; in the GUI the ID is enforced).
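
One cheap collision-resistant scheme for that last point would be a
timestamp plus a random suffix. A sketch under my own assumptions
(illustrative Python, not what the series does; the prefix and format
are invented):

```python
import secrets
import time

def generate_job_id(prefix="backup"):
    """Collision-resistant job id: creation timestamp plus a random suffix,
    so two jobs created in parallel (even within the same second) still
    get distinct ids without needing a full UUID."""
    return f"{prefix}-{int(time.time())}-{secrets.token_hex(4)}"
```

The timestamp keeps ids roughly sortable by creation time, while the
random suffix handles the parallel-creation case.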

0: https://lists.proxmox.com/pipermail/pve-devel/2018-April/031357.html

pve-cluster:

Dominik Csapak (1):
  add 'jobs.cfg' to observed files

 data/PVE/Cluster.pm | 1 +
 data/src/status.c   | 1 +
 2 files changed, 2 insertions(+)

pve-manager:

Dominik Csapak (6):
  postinst: use reload-or-restart instead of reload-or-try-restart
  api/backup: refactor string for all days
  add PVE/Jobs to handle VZDump jobs
  pvescheduler: run jobs from jobs.cfg
  api/backup: handle new vzdump jobs
  ui: dc/backup: show id+schedule instead of dow+starttime

Thomas Lamprecht (1):
  replace systemd timer with pvescheduler daemon

 PVE/API2/Backup.pm                 | 247 +++++++++++++++++++++++------
 PVE/API2/Cluster/BackupInfo.pm     |  10 ++
 PVE/Jobs.pm                        | 210 ++++++++++++++++++++++++
 PVE/Jobs/Makefile                  |  16 ++
 PVE/Jobs/Plugin.pm                 |  61 +++++++
 PVE/Jobs/VZDump.pm                 |  54 +++++++
 PVE/Makefile                       |   3 +-
 PVE/Service/Makefile               |   2 +-
 PVE/Service/pvescheduler.pm        | 117 ++++++++++++++
 bin/Makefile                       |   6 +-
 bin/pvescheduler                   |  28 ++++
 debian/postinst                    |   5 +-
 services/Makefile                  |   3 +-
 services/pvescheduler.service      |  16 ++
 services/pvesr.service             |   8 -
 services/pvesr.timer               |  12 --
 www/manager6/dc/Backup.js          |  47 +++---
 www/manager6/dc/BackupJobDetail.js |  10 +-
 18 files changed, 749 insertions(+), 106 deletions(-)
 create mode 100644 PVE/Jobs.pm
 create mode 100644 PVE/Jobs/Makefile
 create mode 100644 PVE/Jobs/Plugin.pm
 create mode 100644 PVE/Jobs/VZDump.pm
 create mode 100644 PVE/Service/pvescheduler.pm
 create mode 100755 bin/pvescheduler
 create mode 100644 services/pvescheduler.service
 delete mode 100644 services/pvesr.service
 delete mode 100644 services/pvesr.timer

-- 
2.30.2


Thread overview: 17+ messages
2021-10-07  8:27 [pve-devel] [PATCH cluster/manager] add scheduling daemon for pvesr + vzdump (and more) Dominik Csapak
2021-10-07  8:27 ` [pve-devel] [PATCH cluster 1/1] add 'jobs.cfg' to observed files Dominik Csapak
2021-10-07  8:27 ` [pve-devel] [PATCH manager 1/7] replace systemd timer with pvescheduler daemon Dominik Csapak
2021-10-29 12:05   ` Fabian Ebner
2021-11-02  9:26     ` Dominik Csapak
2021-10-07  8:27 ` [pve-devel] [PATCH manager 2/7] postinst: use reload-or-restart instead of reload-or-try-restart Dominik Csapak
2021-10-07  8:38   ` [pve-devel] applied: " Thomas Lamprecht
2021-10-07  8:27 ` [pve-devel] [PATCH manager 3/7] api/backup: refactor string for all days Dominik Csapak
2021-10-07  8:27 ` [pve-devel] [PATCH manager 4/7] add PVE/Jobs to handle VZDump jobs Dominik Csapak
2021-11-02 13:52   ` Fabian Ebner
2021-11-02 14:33     ` Dominik Csapak
2021-11-03  7:37       ` Fabian Ebner
2021-10-07  8:27 ` [pve-devel] [PATCH manager 5/7] pvescheduler: run jobs from jobs.cfg Dominik Csapak
2021-10-07  8:27 ` [pve-devel] [PATCH manager 6/7] api/backup: handle new vzdump jobs Dominik Csapak
2021-11-03  9:05   ` Fabian Ebner
2021-10-07  8:27 ` [pve-devel] [PATCH manager 7/7] ui: dc/backup: show id+schedule instead of dow+starttime Dominik Csapak
2021-11-03  9:21   ` Fabian Ebner
