From: Dominik Csapak
To: pve-devel@lists.proxmox.com
Date: Thu, 7 Oct 2021 10:27:19 +0200
Message-Id: <20211007082727.1385888-1-d.csapak@proxmox.com>
Subject: [pve-devel] [PATCH cluster/manager] add scheduling daemon for pvesr + vzdump (and more)

with this series, we implement a new daemon (pvescheduler) that takes over
from pvesr's systemd timer (original patch from thomas[0]) and extends it
with a generic job handling mechanism

then i convert the vzdump cron jobs to these jobs; the immediate gain is
that users can use calendarevent schedules instead of dow + starttime

for now, this 'jobs.cfg' only handles vzdump jobs, but should be easily
extendable for other types of recurring jobs (like auth realm sync, etc.)
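
to make the schedule change a bit more concrete, a vzdump entry in jobs.cfg
could look roughly like the sketch below (the id and option names here are
only illustrative, reusing the usual vzdump options, and not necessarily the
exact keys this series uses; the schedule is meant to use the same
calendar-event syntax that pvesr already accepts):

    vzdump: backup-daily-example
            schedule mon..fri 02:30
            storage local
            vmid 100,101
            mode snapshot
            enabled 1

the equivalent vzdump.cron entry can only express dow + starttime, i.e.
something along the lines of:

    30 2 * * mon,tue,wed,thu,fri root vzdump 100 101 --storage local --mode snapshot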

also, i did not yet convert the replication jobs to this job system, but
that could probably be done without too much effort (though i did not look
too deeply into it)

patch 2/7 in manager could probably be squashed into the first, but since
it does not only concern the new daemon i left it as a separate patch

if some version of this gets applied, the further plan would be to remove
the vzdump.cron part completely with 8.0, but until then we must at least
list/parse that

what's currently missing but not too hard to add is a calculated 'next-run'
column in the gui

a few things that are probably discussion-worthy:

* not sure if a state file per job is the way to go, or if we want to go
  the direction of pvesr and use a single state file for all jobs. since we
  only have a single entry point (most of the time) for that, it should not
  make much of a difference either way
* the locking in general. i lock on every update of the state file, but
  cannot start the worker while locked, since those locks would stay held
  across the fork_worker call. i am sure there are ways around this, but
  did not find an easy one. i am also questioning whether we need that much
  locking at all, since we have that single point when we start (we should
  still lock on create/update/delete)
* there is currently no way to handle scheduling on different nodes.
  basically the plugin is responsible for running on the correct node and
  doing nothing on the others (this works out for the vzdump api, since it
  gets the local vms on each node)
* the auto generation of ids. it does not have to be a uuid, but should
  prevent id collisions when backup jobs are created in parallel (this is
  for the api; in the gui an id is enforced)

0: https://lists.proxmox.com/pipermail/pve-devel/2018-April/031357.html

pve-cluster:

Dominik Csapak (1):
  add 'jobs.cfg' to observed files

 data/PVE/Cluster.pm | 1 +
 data/src/status.c   | 1 +
 2 files changed, 2 insertions(+)

pve-manager:

Dominik Csapak (6):
  postinst: use reload-or-restart instead of reload-or-try-restart
  api/backup: refactor string for all days
  add PVE/Jobs to handle VZDump jobs
  pvescheduler: run jobs from jobs.cfg
  api/backup: handle new vzdump jobs
  ui: dc/backup: show id+schedule instead of dow+starttime

Thomas Lamprecht (1):
  replace systemd timer with pvescheduler daemon

 PVE/API2/Backup.pm                 | 247 +++++++++++++++++++++++------
 PVE/API2/Cluster/BackupInfo.pm     |  10 ++
 PVE/Jobs.pm                        | 210 ++++++++++++++++++++++++
 PVE/Jobs/Makefile                  |  16 ++
 PVE/Jobs/Plugin.pm                 |  61 +++++++
 PVE/Jobs/VZDump.pm                 |  54 +++++++
 PVE/Makefile                       |   3 +-
 PVE/Service/Makefile               |   2 +-
 PVE/Service/pvescheduler.pm        | 117 ++++++++++++++
 bin/Makefile                       |   6 +-
 bin/pvescheduler                   |  28 ++++
 debian/postinst                    |   5 +-
 services/Makefile                  |   3 +-
 services/pvescheduler.service      |  16 ++
 services/pvesr.service             |   8 -
 services/pvesr.timer               |  12 --
 www/manager6/dc/Backup.js          |  47 +++---
 www/manager6/dc/BackupJobDetail.js |  10 +-
 18 files changed, 749 insertions(+), 106 deletions(-)
 create mode 100644 PVE/Jobs.pm
 create mode 100644 PVE/Jobs/Makefile
 create mode 100644 PVE/Jobs/Plugin.pm
 create mode 100644 PVE/Jobs/VZDump.pm
 create mode 100644 PVE/Service/pvescheduler.pm
 create mode 100755 bin/pvescheduler
 create mode 100644 services/pvescheduler.service
 delete mode 100644 services/pvesr.service
 delete mode 100644 services/pvesr.timer

-- 
2.30.2