Subject: Re: [pve-devel] [PATCH cluster/manager v2] add scheduling daemon for pvesr + vzdump (and more)
From: Dylan Whyte
To: Proxmox VE development discussion, Dominik Csapak
Date: Tue, 9 Nov 2021 15:17:31 +0100
Message-ID: <3fa1c6e3-6ffe-19de-738b-6463c0407adb@proxmox.com>
In-Reply-To: <20211108130758.160914-1-d.csapak@proxmox.com>

Hi,

The patch set works as advertised and is very clean in terms of the switch from vzdump.cron to jobs.cfg.

One very minor inconvenience that I noticed: for vzdump.cron 'Pool based' backups, you can't simply select 'Edit -> OK' to convert a job to the new format; you first have to change some detail before the edit goes through. However, this can barely be considered an issue, as it only turns the conversion into a two-step process of 'make a minor edit, revert the minor edit'.

The IDs that are automatically generated for the old cron jobs are a bit ugly, and they all get shuffled each time a job is moved from vzdump.cron to jobs.cfg (as mentioned in the commit message). They are at least easy to change in the job config file, so I don't imagine anyone will keep these default IDs for very long. On that note, however: is an API call for changing the ID something we want to avoid, or could one be added?
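For reference, a converted entry in jobs.cfg ends up looking roughly like the following (an illustrative sketch only; the ID and property values are made up, and the exact set of properties may differ), with the auto-generated ID in the section header being the part that can currently only be changed by hand:

    vzdump: backup-7f329a3e
        schedule mon..fri 21:00
        storage local
        vmid 100,101
        enabled 1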
I can see users being annoyed by not being able to change it from the GUI, even if only for the initial conversion.

When entering the ID of a new job, it might also be helpful to show the allowed characters in a tooltip, and to keep the user from submitting bad IDs by disabling the "Create" button while the ID is invalid, as is done in the 'add storage' window. The "invalid ID" message is a bit vague in any case, so I would at least mention there which characters are allowed.

Other than this, I didn't find any problems.

Tested-By: Dylan Whyte

On 11/8/21 2:07 PM, Dominik Csapak wrote:
> with this series, we implement a new daemon (pvescheduler) that takes
> over from pvesr's systemd timer (original patch from thomas[0]) and
> extends it with a generic job handling mechanism
>
> then i convert the vzdump cron jobs to these jobs; the immediate
> gain is that users can use calendar-event schedules instead of
> dow + starttime
>
> for now, this 'jobs.cfg' only handles vzdump jobs, but it should be
> easily extendable to other types of recurring jobs (like auth realm
> sync, etc.)
>
> also, i did not yet convert the replication jobs to this job system,
> but that could probably be done without too much effort (though
> i did not look too deeply into it)
>
> if some version of this gets applied, the further plan would be
> to remove the vzdump.cron part completely with 8.0, but until then
> we must at least list/parse it
>
> what's currently missing, but not too hard to add, is a calculated
> 'next-run' column in the gui
>
> changes from v1:
> * do not log replication into the syslog
> * readjust the loop to start at the full minute every 1000 loops
> * rework job state locking/handling:
>   - i introduced a new 'starting' state that is set before we start
>     the job, and is set to 'started' after the start.
>     we sadly cannot start the job while we hold the lock, since the
>     open file descriptor would still be open in the worker, and then
>     we could not get the flock again. it is now modeled more after how
>     we do qm/ct long-running locks (by writing 'starting' into the
>     state while holding the lock)
>   - the stop check is now its own call at the beginning of the job
>     handling
>   - handle created/removed jobs properly:
>     i did not think of state handling on other nodes in my previous
>     iteration. now, on every loop, i sync the state files with the
>     config (create/remove) so that the file gets created/removed on
>     all nodes
> * incorporated fabian's feedback for the api (thanks!)
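Just to check my understanding of the reworked locking: the flow around the new 'starting' state would be roughly the following (an illustrative Perl sketch only; lock_job_state, update_job_state and run_worker are made-up placeholders, not the actual PVE::Jobs interface):

    sub start_job {
        my ($jobid) = @_;

        # phase 1: record 'starting' in the state file while holding the flock
        lock_job_state($jobid, sub {
            update_job_state($jobid, { state => 'starting' });
        });

        # phase 2: start the worker *without* holding the lock - the worker
        # would inherit the open file descriptor, so a later flock on the
        # state file could never be taken again
        my $upid = run_worker($jobid);

        # phase 3: re-acquire the lock and record the running worker
        lock_job_state($jobid, sub {
            update_job_state($jobid, { state => 'started', upid => $upid });
        });
    }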
>
> 0: https://lists.proxmox.com/pipermail/pve-devel/2018-April/031357.html
>
> pve-cluster:
>
> Dominik Csapak (1):
>   add 'jobs.cfg' to observed files
>
>  data/PVE/Cluster.pm | 1 +
>  data/src/status.c   | 1 +
>  2 files changed, 2 insertions(+)
>
> pve-manager:
>
> Dominik Csapak (5):
>   add PVE/Jobs to handle VZDump jobs
>   pvescheduler: run jobs from jobs.cfg
>   api/backup: refactor string for all days
>   api/backup: handle new vzdump jobs
>   ui: dc/backup: show id+schedule instead of dow+starttime
>
> Thomas Lamprecht (1):
>   replace systemd timer with pvescheduler daemon
>
>  PVE/API2/Backup.pm                 | 235 +++++++++++++++++++-----
>  PVE/API2/Cluster/BackupInfo.pm     |   9 +
>  PVE/Jobs.pm                        | 286 +++++++++++++++++++++++++++++
>  PVE/Jobs/Makefile                  |  16 ++
>  PVE/Jobs/Plugin.pm                 |  61 ++++++
>  PVE/Jobs/VZDump.pm                 |  54 ++++++
>  PVE/Makefile                       |   3 +-
>  PVE/Service/Makefile               |   2 +-
>  PVE/Service/pvescheduler.pm        | 131 +++++++++++++
>  bin/Makefile                       |   6 +-
>  bin/pvescheduler                   |  28 +++
>  debian/postinst                    |   3 +-
>  services/Makefile                  |   3 +-
>  services/pvescheduler.service      |  16 ++
>  services/pvesr.service             |   8 -
>  services/pvesr.timer               |  12 --
>  www/manager6/dc/Backup.js          |  46 +++--
>  www/manager6/dc/BackupJobDetail.js |  10 +-
>  18 files changed, 823 insertions(+), 106 deletions(-)
>  create mode 100644 PVE/Jobs.pm
>  create mode 100644 PVE/Jobs/Makefile
>  create mode 100644 PVE/Jobs/Plugin.pm
>  create mode 100644 PVE/Jobs/VZDump.pm
>  create mode 100755 PVE/Service/pvescheduler.pm
>  create mode 100755 bin/pvescheduler
>  create mode 100644 services/pvescheduler.service
>  delete mode 100644 services/pvesr.service
>  delete mode 100644 services/pvesr.timer
>
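One last note on the v1 -> v2 change of re-aligning the loop to the full minute: I assume the main loop now does something along these lines (a rough sketch with a made-up run_due_jobs() helper, not the actual pvescheduler code):

    my $count = 0;
    while (1) {
        run_due_jobs();    # made-up placeholder for the job handling

        my $sleep_time = 60;
        if (++$count >= 1000) {
            # every ~1000 iterations, sleep only until the next full
            # minute, so that accumulated drift gets corrected
            $sleep_time = 60 - (time() % 60);
            $count = 0;
        }
        sleep($sleep_time);
    }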