Subject: Re: [pve-devel] [PATCH cluster/manager v2] add scheduling daemon for pvesr + vzdump (and more)
From: Dominik Csapak
To: Dylan Whyte, Proxmox VE development discussion
Date: Wed, 10 Nov 2021 10:38:55 +0100

On 11/9/21 15:17, Dylan Whyte wrote:
> Hi,
>
> The patch set works as advertised and is very clean in terms of the
> switch from vzdump.cron to jobs.cfg.

thanks for testing!

> One very minor inconvenience that I saw was the fact that for
> vzdump.cron 'Pool based' backups, you can't simply select 'Edit->Ok'
> to convert it to the new format; rather, you need to change some
> detail to allow the edit. However, this could barely be considered an
> issue, as it just turns conversion into a two-step process of 'make
> minor edit, revert minor edit'.
>
> The IDs that are automatically generated for the old cron jobs are a
> bit ugly and all get shuffled each time a job is moved from
> vzdump.cron to jobs.cfg (as mentioned in the commit message), although
> they are at least easy to change from the job config file, so I don't
> imagine anyone using these default IDs for very long.
> On this note, however, is having an API call to change the ID
> something that we want to avoid, or can it be added? I can see users
> being annoyed by not being able to change it from the GUI, even if
> only for the initial conversion.

the cron job ids are basically the digest + line number, since we do
not have an id in the cron file.
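to illustrate the idea, roughly like this (a hypothetical sketch, not
the actual pve-manager code; the helper name and id format are made
up):

use strict;
use warnings;
use Digest::SHA qw(sha1_hex);

# hypothetical sketch: derive an auto-id from the digest of the whole
# vzdump.cron content plus the job's line number - any edit rewrites
# the file and changes the digest, which is why all auto-generated
# ids get shuffled at once
sub auto_job_id {
    my ($cron_content, $line_nr) = @_;
    my $digest = sha1_hex($cron_content);
    # shorten the digest so the id stays halfway readable
    return substr($digest, 0, 8) . "-$line_nr";
}

so an auto-id is only stable as long as the file content itself does
not change, which matches the shuffling you saw.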
changing an id is something we generally do not allow, be it for
storages/realms/etc. normally we tell the users to delete & recreate,
though i think the better solution here is to have an additional
'comment' field and leave the autogeneration of ids in place (and
maybe hide the id again from the gui?)

though, as you said, if you really want to change an id, editing the
file is not that bad (the statefiles get cleaned up/recreated by the
scheduler anyway)

the comment thing could be done as a follow-up though

> When entering the ID of new jobs, it might also be helpful to show
> the allowed characters in a tool-tip box and to prevent the user from
> trying to validate bad IDs by disabling the "Create" button while the
> ID is invalid, as is done with the 'add storage' window. The "invalid
> ID" message is a bit vague in any case, so I would at least mention
> what is allowed there.

yeah, i forgot to add the 'ConfigId' vtype to the field. i would also
send this as a follow-up, since it's only a single line

> Other than this, I didn't find any problems.
>
> Tested-By: Dylan Whyte

> On 11/8/21 2:07 PM, Dominik Csapak wrote:
>> with this series, we implement a new daemon (pvescheduler) that
>> takes over from pvesr's systemd timer (original patch from thomas
>> [0]) and extends it with a generic job handling mechanism
>>
>> then i convert the vzdump cron jobs to these jobs; the immediate
>> gain is that users can use calendarevent schedules instead of
>> dow + starttime
>>
>> for now, this 'jobs.cfg' only handles vzdump jobs, but it should be
>> easily extendable for other types of recurring jobs (like auth realm
>> sync, etc.)
>>
>> also, i did not yet convert the replication jobs to this job system,
>> but that could probably be done without too much effort (though i
>> did not look too deeply into it)
>>
>> if some version of this gets applied, the further plan would be to
>> remove the vzdump.cron part completely with 8.0, but until then we
>> must at least list/parse that
>>
>> what's currently missing, but not too hard to add, is a calculated
>> 'next-run' column in the gui
>>
>> changes from v1:
>> * do not log replication into the syslog
>> * readjust the loop to start at the full minute every 1000 loops
>> * rework the job state locking/handling:
>>    - i introduced a new 'starting' state that is set before we start
>>      the job, and we set it to 'started' after the start.
>>      we sadly cannot start the job while we hold the lock, since the
>>      open file descriptor would still be open in the worker, and
>>      then we could not get the flock again. now it's modeled more
>>      after how we do qm/ct long-running locks (by writing 'starting'
>>      locked into the state)
>>    - the stop check is now its own call at the beginning of the job
>>      handling
>>    - handle created/removed jobs properly:
>>      i did not think of state handling on other nodes in my previous
>>      iteration. now, on every loop, i sync the statefiles with the
>>      config (create/remove) so that the file gets created/removed on
>>      all nodes
>> * incorporated fabian's feedback for the api (thanks!)
>>
>> 0: https://lists.proxmox.com/pipermail/pve-devel/2018-April/031357.html
>>
>> pve-cluster:
>>
>> Dominik Csapak (1):
>>    add 'jobs.cfg' to observed files
>>
>>   data/PVE/Cluster.pm | 1 +
>>   data/src/status.c   | 1 +
>>   2 files changed, 2 insertions(+)
>>
>> pve-manager:
>>
>> Dominik Csapak (5):
>>    add PVE/Jobs to handle VZDump jobs
>>    pvescheduler: run jobs from jobs.cfg
>>    api/backup: refactor string for all days
>>    api/backup: handle new vzdump jobs
>>    ui: dc/backup: show id+schedule instead of dow+starttime
>>
>> Thomas Lamprecht (1):
>>    replace systemd timer with pvescheduler daemon
>>
>>   PVE/API2/Backup.pm                 | 235 +++++++++++++++++++-----
>>   PVE/API2/Cluster/BackupInfo.pm     |   9 +
>>   PVE/Jobs.pm                        | 286 +++++++++++++++++++++++++++++
>>   PVE/Jobs/Makefile                  |  16 ++
>>   PVE/Jobs/Plugin.pm                 |  61 ++++++
>>   PVE/Jobs/VZDump.pm                 |  54 ++++++
>>   PVE/Makefile                       |   3 +-
>>   PVE/Service/Makefile               |   2 +-
>>   PVE/Service/pvescheduler.pm        | 131 +++++++++++++
>>   bin/Makefile                       |   6 +-
>>   bin/pvescheduler                   |  28 +++
>>   debian/postinst                    |   3 +-
>>   services/Makefile                  |   3 +-
>>   services/pvescheduler.service      |  16 ++
>>   services/pvesr.service             |   8 -
>>   services/pvesr.timer               |  12 --
>>   www/manager6/dc/Backup.js          |  46 +++--
>>   www/manager6/dc/BackupJobDetail.js |  10 +-
>>   18 files changed, 823 insertions(+), 106 deletions(-)
>>   create mode 100644 PVE/Jobs.pm
>>   create mode 100644 PVE/Jobs/Makefile
>>   create mode 100644 PVE/Jobs/Plugin.pm
>>   create mode 100644 PVE/Jobs/VZDump.pm
>>   create mode 100755 PVE/Service/pvescheduler.pm
>>   create mode 100755 bin/pvescheduler
>>   create mode 100644 services/pvescheduler.service
>>   delete mode 100644 services/pvesr.service
>>   delete mode 100644 services/pvesr.timer
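ps: since the 'starting' state handling from the v2 changelog is a bit
subtle, here is a rough sketch of the intended flow (simplified,
hypothetical helper names and lock-file layout, not the actual
PVE::Jobs code):

use strict;
use warnings;
use Fcntl qw(:flock);

# hypothetical helper: persist the job state (the real code keeps
# per-job statefiles; this just writes a single line)
sub write_state {
    my ($statefile, $state) = @_;
    open(my $fh, '>', $statefile) or die "cannot write $statefile: $!\n";
    print $fh "$state\n";
    close($fh);
}

# we cannot fork the worker while holding the flock, since the open
# file descriptor would be inherited by the worker and we could never
# get the flock again - so 'starting' is written into the state
# itself, similar to the long-running qm/ct locks
sub start_job {
    my ($statefile, $start_worker) = @_;
    my $lockfile = "$statefile.lck"; # assumed separate lock file

    open(my $lock, '>', $lockfile) or die "cannot open $lockfile: $!\n";
    flock($lock, LOCK_EX) or die "cannot lock $lockfile: $!\n";
    write_state($statefile, 'starting');
    close($lock); # release the flock *before* forking the worker

    my $upid = $start_worker->(); # forks the actual job worker

    open($lock, '>', $lockfile) or die "cannot open $lockfile: $!\n";
    flock($lock, LOCK_EX) or die "cannot lock $lockfile: $!\n";
    write_state($statefile, "started:$upid");
    close($lock);
}

the flock is only held for the short state updates; the 'starting'
marker in the state file itself covers the window in which the worker
forks.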