Message-ID: <90ed66ef-b03e-f6fe-fcb7-ce075f1a176b@proxmox.com>
Date: Fri, 3 Mar 2023 16:33:14 +0100
From: Thomas Lamprecht
To: Proxmox VE development discussion, Leo Nunner
References: <20230209092705.29496-1-l.nunner@proxmox.com> <20230209092705.29496-3-l.nunner@proxmox.com>
In-Reply-To: <20230209092705.29496-3-l.nunner@proxmox.com>
Subject: Re: [pve-devel] [PATCH cluster] fix #4234: vzdump: add cluster-wide configuration

On 09/02/2023 at 10:27, Leo Nunner wrote:
> Introduce a cluster-wide vzdump.conf file which gets filled with the
> default vzdump configuration.
>
> Signed-off-by: Leo Nunner
> ---
>
>  data/PVE/Cluster.pm       |  1 +
>  data/PVE/Cluster/Setup.pm | 32 +++++++++++++++++++++++++++++---
>  data/src/status.c         |  1 +
>  3 files changed, 31 insertions(+), 3 deletions(-)
>
> diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
> index 0154aae..efca58f 100644
> --- a/data/PVE/Cluster.pm
> +++ b/data/PVE/Cluster.pm
> @@ -45,6 +45,7 @@ my $dbbackupdir = "/var/lib/pve-cluster/backup";
>  # using a computed version and only those can be used by the cfs_*_file methods
>  my $observed = {
>      'vzdump.cron' => 1,
> +    'vzdump.conf' => 1,
>      'jobs.cfg' => 1,
>      'storage.cfg' => 1,
>      'datacenter.cfg' => 1,
> diff --git a/data/PVE/Cluster/Setup.pm b/data/PVE/Cluster/Setup.pm
> index 108817e..061fe08 100644
> --- a/data/PVE/Cluster/Setup.pm
> +++ b/data/PVE/Cluster/Setup.pm
> @@ -579,6 +579,28 @@ PATH="/usr/sbin:/usr/bin:/sbin:/bin"
>
> __EOD
>
> +my $vzdump_conf_dummy = <<__EOD;
> +# vzdump default settings
> +# these are overruled by the node-specific configuration in /etc/vzdump.conf
> +
> +#tmpdir: DIR
> +#dumpdir: DIR
> +#storage: STORAGE_ID
> +#mode: snapshot|suspend|stop
> +#bwlimit: KBPS
> +#performance: max-workers=N
> +#ionice: PRI
> +#lockwait: MINUTES
> +#stopwait: MINUTES
> +#stdexcludes: BOOLEAN
> +#mailto: ADDRESSLIST
> +#prune-backups: keep-INTERVAL=N[,...]
> +#script: FILENAME
> +#exclude-path: PATHLIST
> +#pigz: N
> +#notes-template: {{guestname}}
> +__EOD
> +
>  sub gen_pve_vzdump_symlink {
>
>      my $filename = "/etc/pve/vzdump.cron";
> @@ -593,10 +615,14 @@ sub gen_pve_vzdump_symlink {
>
>  sub gen_pve_vzdump_files {
>
> -    my $filename = "/etc/pve/vzdump.cron";
> +    my $cron = "/etc/pve/vzdump.cron";
> +    my $conf = "/etc/pve/vzdump.conf";
> +
> +    PVE::Tools::file_set_contents($cron, $vzdump_cron_dummy)
> +        if ! -f $cron;

Not directly related, but shouldn't we drop writing out vzdump.cron now
that all vzdump backups are handled through jobs.cfg?

>
> -    PVE::Tools::file_set_contents($filename, $vzdump_cron_dummy)
> -        if ! -f $filename;
> +    PVE::Tools::file_set_contents($conf, $vzdump_conf_dummy)
> +        if ! -f $conf;

I'm not a fan of setting this up for the cluster too just because we have
it at the node level; users can read the docs instead.

Besides, the implementation as is is open to a TOCTOU race: the file could
have been written by another process, possibly on another node, between
the existence check and the write. So if we keep it, we'd need a cfs
domain lock, plus possibly a local flock (and that should be split out
into its own patch, as it's not related to registering the file with the
CFS). But I'd rather just omit it completely.
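As an aside, to illustrate the check-then-write race on a single node: the window exists because `-f $conf` and the subsequent write are two separate operations. Locally it can be closed by making the existence check and the creation one atomic syscall (open with O_CREAT|O_EXCL) instead of check-then-write. The sketch below is in Python purely for illustration (the actual code is Perl, and `write_default_once` is a made-up name); note that an atomic local create alone would not replace the cfs domain lock needed for proper cluster-wide coordination on pmxcfs.

```python
import os

def write_default_once(path: str, content: str) -> bool:
    """Create `path` with `content` only if it does not exist yet.

    O_EXCL makes os.open() fail with FileExistsError if the file
    already exists -- the existence check and the creation happen in
    a single syscall, so there is no window for another process to
    slip in between a separate check and write.
    """
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o640)
    except FileExistsError:
        return False  # somebody else already created the file
    with os.fdopen(fd, "w") as f:
        f.write(content)
    return True
```

A second caller then sees the file already present and leaves it untouched, whichever process won the race.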