From: Christian Ebner <c.ebner@proxmox.com>
To: Thomas Lamprecht <t.lamprecht@proxmox.com>, Proxmox Backup Server development discussion <pbs-devel@lists.proxmox.com>, Fabian Grünbichler <f.gruenbichler@proxmox.com>
Date: Tue, 18 Feb 2025 14:38:37 +0100
Message-ID: <d661fa00-85e5-447a-971d-61d0e703a2a8@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup 2/2] fix #5982: garbage collection: check atime updates are honored

On 2/18/25 14:31, Thomas Lamprecht wrote:
> On 2/18/25 13:39, Christian Ebner wrote:
>> On 2/18/25 12:53, Thomas Lamprecht wrote:
>>> +1; one (additional) option _might_ be to trigger such a check on
>>> datastore creation, e.g. create the all-zero chunk and then do that
>>> test. As of now that probably would not win us much, but if we make
>>> the 24h-wait opt-in then users would be warned early enough, or we
>>> could even auto-set that option in such a case.
>>
>> Only checking for atime updates on datastore creation is not enough, I
>> think, as the backing filesystem might get remounted with changed
>> mount parameters. Or do you mean to *also* check on datastore
>> creation, to detect issues early on?
>> Although, in my testing the atime update seems to be honored even
>> with `noatime`, given the way the garbage collection performs the
>> time updates (further details below).
>
> yes, I meant doing that additionally to checking on GC.
>
>> Anyway, creating the all-zero chunk and using that for the check
>> sounds like a good optimization to me, as that allows avoiding the
>> conditional checking in phase 1 of garbage collection. However, at
>> the cost of having to make sure that it is never cleaned up by
>> phase 2...
>
> I saw your second reply already, but even without that in mind it
> would IMO be fine to only use the all-zero chunk for the on-create
> check, as I would not see it as a big problem if it then gets pruned
> during GC, if the latter uses an actually existing chunk. But no hard
> feelings here at all either way.

I think using a 4M fixed-size chunk for both cases makes it even more
elegant, as one can then use the same logic for both the check on
datastore creation and the check on garbage collection. And to be
backwards compatible, this simply creates the zero chunk in both cases
if it does not exist, covering existing datastores as well.

>> Regarding the 24 hour waiting period: as mentioned above, I noted
>> that atime updates are honored even if I set `noatime` on ext4 or
>> `atime=off` on ZFS.
>> It seems that utimensat() bypasses this directly, as it calls into
>> vfs_utimes() [0], which marks this as an explicit time update,
>> followed by notify_change() [1], which then calls the setattr() of
>> the corresponding filesystem [2] via the given inode.
>> This bypasses atime_needs_update() [3], which is only called by
>> touch_atime(). atime_needs_update() also checks
>> relatime_needs_update() [4].
>>
>> Although not conclusive (yet).
>
> Yeah, that would support making this opt-in.
> FWIW, we could maybe "sell" this as a sort of feature by not just
> turning it into a boolean "24h-wait period <true|false>" option, but
> rather a more generic "wait-period <X hours>" option that defaults to
> 0 hours (or maybe a few minutes if we want to support minute
> granularity). Not sure if there are enough (real world) use cases to
> warrant this, so mostly mentioning it for the sake of completeness.

Yes, that sounds good to me!

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel