Date: Mon, 17 Feb 2025 16:36:28 +0100
From: Fabian Grünbichler <f.gruenbichler@proxmox.com>
To: Proxmox Backup Server development discussion <pbs-devel@lists.proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup 2/2] fix #5982: garbage
 collection: check atime updates are honored
Message-Id: <1739806100.hejephfmgd.astroid@yuna.none>
In-Reply-To: <20250217131208.265219-3-c.ebner@proxmox.com>
References: <20250217131208.265219-1-c.ebner@proxmox.com>
 <20250217131208.265219-3-c.ebner@proxmox.com>

On February 17, 2025 2:12 pm, Christian Ebner wrote:
> Check if the filesystem the chunk store is located on actually
> updates the atime when performing the marking of the chunks in
> phase 1 of the garbage collection. It is not enough to check whether
> a single/first chunk's atime is updated, since the filesystem can be
> mounted with the `relatime` option; instead, find the first chunk
> outside relatime's 24 hour cutoff window and check the update on
> that chunk only.
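(for context, the cutoff described above mirrors relatime semantics;
roughly sketched below - illustrative only, the function name is made up
and this is not the patch's actual code:)

// With `relatime`, a plain read updates the stored atime only if it is
// older than ~24 hours (or older than mtime/ctime). A chunk whose atime
// already predates this cutoff must therefore see its atime change when
// GC explicitly touches it, making it a reliable probe for whether the
// filesystem honors atime updates at all.
fn needs_atime_probe(chunk_atime_epoch: i64, phase1_start_time: i64) -> bool {
    let relatime_cutoff = phase1_start_time - 24 * 3600; // 24h grace period
    chunk_atime_epoch < relatime_cutoff
}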
given that our touching should punch through relatime (and does so on all
filesystems we tested so far), couldn't we just (rough sketch after this
list):

- stat the first chunk
- touch the first chunk
- check if timestamps have been updated
- print a warning about the filesystem being potentially broken, and
  - if the option is enabled, suggest the user report the details to us
  - only continue if the option is explicitly disabled

that way we should get a real-world survey of broken filesystems that
could inform our decision to drop the 24h window (or keep it)..

if we introduce an option (defaulting to yes for now) conditionalizing
the 24h window, we could even tell users with semi-broken storages (see
below) to explicitly set that option in case we later switch the default,
although I am not sure whether such storages exist for real.

the only downside would be a potential slew of reports if we missed some
prominent filesystem that applies relatime semantics to explicit
timestamp updates (any prominent storage ignoring explicit timestamp
updates altogether would have already cropped up in our support channels
after causing fatal data loss, and we only had a handful of such reports
so far, usually involving proprietary storage appliances).
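a minimal sketch of that probe, using only std APIs - the helper name and
the bare-path interface are assumptions for illustration, the real code
would go through cond_touch_chunk() and the datastore config instead:

use std::fs::{self, File, FileTimes};
use std::path::Path;
use std::time::SystemTime;

use anyhow::{bail, Error};

// Hypothetical probe: stat the first chunk, explicitly set its atime,
// and verify the filesystem kept the update. An explicit update via
// utimensat()/futimens() should punch through `relatime`, unlike the
// implicit atime update performed by a plain read.
fn probe_atime_support(first_chunk: &Path) -> Result<(), Error> {
    let atime_before = fs::metadata(first_chunk)?.accessed()?;

    // explicit timestamp update, analogous to what cond_touch_chunk() does
    let file = File::open(first_chunk)?;
    file.set_times(FileTimes::new().set_accessed(SystemTime::now()))?;

    let atime_after = fs::metadata(first_chunk)?.accessed()?;
    if atime_after <= atime_before {
        // here the real implementation could warn (and, with the option
        // enabled, suggest reporting the filesystem details to us), only
        // continuing if the check was explicitly disabled
        bail!("filesystem does not honor explicit atime updates");
    }
    Ok(())
}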
>
> Allow to disable the atime update checks via the optional datastore
> tuning parameter, but enable them by default.
>
> Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5982
> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
> ---
>  pbs-datastore/src/datastore.rs | 50 +++++++++++++++++++++++++++++++---
>  1 file changed, 46 insertions(+), 4 deletions(-)
>
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index 75c0c16ab..9aa509e27 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -4,6 +4,7 @@ use std::os::unix::ffi::OsStrExt;
>  use std::os::unix::io::AsRawFd;
>  use std::path::{Path, PathBuf};
>  use std::sync::{Arc, LazyLock, Mutex};
> +use std::time::UNIX_EPOCH;
>
>  use anyhow::{bail, format_err, Error};
>  use nix::unistd::{unlinkat, UnlinkatFlags};
> @@ -1029,13 +1030,15 @@ impl DataStore {
>          Ok(list)
>      }
>
> -    // mark chunks used by ``index`` as used
> +    // mark chunks used by ``index`` as used,
> +    // fail if the chunks atime is not updated as expected by the filesystem.
>      fn index_mark_used_chunks<I: IndexFile>(
>          &self,
>          index: I,
>          file_name: &Path, // only used for error reporting
>          status: &mut GarbageCollectionStatus,
>          worker: &dyn WorkerTaskContext,
> +        min_atime: &mut Option<i64>,
>      ) -> Result<(), Error> {
>          status.index_file_count += 1;
>          status.index_data_bytes += index.index_bytes();
> @@ -1044,6 +1047,20 @@ impl DataStore {
>              worker.check_abort()?;
>              worker.fail_on_shutdown()?;
>              let digest = index.index_digest(pos).unwrap();
> +
> +            // Check if the chunk store actually honors `atime` updates by checking the first chunk
> +            // outside of the cutoff range. If atime is not honored garbage collection must fail,
> +            // as otherwise chunks still in use can be lost.
> +            let mut old_atime = None;
> +            if let Some(min_atime) = min_atime {
> +                let metadata = self.stat_chunk(digest)?;
> +                let atime = metadata.accessed()?;
> +                let atime_epoch: i64 = atime.duration_since(UNIX_EPOCH)?.as_secs().try_into()?;
> +                if atime_epoch < *min_atime {
> +                    old_atime = Some(atime)
> +                }
> +            }
> +
>              if !self.inner.chunk_store.cond_touch_chunk(digest, false)? {
>                  let hex = hex::encode(digest);
>                  warn!(
> @@ -1061,6 +1078,19 @@ impl DataStore {
>                      self.inner.chunk_store.cond_touch_path(&bad_path, false)?;
>                  }
>              }
> +
> +            // `atime` was outside range for this chunk, check that it has been updated.
> +            if let Some(old_atime) = old_atime {
> +                let metadata = self.stat_chunk(digest)?;
> +                let atime = metadata.accessed()?;
> +                if old_atime < atime {
> +                    // atime was updated, do not check further chunks
> +                    *min_atime = None;
> +                } else {
> +                    let hex = hex::encode(digest);
> +                    bail!("atime not updated for chunk {hex}, is atime support enabled?");
> +                }
> +            }
>          }
>          Ok(())
>      }
> @@ -1069,6 +1099,7 @@ impl DataStore {
>          &self,
>          status: &mut GarbageCollectionStatus,
>          worker: &dyn WorkerTaskContext,
> +        min_atime: &mut Option<i64>,
>      ) -> Result<(), Error> {
>          let image_list = self.list_images()?;
>          let image_count = image_list.len();
> @@ -1097,12 +1128,12 @@ impl DataStore {
>                  let index = FixedIndexReader::new(file).map_err(|e| {
>                      format_err!("can't read index '{}' - {}", img.to_string_lossy(), e)
>                  })?;
> -                self.index_mark_used_chunks(index, &img, status, worker)?;
> +                self.index_mark_used_chunks(index, &img, status, worker, min_atime)?;
>              } else if archive_type == ArchiveType::DynamicIndex {
>                  let index = DynamicIndexReader::new(file).map_err(|e| {
>                      format_err!("can't read index '{}' - {}", img.to_string_lossy(), e)
>                  })?;
> -                self.index_mark_used_chunks(index, &img, status, worker, min_atime)?;
>              }
>          }
>      }
> @@ -1170,10 +1201,21 @@ impl DataStore {
>              upid: Some(upid.to_string()),
>              ..Default::default()
>          };
> +        let tuning: DatastoreTuning = serde_json::from_value(
> +            DatastoreTuning::API_SCHEMA
> +                .parse_property_string(gc_store_config.tuning.as_deref().unwrap_or(""))?,
> +        )?;
> +        let mut atime_check_ref = if tuning.check_gc_atime.unwrap_or(true) {
> +            // Cutoff atime including a 24h grace period for filesystems mounted with
> +            // `relatime` option.
> +            Some(phase1_start_time - 3600 * 24)
> +        } else {
> +            None
> +        };
>
>          info!("Start GC phase1 (mark used chunks)");
>
> -        self.mark_used_chunks(&mut gc_status, worker)?;
> +        self.mark_used_chunks(&mut gc_status, worker, &mut atime_check_ref)?;
>
>          info!("Start GC phase2 (sweep unused chunks)");
>          self.inner.chunk_store.sweep_unused_chunks(
> --
> 2.39.5

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel