From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 14 Mar 2022 10:36:17 +0100
From: Wolfgang Bumiller
To: Thomas Lamprecht
Cc: Proxmox Backup Server development discussion, Stefan Sterz
Message-ID: <20220314093617.n2mc2jv4k6ntzroo@wobu-vie.proxmox.com>
References: <20220309135031.1995207-1-s.sterz@proxmox.com> <717c8999-d3f8-a01b-a8f5-da0f5960d23f@proxmox.com>
In-Reply-To: <717c8999-d3f8-a01b-a8f5-da0f5960d23f@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup] fix #3336: api: remove backup group if the last snapshot is removed

On Fri, Mar 11, 2022 at 01:20:22PM +0100, Thomas Lamprecht wrote:
> On 09.03.22 14:50, Stefan Sterz wrote:
> > Signed-off-by: Stefan Sterz
> > ---
> >  pbs-datastore/src/datastore.rs | 22 ++++++++++++++++++++++
> >  1 file changed, 22 insertions(+)
> >
> > diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> > index d416c8d8..623b7688 100644
> > --- a/pbs-datastore/src/datastore.rs
> > +++ b/pbs-datastore/src/datastore.rs
> > @@ -346,6 +346,28 @@ impl DataStore {
> >              )
> >          })?;
> >
> > +        // check if this was the last snapshot and if so remove the group
> > +        if backup_dir
> > +            .group()
> > +            .list_backups(&self.base_path())?
> > +            .is_empty()
> > +        {
>
> a log::info could be appropriate in the "success" (i.e., delete dir) case.
>
> I'd factor the block below out into a non-pub (or pub(crate)) remove_empty_group_dir fn.
>
> > +            let group_path = self.group_path(backup_dir.group());
> > +            let _guard = proxmox_sys::fs::lock_dir_noblock(
> > +                &group_path,
> > +                "backup group",
> > +                "possible running backup",
> > +            )?;
> > +
> > +            std::fs::remove_dir_all(&group_path).map_err(|err| {
>
> this is still unsafe as there's a TOCTOU race; the lock does not protect you from the
> following sequence with two threads/async-executions t1 and t2:
>
> t1.1 snapshot deleted
> t1.2 empty-dir check holds up, entering the "delete group dir" code branch
> t2.1 create new snapshot in group -> lock group dir
> t2.2 finish new snapshot in group -> unlock group dir
> t1.3 lock group dir
> t1.4 delete all files, including the new snapshot made in-between
>
> Rather, just use the safer "remove_dir" variant; that way the TOCTOU race doesn't matter
> and the check merely becomes a shortcut. If we'd explicitly check for
> `err.kind() != ErrorKind::DirectoryNotEmpty` and silence it, we could even do away with
> the check. It should result in the same number of syscalls in the best case (one rmdir
> vs. one readdir) and can be better on success (readdir + rmdir vs. rmdir only), not that
> performance matters much in this case.
>
> fyi, "remove_backup_group", the place where I think you copied this part from, can use
> remove_dir_all safely because there's no check to be made there, so no TOCTOU.
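
For illustration, a rough sketch of what such a remove_dir-based remove_empty_group_dir
helper could look like. This is not the actual patch: the function name follows Thomas'
suggestion, the error handling is an assumption, and ErrorKind::DirectoryNotEmpty is
only available on newer Rust toolchains (older code would compare err.raw_os_error()
against libc::ENOTEMPTY instead):

    use std::io::ErrorKind;
    use std::path::Path;

    use anyhow::{format_err, Error};

    /// Sketch: remove a backup group directory, but only if it is (still) empty.
    ///
    /// Instead of checking first, rely on rmdir(2) failing with ENOTEMPTY, so a
    /// snapshot created concurrently simply turns the removal into a no-op.
    fn remove_empty_group_dir(group_path: &Path) -> Result<(), Error> {
        let _guard = proxmox_sys::fs::lock_dir_noblock(
            group_path,
            "backup group",
            "possible running backup",
        )?;

        match std::fs::remove_dir(group_path) {
            Ok(()) => {
                log::info!("removed empty backup group {:?}", group_path);
                Ok(())
            }
            // the group gained a new snapshot in the meantime - nothing to do
            Err(err) if err.kind() == ErrorKind::DirectoryNotEmpty => Ok(()),
            Err(err) => Err(format_err!(
                "removing backup group directory {:?} failed - {}",
                group_path, err
            )),
        }
    }

In the real code this would presumably be a private method on DataStore so it can build
the path via self.group_path(), as in the hunk above; the point is only that rmdir
failing with "not empty" replaces the racy list-then-delete check.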
Correct me if I'm wrong, but I think we need to rethink our locking there in general.

We can't lock the directory itself if we also want to be allowed to delete it (same
reasoning as with regular files):

-> A locks backup group
-> B begins locking: opens dir handle
-> A deletes group, group is now gone
-> C recreates the backup group, _locked_
-> A drops directory handle (& with it the lock)
-> B acquires lock on the deleted directory handle, which works just fine now

B and C both think they're holding an exclusive lock.

We *could* use a lock helper that also stats before and after the lock (on the handle
first, then on the *path* for the second one) to see if the inode changed, to catch this
(see the sketch below)...

Or we just live with empty directories or (hidden) lock files lingering (which would only
be safe to clean up during a maintenance-mode operation).

Or we introduce a create/delete lock one level up, held only for the duration of
mkdir()/rmdir() calls.

(But in any case, all the current inline `lock_dir_noblock` calls should instead go over
a safe helper dealing with this properly...)
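
For the "stat before and after the lock" idea, a very rough sketch of such a helper,
purely for illustration: the name with_locked_dir is made up, and because
lock_dir_noblock does not expose its internal directory handle this version stats the
*path* before and after taking the lock rather than fstat'ing the handle first, so it is
a coarser check than described above:

    use std::os::unix::fs::MetadataExt;
    use std::path::Path;

    use anyhow::{bail, Error};

    /// Sketch: run `f` while holding the directory lock, but fail if the
    /// directory was deleted and recreated while we were acquiring it.
    fn with_locked_dir<F, R>(path: &Path, what: &str, msg: &str, f: F) -> Result<R, Error>
    where
        F: FnOnce() -> Result<R, Error>,
    {
        let ino_before = std::fs::metadata(path)?.ino();

        // takes the flock() on a handle it opens internally
        let _guard = proxmox_sys::fs::lock_dir_noblock(path, what, msg)?;

        // if the path now resolves to a different inode, our lock refers to a
        // directory that no longer exists (or was replaced) - bail out instead
        // of pretending we hold an exclusive lock on the live directory
        let ino_after = std::fs::metadata(path)?.ino();
        if ino_before != ino_after {
            bail!("directory {:?} was replaced while acquiring the lock", path);
        }

        f()
    }

A spurious failure is possible if the directory is legitimately replaced between the
first stat and the lock, but for a non-blocking lock that seems acceptable; the inode
comparison only has to catch the case where the lock ends up on an already-deleted
directory.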