From: "Shannon Sterz" <s.sterz@proxmox.com>
To: "Wolfgang Bumiller" <w.bumiller@proxmox.com>
Cc: Proxmox Backup Server development discussion <pbs-devel@lists.proxmox.com>
Date: Fri, 07 Mar 2025 16:53:14 +0100
Message-Id: <D8A5LC2K31FM.3F8NB7FLU5B74@proxmox.com>
In-Reply-To: <6rq4ttz3i7rercdebpty3wpxfdtrcllahcsofr56fu7luydgpt@e6xasbedzhvp>
Subject: Re: [pbs-devel] [PATCH backup] fix #3336: cleanup when deleting last snapshot

On Fri Mar 7, 2025 at 4:33 PM CET, Wolfgang Bumiller wrote:
> On Fri, Mar 07, 2025 at 11:37:32AM +0100, Shannon Sterz wrote:
>> On Thu Mar 6, 2025 at 1:08 PM CET, Maximiliano Sandoval wrote:
>> > When the last snapshot from a group is deleted we clear the entire
>> > group; this in turn clears the owner of the group.
>> >
>> > Without this change, the user is unable to change the owner of the group
>> > after the last snapshot has been deleted. This would prevent new
>> > backups to the same group from a different owner.
>> >
>> > Signed-off-by: Maximiliano Sandoval <m.sandoval@proxmox.com>
>> > ---
>> >  src/api2/admin/datastore.rs | 12 +++++++++++-
>> >  1 file changed, 11 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
>> > index dbb7ae47..305673f1 100644
>> > --- a/src/api2/admin/datastore.rs
>> > +++ b/src/api2/admin/datastore.rs
>> > @@ -423,10 +423,20 @@ pub async fn delete_snapshot(
>> >          &backup_dir.group,
>> >      )?;
>> >
>> > -    let snapshot = datastore.backup_dir(ns, backup_dir)?;
>> > +    let snapshot = datastore.backup_dir(ns.clone(), backup_dir)?;
>> >
>> >      snapshot.destroy(false)?;
>> >
>> > +    let group = BackupGroup::from(snapshot);
>> > +    if group.list_backups().is_ok_and(|backups| backups.is_empty()) {
>> > +        if let Err(err) = datastore.remove_backup_group(&ns, group.as_ref()) {
>> > +            log::error!(
>> > +                "error while cleaning group {path:?} - {err}",
>> > +                path = group.full_group_path()
>> > +            );
>> > +        }
>> > +    }
>> > +
>>
>> this bug... looks so harmless, but comes back to haunt me every time. as
>> explained by wobu here [1] it is not really possible to cleanly fix this
>> bug without reworking our locking mechanism. i did send some patches for
>> that (checks notes) almost 3 years ago ^^', but they are still not
>> applied and definitely need a rework at this point [2]. i can pick this
>> up again, sorry, i got focused on other things in the meantime.
>>
>> [1]: https://lore.proxmox.com/pbs-devel/20220314093617.n2mc2jv4k6ntzroo@wobu-vie.proxmox.com/
>> [2]: https://lore.proxmox.com/pbs-devel/20220824124829.392189-1-s.sterz@proxmox.com/
>
> IIRC we need to figure out a good upgrade strategy so running processes
> don't use different locking.
>
> One idea was to have the postinst script create a file in /run (e.g.
> `/run/proxmox-backup/old-locking`) which, when present, instructs the
> daemons to keep using the old strategy.
>
> The new one would then automatically be used after either a reboot, or
> manually removing the file between stop & start of the daemons.

yeah i remember that being a blocker, but since pbs 4 is coming up
soon-ish, couldn't we just apply it then? requiring a reboot when
upgrading from 3 to 4 seems reasonable to me, though maybe i'm missing
something (e.g. could it be problematic to have the services running,
even shortly, before the reboot?). if that would be an option, it'd be
much simpler than carrying around that switching logic forever (or at
least for one major version?).

also, what would happen if a user accidentally creates that file after
the new locking is already in place? do we consider this "bad luck" or
do we want some kind of protection in place?
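
just to make the marker-file idea concrete, a rough sketch of what i'd
imagine the daemons doing at startup (the path is only taken from the
idea above, the type and function names are made up for the example,
nothing here is from an actual patch):

use std::path::Path;

// marker file the postinst script would create (path as discussed
// above, purely illustrative)
const OLD_LOCKING_MARKER: &str = "/run/proxmox-backup/old-locking";

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum LockingStrategy {
    Old,
    New,
}

// decided once per process start: as long as the marker exists (i.e.
// until a reboot, or until it is removed manually between stop & start
// of the daemons), keep the old strategy so running processes agree
fn locking_strategy() -> LockingStrategy {
    if Path::new(OLD_LOCKING_MARKER).exists() {
        LockingStrategy::Old
    } else {
        LockingStrategy::New
    }
}

fn main() {
    println!("using {:?} locking", locking_strategy());
}

(in a sketch like this the decision is only made at process start, so a
file created later wouldn't change anything until the next restart.)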