From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 11 Nov 2025 13:07:51 +0100
From: Fabian Grünbichler
To: Proxmox Backup Server development discussion
References: <20251029160103.241780-1-h.laimer@proxmox.com>
 <20251029160103.241780-4-h.laimer@proxmox.com>
In-Reply-To: <20251029160103.241780-4-h.laimer@proxmox.com>
MIME-Version: 1.0
User-Agent: astroid/0.17.0 (https://github.com/astroidmail/astroid)
Message-Id: <1762862576.vbnms9h8p9.astroid@yuna.none>
Subject: Re: [pbs-devel] [PATCH proxmox-backup v2 2/4] api: datastore: unmount datastore after sync if configured
List-Id: Proxmox Backup Server development discussion
Reply-To: Proxmox Backup Server development discussion
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: "pbs-devel"

On October 29, 2025 5:01 pm, Hannes Laimer wrote:
> When a sync job is triggered by the mounting of a datastore, we now check
> whether it should also be unmounted automatically afterwards. This is only
> done for jobs triggered by mounting.
>
> We do not do this for manually started or scheduled sync jobs, as those
> run in the proxy process and therefore cannot call the privileged API
> endpoint for unmounting.
>
> The task that starts sync jobs on mount runs in the API process (where the
> mounting occurs), so in that privileged context, we can also perform the
> unmounting.
>
> Tested-by: Robert Obkircher
> Signed-off-by: Hannes Laimer
> ---
>  src/api2/admin/datastore.rs | 21 +++++++++++++++++++--
>  1 file changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
> index 643d1694..75122260 100644
> --- a/src/api2/admin/datastore.rs
> +++ b/src/api2/admin/datastore.rs
> @@ -2430,6 +2430,7 @@ pub fn do_mount_device(datastore: DataStoreConfig) -> Result {
>
>  async fn do_sync_jobs(
>      jobs_to_run: Vec<SyncJobConfig>,
> +    store: String,

instead of this, the helper could also return

>      worker: Arc<WorkerTask>,
>  ) -> Result<(), Error> {
>      let count = jobs_to_run.len();
> @@ -2442,6 +2443,8 @@ async fn do_sync_jobs(
>              .join(", ")
>      );
>
> +    let mut unmount_on_done = false;
> +
>      let client = crate::client_helpers::connect_to_localhost()
>          .context("Failed to connect to localhost for starting sync jobs")?;
>      for (i, job_config) in jobs_to_run.into_iter().enumerate() {
> @@ -2484,7 +2487,21 @@ async fn do_sync_jobs(
>              }
>          }
>      }
> +        unmount_on_done |= job_config.unmount_on_done.unwrap_or_default();
> +    }
> +    if unmount_on_done {

whether unmounting is necessary/desired, and then the caller could
handle the unmounting.. or even better, the unmount handling could live
in the caller entirely, because right now if anything here fails, there
won't be an unmount..

> +        match client
> +            .post(
> +                format!("api2/json/admin/datastore/{store}/unmount").as_str(),
> +                None,
> +            )
> +            .await
> +        {
> +            Ok(_) => info!("triggered unmounting successfully"),
> +            Err(err) => warn!("could not unmount: {err}"),
> +        };
> +    }

we are already in the privileged api daemon here, so we don't need to
connect to the proxy which forwards to the privileged api daemon again,
we can just call the unmount inline directly, right?
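the suggested split could look roughly like the sketch below. note that this is
a minimal, self-contained illustration, not the actual PBS code: `run_sync_jobs`,
`mount_task` and the stubbed `SyncJobConfig` are stand-in names, and the real
helper is async and returns errors via `anyhow::Error`. the point is only the
shape: the helper reports whether an unmount is wanted, and the caller (which
already runs in the privileged daemon) decides and acts, even when running the
jobs failed part-way through.

```rust
// Stub of the relevant config field; the real SyncJobConfig lives elsewhere.
#[derive(Default)]
pub struct SyncJobConfig {
    pub unmount_on_done: Option<bool>,
}

/// Stand-in for do_sync_jobs: runs the jobs and, instead of unmounting itself,
/// returns whether any job requested an unmount afterwards. The job result and
/// the unmount flag are separate, so the flag survives job failures.
fn run_sync_jobs(jobs: &[SyncJobConfig]) -> (Result<(), String>, bool) {
    let unmount_wanted = jobs
        .iter()
        .any(|j| j.unmount_on_done.unwrap_or_default());
    // ... actually run the jobs here, collecting errors into the Result ...
    (Ok(()), unmount_wanted)
}

/// Stand-in for the caller (the mount task): since it runs in the privileged
/// daemon anyway, it could unmount inline here instead of going through the
/// localhost client. Returns whether an unmount was triggered.
fn mount_task(jobs: &[SyncJobConfig]) -> bool {
    let (result, unmount_wanted) = run_sync_jobs(jobs);
    if let Err(err) = result {
        eprintln!("sync jobs failed: {err}");
    }
    // the unmount decision is taken here, regardless of job failures
    unmount_wanted
}

fn main() {
    let jobs = [
        SyncJobConfig { unmount_on_done: None },
        SyncJobConfig { unmount_on_done: Some(true) },
    ];
    println!("unmount wanted: {}", mount_task(&jobs));
}
```

with that shape, a failure inside the job loop no longer skips the unmount,
which addresses the "if anything here fails, there won't be an unmount" issue
above.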
> +
>      Ok(())
>  }
>
> @@ -2566,10 +2583,10 @@ pub fn mount(store: String, rpcenv: &mut dyn RpcEnvironment) -> Result
>          info!("starting {} sync jobs", jobs_to_run.len());
>          let _ = WorkerTask::spawn(
>              "mount-sync-jobs",
> -            Some(store),
> +            Some(store.clone()),
>              auth_id.to_string(),
>              false,
> -            move |worker| async move { do_sync_jobs(jobs_to_run, worker).await },
> +            move |worker| async move { do_sync_jobs(jobs_to_run, store, worker).await },
>          );
>      }
>      Ok(())
> --
> 2.47.3

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel