From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 24 Mar 2026 15:02:00 +0100
From: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Subject: Re: [PATCH proxmox-backup 3/5] bin: proxy: periodically schedule fstrim on datastore's filesystems
To: Christian Ebner, pbs-devel@lists.proxmox.com
References: <20260319143649.681937-1-c.ebner@proxmox.com> <20260319143649.681937-5-c.ebner@proxmox.com>
Message-Id: <1774360858.6m2qiiimp7.astroid@yuna.none>
List-Id: Proxmox Backup Server development discussion

On March 19, 2026 4:02 pm, Christian Ebner wrote:
> On 3/19/26 3:36 PM, Christian Ebner wrote:
>> Run the fstrim command as a scheduled job on datastores having a
>> schedule defined in the config.
>>
>> This is done since by default the systemd service to execute fstrim
>> invokes the command with:
>>
>> `/sbin/fstrim --listed-in /etc/fstab:/proc/self/mountinfo ...`
>>
>> which however does not cover datastores created on top of a disk,
>> since these are mounted via systemd mount units. fstrim only
>> evaluates the given list up to the first non-empty file, as
>> stated in the man page [0].
>>
>> [0] https://www.man7.org/linux/man-pages/man8/fstrim.8.html
>>
>> Fixes: https://forum.proxmox.com/threads/181764/
>> Signed-off-by: Christian Ebner
>> ---
>>  src/bin/proxmox-backup-proxy.rs | 59 +++++++++++++++++++++++++++++++++
>>  1 file changed, 59 insertions(+)
>>
>> diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
>> index c1fe3ac15..1d375e69d 100644
>> --- a/src/bin/proxmox-backup-proxy.rs
>> +++ b/src/bin/proxmox-backup-proxy.rs
>> @@ -508,6 +508,7 @@ async fn schedule_tasks() -> Result<(), Error> {
>>      schedule_datastore_verify_jobs().await;
>>      schedule_tape_backup_jobs().await;
>>      schedule_task_log_rotate().await;
>> +    schedule_fstrim().await;
>>
>>      Ok(())
>>  }
>> @@ -879,6 +880,64 @@ async fn schedule_task_log_rotate() {
>>      }
>>  }
>>
>> +async fn schedule_fstrim() {
>> +    let config = match pbs_config::datastore::config() {
>> +        Err(err) => {
>> +            eprintln!("unable to read datastore config - {err}");
>> +            return;
>> +        }
>> +        Ok((config, _digest)) => config,
>> +    };
>> +
>> +    for (store, (_, store_config)) in config.sections {
>> +        let store_config: DataStoreConfig = match serde_json::from_value(store_config) {
>> +            Ok(c) => c,
>> +            Err(err) => {
>> +                eprintln!("datastore config from_value failed - {err}");
>> +                continue;
>> +            }
>> +        };
>> +
>> +        let event_schedule = match &store_config.fstrim_schedule {
>> +            Some(event_schedule) => event_schedule,
>> +            None => continue,
>> +        };
>> +
>> +        let worker_type = "fstrim";
>> +        if check_schedule(worker_type, event_schedule, &store) {
>> +            let mut job = match Job::new(worker_type, &store) {
>> +                Ok(job) => job,
>> +                Err(_) => continue, // could not get lock
>> +            };
>> +
>> +            if let Err(err) = WorkerTask::new_thread(
>> +                worker_type,
>> +                None,
>> +                Authid::root_auth_id().to_string(),
>> +                false,
>> +                move |worker| {
>> +                    job.start(&worker.upid().to_string())?;
>> +                    info!("executing fstrim on filesystem for {store}");
>> +
>> +                    let path = store_config.absolute_path();
>> +                    let result = proxmox_backup::tools::disks::fstrim(Path::new(&path))
>> +                        .map(|output| log::info!("{output}"));
>
> Only realized now that this does not work since it runs unprivileged,
> but it needs to be run as root. So this needs additional rework...
>
> Other open questions still remain though.

it would also need to do a lookup_datastore, since we want to register
fstrim as an operation that blocks things like unmounting? which likely
means it should be implemented on top of Datastore, and not called
directly..

>
>> +
>> +                    let status = worker.create_state(&result);
>> +
>> +                    if let Err(err) = job.finish(status) {
>> +                        eprintln!("could not finish job state for {worker_type}: {err}");
>> +                    }
>> +
>> +                    result
>> +                },
>> +            ) {
>> +                eprintln!("unable to start fstrim task: {err}");
>> +            }
>> +        }
>> +    }
>> +}
>> +
>>  async fn command_reopen_access_logfiles() -> Result<(), Error> {
>>      // only care about the most recent daemon instance for each, proxy & api, as other older ones
>>      // should not respond to new requests anyway, but only finish their current one and then exit.
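[Editor's note] The commit message hinges on fstrim(8)'s `--listed-in` semantics: of the colon-separated candidate files, only the first one that is non-empty is consulted, and later entries are ignored entirely. A minimal, self-contained Rust sketch of that selection rule may help illustrate why the systemd-mounted datastores are skipped; `simulated_read` and the sample file contents below are made up for illustration, this is not PBS or util-linux code:

```rust
/// Return the first candidate path (colon-separated list) whose contents
/// are non-empty, mirroring fstrim's `--listed-in a:b:c` selection rule:
/// later candidates are never consulted once one non-empty file is found.
fn first_non_empty<'a>(
    candidates: &'a str,
    read: &dyn Fn(&str) -> Option<String>,
) -> Option<&'a str> {
    candidates.split(':').find(|&path| {
        read(path).map_or(false, |contents| !contents.trim().is_empty())
    })
}

/// Simulated file contents: fstab lists only the root filesystem, while
/// mountinfo additionally contains a datastore mounted via a systemd
/// mount unit (and therefore absent from fstab).
fn simulated_read(path: &str) -> Option<String> {
    match path {
        "/etc/fstab" => Some("UUID=1234 / ext4 defaults 0 1".to_string()),
        "/proc/self/mountinfo" => {
            Some("... /mnt/datastore/store1 xfs rw ...".to_string())
        }
        _ => None,
    }
}

fn main() {
    // Because /etc/fstab is non-empty on a standard installation,
    // /proc/self/mountinfo is never read, so the datastore mount is
    // never trimmed by the stock fstrim.service.
    let chosen = first_non_empty("/etc/fstab:/proc/self/mountinfo", &simulated_read);
    println!("fstrim would consult: {chosen:?}");
}
```

This is why the patch schedules its own fstrim runs per datastore instead of relying on the stock fstrim.timer/fstrim.service units.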