* [PATCH v1 proxmox-backup 00/11] fix various warnings
@ 2026-02-26 14:40 Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 01/11] datastore: remove allow(clippy::cast_ptr_alignment) attribute Robert Obkircher
` (11 more replies)
0 siblings, 12 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
The first 10 patches fix all clippy warnings except the one about
future incompatibilities in the num-bigint-dig v0.8.4 dependency.
The last patch is an attempt at fixing the compiler warnings about the
deprecated flock function, but I'm not sure if this is an acceptable
solution.
Robert Obkircher (11):
datastore: remove allow(clippy::cast_ptr_alignment) attribute
api: use checked_div for compression ratio calculation
datastore+server: sort by key instead of using comparison functions
api: remove unnecessary ampersand to fix clippy warning
bin: debug: use pattern match instead of is_some+unwrap
bin: proxy: fix clippy warning about unnecessary use of find_map
datastore: silence warning about too many arguments
client: catalog shell: avoid unnecessary block_on in async code
client: catalog shell: combine multiple block_on calls into one
client: catalog shell: avoid unsafe transmute
tape: media catalog: use Flock wrapper instead of deprecated function
pbs-client/src/catalog_shell.rs | 124 +++++++++++++-----------
pbs-datastore/src/backup_info.rs | 4 +-
pbs-datastore/src/chunk_store.rs | 1 +
pbs-datastore/src/datastore.rs | 4 +-
pbs-datastore/src/dynamic_index.rs | 1 -
src/api2/backup/environment.rs | 7 +-
src/api2/node/certificates.rs | 2 +-
src/bin/proxmox-backup-proxy.rs | 4 +-
src/bin/proxmox_backup_debug/inspect.rs | 12 ++-
src/server/pull.rs | 2 +-
src/server/push.rs | 2 +-
src/tape/media_catalog.rs | 49 +++++-----
12 files changed, 112 insertions(+), 100 deletions(-)
--
2.47.3
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH v1 proxmox-backup 01/11] datastore: remove allow(clippy::cast_ptr_alignment) attribute
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 02/11] api: use checked_div for compression ratio calculation Robert Obkircher
` (10 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
The attribute can be removed because the function doesn't perform any
pointer casts.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
pbs-datastore/src/dynamic_index.rs | 1 -
1 file changed, 1 deletion(-)
diff --git a/pbs-datastore/src/dynamic_index.rs b/pbs-datastore/src/dynamic_index.rs
index ad49cdf3..d22150d5 100644
--- a/pbs-datastore/src/dynamic_index.rs
+++ b/pbs-datastore/src/dynamic_index.rs
@@ -149,7 +149,6 @@ impl DynamicIndexReader {
}
#[inline]
- #[allow(clippy::cast_ptr_alignment)]
pub fn chunk_end(&self, pos: usize) -> u64 {
if pos >= self.index.len() {
panic!("chunk index out of range");
--
2.47.3
* [PATCH v1 proxmox-backup 02/11] api: use checked_div for compression ratio calculation
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 01/11] datastore: remove allow(clippy::cast_ptr_alignment) attribute Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 03/11] datastore+server: sort by key instead of using comparison functions Robert Obkircher
` (9 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
Replace the manual zero check with `checked_div` to fix the clippy
warning about it.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
src/api2/backup/environment.rs | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index bd9c5211..d49a5075 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -489,11 +489,8 @@ impl BackupEnvironment {
));
}
- if upload_stat.size > 0 {
- self.log(format!(
- "Compression: {}%",
- (upload_stat.compressed_size * 100) / upload_stat.size
- ));
+ if let Some(c) = (upload_stat.compressed_size * 100).checked_div(upload_stat.size) {
+ self.log(format!("Compression: {c}%"));
}
}
--
2.47.3
* [PATCH v1 proxmox-backup 03/11] datastore+server: sort by key instead of using comparison functions
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 01/11] datastore: remove allow(clippy::cast_ptr_alignment) attribute Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 02/11] api: use checked_div for compression ratio calculation Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 04/11] api: remove unnecessary ampersand to fix clippy warning Robert Obkircher
` (8 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
Make sorting more readable and fix multiple clippy warnings.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
pbs-datastore/src/backup_info.rs | 4 ++--
pbs-datastore/src/datastore.rs | 4 +---
src/server/pull.rs | 2 +-
src/server/push.rs | 2 +-
4 files changed, 5 insertions(+), 7 deletions(-)
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index 859039cf..9e298ee8 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -851,10 +851,10 @@ impl BackupInfo {
pub fn sort_list(list: &mut [BackupInfo], ascendending: bool) {
if ascendending {
// oldest first
- list.sort_unstable_by(|a, b| a.backup_dir.dir.time.cmp(&b.backup_dir.dir.time));
+ list.sort_unstable_by_key(|a| a.backup_dir.dir.time);
} else {
// newest first
- list.sort_unstable_by(|a, b| b.backup_dir.dir.time.cmp(&a.backup_dir.dir.time));
+ list.sort_unstable_by_key(|b| std::cmp::Reverse(b.backup_dir.dir.time));
}
}
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 7ad3d917..51e38ea4 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -2250,9 +2250,7 @@ impl DataStore {
match self.inner.chunk_order {
// sorting by inode improves data locality, which makes it lots faster on spinners
- ChunkOrder::Inode => {
- chunk_list.sort_unstable_by(|(_, ino_a), (_, ino_b)| ino_a.cmp(ino_b))
- }
+ ChunkOrder::Inode => chunk_list.sort_unstable_by_key(|(_, ino)| *ino),
ChunkOrder::None => {}
}
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 15d8b9de..57c5ef32 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -622,7 +622,7 @@ async fn pull_group(
.source
.list_backup_snapshots(source_namespace, group)
.await?;
- raw_list.sort_unstable_by(|a, b| a.backup.time.cmp(&b.backup.time));
+ raw_list.sort_unstable_by_key(|a| a.backup.time);
let target_ns = source_namespace.map_prefix(¶ms.source.get_ns(), ¶ms.target.ns)?;
diff --git a/src/server/push.rs b/src/server/push.rs
index d7884fce..b1c509b0 100644
--- a/src/server/push.rs
+++ b/src/server/push.rs
@@ -679,7 +679,7 @@ pub(crate) async fn push_group(
.source
.list_backup_snapshots(namespace, group)
.await?;
- snapshots.sort_unstable_by(|a, b| a.backup.time.cmp(&b.backup.time));
+ snapshots.sort_unstable_by_key(|a| a.backup.time);
if snapshots.is_empty() {
info!("Group '{group}' contains no snapshots to sync to remote");
--
2.47.3
* [PATCH v1 proxmox-backup 04/11] api: remove unnecessary ampersand to fix clippy warning
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
` (2 preceding siblings ...)
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 03/11] datastore+server: sort by key instead of using comparison functions Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 05/11] bin: debug: use pattern match instead of is_some+unwrap Robert Obkircher
` (7 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
The `as_bytes` method already returns a reference.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
src/api2/node/certificates.rs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/api2/node/certificates.rs b/src/api2/node/certificates.rs
index 7fb3a478..e5761e5f 100644
--- a/src/api2/node/certificates.rs
+++ b/src/api2/node/certificates.rs
@@ -396,7 +396,7 @@ pub fn revoke_acme_cert(rpcenv: &mut dyn RpcEnvironment) -> Result<String, Error
true,
move |_worker| async move {
info!("Revoking old certificate");
- proxmox_acme_api::revoke_certificate(&acme_config, &cert_pem.as_bytes()).await?;
+ proxmox_acme_api::revoke_certificate(&acme_config, cert_pem.as_bytes()).await?;
info!("Deleting certificate and regenerating a self-signed one");
delete_custom_certificate().await?;
Ok(())
--
2.47.3
* [PATCH v1 proxmox-backup 05/11] bin: debug: use pattern match instead of is_some+unwrap
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
` (3 preceding siblings ...)
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 04/11] api: remove unnecessary ampersand to fix clippy warning Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 06/11] bin: proxy: fix clippy warning about unnecessary use of find_map Robert Obkircher
` (6 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
This is slightly more idiomatic and fixes a clippy warning.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
src/bin/proxmox_backup_debug/inspect.rs | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/src/bin/proxmox_backup_debug/inspect.rs b/src/bin/proxmox_backup_debug/inspect.rs
index 0b129f28..b5a7d3f2 100644
--- a/src/bin/proxmox_backup_debug/inspect.rs
+++ b/src/bin/proxmox_backup_debug/inspect.rs
@@ -36,11 +36,13 @@ fn decode_blob(
let mut crypt_conf_opt = None;
let crypt_conf;
- if blob.is_encrypted() && key_file.is_some() {
- let (key, _created, _fingerprint) =
- load_and_decrypt_key(key_file.unwrap(), &get_encryption_key_password)?;
- crypt_conf = CryptConfig::new(key)?;
- crypt_conf_opt = Some(&crypt_conf);
+ if blob.is_encrypted() {
+ if let Some(key_file) = key_file {
+ let (key, _created, _fingerprint) =
+ load_and_decrypt_key(key_file, &get_encryption_key_password)?;
+ crypt_conf = CryptConfig::new(key)?;
+ crypt_conf_opt = Some(&crypt_conf);
+ }
}
output_path = match output_path {
--
2.47.3
* [PATCH v1 proxmox-backup 06/11] bin: proxy: fix clippy warning about unnecessary use of find_map
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
` (4 preceding siblings ...)
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 05/11] bin: debug: use pattern match instead of is_some+unwrap Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 07/11] datastore: silence warning about too many arguments Robert Obkircher
` (5 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
This change is done purely to silence the warning until a second
variant is added to the RateLimiterTag enum.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
src/bin/proxmox-backup-proxy.rs | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 9f14b70d..af8056f7 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -977,9 +977,7 @@ fn lookup_rate_limiter(
cache.reload(now);
- let user = tags.iter().find_map(|tag| match tag {
- RateLimiterTag::User(user) => Some(user.as_str()),
- });
+ let user = tags.first().map(|RateLimiterTag::User(user)| user);
let authid = user.and_then(|s| s.parse::<pbs_api_types::Authid>().ok());
let user_parsed = authid.as_ref().map(|auth_id| auth_id.user());
--
2.47.3
* [PATCH v1 proxmox-backup 07/11] datastore: silence warning about too many arguments
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
` (5 preceding siblings ...)
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 06/11] bin: proxy: fix clippy warning about unnecessary use of find_map Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 08/11] client: catalog shell: avoid unnecessary block_on in async code Robert Obkircher
` (4 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
The function already has a FIXME comment to remind us that it should
be refactored.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
pbs-datastore/src/chunk_store.rs | 1 +
1 file changed, 1 insertion(+)
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index bd1dc353..62d07fad 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -538,6 +538,7 @@ impl ChunkStore {
///
/// Unsafe: requires locking and GC checks to be called
/// FIXME: make this internal with further refactoring
+ #[allow(clippy::too_many_arguments)]
pub(super) unsafe fn cond_sweep_chunk<T: FnOnce() -> Result<(), Error>>(
&self,
atime: i64,
--
2.47.3
* [PATCH v1 proxmox-backup 08/11] client: catalog shell: avoid unnecessary block_on in async code
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
` (6 preceding siblings ...)
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 07/11] datastore: silence warning about too many arguments Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 09/11] client: catalog shell: combine multiple block_on calls into one Robert Obkircher
` (3 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
Prefer normal async/await over block_on. The signature of the "pub"
new method can be changed because it is implemented on a private type.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
pbs-client/src/catalog_shell.rs | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/pbs-client/src/catalog_shell.rs b/pbs-client/src/catalog_shell.rs
index 5e1835e4..43f0a73b 100644
--- a/pbs-client/src/catalog_shell.rs
+++ b/pbs-client/src/catalog_shell.rs
@@ -1080,7 +1080,8 @@ impl Shell {
extractor,
match_list,
&self.accessor,
- )?;
+ )
+ .await?;
extractor.extract().await
}
@@ -1119,7 +1120,7 @@ struct ExtractorState<'a> {
}
impl<'a> ExtractorState<'a> {
- pub fn new(
+ pub async fn new(
catalog: &'a mut Option<CatalogReader>,
dir_stack: Vec<PathStackEntry>,
extractor: crate::pxar::extract::Extractor,
@@ -1132,8 +1133,8 @@ impl<'a> ExtractorState<'a> {
.into_iter()
} else {
let pxar_entry = parent_pxar_entry(&dir_stack)?;
- let dir = block_on(pxar_entry.enter_directory())?;
- let entries = block_on(crate::pxar::tools::pxar_metadata_read_dir(dir))?;
+ let dir = pxar_entry.enter_directory().await?;
+ let entries = crate::pxar::tools::pxar_metadata_read_dir(dir).await?;
let mut catalog_entries = Vec::with_capacity(entries.len());
for entry in entries {
@@ -1229,7 +1230,7 @@ impl<'a> ExtractorState<'a> {
let dir = Shell::walk_pxar_archive(self.accessor, &mut self.dir_stack).await?;
self.dir_stack.pop();
let dir = dir.enter_directory().await?;
- let entries = block_on(crate::pxar::tools::pxar_metadata_read_dir(dir))?;
+ let entries = crate::pxar::tools::pxar_metadata_read_dir(dir).await?;
entries
.into_iter()
.map(|entry| {
--
2.47.3
* [PATCH v1 proxmox-backup 09/11] client: catalog shell: combine multiple block_on calls into one
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
` (7 preceding siblings ...)
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 08/11] client: catalog shell: avoid unnecessary block_on in async code Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 10/11] client: catalog shell: avoid unsafe transmute Robert Obkircher
` (2 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
Reduce the number of places where block_on is called.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
pbs-client/src/catalog_shell.rs | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/pbs-client/src/catalog_shell.rs b/pbs-client/src/catalog_shell.rs
index 43f0a73b..5e62a62a 100644
--- a/pbs-client/src/catalog_shell.rs
+++ b/pbs-client/src/catalog_shell.rs
@@ -104,7 +104,7 @@ pub fn catalog_shell_cli() -> CommandLineInterface {
fn complete_path(complete_me: &str, _map: &HashMap<String, String>) -> Vec<String> {
let shell: &mut Shell = unsafe { std::mem::transmute(SHELL.unwrap()) };
- match shell.complete_path(complete_me) {
+ match block_on(shell.complete_path(complete_me)) {
Ok(list) => list,
Err(err) => {
error!("error during completion: {}", err);
@@ -554,7 +554,7 @@ impl Shell {
Ok(())
}
- fn step_nofollow(
+ async fn step_nofollow(
stack: &mut Vec<PathStackEntry>,
catalog: &mut Option<CatalogReader>,
component: std::path::Component<'_>,
@@ -579,8 +579,8 @@ impl Shell {
}
} else {
let pxar_entry = parent_pxar_entry(stack)?;
- let parent_dir = block_on(pxar_entry.enter_directory())?;
- match block_on(parent_dir.lookup(entry))? {
+ let parent_dir = pxar_entry.enter_directory().await?;
+ match parent_dir.lookup(entry).await? {
Some(entry) => {
let entry_attr = DirEntryAttribute::try_from(&entry)?;
stack.push(PathStackEntry {
@@ -614,13 +614,13 @@ impl Shell {
}
/// Non-async version cannot follow symlinks.
- fn walk_catalog_nofollow(
+ async fn walk_catalog_nofollow(
stack: &mut Vec<PathStackEntry>,
catalog: &mut Option<CatalogReader>,
path: &Path,
) -> Result<(), Error> {
for c in path.components() {
- Self::step_nofollow(stack, catalog, c)?;
+ Self::step_nofollow(stack, catalog, c).await?;
}
Ok(())
}
@@ -657,7 +657,7 @@ impl Shell {
Ok(stack.last().unwrap().pxar.clone().unwrap())
}
- fn complete_path(&mut self, input: &str) -> Result<Vec<String>, Error> {
+ async fn complete_path(&mut self, input: &str) -> Result<Vec<String>, Error> {
let mut tmp_stack;
let (parent, base, part) = match input.rfind('/') {
Some(ind) => {
@@ -668,7 +668,7 @@ impl Shell {
} else {
tmp_stack = self.position.clone();
}
- Self::walk_catalog_nofollow(&mut tmp_stack, &mut self.catalog, &path)?;
+ Self::walk_catalog_nofollow(&mut tmp_stack, &mut self.catalog, &path).await?;
(&tmp_stack.last().unwrap(), base, part)
}
None => (&self.position.last().unwrap(), "", input),
@@ -678,12 +678,12 @@ impl Shell {
catalog.read_dir(&parent.catalog)?
} else {
let dir = if let Some(entry) = parent.pxar.as_ref() {
- block_on(entry.enter_directory())?
+ entry.enter_directory().await?
} else {
bail!("missing pxar entry for parent");
};
let mut out = Vec::new();
- let entries = block_on(crate::pxar::tools::pxar_metadata_read_dir(dir))?;
+ let entries = crate::pxar::tools::pxar_metadata_read_dir(dir).await?;
for entry in entries {
let mut name = base.to_string();
let file_name = entry.file_name().as_bytes();
--
2.47.3
* [PATCH v1 proxmox-backup 10/11] client: catalog shell: avoid unsafe transmute
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
` (8 preceding siblings ...)
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 09/11] client: catalog shell: combine multiple block_on calls into one Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 11/11] tape: media catalog: use Flock wrapper instead of deprecated function Robert Obkircher
2026-03-05 11:20 ` partially-applied: [PATCH v1 proxmox-backup 00/11] fix various warnings Fabian Grünbichler
11 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
Wrap data in a Mutex instead of storing a stack pointer in a global
usize variable.
The previous version created two simultaneous mutable references to
the same memory while readline invoked the completion callback.
According to a clippy warning, transmuting a pointer to an integer
and back was also undefined behavior, because it does not preserve
pointer provenance. The pointer would also have been left dangling if
the `Shell::shell` future was cancelled or panicked.
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
pbs-client/src/catalog_shell.rs | 95 ++++++++++++++++++---------------
1 file changed, 53 insertions(+), 42 deletions(-)
diff --git a/pbs-client/src/catalog_shell.rs b/pbs-client/src/catalog_shell.rs
index 5e62a62a..1871172b 100644
--- a/pbs-client/src/catalog_shell.rs
+++ b/pbs-client/src/catalog_shell.rs
@@ -7,6 +7,7 @@ use std::ops::ControlFlow;
use std::os::unix::ffi::{OsStrExt, OsStringExt};
use std::path::{Path, PathBuf};
use std::pin::Pin;
+use std::sync::Mutex;
use anyhow::{bail, format_err, Error};
use nix::dir::Dir;
@@ -35,7 +36,7 @@ type FileEntry = pxar::accessor::aio::FileEntry<Reader>;
const MAX_SYMLINK_COUNT: usize = 40;
-static mut SHELL: Option<usize> = None;
+static SHELL: Mutex<Option<Shell>> = Mutex::new(None);
/// This list defines all the shell commands and their properties
/// using the api schema
@@ -103,8 +104,10 @@ pub fn catalog_shell_cli() -> CommandLineInterface {
}
fn complete_path(complete_me: &str, _map: &HashMap<String, String>) -> Vec<String> {
- let shell: &mut Shell = unsafe { std::mem::transmute(SHELL.unwrap()) };
- match block_on(shell.complete_path(complete_me)) {
+ let result = block_on(Shell::with(async |shell| {
+ shell.complete_path(complete_me).await
+ }));
+ match result {
Ok(list) => list,
Err(err) => {
error!("error during completion: {}", err);
@@ -124,7 +127,7 @@ async fn exit() -> Result<(), Error> {
#[api(input: { properties: {} })]
/// List the current working directory.
async fn pwd_command() -> Result<(), Error> {
- Shell::with(move |shell| shell.pwd()).await
+ Shell::with(Shell::pwd).await
}
#[api(
@@ -141,7 +144,7 @@ async fn pwd_command() -> Result<(), Error> {
/// Change the current working directory to the new directory
async fn cd_command(path: Option<String>) -> Result<(), Error> {
let path = path.as_ref().map(Path::new);
- Shell::with(move |shell| shell.cd(path)).await
+ Shell::with(async |shell| shell.cd(path).await).await
}
#[api(
@@ -158,7 +161,7 @@ async fn cd_command(path: Option<String>) -> Result<(), Error> {
/// List the content of working directory or given path.
async fn ls_command(path: Option<String>) -> Result<(), Error> {
let path = path.as_ref().map(Path::new);
- Shell::with(move |shell| shell.ls(path)).await
+ Shell::with(async |shell| shell.ls(path).await).await
}
#[api(
@@ -176,7 +179,7 @@ async fn ls_command(path: Option<String>) -> Result<(), Error> {
/// This is expensive because the data has to be read from the pxar archive, which means reading
/// over the network.
async fn stat_command(path: String) -> Result<(), Error> {
- Shell::with(move |shell| shell.stat(PathBuf::from(path))).await
+ Shell::with(async |shell| shell.stat(PathBuf::from(path)).await).await
}
#[api(
@@ -194,7 +197,7 @@ async fn stat_command(path: String) -> Result<(), Error> {
/// This will return an error if the entry is already present in the list or
/// if an invalid path was provided.
async fn select_command(path: String) -> Result<(), Error> {
- Shell::with(move |shell| shell.select(PathBuf::from(path))).await
+ Shell::with(async |shell| shell.select(PathBuf::from(path)).await).await
}
#[api(
@@ -212,13 +215,13 @@ async fn select_command(path: String) -> Result<(), Error> {
/// This will return an error if the entry was not found in the list of entries
/// selected for restore.
async fn deselect_command(path: String) -> Result<(), Error> {
- Shell::with(move |shell| shell.deselect(PathBuf::from(path))).await
+ Shell::with(async |shell| shell.deselect(PathBuf::from(path)).await).await
}
#[api( input: { properties: { } })]
/// Clear the list of files selected for restore.
async fn clear_selected_command() -> Result<(), Error> {
- Shell::with(move |shell| shell.deselect_all()).await
+ Shell::with(async |shell| shell.deselect_all().await).await
}
#[api(
@@ -235,7 +238,7 @@ async fn clear_selected_command() -> Result<(), Error> {
)]
/// List entries currently selected for restore.
async fn list_selected_command(patterns: bool) -> Result<(), Error> {
- Shell::with(move |shell| shell.list_selected(patterns)).await
+ Shell::with(async |shell| shell.list_selected(patterns).await).await
}
#[api(
@@ -255,7 +258,7 @@ async fn list_selected_command(patterns: bool) -> Result<(), Error> {
)]
/// Find entries in the catalog matching the given match pattern.
async fn find_command(pattern: String, select: bool) -> Result<(), Error> {
- Shell::with(move |shell| shell.find(pattern, select)).await
+ Shell::with(async |shell| shell.find(pattern, select).await).await
}
#[api(
@@ -272,7 +275,7 @@ async fn find_command(pattern: String, select: bool) -> Result<(), Error> {
///
/// Target must not exist on the clients filesystem.
async fn restore_selected_command(target: String) -> Result<(), Error> {
- Shell::with(move |shell| shell.restore_selected(PathBuf::from(target))).await
+ Shell::with(async |shell| shell.restore_selected(PathBuf::from(target)).await).await
}
#[api(
@@ -295,7 +298,7 @@ async fn restore_selected_command(target: String) -> Result<(), Error> {
/// subset of this sub-archive.
/// If pattern is not present or empty, the full archive is restored to target.
async fn restore_command(target: String, pattern: Option<String>) -> Result<(), Error> {
- Shell::with(move |shell| shell.restore(PathBuf::from(target), pattern)).await
+ Shell::with(async |shell| shell.restore(PathBuf::from(target), pattern).await).await
}
/// TODO: Should we use this to fix `step()`? Make path resolution behave more like described in
@@ -305,9 +308,6 @@ async fn restore_command(target: String, pattern: Option<String>) -> Result<(),
/// trailing `Component::CurDir` entries. Since we only support regular paths we'll roll our own
/// here:
pub struct Shell {
- /// Readline instance handling input and callbacks
- rl: rustyline::Editor<CliHelper, rustyline::history::MemHistory>,
-
/// Interactive prompt.
prompt: String,
@@ -351,13 +351,6 @@ impl Shell {
archive_name: &str,
archive: Accessor,
) -> Result<Self, Error> {
- let cli_helper = CliHelper::new(catalog_shell_cli());
- let mut rl = rustyline::Editor::<CliHelper, _>::with_history(
- rustyline::Config::default(),
- rustyline::history::MemHistory::new(),
- )?;
- rl.set_helper(Some(cli_helper));
-
let mut position = Vec::new();
if let Some(catalog) = catalog.as_mut() {
let catalog_root = catalog.root()?;
@@ -385,7 +378,6 @@ impl Shell {
}
let mut this = Self {
- rl,
prompt: String::new(),
catalog,
selected: HashMap::new(),
@@ -396,28 +388,40 @@ impl Shell {
Ok(this)
}
- async fn with<'a, Fut, R, F>(call: F) -> Result<R, Error>
+ async fn with<R, F>(call: F) -> Result<R, Error>
where
- F: FnOnce(&'a mut Shell) -> Fut,
- Fut: Future<Output = Result<R, Error>>,
- F: 'a,
- Fut: 'a,
- R: 'static,
+ F: AsyncFnOnce(&mut Shell) -> Result<R, Error>,
{
- let shell: &mut Shell = unsafe { std::mem::transmute(SHELL.unwrap()) };
- call(&mut *shell).await
+ // Under the assumption that only one shell command is executed at a
+ // time, moving out of a std mutex like this ensures data integrity
+ // after panics (or cancellations) and detects deadlocks from recursion.
+ let mut shell = SHELL
+ .lock()
+ .unwrap()
+ .take()
+ .ok_or_else(|| format_err!("expected SHELL"))?;
+ let result = call(&mut shell).await;
+ *SHELL.lock().unwrap() = Some(shell);
+ result
}
- pub async fn shell(mut self) -> Result<(), Error> {
- let this = &mut self;
- unsafe {
- SHELL = Some(this as *mut Shell as usize);
- }
- while let Ok(line) = this.rl.readline(&this.prompt) {
+ pub async fn shell(self) -> Result<(), Error> {
+ let mut prompt = self.prompt.clone();
+
+ *SHELL.lock().unwrap() = Some(self);
+
+ let cli_helper = CliHelper::new(catalog_shell_cli());
+ let mut rl = rustyline::Editor::<CliHelper, _>::with_history(
+ rustyline::Config::default(),
+ rustyline::history::MemHistory::new(),
+ )?;
+ rl.set_helper(Some(cli_helper));
+
+ while let Ok(line) = rl.readline(&prompt) {
if line == "exit" {
break;
}
- let helper = this.rl.helper().unwrap();
+ let helper = rl.helper().unwrap();
let args = match cli::shellword_split(&line) {
Ok(args) => args,
Err(err) => {
@@ -429,9 +433,16 @@ impl Shell {
let _ =
cli::handle_command_future(helper.cmd_def(), "", args, cli::CliEnvironment::new())
.await;
- let _ = this.rl.add_history_entry(line);
- this.update_prompt();
+ let _ = rl.add_history_entry(line);
+
+ if let Some(shell) = &mut *SHELL.lock().unwrap() {
+ shell.update_prompt();
+ prompt = shell.prompt.clone();
+ } else {
+ bail!("SHELL missing at prompt update");
+ }
}
+ *SHELL.lock().unwrap() = None;
Ok(())
}
--
2.47.3
* [PATCH v1 proxmox-backup 11/11] tape: media catalog: use Flock wrapper instead of deprecated function
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
` (9 preceding siblings ...)
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 10/11] client: catalog shell: avoid unsafe transmute Robert Obkircher
@ 2026-02-26 14:40 ` Robert Obkircher
2026-03-05 11:08 ` Fabian Grünbichler
2026-03-05 11:20 ` partially-applied: [PATCH v1 proxmox-backup 00/11] fix various warnings Fabian Grünbichler
11 siblings, 1 reply; 16+ messages in thread
From: Robert Obkircher @ 2026-02-26 14:40 UTC (permalink / raw)
To: pbs-devel
Switch to the new wrapper type to fix the deprecation warnings. Panic
if unlock fails, because there is no easy way to return the File that
was temporarily moved out of the field.
An alternative to moving out of the optional field would be to use
`File::try_clone` but that isn't ideal either because duplicated file
descriptors share the same lock status. That also doesn't solve the
problem that propagating the error without leaking the locked file
descriptor requires unsafe code or a change to nix.
[1] https://lore.proxmox.com/pbs-devel/1763634535.5sqd1r4buu.astroid@yuna.none/
Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
---
src/tape/media_catalog.rs | 49 +++++++++++++++++++++------------------
1 file changed, 27 insertions(+), 22 deletions(-)
diff --git a/src/tape/media_catalog.rs b/src/tape/media_catalog.rs
index 63329a65..372bf59d 100644
--- a/src/tape/media_catalog.rs
+++ b/src/tape/media_catalog.rs
@@ -6,6 +6,7 @@ use std::path::{Path, PathBuf};
use anyhow::{bail, format_err, Error};
use endian_trait::Endian;
+use nix::fcntl::{Flock, FlockArg};
use proxmox_sys::fs::read_subdir;
@@ -194,7 +195,7 @@ impl MediaCatalog {
let me = proxmox_lang::try_block!({
Self::create_basedir(base_path)?;
- let mut file = std::fs::OpenOptions::new()
+ let file = std::fs::OpenOptions::new()
.read(true)
.write(write)
.create(create)
@@ -219,9 +220,10 @@ impl MediaCatalog {
};
// Note: lock file, to get a consistent view with load_catalog
- nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::LockExclusive)?;
+ let mut file = Flock::lock(file, FlockArg::LockExclusive)
+ .map_err(|e| format_err!("flock failed: {e:?}"))?;
let result = me.load_catalog(&mut file, media_id.media_set_label.as_ref());
- nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::Unlock)?;
+ let file = file.unlock().expect("shouldn't fail for valid fd");
let (found_magic_number, _) = result?;
@@ -367,27 +369,30 @@ impl MediaCatalog {
return Ok(());
}
- match self.file {
- Some(ref mut file) => {
- let pending = &self.pending;
- // Note: lock file, to get a consistent view with load_catalog
- nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::LockExclusive)?;
- let result: Result<(), Error> = proxmox_lang::try_block!({
- file.write_all(pending)?;
- file.flush()?;
- file.sync_data()?;
- Ok(())
- });
- nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::Unlock)?;
-
- result?;
- }
- None => bail!("media catalog not writable (opened read only)"),
- }
+ let Some(file) = self.file.take() else {
+ bail!("media catalog not writable (opened read only)");
+ };
+
+ let mut file = Flock::lock(file, FlockArg::LockExclusive).map_err(|(f, e)| {
+ self.file = Some(f);
+ format_err!("flock failed: {:?} - {e:?}", self.file)
+ })?;
- self.pending = Vec::new();
+ let pending = &self.pending;
+ // Note: lock file, to get a consistent view with load_catalog
+ let result: Result<(), Error> = proxmox_lang::try_block!({
+ file.write_all(pending)?;
+ file.flush()?;
+ file.sync_data()?;
+ Ok(())
+ });
- Ok(())
+ self.file = Some(file.unlock().expect("shouldn't fail for valid fd"));
+
+ if result.is_ok() {
+ self.pending = Vec::new();
+ }
+ result
}
/// Conditionally commit if in pending data is large (> 1Mb)
--
2.47.3
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v1 proxmox-backup 11/11] tape: media catalog: use Flock wrapper instead of deprecated function
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 11/11] tape: media catalog: use Flock wrapper instead of deprecated function Robert Obkircher
@ 2026-03-05 11:08 ` Fabian Grünbichler
2026-03-05 13:13 ` Dominik Csapak
0 siblings, 1 reply; 16+ messages in thread
From: Fabian Grünbichler @ 2026-03-05 11:08 UTC (permalink / raw)
To: pbs-devel, Robert Obkircher
On February 26, 2026 3:40 pm, Robert Obkircher wrote:
> Switch to the new wrapper type to fix the deprecation warnings. Panic
> if unlock fails, because there is no easy way to return the File that
> was temporarily moved out of the field.
>
> An alternative to moving out of the optional field would be to use
> `File::try_clone` but that isn't ideal either because duplicated file
> descriptors share the same lock status. That also doesn't solve the
> problem that propagating the error without leaking the locked file
> descriptor requires unsafe code or a change to nix.
>
> [1] https://lore.proxmox.com/pbs-devel/1763634535.5sqd1r4buu.astroid@yuna.none/
>
> Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
> ---
> src/tape/media_catalog.rs | 49 +++++++++++++++++++++------------------
> 1 file changed, 27 insertions(+), 22 deletions(-)
>
> diff --git a/src/tape/media_catalog.rs b/src/tape/media_catalog.rs
> index 63329a65..372bf59d 100644
> --- a/src/tape/media_catalog.rs
> +++ b/src/tape/media_catalog.rs
> @@ -6,6 +6,7 @@ use std::path::{Path, PathBuf};
>
> use anyhow::{bail, format_err, Error};
> use endian_trait::Endian;
> +use nix::fcntl::{Flock, FlockArg};
>
> use proxmox_sys::fs::read_subdir;
>
> @@ -194,7 +195,7 @@ impl MediaCatalog {
> let me = proxmox_lang::try_block!({
> Self::create_basedir(base_path)?;
>
> - let mut file = std::fs::OpenOptions::new()
> + let file = std::fs::OpenOptions::new()
> .read(true)
> .write(write)
> .create(create)
> @@ -219,9 +220,10 @@ impl MediaCatalog {
> };
>
> // Note: lock file, to get a consistent view with load_catalog
> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::LockExclusive)?;
> + let mut file = Flock::lock(file, FlockArg::LockExclusive)
> + .map_err(|e| format_err!("flock failed: {e:?}"))?;
> let result = me.load_catalog(&mut file, media_id.media_set_label.as_ref());
> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::Unlock)?;
> + let file = file.unlock().expect("shouldn't fail for valid fd");
here we could just return an error instead of panicking? we only read
from the file anyway, and bubbling up the error should be fine..
>
> let (found_magic_number, _) = result?;
>
> @@ -367,27 +369,30 @@ impl MediaCatalog {
> return Ok(());
> }
>
> - match self.file {
> - Some(ref mut file) => {
> - let pending = &self.pending;
> - // Note: lock file, to get a consistent view with load_catalog
> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::LockExclusive)?;
> - let result: Result<(), Error> = proxmox_lang::try_block!({
> - file.write_all(pending)?;
> - file.flush()?;
> - file.sync_data()?;
> - Ok(())
> - });
> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::Unlock)?;
> -
> - result?;
> - }
> - None => bail!("media catalog not writable (opened read only)"),
> - }
> + let Some(file) = self.file.take() else {
> + bail!("media catalog not writable (opened read only)");
> + };
> +
> + let mut file = Flock::lock(file, FlockArg::LockExclusive).map_err(|(f, e)| {
> + self.file = Some(f);
> + format_err!("flock failed: {:?} - {e:?}", self.file)
> + })?;
>
> - self.pending = Vec::new();
> + let pending = &self.pending;
> + // Note: lock file, to get a consistent view with load_catalog
> + let result: Result<(), Error> = proxmox_lang::try_block!({
> + file.write_all(pending)?;
> + file.flush()?;
> + file.sync_data()?;
> + Ok(())
> + });
>
> - Ok(())
> + self.file = Some(file.unlock().expect("shouldn't fail for valid fd"));
here the situation is a bit more complicated, since we wrote to the file
above.. but looking at the call graphs/error behaviour, it seems to me
that we are often operating on temporary files (though we do not seem to
clean them up in the error path?), or failure to commit here would
bubble up and abort writing to tape anyway..
triggering a panic here doesn't help in any case - the possibly
inconsistent writes/reads have already happened, at best we could maybe
(try to) invalidate the catalog as part of error handling to force
re-reading it from tape? or we could bubble up the error to abort
whatever operation is attempting to commit, and log the error
prominently - taking down the whole PBS instance doesn't buy us anything
AFAICT?
> +
> + if result.is_ok() {
> + self.pending = Vec::new();
> + }
> + result
> }
>
> /// Conditionally commit if in pending data is large (> 1Mb)
> --
> 2.47.3
* partially-applied: [PATCH v1 proxmox-backup 00/11] fix various warnings
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
` (10 preceding siblings ...)
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 11/11] tape: media catalog: use Flock wrapper instead of deprecated function Robert Obkircher
@ 2026-03-05 11:20 ` Fabian Grünbichler
11 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2026-03-05 11:20 UTC (permalink / raw)
To: pbs-devel, Robert Obkircher
On Thu, 26 Feb 2026 15:40:14 +0100, Robert Obkircher wrote:
> The first 10 patches fix all clippy warnings except the one about
> future incompatibilities in the num-bigint-dig v0.8.4 dependency.
>
> The last patch is an attempt at fixing the compiler warnings about the
> deprecated flock function, but I'm not sure if this is an acceptable
> solution.
>
> [...]
Applied all except 6 and 11, thanks!
[01/11] datastore: remove allow(clippy::cast_ptr_alignment) attribute
commit: bfcfadd945b09790d61a7488f46e4f0cd6a0ef28
[02/11] api: use checked_div for compression ratio calculation
commit: db2e38a7829a6e05848d8e42ea74a65932dc647b
[03/11] datastore+server: sort by key instead of using comparison functions
commit: a4170cddbafd78598b3ca9527e7780c1f77826f6
[04/11] api: remove unnecessary ampersand to fix clippy warning
commit: 25aab8374e6ed11ba46ed23d51e8487a021e8403
[05/11] bin: debug: use pattern match instead of is_some+unwrap
commit: a46daf70d48a6d5084ab6ae8c8d6de985ceb9266
[06/11] bin: proxy: fix clippy warning about unnecesary use of find_map
opted to instead allow the lint and add a TODO comment, less churn and easier
to read that way IMHO
[07/11] datastore: silence warning about too many arguments
commit: ac2135f74a30c7299712d59471877015034be37e
[08/11] client: catalog shell: avoid unnecessary block_on in async code
commit: 6bdd577755c45681c5c561f7b95796ea1b3decba
[09/11] client: catalog shell: combine multiple block_on calls into one
commit: 3b017ac5ca4867b40db1c5950795936546a67ec1
[10/11] client: catalog shell: avoid unsafe transmute
commit: acf2bb0f8bf9bcef21344833466fa96445a538a7
[11/11] tape: media catalog: use Flock wrapper instead of deprecated function
(no commit info)
replied to that one individually!
Best regards,
--
Fabian Grünbichler <f.gruenbichler@proxmox.com>
* Re: [PATCH v1 proxmox-backup 11/11] tape: media catalog: use Flock wrapper instead of deprecated function
2026-03-05 11:08 ` Fabian Grünbichler
@ 2026-03-05 13:13 ` Dominik Csapak
2026-03-05 14:39 ` Robert Obkircher
0 siblings, 1 reply; 16+ messages in thread
From: Dominik Csapak @ 2026-03-05 13:13 UTC (permalink / raw)
To: Fabian Grünbichler, pbs-devel, Robert Obkircher
On 3/5/26 12:08 PM, Fabian Grünbichler wrote:
> On February 26, 2026 3:40 pm, Robert Obkircher wrote:
>> Switch to the new wrapper type to fix the deprecation warnings. Panic
>> if unlock fails, because there is no easy way to return the File that
>> was temporarily moved out of the field.
>>
>> An alternative to moving out of the optional field would be to use
>> `File::try_clone` but that isn't ideal either because duplicated file
>> descriptors share the same lock status. That also doesn't solve the
>> problem that propagating the error without leaking the locked file
>> descriptor requires unsafe code or a change to nix.
>>
>> [1] https://lore.proxmox.com/pbs-devel/1763634535.5sqd1r4buu.astroid@yuna.none/
>>
>> Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
>> ---
>> src/tape/media_catalog.rs | 49 +++++++++++++++++++++------------------
>> 1 file changed, 27 insertions(+), 22 deletions(-)
>>
>> diff --git a/src/tape/media_catalog.rs b/src/tape/media_catalog.rs
>> index 63329a65..372bf59d 100644
>> --- a/src/tape/media_catalog.rs
>> +++ b/src/tape/media_catalog.rs
>> @@ -6,6 +6,7 @@ use std::path::{Path, PathBuf};
>>
>> use anyhow::{bail, format_err, Error};
>> use endian_trait::Endian;
>> +use nix::fcntl::{Flock, FlockArg};
>>
>> use proxmox_sys::fs::read_subdir;
>>
>> @@ -194,7 +195,7 @@ impl MediaCatalog {
>> let me = proxmox_lang::try_block!({
>> Self::create_basedir(base_path)?;
>>
>> - let mut file = std::fs::OpenOptions::new()
>> + let file = std::fs::OpenOptions::new()
>> .read(true)
>> .write(write)
>> .create(create)
>> @@ -219,9 +220,10 @@ impl MediaCatalog {
>> };
>>
>> // Note: lock file, to get a consistent view with load_catalog
>> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::LockExclusive)?;
>> + let mut file = Flock::lock(file, FlockArg::LockExclusive)
>> + .map_err(|e| format_err!("flock failed: {e:?}"))?;
>> let result = me.load_catalog(&mut file, media_id.media_set_label.as_ref());
>> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::Unlock)?;
>> + let file = file.unlock().expect("shouldn't fail for valid fd");
>
> here we could just return an error instead of panicing? we only read
> from the file anyway, and bubbling up the error should be fine..
full agree, we bubbled up the error before, so we should keep that behavior
>
>>
>> let (found_magic_number, _) = result?;
>>
>> @@ -367,27 +369,30 @@ impl MediaCatalog {
>> return Ok(());
>> }
>>
>> - match self.file {
>> - Some(ref mut file) => {
>> - let pending = &self.pending;
>> - // Note: lock file, to get a consistent view with load_catalog
>> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::LockExclusive)?;
>> - let result: Result<(), Error> = proxmox_lang::try_block!({
>> - file.write_all(pending)?;
>> - file.flush()?;
>> - file.sync_data()?;
>> - Ok(())
>> - });
>> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::Unlock)?;
>> -
>> - result?;
>> - }
>> - None => bail!("media catalog not writable (opened read only)"),
>> - }
>> + let Some(file) = self.file.take() else {
>> + bail!("media catalog not writable (opened read only)");
>> + };
>> +
>> + let mut file = Flock::lock(file, FlockArg::LockExclusive).map_err(|(f, e)| {
>> + self.file = Some(f);
>> + format_err!("flock failed: {:?} - {e:?}", self.file)
>> + })?;
>>
>> - self.pending = Vec::new();
>> + let pending = &self.pending;
>> + // Note: lock file, to get a consistent view with load_catalog
>> + let result: Result<(), Error> = proxmox_lang::try_block!({
>> + file.write_all(pending)?;
>> + file.flush()?;
>> + file.sync_data()?;
>> + Ok(())
>> + });
>>
>> - Ok(())
>> + self.file = Some(file.unlock().expect("shouldn't fail for valid fd"));
>
> here the situation is a bit more complicated, since we wrote to the file
> above.. but looking at the call graphs/error behaviour, it seems to me
> that we are often operating on temporary files (though we do not seem to
> clean them up in the error path?), or failure to commit here would
> bubble up and abort writing to tape anyway..
>
> triggering a panic here doesn't help in any case - the possibly
> inconsistent writes/reads have already happened, at best we could maybe
> (try to) invalidate the catalog as part of error handling to force
> re-reading it from tape? or we could bubble up the error to abort
> whatever operation is attempting to commit, and log the error
> prominently - taking down the whole PBS instance doesn't buy us anything
> AFAICT?
also here agreeing regarding panicking, we did simply bubble up the
error before, no reason to change that IMO.
looking at the code i think we should maybe revamp the catalog
handling?
out of scope for this series, but I think we should try to write the
catalog atomically (meaning we should always write into a tmp file
and rename into place, like we do with most of the other files we
write)
then I think we'd need no lock for reading the catalog at all,
but we'd have to copy the old catalog to a tmp one first when
we want to append data. (this might be a bad idea since
they can get quite large)
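such an atomic write could look roughly like this (function name and the `.tmp` suffix are just for illustration):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write the new contents to a temporary file and rename it into
// place. rename(2) within one filesystem replaces the target
// atomically, so readers see either the old or the new file,
// never a partial write.
fn atomic_write(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut file = fs::File::create(&tmp)?;
    file.write_all(data)?;
    file.sync_data()?;
    fs::rename(&tmp, path)?;
    Ok(())
}
```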
but as i wrote, not really relevant for this series, just
wanted to point it out
>
>> +
>> + if result.is_ok() {
>> + self.pending = Vec::new();
>> + }
>> + result
>> }
>>
>> /// Conditionally commit if in pending data is large (> 1Mb)
>> --
>> 2.47.3
* Re: [PATCH v1 proxmox-backup 11/11] tape: media catalog: use Flock wrapper instead of deprecated function
2026-03-05 13:13 ` Dominik Csapak
@ 2026-03-05 14:39 ` Robert Obkircher
0 siblings, 0 replies; 16+ messages in thread
From: Robert Obkircher @ 2026-03-05 14:39 UTC (permalink / raw)
To: Dominik Csapak, Fabian Grünbichler, pbs-devel
On 3/5/26 14:13, Dominik Csapak wrote:
>
>
> On 3/5/26 12:08 PM, Fabian Grünbichler wrote:
>> On February 26, 2026 3:40 pm, Robert Obkircher wrote:
>>> Switch to the new wrapper type to fix the deprecation warnings. Panic
>>> if unlock fails, because there is no easy way to return the File that
>>> was temporarily moved out of the field.
>>>
>>> An alternative to moving out of the optional field would be to use
>>> `File::try_clone` but that isn't ideal either because duplicated file
>>> descriptors share the same lock status. That also doesn't solve the
>>> problem that propagating the error without leaking the locked file
>>> descriptor requires unsafe code or a change to nix.
>>>
>>> [1] https://lore.proxmox.com/pbs-devel/1763634535.5sqd1r4buu.astroid@yuna.none/
>>>
>>> Signed-off-by: Robert Obkircher <r.obkircher@proxmox.com>
>>> ---
>>> src/tape/media_catalog.rs | 49 +++++++++++++++++++++------------------
>>> 1 file changed, 27 insertions(+), 22 deletions(-)
>>>
>>> diff --git a/src/tape/media_catalog.rs b/src/tape/media_catalog.rs
>>> index 63329a65..372bf59d 100644
>>> --- a/src/tape/media_catalog.rs
>>> +++ b/src/tape/media_catalog.rs
>>> @@ -6,6 +6,7 @@ use std::path::{Path, PathBuf};
>>> use anyhow::{bail, format_err, Error};
>>> use endian_trait::Endian;
>>> +use nix::fcntl::{Flock, FlockArg};
>>> use proxmox_sys::fs::read_subdir;
>>> @@ -194,7 +195,7 @@ impl MediaCatalog {
>>> let me = proxmox_lang::try_block!({
>>> Self::create_basedir(base_path)?;
>>> - let mut file = std::fs::OpenOptions::new()
>>> + let file = std::fs::OpenOptions::new()
>>> .read(true)
>>> .write(write)
>>> .create(create)
>>> @@ -219,9 +220,10 @@ impl MediaCatalog {
>>> };
>>> // Note: lock file, to get a consistent view with load_catalog
>>> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::LockExclusive)?;
>>> + let mut file = Flock::lock(file, FlockArg::LockExclusive)
>>> + .map_err(|e| format_err!("flock failed: {e:?}"))?;
>>> let result = me.load_catalog(&mut file, media_id.media_set_label.as_ref());
>>> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::Unlock)?;
>>> + let file = file.unlock().expect("shouldn't fail for valid fd");
>>
>> here we could just return an error instead of panicing? we only read
>> from the file anyway, and bubbling up the error should be fine..
>
> full agree, we bubbled up the error before, so we should keep that
> behavior
Naively returning the error would likely panic when the Flock wrapper
is dropped, and we also can't just mem::forget it, because that would
leak the File (unless we retrieve it with unsafe ptr::read).
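The escape hatch in question can be shown with a toy guard type; this models only the general "retrieve the inner value without running Drop" pattern, not nix's actual Flock internals:

```rust
use std::mem::ManuallyDrop;
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

// Toy stand-in for nix's Flock<File>: a wrapper whose Drop impl has
// side effects (nix's Drop panics if unlocking fails).
struct Guard(String);

impl Drop for Guard {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

// Move the inner value out without running Drop, as a hypothetical
// `Flock::unwrap(self) -> T` could do internally.
fn into_inner(guard: Guard) -> String {
    let guard = ManuallyDrop::new(guard);
    // SAFETY: `guard` is never dropped, so the inner String is
    // neither freed twice nor read out twice.
    unsafe { std::ptr::read(&guard.0) }
}
```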
Should I try to send a PR to nix with a `Flock::unwrap(self) -> T`
method that could be used in map_err? (Changing unlock to return T
instead of Self might be an alternative, but that would be weird if
errno is EINTR...)
It would also be useful if they could add an impl Flockable for &mut
File. I think that should be safe.
>
>>
>>> let (found_magic_number, _) = result?;
>>> @@ -367,27 +369,30 @@ impl MediaCatalog {
>>> return Ok(());
>>> }
>>> - match self.file {
>>> - Some(ref mut file) => {
>>> - let pending = &self.pending;
>>> - // Note: lock file, to get a consistent view with load_catalog
>>> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::LockExclusive)?;
>>> - let result: Result<(), Error> = proxmox_lang::try_block!({
>>> - file.write_all(pending)?;
>>> - file.flush()?;
>>> - file.sync_data()?;
>>> - Ok(())
>>> - });
>>> - nix::fcntl::flock(file.as_raw_fd(), nix::fcntl::FlockArg::Unlock)?;
>>> -
>>> - result?;
>>> - }
>>> - None => bail!("media catalog not writable (opened read only)"),
>>> - }
>>> + let Some(file) = self.file.take() else {
>>> + bail!("media catalog not writable (opened read only)");
>>> + };
>>> +
>>> + let mut file = Flock::lock(file, FlockArg::LockExclusive).map_err(|(f, e)| {
>>> + self.file = Some(f);
>>> + format_err!("flock failed: {:?} - {e:?}", self.file)
>>> + })?;
>>> - self.pending = Vec::new();
>>> + let pending = &self.pending;
>>> + // Note: lock file, to get a consistent view with load_catalog
>>> + let result: Result<(), Error> = proxmox_lang::try_block!({
>>> + file.write_all(pending)?;
>>> + file.flush()?;
>>> + file.sync_data()?;
>>> + Ok(())
>>> + });
>>> - Ok(())
>>> + self.file = Some(file.unlock().expect("shouldn't fail for valid fd"));
>>
>> here the situation is a bit more complicated, since we wrote to the
>> file
>> above.. but looking at the call graphs/error behaviour, it seems to me
>> that we are often operating on temporary files (though we do not
>> seem to
>> clean them up in the error path?), or failure to commit here would
>> bubble up and abort writing to tape anyway..
>>
>> triggering a panic here doesn't help in any case - the possibly
>> inconsistent writes/reads have already happened, at best we could
>> maybe
>> (try to) invalidate the catalog as part of error handling to force
>> re-reading it from tape? or we could bubble up the error to abort
>> whatever operation is attempting to commit, and log the error
>> prominently - taking down the whole PBS instance doesn't buy us
>> anything
>> AFAICT?
Theoretically it buys us slightly better protection against deadlocks,
but it doesn't look like the callers are doing anything potentially
dangerous like holding on to the file descriptor across await points.
(Leaving self.file as None would also prevent that, but it would be a
change of behavior as well.)
>
> also here agreeing regarding panicking, we did simply bubble up the
> error before, no reason to change that IMO.
>
> looking at the code i think we should maybe revamp the catalog
> handling?
>
> out of scope for this series, but I think we should try to write the
> catalog atomically (meaning we should always write into a tmp
> file and rename into place, like we do with most of the other
> files we write)
>
> then I think we'd need no lock for reading the catalog at all,
> but we'd have to copy the old catalog to a tmp one first when
> we want to append data. (this might be a bad idea since
> they can get quite large)
Not sure if it is relevant, but copying wouldn't be enough if there
were multiple writers.
>
> but as i wrote, not really relevant for this series, just
> wanted it to point out
>
>>
>>> +
>>> + if result.is_ok() {
>>> + self.pending = Vec::new();
>>> + }
>>> + result
>>> }
>>> /// Conditionally commit if in pending data is large (> 1Mb)
>>> --
>>> 2.47.3
Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
2026-02-26 14:40 [PATCH v1 proxmox-backup 00/11] fix various warnings Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 01/11] datastore: remove allow(clippy::cast_ptr_alignment) attribute Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 02/11] api: use checked_div for compression ratio calculation Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 03/11] datastore+server: sort by key instead of using comparison functions Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 04/11] api: remove unnecessary ampersand to fix clippy warning Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 05/11] bin: debug: use pattern match instead of is_some+unwrap Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 06/11] bin: proxy: fix clippy warning about unnecesary use of find_map Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 07/11] datastore: silence warning about too many arguments Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 08/11] client: catalog shell: avoid unnecessary block_on in async code Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 09/11] client: catalog shell: combine multiple block_on calls into one Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 10/11] client: catalog shell: avoid unsafe transmute Robert Obkircher
2026-02-26 14:40 ` [PATCH v1 proxmox-backup 11/11] tape: media catalog: use Flock wrapper instead of deprecated function Robert Obkircher
2026-03-05 11:08 ` Fabian Grünbichler
2026-03-05 13:13 ` Dominik Csapak
2026-03-05 14:39 ` Robert Obkircher
2026-03-05 11:20 ` partially-applied: [PATCH v1 proxmox-backup 00/11] fix various warnings Fabian Grünbichler