* [pbs-devel] [PATCH proxmox v2] sys: open process_locker lockfile lazy
@ 2023-11-15 14:31 Gabriel Goller
2023-11-28 10:04 ` Thomas Lamprecht
0 siblings, 1 reply; 4+ messages in thread
From: Gabriel Goller @ 2023-11-15 14:31 UTC (permalink / raw)
To: pbs-devel
When setting a datastore in maintenance mode (offline or read-only), we
should be able to unmount it. This isn't possible because the
`ChunkReader` has a `ProcessLocker` instance that holds an open
file descriptor to (e.g.) `/mnt/datastore/test1/.lock`.

The `ChunkReader` is created at startup, so unless the datastore is
already in a maintenance mode at startup, the lockfile stays open the
whole time. Now we create/open the lockfile lazily, when a shared or
exclusive lock is requested. This way, we can set a datastore to
'offline' and unmount it without restarting the proxmox-backup service.
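For illustration only (names are hypothetical, this is not the patch code),
the lazy-open pattern used here boils down to something like:

    use std::fs::File;
    use std::path::PathBuf;

    struct LazyLockfile {
        path: PathBuf,
        file: Option<File>,
    }

    impl LazyLockfile {
        /// Open (and create, if necessary) the lockfile on first use only.
        fn get_or_open(&mut self) -> std::io::Result<&File> {
            if self.file.is_none() {
                let file = std::fs::OpenOptions::new()
                    .create(true)
                    .read(true)
                    .write(true)
                    .open(&self.path)?;
                self.file = Some(file);
            }
            Ok(self.file.as_ref().unwrap())
        }
    }
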
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
---
changes v2:
* removed debug prints
proxmox-sys/src/process_locker.rs | 77 ++++++++++++++++++++++---------
1 file changed, 56 insertions(+), 21 deletions(-)
diff --git a/proxmox-sys/src/process_locker.rs b/proxmox-sys/src/process_locker.rs
index 4874da8..88c74dd 100644
--- a/proxmox-sys/src/process_locker.rs
+++ b/proxmox-sys/src/process_locker.rs
@@ -8,7 +8,9 @@
//! `oldest_shared_lock()`.
use std::collections::HashMap;
+use std::fs::File;
use std::os::unix::io::AsRawFd;
+use std::path::PathBuf;
use std::sync::{Arc, Mutex};
use anyhow::{bail, Error};
@@ -19,7 +21,8 @@ use anyhow::{bail, Error};
/// Inter-process reader-writer lock
pub struct ProcessLocker {
- file: std::fs::File,
+ path: PathBuf,
+ file_descriptor: Option<File>,
exclusive: bool,
writers: usize,
next_guard_id: u64,
@@ -53,11 +56,16 @@ impl Drop for ProcessLockSharedGuard {
l_pid: 0,
};
- if let Err(err) =
- nix::fcntl::fcntl(data.file.as_raw_fd(), nix::fcntl::FcntlArg::F_SETLKW(&op))
- {
- panic!("unable to drop writer lock - {}", err);
+ if let Some(file) = &data.file_descriptor {
+ if let Err(err) =
+ nix::fcntl::fcntl(file.as_raw_fd(), nix::fcntl::FcntlArg::F_SETLKW(&op))
+ {
+ panic!("unable to drop writer lock - {}", err);
+ }
+ } else {
+ panic!("file descriptor has been closed while locking");
}
+ data.file_descriptor = None;
}
if data.writers > 0 {
data.writers -= 1;
@@ -93,29 +101,30 @@ impl Drop for ProcessLockExclusiveGuard {
l_pid: 0,
};
- if let Err(err) =
- nix::fcntl::fcntl(data.file.as_raw_fd(), nix::fcntl::FcntlArg::F_SETLKW(&op))
- {
- panic!("unable to drop exclusive lock - {}", err);
+ if let Some(file) = &data.file_descriptor {
+ if let Err(err) =
+ nix::fcntl::fcntl(file.as_raw_fd(), nix::fcntl::FcntlArg::F_SETLKW(&op))
+ {
+ panic!("unable to drop exclusive lock - {}", err);
+ }
+ } else {
+ panic!("file descriptor has been closed while locking");
}
+ data.file_descriptor = None;
data.exclusive = false;
}
}
impl ProcessLocker {
- /// Create a new instance for the specified file.
+ /// Create a new instance of the ProcessLocker.
///
- /// This simply creates the file if it does not exist.
- pub fn new<P: AsRef<std::path::Path>>(lockfile: P) -> Result<Arc<Mutex<Self>>, Error> {
- let file = std::fs::OpenOptions::new()
- .create(true)
- .read(true)
- .write(true)
- .open(lockfile)?;
-
+ /// Does not check if the lockfile exists. The lockfile gets opened/created
+ /// when using [Self::try_shared_lock()] or [Self::try_exclusive_lock()].
+ pub fn new(lockfile: PathBuf) -> Result<Arc<Mutex<Self>>, Error> {
Ok(Arc::new(Mutex::new(Self {
- file,
+ path: lockfile,
+ file_descriptor: None,
exclusive: false,
writers: 0,
next_guard_id: 0,
@@ -144,7 +153,20 @@ impl ProcessLocker {
let mut data = locker.lock().unwrap();
if data.writers == 0 && !data.exclusive {
- if let Err(err) = Self::try_lock(&data.file, libc::F_RDLCK) {
+ if data.file_descriptor.is_none() {
+ let file = std::fs::OpenOptions::new()
+ .create(true)
+ .read(true)
+ .write(true)
+ .open(&data.path)?;
+ data.file_descriptor = Some(file);
+ }
+ if let Err(err) = Self::try_lock(
+ data.file_descriptor
+ .as_ref()
+ .expect("unable to open lockfile"),
+ libc::F_RDLCK,
+ ) {
bail!("unable to get shared lock - {}", err);
}
}
@@ -187,7 +209,20 @@ impl ProcessLocker {
bail!("already locked exclusively");
}
- if let Err(err) = Self::try_lock(&data.file, libc::F_WRLCK) {
+ if data.file_descriptor.is_none() {
+ let file = std::fs::OpenOptions::new()
+ .create(true)
+ .read(true)
+ .write(true)
+ .open(&data.path)?;
+ data.file_descriptor = Some(file);
+ }
+ if let Err(err) = Self::try_lock(
+ data.file_descriptor
+ .as_ref()
+ .expect("unable to open lockfile"),
+ libc::F_WRLCK,
+ ) {
bail!("unable to get exclusive lock - {}", err);
}
--
2.39.2
* Re: [pbs-devel] [PATCH proxmox v2] sys: open process_locker lockfile lazy
2023-11-15 14:31 [pbs-devel] [PATCH proxmox v2] sys: open process_locker lockfile lazy Gabriel Goller
@ 2023-11-28 10:04 ` Thomas Lamprecht
2023-12-01 10:15 ` Gabriel Goller
0 siblings, 1 reply; 4+ messages in thread
From: Thomas Lamprecht @ 2023-11-28 10:04 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Gabriel Goller
On 15/11/2023 at 15:31, Gabriel Goller wrote:
> When setting a datastore in maintenance mode (offline or read-only), we
> should be able to unmount it. This isn't possible because the
> `ChunkReader` has a `ProcessLocker` instance that holds an open
> file descriptor to (e.g.) `/mnt/datastore/test1/.lock`.
>
> The `ChunkReader` is created at startup, so unless the datastore is
> already in a maintenance mode at startup, the lockfile stays open the
> whole time. Now we create/open the lockfile lazily, when a shared or
> exclusive lock is requested. This way, we can set a datastore to
> 'offline' and unmount it without restarting the proxmox-backup service.
>
I've never had good experiences with lazy open (or lazy unmount), so I'd
like to avoid such things if possible.
Luckily, we already have a proposed solution, one that has just gathered
a bit of dust and where only bikeshedding questions were still being
discussed, namely the "refactor datastore locking to use tmpfs" series [0]
from Stefan Sterz.
That approach has a few advantages:
- no lazy opening code that needs lots of brain power to ensure it really
is OK
- all special filesystems (like NFS) benefit from this change too; that was
even the original reason for the series.
- should be a bit faster to have locks in memory only
- the issue with unmount goes away too
The only potential disadvantage:
- locks are lost over (sudden) reboots, but that should not really matter,
as we mostly lock for concurrency and still write data safely, i.e., to a
tmpfile followed by a rename (see the sketch below), so the on-disk state
should always be consistent anyway.
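
A rough sketch of the tmpfile + rename idea, just to show why the on-disk
state stays consistent (simplified; real code would also need to handle
permissions and sync the parent directory):

    use std::fs;
    use std::io::Write;
    use std::path::Path;

    // Write to a temporary file first, then atomically rename it over the
    // target: readers either see the complete old file or the complete new
    // one, never a partially written file.
    fn write_atomically(path: &Path, data: &[u8]) -> std::io::Result<()> {
        let tmp_path = path.with_extension("tmp");
        let mut tmp = fs::File::create(&tmp_path)?;
        tmp.write_all(data)?;
        tmp.sync_all()?; // flush the file contents to disk before the rename
        fs::rename(&tmp_path, path)?;
        Ok(())
    }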
Maybe you can check with Stefan what the status is, take over his series,
rebase it, and see if we can get the final nits sorted out.
[0]: https://lists.proxmox.com/pipermail/pbs-devel/2022-August/005414.html
* Re: [pbs-devel] [PATCH proxmox v2] sys: open process_locker lockfile lazy
2023-11-28 10:04 ` Thomas Lamprecht
@ 2023-12-01 10:15 ` Gabriel Goller
2023-12-04 13:23 ` Gabriel Goller
0 siblings, 1 reply; 4+ messages in thread
From: Gabriel Goller @ 2023-12-01 10:15 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox Backup Server development discussion
On 11/28/23 11:04, Thomas Lamprecht wrote:
> [..]
> I've never had good experiences with lazy open (or lazy unmount), so I'd
> like to avoid such things if possible.
> Luckily, we already have a proposed solution, one that has just gathered
> a bit of dust and where only bikeshedding questions were still being
> discussed, namely the "refactor datastore locking to use tmpfs" series [0]
> from Stefan Sterz.
>
> That approach has a few advantages:
> - no lazy opening code that needs lots of brain power to ensure it really
> is OK
>
> - all special filesystems (like NFS) benefit from this change too; that
> was even the original reason for the series.
>
> - should be a bit faster to have locks in memory only
>
> - the issue with unmount goes away too
>
> The only potential disadvantage:
>
> - locks are lost over (sudden) reboots, but that should not really matter,
> as we mostly lock for concurrency and still write data safely, i.e., to a
> tmpfile followed by a rename (see the sketch below), so the on-disk state
> should always be consistent anyway.
>
> Maybe you can check with Stefan what the status is, take over his series,
> rebase it, and see if we can get the final nits sorted out.
>
> [0]: https://lists.proxmox.com/pipermail/pbs-devel/2022-August/005414.html
That's a good idea!
We could move the process-locker lockfiles (currently at
`/mnt/datastore/{name}/.lock`) to the tmpfs directory where all the other
locks are (probably `/run/proxmox-backup/{name}/.lock`, see the rough
sketch below).
That way we don't need the lazy opening at all: the file can stay open the
whole time and we can still unmount the datastore, as the lockfile no
longer lives on that mount.
This change could also be applied without @sterzy's series.
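
Roughly like this (the helper and path layout are only an assumption for
illustration, not a final naming scheme):

    use std::path::PathBuf;

    // Hypothetical helper: keep the process-locker lockfile on tmpfs under
    // /run/proxmox-backup/ instead of on the datastore mount, so the open
    // file descriptor no longer pins the mount.
    fn process_locker_path(datastore_name: &str) -> PathBuf {
        PathBuf::from("/run/proxmox-backup")
            .join(datastore_name)
            .join(".lock")
    }

    // usage (sketch): let locker = ProcessLocker::new(process_locker_path("test1"))?;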
* Re: [pbs-devel] [PATCH proxmox v2] sys: open process_locker lockfile lazy
2023-12-01 10:15 ` Gabriel Goller
@ 2023-12-04 13:23 ` Gabriel Goller
0 siblings, 0 replies; 4+ messages in thread
From: Gabriel Goller @ 2023-12-04 13:23 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox Backup Server development discussion
Obsolete, submitted a new patch!
[..]