From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox Backup Server development discussion
	<pbs-devel@lists.proxmox.com>,
	Gabriel Goller <g.goller@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox v2] sys: open process_locker lockfile lazy
Date: Tue, 28 Nov 2023 11:04:39 +0100	[thread overview]
Message-ID: <8ba8aef7-b595-4c52-9bb9-4243a739b595@proxmox.com> (raw)
In-Reply-To: <20231115143158.217714-1-g.goller@proxmox.com>

On 15/11/2023 at 15:31, Gabriel Goller wrote:
> When setting a datastore to maintenance mode (offline or read-only), we
> should be able to unmount it. This isn't possible because the
> `ChunkReader` has a `ProcessLocker` instance that holds an open
> file descriptor to, e.g., `/mnt/datastore/test1/.lock`.
> 
> The `ChunkReader` is created at startup, so if the datastore is not set
> to a maintenance mode at startup, we always have the lockfile open.
> Now we create/open the lockfile lazily, when a shared or an exclusive
> lock is requested. This way, we can set a datastore to 'offline' and
> unmount it without restarting the proxmox-backup service.
> 

I have never had good experiences with lazy open (or lazy unmount), so
I'd like to avoid such things if possible.
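
(For reference, a rough sketch of the lazy-open pattern the patch
describes -- the names are made up, it assumes the libc crate, and the
locking is simplified to flock(2); the actual `ProcessLocker` code may
differ:)

use std::fs::{File, OpenOptions};
use std::io;
use std::os::unix::io::AsRawFd;
use std::path::PathBuf;

struct LazyLocker {
    path: PathBuf,
    file: Option<File>, // no fd held until the first lock request
}

impl LazyLocker {
    fn new(path: PathBuf) -> Self {
        Self { path, file: None }
    }

    /// Open the lock file on first use; later calls reuse the fd.
    fn get_file(&mut self) -> io::Result<&File> {
        if self.file.is_none() {
            let file = OpenOptions::new()
                .read(true)
                .write(true)
                .create(true)
                .open(&self.path)?;
            self.file = Some(file);
        }
        Ok(self.file.as_ref().unwrap())
    }

    fn lock_shared(&mut self) -> io::Result<()> {
        // The fd is only opened here, so an idle datastore has no open
        // file descriptor pinning its mount.
        let fd = self.get_file()?.as_raw_fd();
        if unsafe { libc::flock(fd, libc::LOCK_SH) } != 0 {
            return Err(io::Error::last_os_error());
        }
        Ok(())
    }
}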

Luckily, we already have a proposed solution, one that has just gathered
a bit of dust and where only bikeshedding questions were still being
discussed: the "refactor datastore locking to use tmpfs" series [0] from
Stefan Sterz.
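
The core idea, roughly sketched (the directory and names are made up
for illustration, not taken from the series):

use std::fs::{create_dir_all, File, OpenOptions};
use std::io;
use std::path::PathBuf;

/// Open (creating if needed) a per-datastore lock file on a tmpfs,
/// instead of on the datastore mount itself, so held locks never pin
/// an fd on the backing filesystem.
fn open_tmpfs_lock(datastore: &str) -> io::Result<File> {
    // /run is a tmpfs on systemd-based systems; this exact path is a
    // hypothetical example.
    let dir = PathBuf::from("/run/proxmox-backup/locks");
    create_dir_all(&dir)?;
    OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .open(dir.join(datastore))
}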

That approach has a few advantages:
- no lazy opening code that needs lots of brain power to ensure it really
  is OK

- all special filesystems (like NFS) benefit from this change too; that
  was even the original motivation for the series.

- should be a bit faster to have locks in memory only

- the issue with unmount goes away too

The only potential disadvantage:

- locks are lost over (sudden) reboots, but that should not really
  matter, as we mostly lock for concurrency while still writing data
  safely, i.e., to a tmpfile followed by a rename (see the sketch
  below), so the on-disk state should always be consistent anyway.
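
That pattern, sketched with a made-up helper:

use std::fs;
use std::io::Write;
use std::path::Path;

/// Hypothetical helper showing the tmpfile + rename pattern: readers
/// see either the old or the new content, never a partial write.
fn write_atomically(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let dir = path.parent().unwrap_or(Path::new("."));
    // A real implementation would pick a unique temp name (and fsync
    // the directory as well); kept minimal here.
    let tmp = dir.join(".write_atomically.tmp");
    let mut file = fs::File::create(&tmp)?;
    file.write_all(data)?;
    file.sync_all()?; // flush file data before making it visible
    fs::rename(&tmp, path)?; // atomic within the same filesystem
    Ok(())
}

So even if all locks on tmpfs vanish with a reboot, any file that
already reached its final name is complete.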

Maybe you can check with Stefan on the series' status, perhaps take it
over, rebase it, and see if we can get the final nits sorted out.

[0]: https://lists.proxmox.com/pipermail/pbs-devel/2022-August/005414.html




Thread overview: 4+ messages
2023-11-15 14:31 Gabriel Goller
2023-11-28 10:04 ` Thomas Lamprecht [this message]
2023-12-01 10:15   ` Gabriel Goller
2023-12-04 13:23     ` Gabriel Goller
