public inbox for pbs-devel@lists.proxmox.com
From: Dominik Csapak <d.csapak@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup 2/2] backup/datastore: move manifest locking to /run
Date: Wed,  2 Dec 2020 14:19:57 +0100	[thread overview]
Message-ID: <20201202131957.17051-2-d.csapak@proxmox.com> (raw)
In-Reply-To: <20201202131957.17051-1-d.csapak@proxmox.com>

this fixes the issue that on some filesystems (e.g. nfs/cifs), you
cannot recursively remove a directory while holding a lock on a file
inside it

it is not fully backwards compatible (during an upgrade, two daemons
could hold different locks for the same manifest), but since the
locking was broken before anyway (see the previous patch), this should
not really matter (it also seems very unlikely that anyone triggers this)
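
for readers following along, the new lock-path layout can be sketched as a
standalone function; the `.index.json.lck` suffix stands in for
MANIFEST_LOCK_NAME and the &str parameters stand in for the BackupDir
accessors used in the actual patch below, so treat this as an illustrative
assumption rather than the exact implementation:

```rust
use std::path::PathBuf;

// standalone sketch of the patch's manifest_lock_path(): lock files move
// from the datastore directory tree to tmpfs under /run, so removing a
// snapshot directory on nfs/cifs no longer fails because of a held lock.
// ".index.json.lck" stands in for MANIFEST_LOCK_NAME (an assumption here);
// the &str parameters stand in for the BackupDir/BackupGroup accessors.
fn manifest_lock_path(
    store: &str,
    backup_type: &str,
    backup_id: &str,
    backup_time: &str,
) -> PathBuf {
    let mut path = PathBuf::from("/run/proxmox-backup/.locks");
    path.push(store);
    path.push(backup_type);
    path.push(backup_id);
    // the real code calls std::fs::create_dir_all(&path) at this point
    path.push(format!("{}{}", backup_time, ".index.json.lck"));
    path
}

fn main() {
    let p = manifest_lock_path("store1", "vm", "100", "2020-12-02T13:19:57Z");
    println!("{}", p.display());
}
```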

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/backup/datastore.rs | 26 +++++++++++++++++++++++---
 1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/src/backup/datastore.rs b/src/backup/datastore.rs
index 0f74ac3c..9cc88906 100644
--- a/src/backup/datastore.rs
+++ b/src/backup/datastore.rs
@@ -257,6 +257,12 @@ impl DataStore {
                 )
             })?;
 
+        // the manifest does not exist anymore, we do not need to keep the lock
+        if let Ok(path) = self.manifest_lock_path(backup_dir) {
+            // ignore errors
+            let _ = std::fs::remove_file(path);
+        }
+
         Ok(())
     }
 
@@ -698,13 +704,27 @@ impl DataStore {
         ))
     }
 
+    fn manifest_lock_path(
+        &self,
+        backup_dir: &BackupDir,
+    ) -> Result<PathBuf, Error> {
+
+        let mut path = PathBuf::from("/run/proxmox-backup/.locks/");
+        path.push(self.name());
+        path.push(backup_dir.group().backup_type());
+        path.push(backup_dir.group().backup_id());
+        std::fs::create_dir_all(&path)?;
+
+        path.push(format!("{}{}", backup_dir.backup_time_string(), &MANIFEST_LOCK_NAME));
+
+        Ok(path)
+    }
+
     fn lock_manifest(
         &self,
         backup_dir: &BackupDir,
     ) -> Result<File, Error> {
-        let mut path = self.base_path();
-        path.push(backup_dir.relative_path());
-        path.push(&MANIFEST_LOCK_NAME);
+        let path = self.manifest_lock_path(backup_dir)?;
 
         // update_manifest should never take a long time, so if someone else has
         // the lock we can simply block a bit and should get it soon
-- 
2.20.1


Thread overview: 6+ messages
2020-12-02 13:19 [pbs-devel] [PATCH proxmox-backup 1/2] backup/datastore: really lock manifest on delete Dominik Csapak
2020-12-02 13:19 ` Dominik Csapak [this message]
2020-12-02 13:50   ` [pbs-devel] [PATCH proxmox-backup 2/2] backup/datastore: move manifest locking to /run Wolfgang Bumiller
2020-12-02 13:58     ` Dominik Csapak
2020-12-02 14:07       ` Wolfgang Bumiller
2020-12-02 13:40 ` [pbs-devel] applied: [PATCH proxmox-backup 1/2] backup/datastore: really lock manifest on delete Wolfgang Bumiller
