public inbox for pbs-devel@lists.proxmox.com
From: Stefan Reiter <s.reiter@proxmox.com>
To: pbs-devel@lists.proxmox.com
Subject: [pbs-devel] [PATCH proxmox-backup 5/5] file-restore/disk: support ZFS subvols with mountpoint=legacy
Date: Wed, 16 Jun 2021 12:55:52 +0200	[thread overview]
Message-ID: <20210616105552.2594536-6-s.reiter@proxmox.com> (raw)
In-Reply-To: <20210616105552.2594536-1-s.reiter@proxmox.com>

These require mounting via the regular 'mount' syscall, so auto-generate
an appropriate mount path and mount them explicitly.

Note that subvols with mountpoint=none cannot be mounted this way; they
would require setting the 'mountpoint' property first, which is not
possible since the zpools have to be imported with readonly=on.
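The detection step added in the diff below can be sketched standalone as follows. This is a minimal illustration, not the daemon code itself: `legacy_mount_targets` and its arguments are hypothetical names, and `list_output` stands in for the stdout of `zfs list -Hpro name,mountpoint <pool>` (tab-separated `name<TAB>mountpoint` lines, as parsed in the patch):

```rust
/// Given `zfs list -Hpro name,mountpoint` output, return (dataset, target)
/// pairs for every dataset with mountpoint=legacy. Illustrative sketch only.
fn legacy_mount_targets(list_output: &str, mntpath: &str) -> Vec<(String, String)> {
    let mut targets = Vec::new();
    for line in list_output.lines() {
        let mut parts = line.splitn(2, '\t');
        if let (Some(name), Some(mp)) = (parts.next(), parts.next()) {
            if mp == "legacy" {
                // slashes in the dataset name would create nested directories,
                // so flatten them into a single path component
                let target = format!("{}/legacy-{}", mntpath, name.replace('/', "_"));
                targets.push((name.to_string(), target));
            }
        }
    }
    targets
}

fn main() {
    let out = "rpool\t/rpool\nrpool/data\tlegacy\nrpool/data/subvol\tlegacy\n";
    for (name, target) in legacy_mount_targets(out, "/mnt/zpool") {
        println!("{} -> {}", name, target);
    }
}
```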

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
 src/bin/proxmox_restore_daemon/disk.rs | 43 ++++++++++++++++++++++----
 1 file changed, 37 insertions(+), 6 deletions(-)

diff --git a/src/bin/proxmox_restore_daemon/disk.rs b/src/bin/proxmox_restore_daemon/disk.rs
index 5b66dd2f..9d0cbe32 100644
--- a/src/bin/proxmox_restore_daemon/disk.rs
+++ b/src/bin/proxmox_restore_daemon/disk.rs
@@ -228,6 +228,34 @@ impl Filesystems {
                 cmd.args(["mount", "-a"].iter());
                 run_command(cmd, None)?;
 
+                // detect any datasets with 'legacy' mountpoints
+                let mut cmd = Command::new("/sbin/zfs");
+                cmd.args(["list", "-Hpro", "name,mountpoint", &data.name].iter());
+                let mps = run_command(cmd, None)?;
+                for subvol in mps.lines() {
+                    let subvol = subvol.splitn(2, '\t').collect::<Vec<&str>>();
+                    if subvol.len() != 2 {
+                        continue;
+                    }
+                    let name = subvol[0];
+                    let mp = subvol[1];
+
+                    if mp == "legacy" {
+                        let mut newmp = PathBuf::from(format!(
+                            "{}/legacy-{}",
+                            &mntpath,
+                            name.replace('/', "_")
+                        ));
+                        let mut i = 1;
+                        while newmp.exists() {
+                            newmp.set_extension(i.to_string());
+                            i += 1;
+                        }
+                        create_dir_all(&newmp)?;
+                        self.do_mount(Some(name), newmp.to_string_lossy().as_ref(), "zfs")?;
+                    }
+                }
+
                 // Now that we have imported the pool, we can also query the size
                 let mut cmd = Command::new("/sbin/zpool");
                 cmd.args(["list", "-o", "size", "-Hp", &data.name].iter());
@@ -244,19 +272,14 @@ impl Filesystems {
     }
 
     fn try_mount(&self, source: &str, target: &str) -> Result<(), Error> {
-        use nix::mount::*;
-
         create_dir_all(target)?;
 
         // try all supported fs until one works - this is the way Busybox's 'mount' does it too:
         // https://git.busybox.net/busybox/tree/util-linux/mount.c?id=808d93c0eca49e0b22056e23d965f0d967433fbb#n2152
         // note that ZFS is intentionally left out (see scan())
-        let flags =
-            MsFlags::MS_RDONLY | MsFlags::MS_NOEXEC | MsFlags::MS_NOSUID | MsFlags::MS_NODEV;
         for fs in &self.supported_fs {
             let fs: &str = fs.as_ref();
-            let opts = FS_OPT_MAP.get(fs).copied();
-            match mount(Some(source), target, Some(fs), flags, opts) {
+            match self.do_mount(Some(source), target, fs) {
                 Ok(()) => {
                     info!("mounting '{}' succeeded, fstype: '{}'", source, fs);
                     return Ok(());
@@ -270,6 +293,14 @@ impl Filesystems {
 
         bail!("all mounts failed or no supported file system")
     }
+
+    fn do_mount(&self, source: Option<&str>, target: &str, fs: &str) -> Result<(), nix::Error> {
+        use nix::mount::*;
+        let flags =
+            MsFlags::MS_RDONLY | MsFlags::MS_NOEXEC | MsFlags::MS_NOSUID | MsFlags::MS_NODEV;
+        let opts = FS_OPT_MAP.get(fs).copied();
+        mount(source, target, Some(fs), flags, opts)
+    }
 }
 
 pub struct DiskState {
-- 
2.30.2
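For reference, the collision handling in the hunk above relies on `PathBuf::set_extension` *replacing* any existing extension rather than appending one, so repeated iterations try `.1`, `.2`, ... instead of stacking suffixes. A small sketch (the path is an illustrative example, not taken from the daemon):

```rust
use std::path::PathBuf;

fn main() {
    let mut p = PathBuf::from("/mnt/legacy-rpool_data");
    // set_extension replaces the current extension each call, so the
    // collision loop in the patch cycles through ".1", ".2", ...
    p.set_extension("1");
    assert_eq!(p, PathBuf::from("/mnt/legacy-rpool_data.1"));
    p.set_extension("2");
    assert_eq!(p, PathBuf::from("/mnt/legacy-rpool_data.2"));
    println!("{}", p.display());
}
```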

Thread overview: 7+ messages
2021-06-16 10:55 [pbs-devel] [PATCH 0/5] ZFS support for single file restore Stefan Reiter
2021-06-16 10:55 ` [pbs-devel] [PATCH proxmox-backup-restore-image 1/5] debian: update control for bullseye Stefan Reiter
2021-06-16 10:55 ` [pbs-devel] [PATCH proxmox-backup-restore-image 2/5] build custom ZFS tools without udev requirement Stefan Reiter
2021-06-16 10:55 ` [pbs-devel] [PATCH proxmox-backup 3/5] file-restore: increase RAM for ZFS and disable ARC Stefan Reiter
2021-06-16 10:55 ` [pbs-devel] [PATCH proxmox-backup 4/5] file-restore/disk: support ZFS pools Stefan Reiter
2021-06-16 10:55 ` Stefan Reiter [this message]
2021-06-28 12:26 ` [pbs-devel] applied-series: [PATCH 0/5] ZFS support for single file restore Thomas Lamprecht
