* [pbs-devel] [PATCH proxmox-backup-restore-image 1/5] add LVM (thin) tooling
From: Stefan Reiter @ 2021-06-30 15:57 UTC
To: pbs-devel
FILE_LOCKING is required for LVM. The four lib* dependencies are not
resolved automatically for some reason, but are required nonetheless.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
src/build_initramfs.sh | 6 ++++++
src/config-base | 1 +
2 files changed, 7 insertions(+)
diff --git a/src/build_initramfs.sh b/src/build_initramfs.sh
index 29eeedb..4d81b39 100755
--- a/src/build_initramfs.sh
+++ b/src/build_initramfs.sh
@@ -51,6 +51,12 @@ add_pkgs "
libblkid1:amd64 \
libuuid1:amd64 \
zlib1g:amd64 \
+ libzstd1:amd64 \
+ liblz4-1:amd64 \
+ liblzma5:amd64 \
+ libgcrypt20:amd64 \
+ lvm2:amd64 \
+ thin-provisioning-tools:amd64 \
"
# install custom ZFS tools (built without libudev)
diff --git a/src/config-base b/src/config-base
index 92f1fc9..e9af0ba 100644
--- a/src/config-base
+++ b/src/config-base
@@ -104,6 +104,7 @@ CONFIG_NLS_UTF8=y
CONFIG_NLS_ISO8859_1=y
CONFIG_MSDOS_PARTITION=y
CONFIG_EFI_PARTITION=y
+CONFIG_FILE_LOCKING=y
# filesystem support
CONFIG_MISC_FILESYSTEMS=y
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup 2/5] file-restore-daemon/disk: dedup BucketComponents and make size optional
From: Stefan Reiter @ 2021-06-30 15:57 UTC
To: pbs-devel
To support nested BucketComponents, it is necessary to dedup them, as
otherwise two components like:
/foo/bar
/foo/baz
would result in /foo being shown twice at the first hierarchy level.
Also make the size property index-based and optional, since for example
/foo in the example above might not have a size, while bar and baz might
have differing sizes.
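For illustration, a minimal standalone sketch (not part of this patch) of how
sorting plus deduplication collapses the duplicated first-level component:

    fn main() {
        // the nested components /foo/bar and /foo/baz both expose "foo" at the
        // first hierarchy level; the parent itself carries no meaningful size
        let mut comps: Vec<(String, Option<u64>)> = vec![
            ("foo".to_string(), None), // from /foo/bar
            ("foo".to_string(), None), // from /foo/baz
        ];
        comps.sort_by(|a, b| a.0.cmp(&b.0));
        comps.dedup();
        assert_eq!(comps.len(), 1); // "foo" is now listed only once
    }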
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
src/bin/proxmox_restore_daemon/api.rs | 2 +-
src/bin/proxmox_restore_daemon/disk.rs | 24 +++++++++++++-----------
2 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
index 42328fb7..d7355370 100644
--- a/src/bin/proxmox_restore_daemon/api.rs
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -208,7 +208,7 @@ fn list(
&c_path[..],
// this marks the beginning of a filesystem, i.e. '/', so this is a Directory
Some(&DirEntryAttribute::Directory { start: 0 }),
- Some(c.1),
+ c.1,
));
}
}
diff --git a/src/bin/proxmox_restore_daemon/disk.rs b/src/bin/proxmox_restore_daemon/disk.rs
index 9d0cbe32..b90d2a28 100644
--- a/src/bin/proxmox_restore_daemon/disk.rs
+++ b/src/bin/proxmox_restore_daemon/disk.rs
@@ -44,7 +44,7 @@ lazy_static! {
pub enum ResolveResult {
Path(PathBuf),
BucketTypes(Vec<&'static str>),
- BucketComponents(Vec<(String, u64)>),
+ BucketComponents(Vec<(String, Option<u64>)>),
}
#[derive(Clone)]
@@ -59,7 +59,7 @@ struct PartitionBucketData {
struct ZFSBucketData {
name: String,
mountpoint: Option<PathBuf>,
- size: u64,
+ size: Option<u64>,
}
/// A "Bucket" represents a mapping found on a disk, e.g. a partition, a zfs dataset or an LV. A
@@ -139,9 +139,9 @@ impl Bucket {
})
}
- fn size(&self) -> u64 {
+ fn size(&self, idx: usize) -> Option<u64> {
match self {
- Bucket::Partition(data) | Bucket::RawFs(data) => data.size,
+ Bucket::Partition(data) | Bucket::RawFs(data) => Some(data.size),
Bucket::ZPool(data) => data.size,
}
}
@@ -261,7 +261,7 @@ impl Filesystems {
cmd.args(["list", "-o", "size", "-Hp", &data.name].iter());
let size = run_command(cmd, None)?;
if let Ok(size) = size.trim().parse::<u64>() {
- data.size = size;
+ data.size = Some(size);
}
let mp = PathBuf::from(mntpath);
@@ -409,7 +409,7 @@ impl DiskState {
for (pool, disks) in Self::parse_zpool_import(&result) {
let mut bucket = Bucket::ZPool(ZFSBucketData {
name: pool.clone(),
- size: 0,
+ size: None,
mountpoint: None,
});
@@ -418,11 +418,11 @@ impl DiskState {
if disks.len() <= 5 {
let mp = filesystems.ensure_mounted(&mut bucket);
info!(
- "zpool '{}' (on: {:?}) auto-mounted at '{:?}' (size: {}B)",
+ "zpool '{}' (on: {:?}) auto-mounted at '{:?}' (size: {:?})",
&pool,
&disks,
mp,
- bucket.size()
+ bucket.size(0)
);
} else {
info!(
@@ -503,18 +503,20 @@ impl DiskState {
Some(c) => bail!("invalid bucket component in path: {:?}", c),
None => {
// list bucket components available at this level
- let comps = buckets
+ let mut comps = buckets
.iter()
.filter_map(|b| {
if b.type_string() != bucket_type {
return None;
}
match b.component_string(components.len()) {
- Ok(cs) => Some((cs.to_owned(), b.size())),
+ Ok(cs) => Some((cs.to_owned(), b.size(components.len()))),
Err(_) => None,
}
})
- .collect();
+ .collect::<Vec<(String, Option<u64>)>>();
+ comps.sort_by(|a, b| a.0.cmp(&b.0));
+ comps.dedup();
return Ok(ResolveResult::BucketComponents(comps));
}
};
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup 3/5] file-restore-daemon/disk: fix component path errors
From: Stefan Reiter @ 2021-06-30 15:57 UTC
To: pbs-devel
Otherwise the path in the error message ends up as an array ["foo", "bar"]
instead of "foo/bar".
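A standalone illustration (not part of the patch) of the difference in output:

    fn main() {
        let components = vec!["foo".to_string(), "bar".to_string()];
        println!("{:?}", components);         // Debug output: ["foo", "bar"]
        println!("{}", components.join("/")); // joined path:  foo/bar
    }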
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
src/bin/proxmox_restore_daemon/disk.rs | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/src/bin/proxmox_restore_daemon/disk.rs b/src/bin/proxmox_restore_daemon/disk.rs
index b90d2a28..f8f67d83 100644
--- a/src/bin/proxmox_restore_daemon/disk.rs
+++ b/src/bin/proxmox_restore_daemon/disk.rs
@@ -527,10 +527,10 @@ impl DiskState {
let mut bucket = match Bucket::filter_mut(buckets, &bucket_type, &components) {
Some(bucket) => bucket,
None => bail!(
- "bucket/component path not found: {}/{}/{:?}",
+ "bucket/component path not found: {}/{}/{}",
req_fidx,
bucket_type,
- components
+ components.join("/")
),
};
@@ -540,10 +540,10 @@ impl DiskState {
.ensure_mounted(&mut bucket)
.map_err(|err| {
format_err!(
- "mounting '{}/{}/{:?}' failed: {}",
+ "mounting '{}/{}/{}' failed: {}",
req_fidx,
bucket_type,
- components,
+ components.join("/"),
err
)
})?;
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup 4/5] file-restore-daemon/disk: ignore already-mounted error and prefix zpool
From: Stefan Reiter @ 2021-06-30 15:57 UTC
To: pbs-devel
Prefix zpool mount paths to avoid clashing with other mount namespaces
(like LVM).
Also ignore the "already mounted" (EBUSY) error and treat it as success
instead - since we always assume that a mount path is unique to its
bucket, nothing else could have been mounted there, so this is safe.
This fixes an issue where a mountpoint=legacy subvol might be available
on different disks, and thus have different Bucket instances that don't
share the mountpoint cache, which could lead to an error if the user
tried to open it multiple times on different disks.
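As a standalone sketch (hypothetical helper, using the same nix error-handling
pattern as the hunk below; the mount flags are illustrative only), the intended
behaviour is:

    use nix::mount::{mount, MsFlags};

    // Ok(()) if the filesystem was mounted, or if something is already
    // mounted at 'target' (EBUSY) - which is treated as success here
    fn mount_or_reuse(source: &str, target: &str, fs: &str) -> nix::Result<()> {
        match mount(Some(source), target, Some(fs), MsFlags::MS_RDONLY, Option::<&str>::None) {
            Ok(()) => Ok(()),
            Err(nix::Error::Sys(nix::errno::Errno::EBUSY)) => Ok(()), // already mounted
            Err(err) => Err(err),
        }
    }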
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
src/bin/proxmox_restore_daemon/disk.rs | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/bin/proxmox_restore_daemon/disk.rs b/src/bin/proxmox_restore_daemon/disk.rs
index f8f67d83..cae62af3 100644
--- a/src/bin/proxmox_restore_daemon/disk.rs
+++ b/src/bin/proxmox_restore_daemon/disk.rs
@@ -192,7 +192,7 @@ impl Filesystems {
return Ok(mp.clone());
}
- let mntpath = format!("/mnt/{}", &data.name);
+ let mntpath = format!("/mnt/zpool/{}", &data.name);
create_dir_all(&mntpath)?;
// call ZFS tools to import and mount the pool with the root mount at 'mntpath'
@@ -285,6 +285,7 @@ impl Filesystems {
return Ok(());
}
Err(nix::Error::Sys(nix::errno::Errno::EINVAL)) => {}
+ Err(nix::Error::Sys(nix::errno::Errno::EBUSY)) => return Ok(()),
Err(err) => {
warn!("mount error on '{}' ({}) - {}", source, fs, err);
}
--
2.30.2
* [pbs-devel] [PATCH proxmox-backup 5/5] file-restore-daemon/disk: add LVM (thin) support
From: Stefan Reiter @ 2021-06-30 15:57 UTC
To: pbs-devel
Parses the JSON output of the 'pvs' and 'lvs' LVM utilities and does two
passes: one to scan for thinpools and create a device node for their
metadata_lv, and a second to load all LVs, thin-provisioned or not.
Should support every LV type that LVM supports, as we only parse the
output of the LVM tools and use 'vgscan --mknodes' to create device
nodes for us.
Produces a two-layer BucketComponent hierarchy with VGs followed by LVs;
PVs are mapped to their respective disk node.
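For reference, a standalone sketch of the first pass (mapping PVs to their VG);
the report structure matches what the code below indexes, while the device and
VG names in the sample JSON are made up:

    fn main() -> Result<(), serde_json::Error> {
        // handcrafted sample of `pvs -o pv_name,vg_name --reportformat json` output
        let raw = r#"{"report":[{"pv":[{"pv_name":"/dev/sda2","vg_name":"vg0"}]}]}"#;
        let parsed: serde_json::Value = serde_json::from_str(raw)?;
        if let Some(pvs) = parsed["report"][0]["pv"].as_array() {
            for pv in pvs {
                let dev = pv["pv_name"].as_str().unwrap_or_default();
                let vg = pv["vg_name"].as_str().unwrap_or_default();
                println!("PV '{}' belongs to VG '{}'", dev, vg); // becomes a pv_map entry
            }
        }
        Ok(())
    }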
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
src/bin/proxmox_restore_daemon/disk.rs | 189 +++++++++++++++++++++++++
1 file changed, 189 insertions(+)
diff --git a/src/bin/proxmox_restore_daemon/disk.rs b/src/bin/proxmox_restore_daemon/disk.rs
index cae62af3..42b8d496 100644
--- a/src/bin/proxmox_restore_daemon/disk.rs
+++ b/src/bin/proxmox_restore_daemon/disk.rs
@@ -62,6 +62,14 @@ struct ZFSBucketData {
size: Option<u64>,
}
+#[derive(Clone)]
+struct LVMBucketData {
+ vg_name: String,
+ lv_name: String,
+ mountpoint: Option<PathBuf>,
+ size: u64,
+}
+
/// A "Bucket" represents a mapping found on a disk, e.g. a partition, a zfs dataset or an LV. A
/// uniquely identifying path to a file then consists of four components:
/// "/disk/bucket/component/path"
@@ -77,6 +85,7 @@ enum Bucket {
Partition(PartitionBucketData),
RawFs(PartitionBucketData),
ZPool(ZFSBucketData),
+ LVM(LVMBucketData),
}
impl Bucket {
@@ -102,6 +111,13 @@ impl Bucket {
false
}
}
+ Bucket::LVM(data) => {
+ if let (Some(ref vg), Some(ref lv)) = (comp.get(0), comp.get(1)) {
+ ty == "lvm" && vg.as_ref() == &data.vg_name && lv.as_ref() == &data.lv_name
+ } else {
+ false
+ }
+ }
})
}
@@ -110,6 +126,7 @@ impl Bucket {
Bucket::Partition(_) => "part",
Bucket::RawFs(_) => "raw",
Bucket::ZPool(_) => "zpool",
+ Bucket::LVM(_) => "lvm",
}
}
@@ -127,6 +144,13 @@ impl Bucket {
Bucket::Partition(data) => data.number.to_string(),
Bucket::RawFs(_) => "raw".to_owned(),
Bucket::ZPool(data) => data.name.clone(),
+ Bucket::LVM(data) => {
+ if idx == 0 {
+ data.vg_name.clone()
+ } else {
+ data.lv_name.clone()
+ }
+ }
})
}
@@ -135,6 +159,7 @@ impl Bucket {
"part" => 1,
"raw" => 0,
"zpool" => 1,
+ "lvm" => 2,
_ => bail!("invalid bucket type for component depth: {}", type_string),
})
}
@@ -143,6 +168,13 @@ impl Bucket {
match self {
Bucket::Partition(data) | Bucket::RawFs(data) => Some(data.size),
Bucket::ZPool(data) => data.size,
+ Bucket::LVM(data) => {
+ if idx == 1 {
+ Some(data.size)
+ } else {
+ None
+ }
+ }
}
}
}
@@ -264,6 +296,21 @@ impl Filesystems {
data.size = Some(size);
}
+ let mp = PathBuf::from(mntpath);
+ data.mountpoint = Some(mp.clone());
+ Ok(mp)
+ }
+ Bucket::LVM(data) => {
+ if let Some(mp) = &data.mountpoint {
+ return Ok(mp.clone());
+ }
+
+ let mntpath = format!("/mnt/lvm/{}/{}", &data.vg_name, &data.lv_name);
+ create_dir_all(&mntpath)?;
+
+ let mapper_path = format!("/dev/mapper/{}-{}", &data.vg_name, &data.lv_name);
+ self.try_mount(&mapper_path, &mntpath)?;
+
let mp = PathBuf::from(mntpath);
data.mountpoint = Some(mp.clone());
Ok(mp)
@@ -444,12 +491,154 @@ impl DiskState {
}
}
+ Self::scan_lvm(&mut disk_map, &drive_info)?;
+
Ok(Self {
filesystems,
disk_map,
})
}
+ /// scan for LVM volumes and create device nodes for them to later mount on demand
+ fn scan_lvm(
+ disk_map: &mut HashMap<String, Vec<Bucket>>,
+ drive_info: &HashMap<String, String>,
+ ) -> Result<(), Error> {
+ // first get mapping between devices and vgs
+ let mut pv_map: HashMap<String, Vec<String>> = HashMap::new();
+ let mut cmd = Command::new("/sbin/pvs");
+ cmd.args(["-o", "pv_name,vg_name", "--reportformat", "json"].iter());
+ let result = run_command(cmd, None).unwrap();
+ let result: serde_json::Value = serde_json::from_str(&result)?;
+ if let Some(result) = result["report"][0]["pv"].as_array() {
+ for pv in result {
+ let vg_name = pv["vg_name"].as_str().unwrap();
+ if vg_name.is_empty() {
+ continue;
+ }
+ let pv_name = pv["pv_name"].as_str().unwrap();
+ // remove '/dev/' part
+ let pv_name = &pv_name[pv_name.rfind('/').map(|i| i + 1).unwrap_or(0)..];
+ if let Some(fidx) = drive_info.get(pv_name) {
+ info!("LVM: found VG '{}' on '{}' ({})", vg_name, pv_name, fidx);
+ match pv_map.get_mut(vg_name) {
+ Some(list) => list.push(fidx.to_owned()),
+ None => {
+ pv_map.insert(vg_name.to_owned(), vec![fidx.to_owned()]);
+ }
+ }
+ }
+ }
+ }
+
+ let mknodes = || {
+ let mut cmd = Command::new("/sbin/vgscan");
+ cmd.arg("--mknodes");
+ if let Err(err) = run_command(cmd, None) {
+ warn!("LVM: 'vgscan --mknodes' failed: {}", err);
+ }
+ };
+
+ // then scan for LVs and assign their buckets to the correct disks
+ let mut cmd = Command::new("/sbin/lvs");
+ cmd.args(
+ [
+ "-o",
+ "vg_name,lv_name,lv_size,metadata_lv",
+ "--units",
+ "B",
+ "--reportformat",
+ "json",
+ ]
+ .iter(),
+ );
+ let result = run_command(cmd, None).unwrap();
+ let result: serde_json::Value = serde_json::from_str(&result)?;
+ let mut thinpools = Vec::new();
+ if let Some(result) = result["report"][0]["lv"].as_array() {
+ // first, look for thin-pools
+ for lv in result {
+ let metadata = lv["metadata_lv"].as_str().unwrap_or_default();
+ if !metadata.is_empty() {
+ // this is a thin-pool, activate the metadata LV
+ let vg_name = lv["vg_name"].as_str().unwrap();
+ let metadata = metadata.trim_matches(&['[', ']'][..]);
+ info!("LVM: attempting to activate thinpool '{}'", metadata);
+ let mut cmd = Command::new("/sbin/lvchange");
+ cmd.args(["-ay", "-y", &format!("{}/{}", vg_name, metadata)].iter());
+ if let Err(err) = run_command(cmd, None) {
+ // not critical, will simply mean its children can't be loaded
+ warn!("LVM: activating thinpool failed: {}", err);
+ } else {
+ thinpools.push((vg_name, metadata));
+ }
+ }
+ }
+
+ // now give the metadata LVs a device node
+ mknodes();
+
+ // cannot leave the metadata LV active, otherwise child-LVs won't activate
+ for (vg_name, metadata) in thinpools {
+ let mut cmd = Command::new("/sbin/lvchange");
+ cmd.args(["-an", "-y", &format!("{}/{}", vg_name, metadata)].iter());
+ let _ = run_command(cmd, None);
+ }
+
+ for lv in result {
+ let lv_name = lv["lv_name"].as_str().unwrap();
+ let vg_name = lv["vg_name"].as_str().unwrap();
+ let metadata = lv["metadata_lv"].as_str().unwrap_or_default();
+ if lv_name.is_empty() || vg_name.is_empty() || !metadata.is_empty() {
+ continue;
+ }
+ let lv_size = lv["lv_size"].as_str().unwrap();
+ // lv_size is in bytes with a capital 'B' at the end
+ let lv_size = lv_size[..lv_size.len() - 1].parse::<u64>().unwrap_or(0);
+
+ let bucket = Bucket::LVM(LVMBucketData {
+ vg_name: vg_name.to_owned(),
+ lv_name: lv_name.to_owned(),
+ size: lv_size,
+ mountpoint: None,
+ });
+
+ // activate the LV so 'vgscan' can create a node later - this may fail, and if it
+ // does, we ignore it and continue
+ let mut cmd = Command::new("/sbin/lvchange");
+ cmd.args(["-ay", &format!("{}/{}", vg_name, lv_name)].iter());
+ if let Err(err) = run_command(cmd, None) {
+ warn!(
+ "LVM: LV '{}' on '{}' ({}B) failed to activate: {}",
+ lv_name, vg_name, lv_size, err
+ );
+ continue;
+ }
+
+ info!(
+ "LVM: found LV '{}' on '{}' ({}B)",
+ lv_name, vg_name, lv_size
+ );
+
+ if let Some(drives) = pv_map.get(vg_name) {
+ for fidx in drives {
+ match disk_map.get_mut(fidx) {
+ Some(v) => v.push(bucket.clone()),
+ None => {
+ disk_map.insert(fidx.to_owned(), vec![bucket.clone()]);
+ }
+ }
+ }
+ }
+ }
+
+ // now that we've imported and activated all LV's, we let vgscan create the dev nodes
+ mknodes();
+ }
+
+ Ok(())
+ }
+
/// Given a path like "/drive-scsi0.img.fidx/part/0/etc/passwd", this will mount the first
/// partition of 'drive-scsi0' on-demand (i.e. if not already mounted) and return a path
/// pointing to the requested file locally, e.g. "/mnt/vda1/etc/passwd", which can be used to
--
2.30.2
* Re: [pbs-devel] [PATCH 0/5] Add LVM (thin) support for single file restore
From: Fabian Grünbichler @ 2021-07-02 12:39 UTC
To: Proxmox Backup Server development discussion
Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
requires a Breaks on the old restore image (else the restore daemon
crashes because of missing lock/LVM support).
some potential for follow-ups/issues I encountered with a not-so-fast
PBS instance and bigger VMs:
- increase VM start timeout (currently 12s?), as the initial scan of
disks seems to block responding to the startup status request, and
scanning all VGs/LVs can take quite a lot longer than that -
alternatively, fix this blockage and/or consider VM started sooner?
- filter out "known unreadable" LVs, like Ceph (bluestore) OSDs
- investigate '--sysinit' option for lvchange/vgchange (seems
appropriate for what we are doing inside the VM ;))
- investigate '-K' to allow looking at "non-auto-activatable" LVs
and as discussed off-list, I think we should switch to
build-dependencies and copying binaries+needed libs in
build-initramfs.sh instead of downloading .deb packages and extracting
them only to remove part of their contents again - but that is firmly in
"improve/cleanup build" territory, so lower priority than the above.
On June 30, 2021 5:57 pm, Stefan Reiter wrote:
> Uses LVM tools to support LVM(-thin) volumes in the single-file restore daemon.
> Tested with multi-disk LVs, thinpools and different filesystems on top of them.
>
> Note for testing: The current proxmox-backup git master will not work for
> building the proxmox-restore-daemon binary, as the OpenID changes pull in some
> library dependencies not available in the restore VM.
>
>
> proxmox-backup-restore-image: Stefan Reiter (1):
> add LVM (thin) tooling
>
> src/build_initramfs.sh | 6 ++++++
> src/config-base | 1 +
> 2 files changed, 7 insertions(+)
>
> proxmox-backup: Stefan Reiter (4):
> file-restore-daemon/disk: dedup BucketComponents and make size
> optional
> file-restore-daemon/disk: fix component path errors
> file-restore-daemon/disk: ignore already-mounted error and prefix
> zpool
> file-restore-daemon/disk: add LVM (thin) support
>
> src/bin/proxmox_restore_daemon/api.rs | 2 +-
> src/bin/proxmox_restore_daemon/disk.rs | 222 +++++++++++++++++++++++--
> 2 files changed, 208 insertions(+), 16 deletions(-)
>
> --
> 2.30.2
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
>
>
>
* Re: [pbs-devel] [PATCH 0/5] Add LVM (thin) support for single file restore
From: Thomas Lamprecht @ 2021-07-05 6:26 UTC
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 02.07.21 14:39, Fabian Grünbichler wrote:
> Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>
> requires a Breaks on the old restore image (else the restore daemon
> crashes because of missing lock/LVM support).
>
> some potential for follow-ups/issues I encountered with a not-so-fast
> PBS instance and bigger VMs:
> - increase VM start timeout (currently 12s?), as the initial scan of
> disks seems to block responding to the startup status request, and
> scanning all VGs/LVs can take quite a lot longer than that -
> alternatively, fix this blockage and/or consider VM started sooner?
I do not think that is required, rather, did it work at all for you?
As here it did not, and that was rather due to a recently introduced side-effect of the
Memcom(pare) module[0], which is instantiated deep in the REST handler for the cached user
handling and expects to be able to create a file in `/run/proxmox-backup`. As that directory
did not exist in the restore image environment, all requests to the REST server instance
there just got a 400 Bad Request:
> proxmox_backup::server::rest] GET /api2/json/status: 400 Bad Request: [client 0.0.0.0:807] request failed
Which is stupid in two ways:
* In most cases this should rather be denoted as a 50X HTTP error, as it's produced
by a general error result in the server; user and param verification already create
correct 40X errors, and API calls that want to denote a fault on the user request side
should be able to do so too, so the remaining cases are quite likely 500 errors - making
them easier to search for.
* We did not set the log extension for the general error, only returned it as the response
body, so the real error got lost in any access log.
The latter was fixed[1], but I did not care too much, nor had the time to look into the
details of the former - just noting it for the record.
Anyway, the symptom of the previously missing, now created[2], proxmox-backup run-dir was
a "VM start timeout": as any API call failed, the status call used for detecting a
finished start failed too -> timeout.
In any case, neither of you actually tested this on master - Stefan sent it the day
Dietmar applied the Memcom stuff, but that was still done 9h before this series was
sent, meh...
[0]: https://git.proxmox.com/?p=proxmox-backup.git;a=commitdiff;h=fda19dcc6f41b2537896b2e874be1c0cfe71d5ea;hp=cd975e5787a3a9ace56e4aea0acc2364c283f438
[1]: https://git.proxmox.com/?p=proxmox-backup.git;a=commitdiff;h=f4d371d2d2c5d79352e9681b5d942627e26c6fd4;hp=835d0e5dd3d69886f0d6bfdd4192a4a8c81af395
[2]: https://git.proxmox.com/?p=proxmox-backup.git;a=commitdiff;h=33d7292f29bedfd7570bebcef2a5d6943ebe8cbb;hp=f4d371d2d2c5d79352e9681b5d942627e26c6fd4
> - filter out "known unreadable" LVs, like Ceph (bluestore) OSDs
> - investigate '--sysinit' option for lvchange/vgchange (seems
> appropriate for what we are doing inside the VM ;))
> - investigate '-K' to allow looking at "non-auto-activatable" LVs
>
> and as discussed off-list, I think we should switch to
> build-dependencies and copying binaries+needed libs in
> build-initramfs.sh instead of downloading .deb packages and extracting
> them only to remove part of their contents again - but that is firmly in
> "improve/cleanup build" territory, so lower priority than the above.
* Re: [pbs-devel] [PATCH 0/5] Add LVM (thin) support for single file restore
From: Fabian Grünbichler @ 2021-07-05 7:18 UTC
To: Proxmox Backup Server development discussion, Thomas Lamprecht
On July 5, 2021 8:26 am, Thomas Lamprecht wrote:
> On 02.07.21 14:39, Fabian Grünbichler wrote:
>> Reviewed-By: Fabian Grünbichler <f.gruenbichler@proxmox.com>
>>
>> requires a Breaks on the old restore image (else the restore daemon
>> crashes because of missing lock/LVM support).
>>
>> some potential for follow-ups/issues I encountered with a not-so-fast
>> PBS instance and bigger VMs:
>> - increase VM start timeout (currently 12s?), as the initial scan of
>> disks seems to block responding to the startup status request, and
>> scanning all VGs/LVs can take quite a lot longer than that -
>> alternatively, fix this blockage and/or consider VM started sooner?
>
> I do not think that is required, rather, did it work at all for you?
yeah, but see below
> As here it did not, and that was rather due to a recently introduced side-effect of the
> Memcom(pare) module[0], which is instantiated deep in the REST handler for the cached user
> handling and expects to be able to create a file in `/run/proxmox-backup`. As that directory
> did not exist in the restore image environment, all requests to the REST server instance
> there just got a 400 Bad Request:
>
>> proxmox_backup::server::rest] GET /api2/json/status: 400 Bad Request: [client 0.0.0.0:807] request failed
>
> Which is stupid in two ways:
> * In most cases this should rather be denoted as a 50X HTTP error, as it's produced
> by a general error result in the server; user and param verification already create
> correct 40X errors, and API calls that want to denote a fault on the user request side
> should be able to do so too, so the remaining cases are quite likely 500 errors - making
> them easier to search for.
> * We did not set the log extension for the general error, only returned it as the response
> body, so the real error got lost in any access log.
>
> The latter was fixed[1], but I did not care too much, nor had the time to look into the
> details of the former - just noting it for the record.
>
>
> Anyway, the symptom of the previously missing, now created[2], proxmox-backup run-dir was
> a "VM start timeout": as any API call failed, the status call used for detecting a
> finished start failed too -> timeout.
>
> In any case, neither of you actually tested this on master - Stefan sent it the day
> Dietmar applied the Memcom stuff, but that was still done 9h before this series was
> sent, meh...
I did test it on master and ran into the issue, including seeing the 400 log
entries, but incorrectly attributed them to "VM not yet ready". I then
tried with stock packages from the repo to rule out a general issue on
my system (that worked), then increased the timeout, and the problem
disappeared - but the increased timeout was in a worktree that indeed had
Dietmar's openid patches reverted (it was the one I originally started
the review on before fixing the openid linkage issues, when this series
and master were incompatible).
In Stefan's defense - he did write that his series was developed with
the openid patches reverted, as he was blocked on the linkage issue, so
in his testing the problem could not and did not arise. So this one's on
me - sorry.
> [0]: https://git.proxmox.com/?p=proxmox-backup.git;a=commitdiff;h=fda19dcc6f41b2537896b2e874be1c0cfe71d5ea;hp=cd975e5787a3a9ace56e4aea0acc2364c283f438
> [1]: https://git.proxmox.com/?p=proxmox-backup.git;a=commitdiff;h=f4d371d2d2c5d79352e9681b5d942627e26c6fd4;hp=835d0e5dd3d69886f0d6bfdd4192a4a8c81af395
> [2]: https://git.proxmox.com/?p=proxmox-backup.git;a=commitdiff;h=33d7292f29bedfd7570bebcef2a5d6943ebe8cbb;hp=f4d371d2d2c5d79352e9681b5d942627e26c6fd4
>
>> - filter out "known unreadable" LVs, like Ceph (bluestore) OSDs
>> - investigate '--sysinit' option for lvchange/vgchange (seems
>> appropriate for what we are doing inside the VM ;))
>> - investigate '-K' to allow looking at "non-auto-activatable" LVs
>>
>> and as discussed off-list, I think we should switch to
>> build-dependencies and copying binaries+needed libs in
>> build-initramfs.sh instead of downloading .deb packages and extracting
>> them only to remove part of their contents again - but that is firmly in
>> "improve/cleanup build" territory, so lower priority than the above.
>
>
* [pbs-devel] applied-series: [PATCH 0/5] Add LVM (thin) support for single file restore
From: Thomas Lamprecht @ 2021-07-05 6:12 UTC
To: Proxmox Backup Server development discussion, Stefan Reiter
On 30.06.21 17:57, Stefan Reiter wrote:
> Uses LVM tools to support LVM(-thin) volumes in the single-file restore daemon.
> Tested with multi-disk LVs, thinpools and different filesystems on top of them.
>
> Note for testing: The current proxmox-backup git master will not work for
> building the proxmox-restore-daemon binary, as the OpenID changes pull in some
> library dependencies not available in the restore VM.
>
>
> proxmox-backup-restore-image: Stefan Reiter (1):
> add LVM (thin) tooling
>
> src/build_initramfs.sh | 6 ++++++
> src/config-base | 1 +
> 2 files changed, 7 insertions(+)
>
> proxmox-backup: Stefan Reiter (4):
> file-restore-daemon/disk: dedup BucketComponents and make size
> optional
> file-restore-daemon/disk: fix component path errors
> file-restore-daemon/disk: ignore already-mounted error and prefix
> zpool
> file-restore-daemon/disk: add LVM (thin) support
>
> src/bin/proxmox_restore_daemon/api.rs | 2 +-
> src/bin/proxmox_restore_daemon/disk.rs | 222 +++++++++++++++++++++++--
> 2 files changed, 208 insertions(+), 16 deletions(-)
>
applied series, thanks!