* Re: [pbs-devel] [PATCH proxmox-backup-restore-image 4/4] add workaround kernel patch for vsock panics
@ 2021-04-26 18:04 Dietmar Maurer
0 siblings, 0 replies; 3+ messages in thread
From: Dietmar Maurer @ 2021-04-26 18:04 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Stefan Reiter
Such small buffer sizes can reduce performance. I usually go
for a minimum of 256K.
So do we really need this?
What is a 128 MiB RAM machine???
> On 04/26/2021 3:04 PM Stefan Reiter <s.reiter@proxmox.com> wrote:
>
>
> Allocation failures for vsock packet buffers occur routinely when
> downloading more than one stream at the same time; with less than 512
> MiB of RAM they sometimes even occur for single downloads.
>
> This appears to fix it in all of my reproducer scenarios, tested with up
> to 6 downloads at once in a 128 MiB RAM machine.
>
> Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> ---
> .../0003-vsock-reduce-packet-size.patch | 36 +++++++++++++++++++
> 1 file changed, 36 insertions(+)
> create mode 100644 src/patches/kernel/0003-vsock-reduce-packet-size.patch
>
> diff --git a/src/patches/kernel/0003-vsock-reduce-packet-size.patch b/src/patches/kernel/0003-vsock-reduce-packet-size.patch
> new file mode 100644
> index 0000000..378da53
> --- /dev/null
> +++ b/src/patches/kernel/0003-vsock-reduce-packet-size.patch
> @@ -0,0 +1,36 @@
> +From a437d428733881f408b5d42eb75812600083cb75 Mon Sep 17 00:00:00 2001
> +From: Stefan Reiter <s.reiter@proxmox.com>
> +Date: Mon, 26 Apr 2021 14:08:36 +0200
> +Subject: [PATCH] vsock: reduce packet size
> +
> +Reduce the maximum packet size to avoid allocation errors in VMs with
> +very little memory available (the buffer needs a contiguous block of
> +memory, and free 64kB regions become scarce under memory pressure).
> +
> +4kB used to be the default, and according to [0] increasing it makes
> +the difference between ~25Gb/s and ~40Gb/s - certainly a lot faster,
> +but both well beyond what our restore scenario can reach anyway.
> +
> +[0] https://stefano-garzarella.github.io/posts/2019-11-08-kvmforum-2019-vsock/
> +
> +Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> +---
> + include/linux/virtio_vsock.h | 2 +-
> + 1 file changed, 1 insertion(+), 1 deletion(-)
> +
> +diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
> +index dc636b727179..18c09ff72929 100644
> +--- a/include/linux/virtio_vsock.h
> ++++ b/include/linux/virtio_vsock.h
> +@@ -9,7 +9,7 @@
> +
> + #define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (1024 * 4)
> + #define VIRTIO_VSOCK_MAX_BUF_SIZE 0xFFFFFFFFUL
> +-#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024 * 64)
> ++#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024 * 4)
> +
> + enum {
> + VSOCK_VQ_RX = 0, /* for host to guest data */
> +--
> +2.20.1
> +
> --
> 2.20.1
>
>
>
> _______________________________________________
> pbs-devel mailing list
> pbs-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
* Re: [pbs-devel] [PATCH proxmox-backup-restore-image 4/4] add workaround kernel patch for vsock panics
@ 2021-04-27 6:13 Dietmar Maurer
0 siblings, 0 replies; 3+ messages in thread
From: Dietmar Maurer @ 2021-04-27 6:13 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Stefan Reiter
Answering myself, after talking with Thomas:
> On 04/26/2021 8:04 PM Dietmar Maurer <dietmar@proxmox.com> wrote:
>
>
> Such small buffer sizes can reduce performance. I usually go
> for a minimum of 256K.
>
> So do we really need this?
This is simply an optimization to reduce memory usage. Performance
is still good enough...
> What is a 128 MiB RAM machine???
This is the restore VM ...
>
>
> > On 04/26/2021 3:04 PM Stefan Reiter <s.reiter@proxmox.com> wrote:
> >
> >
> > Allocation failures for vsock packet buffers occur routinely when
> > downloading more than one stream at the same time; with less than 512
> > MiB of RAM they sometimes even occur for single downloads.
> >
> > This appears to fix it in all of my reproducer scenarios, tested with up
> > to 6 downloads at once in a 128 MiB RAM machine.
> >
> > Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
> > ---
> > .../0003-vsock-reduce-packet-size.patch | 36 +++++++++++++++++++
> > 1 file changed, 36 insertions(+)
> > create mode 100644 src/patches/kernel/0003-vsock-reduce-packet-size.patch
> >
> > diff --git a/src/patches/kernel/0003-vsock-reduce-packet-size.patch b/src/patches/kernel/0003-vsock-reduce-packet-size.patch
> > new file mode 100644
> > index 0000000..378da53
> > --- /dev/null
> > +++ b/src/patches/kernel/0003-vsock-reduce-packet-size.patch
> > @@ -0,0 +1,36 @@
> > +From a437d428733881f408b5d42eb75812600083cb75 Mon Sep 17 00:00:00 2001
> > +From: Stefan Reiter <s.reiter@proxmox.com>
> > +Date: Mon, 26 Apr 2021 14:08:36 +0200
> > +Subject: [PATCH] vsock: reduce packet size
> > +
> > +Reduce the maximum packet size to avoid allocation errors in VMs with
> > +very little memory available (the buffer needs a contiguous block of
> > +memory, and free 64kB regions become scarce under memory pressure).
> > +
> > +4kB used to be the default, and according to [0] increasing it makes
> > +the difference between ~25Gb/s and ~40Gb/s - certainly a lot faster,
> > +but both well beyond what our restore scenario can reach anyway.
> > +
> > +[0] https://stefano-garzarella.github.io/posts/2019-11-08-kvmforum-2019-vsock/
* [pbs-devel] [PATCH proxmox-backup 1/4] file-restore: add size to image files and components
@ 2021-04-26 13:04 Stefan Reiter
2021-04-26 13:04 ` [pbs-devel] [PATCH proxmox-backup-restore-image 4/4] add workaround kernel patch for vsock panics Stefan Reiter
0 siblings, 1 reply; 3+ messages in thread
From: Stefan Reiter @ 2021-04-26 13:04 UTC (permalink / raw)
To: pbs-devel
Read image sizes (.pxar.fidx/.img.didx) from manifest and partition
sizes from /sys/...
Requires a change to ArchiveEntry, as DirEntryAttribute::Directory
does not have a size associated with it (and that's probably good).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
src/api2/types/mod.rs | 19 ++++++++++++++-----
src/bin/proxmox-file-restore.rs | 2 +-
src/bin/proxmox_restore_daemon/api.rs | 5 +++--
src/bin/proxmox_restore_daemon/disk.rs | 23 +++++++++++++++++++----
4 files changed, 37 insertions(+), 12 deletions(-)
diff --git a/src/api2/types/mod.rs b/src/api2/types/mod.rs
index cfb4ec74..66b4d258 100644
--- a/src/api2/types/mod.rs
+++ b/src/api2/types/mod.rs
@@ -1354,6 +1354,18 @@ pub struct ArchiveEntry {
impl ArchiveEntry {
pub fn new(filepath: &[u8], entry_type: Option<&DirEntryAttribute>) -> Self {
+ let size = match entry_type {
+ Some(DirEntryAttribute::File { size, .. }) => Some(*size),
+ _ => None,
+ };
+ Self::new_with_size(filepath, entry_type, size)
+ }
+
+ pub fn new_with_size(
+ filepath: &[u8],
+ entry_type: Option<&DirEntryAttribute>,
+ size: Option<u64>,
+ ) -> Self {
Self {
filepath: base64::encode(filepath),
text: String::from_utf8_lossy(filepath.split(|x| *x == b'/').last().unwrap())
@@ -1363,13 +1375,10 @@ impl ArchiveEntry {
None => "v".to_owned(),
},
leaf: !matches!(entry_type, None | Some(DirEntryAttribute::Directory { .. })),
- size: match entry_type {
- Some(DirEntryAttribute::File { size, .. }) => Some(*size),
- _ => None
- },
+ size,
mtime: match entry_type {
Some(DirEntryAttribute::File { mtime, .. }) => Some(*mtime),
- _ => None
+ _ => None,
},
}
}
diff --git a/src/bin/proxmox-file-restore.rs b/src/bin/proxmox-file-restore.rs
index ffc8b0a5..5766b279 100644
--- a/src/bin/proxmox-file-restore.rs
+++ b/src/bin/proxmox-file-restore.rs
@@ -194,7 +194,7 @@ async fn list(
} else {
None
};
- entries.push(ArchiveEntry::new(path.as_bytes(), attr));
+ entries.push(ArchiveEntry::new_with_size(path.as_bytes(), attr, Some(file.size)));
}
Ok(entries)
diff --git a/src/bin/proxmox_restore_daemon/api.rs b/src/bin/proxmox_restore_daemon/api.rs
index 82ead647..f1d601ce 100644
--- a/src/bin/proxmox_restore_daemon/api.rs
+++ b/src/bin/proxmox_restore_daemon/api.rs
@@ -200,11 +200,12 @@ fn list(
for c in comps {
let mut c_path = path.clone();
c_path.push(b'/');
- c_path.extend(c.as_bytes());
- res.push(ArchiveEntry::new(
+ c_path.extend(c.0.as_bytes());
+ res.push(ArchiveEntry::new_with_size(
&c_path[..],
// this marks the beginning of a filesystem, i.e. '/', so this is a Directory
Some(&DirEntryAttribute::Directory { start: 0 }),
+ Some(c.1),
));
}
}
diff --git a/src/bin/proxmox_restore_daemon/disk.rs b/src/bin/proxmox_restore_daemon/disk.rs
index f9d7c8aa..08fe7490 100644
--- a/src/bin/proxmox_restore_daemon/disk.rs
+++ b/src/bin/proxmox_restore_daemon/disk.rs
@@ -36,13 +36,14 @@ lazy_static! {
pub enum ResolveResult {
Path(PathBuf),
BucketTypes(Vec<&'static str>),
- BucketComponents(Vec<String>),
+ BucketComponents(Vec<(String, u64)>),
}
struct PartitionBucketData {
dev_node: String,
number: i32,
mountpoint: Option<PathBuf>,
+ size: u64,
}
/// A "Bucket" represents a mapping found on a disk, e.g. a partition, a zfs dataset or an LV. A
@@ -83,6 +84,12 @@ impl Bucket {
Bucket::Partition(data) => data.number.to_string(),
}
}
+
+ fn size(&self) -> u64 {
+ match self {
+ Bucket::Partition(data) => data.size,
+ }
+ }
}
/// Functions related to the local filesystem. This mostly exists so we can use 'supported_fs' in
@@ -209,15 +216,23 @@ impl DiskState {
.trim()
.parse::<i32>()?;
+ // this *always* contains the number of 512-byte sectors, regardless of the true
+ // blocksize of this disk - which should always be 512 here anyway
+ let size = fs::file_read_firstline(&format!("{}/size", part_path))?
+ .trim()
+ .parse::<u64>()?
+ * 512;
+
info!(
- "drive '{}' ('{}'): found partition '{}' ({})",
- name, fidx, devnode, number
+ "drive '{}' ('{}'): found partition '{}' ({}, {}B)",
+ name, fidx, devnode, number, size
);
let bucket = Bucket::Partition(PartitionBucketData {
dev_node: devnode,
mountpoint: None,
number,
+ size,
});
parts.push(bucket);
@@ -281,7 +296,7 @@ impl DiskState {
let comps = buckets
.iter()
.filter(|b| b.type_string() == bucket_type)
- .map(Bucket::component_string)
+ .map(|b| (b.component_string(), b.size()))
.collect();
return Ok(ResolveResult::BucketComponents(comps));
}
--
2.20.1
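The /sys-based size lookup added in disk.rs above can be sketched as a
standalone program. This is only an illustration of the same arithmetic,
not the daemon's actual API; the function names and the `sda1` path are
hypothetical. It relies on the sysfs convention the patch comments on:
the `size` attribute always counts 512-byte sectors, regardless of the
device's logical block size.

```rust
// Standalone sketch of the partition-size lookup (illustrative names,
// not the restore daemon's real API).

use std::fs;

/// Convert the first line of a sysfs `size` file into bytes.
/// sysfs always reports 512-byte sectors, so multiply by 512.
fn sectors_to_bytes(first_line: &str) -> Result<u64, std::num::ParseIntError> {
    Ok(first_line.trim().parse::<u64>()? * 512)
}

/// Read `<part_path>/size` and return the partition size in bytes.
fn partition_size(part_path: &str) -> std::io::Result<u64> {
    let raw = fs::read_to_string(format!("{}/size", part_path))?;
    sectors_to_bytes(&raw)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))
}

fn main() {
    // A 2 GiB partition shows up as 4194304 sectors in sysfs:
    assert_eq!(sectors_to_bytes("4194304\n").unwrap(), 2u64 * 1024 * 1024 * 1024);
    // Path is hypothetical; this only works on a Linux host with sda1 present.
    if let Ok(bytes) = partition_size("/sys/class/block/sda1") {
        println!("sda1: {} bytes", bytes);
    }
}
```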
* [pbs-devel] [PATCH proxmox-backup-restore-image 4/4] add workaround kernel patch for vsock panics
2021-04-26 13:04 [pbs-devel] [PATCH proxmox-backup 1/4] file-restore: add size to image files and components Stefan Reiter
@ 2021-04-26 13:04 ` Stefan Reiter
0 siblings, 0 replies; 3+ messages in thread
From: Stefan Reiter @ 2021-04-26 13:04 UTC (permalink / raw)
To: pbs-devel
Allocation failures for vsock packet buffers occur routinely when
downloading more than one stream at the same time; with less than 512
MiB of RAM they sometimes even occur for single downloads.
This appears to fix it in all of my reproducer scenarios, tested with up
to 6 downloads at once in a 128 MiB RAM machine.
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
.../0003-vsock-reduce-packet-size.patch | 36 +++++++++++++++++++
1 file changed, 36 insertions(+)
create mode 100644 src/patches/kernel/0003-vsock-reduce-packet-size.patch
diff --git a/src/patches/kernel/0003-vsock-reduce-packet-size.patch b/src/patches/kernel/0003-vsock-reduce-packet-size.patch
new file mode 100644
index 0000000..378da53
--- /dev/null
+++ b/src/patches/kernel/0003-vsock-reduce-packet-size.patch
@@ -0,0 +1,36 @@
+From a437d428733881f408b5d42eb75812600083cb75 Mon Sep 17 00:00:00 2001
+From: Stefan Reiter <s.reiter@proxmox.com>
+Date: Mon, 26 Apr 2021 14:08:36 +0200
+Subject: [PATCH] vsock: reduce packet size
+
+Reduce the maximum packet size to avoid allocation errors in VMs with
+very little memory available (the buffer needs a contiguous block of
+memory, and free 64kB regions become scarce under memory pressure).
+
+4kB used to be the default, and according to [0] increasing it makes
+the difference between ~25Gb/s and ~40Gb/s - certainly a lot faster,
+but both well beyond what our restore scenario can reach anyway.
+
+[0] https://stefano-garzarella.github.io/posts/2019-11-08-kvmforum-2019-vsock/
+
+Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
+---
+ include/linux/virtio_vsock.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
+index dc636b727179..18c09ff72929 100644
+--- a/include/linux/virtio_vsock.h
++++ b/include/linux/virtio_vsock.h
+@@ -9,7 +9,7 @@
+
+ #define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (1024 * 4)
+ #define VIRTIO_VSOCK_MAX_BUF_SIZE 0xFFFFFFFFUL
+-#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024 * 64)
++#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024 * 4)
+
+ enum {
+ VSOCK_VQ_RX = 0, /* for host to guest data */
+--
+2.20.1
+
--
2.20.1
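The allocation-order arithmetic behind the commit message can be sketched
as follows. This assumes 4 KiB pages and is purely illustrative of the
buddy-allocator math (contiguous memory is served in power-of-two "orders"
of pages); it is not code from the kernel or from this patch.

```rust
// Why a 64 kB contiguous buffer is harder to satisfy than a 4 kB one:
// each allocation order doubles the required run of contiguous free pages.

const PAGE_SIZE: u64 = 4096; // assumed page size

/// Smallest allocation order whose block covers `size` bytes,
/// i.e. ceil(log2(ceil(size / PAGE_SIZE))).
fn alloc_order(size: u64) -> u32 {
    let pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
    pages.next_power_of_two().trailing_zeros()
}

fn main() {
    // Old maximum packet buffer: 64 kB -> order 4 (16 contiguous pages),
    // which fragmentation can easily make unavailable in a 128 MiB VM.
    assert_eq!(alloc_order(64 * 1024), 4);
    // Patched maximum: 4 kB -> order 0 (any single free page suffices).
    assert_eq!(alloc_order(4 * 1024), 0);
    println!(
        "64k: order {}, 4k: order {}",
        alloc_order(64 * 1024),
        alloc_order(4 * 1024)
    );
}
```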