From: Stefan Reiter <s.reiter@proxmox.com>
To: pbs-devel@lists.proxmox.com
Date: Mon, 26 Apr 2021 15:04:17 +0200
Message-Id: <20210426130417.20979-4-s.reiter@proxmox.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210426130417.20979-1-s.reiter@proxmox.com>
References: <20210426130417.20979-1-s.reiter@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [pbs-devel] [PATCH proxmox-backup-restore-image 4/4] add workaround
 kernel patch for vsock panics

Allocation failures for vsock packet buffers occur routinely when
downloading more than one stream at the same time; with less than
512 MiB of RAM they sometimes even occur for single downloads.

This appears to fix the issue in all of my reproducer scenarios, tested
with up to 6 concurrent downloads on a machine with 128 MiB of RAM.

Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
---
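Note (not part of the patch): the failures come from the guest driver
requesting a physically contiguous buffer for every packet. Below is a
simplified sketch of the allocation path, assuming the rough layout of
virtio_transport_alloc_pkt() in net/vmw_vsock/virtio_transport_common.c
(names abbreviated, not verbatim kernel code):

    /* Sketch: the payload buffer is a single kmalloc(), i.e. it must
     * be physically contiguous. With a 64 kB cap that is an order-4
     * allocation, which readily fails once memory in a small restore
     * VM (128-512 MiB) gets fragmented. */
    static struct virtio_vsock_pkt *alloc_pkt_sketch(size_t len)
    {
        struct virtio_vsock_pkt *pkt;

        pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
        if (!pkt)
            return NULL;

        /* len is capped at VIRTIO_VSOCK_MAX_PKT_BUF_SIZE by the
         * caller: 64 kB before this patch, 4 kB after. */
        pkt->buf = kmalloc(len, GFP_KERNEL);
        if (!pkt->buf) {
            kfree(pkt);
            return NULL; /* -ENOMEM: the failure we observe */
        }
        pkt->len = len;
        return pkt;
    }
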
 .../0003-vsock-reduce-packet-size.patch       | 36 +++++++++++++++++++
 1 file changed, 36 insertions(+)
 create mode 100644 src/patches/kernel/0003-vsock-reduce-packet-size.patch

diff --git a/src/patches/kernel/0003-vsock-reduce-packet-size.patch b/src/patches/kernel/0003-vsock-reduce-packet-size.patch
new file mode 100644
index 0000000..378da53
--- /dev/null
+++ b/src/patches/kernel/0003-vsock-reduce-packet-size.patch
@@ -0,0 +1,36 @@
+From a437d428733881f408b5d42eb75812600083cb75 Mon Sep 17 00:00:00 2001
+From: Stefan Reiter <s.reiter@proxmox.com>
+Date: Mon, 26 Apr 2021 14:08:36 +0200
+Subject: [PATCH] vsock: reduce packet size
+
+Reduce the maximum packet size to avoid allocation errors in VMs with
+very little memory available: each packet buffer needs a physically
+contiguous block of memory, and free 64 kB blocks get scarce there.
+
+4 kB used to be the default, and according to [0] raising it makes the
+difference between ~25 Gbit/s and ~40 Gbit/s - certainly a lot faster,
+but both rates are far beyond what our restore scenario can reach.
+
+[0] https://stefano-garzarella.github.io/posts/2019-11-08-kvmforum-2019-vsock/
+
+Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
+---
+ include/linux/virtio_vsock.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
+index dc636b727179..18c09ff72929 100644
+--- a/include/linux/virtio_vsock.h
++++ b/include/linux/virtio_vsock.h
+@@ -9,7 +9,7 @@
+ 
+ #define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 4)
+ #define VIRTIO_VSOCK_MAX_BUF_SIZE		0xFFFFFFFFUL
+-#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE		(1024 * 64)
++#define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE		(1024 * 4)
+ 
+ enum {
+ 	VSOCK_VQ_RX     = 0, /* for host to guest data */
+-- 
+2.20.1
+
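Note (not part of the patch): shrinking the constant is safe because it
only caps how much payload goes into a single packet; larger stream
writes are already split into multiple packets by the transport's send
loop, roughly as sketched below, assuming the structure of
virtio_transport_send_pkt_info() (not verbatim kernel code):

    /* Sketch: a write is chopped into MAX_PKT_BUF_SIZE-sized packets,
     * so a smaller cap only means more packets per write - a
     * throughput cost, never lost data. */
    static ssize_t send_sketch(const u8 *data, size_t total)
    {
        size_t sent = 0;

        while (sent < total) {
            size_t len = min_t(size_t, total - sent,
                               VIRTIO_VSOCK_MAX_PKT_BUF_SIZE);
            struct virtio_vsock_pkt *pkt = alloc_pkt_sketch(len);

            if (!pkt)
                return sent ? sent : -ENOMEM;
            memcpy(pkt->buf, data + sent, len);
            /* ... queue pkt on the TX virtqueue ... */
            sent += len;
        }
        return sent;
    }
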
-- 
2.20.1