From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <f.ebner@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 21F1EB8F0B
 for <pve-devel@lists.proxmox.com>; Tue, 12 Mar 2024 09:47:55 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 0AEF0140BD
 for <pve-devel@lists.proxmox.com>; Tue, 12 Mar 2024 09:47:55 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS
 for <pve-devel@lists.proxmox.com>; Tue, 12 Mar 2024 09:47:54 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id C97F64516E
 for <pve-devel@lists.proxmox.com>; Tue, 12 Mar 2024 09:47:53 +0100 (CET)
From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Tue, 12 Mar 2024 09:47:50 +0100
Message-Id: <20240312084750.57549-1-f.ebner@proxmox.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL -0.071 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 T_SCC_BODY_TEXT_LINE    -0.01 -
Subject: [pve-devel] [PATCH qemu] fix patch for accepting NULL qiov when
 padding
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Tue, 12 Mar 2024 08:47:55 -0000

All callers of the function pass an address, so the pointer must be
dereferenced once before checking for NULL. It's also necessary to
update bytes and offset even when there is no qiov, so the request
will actually be aligned later and not trigger an assertion failure.

This seems to have been accidentally broken in 8dca018 ("udpate and
rebase to QEMU v6.0.0"), and this change is effectively a revert to
the original version of the patch. The qiov functions changed back
then, which might've been the reason Stefan tried to simplify the
patch.

Should fix live-import for certain kinds of VMDK images.
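For illustration, the required behavior can be sketched as follows (a
minimal standalone model with hypothetical stand-in types, not the actual
QEMU structures or the real bdrv_pad_request() signature):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for QEMUIOVector and the padding bookkeeping (hypothetical). */
typedef struct { int dummy; } FakeQiov;
typedef struct { int64_t head; int64_t tail; } Pad;

/* Callers pass the address of their qiov pointer, so the NULL check must
 * dereference once (*qiov). The bytes/offset adjustment happens
 * unconditionally, so the request is aligned even without a qiov. */
static int pad_request(FakeQiov **qiov, int64_t *offset, int64_t *bytes,
                       const Pad *pad)
{
    if (qiov && *qiov) {
        /* the real code slices the vector and builds the padded qiov here */
    }
    /* must run in any case, qiov or not */
    *bytes += pad->head + pad->tail;
    *offset -= pad->head;
    return 0;
}
```

With pad->head = 512 and pad->tail = 512, a request at offset 512 of
3072 bytes and a NULL qiov (as passed by copy-before-write) becomes an
aligned request at offset 0 of 4096 bytes instead of tripping the
later alignment assertion.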

Reported-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
---
 ...accept-NULL-qiov-in-bdrv_pad_request.patch | 59 ++++++++++++++-----
 1 file changed, 45 insertions(+), 14 deletions(-)

diff --git a/debian/patches/pve/0038-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch b/debian/patches/pve/0038-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch
index 851851f..bb9b72c 100644
--- a/debian/patches/pve/0038-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch
+++ b/debian/patches/pve/0038-block-io-accept-NULL-qiov-in-bdrv_pad_request.patch
@@ -8,26 +8,57 @@ results (only copy-on-read matters). In this case they will pass NULL as
 the target QEMUIOVector, which will however trip bdrv_pad_request, since
 it wants to extend its passed vector.
 
-Simply check for NULL and do nothing, there's no reason to pad the
-target if it will be discarded anyway.
+If there is no qiov, no operation can be done with it, but the bytes
+and offset still need to be updated, so the subsequent read will
+actually be aligned and not run into an assertion failure.
 
 Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
+[FE: do update bytes and offset in any case]
+Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
 ---
- block/io.c | 4 ++++
- 1 file changed, 4 insertions(+)
+ block/io.c | 29 ++++++++++++++++-------------
+ 1 file changed, 16 insertions(+), 13 deletions(-)
 
 diff --git a/block/io.c b/block/io.c
-index 83d1b1dfdc..24a3c84c93 100644
+index 83d1b1dfdc..e927881e40 100644
 --- a/block/io.c
 +++ b/block/io.c
-@@ -1710,6 +1710,10 @@ static int bdrv_pad_request(BlockDriverState *bs,
-     int sliced_niov;
-     size_t sliced_head, sliced_tail;
+@@ -1723,22 +1723,25 @@ static int bdrv_pad_request(BlockDriverState *bs,
+         return 0;
+     }
  
-+    if (!qiov) {
-+        return 0;
-+    }
+-    sliced_iov = qemu_iovec_slice(*qiov, *qiov_offset, *bytes,
+-                                  &sliced_head, &sliced_tail,
+-                                  &sliced_niov);
+-
+-    /* Guaranteed by bdrv_check_request32() */
+-    assert(*bytes <= SIZE_MAX);
+-    ret = bdrv_create_padded_qiov(bs, pad, sliced_iov, sliced_niov,
+-                                  sliced_head, *bytes);
+-    if (ret < 0) {
+-        bdrv_padding_finalize(pad);
+-        return ret;
++    if (qiov && *qiov) {
++        sliced_iov = qemu_iovec_slice(*qiov, *qiov_offset, *bytes,
++                                      &sliced_head, &sliced_tail,
++                                      &sliced_niov);
 +
-     /* Should have been checked by the caller already */
-     ret = bdrv_check_request32(*offset, *bytes, *qiov, *qiov_offset);
-     if (ret < 0) {
++        /* Guaranteed by bdrv_check_request32() */
++        assert(*bytes <= SIZE_MAX);
++        ret = bdrv_create_padded_qiov(bs, pad, sliced_iov, sliced_niov,
++                                      sliced_head, *bytes);
++        if (ret < 0) {
++            bdrv_padding_finalize(pad);
++            return ret;
++        }
++        *qiov = &pad->local_qiov;
++        *qiov_offset = 0;
+     }
++
+     *bytes += pad->head + pad->tail;
+     *offset -= pad->head;
+-    *qiov = &pad->local_qiov;
+-    *qiov_offset = 0;
+     if (padded) {
+         *padded = true;
+     }
-- 
2.39.2