From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <t.lamprecht@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 800C5814C3
 for <pve-devel@lists.proxmox.com>; Tue, 23 Nov 2021 15:16:21 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 565B214466
 for <pve-devel@lists.proxmox.com>; Tue, 23 Nov 2021 15:15:51 +0100 (CET)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id B49B714451
 for <pve-devel@lists.proxmox.com>; Tue, 23 Nov 2021 15:15:47 +0100 (CET)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 7E8D345C88
 for <pve-devel@lists.proxmox.com>; Tue, 23 Nov 2021 15:15:47 +0100 (CET)
Message-ID: <2400bf8f-024f-1c12-efeb-59b7a917c325@proxmox.com>
Date: Tue, 23 Nov 2021 15:15:46 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:95.0) Gecko/20100101
 Thunderbird/95.0
Content-Language: en-US
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 Fabian Ebner <f.ebner@proxmox.com>
References: <20211123115949.2462727-1-f.ebner@proxmox.com>
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
In-Reply-To: <20211123115949.2462727-1-f.ebner@proxmox.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.106 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: [pve-devel] applied: [PATCH kernel] Backport two io-wq fixes
 relevant for io_uring
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Tue, 23 Nov 2021 14:16:21 -0000

On 23.11.21 12:59, Fabian Ebner wrote:
> There were quite a few reports in the community forum about Windows
> VMs with SATA disks not working after upgrading to kernel 5.13.
> Issue was reproducible during the installation of Win2019 (suggested
> by Thomas), and it's already fixed in 5.15. Bisecting led to
>     io-wq: split bounded and unbounded work into separate lists
> as the commit fixing the issue.
> 
> Indeed, the commit states
>     Fixes: ecc53c48c13d ("io-wq: check max_worker limits if a worker transitions bound state")
> which is present as a backport in ubuntu-impish:
>     f9eb79f840052285408ae9082dc4419dc1397954
> 
> The first backport
>     io-wq: fix queue stalling race
> also sounds nice to have and additionally served as a preparation for
> the second one to apply more cleanly.
> 
> Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
> ---
>  .../0010-io-wq-fix-queue-stalling-race.patch  |  72 +++
>  ...ded-and-unbounded-work-into-separate.patch | 415 ++++++++++++++++++
>  2 files changed, 487 insertions(+)
>  create mode 100644 patches/kernel/0010-io-wq-fix-queue-stalling-race.patch
>  create mode 100644 patches/kernel/0011-io-wq-split-bounded-and-unbounded-work-into-separate.patch
> 
>

applied, thanks!!

This fixes my reproducer (Windows + SATA) nicely, and Dominik's use case of mass-cloning
and setting up nested PVE on Debian is working again as well.
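
For anyone wanting to retrace how such a fixing commit is located, a reverse bisect
between the broken and the fixed kernel works; a rough sketch only (the v5.13/v5.15
tags and the reproducer step are placeholders, not necessarily what was used here):

    # we are looking for the commit that *fixes* the issue, so swap the bisect terms
    git bisect start --term-old=broken --term-new=fixed
    git bisect broken v5.13    # Win2019 SATA install hangs on this kernel
    git bisect fixed v5.15     # reproducer works again here
    # build each candidate kernel, rerun the reproducer, then mark the result:
    git bisect fixed           # or: git bisect broken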