From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <pve-devel-bounces@lists.proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [IPv6:2a01:7e0:0:424::9])
	by lore.proxmox.com (Postfix) with ESMTPS id B7ACA1FF15C
	for <inbox@lore.proxmox.com>; Wed, 30 Oct 2024 17:49:35 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
	by firstgate.proxmox.com (Proxmox) with ESMTP id B6B871DA1C;
	Wed, 30 Oct 2024 17:49:38 +0100 (CET)
Message-ID: <b5a7b0e5-91db-4fd1-8783-815c8b71a7fd@proxmox.com>
Date: Wed, 30 Oct 2024 17:49:34 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
To: Thomas Lamprecht <t.lamprecht@proxmox.com>,
 Proxmox VE development discussion <pve-devel@lists.proxmox.com>
References: <20241025111304.99680-1-f.weber@proxmox.com>
 <20241025111304.99680-2-f.weber@proxmox.com>
 <9ffcd2a7-54c6-43b4-8e11-3a8f7bdbdfeb@proxmox.com>
Content-Language: en-US
From: Friedrich Weber <f.weber@proxmox.com>
In-Reply-To: <9ffcd2a7-54c6-43b4-8e11-3a8f7bdbdfeb@proxmox.com>
X-SPAM-LEVEL: Spam detection results:  0
 AWL -0.026 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 RCVD_IN_VALIDITY_CERTIFIED_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to
 Validity was blocked. See
 https://knowledge.validity.com/hc/en-us/articles/20961730681243 for more
 information.
 RCVD_IN_VALIDITY_RPBL_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to
 Validity was blocked. See
 https://knowledge.validity.com/hc/en-us/articles/20961730681243 for more
 information.
 RCVD_IN_VALIDITY_SAFE_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to
 Validity was blocked. See
 https://knowledge.validity.com/hc/en-us/articles/20961730681243 for more
 information.
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pve-devel] [PATCH storage 1/2] fix #5779: rbd: allow to pass
 custom krbd map options
X-BeenThere: pve-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-devel/>
List-Post: <mailto:pve-devel@lists.proxmox.com>
List-Help: <mailto:pve-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel>, 
 <mailto:pve-devel-request@lists.proxmox.com?subject=subscribe>
Reply-To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: pve-devel-bounces@lists.proxmox.com
Sender: "pve-devel" <pve-devel-bounces@lists.proxmox.com>

On 30/10/2024 09:41, Thomas Lamprecht wrote:
> Am 25/10/2024 um 13:13 schrieb Friedrich Weber:
>> When KRBD is enabled for an RBD storage, the storage plugin calls out
>> to `rbd map` to map an RBD image as a block device on the host.
>> Sometimes it might be necessary to pass custom options to `rbd map`.
>> For instance, in some setups with Windows VMs, KRBD logs `bad
>> crc/signature` and VM performance is degraded unless the `rxbounce`
>> option is enabled, as reported in the forum [1].
>>
>> To allow users to specify custom options for KRBD, introduce a
>> corresponding `krbd-map-options` property to the RBD plugin. The
>> property is designed to only accept a supported set of map options.
>> For now, this is only the `rxbounce` map option, but the supported set
>> can be extended in the future.
>>
>> The reasoning for constraining the supported set of map options
>> instead of allowing to pass a free-form option string is as follows:
>> If `rxbounce` turns out to be a sensible default, accepting a
>> free-form option string now will make it hard to switch over the
>> default to `rxbounce` while still allowing users to disable `rxbounce`
>> if needed. This would require scanning the free-form string for a
>> `norxbounce` or similar, which is cumbersome.
> 
> Reading the Ceph KRBD option docs [0] it seems a bit like it might
> be valid to always enable this for OS type Windows? Which could save
> us an option here and avoid doing this storage wide.

I don't think the 'bad crc/signature' errors necessarily occur for each
and every Windows VM on KRBD. But then again, I just set up a Windows
Server 2022 VM on KRBD and got ~10 of those errors quite quickly while
performing innocuous actions (opening the browser and the like). Also,
some users recently reported [1] needing rxbounce.

So yes, enabling rxbounce for all Windows VM disks might be a good
alternative, but as Fabian points out, this technically isn't possible
at the moment, because activate_volume doesn't know the corresponding
VM disk's ostype.
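
For reference, the constrained allow-list approach the patch takes (as
opposed to accepting a free-form option string) could be sketched as
follows. This is a hypothetical Python sketch, not the actual Perl
plugin code; the names `SUPPORTED_MAP_OPTIONS` and
`parse_krbd_map_options` are made up for illustration:

```python
# Allow-list of krbd map options the property accepts; for now only
# rxbounce, but the set can be extended later.
SUPPORTED_MAP_OPTIONS = {"rxbounce"}

def parse_krbd_map_options(value: str) -> list[str]:
    """Split a comma-separated option string, rejecting unsupported options."""
    options = [o.strip() for o in value.split(",") if o.strip()]
    for opt in options:
        if opt not in SUPPORTED_MAP_OPTIONS:
            raise ValueError(f"unsupported krbd map option: {opt}")
    # The validated options would then be appended to the `rbd map`
    # invocation, e.g. via `-o <option>[,<option>...]`.
    return options
```

Because the value is validated against a known set rather than passed
through verbatim, a later default flip to rxbounce would only need a
`norxbounce` entry added to the set, instead of scanning free text.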

>> If users need to set a map option that `krbd-map-options` does not
>> support (yet), they can alternatively set the RBD config option
>> `rbd_default_map_options` [2].
> 
> But that would work now already? So this is basically just to expose it
> directly in the PVE (UI) stack?

In my tests, setting `rbd_default_map_options` works for enabling
rxbounce. A forum user reported problems with that approach and I asked
for more details [2], but I haven't heard back yet.
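
For readers hitting this now, that workaround would roughly look like
the following ceph.conf fragment (a sketch assuming a default Ceph
setup; the right client section depends on your configuration):

```ini
[client]
# Apply rxbounce to every krbd map performed by this client.
# Equivalently: ceph config set client rbd_default_map_options rxbounce
rbd_default_map_options = rxbounce
```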

> One reason I'm not totally happy with such stuff is that storage wide is
> quite a big scope; users might then tend to configure the same Ceph pool as
> multiple PVE storages, something that can have bad side effects.
> We basically had this issue for when the krbd flag was added first, then
> it was an "always use krbd or never use krbd" flag, now it's rather an
> "always use krbd or else use what works (librbd for VMs and krbd for CTs)"
> flag, and a big reason was that otherwise one would need two pools or,
> worse, exposing the same pool twice to PVE. This patch feels a bit like
> going slightly back to that direction, albeit it's not 1:1 the same and
> it might be fine, but I'd also like to have the alternatives evaluated a
> bit more closely before going this route.

Yeah, I see the point.

Of course, another alternative is enabling `rxbounce` unconditionally,
as initially requested in [1]. I'm hesitant to do that because, from
reading its description, I'd expect it could have a performance impact --
probably small, if any, but this should be checked before changing the
default.

[1] https://forum.proxmox.com/threads/155741/
[2] https://forum.proxmox.com/threads/155741/post-715664


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel