From: Friedrich Weber <f.weber@proxmox.com>
To: Thomas Lamprecht <t.lamprecht@proxmox.com>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH storage 1/2] fix #5779: rbd: allow to pass custom krbd map options
Date: Wed, 30 Oct 2024 17:49:34 +0100
Message-ID: <b5a7b0e5-91db-4fd1-8783-815c8b71a7fd@proxmox.com>
In-Reply-To: <9ffcd2a7-54c6-43b4-8e11-3a8f7bdbdfeb@proxmox.com>

On 30/10/2024 09:41, Thomas Lamprecht wrote:
> On 25/10/2024 at 13:13, Friedrich Weber wrote:
>> When KRBD is enabled for an RBD storage, the storage plugin calls out
>> to `rbd map` to map an RBD image as a block device on the host.
>> Sometimes it might be necessary to pass custom options to `rbd map`.
>> For instance, in some setups with Windows VMs, KRBD logs `bad
>> crc/signature` and VM performance is degraded unless the `rxbounce`
>> option is enabled, as reported in the forum [1].
>>
>> To allow users to specify custom options for KRBD, introduce a
>> corresponding `krbd-map-options` property to the RBD plugin. The
>> property is designed to only accept a supported set of map options.
>> For now, this is only the `rxbounce` map option, but the supported set
>> can be extended in the future.
>>
>> The reasoning for constraining the supported set of map options
>> instead of allowing users to pass a free-form option string is as
>> follows: If `rxbounce` turns out to be a sensible default, accepting a
>> free-form option string now will make it hard to switch the default
>> over to `rxbounce` while still allowing users to disable `rxbounce`
>> if needed. This would require scanning the free-form string for
>> `norxbounce` or similar, which is cumbersome.
> 
> Reading the Ceph KRBD option docs [0] it seems a bit like it might
> be valid to always enable this for OS type Windows? That could save
> us an option here and avoid doing this storage-wide.

I don't think the 'bad crc/signature' errors necessarily occur for each
and every Windows VM on KRBD. But then again, I just set up a Windows
Server 2022 VM on KRBD and got ~10 of those quite quickly, with
innocuous actions (opening the browser and the like). Also some users
recently reported [1] the need for rxbounce.
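
(For anyone who wants to check their own setup: the messages end up in
the host's kernel log, so something like the following should find them
-- the exact message format may vary by kernel version:)

    journalctl -k | grep -i 'bad crc/signature'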

So yes, enabling rxbounce for all Windows VM disks might be a good
alternative, but as Fabian points out, technically this isn't really
possible at the moment, because activate_volume doesn't know about the
corresponding VM disk's ostype.
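
For completeness, with this patch the setting would be storage-wide,
i.e. roughly the following in /etc/pve/storage.cfg (illustrative
sketch; everything except the proposed `krbd-map-options` property is
made up), which should then result in a map call along the lines of
`rbd map -o rxbounce <pool>/<image>`:

    rbd: ceph-vm
        content images
        pool vm-disks
        krbd 1
        krbd-map-options rxbounce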

>> If users need to set a map option that `krbd-map-options` does not
>> support (yet), they can alternatively set the RBD config option
>> `rbd_default_map_options` [2].
> 
> But that would already work now? So this is basically just to expose it
> directly in the PVE (UI) stack?

In my tests, setting `rbd_default_map_options` works for enabling
rxbounce. A forum user reported problems with that approach and I asked
for more details [2], but I haven't heard back yet.
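
(For reference, the workaround I tested was roughly the following; I
used the global `client` scope, narrower scopes may be more appropriate
depending on the setup:)

    # via the ceph config database:
    ceph config set client rbd_default_map_options rxbounce

    # or equivalently in ceph.conf:
    [client]
    rbd_default_map_options = rxbounce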

> One reason I'm not totally happy with such stuff is that storage wide is
> quite a big scope; users might then tend to configure the same Ceph pool as
> multiple PVE storages, something that can have bad side effects.
> We basically had this issue when the krbd flag was first added: then
> it was an "always use krbd or never use krbd" flag, now it's rather an
> "always use krbd or else use what works (librbd for VMs and krbd for CTs)"
> flag, and a big reason was that otherwise one would need two pools or,
> worse, exposing the same pool twice to PVE. This patch feels a bit like
> going slightly back to that direction, albeit it's not 1:1 the same and
> it might be fine, but I'd also like to have the alternatives evaluated a
> bit more closely before going this route.

Yeah, I see the point.

Of course, another alternative is enabling `rxbounce` unconditionally,
as initially requested in [1]. I'm hesitant to do that because, from
reading its description, I'd expect it could have a performance impact
-- probably small, if any, but this should be checked before changing
the default.
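
(If someone wants to measure this: a rough comparison could be to map a
scratch image once without and once with `rxbounce` and run fio against
the mapped device each time -- sketch only, image name, device path and
job parameters are just examples, and note that the random writes
destroy the image's contents:)

    rbd map vm-disks/scratch                  # baseline run
    fio --name=rxbounce-test --filename=/dev/rbd0 --ioengine=libaio \
        --direct=1 --rw=randrw --bs=4k --iodepth=32 --runtime=60 \
        --time_based --group_reporting
    rbd unmap /dev/rbd0
    rbd map -o rxbounce vm-disks/scratch      # repeat fio, compare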

[1] https://forum.proxmox.com/threads/155741/
[2] https://forum.proxmox.com/threads/155741/post-715664

