Subject: Re: [pve-devel] [PATCH storage 1/2] fix #5779: rbd: allow to pass custom krbd map options
From: Fabian Grünbichler
To: Proxmox VE development discussion, Thomas Lamprecht, Friedrich Weber
Date: Wed, 30 Oct 2024 14:29:47 +0100 (CET)
Message-ID: <1234079298.5156.1730294987348@webmail.proxmox.com>
In-Reply-To: <9ffcd2a7-54c6-43b4-8e11-3a8f7bdbdfeb@proxmox.com>
References: <20241025111304.99680-1-f.weber@proxmox.com>
 <20241025111304.99680-2-f.weber@proxmox.com>
 <9ffcd2a7-54c6-43b4-8e11-3a8f7bdbdfeb@proxmox.com>

> Thomas Lamprecht wrote on 30.10.2024 09:41 CET:
>
> On 25/10/2024 at 13:13, Friedrich Weber wrote:
> > When KRBD is enabled for an RBD storage, the storage plugin calls out
> > to `rbd map` to map an RBD image as a block device on the host.
> > Sometimes it might be necessary to pass custom options to `rbd map`.
> > For instance, in some setups with Windows VMs, KRBD logs `bad
> > crc/signature` and VM performance is degraded unless the `rxbounce`
> > option is enabled, as reported in the forum [1].
> >
> > To allow users to specify custom options for KRBD, introduce a
> > corresponding `krbd-map-options` property for the RBD plugin. The
> > property is designed to only accept a supported set of map options.
> > For now, this is only the `rxbounce` map option, but the supported set
> > can be extended in the future.
> >
> > The reasoning for constraining the supported set of map options
> > instead of accepting a free-form option string is as follows:
> > if `rxbounce` turns out to be a sensible default, accepting a
> > free-form option string now would make it hard to switch the default
> > over to `rxbounce` while still allowing users to disable `rxbounce`
> > if needed. This would require scanning the free-form string for a
> > `norxbounce` or similar, which is cumbersome.
>
> Reading the Ceph KRBD option docs [0], it seems like it might be valid
> to always enable this for OS type Windows? That could save us an
> option here and avoid doing this storage-wide.
>
> [0]: https://docs.ceph.com/en/reef/man/8/rbd/#kernel-rbd-krbd-options
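(for illustration: a manual map with that option enabled would be
something along the lines of

    rbd map -o rxbounce <pool>/<image>

the exact command line the plugin builds when activating a volume has
more arguments, but that is effectively what the proposed storage
option translates to.)
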
> > If users need to set a map option that `krbd-map-options` does not
> > support (yet), they can alternatively set the RBD config option
> > `rbd_default_map_options` [2].
>
> But that would work now already? So this is basically just about
> exposing it directly in the PVE (UI) stack?
>
> One reason I'm not totally happy with such stuff is that storage-wide
> is quite a big scope; users might then tend to configure the same Ceph
> pool as multiple PVE storages, something that can have bad side
> effects. We basically had this issue when the krbd flag was first
> added: back then it was an "always use krbd or never use krbd" flag,
> while now it's rather an "always use krbd, or else use what works
> (librbd for VMs and krbd for CTs)" flag, and a big reason for that was
> that otherwise one would need two pools or, worse, to expose the same
> pool twice to PVE. This patch feels a bit like going slightly back in
> that direction. Albeit it's not 1:1 the same and it might be fine, I'd
> also like to have the alternatives evaluated a bit more closely before
> going this route.

that would require a way to pass this information through via
PVE::Storage::activate_volumes, which currently doesn't exist.. and of
course, in a way this would increase the coupling between (in this
case) qemu-server and pve-storage. maybe it would make sense to
evaluate whether we have other use cases for such a mechanism, and
decide based on that?

in any case, if the option stays in pve-storage as proposed in this
series, it seems its format should be an enum-string(-list) instead of
a manual verify sub?
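to illustrate - a rough sketch only, with the format name and exact
wiring made up for this example, not taken from the series:

    # assumes `use PVE::JSONSchema;` - register the supported values as
    # a named format; the '-list' variant is then handled by the
    # generic comma-separated list handling in the schema code
    my @krbd_map_options = ('rxbounce');

    PVE::JSONSchema::register_format('pve-krbd-map-option', sub {
        my ($option, $noerr) = @_;
        # membership check against the supported set, i.e. the "enum"
        return $option if grep { $_ eq $option } @krbd_map_options;
        return undef if $noerr;
        die "unsupported krbd map option '$option'\n";
    });

    # in the RBD plugin's properties():
    'krbd-map-options' => {
        description => "Additional krbd map options (currently only 'rxbounce').",
        type => 'string',
        format => 'pve-krbd-map-option-list',
    },

that way the supported set lives in the schema and is enforced by the
generic config parsing, rather than in plugin-specific validation code.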