From: Aaron Lauterer
To: Alwin Antreich, Proxmox VE development discussion
Date: Thu, 27 Jan 2022 17:28:18 +0100
Message-ID: <0e3081d0-3d09-798a-e4d2-209db646ae75@proxmox.com>
In-Reply-To: <00b53f8566061be26bdf770332245ea9@antreich.com>
Subject: Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools

On 1/27/22 16:41, Alwin Antreich wrote:
> January 27, 2022 12:27 PM, "Aaron Lauterer" wrote:
>
>> Thanks for the hint, as I wasn't aware of it. It will not be considered for PVE-managed Ceph
>> though, so it is not really an option here. [0]
>>
>> [0] https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/CephConfig.pm;h=c388f025b409c660913c08276376dda0fba2c6c;hb=HEAD#l192
>
> That's where the db config would work.
>
>> What these approaches have in common is that we spread the config over multiple places and
>> cannot set different data pools for different storages.
>
> Yes indeed, it adds to the fragmentation. But this conf file exists per storage, so a data pool
> per storage is already possible.

Yeah, you are right: for external Ceph clusters, with the extra config file, this would already be configurable per storage.

>
>> I would rather keep the data pool stored in our storage.cfg and apply the parameter where needed.
>> From what I can tell, I missed the image clone in this patch, where the data-pool also needs to
>> be applied.
>> But this way we have the settings for that storage in one place we control and are also able to
>> have different EC pools for different storages. Not that I expect it to happen a lot in practice,
>> but you never know.
>
> This sure is a good place. But to argue in favor of a separate config file. :)
>
> Wouldn't it make sense to have a parameter for a `client.conf` in the storage definition?
> Or maybe an inherent place like it already exists. This would allow setting not only the data
> pool, but also adjusting client caching, timeouts, debug levels, ...? [0] The benefit is mostly
> for users who do not have administrative access to their Ceph cluster.
>

Correct me if I got something wrong, but adding custom config settings for an external Ceph cluster, which "I" as PVE admin might only have limited access to, is already possible via the previously mentioned `/etc/pve/priv/ceph/.conf`. And since I have to set it up manually, I am aware of it.

In the case of hyperconverged setups, I can add anything I want to `/etc/pve/ceph.conf`, so there is no immediate need for a custom config file per storage for things like changing debug levels and so on.

Anything that touches the storage setup should rather be stored in storage.cfg, and the option of where the data objects are stored falls into that category IMO.

Of course, the downside is that if I run some commands manually (for example `rbd create`), I will have to provide the `--data-pool` parameter myself. But even with a custom config file, I would have to make sure to pass it via the `-c` parameter for it to have any effect. And since the default ceph.conf would not be used anymore, I would also need to add the mon list and auth parameters myself. So not much is gained there AFAICS versus adding it to /etc/pve/ceph.conf directly.

Besides the whole "where to store the data-pool parameter" issue, having custom client configs per storage would most likely be its own feature request, basically extending the current mechanism to hyperconverged storages. Though that would mean some kind of config merging, as the hyperconverged situation relies heavily on the default Ceph config file.

I still see the custom config file as an option for the admin to add custom options, not as a place to spread the PVE-managed settings when that can be avoided.

> Hyper-converged setups can store these settings in the config db.
> Each storage would need its own user to separate the settings.

Now that would open a whole different box of changing how hyperconverged Ceph clusters are set up ;)

>
> Thanks for listening to my 2 cents. ;)

It's always good to hear other opinions from a different perspective, to check whether one is missing something, or at least to think it through even more :)

>
> Cheers,
> Alwin
>
> [0] https://docs.ceph.com/en/latest/cephfs/client-config-ref
> https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#
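For context, the storage.cfg approach argued for above would amount to an entry roughly like the following. This is only a sketch: the storage ID, pool names, and monitor addresses are made up, and the `data-pool` option name is taken from the patch under discussion.

```
rbd: my-ec-storage
        content images,rootdir
        pool rbd_metadata
        data-pool rbd_ec_data
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        username admin
```

With such an entry, the storage plugin would pass `--data-pool rbd_ec_data` along to `rbd create` (and, as noted above, to image clones as well), so that image metadata stays on the replicated `pool` while the data objects land on the erasure coded pool.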