From: "Alwin Antreich"
To: "Aaron Lauterer", "Proxmox VE development discussion"
Date: Thu, 27 Jan 2022 15:41:12 +0000
Subject: Re: [pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools

January 27, 2022 12:27 PM, "Aaron Lauterer" wrote:

> Thanks for the hint, as I wasn't aware of it. It will not be considered for PVE-managed Ceph
> though, so not really an option here. [0]
>
> [0]
> https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/CephConfig.pm;h=c388f025b409c660913c08276376dda0fba2c6c;hb=HEAD#l192

That's where the db config would work.
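For illustration (the user and pool names are made up), pinning a data pool for one storage's client user via the config db would look roughly like this:

    # store the default RBD data pool for a dedicated storage user in the config db
    ceph config set client.storage-a rbd_default_data_pool ec-data-a
    # read the value back to verify
    ceph config get client.storage-a rbd_default_data_pool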
> What these approaches do have in common is that we spread the config over multiple places and
> cannot set different data pools for different storages.

Yes indeed, it adds to the fragmentation. But this conf file exists per storage, so a data pool per storage is already possible.

> I'd rather keep the data pool stored in our storage.cfg and apply the parameter where needed. From
> what I can tell, I missed the image clone in this patch, where the data-pool also needs to be
> applied.
> But this way we have the settings for that storage in one place we control and are also able to
> have different EC pools for different storages. Not that I expect it to happen a lot in practice,
> but you never know.

This sure is a good place. But let me argue in favor of a separate config file. :)

Wouldn't it make sense to have a parameter for a `client.conf` in the storage definition? Or maybe an implicit location like the one that already exists. This would allow not only setting the data pool, but also adjusting client caching, timeouts, debug levels, ...? [0] The benefit is mostly for users who do not have administrative access to their Ceph cluster.

Hyper-converged setups can store these settings in the config db. Each storage would need its own user to keep the settings separate.

Thanks for listening to my 2 cents. ;)

Cheers,
Alwin

[0] https://docs.ceph.com/en/latest/cephfs/client-config-ref
https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#
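For illustration, such a per-storage client config could look roughly like this (the section and pool names are made up; the option names come from the Ceph references above):

    # hypothetical per-storage client config section
    [client.storage-a]
        # default data pool for newly created EC-backed images
        rbd default data pool = ec-data-a
        # client-side caching behaviour
        rbd cache = true
        # log/memory debug levels for the rbd subsystem
        debug rbd = 0/5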