From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
Aaron Lauterer <a.lauterer@proxmox.com>
Subject: [pve-devel] applied: [PATCH v2 storage] fix #1816: rbd: add support for erasure coded ec pools
Date: Fri, 4 Feb 2022 18:14:04 +0100 [thread overview]
Message-ID: <6154f7e9-6ed2-5956-792e-a8c52bd823e2@proxmox.com> (raw)
In-Reply-To: <20220128112241.3435277-1-a.lauterer@proxmox.com>
On 28.01.22 12:22, Aaron Lauterer wrote:
> The first step is to allocate rbd images correctly.
>
> The metadata objects still need to be stored in a replicated pool, but
> by providing the --data-pool parameter on image creation, we can place
> the data objects on the erasure coded (EC) pool.
>
> Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
> ---
> changes: add data-pool parameter in clone_image() if present
>
>
> Right now this only affects disk image creation and cloning.
> The EC pool needs to be created manually to test this.
>
> The Ceph blog about EC with RBD + CephFS gives a nice introduction and
> the necessary steps to set up such a pool [0].
>
> The steps needed are:
>
> - create EC profile (a k=2/m=1 profile is only useful for testing purposes
>   in a 3 node cluster, not something that should be considered for
>   production use!)
> # ceph osd erasure-code-profile set ec-21-profile k=2 m=1 crush-failure-domain=host
>
> - create a new pool with that profile
> # ceph osd pool create ec21pool erasure ec-21-profile
>
> - allow overwrite
> # ceph osd pool set ec21pool allow_ec_overwrites true
>
> - enable application rbd on the pool (the command in the blog seems to
> have gotten the order of parameters a bit wrong here)
> # ceph osd pool application enable ec21pool rbd
>
> - add storage configuration
> # pvesm add rbd ectest --pool <replicated pool> --data-pool ec21pool
>
> For the replicated pool, either create a new one without adding the PVE
> storage config or use a namespace to separate it from the existing pool.
>
> To create a namespace:
> # rbd namespace create <pool>/<namespace>
>
> Then add the '--namespace' parameter to the 'pvesm add' command.
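Taken together, the steps above can be sketched as one setup script. The pool,
profile, and namespace names are the examples from these notes (the replicated
pool name `rbdmeta` is made up for illustration); adjust everything to your
cluster before running:

```shell
#!/bin/sh
set -e

# EC profile: k=2/m=1 is for testing on a 3-node cluster only, not production
ceph osd erasure-code-profile set ec-21-profile k=2 m=1 crush-failure-domain=host

# EC pool using that profile, with overwrites enabled so RBD can use it
ceph osd pool create ec21pool erasure ec-21-profile
ceph osd pool set ec21pool allow_ec_overwrites true
ceph osd pool application enable ec21pool rbd

# separate replicated pool for the metadata objects, with a namespace
# to keep it apart from any existing RBD images in that pool
ceph osd pool create rbdmeta replicated
ceph osd pool application enable rbdmeta rbd
rbd namespace create rbdmeta/ectest

# PVE storage: metadata in the replicated pool, data on the EC pool
pvesm add rbd ectest --pool rbdmeta --data-pool ec21pool --namespace ectest
```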
>
> To check if the objects are stored correctly you can run rados:
>
> # rados -p <pool> ls
>
> This should only show metadata objects
>
> # rados -p <ec pool> ls
>
> This should then show only `rbd_data.xxx` objects.
> If you configured a namespace, you also need to add the `--namespace`
> parameter to the rados command.
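For example, with the placeholder names from the setup above (the data objects
inherit the image's namespace, so depending on your setup the EC pool listing
may need the `--namespace` flag as well):

```shell
# metadata objects (rbd_header.*, rbd_directory, ...) in the replicated pool
rados -p rbdmeta --namespace ectest ls

# data objects (rbd_data.*) on the EC pool
rados -p ec21pool --namespace ectest ls
```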
>
>
> [0] https://ceph.io/en/news/blog/2017/new-luminous-erasure-coding-rbd-cephfs/
>
argh, above should have been in the commit message and I overlooked that..
>
> PVE/Storage/RBDPlugin.pm | 23 +++++++++++++----------
> 1 file changed, 13 insertions(+), 10 deletions(-)
>
>
applied, thanks! Made a few follow-ups to improve readability; note that in a
Perl list assignment, the last array captures everything remaining, e.g.:

    my ($a, @foo) = ('a', 'all', @this, 'will', @be_in_foo);
Thread overview: 2+ messages
2022-01-28 11:22 [pve-devel] " Aaron Lauterer
2022-02-04 17:14 ` Thomas Lamprecht [this message]