* Re: [PVE-User] KRBD vs librbd for qemu kvm vms
From: Thomas Lamprecht @ 2021-02-25 8:45 UTC
To: Mark Adams, PVE User List
Hi,
On 24.02.21 20:25, Mark Adams wrote:
> Hi All,
>
> I have done some research on this, but I can't find any resources that
> answer my specific questions - so I am hoping that everyone on the list can
> help out.
>
> In one specific use case, I have a pve 5.4-11 cluster using ceph 12.2.12.
> The main reason I haven't upgraded this is because of the very manual
> process to upgrade ceph and this cluster has quite a few osds. Also it will
> be replaced in the not too distant future.
>
> On this version, should I be using KRBD or librbd for my qemu vms? I have
> read a number of posts/articles that say it performs better as KRBD, but
> then also see some which say it should only be for containers or bare metal.
>
General trade-off:
* KRBD: a bit faster overall; some operations (e.g., snapshots) may see
larger improvements than others.
* librbd: matches the Ceph version in use, so it normally supports all
current Ceph RBD features. I'm actually not too sure whether any are
still unsupported by the kernel; IIRC the 5.4 kernel from PVE 6.x
works out fine with all features.
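To see which features an image actually has enabled, and whether the kernel client copes with them, something like the following can help. This is just a sketch; the pool and image names are placeholders for your setup:

```shell
# List the features enabled on an RBD image (pool/image names are examples).
rbd info mypool/vm-100-disk-0

# If mapping via KRBD fails because of an unsupported feature, it can be
# disabled per image - check first that you do not rely on it!
rbd feature disable mypool/vm-100-disk-0 object-map fast-diff deep-flatten

# A test map/unmap on one node shows whether the kernel client handles it:
rbd map mypool/vm-100-disk-0
rbd unmap /dev/rbd0
```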
NOTE: you talk about PVE 5.4, which was replaced by 6.x over 1.5 years ago
and has been EOL for over six months, so I no longer have all
version-specific pitfalls in mind. Please keep that in mind when reading
this; I tried hard to remember any issues, but over that time span my
memory gets unreliable, so I'd highly recommend testing any change before
you employ it in production.
> Additionally if I were to switch (Tick the KRBD box for the specific RBD
> storage) what is the correct procedure to apply this? Simply select it,
> then reboot each node?
It will then be used for the next VM start or disk hotplug; no reboot necessary.
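Besides the checkbox in the web UI, the flag can also be set on the CLI. A sketch, assuming the RBD storage is called "ceph-rbd" (the storage ID is a placeholder):

```shell
# Enable KRBD for an existing RBD storage (storage ID is an example).
pvesm set ceph-rbd --krbd 1

# The corresponding entry in /etc/pve/storage.cfg then carries:
#   rbd: ceph-rbd
#       ...
#       krbd 1
```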
>
> This is an HA cluster, so, are KRBD and librbd interchangeable between
> hosts in the cluster? can I live migrate the VMs on to another node after I
> enable KRBD then simply live migrate them back after I reboot before I then
> do the same for the next node in the cluster?
You should be able to do so; a reboot is actually not required. Enable the
flag and do a merry round of migrations to get each VM started up with KRBD.
But this is so easy to test that it would be unwise not to do so beforehand:
set up a test VM, if you don't have one lying around already, and test with
that one.
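That round of migrations could look like this on the CLI; the VM ID and node names are examples:

```shell
# Live migrate a VM away and back; once the krbd flag is set, each
# migration target (or fresh start) maps the disk via KRBD.
qm migrate 100 node2 --online
# ... then, from node2, move it back:
qm migrate 100 node1 --online
```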
>
> Lastly, is the performance of KRBD really that much better? Is there any
> downsides to using it vs librbd? I guess I still don't understand why there
> are 2 options if KRBD performs better?
Why don't you compare for yourself? Boot a modern version of any distro
that ships fio and run the same benchmark with KRBD on and off.
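For example, a simple random-read benchmark inside the guest could look like this; the device path and sizes are just examples, adjust them to a disk you can safely hammer:

```shell
# 4k random reads against the VM's test disk, bypassing the page cache.
# Run once with krbd enabled and once without, then compare IOPS/latency.
fio --name=krbd-test --filename=/dev/vdb --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting
```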
Performance is not everything ;-) User-space crashes won't bring down your
whole host, for example. And as said, the user-space library can adopt new
features more easily, which can include performance-relevant features but
also ones with other advantages.
cheers,
Thomas