public inbox for pve-user@lists.proxmox.com
* [PVE-User] Erasure coded pool and "rbd sparsify" - laggy pgs
@ 2023-10-02 17:03 Fabrizio Cuseo
  0 siblings, 0 replies; 2+ messages in thread
From: Fabrizio Cuseo @ 2023-10-02 17:03 UTC (permalink / raw)
  To: Proxmox VE user list

Hello.

I am trying for the first time to set up an erasure coded pool with PVE 8.0.


The cluster is 6 hosts, 24 OSDs each (12 SSD + 12 SAS), 4x10Gbit.

One pool is replica 3 on the ssd device class (2048 PGs).
Another pool is replica 3 on the hdd device class (0.45 target ratio, 1024 PGs).
The last one is an EC pool (4+2) sharing the same hdd class drives (0.5 target ratio, 256 PGs).
Of course, pveceph also creates the EC metadata pool: replica 3, hdd class, 0.05 target ratio, 32 PGs.
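
For reference, this is roughly how such a layout can be created with pveceph (pool and rule names below are just examples and the syntax is from memory, so double check "pveceph pool create --help"):

  # CRUSH rule that pins a replicated pool to the hdd device class
  ceph osd crush rule create-replicated replicated_hdd default host hdd

  # replicated pool on the hdd class
  pveceph pool create rbd_hdd --crush_rule replicated_hdd --size 3 \
      --pg_num 1024 --target_size_ratio 0.45

  # 4+2 EC data pool on the same hdd class; pveceph also creates the
  # replicated metadata pool for it (rbd_ec-metadata, if I remember
  # the naming right)
  pveceph pool create rbd_ec --erasure-coding k=4,m=2,device-class=hdd \
      --pg_num 256 --target_size_ratio 0.5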

If I move a disk from another storage to the EC pool, I lose the sparse allocation, so I need to run "rbd sparsify" on the RBD image.
But when I do it on the EC pool, I can see ceph-osd processes using 100% CPU, and the ceph-mgr log shows PGs in "active+laggy" state.
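
The sparsify step itself is roughly this (pool and image names are just examples; as far as I understand, with a pveceph EC storage the image header lives in the replicated metadata pool while its data objects go to the EC data pool):

  # provisioned vs. actually used size, before and after
  rbd du rbd_ec-metadata/vm-100-disk-0

  # reclaim the unused space; this is the step that drives the ceph-osd
  # processes to 100% CPU here
  rbd sparsify rbd_ec-metadata/vm-100-disk-0

  # the laggy state shows up in the PG states while it runs
  ceph pg dump pgs_brief | grep laggy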

I have no problem at all when I simply use the VM with its disk image on the erasure coded pool (but I think that is because normal use is less I/O intensive than a "sparsify").

Has anyone had the same problem, or does anyone use an EC pool for virtual machines? I would like to have some "slow volumes" for archiving purposes only, but I am afraid that the whole cluster could be impacted.

Regards, Fabrizio




* [PVE-User] Erasure coded pool and "rbd sparsify" - laggy pgs
       [not found]   ` <313d9a10-46ab-6fba-49c7-4d5a82a83ba6@coppint.com>
@ 2023-10-02 12:38     ` Fabrizio Cuseo
  0 siblings, 0 replies; 2+ messages in thread
From: Fabrizio Cuseo @ 2023-10-02 12:38 UTC (permalink / raw)
  To: PVE User List

Hello.

I am trying for the first time to set up an erasure coded pool with PVE 8.0.


The cluster is 6 hosts, 24 OSDs each (12 SSD + 12 SAS), 4x10Gbit.

One pool is replica 3 on the ssd device class (2048 PGs).
Another pool is replica 3 on the hdd device class (0.45 target ratio, 1024 PGs).
The last one is an EC pool (4+2) sharing the same hdd class drives (0.5 target ratio, 256 PGs).
Of course, pveceph also creates the EC metadata pool: replica 3, hdd class, 0.05 target ratio, 32 PGs.

If I move a disk from another storage to the EC pool, I lose the sparse allocation, so I need to run "rbd sparsify" on the RBD image.
But when I do it on the EC pool, I can see ceph-osd processes using 100% CPU, and the ceph-mgr log shows PGs in "active+laggy" state.

I have no problem at all when I simply use the VM with its disk image on the erasure coded pool (but I think that is because normal use is less I/O intensive than a "sparsify").

Has anyone had the same problem, or does anyone use an EC pool for virtual machines? I would like to have some "slow volumes" for archiving purposes only, but I am afraid that the whole cluster could be impacted.

Regards, Fabrizio 






end of thread, other threads:[~2023-10-02 17:13 UTC | newest]

Thread overview: 2+ messages
2023-10-02 17:03 [PVE-User] Erasure coded pool and "rbd sparsify" - laggy pgs Fabrizio Cuseo
     [not found] <8931df75-0813-3aac-0481-f51afd328f6c@coppint.com>
     [not found] ` <359756348.305396.1557210415135.JavaMail.zimbra@odiso.com>
     [not found]   ` <313d9a10-46ab-6fba-49c7-4d5a82a83ba6@coppint.com>
2023-10-02 12:38     ` Fabrizio Cuseo
