public inbox for pve-user@lists.proxmox.com
 help / color / mirror / Atom feed
* [PVE-User] New Disk on one node of Cluster.
@ 2022-02-16  8:52 Сергей Цаболов
  2022-02-16  9:03 ` Aaron Lauterer
       [not found] ` <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es>
  0 siblings, 2 replies; 6+ messages in thread
From: Сергей Цаболов @ 2022-02-16  8:52 UTC (permalink / raw)
  To: Proxmox VE user list

Hi to all.

I have a 7-node PVE cluster + Ceph storage.

On node 7 I added 2 new disks and want to make a specific new OSD pool on Ceph.

Is it possible to create a specific pool with the new disks?

Thanks

Sergey TS
Best Regards

_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PVE-User] New Disk on one node of Cluster.
  2022-02-16  8:52 [PVE-User] New Disk on one node of Cluster Сергей Цаболов
@ 2022-02-16  9:03 ` Aaron Lauterer
  2022-02-16  9:29   ` Сергей Цаболов
       [not found] ` <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es>
  1 sibling, 1 reply; 6+ messages in thread
From: Aaron Lauterer @ 2022-02-16  9:03 UTC (permalink / raw)
  To: Proxmox VE user list, Сергей Цаболов

You will need to use device classes. They are either set automatically depending on the type (HDD, SSD, NVMe) or you can define your own. If you create the OSDs via the Proxmox VE GUI, you can just type in a new device class name instead of selecting one of the predefined ones.

You then need to create rules that target the different device classes, as the default replicated rule will use all OSDs. Then assign all your pools the appropriate rule for the device class that they should use.

The Ceph docs have more details on how to change the device class of an existing OSD and how to create those rules: https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
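
For example, roughly like this (just a sketch; the OSD ID "osd.14", the
device class "sas2" and the pool name "vm.sas" are placeholders for your
setup):

  # set a custom device class on an existing OSD
  ceph osd crush rm-device-class osd.14
  ceph osd crush set-device-class sas2 osd.14

  # create a replicated rule that only targets that class
  ceph osd crush rule create-replicated replicated_sas2 default host sas2

  # point a pool at the new rule
  ceph osd pool set vm.sas crush_rule replicated_sas2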

Cheers,
Aaron

On 2/16/22 09:52, Сергей Цаболов wrote:
> Hi to all.
> 
> I have a 7-node PVE cluster + Ceph storage.
> 
> On node 7 I added 2 new disks and want to make a specific new OSD pool on Ceph.
> 
> Is it possible to create a specific pool with the new disks?
> 
> Thanks
> 
> Sergey TS
> Best Regards
> 
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user




^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PVE-User] New Disk on one node of Cluster.
       [not found] ` <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es>
@ 2022-02-16  9:24   ` Сергей Цаболов
       [not found]     ` <b0ace09c-d03b-f929-78a5-d7eebf936e2f@binovo.es>
  0 siblings, 1 reply; 6+ messages in thread
From: Сергей Цаболов @ 2022-02-16  9:24 UTC (permalink / raw)
  To: Eneko Lacunza, pve-user

Hi Eneko,

On 16.02.2022 11:58, Eneko Lacunza wrote:
> Hi Sergey,
>
> On 16/2/22 at 9:52, Сергей Цаболов wrote:
>>
>> I have a 7-node PVE cluster + Ceph storage.
>>
>> On node 7 I added 2 new disks and want to make a specific new OSD pool
>> on Ceph.
>>
>> Is it possible to create a specific pool with the new disks?
>
> You are adding 2 additional disks in each node, right?
No, I added the new disks on node 7, not on each node of the cluster.
>
> You can assign them to a new pool, creating custom crush rules.

Yes, I already know how to make new rules.

For a test, on one node I added 2 SSD disks and made a new rule:

  ceph osd crush rule create-replicated replicated_ssd default host ssd

With this rule I made a new pool, vm.ssd.
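
For the pool itself, if I remember the option names right, it was
something like this (pg_num 128 is just the value I chose):

  pveceph pool create vm.ssd --pg_num 128 --crush_rule replicated_ssd
  # or with plain ceph:
  ceph osd pool create vm.ssd 128 128 replicated replicated_ssd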
>
> Why do you want to use those disks for a different pool? What disks do 
> you have now, and what disk are the new? (for example, are all HDD or 
> SSD...)

I want to make a new pool with HDD (SAS) disks as dedicated storage for
some Windows Server VMs.

The existing pools are:

vm.pool - base pool for VM disks
cephfs_data - some disks, ISOs and other data
vm.ssd - new pool I made from the 2 SSD disks

I tested the Windows Server disk speed (read/write and RND4K Q32T1) with
CrystalDiskMark 8.0.4 x64.

If I configure the VM disk as SATA with SSD emulation, Cache: Write back,
and Discard, the sequential read/write speed is very good, something like:

SEQ1M Q8T1 1797.43/1713.07

SEQ1M Q1T1 1790.77/1350.55

but the RND4K Q32T1 and RND4K Q1T1 results are not good, very small.

After these tests I think that if I add the 2 new disks and configure
them as a specific pool, maybe my RND4K Q32T1 and RND4K Q1T1 speeds will
get better.
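
For reference, on a Linux guest I would try to reproduce the RND4K Q32T1
read test with fio, something like this (file name and size are
placeholders):

  fio --name=rnd4k-q32t1 --filename=testfile --size=1G \
      --ioengine=libaio --direct=1 --rw=randread --bs=4k \
      --iodepth=32 --numjobs=1 --runtime=60 --time_based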

Thank you

>
> Cheers
>
> Eneko Lacunza
> Zuzendari teknikoa | Director técnico
> Binovo IT Human Project
>
> Tel. +34 943 569 206 |https://www.binovo.es
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
>
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/

Sergey TS
Best Regards

_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PVE-User] New Disk on one node of Cluster.
  2022-02-16  9:03 ` Aaron Lauterer
@ 2022-02-16  9:29   ` Сергей Цаболов
  0 siblings, 0 replies; 6+ messages in thread
From: Сергей Цаболов @ 2022-02-16  9:29 UTC (permalink / raw)
  To: Aaron Lauterer, Proxmox VE user list

Hi Aaron,

Thank you for the answer. I made new rules:

For a test, on one node I added 2 SSD disks and made a new rule
(replicated_ssd):

  ceph osd crush rule create-replicated replicated_ssd default host ssd

With this rule I made a new pool, vm.ssd.

Cheers,
Sergey

On 16.02.2022 12:03, Aaron Lauterer wrote:
> You will need to use device classes. They are either set automatically 
> depending on the type (HDD, SSD, NVMe) or you can define your own. If 
> you create the OSDs via the Proxmox VE GUI, you can just type in a new 
> device class name instead of selecting one of the predefined ones.
>
> You then need to create rules that target the different device 
> classes, as the default replicated rule will use all OSDs. Then assign 
> all your pools the appropriate rule for the device class that they 
> should use.
>
> The Ceph docs have more details on how to change the device class of 
> an existing OSD and how to create those rules: 
> https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
>
> Cheers,
> Aaron
>
> On 2/16/22 09:52, Сергей Цаболов wrote:
>> Hi to all.
>>
>> I have a 7-node PVE cluster + Ceph storage.
>>
>> On node 7 I added 2 new disks and want to make a specific new OSD pool
>> on Ceph.
>>
>> Is it possible to create a specific pool with the new disks?
>>
>> Thanks
>>
>> Sergey TS
>> Best Regards
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
Sergey TS
Best Regards

_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PVE-User] New Disk on one node of Cluster.
       [not found]     ` <b0ace09c-d03b-f929-78a5-d7eebf936e2f@binovo.es>
@ 2022-02-16  9:54       ` Сергей Цаболов
       [not found]         ` <23cc8e89-9f08-9af1-8a35-eb786bf3993b@binovo.es>
  0 siblings, 1 reply; 6+ messages in thread
From: Сергей Цаболов @ 2022-02-16  9:54 UTC (permalink / raw)
  To: Eneko Lacunza, pve-user

Hi Eneko,

On 16.02.2022 12:33, Eneko Lacunza wrote:
>
> Hi Sergey,
>
> So, does this really make sense? If you put the new 2 disks in node7 
> in a pool, that data won't be able to survive node7 failure.

You are right, if node 7 fails that data won't be available.

But I thought that if the 2 disks/2 OSDs are added to a new pool, the
pool would be shared across all nodes.
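
To check this, I think commands like these show which rule a pool uses
and where its placement groups actually land (vm.ssd is my pool name):

  ceph osd pool get vm.ssd crush_rule
  ceph pg ls-by-pool vm.ssd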

>
> If you're trying to benchmark the disks, that wouldn't be a good test, 
> because in a real deployment disk IO for only one VM would be worse 
> (due to replication and network latencies).
Not only for one VM; I have 2 Windows VMs and more in the future.
>
> What IOPS are you getting in your 4K tests? You won't get near direct 
> disk IOPS...
Do I need to test the host disk or the VM disk?
>
> Did you try with multiple parallel VMs? Aggregate 4K results should be 
> much better :)
I will think about this approach, maybe it will work.
>
> Cheers
>
> On 16/2/22 at 10:24, Сергей Цаболов wrote:
>>
>> Hi Eneko,
>>
>> On 16.02.2022 11:58, Eneko Lacunza wrote:
>>> Hi Sergey,
>>>
>>> On 16/2/22 at 9:52, Сергей Цаболов wrote:
>>>>
>>>> I have a 7-node PVE cluster + Ceph storage.
>>>>
>>>> On node 7 I added 2 new disks and want to make a specific new OSD
>>>> pool on Ceph.
>>>>
>>>> Is it possible to create a specific pool with the new disks?
>>>
>>> You are adding 2 additional disks in each node, right?
>> No, I added the new disks on node 7, not on each node of the cluster.
>>>
>>> You can assign them to a new pool, creating custom crush rules.
>>
>> Yes, I already know how to make new rules.
>>
>> For a test, on one node I added 2 SSD disks and made a new rule:
>>
>>   ceph osd crush rule create-replicated replicated_ssd default host ssd
>>
>> With this rule I made a new pool, vm.ssd.
>>>
>>> Why do you want to use those disks for a different pool? What disks 
>>> do you have now, and what disk are the new? (for example, are all 
>>> HDD or SSD...)
>>
>> I want to make a new pool with HDD (SAS) disks as dedicated storage
>> for some Windows Server VMs.
>>
>> The existing pools are:
>>
>> vm.pool - base pool for VM disks
>> cephfs_data - some disks, ISOs and other data
>> vm.ssd - new pool I made from the 2 SSD disks
>>
>> I tested the Windows Server disk speed (read/write and RND4K Q32T1)
>> with CrystalDiskMark 8.0.4 x64.
>>
>> If I configure the VM disk as SATA with SSD emulation, Cache: Write
>> back, and Discard, the sequential read/write speed is very good,
>> something like:
>>
>> SEQ1M Q8T1 1797.43/1713.07
>>
>> SEQ1M Q1T1 1790.77/1350.55
>>
>> but the RND4K Q32T1 and RND4K Q1T1 results are not good, very small.
>>
>> After these tests I think that if I add the 2 new disks and configure
>> them as a specific pool, maybe my RND4K Q32T1 and RND4K Q1T1 speeds
>> will get better.
>>
>> Thank you
>>
>>>
>>> Cheers
>>>
>>> Eneko Lacunza
>>> Zuzendari teknikoa | Director técnico
>>> Binovo IT Human Project
>>>
>>> Tel. +34 943 569 206 |https://www.binovo.es
>>> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
>>>
>>> https://www.youtube.com/user/CANALBINOVO
>>> https://www.linkedin.com/company/37269706/
>
> Eneko Lacunza
> Zuzendari teknikoa | Director técnico
> Binovo IT Human Project
>
> Tel. +34 943 569 206 |https://www.binovo.es
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
>
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/

Sergey TS
Best Regards

_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PVE-User] New Disk on one node of Cluster.
       [not found]         ` <23cc8e89-9f08-9af1-8a35-eb786bf3993b@binovo.es>
@ 2022-02-24 12:29           ` Сергей Цаболов
  0 siblings, 0 replies; 6+ messages in thread
From: Сергей Цаболов @ 2022-02-24 12:29 UTC (permalink / raw)
  To: Eneko Lacunza, pve-user

Hi Eneko,

I did some tests and found that if one node is removed from the cluster
and the VM is restored on it, the VM disk performance is very good!

I have some ideas for testing other methods to improve performance.

My question is: if I add 1 or 2 SSD disks to all nodes and move the
*Ceph journal to SSD* disks, will the performance of the VMs and Ceph be
better?

Does Ceph work better and faster with the journal on SSD?

Does anyone have experience with moving the Ceph journal to SSD?

And how many GB of SSD are enough for the journal?
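
If I understand the PVE tooling correctly, creating an OSD with its
DB/WAL on a separate SSD would look something like this (the device
paths and the size in GB are placeholders):

  # HDD as OSD, RocksDB/WAL on a separate SSD
  pveceph osd create /dev/sdX --db_dev /dev/sdY --db_dev_size 60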


Thank you.


On 16.02.2022 12:59, Eneko Lacunza wrote:
> Hi Sergey,
>
> On 16/2/22 at 10:54, Сергей Цаболов wrote:
>>
>>> What IOPS are you getting in your 4K tests? You won't get near 
>>> direct disk IOPS...
>> Do I need to test the host disk or the VM disk?
>
> If you're worried about VM performance, then test VM disks... :)
>
> Cheers
>
> Eneko Lacunza
> Zuzendari teknikoa | Director técnico
> Binovo IT Human Project
>
> Tel. +34 943 569 206 |https://www.binovo.es
> Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
>
> https://www.youtube.com/user/CANALBINOVO
> https://www.linkedin.com/company/37269706/

Sergey TS
Best Regards

_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2022-02-24 12:35 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-16  8:52 [PVE-User] New Disk on one node of Cluster Сергей Цаболов
2022-02-16  9:03 ` Aaron Lauterer
2022-02-16  9:29   ` Сергей Цаболов
     [not found] ` <094b0da0-94b9-3b0f-3358-78d8e51561de@binovo.es>
2022-02-16  9:24   ` Сергей Цаболов
     [not found]     ` <b0ace09c-d03b-f929-78a5-d7eebf936e2f@binovo.es>
2022-02-16  9:54       ` Сергей Цаболов
     [not found]         ` <23cc8e89-9f08-9af1-8a35-eb786bf3993b@binovo.es>
2022-02-24 12:29           ` Сергей Цаболов

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox