* [PVE-User] ceph
@ 2021-09-08 12:46 Leandro Roggerone
2021-09-08 12:55 ` Gilberto Ferreira
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Leandro Roggerone @ 2021-09-08 12:46 UTC (permalink / raw)
To: PVE User List
Hi guys, I have a 2-node cluster working and I will add a third node to
the cluster.
I would like to know the benefits that Ceph storage can bring to my
existing cluster.
What is an easy / recommended way to implement it?
Which hardware should I consider using?
#############
Currently I'm facing the upgrade from PVE 6 to PVE 7.
Can having Ceph storage make this process easier?
Regards.
Leandro.
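For reference, the usual workflow for adding Ceph to an existing Proxmox cluster uses the pveceph tool; the sketch below only prints the steps as a dry run (the network and device names are placeholders, not taken from this thread):

```shell
# Dry-run sketch of the typical pveceph setup steps, run on each node.
# The subnet and disk device are placeholders -- adjust to your hardware.
ceph_setup_plan() {
  cat <<'EOF'
pveceph install
pveceph init --network 192.168.254.0/24
pveceph mon create
pveceph mgr create
pveceph osd create /dev/sdb
EOF
}
ceph_setup_plan
```

Printing the plan instead of executing it makes it safe to review before touching a live cluster.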
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PVE-User] ceph
2021-09-08 12:46 [PVE-User] ceph Leandro Roggerone
@ 2021-09-08 12:55 ` Gilberto Ferreira
2021-09-08 20:07 ` Alex K
2021-09-08 22:11 ` ic
2 siblings, 0 replies; 11+ messages in thread
From: Gilberto Ferreira @ 2021-09-08 12:55 UTC (permalink / raw)
To: Proxmox VE user list; +Cc: PVE User List
>> Which hardware should I consider using?
At least enterprise/datacenter-class SSDs and
10G network cards.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
Em qua., 8 de set. de 2021 às 09:47, Leandro Roggerone <
leandro@tecnetmza.com.ar> escreveu:
> Hi guys, I have a 2-node cluster working and I will add a third node to
> the cluster.
> I would like to know the benefits that Ceph storage can bring to my
> existing cluster.
> What is an easy / recommended way to implement it?
> Which hardware should I consider using?
>
> #############
>
> Currently I'm facing the upgrade from PVE 6 to PVE 7.
> Can having Ceph storage make this process easier?
>
> Regards.
> Leandro.,
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
* Re: [PVE-User] ceph
2021-09-08 12:46 [PVE-User] ceph Leandro Roggerone
2021-09-08 12:55 ` Gilberto Ferreira
@ 2021-09-08 20:07 ` Alex K
2021-09-08 22:11 ` ic
2 siblings, 0 replies; 11+ messages in thread
From: Alex K @ 2021-09-08 20:07 UTC (permalink / raw)
To: Proxmox VE user list; +Cc: PVE User List
On Wed, Sep 8, 2021, 15:46 Leandro Roggerone <leandro@tecnetmza.com.ar>
wrote:
> Hi guys, I have a 2-node cluster working and I will add a third node to
> the cluster.
> I would like to know the benefits that Ceph storage can bring to my
> existing cluster.
>
It brings high availability and live migration, as any cluster-aware shared
storage does, at the cost of having to take care of Ceph. One advantage is
also that you will hopefully have no downtime during maintenance.
> What is an easy / recommended way to implement it?
> Which hardware should I consider using?
>
> #############
>
> Currently I'm facing the upgrade from PVE 6 to PVE 7.
> Can having Ceph storage make this process easier?
>
> Regards.
> Leandro.
* Re: [PVE-User] ceph
2021-09-08 12:46 [PVE-User] ceph Leandro Roggerone
2021-09-08 12:55 ` Gilberto Ferreira
2021-09-08 20:07 ` Alex K
@ 2021-09-08 22:11 ` ic
2021-09-13 11:32 ` Leandro Roggerone
2 siblings, 1 reply; 11+ messages in thread
From: ic @ 2021-09-08 22:11 UTC (permalink / raw)
To: Proxmox VE user list; +Cc: PVE User List
Hi there,
> On 8 Sep 2021, at 14:46, Leandro Roggerone <leandro@tecnetmza.com.ar> wrote:
>
> I would like to know the benefits that Ceph storage can bring to my
> existing cluster.
> What is an easy / recommended way to implement it?
> Which hardware should I consider using?
First, HW.
Get two Cisco Nexus 3064PQ (they typically go for $600-700 for 48 10G ports) and two Intel X520-DA2 per server.
Hook up each port of the Intel cards to each of the Nexuses, getting full redundancy between network cards and switches.
Add 4x40G DAC cables between the switches; set up two as VPC peer-links, two as a simple L2 trunk (I can provide more details as to why if needed).
Use ports 0 from both NICs for Ceph, ports 1 for VM traffic. This way you get 2x10 Gbps for Ceph only and 2x10 Gbps for everything else, and if you lose one card or one switch, you still have 10 Gbps for each.
The benefits? With the default configuration, your data lives in 3 places. Also, scale-out. You know the expensive hyperconverged stuff (Nutanix and such)? You get that with this.
The performance is wild: I just moved my customers from a Proxmox cluster backed by a TrueNAS server (all-flash, 4x10 Gbps) to a 3-node cluster of AMD EPYC nodes with Ceph on local SATA SSDs, and the VMs started flying.
Keep your old storage infrastructure, whatever that is, for backups with PBS.
YMMV
Regards, ic
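The "data lives in 3 places" point also sets expectations for usable capacity: with a default replicated pool (size=3), usable space is roughly raw space divided by the replica count. A quick back-of-the-envelope check with illustrative numbers:

```shell
# Usable capacity of a 3x-replicated Ceph pool (illustrative numbers).
raw_tb=$(( 3 * 4 ))              # 3 nodes x 4 TB of SSD each = 12 TB raw
replicas=3                       # default pool size
usable_tb=$(( raw_tb / replicas ))
echo "${usable_tb} TB usable out of ${raw_tb} TB raw"
```

This ignores Ceph's own overhead and the free-space headroom needed for recovery, so plan for somewhat less in practice.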
* Re: [PVE-User] ceph
2021-09-08 22:11 ` ic
@ 2021-09-13 11:32 ` Leandro Roggerone
0 siblings, 0 replies; 11+ messages in thread
From: Leandro Roggerone @ 2021-09-13 11:32 UTC (permalink / raw)
To: Proxmox VE user list; +Cc: PVE User List
hi guys, your responses were very useful.
Let's suppose I have my 3 nodes running and forming a cluster.
Please confirm:
a - Can I add the Ceph storage at any time?
b - All nodes should be running the same PVE version?
c - All nodes should have 1 or more unused disks, with no hardware RAID,
to be included in Ceph?
Should those disks (c) be exactly the same in capacity, speed, and so on?
What can go wrong if I have 1 Gbps ports instead of 10 Gbps?
Regards.
Leandro
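On the 1 Gbps question, one concrete cost is recovery/rebalance time when a disk or node fails. A rough transfer-time estimate, ignoring protocol overhead and parallelism (numbers are illustrative, not from this thread):

```shell
# Rough time to re-replicate a given amount of data over one link.
transfer_seconds() {  # args: bytes, link speed in bits per second
  echo $(( $1 * 8 / $2 ))
}
tb=1000000000000   # 1 TB in bytes (decimal)
echo "1 Gbps:  $(transfer_seconds "$tb" 1000000000) s"    # 8000 s, ~2.2 h
echo "10 Gbps: $(transfer_seconds "$tb" 10000000000) s"   # 800 s
```

During those hours on 1 Gbps, recovery traffic also competes with client I/O, which is why a dedicated 10 Gbps Ceph network is the usual recommendation.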
El mié, 8 sept 2021 a las 19:21, ic (<lists@benappy.com>) escribió:
> Hi there,
>
> > On 8 Sep 2021, at 14:46, Leandro Roggerone <leandro@tecnetmza.com.ar>
> wrote:
> >
> > I would like to know the benefits that Ceph storage can bring to my
> existing
> > cluster.
> > What is an easy / recommended way to implement it?
> > Which hardware should I consider using?
>
> First, HW.
>
> Get two Cisco Nexus 3064PQ (they typically go for $600-700 for 48 10G
> ports) and two Intel X520-DA2 per server.
>
> Hook up each port of the Intel cards to each of the Nexuses, getting
> full redundancy between network cards and switches.
>
> Add 4x40G DAC cables between the switches; set up two as VPC peer-links, two
> as a simple L2 trunk (I can provide more details as to why if needed).
>
> Use ports 0 from both NICs for Ceph, ports 1 for VM traffic. This way you
> get 2x10 Gbps for Ceph only and 2x10 Gbps for everything else, and if you
> lose one card or one switch, you still have 10 Gbps for each.
>
> The benefits? With the default configuration, your data lives in 3 places.
> Also, scale-out. You know the expensive hyperconverged stuff
> (Nutanix and such)? You get that with this.
>
> The performance is wild: I just moved my customers from a Proxmox cluster
> backed by a TrueNAS server (all-flash, 4x10 Gbps) to a 3-node cluster of
> AMD EPYC nodes with Ceph on local SATA SSDs, and the VMs started flying.
>
> Keep your old storage infrastructure, whatever that is, for backups with
> PBS.
>
> YMMV
>
> Regards, ic
>
* Re: [PVE-User] ceph
[not found] ` <mailman.235.1673444838.458.pve-user@lists.proxmox.com>
@ 2023-01-11 15:51 ` Piviul
0 siblings, 0 replies; 11+ messages in thread
From: Piviul @ 2023-01-11 15:51 UTC (permalink / raw)
To: pve-user
On 1/11/23 14:46, Eneko Lacunza via pve-user wrote:
> Hi,
>
> El 11/1/23 a las 12:19, Piviul escribió:
>> On 1/11/23 10:39, Eneko Lacunza via pve-user wrote:
>>> You should change your public_network to 192.168.255.0/24 .
>>
>> So the public_network is the PVE communication network? Can I edit
>> /etc/pve/ceph.conf directly, and then corosync should propagate
>> ceph.conf to the other nodes?
>
> Sorry, I misread your info:
>
>> $ ip route
>> default via 192.168.64.1 dev vmbr0 proto kernel onlink
>> 192.168.64.0/20 dev vmbr0 proto kernel scope link src 192.168.70.30
>> 192.168.254.0/24 dev vmbr2 proto kernel scope link src 192.168.254.1
>> 192.168.255.0/24 dev vmbr1 proto kernel scope link src 192.168.255.1
>>
>> vmbr2 is the Ceph network, vmbr1 is the PVE network and vmbr0 is the
>> LAN network. So you suggest that I first add the 3 Ceph monitors using
>> IPs on the Ceph network and then destroy the 3 monitors that have LAN IPs?
>
> You should set it to 192.168.254.0/24, as that's your ceph net.
Many thanks Eneko, so in ceph.conf I have to set the cluster_network and
public_network to the same subnet? Furthermore, one last question... to
change the content of ceph.conf, can I edit it on just one of the PVE nodes?
Piviul
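Following Eneko's advice in this thread (set public_network to the Ceph subnet), the relevant ceph.conf lines would end up looking roughly like this; a sketch only, and note that mon_host and the per-monitor public_addr entries must also be changed together with the monitor recreation described elsewhere in the thread:

```ini
[global]
    cluster_network = 192.168.254.0/24
    public_network  = 192.168.254.0/24
```

Both options take a subnet in CIDR notation (not a host address), and having them on the same subnet is a valid configuration when a separate cluster network is not needed.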
* Re: [PVE-User] ceph
[not found] ` <mailman.232.1673430028.458.pve-user@lists.proxmox.com>
@ 2023-01-11 11:19 ` Piviul
[not found] ` <mailman.235.1673444838.458.pve-user@lists.proxmox.com>
0 siblings, 1 reply; 11+ messages in thread
From: Piviul @ 2023-01-11 11:19 UTC (permalink / raw)
To: pve-user
On 1/11/23 10:39, Eneko Lacunza via pve-user wrote:
> You should change your public_network to 192.168.255.0/24 .
So the public_network is the PVE communication network? Can I edit
/etc/pve/ceph.conf directly, and then corosync should propagate
ceph.conf to the other nodes?
>
> Then, one by one, remove a monitor and recreate it, check values for
> new monitor are on correct network.
>
> Finally restart one by one OSD services and check their listening IPs
> (they listen on public and private Ceph networks).
The OSD services on each Proxmox node?
Piviul
* Re: [PVE-User] ceph
[not found] ` <mailman.215.1673337884.458.pve-user@lists.proxmox.com>
@ 2023-01-10 13:29 ` Piviul
[not found] ` <mailman.232.1673430028.458.pve-user@lists.proxmox.com>
0 siblings, 1 reply; 11+ messages in thread
From: Piviul @ 2023-01-10 13:29 UTC (permalink / raw)
To: pve-user
On 1/10/23 09:04, Eneko Lacunza via pve-user wrote:
>
> I think you may have a wrong Ceph network definition in
> /etc/pve/ceph.conf, check for "public_network".
# cat /etc/pve/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 192.168.254.3/24
fsid = 332d7723-d16f-443f-947d-a5ab160e4fac
mon_allow_pool_delete = true
mon_host = 192.168.70.34 192.168.70.30 192.168.70.32
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 192.168.70.34/20
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring
[mds.pve01]
host = pve01
mds standby for name = pve
[mds.pve02]
host = pve02
mds_standby_for_name = pve
[mds.pve03]
host = pve03
mds_standby_for_name = pve
[mon.pve01]
public_addr = 192.168.70.30
[mon.pve02]
public_addr = 192.168.70.32
[mon.pve03]
public_addr = 192.168.70.34
> Also check what IP are listening on OSD daemons.
# netstat -tunlp | grep ceph-
tcp   0   0 192.168.254.1:6806   0.0.0.0:*   LISTEN   1760/ceph-osd
tcp   0   0 192.168.70.30:6806   0.0.0.0:*   LISTEN   1760/ceph-osd
tcp   0   0 192.168.254.1:6807   0.0.0.0:*   LISTEN   1760/ceph-osd
tcp   0   0 192.168.70.30:6807   0.0.0.0:*   LISTEN   1761/ceph-osd
tcp   0   0 192.168.254.1:6808   0.0.0.0:*   LISTEN   1762/ceph-osd
tcp   0   0 192.168.70.30:6808   0.0.0.0:*   LISTEN   1760/ceph-osd
tcp   0   0 192.168.254.1:6809   0.0.0.0:*   LISTEN   1762/ceph-osd
tcp   0   0 192.168.70.30:6809   0.0.0.0:*   LISTEN   1760/ceph-osd
tcp   0   0 192.168.254.1:6810   0.0.0.0:*   LISTEN   1762/ceph-osd
tcp   0   0 192.168.70.30:6810   0.0.0.0:*   LISTEN   1762/ceph-osd
tcp   0   0 192.168.254.1:6811   0.0.0.0:*   LISTEN   1762/ceph-osd
tcp   0   0 192.168.70.30:6811   0.0.0.0:*   LISTEN   1762/ceph-osd
tcp   0   0 192.168.254.1:6812   0.0.0.0:*   LISTEN   68106/ceph-osd
tcp   0   0 192.168.70.30:6812   0.0.0.0:*   LISTEN   1762/ceph-osd
tcp   0   0 192.168.254.1:6813   0.0.0.0:*   LISTEN   68106/ceph-osd
tcp   0   0 192.168.70.30:6813   0.0.0.0:*   LISTEN   1762/ceph-osd
tcp   0   0 192.168.254.1:6814   0.0.0.0:*   LISTEN   68106/ceph-osd
tcp   0   0 192.168.70.30:6814   0.0.0.0:*   LISTEN   68106/ceph-osd
tcp   0   0 192.168.254.1:6815   0.0.0.0:*   LISTEN   68106/ceph-osd
tcp   0   0 192.168.70.30:6815   0.0.0.0:*   LISTEN   68106/ceph-osd
tcp   0   0 192.168.70.30:6816   0.0.0.0:*   LISTEN   68106/ceph-osd
tcp   0   0 192.168.70.30:6817   0.0.0.0:*   LISTEN   68106/ceph-osd
tcp   0   0 192.168.70.30:3300   0.0.0.0:*   LISTEN   1753/ceph-mon
tcp   0   0 192.168.70.30:6789   0.0.0.0:*   LISTEN   1753/ceph-mon
tcp   0   0 192.168.254.1:6800   0.0.0.0:*   LISTEN   1761/ceph-osd
tcp   0   0 192.168.70.30:6800   0.0.0.0:*   LISTEN   1740/ceph-mds
tcp   0   0 192.168.254.1:6801   0.0.0.0:*   LISTEN   1761/ceph-osd
tcp   0   0 192.168.70.30:6801   0.0.0.0:*   LISTEN   1740/ceph-mds
tcp   0   0 192.168.254.1:6802   0.0.0.0:*   LISTEN   1760/ceph-osd
tcp   0   0 192.168.70.30:6802   0.0.0.0:*   LISTEN   1761/ceph-osd
tcp   0   0 192.168.254.1:6803   0.0.0.0:*   LISTEN   1760/ceph-osd
tcp   0   0 192.168.70.30:6803   0.0.0.0:*   LISTEN   1761/ceph-osd
tcp   0   0 192.168.254.1:6804   0.0.0.0:*   LISTEN   1761/ceph-osd
tcp   0   0 192.168.70.30:6804   0.0.0.0:*   LISTEN   1760/ceph-osd
tcp   0   0 192.168.254.1:6805   0.0.0.0:*   LISTEN   1761/ceph-osd
tcp   0   0 192.168.70.30:6805   0.0.0.0:*   LISTEN   1761/ceph-osd
Did I get something wrong in the Ceph configuration?
Piviul
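One way to sanity-check output like the above is to reduce it to the unique /24 prefixes the OSDs listen on; a small helper (it assumes netstat/ss-style lines with the local address in column 4 and the process name on the same line, unlike the wrapped output above):

```shell
# Print the unique /24 prefixes that ceph-osd daemons listen on,
# reading netstat/ss-style LISTEN lines on stdin (local address in column 4).
osd_subnets() {
  awk '/LISTEN/ && /ceph-osd/ {
    split($4, a, ":")              # strip the port from "addr:port"
    sub(/\.[0-9]+$/, "", a[1])     # drop the last octet -> /24 prefix
    seen[a[1]] = 1
  } END { for (n in seen) print n }' | sort
}
printf '%s\n' \
  'tcp 0 0 192.168.254.1:6806 0.0.0.0:* LISTEN 1760/ceph-osd' \
  'tcp 0 0 192.168.70.30:6806 0.0.0.0:* LISTEN 1760/ceph-osd' | osd_subnets
```

In this thread's case it would show OSDs bound on both 192.168.254 (the intended Ceph net) and 192.168.70 (the LAN), which matches the misconfigured public_network.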
* Re: [PVE-User] ceph
[not found] ` <mailman.203.1673265308.458.pve-user@lists.proxmox.com>
@ 2023-01-10 7:23 ` Piviul
[not found] ` <mailman.215.1673337884.458.pve-user@lists.proxmox.com>
0 siblings, 1 reply; 11+ messages in thread
From: Piviul @ 2023-01-10 7:23 UTC (permalink / raw)
To: pve-user
On 1/9/23 12:54, Eneko Lacunza via pve-user wrote:
> If all Ceph services/clients are on those Proxmox nodes, yes, that
> should work.
yes all services/clients are on proxmox nodes...
> Also check that there are no old monitor IPs on ceph config when
> you're done (/etc/pve/ceph.conf)
ok, I'll do.
I have a problem: when I add a monitor (Ceph->Monitor->Create) I can only
choose the monitor to add from a combo box with the names of the Proxmox
nodes. But the Proxmox node name resolves to an IP on the hosts' LAN, not
on the dedicated Ceph LAN...
May I ask whether, in your configuration, the Ceph monitors and MDS refer
to IPs on the dedicated Ceph LAN, or whether they too refer to IPs on the
hosts' LAN?
Piviul
* Re: [PVE-User] ceph
[not found] ` <mailman.191.1673258105.458.pve-user@lists.proxmox.com>
@ 2023-01-09 11:47 ` Piviul
[not found] ` <mailman.203.1673265308.458.pve-user@lists.proxmox.com>
0 siblings, 1 reply; 11+ messages in thread
From: Piviul @ 2023-01-09 11:47 UTC (permalink / raw)
To: pve-user
On 1/9/23 10:54, Eneko Lacunza via pve-user wrote:
> Hi,
>
> You need to route traffic between LAN network and Ceph network, so
> that this works. When you have all monitors using ceph network IPs,
> undo the routing.
the routing table on a CEPH/PVE node is:
$ ip route
default via 192.168.64.1 dev vmbr0 proto kernel onlink
192.168.64.0/20 dev vmbr0 proto kernel scope link src 192.168.70.30
192.168.254.0/24 dev vmbr2 proto kernel scope link src 192.168.254.1
192.168.255.0/24 dev vmbr1 proto kernel scope link src 192.168.255.1
vmbr2 is the Ceph network, vmbr1 is the PVE network and vmbr0 is the LAN
network. So you suggest that I first add the 3 Ceph monitors using IPs on
the Ceph network and then destroy the 3 monitors that have LAN IPs?
Similarly for the Ceph managers?
Piviul
* [PVE-User] ceph
@ 2023-01-09 9:14 Piviul
[not found] ` <mailman.191.1673258105.458.pve-user@lists.proxmox.com>
0 siblings, 1 reply; 11+ messages in thread
From: Piviul @ 2023-01-09 9:14 UTC (permalink / raw)
To: pve-user
Hi all, during the Ceph installation I dedicated a 10 Gb network to Ceph,
but I fear I got the monitor and manager IPs wrong, because they refer to
the LAN IPs of the PVE nodes instead of using IPs on the dedicated Ceph
network. To solve this, can I first create the monitors/managers on the
dedicated Ceph network and then remove the old monitors/managers that have
LAN IPs? Is this operation safe?
Piviul
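The procedure suggested elsewhere in this thread (recreate monitors one by one on the Ceph network) can be sketched as a dry run; pveceph is Proxmox's wrapper around the Ceph tooling, and the node name and address below are placeholders:

```shell
# Dry run: print the commands to move one monitor to the dedicated Ceph
# network. Run for one node at a time, and wait for "ceph -s" to report
# HEALTH_OK before moving on to the next monitor.
mon_move_plan() {
  node=$1; new_ip=$2
  cat <<EOF
pveceph mon destroy $node
pveceph mon create --mon-address $new_ip
EOF
}
mon_move_plan pve01 192.168.254.1
```

Keeping a quorum at every step (never destroy more than one monitor at a time in a 3-monitor cluster) is what makes the operation safe.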
end of thread, other threads:[~2023-01-11 15:52 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-08 12:46 [PVE-User] ceph Leandro Roggerone
2021-09-08 12:55 ` Gilberto Ferreira
2021-09-08 20:07 ` Alex K
2021-09-08 22:11 ` ic
2021-09-13 11:32 ` Leandro Roggerone
2023-01-09 9:14 Piviul
[not found] ` <mailman.191.1673258105.458.pve-user@lists.proxmox.com>
2023-01-09 11:47 ` Piviul
[not found] ` <mailman.203.1673265308.458.pve-user@lists.proxmox.com>
2023-01-10 7:23 ` Piviul
[not found] ` <mailman.215.1673337884.458.pve-user@lists.proxmox.com>
2023-01-10 13:29 ` Piviul
[not found] ` <mailman.232.1673430028.458.pve-user@lists.proxmox.com>
2023-01-11 11:19 ` Piviul
[not found] ` <mailman.235.1673444838.458.pve-user@lists.proxmox.com>
2023-01-11 15:51 ` Piviul