public inbox for pve-user@lists.proxmox.com
* [PVE-User] CEPH migration from cephadm or keep cephadm?
@ 2024-06-30 15:08 Peter Eisch via pve-user
  2024-06-30 15:40 ` Gilberto Ferreira
  2024-07-01 18:53 ` Alwin Antreich via pve-user
  0 siblings, 2 replies; 4+ messages in thread
From: Peter Eisch via pve-user @ 2024-06-30 15:08 UTC (permalink / raw)
  To: pve-user; +Cc: Peter Eisch


From: "Peter Eisch" <peter@boku.net>
To: pve-user@lists.proxmox.com
Subject: CEPH migration from cephadm or keep cephadm?
Date: Sun, 30 Jun 2024 15:08:31 +0000
Message-ID: <a43ba0d16c4b56acb1017d06304ad428@mail.boku.net>

Hi,
I have a Ceph cluster that I have been running hyperconverged with Linux KVM. Migrating the VMs to PVE running on compute-only nodes went without problems, and no instances remain on the storage nodes.
To return to a converged setup, the current environment is:

- Ceph 18.2.2
  - cephadm orchestration
- 6 hosts with 16 OSDs each
  - 4 hosts running Rocky Linux
  - 1 host running PVE 8.2.2, OSDs deployed with cephadm
  - 1 host out of the cluster, pending options
- 4 hosts running PVE/RBD

Is there a way to convert a cephadm-managed Ceph cluster to PVE’s Ceph tooling, or is it better to keep cephadm?
Thank you for your thoughts,
peter




Thread overview: 4+ messages
2024-06-30 15:08 [PVE-User] CEPH migration from cephadm or keep cephadm? Peter Eisch via pve-user
2024-06-30 15:40 ` Gilberto Ferreira
2024-06-30 18:44   ` Peter Eisch via pve-user
2024-07-01 18:53 ` Alwin Antreich via pve-user

Service provided by Proxmox Server Solutions GmbH