public inbox for pve-user@lists.proxmox.com
* Re: [PVE-User] (no subject)
       [not found] <CAGehcqHmTqfjBRwt61hWjjA1CQm8inajfS0=EOqCPrAdmGUheA@mail.gmail.com>
@ 2023-07-24  8:58 ` Aaron Lauterer
       [not found]   ` <CAGehcqHiwKabU9pvRbD_9POTqTq3kGv26zJWBEYDkA+DpsEmVg@mail.gmail.com>
  0 siblings, 1 reply; 2+ messages in thread
From: Aaron Lauterer @ 2023-07-24  8:58 UTC (permalink / raw)
  To: Humberto Jose de Sousa, Proxmox VE user list

Are those OSDs BlueStore ones or still old FileStore ones?

Have you tried running `ceph-volume lvm activate --all`?

It should search for BlueStore OSDs and activate them. Activating means
mounting the tmpfs at /var/lib/ceph/osd/ceph-{id} and starting the systemd
units.
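
For example, on an affected node something along these lines should bring
them back (the OSD ID in the status check is just a placeholder, use one of
yours):

  # list the BlueStore OSDs ceph-volume can find on this node
  ceph-volume lvm list

  # mount the tmpfs under /var/lib/ceph/osd/ceph-{id} and start the systemd units
  ceph-volume lvm activate --all

  # verify the OSDs are back up
  systemctl status ceph-osd@3
  ceph osd tree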

On 7/21/23 21:56, Humberto Jose de Sousa wrote:
> Hi folks,
> 
> I upgraded my Ceph cluster from Octopus to Pacific and from Pacific to
> Quincy with no problems.
> 
> Then I upgraded PVE 7 to 8. After the reboot the OSDs don't come up.
> The /etc/ceph/osd/ directory with its JSON files is gone, and the
> directory /var/lib/ceph/osd/ceph-<id>/ is empty.
> 
> How can I recreate the lost files?





* Re: [PVE-User] SPAM: Re:
       [not found]   ` <CAGehcqHiwKabU9pvRbD_9POTqTq3kGv26zJWBEYDkA+DpsEmVg@mail.gmail.com>
@ 2023-07-25  7:47     ` Aaron Lauterer
  0 siblings, 0 replies; 2+ messages in thread
From: Aaron Lauterer @ 2023-07-25  7:47 UTC (permalink / raw)
  To: Humberto Jose de Sousa; +Cc: Proxmox VE user list



On 7/24/23 19:17, Humberto Jose de Sousa wrote:
> Hi Aaron,
> 
> They were BlueStore OSDs.
> I only tried `ceph-volume simple scan /dev/sdx`, and it didn't work.

"simple" is for legacy file store OSDs :)

> In the end I decided to destroy and recreate the OSDs.
> 
> This problem only occurred with one of the five upgrades. I don't
> understand why ...
> 
> Thanks!
> On Mon, Jul 24, 2023 at 05:58, Aaron Lauterer <a.lauterer@proxmox.com>
> wrote:
> 
>> Are those OSDs BlueStore ones or still old FileStore ones?
>>
>> Have you tried running `ceph-volume lvm activate --all`?
>>
>> It should search for BlueStore OSDs and activate them. Activating means
>> mounting the tmpfs at /var/lib/ceph/osd/ceph-{id} and starting the
>> systemd units.
>>
>> On 7/21/23 21:56, Humberto Jose de Sousa wrote:
>>> Hi folks,
>>>
>>> I upgraded my Ceph cluster from Octopus to Pacific and from Pacific to
>>> Quincy with no problems.
>>>
>>> Then I upgraded PVE 7 to 8. After the reboot the OSDs don't come up.
>>> The /etc/ceph/osd/ directory with its JSON files is gone, and the
>>> directory /var/lib/ceph/osd/ceph-<id>/ is empty.
>>>
>>> How can I recreate the lost files?
>>
>>
> 




