[PVE-User] proxmox ceph osd option to move wal to a new device
From: Marco Witte @ 2022-01-12 13:57 UTC (permalink / raw)
  To: pve-user

One WAL drive was failing, so I replaced it as follows:
pveceph osd destroy 17 --cleanup 1
pveceph osd destroy 18 --cleanup 1
pveceph osd destroy 19 --cleanup 1

This removed the three OSDs and their osd-wal.

The drive sdf is the replacement for the failed WAL device that the three 
OSDs above (sdb, sdc, sdd) used:
pveceph osd create /dev/sdb -wal_dev /dev/sdf
pveceph osd create /dev/sdc -wal_dev /dev/sdf
pveceph osd create /dev/sdd -wal_dev /dev/sdf

This approach worked fine, but took a lot of time.
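
For reference, I believe the resulting layout can be checked per OSD with:

ceph-volume lvm list

which lists each OSD together with its block and wal devices.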

So I figured it would be better to change the WAL for the existing OSDs 
instead. At this point /dev/sdf is completely empty (no LVM on it, freshly 
wiped) and all three OSDs still use the failing WAL device /dev/sdh.

ceph-volume lvm new-wal --osd-id 17 --osd-fsid 
01234567-1234-1234-123456789012 --target /dev/sdf

This obviously fails, because the target needs to be given as --target vgname/new_wal.
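
So presumably /dev/sdf first needs a volume group and a logical volume, so 
that the target can be given in that form. A rough sketch of what I have in 
mind (the VG/LV names and the WAL size are placeholders I picked, the 
osd-fsid has to be filled in, and I assume the OSD must be stopped first; I 
have not tested this):

# stop the OSD before touching its devices (placeholder id 17)
systemctl stop ceph-osd@17
# prepare the new device as an LVM volume group with one LV per OSD WAL
pvcreate /dev/sdf
vgcreate ceph-wal-new /dev/sdf
lvcreate -L 10G -n osd17-wal ceph-wal-new
# attach the LV as WAL, using the vgname/lvname form for --target
ceph-volume lvm new-wal --osd-id 17 --osd-fsid <osd-fsid> --target ceph-wal-new/osd17-wal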

Question:
What would be the fastest way to make the new device /dev/sdf the WAL 
device, without destroying OSDs 17, 18 and 19?

Versions:
pve-manager/7.1-8/5b267f33 (running kernel: 5.13.19-2-pve)
ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)

Thank you




