From: Ralf Storm <ralf.storm@konzept-is.de>
To: pve-user@lists.proxmox.com
Subject: Re: [PVE-User] pve-user Digest, Vol 160, Issue 13
Date: Mon, 12 Jul 2021 09:25:42 +0200
Message-ID: <607fc7e0-011e-ec35-5209-a1c1f0af441f@konzept-is.de>
In-Reply-To: <1375e99a83d247728c6213d6788800f5@xpecto.com>
Hi,
don't be afraid! ;)

It's straightforward - the important thing is just not to reuse the same IP and
name on the host, then there will be no problems. No need to keep any config -
just set up the OSDs again. Moving them is something I did not try; it seemed
like too much effort, and reinstalling is easier.
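
For example, on the reinstalled node something like this should do (only a
rough sketch - /dev/sdb is just a placeholder for one of your OSD disks,
adjust the device names to your setup):

    ceph-volume lvm zap /dev/sdb --destroy   # wipe the old OSD's LVM metadata and data
    pveceph osd create /dev/sdb              # create the OSD fresh on the new install
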
best regards
Ralf
On 12/07/2021 at 09:01, Christoph Weber wrote:
> Thanks for your advice, Ralf and Eneko,
>
> @Ralf
>> please read the documentation about removing/replacing a node carefully
>> - you must not reinstall it with the same IP and/or name, as this will crash your
>> whole cluster!
> This is my biggest fear ...
>
> Thanks for pointing me to the manual - there is in fact a detailed explanation of which files to keep before reinstalling the node, under "Re-installing a cluster node".
>
> This is exactly what I was looking for, with a detailed step-by-step walkthrough. Though I'm not sure if I will give it a try - it looks a bit risky, with plenty of chances to mess things up in one of the many manual steps.
>
>> For the boot disks we use 2 or 3 mirrored ZFS disks to be sure...
> Sounds like a good idea in retrospect ;-)
> Until now I was under the impression that the system disk does not contain any very important data and can easily be replaced ...
>
> @Eneko Lacunza
>> Do you have Ceph OSD journal/DB/WALs on system disk?
>
> No, we have everything belonging to each OSD on the corresponding SSD drive.
>
>> Moving OSDs from node3 to node6 would trigger data movement, but I'd go
> Will it? I thought it might just relocate the OSDs in the CRUSH map from node3 to node6 when I shut down prodve3, remove the disks and reinsert them in node6? At least that was my impression from a thread here on the mailing list a few weeks ago.
>
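
For what it's worth: whether the OSDs simply get re-parented in the CRUSH map
depends on the default osd_crush_update_on_start = true setting - an OSD that
starts up on the new host gets moved under that host's bucket automatically,
and it is exactly that CRUSH change which then triggers the data movement.
You can check where each OSD ended up with:

    ceph osd tree    # shows which host bucket each OSD sits under
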
>> node3, "pvecm delnode" it, reinstall with new system disk and rejoin cluster.
> This seems to be the best solution.
>
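
A rough sketch of that sequence (node names taken from this thread, please
double-check against the cluster manager documentation before running anything):

    # on one of the remaining cluster nodes, after the old node is powered off:
    pvecm delnode prodve3

    # on the freshly reinstalled node (with a new name and IP):
    pvecm add <IP-of-an-existing-cluster-node>
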
> Maybe we will just discontinue the node, as it is nearing the end of its planned lifetime ...
>
--
Ralf Storm
Systemadministrator
Konzept Informationssysteme GmbH
Am Weiher 13 • 88709 Meersburg
Phone: +49 7532 4466-299
Fax: +49 7532 4466-66
ralf.storm@konzept-is.de
www.konzept-is.de
Amtsgericht Freiburg 581491 • Managing Directors: Dr. Peer Griebel,
Frank Häßler