public inbox for pve-user@lists.proxmox.com
* Re: [PVE-User] pve-user Digest, Vol 160, Issue 13
From: Christoph Weber @ 2021-07-12  7:01 UTC (permalink / raw)
  To: 'pve-user@lists.proxmox.com'

Thanks for your advice, Ralf and Eneko,

@Ralf
> please read the documentation about removing/replacing a Node carefully
> - you may not reinstall it with the same IP and/or Name as this will crash your
> whole cluster!

This is my biggest fear ...

Thanks for pointing me to the manual - there is in fact a detailed explanation of which files to keep before reinstalling the node under "Re-installing a cluster node".

This is exactly what I was looking for, with a detailed step-by-step walkthrough. Though I'm not sure if I will give it a try - it looks a bit risky to mess things up in one of the many manual steps.
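
If I read that section right, the backup part boils down to something roughly like this (my own reading of the manual, not verified - the authoritative file list is in the admin guide):

    # run on prodve3 before wiping it; the backup target is just a placeholder
    BACKUP=/mnt/backup/prodve3-cluster-state
    mkdir -p "$BACKUP"
    cp -a /var/lib/pve-cluster "$BACKUP/"   # pmxcfs database (config.db)
    cp -a /etc/corosync        "$BACKUP/"   # corosync cluster configuration
    cp -a /root/.ssh           "$BACKUP/"   # node SSH keys used within the cluster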

> For the boot disks we use 2 or 3 mirrored zfs disks to be sure...

Sounds like a good idea in retrospect ;-) 
Until now I was just under the impression that the system disk does not contain very important data and can easily be replaced ...
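
For reference (mostly a note to myself), attaching a second disk to an existing single-disk rpool should be possible after the fact, along these lines (device names made up; the boot/ESP partitions would still have to be replicated separately, e.g. with proxmox-boot-tool, which this leaves out):

    zpool status rpool                        # current layout of the root pool
    zpool attach rpool /dev/sda3 /dev/sdb3    # pool becomes a two-way mirror
    zpool status rpool                        # wait until the resilver has finished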
 
@Eneko Lacunza
> Do you have Ceph OSD journal/DB/WALs on system disk?


No, we have everything belonging to one OSD on the corresponding SSD drive.
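
For reference, this can be checked per node with something like the following (OSD id 3 just as an example):

    ceph-volume lvm list                # lists block / db / wal devices per OSD
    ceph osd metadata 3 | grep -i dev   # backing devices as reported by OSD 3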

> Moving OSDs from node3 to node6 would trigger data movement, but I'd go

Will it? I thought it might just relocate the OSDs' location in the CRUSH map from node3 to node6 when I shut down prodve3, remove the disks and reinsert them in node6? At least that was my impression from a thread here on the mailing list a few weeks ago.
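
Before doing anything I would at least compare the CRUSH tree before and after, and check whether the OSDs re-register themselves under the new host on startup - which, as far as I understand, is what would trigger the rebalancing:

    ceph osd tree                                  # host buckets and the OSDs under them
    ceph config get osd osd_crush_update_on_start  # if true, a restarted OSD re-registers
                                                   # under its current host in the CRUSH map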

> node3, "pvecm delnode" it, reinstall with new system disk and rejoin cluster.

This seems to be the best solution. 
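
So roughly (node name and IP are placeholders only):

    # on one of the remaining cluster nodes, after prodve3 is shut down for good
    pvecm delnode prodve3
    # after reinstalling on the new system disk (with a new name/IP, per Ralf's warning),
    # run on the fresh node, pointing at an existing cluster member, to join it:
    pvecm add 192.0.2.11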

Maybe we will just discontinue the node, as it is getting near its planned lifetime ...





* Re: [PVE-User] pve-user Digest, Vol 160, Issue 13
From: Ralf Storm @ 2021-07-12  7:25 UTC (permalink / raw)
  To: pve-user

Hi,

don't be afraid! ;)

It's straightforward; the only important thing is not to use the same IP and
name on the host, then there will be no problems. No need to keep any config
- set up the OSDs again. Moving them is something I did not try and it seems
to be too much of an effort; it's easier to reinstall.
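
Roughly, something like this (OSD ids and device names just as examples): take the old OSD entries out of the cluster, then create them fresh on the reinstalled node:

    # for each OSD that lived on the old node:
    ceph osd out 3
    pveceph osd destroy 3
    # on the reinstalled node, per disk (zap the disk first if it held an old OSD):
    pveceph osd create /dev/sdb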


best regards


Ralf

Am 12/07/2021 um 09:01 schrieb Christoph Weber:
> Thanks for your advice, Ralf and Eneko,
>
> @Ralf
>> please read the documentation about removing/replacing a Node carefully
>> - you may not reinstall it with the same IP and/or Name as this will crash your
>> whole cluster!
> This is my biggest fear ...
>
> Thanks for pointing me to the manual - there is in fact a detailed explanation of which files to keep before reinstalling the node under "Re-installing a cluster node".
>
> This is exactly what I was looking for, with a detailed step-by-step walkthrough. Though I'm not sure if I will give it a try - it looks a bit risky to mess things up in one of the many manual steps.
>
>> For the boot disks we use 2 or 3 mirrored zfs disks to be sure...
> Sounds like a good idea in retrospect ;-)
> Until now I was just under the impression that the system disk does not contain very important data and can easily be replaced ...
>   
> @Eneko Lacunza
>> Do you have Ceph OSD journal/DB/WALs on system disk?
>
> No, we have everything belonging to one OSD on the corresponding SSD drive.
>
>> Moving OSDs from node3 to node6 would trigger data movement, but I'd go
> Will it? I thought it might just relocate the OSDs' location in the CRUSH map from node3 to node6 when I shut down prodve3, remove the disks and reinsert them in node6? At least that was my impression from a thread here on the mailing list a few weeks ago.
>
>> node3, "pvecm delnode" it, reinstall with new system disk and rejoin cluster.
> This seems to be the best solution.
>
> Maybe we will just discontinue the node, as it is getting near its planned lifetime ...
>
>
-- 
Ralf Storm

Systemadministrator


Konzept Informationssysteme GmbH
Am Weiher 13 • 88709 Meersburg

Fon: +49 7532 4466-299
Fax: +49 7532 4466-66
ralf.storm@konzept-is.de
www.konzept-is.de

Amtsgericht Freiburg 581491 • Geschäftsführer: Dr. Peer Griebel,
Frank Häßler





