From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <83250231-ada6-4e81-a45d-492e12b64d5f@elettra.eu>
Date: Wed, 29 Jan 2025 11:27:39 +0100
From: Iztok Gregori
To: Proxmox VE user list
Subject: [PVE-User] Network migration of a hyper-converged cluster

Hi to all!

I'm planning to migrate the network of a hyper-converged Proxmox cluster to a different subnet, and I'm looking for advice on how to do it without downtime and without having to reinstall the nodes.

The migration will be performed one node at a time: each node will first be emptied of VMs (via live migration), and the Ceph "noout" flag will be set to avoid unnecessary data movement. The "empty" nodes can be restarted if needed.

For Ceph, I was thinking of adding the new network to "public_network" and "cluster_network" (the two are the same here), so that Ceph is ready to accept requests from IPs on the new network. After that, I'm less sure how to handle the configuration of the monitors/managers.
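For reference, a minimal sketch of what the transition-phase stanza in /etc/pve/ceph.conf could look like, with both subnets listed until the migration is complete (the subnet values here are hypothetical placeholders, not taken from our setup):

```ini
# /etc/pve/ceph.conf -- transition phase, old and new subnets both listed
# 10.1.1.0/24 = old subnet, 10.2.2.0/24 = new subnet (placeholders)
[global]
    public_network  = 10.1.1.0/24, 10.2.2.0/24
    cluster_network = 10.1.1.0/24, 10.2.2.0/24
```

Once every daemon has been moved and is bound to an address in the new subnet, the old subnet would be dropped from both options.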
I was thinking of something like this:

- Destroy the monitor/manager daemon (if the node has one).
- Stop the OSDs.
- Change the node's IP.
- Start the OSDs (or reboot the node).
- Recreate the monitor/manager.

Am I missing something?

I have more doubts about how to migrate the Proxmox network (corosync). I think I'll need to add a new "ring" network to corosync.conf with the nodes' new IPs, so that once a node changes its IP address it is seen as "UP" again. At the end of the migration I'll remove the old ring network. Is this feasible, or will it not work?

Any other advice or experiences are welcome!

Cheers
Iztok

--
Iztok Gregori
ICT Systems and Services
Elettra - Sincrotrone Trieste S.C.p.A.
Telephone: +39 040 3758948
http://www.elettra.eu

_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user