From: Gilberto Nunes
Date: Mon, 30 Nov 2020 14:26:31 -0300
To: Proxmox VE user list
Subject: Re: [PVE-User] moving machines among proxmoxs servers.

Yes! Using two machines and GlusterFS, for instance, is an easy way to achieve this.

(First of all you need to create a cluster in Proxmox:
https://pve.proxmox.com/wiki/Cluster_Manager)

Create a folder, e.g. /DATA, on each server. Make sure this folder lives on its own HDD, rather than sharing a single disk with everything else!

Then follow the instructions here to install and upgrade GlusterFS:
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
(make sure you choose buster!)

Install the Gluster server:

apt install glusterfs-server

Now make sure you have a dedicated NIC, so that all the Gluster traffic runs over it. Use a private network address range.

After installing the Gluster server, run this on the first node to peer the second one:

gluster peer probe server2

Then use this command to create a replica 2 GlusterFS volume:

gluster vol create VMS replica 2 server1:/DATA/vms server2:/DATA/vms

(make sure server1 and server2 are in /etc/hosts with their corresponding private IPs)

Bring the VMS volume up:

gluster vol start VMS

Then add this to /etc/fstab on server1:

server1:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=server2 0 0

And add this to /etc/fstab on server2:

server2:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=server1 0 0

Add this on each server, in the file /etc/systemd/system/glusterfsmounts.service:

[Unit]
Description=Glustermounting
Requires=glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
ExecStart=/bin/mount -a -t glusterfs
Restart=on-failure
RestartSec=3

[Install]
WantedBy=multi-user.target

Then run:

systemctl daemon-reload
systemctl enable glusterfsmounts

This makes sure the system mounts the /vms directory after a reboot.

You also need to apply a few tuning tweaks here:

gluster vol set VMS cluster.heal-timeout 5
gluster volume heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster volume set VMS cluster.favorite-child-policy mtime
gluster volume heal VMS granular-entry-heal enable
gluster volume set VMS cluster.data-self-heal-algorithm full

After all that, go to Datacenter -> Storage -> Directory and add /vms as a directory storage in your Proxmox. Remember to mark it as shared storage.

I have used this setup for many months now and so far no issues. But the smartest move is still to keep backups, right?
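If you prefer the command line, here is a rough sketch of how you could sanity-check the volume and register the storage without the GUI. The storage ID "vms", the content types, and the VM ID below are only examples; double-check the pvesm and qm options on your PVE version:

gluster peer status              # both peers should show State: Peer in Cluster (Connected)
gluster volume status VMS        # both bricks should be online
gluster volume heal VMS info     # ideally "Number of entries: 0" on each brick
df -h /vms                       # confirm the glusterfs mount is active

pvesm add dir vms --path /vms --shared 1 --content images,rootdir
pvesm status                     # the new "vms" storage should be listed as active

And once the shared storage is visible on both nodes, a live migration from the CLI is just something like:

qm migrate 100 server2 --online  # VM ID 100 and target node server2 are just examples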
Cheers
---
Gilberto Nunes Ferreira


On Mon, 30 Nov 2020 at 14:10, Leandro Roggerone wrote:
>
> Alejandro, thanks for your words.
> Let me explain:
> About live migration ... yes, I think this is what I need to achieve.
> So basically you can "drag and drop" VMs from one node to another?
>
> What do I need to achieve this? / I only have one node.
> My current pve box is in production with very important machines running on it.
> I will add a second pve server machine soon.
> But I don't have any network storage, so the question would be:
> having two pve machines (one already running and a fresh one), is it possible to perform live migrations?
> Or is it mandatory to have intermediate hardware or something like that?
>
> Regards,
> Leandro.
>
> On Mon, 30 Nov 2020 at 13:45, Alejandro Bonilla via pve-user (<pve-user@lists.proxmox.com>) wrote:
>
> > ---------- Forwarded message ----------
> > From: Alejandro Bonilla
> > To: Proxmox VE user list
> > Date: Mon, 30 Nov 2020 16:45:29 +0000
> > Subject: Re: [PVE-User] moving machines among proxmoxs servers.
> >
> > > On Nov 30, 2020, at 11:21 AM, Leandro Roggerone <leandro@tecnetmza.com.ar> wrote:
> > >
> > > Hi guys.
> > > Just wondering if it is possible to move machines without an outage?
> >
> > I thought at first you were referring to a live migration, which is easy to achieve:
> >
> > 64 bytes from 10.0.0.111: icmp_seq=25 ttl=64 time=0.363 ms
> > 64 bytes from 10.0.0.111: icmp_seq=26 ttl=64 time=0.397 ms
> > 64 bytes from 10.0.0.111: icmp_seq=27 ttl=64 time=0.502 ms
> > Request timeout for icmp_seq 28
> > 64 bytes from 10.0.0.111: icmp_seq=29 ttl=64 time=0.366 ms
> > 64 bytes from 10.0.0.111: icmp_seq=30 ttl=64 time=0.562 ms
> > 64 bytes from 10.0.0.111: icmp_seq=31 ttl=64 time=0.469 ms
> >
> > And it certainly happens with little to no outage.
> >
> > > What do I need to achieve this?
> >
> > More than one node (a cluster) and storage, then perform a migration… using storage like Ceph will make the migration way faster.
> >
> > > Currently have only one box ...
> >
> > And then I got confused. Are you trying to migrate from another hypervisor, or are you just asking if it's possible at all and would then add another box?
> >
> > > Thanks.
> > > Leandro.
>
> _______________________________________________
> pve-user mailing list
> pve-user@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user