From: Gilberto Nunes <gilberto.nunes32@gmail.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Subject: Re: [PVE-User] moving machines among proxmoxs servers.
Date: Mon, 30 Nov 2020 14:26:31 -0300
Message-ID: <CAOKSTBsLbVqUyX2EowidcWYCBRVuHcT3uN2uQHqFOS+=CNNgNw@mail.gmail.com>
In-Reply-To: <CALt2oz7_gH4B4AsR2oj6P-kpBnAP4qm0vfCppj9RY=cf6tiung@mail.gmail.com>

Yes! Using two machines and GlusterFS, for instance, is an easy way
to achieve this. (First of all you need to create a cluster in
Proxmox: https://pve.proxmox.com/wiki/Cluster_Manager)
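Just as a sketch, assuming the nodes are called server1 and server2
and server1's address is 10.10.10.1 (adjust names and IP to your setup):

# on server1: create the cluster
pvecm create mycluster
# on server2: join it, pointing at server1
pvecm add 10.10.10.1
# check cluster state on either node
pvecm status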
Then create a folder, like /DATA, on each server. Make sure this
folder sits on its own HDD, not on the same HDD as the system!
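For example, assuming the spare disk shows up as /dev/sdb (check
yours with lsblk), you could format and mount it like this:

# format the dedicated disk and mount it at /DATA
mkfs.xfs /dev/sdb
mkdir -p /DATA
echo '/dev/sdb /DATA xfs defaults 0 0' >> /etc/fstab
mount /DATA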
Then, follow the instructions here to install and upgrade glusterfs:
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
(make sure you choose buster!)
Install gluster server: apt install glusterfs-server.
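The repo setup is roughly like this; the exact version path and key
URL are on the page above, so treat these URLs as placeholders and
double-check them there:

# import the GlusterFS signing key and add the buster repo
wget -O - https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | apt-key add -
echo 'deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/buster/amd64/apt buster main' > /etc/apt/sources.list.d/gluster.list
apt update
apt install glusterfs-server
# make sure the daemon starts at boot
systemctl enable --now glusterd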
Now, make sure you have a dedicated NIC so that all the GlusterFS
traffic runs over it. Use a private network address range.
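For example, in /etc/network/interfaces, assuming the spare NIC is
eno2 and you picked 10.10.10.0/24 for the storage network (use .1 on
server1 and .2 on server2):

auto eno2
iface eno2 inet static
        address 10.10.10.1/24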
After installing the gluster server, do this on the first node
(server1) to add the second node as a peer:
gluster peer probe server2
Then use this command to create a replica 2 GlusterFS volume:
gluster vol create VMS replica 2 server1:/DATA/vms server2:/DATA/vms
(make sure server1 and server2 are in /etc/hosts with their
corresponding private IPs)
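For example, in /etc/hosts on both nodes (using the private IPs from
the storage NIC above):

10.10.10.1 server1
10.10.10.2 server2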
Bring the VMS volume up: gluster vol start VMS
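You can confirm both bricks are online with:

gluster vol status VMS
gluster vol info VMS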
Then create the mount point (mkdir /vms) on both servers and add this
to /etc/fstab on server1 (all on one line):
server1:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=server2 0 0
And add this to /etc/fstab on server2:
server2:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=server1 0 0

Add this file on each server, as /etc/systemd/system/glusterfsmounts.service:

[Unit]
Description=Gluster mounting
Requires=glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
ExecStart=/bin/mount -a -t glusterfs
Restart=on-failure
RestartSec=3

[Install]
WantedBy=multi-user.target

Then run:

systemctl daemon-reload

systemctl enable glusterfsmounts

This will make sure the system mounts the /vms directory after a reboot.

You also need to apply some tuning here, so the volume stays writable
when one node goes down and heals itself when the node comes back:
gluster vol set VMS cluster.heal-timeout 5
gluster volume heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster volume set VMS cluster.favorite-child-policy mtime
gluster volume heal VMS granular-entry-heal enable
gluster volume set VMS cluster.data-self-heal-algorithm full

After all that, go to Datacenter -> Storage -> Add -> Directory and
add /vms as a directory storage in Proxmox. Remember to mark it as
shared storage.
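If you prefer the CLI, something like this should do the same thing
(the storage ID "vms" is just an example name):

pvesm add dir vms --path /vms --content images,rootdir --shared 1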

I have used this setup for many months now and so far no issues. But
the cleverest thing is still to keep backups, right?

Cheers
---
Gilberto Nunes Ferreira

On Mon, Nov 30, 2020 at 2:10 PM, Leandro Roggerone
<leandro@tecnetmza.com.ar> wrote:
>
> Alejandro , thanks for your words.
> Let me explain:
> About live migration ... yes I think this is what I need to achieve.
> So basically you can "drag and drop" VMs from one node to another?
>
> What do I need to achieve this? I only have one node.
> My current pve box is in production with very important machines running on
> it.
> I will add a second pve server machine soon.
> But I don't have any network storage, so the question would be:
> having two pve machines (one already running and a fresh one), is it
> possible to perform live migrations?
> Or is it mandatory to have intermediate hardware or something like that?
>
> Regards,
> Leandro.
>
>
>
>
>
> On Mon, Nov 30, 2020 at 1:45 PM, Alejandro Bonilla via pve-user (<
> pve-user@lists.proxmox.com>) wrote:
>
> >
> > > On Nov 30, 2020, at 11:21 AM, Leandro Roggerone <
> > leandro@tecnetmza.com.ar> wrote:
> > >
> > > Hi guys.
> > > Just wondering if it is possible to move machines without an outage?
> >
> > I thought at first you were referring to a live migration, which is easy to
> > achieve:
> >
> > 64 bytes from 10.0.0.111: icmp_seq=25 ttl=64 time=0.363 ms
> > 64 bytes from 10.0.0.111: icmp_seq=26 ttl=64 time=0.397 ms
> > 64 bytes from 10.0.0.111: icmp_seq=27 ttl=64 time=0.502 ms
> > Request timeout for icmp_seq 28
> > 64 bytes from 10.0.0.111: icmp_seq=29 ttl=64 time=0.366 ms
> > 64 bytes from 10.0.0.111: icmp_seq=30 ttl=64 time=0.562 ms
> > 64 bytes from 10.0.0.111: icmp_seq=31 ttl=64 time=0.469 ms
> >
> > And it certainly happens with little to no outage.
> >
> > > What do I need to achieve this ?
> >
> > More than one node (a cluster) and storage; then perform a migration… Using
> > shared storage like Ceph will make the migration way faster.
> >
> > > Currently have only one box ...
> >
> > And then I got confused. Are you trying to migrate from another hypervisor,
> > or are you just asking if it's possible at all, and would then add another
> > box?
> >
> > > Thanks.
> > > Leandro.
> > >
> >
>


