From: Rutger Verhoeven <rutger.verhoeven@gmail.com>
To: PVE User List <pve-user@pve.proxmox.com>
Subject: [PVE-User] Proxmox (2 node) cluster questions
Date: Fri, 28 Aug 2020 18:41:45 +0200
Message-ID: <CAOekgU4pOojO_eBBh29k20C0V5ZwehHCzCGFucpThsT3+tqeDQ@mail.gmail.com>
Hello all,
A while ago I attempted to join 2 nodes into a cluster. I did this on
Proxmox 6.0 and had the following setup:
- 2x 2TB SSD (1x 250GB partition to install Proxmox on, 1x 1750GB
partition to store VM data on). I mounted the data partition at
/var/lib/vmdata5 on server5 (and /var/lib/vmdata4 on server4).
- 2x 5TB 5400RPM for extra storage. This is called vmstorage5 or vmstorage4
and is also mounted under /var/lib.
Below is my storage.cfg:
----
cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: vmdata4
        path /var/lib/vmdata4
        content vztmpl,iso,snippets,backup,images,rootdir
        maxfiles 2
        nodes server4
        shared 1

dir: vmstorage4
        path /var/lib/vmstorage4
        content vztmpl,iso,snippets,backup,rootdir,images
        maxfiles 2
        nodes server4
        shared 1

nfs: VM
        export /VM
        path /mnt/pve/VM
        server qnap.domain.local
        content rootdir,vztmpl,iso,snippets,backup,images
        maxfiles 1
        nodes server4
        options vers=4
----
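For reference, the /etc/fstab entries behind the vmdata4 and vmstorage4 mounts
look roughly like the sketch below; the device names are the ones from my df
output, but the ext4 type and the options are quoted from memory, so treat it
as a sketch only:
----
# sketch of the relevant /etc/fstab entries on server4
# (ext4 and the mount options are from memory, not copied from the file)
/dev/mapper/vg_vmdata4-lvv4    /var/lib/vmdata4     ext4  defaults  0  2
/dev/mapper/vg_storage4-lvs4   /var/lib/vmstorage4  ext4  defaults  0  2
----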
Also, here is the output of 'df -h':
------
df -h
Filesystem                     Size  Used  Avail Use% Mounted on
udev                            48G     0    48G   0% /dev
tmpfs                          9.5G  9.7M   9.5G   1% /run
/dev/mapper/pve-root            30G  2.9G    25G  11% /
tmpfs                           48G   43M    48G   1% /dev/shm
tmpfs                          5.0M     0   5.0M   0% /run/lock
tmpfs                           48G     0    48G   0% /sys/fs/cgroup
/dev/sda2                      511M  312K   511M   1% /boot/efi
/dev/mapper/vg_vmdata4-lvv4    1.7T  2.1G   1.6T   1% /var/lib/vmdata4
/dev/mapper/vg_storage4-lvs4   4.6T  1.8T   2.6T  42% /var/lib/vmstorage4
/dev/fuse                       30M   20K    30M   1% /etc/pve
tmpfs                          9.5G     0   9.5G   0% /run/user/0
nas.domain.local:/VM           7.1T  2.8T   4.3T  40% /mnt/pve/VM
----
These machines are connected with the following network setup:
-----
cat /etc/network/interfaces

auto lo
iface lo inet loopback

### !!! do not touch ilo nic !!! ####
#auto eno1
#iface eno1 inet static
########################################

### this nic is used for proxmox clustering ###
auto eno2
iface eno2 inet manual
###############################################

iface ens1f0 inet manual

iface ens1f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.xx.xx
        netmask 255.255.252.0
        gateway 192.168.xx.xx
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
---
The servers both have dual 10Gb/s SFP+ NICs and are connected via a Unifi
10Gb/s SFP+ switch.
I haven't been able to create a separate VLAN yet for the management
interface / separate cluster traffic.
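What I have in mind for that is roughly the snippet below. This is only an
untested sketch: VLAN ID 50 and the 10.10.10.0/24 range are placeholders I
made up, and it assumes the vlan package (or ifupdown2) is installed so the
bond0.50 naming is picked up automatically:
----
# sketch: dedicated cluster/management VLAN on top of the existing bond
# (VLAN 50 and 10.10.10.0/24 are placeholders, not my real values)
auto bond0.50
iface bond0.50 inet static
        address 10.10.10.4
        netmask 255.255.255.0
        # vlan-raw-device bond0   # optional, implied by the bond0.50 name
----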
For some reason the cluster join backfired: the files in /etc/pve/qemu-server
became read-only and I could not change anything in them (not via the GUI
either). I ended up reinstalling both nodes and sorting everything out with
manual migration (via a bash script).
Since then I'm not so eager to try to join these servers into a cluster
anymore, because server5 is a production machine with all kinds of
applications on it.
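If I do try again, as far as I understand the join itself comes down to the
pvecm calls below; the part I want to get right this time is backing up the
guest configs first. This is just a sketch I have not run yet (the cluster
name and the IP are placeholders, and server4 would have to be empty of
guests before joining):
----
# on server5 (the production node): back up guest configs and take fresh backups first
cp -a /etc/pve/qemu-server /root/qemu-server.backup
vzdump --all --mode snapshot --storage vmstorage5

# create the cluster on server5
pvecm create mycluster

# on server4 (which must not hold any guests yet): join using server5's address
pvecm add 192.168.xx.xx

# check quorum afterwards on either node
pvecm status
----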
Side note: I'm using Ansible to create VMs on these machines, but I must
admit that I mostly work via the GUI.
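The CLI equivalent of what Ansible / the GUI does for me is roughly the qm
call below (a simplified, untested sketch; the VMID, name and sizes are
made up):
----
# sketch: create and start a small test VM on the vmdata4 directory storage
# (VMID 9001, the name and the sizes are placeholders)
qm create 9001 \
    --name test-vm \
    --memory 2048 \
    --cores 2 \
    --net0 virtio,bridge=vmbr0 \
    --scsi0 vmdata4:32 \
    --ostype l26
qm start 9001
----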
*My goals:*
- One host to manage them all (also for Ansible)
- Easy VM migration between the servers (see the sketch right below this list).
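For the migration goal, what I'd like to end up with is basically the
command below working between the nodes (a sketch; 9001 is a placeholder
VMID, and --with-local-disks is what I expect to need because the disks
live on local storage):
----
# sketch: live-migrate a running guest from server4 to server5
# (9001 is a placeholder VMID; --with-local-disks because the disks are on local storage)
qm migrate 9001 server5 --online --with-local-disks
----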
In the Netherlands we have a saying: "the blood crawls where it cannot go".
So I hope you don't mind me asking a couple of questions, since I am tempted
to try again:
- Can a Proxmox cluster (with failover possibility) work with local
storage? (Or do I need distributed storage from a NAS / NFS, or via Ceph?)
- Can I use the failover possibility in a 2-node cluster?
- Can I use VM migration in a 2-node cluster?
- Does it matter (GUI-wise) whether I have the storage 'mounted' at the
Datacenter level (Datacenter > Storage > 'mounts') rather than on the
server in a directory?
- Is it better to rename the mounts to vmdata rather than vmdata<number>?
Any tips regarding this are appreciated. Thank you all in advance.