public inbox for pve-user@lists.proxmox.com
* [PVE-User] Proxmox (2 node) cluster questions
@ 2020-08-28 16:41 Rutger Verhoeven
  2020-08-29 13:31 ` Alexandre DERUMIER
  0 siblings, 1 reply; 3+ messages in thread
From: Rutger Verhoeven @ 2020-08-28 16:41 UTC (permalink / raw)
  To: PVE User List

Hello all,

A while ago I attempted to join 2 nodes into a cluster. I did this on
Proxmox 6.0 with the following setup:
- 2x 2TB SSD (one 250GB partition to install Proxmox on, one 1750GB
partition to store VM data on). I mounted the data partition at
/var/lib/vmdata5 on server5 (and /var/lib/vmdata4 on server4).
- 2x 5TB 5400RPM disks for extra storage. This is called vmstorage5 (or
vmstorage4) and is also mounted under /var/lib.

Below is an example of my storage.cfg:
----

cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: vmdata4
        path /var/lib/vmdata4
        content vztmpl,iso,snippets,backup,images,rootdir
        maxfiles 2
        nodes server4
        shared 1

dir: vmstorage4
        path /var/lib/vmstorage4
        content vztmpl,iso,snippets,backup,rootdir,images
        maxfiles 2
        nodes server4
        shared 1

nfs: VM
        export /VM
        path /mnt/pve/VM
        server qnap.domain.local
        content rootdir,vztmpl,iso,snippets,backup,images
        maxfiles 1
        nodes server4
        options vers=4
----

Also output of 'df -h':
------

df -h
Filesystem                     Size  Used Avail Use% Mounted on
udev                            48G     0   48G   0% /dev
tmpfs                          9.5G  9.7M  9.5G   1% /run
/dev/mapper/pve-root            30G  2.9G   25G  11% /
tmpfs                           48G   43M   48G   1% /dev/shm
tmpfs                          5.0M     0  5.0M   0% /run/lock
tmpfs                           48G     0   48G   0% /sys/fs/cgroup
/dev/sda2                      511M  312K  511M   1% /boot/efi
/dev/mapper/vg_vmdata4-lvv4    1.7T  2.1G  1.6T   1% /var/lib/vmdata4
/dev/mapper/vg_storage4-lvs4   4.6T  1.8T  2.6T  42% /var/lib/vmstorage4
/dev/fuse                       30M   20K   30M   1% /etc/pve
tmpfs                          9.5G     0  9.5G   0% /run/user/0
nas.domain.local:/VM           7.1T  2.8T  4.3T  40% /mnt/pve/VM
----

These machines are connected with the following network setup:
-----

cat /etc/network/interfaces

auto lo
iface lo inet loopback

### !!! do not touch ilo nic !!! ####
#auto eno1
#iface eno1 inet static
#####################################

### this nic is used for proxmox clustering ###
auto eno2
iface eno2 inet manual
###############################################

iface ens1f0 inet manual

iface ens1f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.xx.xx
        netmask 255.255.252.0
        gateway 192.168.xx.xx
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
---
The servers both have dual 10 Gbit/s SFP+ NICs and are connected via a UniFi
10 Gbit/s SFP+ switch.
I haven't been able to set up a separate VLAN yet for the management
interface / separate cluster traffic.

For some reason it backfired: the files in /etc/pve/qemu-server became
read-only and I could not change anything in them (not via the GUI either).
I ended up reinstalling both nodes and sorted everything out with manual
migration (via a bash script).

Since then I'm not so eager to try and join these servers into a cluster
anymore, because server5 is a production machine with all kinds of
applications on it.

Side note: I'm using Ansible to create VMs on these machines, but I must
admit that I mostly work via the GUI.

*My goals:*
- One host to manage them all (also for Ansible)
- Easy VM migration between the servers.

In the Netherlands we have a saying: "the blood crawls where it cannot go".

So I hope you don't mind me asking a couple of questions, since I am tempted
to try again:

   - Can a Proxmox cluster (with failover possibility) work with local
   storage? (Or do I need distributed storage from a NAS / NFS, or via Ceph?)
   - Can I use the failover possibility in a 2-node cluster?
   - Can I use VM migration in a 2-node cluster?
   - Does it matter (GUI-wise) if I have the storage 'mounted' in Datacenter
   rather than on the server in a directory? (Datacenter > storage > 'mounts')
   - Is it better to rename the mounts to vmdata rather than vmdata<number>?

Any tips regarding this are appreciated. Thank you all in advance.


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PVE-User] Proxmox (2 node) cluster questions
  2020-08-28 16:41 [PVE-User] Proxmox (2 node) cluster questions Rutger Verhoeven
@ 2020-08-29 13:31 ` Alexandre DERUMIER
  2020-08-31  8:43   ` Rutger Verhoeven
  0 siblings, 1 reply; 3+ messages in thread
From: Alexandre DERUMIER @ 2020-08-29 13:31 UTC (permalink / raw)
  To: Proxmox VE user list; +Cc: proxmoxve

Hi,

The main problem with a 2-node cluster is that if one node is down, you lose
quorum, so /etc/pve becomes read-only.

If that occurs, you can manually tell Proxmox that you want only 1 node in
the quorum with the "pvecm expected 1" command. Then you will be able to
write to /etc/pve again.
(But only do this when you are sure that the other node is really down.)
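
For example, on the node that is still up (a minimal sketch; only run this
once you are sure the other node is really down):

pvecm status          # check the current quorum / vote state
pvecm expected 1      # expect only 1 vote, so /etc/pve becomes writable again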




>>So I hope you don't mind me asking a couple of questions, since I am tempted
>>to try again:

>>   - Can a Proxmox cluster (with failover possibility) work with local
>>   storage? (Or do I need distributed storage from a NAS / NFS, or via Ceph?)

I'm not sure, but it may be possible with ZFS. (The replication is
asynchronous though, so you would lose the data written since the last sync.)
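
(As a rough sketch of what the built-in storage replication looks like,
assuming a zfspool storage with the same name on both nodes; the VMID 100 and
the 15-minute schedule are just examples:)

pvesr create-local-job 100-0 server5 --schedule '*/15'   # replicate guest 100 to server5 every 15 minutes
pvesr status                                             # show replication jobs and their last sync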


>>   - Can I use the failover possibility in a 2-node cluster?
Manually, yes (but not with HA).
You can use "pvecm expected 1", then on node1 run "mv /etc/pve/nodes/node2/qemu-server/* /etc/pve/nodes/node1/qemu-server" to move the VM configs.
Then, if the storage is available on node1 (shared storage, or maybe local ZFS), you will be able to start the VMs.
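
Put together, the manual recovery on the surviving node could look roughly
like this (node1/node2 and VMID 100 are placeholders; only do this when the
other node is really down):

pvecm expected 1
mv /etc/pve/nodes/node2/qemu-server/*.conf /etc/pve/nodes/node1/qemu-server/
qm start 100          # works only if the VM's storage is reachable from node1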

>>   - Can I use VM migration in a 2-node cluster?
Yes, sure.
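
For example (the VMID and target node are placeholders):

qm migrate 100 server5 --online    # live-migrate running guest 100 to server5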

>>   - Does it matter (GUI-wise) if I have the storage 'mounted' in Datacenter
>>   rather than on the server in a directory? (Datacenter > storage > 'mounts')
It is the same, but for network storage (NFS, CIFS) it is better to use the
datacenter option, because PVE then monitors the storage server, and if a
network timeout occurs the pvestatd daemon will not hang while trying to get stats.
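
(A sketch of defining the NFS share at datacenter level from the CLI, using
the server/export from the storage.cfg above; PVE then mounts it under
/mnt/pve/VM by itself:)

pvesm add nfs VM --server qnap.domain.local --export /VM --content images,rootdir,iso,vztmpl,backup,snippets --options vers=4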


>>   - Is it better to rename the mounts to vmdata rather than vmdata<number>?

For the failover, you only need to have the same "storage name" defined for each node.
So yes, the local mount point should be the same on each node, since you can define a given storage name only once at the datacenter level.
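
(For example, instead of vmdata4/vmdata5 you could have a single entry in
/etc/pve/storage.cfg used by both nodes; a sketch, assuming the local mount
point is the same on both servers:)

dir: vmdata
        path /var/lib/vmdata
        content images,rootdir,vztmpl,iso,snippets,backup
        maxfiles 2
        nodes server4,server5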






^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PVE-User] Proxmox (2 node) cluster questions
  2020-08-29 13:31 ` Alexandre DERUMIER
@ 2020-08-31  8:43   ` Rutger Verhoeven
  0 siblings, 0 replies; 3+ messages in thread
From: Rutger Verhoeven @ 2020-08-31  8:43 UTC (permalink / raw)
  To: Proxmox VE user list

Hey,

Thank you for your answers. I now know what went wrong.
Do the VMs, storage-wise, reference 'vmdata5' as it is named in the GUI, or
does that refer to the volume group name?

Kind regards,

Rutger Verhoeven.



^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2020-08-31  8:44 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-28 16:41 [PVE-User] Proxmox (2 node) cluster questions Rutger Verhoeven
2020-08-29 13:31 ` Alexandre DERUMIER
2020-08-31  8:43   ` Rutger Verhoeven
