From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <aderumier@odiso.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 6B11369169
 for <pve-user@lists.proxmox.com>; Sat, 29 Aug 2020 15:31:40 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 579AF27FE5
 for <pve-user@lists.proxmox.com>; Sat, 29 Aug 2020 15:31:10 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [89.248.211.110])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id D312127FDA
 for <pve-user@lists.proxmox.com>; Sat, 29 Aug 2020 15:31:06 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id 83D441638409;
 Sat, 29 Aug 2020 15:31:00 +0200 (CEST)
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id Zmsk6GzPNb3r; Sat, 29 Aug 2020 15:31:00 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailpro.odiso.net (Postfix) with ESMTP id 69A85163840A;
 Sat, 29 Aug 2020 15:31:00 +0200 (CEST)
X-Virus-Scanned: amavisd-new at mailpro.odiso.com
Received: from mailpro.odiso.net ([127.0.0.1])
 by localhost (mailpro.odiso.net [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 8xd8vgQntlBt; Sat, 29 Aug 2020 15:31:00 +0200 (CEST)
Received: from mailpro.odiso.net (mailpro.odiso.net [10.1.31.111])
 by mailpro.odiso.net (Postfix) with ESMTP id 546531638409;
 Sat, 29 Aug 2020 15:31:00 +0200 (CEST)
Date: Sat, 29 Aug 2020 15:31:00 +0200 (CEST)
From: Alexandre DERUMIER <aderumier@odiso.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Cc: proxmoxve <pve-user@pve.proxmox.com>
Message-ID: <205693410.177586.1598707860152.JavaMail.zimbra@odiso.com>
In-Reply-To: <CAOekgU4pOojO_eBBh29k20C0V5ZwehHCzCGFucpThsT3+tqeDQ@mail.gmail.com>
References: <CAOekgU4pOojO_eBBh29k20C0V5ZwehHCzCGFucpThsT3+tqeDQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Mailer: Zimbra 8.8.12_GA_3866 (ZimbraWebClient - GC83 (Linux)/8.8.12_GA_3844)
Thread-Topic: Proxmox (2 node) cluster questions
Thread-Index: MUY0TydInLbjFf9iwN4eiH19Ti4iaA==
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.044 Adjusted score from AWL reputation of From: address
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 RCVD_IN_DNSWL_NONE     -0.0001 Sender listed at https://www.dnswl.org/,
 no trust
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
 URIBL_BLOCKED 0.001 ADMINISTRATOR NOTICE: The query to URIBL was blocked. See
 http://wiki.apache.org/spamassassin/DnsBlocklists#dnsbl-block for more
 information. [proxmox.com]
Subject: Re: [PVE-User] Proxmox (2 node) cluster questions
X-BeenThere: pve-user@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE user list <pve-user.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-user>, 
 <mailto:pve-user-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-user/>
List-Post: <mailto:pve-user@lists.proxmox.com>
List-Help: <mailto:pve-user-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user>, 
 <mailto:pve-user-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Sat, 29 Aug 2020 13:31:40 -0000

Hi,

the main problem with a 2-node cluster is that if one node goes down, you lose
quorum, so /etc/pve becomes read-only.

if that occurs, you can manually tell proxmox that you want quorum with only
one node, with the "pvecm expected 1" command. then you'll be able to write
again in /etc/pve.
(But do it only when you are sure that the other node is really down)
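As a sketch, the recovery on the surviving node looks roughly like this (the
writability check file name is just an illustration; run this only once the
peer is confirmed dead):

```shell
# Inspect cluster state first; with one of two nodes down,
# "Activity blocked" / no quorum is the expected output here
pvecm status

# Tell corosync that a single vote is enough to be quorate;
# /etc/pve becomes writable again
pvecm expected 1

# Quick sanity check that /etc/pve is writable once more
touch /etc/pve/.rw-check && rm /etc/pve/.rw-check
```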




>> So I hope you don't mind me asking a couple of questions since I am tempted
>> to try again:

>>   - Can a proxmox cluster (with failover possibility) work with local
>>   storage? (Or do I need distributed storage from a NAS / NFS via ceph?)

I'm not sure, but it may be possible with ZFS. (Note that ZFS replication is
asynchronous, so you would lose the data written since the last sync.)
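For reference, PVE's built-in storage replication can schedule those ZFS syncs
per guest; a hedged example, where the VM id 100 and the target node name
"node2" are assumptions:

```shell
# Replicate VM 100 to node2 every 15 minutes
# (requires ZFS-backed storage on both nodes);
# the job id format is <vmid>-<job-number>
pvesr create-local-job 100-0 node2 --schedule "*/15"

# Show the configured replication jobs
pvesr list
```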


>>   - Can I use failover possibility in a 2-node cluster?
Manually, yes (but not with HA).
You can use "pvecm expected 1", then on node1 run "mv
/etc/pve/nodes/node2/qemu-server/* /etc/pve/nodes/node1/qemu-server" to move
the VM configs.
Then, if the storage is available on node1 (shared storage, or maybe local
ZFS), you'll be able to start the VMs.
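Spelled out as a sketch (the VM id 100 and node names node1/node2 are examples,
not from the original setup):

```shell
# On node1, once node2 is confirmed dead, force quorum with one vote
pvecm expected 1

# Guest configs live in /etc/pve/nodes/<node>/qemu-server/;
# moving a config file is what "moves" the VM to the other node
mv /etc/pve/nodes/node2/qemu-server/*.conf /etc/pve/nodes/node1/qemu-server/

# If the disks are reachable from node1 (shared storage or a local
# ZFS replica), the guest can now be started there
qm start 100
```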

>>   - Can I use vm migration in a 2-node cluster?
Yes, sure.

>>   - Does it matter if i have the storage 'mounted' in Datacenter rather
>>   than on the server in a directory (gui wise). (Datacenter > storage >
>>   'mounts')
It is the same, but for network storage (NFS, CIFS) it's better to use the
datacenter option, because the server is then monitored: if a network timeout
occurs, the pvestatd daemon will not hang while trying to get stats.


>>   - Is it better to rename the mounts to vmdata rather than vmdata <number>

For failover, you only need to have the same "storage name" defined for each
node.
So yes, the local mountpoint should be the same on each node, since you can
define a given storage name only once at the datacenter level.
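For example, a single hypothetical storage entry in /etc/pve/storage.cfg that
is valid on both nodes (the names here are illustrative, not from the setup
below):

```
dir: vmdata
	path /var/lib/vmdata
	content images,rootdir
	nodes server4,server5
```

Each node then just needs its own local disk mounted at /var/lib/vmdata.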


----- Original mail -----
From: "Rutger Verhoeven" <rutger.verhoeven@gmail.com>
To: "proxmoxve" <pve-user@pve.proxmox.com>
Sent: Friday, 28 August 2020 18:41:45
Subject: [PVE-User] Proxmox (2 node) cluster questions

Hello all,

A while ago I attempted to join 2 nodes into a cluster. I did this on
proxmox 6.0 and had the following setup:
- 2x 2TB SSD (1x 250GB partition to install proxmox on, 1x 1750GB partition
to store VM data on). I mounted these in /var/lib/vmdata5 on server5 (and
vmdata4 on server4).
- 2x 5TB 5400RPM for extra storage. This is called vmstorage5 or vmstorage4
and is also mounted in /var/lib.

Underneath is an example of my storage.cfg:
----

cat /etc/pve/storage.cfg

dir: local
	path /var/lib/vz
	content iso,vztmpl,backup

lvmthin: local-lvm
	thinpool data
	vgname pve
	content rootdir,images

dir: vmdata4
	path /var/lib/vmdata4
	content vztmpl,iso,snippets,backup,images,rootdir
	maxfiles 2
	nodes server4
	shared 1

dir: vmstorage4
	path /var/lib/vmstorage4
	content vztmpl,iso,snippets,backup,rootdir,images
	maxfiles 2
	nodes server4
	shared 1

nfs: VM
	export /VM
	path /mnt/pve/VM
	server qnap.domain.local
	content rootdir,vztmpl,iso,snippets,backup,images
	maxfiles 1
	nodes server4
	options vers=4
----

Also output of 'df -h':
------

df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                           48G     0   48G   0% /dev
tmpfs                         9.5G  9.7M  9.5G   1% /run
/dev/mapper/pve-root           30G  2.9G   25G  11% /
tmpfs                          48G   43M   48G   1% /dev/shm
tmpfs                         5.0M     0  5.0M   0% /run/lock
tmpfs                          48G     0   48G   0% /sys/fs/cgroup
/dev/sda2                     511M  312K  511M   1% /boot/efi
/dev/mapper/vg_vmdata4-lvv4   1.7T  2.1G  1.6T   1% /var/lib/vmdata4
/dev/mapper/vg_storage4-lvs4  4.6T  1.8T  2.6T  42% /var/lib/vmstorage4
/dev/fuse                      30M   20K   30M   1% /etc/pve
tmpfs                         9.5G     0  9.5G   0% /run/user/0
nas.domain.local:/VM          7.1T  2.8T  4.3T  40% /mnt/pve/VM
----

These machines are connected with the following network setup:
-----

cat /etc/network/interfaces

auto lo
iface lo inet loopback

### !!! do not touch ilo nic !!! ###
#auto eno1
#iface eno1 inet static
####################################

### this nic is used for proxmox clustering ###
auto eno2
iface eno2 inet manual
###############################################

iface ens1f0 inet manual

iface ens1f1 inet manual

auto bond0
iface bond0 inet manual
	bond-slaves ens1f0 ens1f1
	bond-miimon 100
	bond-mode 802.3ad
	bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
	address 192.168.xx.xx
	netmask 255.255.252.0
	gateway 192.168.xx.xx
	bridge_ports bond0
	bridge_stp off
	bridge_fd 0
---
The servers both have dual 10Gb/s SFP+ and are connected via a UniFi 10Gb/s
SFP+ switch.
I haven't been able to create a separate VLAN yet for the management
interface / separate cluster purposes.

For some reason it backfired: the files in /etc/pve/qemu-server became
read-only and I could not change anything in them (it also wasn't possible
via the gui). I ended up reinstalling both nodes and solved everything with
manual migration (via a bash script).

Since then I'm not so eager to try and join these servers in a cluster
anymore, because server5 is a production machine with all kinds of
applications on it.

Sidenote: I'm using ansible to create VMs on these machines, but I must
admit that mostly I work via the gui.

*My goals:*
- One host to manage them all (also for ansible)
- Easy vm migration between the servers.

In the Netherlands we have a saying: "the blood crawls where it cannot go".

So I hope you don't mind me asking a couple of questions, since I am tempted
to try again:

- Can a proxmox cluster (with failover possibility) work with local
storage? (Or do I need distributed storage from a NAS / NFS via ceph?)
- Can I use failover possibility in a 2-node cluster?
- Can I use vm migration in a 2-node cluster?
- Does it matter if I have the storage 'mounted' in Datacenter rather
than on the server in a directory (gui wise)? (Datacenter > storage >
'mounts')
- Is it better to rename the mounts to vmdata rather than vmdata <number>?

Any tips regarding this are appreciated. Thank you all in advance.
_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user