From: Rutger Verhoeven
Date: Fri, 28 Aug 2020 18:41:45 +0200
To: PVE User List
Subject: [PVE-User] Proxmox (2 node) cluster questions
Hello all,

A while ago I attempted to join 2 nodes into a cluster. I did this on Proxmox 6.0 with the following setup:

- 2x 2TB SSD per node (one 250GB partition to install Proxmox on, one 1750GB partition to store VM data on). These are mounted as /var/lib/vmdata5 on server5 (and /var/lib/vmdata4 on server4).
- 2x 5TB 5400RPM disks for extra storage. This is called vmstorage5 or vmstorage4 and is also mounted under /var/lib.

Below is an example of my storage.cfg:

----
cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: vmdata4
        path /var/lib/vmdata4
        content vztmpl,iso,snippets,backup,images,rootdir
        maxfiles 2
        nodes server4
        shared 1

dir: vmstorage4
        path /var/lib/vmstorage4
        content vztmpl,iso,snippets,backup,rootdir,images
        maxfiles 2
        nodes server4
        shared 1

nfs: VM
        export /VM
        path /mnt/pve/VM
        server qnap.domain.local
        content rootdir,vztmpl,iso,snippets,backup,images
        maxfiles 1
        nodes server4
        options vers=4
----

Also the output of 'df -h':

----
df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                           48G     0   48G   0% /dev
tmpfs                         9.5G  9.7M  9.5G   1% /run
/dev/mapper/pve-root           30G  2.9G   25G  11% /
tmpfs                          48G   43M   48G   1% /dev/shm
tmpfs                         5.0M     0  5.0M   0% /run/lock
tmpfs                          48G     0   48G   0% /sys/fs/cgroup
/dev/sda2                     511M  312K  511M   1% /boot/efi
/dev/mapper/vg_vmdata4-lvv4   1.7T  2.1G  1.6T   1% /var/lib/vmdata4
/dev/mapper/vg_storage4-lvs4  4.6T  1.8T  2.6T  42% /var/lib/vmstorage4
/dev/fuse                      30M   20K   30M   1% /etc/pve
tmpfs                         9.5G     0  9.5G   0% /run/user/0
nas.domain.local:/VM          7.1T  2.8T  4.3T  40% /mnt/pve/VM
----

These machines are connected with the following network setup:

----
cat /etc/network/interfaces

auto lo
iface lo inet loopback

### !!! do not touch ilo nic !!! #####
#auto eno1
#iface eno1 inet static
#######################################

### this nic is used for proxmox clustering ###
auto eno2
iface eno2 inet manual
##############################################

iface ens1f0 inet manual

iface ens1f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.xx.xx
        netmask 255.255.252.0
        gateway 192.168.xx.xx
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

Both servers have dual 10Gbit/s SFP+ ports and are connected via a Unifi 10Gbit/s SFP+ switch. I haven't been able to create a separate VLAN yet for the management interface / separate cluster traffic.

For some reason the cluster attempt backfired: the files in /etc/pve/qemu-server became read-only and I could not change anything in them (not via the GUI either). I ended up reinstalling both nodes and sorted everything out with manual migration (via a bash script). Since then I'm not so eager to try joining these servers into a cluster again, because server5 is a production machine with all kinds of applications on it.

Side note: I'm using Ansible to create VMs on these machines, but I must admit that I mostly work via the GUI.

My goals:
- One host to manage them all (also for Ansible).
- Easy VM migration between the servers.

In the Netherlands we have a saying: "the blood crawls where it cannot go". So I hope you don't mind me asking a couple of questions, since I am tempted to try again:

- Can a Proxmox cluster (with failover possibility) work with local storage, or do I need distributed storage from a NAS / NFS or via Ceph?
- Can I use failover in a 2-node cluster?
- Can I use VM migration in a 2-node cluster?
- Does it matter (GUI-wise) whether the storage is 'mounted' at the Datacenter level (Datacenter > Storage > 'mounts') rather than on the server itself in a directory?
- Is it better to rename the mounts to the same name ('vmdata') on both nodes rather than vmdata4 / vmdata5?

Any tips regarding this are appreciated. Thank you all in advance.
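P.S. For context: as far as I remember, the join attempt itself was just the standard pvecm steps, roughly like this (cluster name and address are placeholders):

----
# on server5 (first node): create the cluster
pvecm create cluster1

# on server4: join it, pointing at server5's address
pvecm add 192.168.xx.xx

# check membership / quorum afterwards on either node
pvecm status
----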
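The manual migration I mentioned was basically backup, copy, restore per VM. Roughly like this (VM id and paths are just examples, not the exact script):

----
# on server5: back up the VM while it is stopped
vzdump 100 --mode stop --dumpdir /var/lib/vmstorage5

# copy the dump over to the other node
scp /var/lib/vmstorage5/vzdump-qemu-100-*.vma root@server4:/var/lib/vmstorage4/

# on server4: restore it onto local storage and start it again
qmrestore /var/lib/vmstorage4/vzdump-qemu-100-*.vma 100 --storage vmdata4
qm start 100
----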
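And to clarify the last question about naming: what I mean is whether a single storage entry with the same name on both nodes, something like the sketch below, would be preferable to the separate vmdata4 / vmdata5 entries above:

----
dir: vmdata
        path /var/lib/vmdata
        content images,rootdir,vztmpl,iso,snippets,backup
        maxfiles 2
        nodes server4,server5
----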