From: Rutger Verhoeven
Date: Mon, 31 Aug 2020 10:43:33 +0200
To: Proxmox VE user list
Subject: Re: [PVE-User] Proxmox (2 node) cluster questions

Hey,

Thank you for your answers. I know what went wrong now.

In terms of storage, do the VMs 'look at' vmdata5 as named in the GUI?
Or does that refer to the volume group name?

Kind regards,

Rutger Verhoeven.

On Sat, 29 Aug 2020 at 15:31, Alexandre DERUMIER wrote:

> Hi,
>
> The main problem with a 2-node cluster is that if one node is down, you
> lose quorum, so /etc/pve is read-only.
>
> If that occurs, you can manually tell Proxmox that you want only one
> node in the quorum, with the "pvecm expected 1" command. Then you'll be
> able to write again in /etc/pve.
> (But only do this when you are sure that the other node is down.)
>
> >> So I hope you don't mind me asking a couple of questions since I am
> >> tempted to try again:
>
> >> - Can a proxmox cluster (with failover possibility) work with local
> >>   storage? (Or do I need distributed storage from a NAS / NFS via
> >>   ceph?)
>
> I'm not sure, but maybe with ZFS it's possible. (But the replication is
> async, so you'll lose the data written since the last sync.)
>
> >> - Can I use the failover possibility in a 2-node cluster?
>
> Manually, yes (but not with HA).
> You can use "pvecm expected 1", then on node1 run
> "mv /etc/pve/nodes/node2/qemu-server/* /etc/pve/nodes/node1/qemu-server"
> to move the VM configs.
> Then, if the storage is available on node1 (shared storage, or maybe
> local ZFS), you'll be able to start the VMs.
>
> >> - Can I use vm migration in a 2-node cluster?
>
> Yes, sure.
>
> >> - Does it matter if I have the storage 'mounted' in Datacenter
> >>   rather than on the server in a directory (GUI-wise)? (Datacenter >
> >>   storage > 'mounts')
>
> It is the same, but for network storage (NFS, CIFS) it's better to use
> the datacenter option, as it monitors the server: if a network timeout
> occurs, the pvestatd daemon will not hang when trying to get stats.
>
> >> - Is it better to rename the mounts to vmdata rather than vmdata4 /
> >>   vmdata5?
>
> For failover, you only need to have the same "storage name" defined for
> each node.
> So yes, the local mountpoint should be the same on each node, as you
> can define a storage name only once at datacenter level.
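A minimal sketch of the manual failover Alexandre describes, assuming a
hypothetical VMID 100 and node names node1/node2, and assuming you have
verified that node2 is really down:

----
# On the surviving node (node1), lower the expected vote count so this
# single node is quorate again and /etc/pve becomes writable:
pvecm expected 1

# Claim the dead node's VM config (a config lives on exactly one node):
mv /etc/pve/nodes/node2/qemu-server/100.conf /etc/pve/nodes/node1/qemu-server/

# If VM 100's disks are on storage reachable from node1, start it:
qm start 100
----

With purely local, non-replicated storage, the disks of VM 100 would not
exist on node1, so this recovery only helps with shared storage or
something like ZFS replication.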
> ----- Original Mail -----
> From: "Rutger Verhoeven"
> To: "proxmoxve"
> Sent: Friday, 28 August 2020 18:41:45
> Subject: [PVE-User] Proxmox (2 node) cluster questions
>
> Hello all,
>
> A while ago I attempted to join 2 nodes into a cluster. I did this on
> proxmox 6.0 and had the following setup:
> - 2x 2TB SSD (1x 250GB partition to install proxmox on, 1x 1750GB
>   partition to store VM data on). I mounted these in /var/lib/vmdata5
>   on server5 (and vmdata4 on server4).
> - 2x 5TB 5400RPM for extra storage. This is called vmstorage5 or
>   vmstorage4 and is also mounted in /var/lib.
>
> Underneath is an example of my storage.cfg:
> ----
> cat /etc/pve/storage.cfg
>
> dir: local
>         path /var/lib/vz
>         content iso,vztmpl,backup
>
> lvmthin: local-lvm
>         thinpool data
>         vgname pve
>         content rootdir,images
>
> dir: vmdata4
>         path /var/lib/vmdata4
>         content vztmpl,iso,snippets,backup,images,rootdir
>         maxfiles 2
>         nodes server4
>         shared 1
>
> dir: vmstorage4
>         path /var/lib/vmstorage4
>         content vztmpl,iso,snippets,backup,rootdir,images
>         maxfiles 2
>         nodes server4
>         shared 1
>
> nfs: VM
>         export /VM
>         path /mnt/pve/VM
>         server qnap.domain.local
>         content rootdir,vztmpl,iso,snippets,backup,images
>         maxfiles 1
>         nodes server4
>         options vers=4
> ----
>
> Also output of 'df -h':
> ----
> df -h
> Filesystem                    Size  Used Avail Use% Mounted on
> udev                           48G     0   48G   0% /dev
> tmpfs                         9.5G  9.7M  9.5G   1% /run
> /dev/mapper/pve-root           30G  2.9G   25G  11% /
> tmpfs                          48G   43M   48G   1% /dev/shm
> tmpfs                         5.0M     0  5.0M   0% /run/lock
> tmpfs                          48G     0   48G   0% /sys/fs/cgroup
> /dev/sda2                     511M  312K  511M   1% /boot/efi
> /dev/mapper/vg_vmdata4-lvv4   1.7T  2.1G  1.6T   1% /var/lib/vmdata4
> /dev/mapper/vg_storage4-lvs4  4.6T  1.8T  2.6T  42% /var/lib/vmstorage4
> /dev/fuse                      30M   20K   30M   1% /etc/pve
> tmpfs                         9.5G     0  9.5G   0% /run/user/0
> nas.domain.local:/VM          7.1T  2.8T  4.3T  40% /mnt/pve/VM
> ----
>
> These machines are connected with the following network setup:
> ----
> cat /etc/network/interfaces
>
> auto lo
> iface lo inet loopback
>
> ### !!! do not touch ilo nic !!! ###
> #auto eno1
> #iface eno1 inet static
> ####################################
>
> ### this nic is used for proxmox clustering ###
> auto eno2
> iface eno2 inet manual
> ###############################################
>
> iface ens1f0 inet manual
>
> iface ens1f1 inet manual
>
> auto bond0
> iface bond0 inet manual
>         bond-slaves ens1f0 ens1f1
>         bond-miimon 100
>         bond-mode 802.3ad
>         bond-xmit-hash-policy layer2+3
>
> auto vmbr0
> iface vmbr0 inet static
>         address 192.168.xx.xx
>         netmask 255.255.252.0
>         gateway 192.168.xx.xx
>         bridge_ports bond0
>         bridge_stp off
>         bridge_fd 0
> ----
>
> The servers both have dual 10 Gb/s SFP+ and are connected via a Unifi
> 10 Gb/s SFP+ switch.
> I haven't been able to create a separate VLAN yet for the management
> interface / separate cluster purposes.
>
> For some reason it backfired: the files in /etc/pve/qemu-server became
> read-only and I could not change anything in these files (nor via the
> GUI). I ended up reinstalling both nodes and solved everything with
> manual migration (via a bash script).
>
> Since then I'm not so eager to try and join these servers in a cluster
> anymore, because server5 is a production machine with all kinds of
> applications on it.
>
> Sidenote: I'm using ansible to create VMs on these machines, but I must
> admit that I mostly work via the GUI.
>
> My goals:
> - One host to manage them all (also for ansible)
> - Easy vm migration between the servers (see the sketch below).
>
> In the Netherlands we have a saying: "the blood crawls where it cannot
> go".
>
> So I hope you don't mind me asking a couple of questions since I am
> tempted to try again:
>
> - Can a proxmox cluster (with failover possibility) work with local
>   storage? (Or do I need distributed storage from a NAS / NFS via
>   ceph?)
> - Can I use the failover possibility in a 2-node cluster?
> - Can I use vm migration in a 2-node cluster?
> - Does it matter if I have the storage 'mounted' in Datacenter rather
>   than on the server in a directory (GUI-wise)? (Datacenter > storage >
>   'mounts')
> - Is it better to rename the mounts to vmdata rather than vmdata4 /
>   vmdata5?
>
> Any tips regarding this are appreciated. Thank you all in advance.

_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
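Tying the last question back to Alexandre's answer: a storage is defined
once, at datacenter level, and made available on several nodes, so a
single shared name is what makes failover work. A hypothetical
storage.cfg entry in the same style as the one quoted above (the name
vmdata and the node list are placeholders):

----
# One storage name with one identical local mountpoint on both nodes:
dir: vmdata
        path /var/lib/vmdata
        content images,rootdir
        nodes server4,server5
----

With a single name like vmdata (instead of vmdata4/vmdata5), a VM config
that references a volume such as vmdata:vm-100-disk-0 stays valid no
matter which node the VM is moved to.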