From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>,
alexandre derumier <aderumier@odiso.com>,
JR Richardson <jmr.richardson@gmail.com>
Subject: Re: [PVE-User] Multi Data Center Cluster or Not
Date: Wed, 28 Apr 2021 08:40:51 +0200 [thread overview]
Message-ID: <5f38f579-fe59-763e-6919-be691912aa87@proxmox.com> (raw)
In-Reply-To: <807b442a-7b57-2918-986d-fda0db321b45@odiso.com>
On 28.04.21 04:03, alexandre derumier wrote:
> On 27/04/2021 20:38, JR Richardson wrote:
>> I'm looking for suggestions for geo-diversity using PROXMOX
>> Clustering. I understand running hypervisors in the same cluster in
>> multiple data centers is possible with high capacity/low latency
>> inter-site links. What I'm learning is there could be better ways,
>> like running PROXMOX backup servers (PBS) with Remote Sync. Using PBS
>> is interesting but would require manually restoring nodes should a
>> failure occur.
>>
>> I'm looking for best practice or suggestions in topology that folks
>> are using successfully or even tales of failure for what to avoid.
>
> If you want the same cluster across multiple datacenters, you really need low latency (for Proxmox && storage), and at least 3 datacenters to keep quorum.
>
> If you need a 2-DC setup, with 1 primary && 1 backup as disaster recovery,
>
> you could manually replicate a ZFS or Ceph storage to the backup DC (with snapshot export/import), or use another storage replication feature if you have a SAN like NetApp for example, and do an rsync of /etc/pve.
>
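The manual ZFS replication sketched above could look roughly like this; dataset names, snapshot names, and the backup host are placeholders, and note that /etc/pve is the pmxcfs FUSE mount, so an rsync of it only captures the config files, not a restorable cluster state:

```shell
# Hypothetical sketch: manual ZFS replication of a guest disk to a backup DC.
# "rpool/data/vm-100-disk-0" and "backup-dc" are placeholders.

# One-time: send a full initial snapshot to the backup DC
zfs snapshot rpool/data/vm-100-disk-0@base
zfs send rpool/data/vm-100-disk-0@base | \
    ssh backup-dc zfs receive -F rpool/data/vm-100-disk-0

# Periodically: send only the changes since the previous snapshot
zfs snapshot rpool/data/vm-100-disk-0@sync1
zfs send -i base rpool/data/vm-100-disk-0@sync1 | \
    ssh backup-dc zfs receive rpool/data/vm-100-disk-0

# Copy the guest configs as well. Do NOT rsync into /etc/pve on the
# remote side (it is pmxcfs there too); stage them in a plain directory.
rsync -av /etc/pve/ backup-dc:/root/pve-config-backup/
```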
We know of setups which use rbd-mirror to mirror their production Ceph pool
to a second DC for recovery on failure. It still needs a bit of a hands-on
approach to set up, and the actual recovery can be prepared in advance too
(pre-create matching VMs, and maybe lock them by default so none is started
by accident).
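For reference, one-way pool mirroring with rbd-mirror looks roughly like the following; pool and site names are just examples, the exact steps depend on the Ceph release, and journal-based pool mode additionally requires the journaling feature on the mirrored images:

```shell
# Rough sketch of one-way rbd-mirror setup between two Ceph clusters.
# Pool name "ceph-vm-pool" and site names "dc-a"/"dc-b" are placeholders.

# On both clusters: enable mirroring on the pool (journal-based pool mode)
rbd mirror pool enable ceph-vm-pool pool

# On the primary: create a bootstrap token, then import it on the backup site
rbd mirror pool peer bootstrap create --site-name dc-a ceph-vm-pool > token
rbd mirror pool peer bootstrap import --site-name dc-b ceph-vm-pool token

# On the backup cluster: run the rbd-mirror daemon to pull changes
systemctl enable --now ceph-rbd-mirror@admin.service

# Check replication health
rbd mirror pool status ceph-vm-pool
```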
We also know some city-gov IT people who run their cluster over multiple
DCs, but they are lucky enough to be able to run redundant fiber with
LAN-like latency between those DCs, which may not be an option for everyone.
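As a rough rule of thumb, corosync wants LAN-like round-trip times between all nodes (single-digit milliseconds); a quick sanity check between prospective DCs could look like this, with the hostname being a placeholder:

```shell
# Measure round-trip latency to a node in the other DC (hostname is a placeholder)
ping -c 20 -q node-dc-b

# On an already-running cluster, corosync reports per-link status itself
corosync-cfgtool -s
```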
Multi-datacenter management is planned, but we are currently still fleshing
out the basics, although some of the features required for it are already in
the works. Nothing will be ready soon, though; just mentioning it as an FYI.
cheers,
Thomas
Thread overview: 4+ messages
2021-04-27 18:38 JR Richardson
2021-04-28 2:03 ` alexandre derumier
2021-04-28 6:40 ` Thomas Lamprecht [this message]
[not found] ` <mailman.839.1619635767.359.pve-user@lists.proxmox.com>
2021-04-28 19:06 ` Fabrizio Cuseo