public inbox for pve-user@lists.proxmox.com
From: Iztok Gregori <iztok.gregori@elettra.eu>
To: pve-user@lists.proxmox.com
Subject: Re: [PVE-User] Proxmox Backup Server (beta)
Date: Fri, 10 Jul 2020 18:29:22 +0200	[thread overview]
Message-ID: <511457c6-d878-b157-18da-0140a83ef52b@elettra.eu> (raw)
In-Reply-To: <39487713.486.1594395087264@webmail.proxmox.com>

On 10/07/20 17:31, Dietmar Maurer wrote:
>> On 10/07/20 15:41, Dietmar Maurer wrote:
>>>> Are you planning to support also CEPH (or other distributed file
>>>> systems) as destination storage backend?
>>>
>>> It is already possible to put the datastore on a mounted cephfs, or
>>> anything else you can mount on the host.
>>
>> Is this "mount" managed by PBS, or do you have to "manually" mount it
>> outside PBS?
> 
> Not sure what kind of management you need for that? Usually people
> mount filesystems using /etc/fstab or by creating systemd mount units.
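For what it's worth, such a "manual" mount is a one-liner; a CephFS datastore mount in /etc/fstab could look roughly like this (monitor address, client name, and paths below are invented for illustration, not taken from this thread):

```
# /etc/fstab -- kernel CephFS client; all addresses and paths are illustrative
10.0.0.1:6789:/ /mnt/pbs-datastore ceph name=backup,secretfile=/etc/ceph/backup.secret,noatime,_netdev 0 0
```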

In PVE you can add a storage (like NFS, for example) via the GUI (or 
directly via the config file) and, if I'm not mistaken, PVE will then 
"manage" the storage (mount it under /mnt/pve, skip the backup if the 
storage is not ready, and so on).
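For comparison, such a PVE-managed NFS storage is only a few lines in /etc/pve/storage.cfg; a sketch with an invented storage ID, server, and export path:

```
nfs: backup-nfs
	export /export/backups
	path /mnt/pve/backup-nfs
	server 10.0.0.2
	content backup
	maxfiles 8
```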

> 
>>> But this means that you copy data over the network multiple times,
>>> so this is not the best option performance wise...
>>
>> True, PBS will act as a gateway to the backing storage cluster, but the
>> data will only be re-routed to the final destination (in this case an
>> OSD), not copied over (putting aside the Ceph replication policy).
> 
> That is probably a very simplistic view of things. It involves copying data
> multiple times, so it will affect performance for sure.

You mean the replication? Yes, it "copies"/distributes the same data to 
multiple targets/disks (much like RAID or ZFS does). But I'm not aware 
of the internals of PBS, so maybe my reasoning really is too simplistic.
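To make the "copied multiple times" point concrete, here is a back-of-envelope count of the network traffic for one backup run. It assumes a 3x-replicated Ceph pool and primary-copy fan-out (the client writes to the primary OSD, which forwards to the replicas); both are assumptions for the sketch, not measured figures.

```python
# Rough network traffic for one 2 TB backup run through a PBS-on-CephFS
# gateway, assuming 3x replication and primary-copy fan-out.
backup_tb = 2                    # data sent by the PVE host (from this thread)
replication = 3                  # assumed Ceph pool size
# hops: client -> PBS, PBS -> primary OSD, primary -> (replication - 1) replicas
network_tb = backup_tb * (1 + 1 + (replication - 1))
print(network_tb)  # 8 TB crossing the network in total
```

So even if nothing is "copied" from the user's point of view, the same bytes traverse the network several times, which is presumably what the performance remark refers to.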

> 
> Note: we are talking about huge amounts of data.

We back up 2 TB of data daily with vzdump over NFS. Because all of the 
backups are full backups, we need a lot of space to keep a reasonable 
retention (8 daily backups + 3 weekly). I resorted to cycling through 5 
relatively large NFS servers, but that required a complex backup 
schedule. Since the amount of data keeps growing, we are looking for a 
backup solution that integrates with PVE and can be easily expanded.
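The space requirement behind that retention policy is easy to work out (figures taken from the message above; no deduplication or compression assumed):

```python
# Raw space needed to keep full backups under the stated retention policy.
daily_full_tb = 2        # one full vzdump run, per the message
retained = 8 + 3         # 8 daily + 3 weekly backups kept
total_tb = daily_full_tb * retained
print(total_tb)  # 22 TB of raw backup space
```

Which is why a deduplicating, incremental scheme like the one PBS promises is attractive here.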


> 
>> So
>> performance-wise you are limited by the bandwidth of the PBS network
>> interfaces (as you would be for a local network storage server) and by
>> the speed of the backing Ceph cluster. Maybe you will lose something in
>> raw performance (though depending on the Ceph cluster you could also
>> gain something), but you gain "easily" expandable storage space and no
>> single point of failure.
> 
> Sure, that's true. Would be interesting to get some performance stats for
> such setup...

You mean performance stats about Ceph, or about PBS backed by CephFS? 
For the latter we could try something in autumn, when some servers will 
become available.
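In the meantime, a lower bound for such stats follows directly from the NIC-bandwidth argument above. Assuming a single 10 GbE interface on the PBS gateway and ignoring protocol overhead (both are assumptions for the sketch):

```python
# NIC-bound lower bound for moving one full 2 TB backup through the gateway,
# assuming one 10 Gbit/s interface and no protocol overhead.
data_bytes = 2 * 10**12                 # 2 TB backup set
link_bytes_per_s = 10 * 10**9 / 8       # 10 Gbit/s expressed in bytes/s
seconds = data_bytes / link_bytes_per_s
minutes = seconds / 60
print(seconds, minutes)  # 1600.0 seconds, i.e. roughly 27 minutes
```

Any real measurement on the CephFS-backed setup would sit somewhere above that floor.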

Cheers

Iztok Gregori




