From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dietmar@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 423F561D60
 for <pve-user@lists.proxmox.com>; Fri, 10 Jul 2020 17:32:09 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id 3AEF923C64
 for <pve-user@lists.proxmox.com>; Fri, 10 Jul 2020 17:31:39 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [212.186.127.180])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id 092BA23C55
 for <pve-user@lists.proxmox.com>; Fri, 10 Jul 2020 17:31:38 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id D10C64311A
 for <pve-user@lists.proxmox.com>; Fri, 10 Jul 2020 17:31:37 +0200 (CEST)
Date: Fri, 10 Jul 2020 17:31:26 +0200 (CEST)
From: Dietmar Maurer <dietmar@proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>
Message-ID: <39487713.486.1594395087264@webmail.proxmox.com>
In-Reply-To: <594092ce-e65d-f9b4-a66c-84ac001f9df9@elettra.eu>
References: <c84ac772-d577-27fd-710c-293d8a4baffe@proxmox.com>
 <2ca19e1b-4c4c-daf7-3a48-aef4a6150ea9@elettra.eu>
 <521875662.472.1594388487491@webmail.proxmox.com>
 <594092ce-e65d-f9b4-a66c-84ac001f9df9@elettra.eu>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Priority: 3
Importance: Normal
X-Mailer: Open-Xchange Mailer v7.10.3-Rev15
X-Originating-Client: open-xchange-appsuite
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.009 Adjusted score from AWL reputation of From: address
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 RCVD_IN_DNSWL_MED        -2.3 Sender listed at https://www.dnswl.org/,
 medium trust
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [PVE-User] Proxmox Backup Server (beta)
X-BeenThere: pve-user@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox VE user list <pve-user.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pve-user>, 
 <mailto:pve-user-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pve-user/>
List-Post: <mailto:pve-user@lists.proxmox.com>
List-Help: <mailto:pve-user-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user>, 
 <mailto:pve-user-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Fri, 10 Jul 2020 15:32:09 -0000

> On 10/07/20 15:41, Dietmar Maurer wrote:
> >> Are you planning to also support CEPH (or other distributed file
> >> systems) as a destination storage backend?
> > 
> > It is already possible to put the datastore on a mounted cephfs, or
> > on anything you can mount on the host.
> 
> Is this "mount" managed by PBS, or do you have to "manually" mount it 
> outside PBS?

Not sure what kind of management you need for that? Usually people
mount filesystems using /etc/fstab or by creating systemd mount units.
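
For example, a rough sketch using the kernel CephFS client via /etc/fstab
(a systemd .mount unit works the same way); the monitor address, client
name, mount point and datastore name are just placeholders, adjust them
to your cluster:

  # /etc/fstab entry - mount the CephFS before the PBS services use it
  192.168.0.1:6789:/  /mnt/ceph-datastore  ceph  name=backup,secretfile=/etc/ceph/backup.secret,noatime,_netdev  0  0

  # mount it once and create a datastore on top of it
  mount /mnt/ceph-datastore
  proxmox-backup-manager datastore create cephstore /mnt/ceph-datastore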

> > But this means that you copy data over the network multiple times,
> > so this is not the best option performance wise...
> 
> True, PBS will act as a gateway to the backing storage cluster, but the 
> data will only be re-routed to the final destination (in this case an 
> OSD), not copied over (putting aside the CEPH replication policy). 

That is probably a very simplistic view of things. It involves copying data
multiple times, so it will affect performance for sure.

Note: We are talking about huge amounts of data.

> So 
> performance-wise you are limited by the bandwidth of the PBS network 
> interfaces (as you will be for a local network storage server) and by 
> the speed of the backing CEPH cluster. Maybe you will lose something in 
> raw performance (but depending on the CEPH cluster you could also gain 
> something), but you will gain "easily" expandable storage space and no 
> single point of failure.

Sure, that's true. It would be interesting to get some performance stats for 
such a setup...