From: Iztok Gregori <iztok.gregori@elettra.eu>
To: pve-user@lists.proxmox.com
Date: Fri, 10 Jul 2020 18:29:22 +0200
Subject: Re: [PVE-User] Proxmox Backup Server (beta)
Message-ID: <511457c6-d878-b157-18da-0140a83ef52b@elettra.eu>
In-Reply-To: <39487713.486.1594395087264@webmail.proxmox.com>

On 10/07/20 17:31, Dietmar Maurer wrote:
>> On 10/07/20 15:41, Dietmar Maurer wrote:
>>>> Are you planning to support also CEPH (or other distributed file
>>>> systems) as destination storage backend?
>>>
>>> It is already possible to put the datastore on a mounted cephfs, or
>>> anything you can mount on the host.
>>
>> Is this "mount" managed by PBS or do you have to "manually" mount it
>> outside PBS?
>
> Not sure what kind of management you need for that? Usually people
> mount filesystems using /etc/fstab or by creating systemd mount units.

In PVE you can add a storage (like NFS, for example) via the GUI (or
directly via the config file) and, if I'm not mistaken, PVE will then
"manage" the storage: mount it under /mnt/pve, skip the backup if the
storage is not ready, and so on.
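For reference, a plain NFS entry in /etc/pve/storage.cfg looks roughly
like this (server, export and storage name below are made-up examples,
not our real setup, and the exact options may differ between PVE
versions):

    nfs: backup-nfs01
            server 192.0.2.10
            export /export/pve-backups
            path /mnt/pve/backup-nfs01
            content backup
            maxfiles 8
            options vers=3,soft

With an entry like that PVE mounts the export under
/mnt/pve/backup-nfs01 itself and, as far as I know, marks the storage
as inactive when the server is unreachable. I was wondering whether PBS
will get (or even needs) something similar for its datastores.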
>
>>> But this means that you copy data over the network multiple times,
>>> so this is not the best option performance wise...
>>
>> True, PBS will act as a gateway to the backing storage cluster, but
>> the data will only be re-routed to the final destination (in this
>> case an OSD), not copied over (putting aside the CEPH replication
>> policy).
>
> That is probably a very simplistic view of things. It involves copying
> data multiple times, so it will affect performance for sure.

You mean the replication? Yes, it "copies"/distributes the same data
across multiple targets/disks (more or less the same as RAID or ZFS
does). But I'm not aware of the internals of PBS, so maybe my reasoning
really is too simplistic.

>
> Note: We talk about huge amounts of data.

We back up 2 TB of data daily with vzdump over NFS. Because all of the
backups are full backups, we need a lot of space to keep a reasonable
retention (8 daily backups + 3 weekly). I resorted to cycling through 5
relatively large NFS servers, but that involves a complex backup
schedule. And because the amount of data keeps growing, we are looking
for a backup solution which can be integrated into PVE and easily
expanded.

>
>> So performance wise you are limited by the bandwidth of the PBS
>> network interfaces (as you would be for a local network storage
>> server) and by the speed of the backing CEPH cluster. Maybe you will
>> lose something in raw performance (but depending on the CEPH cluster
>> you could also gain something), but you will gain "easily" expandable
>> storage space and no single point of failure.
>
> Sure, that's true. Would be interesting to get some performance stats
> for such a setup...

You mean performance stats about CEPH or about PBS backed by CephFS?
For the latter we could try something in Autumn, when some servers will
become available.

Cheers
Iztok Gregori
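P.S. Just to make the "manual" mount part concrete: for the tests in
Autumn I would probably start with a plain kernel CephFS mount in
/etc/fstab on the PBS host, something like the line below (monitor
addresses, client name, secret file and mount point are only
placeholders):

    # CephFS mount for the PBS datastore (example values only)
    10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789:/ /mnt/pbs-datastore ceph name=pbs,secretfile=/etc/ceph/pbs.secret,noatime,_netdev 0 0

A systemd mount unit would of course work just as well, as Dietmar
says; the point is that the mount lives outside PBS and the datastore
simply points at that directory.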