Date: Fri, 10 Jul 2020 18:46:49 +0200 (CEST)
From: Dietmar Maurer
To: Proxmox VE user list
Subject: Re: [PVE-User] Proxmox Backup Server (beta)

> >> Is this "mount" managed by PBS or do you have to "manually" mount it
> >> outside PBS?
> >
> > Not sure what kind of management you need for that? Usually people
> > mount filesystems using /etc/fstab or by creating systemd mount units.
>
> In PVE you can add a storage (like NFS, for example) via the GUI (or
> directly via the config file) and, if I'm not mistaken, PVE will then
> "manage" the storage (mount it under /mnt/pve, skip a backup if the
> storage is not ready, and so on).

Ah, yes. We currently restrict ourselves to local disks (because of the
performance implications).
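(As a concrete illustration of the manual mount mentioned above: a
CephFS mount outside PBS is a single /etc/fstab line; the monitor
address, mount point and secret file here are only example values:

    10.0.0.1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0

or the equivalent systemd .mount unit.)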
> >>> But this means that you copy data over the network multiple times,
> >>> so this is not the best option performance-wise...
> >>
> >> True, PBS will act as a gateway to the backing storage cluster, but the
> >> data will only be re-routed to the final destination (in this case an
> >> OSD), not copied over (putting aside the Ceph replication policy).
> >
> > That is probably a very simplistic view of things. It involves copying
> > data multiple times, so it will affect performance for sure.
>
> You mean the replication? Yes, it "copies"/distributes the same data to
> multiple targets/disks (more or less the same as RAID or ZFS does). But
> I'm not aware of the internals of PBS, so maybe my reasoning is really
> too simplistic.
>
> > Note: We talk about huge amounts of data.
>
> We back up 2TB of data daily with vzdump over NFS. Because all of the
> backups are full backups, we need a lot of space to keep a reasonable
> retention (8 daily backups + 3 weekly). I resorted to cycling through 5
> relatively large NFS servers, but it involved a complex backup schedule.
> And because the amount of data keeps growing, we are looking for a
> backup solution that integrates with PVE and can easily be expanded.

I would start using the Proxmox Backup Server the way it is designed
for: with a local ZFS storage pool for the backups. This is
high-performance and future-proof.

To get redundancy, you can use a second backup server and sync the
backups. This also makes it much simpler to recover things, because
there is no need to get the Ceph storage online first (always plan for
recovery...).

But sure, you can also use CephFS if it meets your performance
requirements and you have enough network bandwidth.
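For illustration, the ZFS pool + sync setup could look roughly like
this (pool layout, disk, host and datastore names are just examples;
check 'proxmox-backup-manager help' for the exact options in the beta):

    # local ZFS pool, then a PBS datastore on top of it
    zpool create -o ashift=12 backup raidz2 sdb sdc sdd sde
    proxmox-backup-manager datastore create store1 /backup/store1

    # on the second server: pull the backups over with a sync job
    proxmox-backup-manager remote create pbs1 --host pbs1.example.com \
        --userid sync@pbs --password 'xxx' --fingerprint '<fingerprint>'
    proxmox-backup-manager sync-job create job1 --store store1 \
        --remote pbs1 --remote-store store1 --schedule daily

Your retention (8 daily + 3 weekly) then maps directly to the prune
options (--keep-daily 8 --keep-weekly 3), and because PBS deduplicates,
the daily backups no longer cost you the full 2TB each.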