From: Tom Weber
To: pve-user@lists.proxmox.com
Date: Sat, 18 Jul 2020 16:59:20 +0200
Subject: Re: [PVE-User] Proxmox Backup Server (beta)

On Friday, 17.07.2020 at 19:43 +0200, Thomas Lamprecht wrote:
> On 17.07.20 15:23, Tom Weber wrote:
> > Thanks for the very detailed answer :)
> > 
> > I was already thinking that this wouldn't work like my current
> > setup.
> > 
> > Once the bitmap on the source side of the backup gets corrupted
> > for whatever reason, incremental backups would break.
> > Is there some way the system would report such a "corrupted"
> > bitmap?
> > I'm thinking of a manual / test / accidental backup run to a
> > different backup server which could ruin all further regular
> > incremental backups undetected.
> 
> If a backup fails, or the last backup index we get doesn't match the
> checksum we cache in the VM's QEMU process, we drop the bitmap and
> read everything (it is still sent incrementally against the index we
> got now), and set up a new bitmap from that point.

Ah, I think I'm starting to understand (I've now read a bit about the
QEMU side, too) :)

So you keep a checksum/signature of a successful backup run together
with the (non-persistent) dirty bitmap in QEMU.
The next backup run checks this and only uses the bitmap if it
matches; otherwise it falls back to reading all QEMU blocks and
comparing them against the ones in the backup, saving only the
changed ones?

If that's the case, it's the answer I was looking for :)

> > About my setup scenario - a bit off topic - backing up to 2
> > different locations every other day basically doubles my backup
> > space and reduces the risk of one failing backup server - of
> > course at the cost of a 50:50 chance of having to go back 2 days
> > in the worst case.
> > Syncing the backup servers would require twice the space capacity
> > (and additional bandwidth).
> 
> I do not think it would require twice as much space. You already
> have two copies of what would normally be used for a single backup
> target. So even if deduplication between backups is way off, you'd
> still only need that much if you sync remotes. And normally you
> should need less, as deduplication should reduce the storage space
> per backup server, so the doubled space usage from syncing is
> actually smaller than the doubled space usage from the odd/even
> backups - or?

First of all, that backup scenario was not designed for a block-level
incremental backup the way PBS does it. I don't know yet whether I'd
set things up like this for PBS, but it probably helps to understand
why it raised the above question.

Say the same "area" of data changes every day, e.g. 1 GB, I do
incremental backups, and I have about 10 GB of space for them on each
of 2 independent servers. Backing up incrementally odd/even to those
2 backup servers, I end up with 20 days of history, whereas with 2
synchronized backup servers only 10 days of history are possible (one
could also translate this into doubled backup space ;) ).
And then there are bandwidth considerations between these 3
locations.

> Note that remotes sync only the delta since the last sync, so
> bandwidth correlates with that delta churn. And as long as that
> churn stays below 50% of the size of a full backup, you still need
> less total bandwidth than the odd/even full-backup approach. At
> least averaged over time.

Ohh... I think that's the misunderstanding: I wasn't talking about
odd/even FULL backups! Right now I'm doing odd/even incremental
backups - incremental against the last state of the backup server I'm
backing up to (backing up what changed in 2 days).

Best,
  Tom
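
A minimal sketch, in Python, of the bitmap-reuse decision discussed
above. All function and parameter names are made up for illustration
and are not the actual QEMU or Proxmox Backup Server interfaces; the
sketch only mirrors the logic from the thread (trust the dirty bitmap
while the cached index checksum still matches the server's latest
index, otherwise re-read everything but still upload only changed
chunks).

  def backup_drive(blocks, read_block, dirty_bitmap, cached_index_csum,
                   server_index_csum, previous_index, chunk_digest,
                   upload_chunk):
      # The non-persistent dirty bitmap is only trustworthy if the last
      # backup we know about (cached index checksum) is still the
      # latest one on this server.
      if dirty_bitmap is not None and cached_index_csum == server_index_csum:
          candidates = [b for b in blocks if b in dirty_bitmap]  # fast path
      else:
          # Bitmap missing or stale (e.g. an extra backup went to a
          # different server in between): drop it, read every block.
          candidates = list(blocks)

      uploaded = 0
      for block in candidates:
          data = read_block(block)
          digest = chunk_digest(data)
          # Even on the slow path the transfer stays incremental:
          # chunks the server already has (same digest in its last
          # index) are skipped, only changed ones are uploaded.
          if previous_index.get(block) != digest:
              upload_chunk(block, data)
              uploaded += 1
      # After a successful run, a fresh bitmap would be started and the
      # checksum of the new index cached (not modelled here).
      return uploaded

  # Tiny usage example: checksums match, so only dirty block 2 is read,
  # and it gets uploaded because its content actually changed.
  n = backup_drive(
      blocks=[1, 2, 3],
      read_block=lambda b: {1: "a", 2: "B", 3: "c"}[b],
      dirty_bitmap={2},
      cached_index_csum="csum-1",
      server_index_csum="csum-1",
      previous_index={1: "a", 2: "b", 3: "c"},
      chunk_digest=lambda data: data,  # identity "digest", enough here
      upload_chunk=lambda b, d: None,
  )
  print(n)  # 1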
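
And a back-of-the-envelope check of the retention numbers above, using
only the figures given in the thread (the same ~1 GB area changing
every day, roughly 10 GB of backup space per server). This is just the
arithmetic of the example, not a statement about how PBS prunes or
deduplicates.

  churn_per_day_gb = 1      # the same ~1 GB "area" changes every day
  space_per_server_gb = 10  # backup space available on each server

  # Each stored incremental covers the ~1 GB that changed, so each
  # server can hold about this many restore points:
  snapshots_per_server = space_per_server_gb // churn_per_day_gb  # 10

  # Odd/even scheme: each server receives a backup only every second
  # day, so its 10 restore points span ~20 days, and together the two
  # servers give a restore point for every one of those 20 days.
  odd_even_history_days = snapshots_per_server * 2                # 20

  # Synchronized servers: both hold the same daily restore points, so
  # history is limited to 10 days - and the sync link additionally has
  # to carry the full daily churn (~1 GB/day).
  synced_history_days = snapshots_per_server                      # 10

  print(odd_even_history_days, synced_history_days)               # 20 10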