From: Thomas Lamprecht
To: Proxmox VE user list, Tom Weber
Date: Fri, 17 Jul 2020 19:43:29 +0200
Subject: Re: [PVE-User] Proxmox Backup Server (beta)

On 17.07.20 15:23, Tom Weber wrote:
> On Friday, 17.07.2020 at 09:31 +0200, Fabian Grünbichler wrote:
>> On July 16, 2020 3:03 pm, Tom Weber wrote:
>>> On Tuesday, 14.07.2020 at 17:52 +0200, Thomas Lamprecht wrote:
>>>>
>>>> Proxmox Backup Server effectively does that too, but independent of
>>>> the source storage. We always fetch the last backup index and only
>>>> upload the chunks that changed. For running VMs the dirty-bitmap is
>>>> enabled to improve this (it avoids reading unchanged blocks), but
>>>> that is only an optimization - the backup is incremental either way.
>>>
>>> So there is exactly one dirty-bitmap that gets nulled after a
>>> backup?
>>>
>>> I'm asking because I have backup setups with 2 backup servers at
>>> different locations, backing up (file-level, incremental) on odd
>>> days to server1 and on even days to server2.
>>>
>>> Such a setup wouldn't work with the block-level incremental backup
>>> and the dirty-bitmap for PVE VMs + PBS, right?
>>>
>>> Regards,
>>> Tom
>>
>> right now, this would not work: for each backup the bitmap would be
>> invalidated, since the last backup returned by the server does not
>> match the locally stored value. theoretically we could track multiple
>> backup storages, but bitmaps are not free and the handling would
>> quickly become unwieldy.
>>
>> probably you are better off backing up to one server and syncing
>> that to your second one - you can define both as storage on the PVE
>> side and switch over the backup job targets if the primary one fails.
>>
>> theoretically[1]
>>
>> 1.) backup to A
>> 2.) sync A->B
>> 3.) backup to B
>> 4.) sync B->A
>> 5.) repeat
>>
>> works as well and keeps the bitmap valid, but you need to carefully
>> lock-step the backup and sync jobs, so it's probably less robust than:
>>
>> 1.) backup to A
>> 2.) sync A->B
>>
>> where missing a sync is not ideal, but does not invalidate the bitmap.
>>
>> note that your backup will still be incremental in any case w.r.t.
>> client <-> server traffic; the client just has to re-read all disks
>> to decide whether it has to upload those chunks or not if the bitmap
>> is not valid or does not exist.
>>
>> 1: theoretically, as you probably run into
>> https://bugzilla.proxmox.com/show_bug.cgi?id=2864 unless you do your
>> backups as 'backup@pam', which is not recommended ;)
>>
>
> thanks for the very detailed answer :)
>
> I was already thinking that this wouldn't work like my current setup.
>
> Once the bitmap on the source side of the backup gets corrupted for
> whatever reason, incremental wouldn't work and would break.
> Is there some way the system would notify about such a "corrupted"
> bitmap?
> I'm thinking of a manual / test / accidental backup run to a different
> backup server which would / could ruin all further regular incremental
> backups undetected.

If a backup fails, or the last backup index we get does not match the
checksum we cache in the VM's QEMU process, we drop the bitmap and read
everything again (the upload is still incremental against the index we
just got), and set up a new bitmap from that point.
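Roughly, the logic is something like this toy Python model - the names
and data structures are made up, it is not the actual QEMU/PBS code,
just the behaviour described above:

# Toy model of the bitmap-reuse decision (made-up names/structures,
# not the actual QEMU/PBS implementation).
import hashlib

def _digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def _index_csum(index: dict) -> str:
    return _digest(repr(sorted(index.items())).encode())

def backup(disk, last_index, cached_csum, dirty_bitmap):
    """disk:         list of bytes, one entry per fixed-size block
    last_index:   {block_no: chunk digest} from the server's last backup,
                  or None if there is none
    cached_csum:  csum of the index we uploaded last time (kept in the
                  running VM process), or None
    dirty_bitmap: set of block numbers written since then, or None
    Returns (new_index, uploaded_block_nos, read_block_nos)."""
    last_index = last_index or {}
    # Reuse the bitmap only if it exists and the server's last index is
    # the very one this bitmap tracked changes against.
    bitmap_ok = (dirty_bitmap is not None
                 and cached_csum == _index_csum(last_index))

    # Valid bitmap: read only the dirty blocks. Otherwise: drop it, read
    # the whole disk, and start over with a fresh (empty) bitmap.
    to_read = sorted(dirty_bitmap) if bitmap_ok else list(range(len(disk)))

    known = set(last_index.values())
    new_index, uploaded = dict(last_index), []
    for blk in to_read:
        d = _digest(disk[blk])
        new_index[blk] = d
        # The upload stays incremental either way: chunks the server
        # already references are not sent again.
        if d not in known:
            uploaded.append(blk)
    return new_index, uploaded, to_read

With a stale cached_csum (e.g. after an accidental backup to another
server) it falls back to reading every block, but still only uploads
chunks the target does not already reference - the "incremental either
way" part.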
>
>
> about my setup scenario - a bit off topic - backing up to 2 different
> locations every other day basically doubles my backup space and
> reduces the risk of one failing backup server - of course by taking a
> 50:50 chance of needing to go back 2 days in a worst-case scenario.
> Syncing the backup servers would require twice the space capacity (and
> additional bandwidth).

I do not think it would require twice as much space. You already have
two copies of what would normally be stored for a single backup target,
so even if deduplication between backups were way off, syncing remotes
would need no more than that. And normally you should need less:
deduplication reduces the storage used per backup server, so the
doubled space usage from syncing ends up smaller than the doubled space
usage from the odd/even backups - or?

Note that remotes only sync the delta since the last sync, so bandwidth
correlates with that delta churn. And as long as that churn stays below
50% of a full backup's size, you still need less total bandwidth than
with the odd/even full-backup approach - at least averaged over time.
(A rough sketch of that delta-only sync is below.)

cheers,
Thomas
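P.S.: a minimal toy sketch of that delta-only sync, assuming a chunk
store keyed by digest - the names are made up for illustration and this
is not the actual sync-job code:

# Toy delta sync: only chunks the target does not already store are
# transferred, so bandwidth scales with the churn between two syncs.
def sync(source_chunks: dict, target_chunks: dict) -> int:
    """source_chunks/target_chunks: {digest: chunk bytes} per datastore.
    Copies missing chunks to the target and returns bytes transferred."""
    transferred = 0
    for digest, data in source_chunks.items():
        if digest not in target_chunks:
            target_chunks[digest] = data
            transferred += len(data)
    return transferred

# Example with made-up data: 100 chunks on A, 95 already on B -> only
# the 5 new ones move over the wire.
a = {f"d{i}": b"x" * 1024 for i in range(100)}
b = {f"d{i}": b"x" * 1024 for i in range(95)}
print(sync(a, b))   # 5120 bytes, i.e. just the delta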