From: Thomas Lamprecht <t.lamprecht@proxmox.com>
To: Proxmox VE user list <pve-user@lists.proxmox.com>,
Tom Weber <pve@junkyard.4t2.com>
Subject: Re: [PVE-User] Proxmox Backup Server (beta)
Date: Fri, 17 Jul 2020 19:43:29 +0200 [thread overview]
Message-ID: <97c35a8e-eeb6-1d49-6a09-bc7f367f89dc@proxmox.com> (raw)
In-Reply-To: <1edc84ae5a9b082c5744149ce7d3e0dfdc32a2ae.camel@junkyard.4t2.com>
On 17.07.20 15:23, Tom Weber wrote:
> On Friday, 17.07.2020 at 09:31 +0200, Fabian Grünbichler wrote:
>> On July 16, 2020 3:03 pm, Tom Weber wrote:
>>> On Tuesday, 14.07.2020 at 17:52 +0200, Thomas Lamprecht wrote:
>>>>
>>>> Proxmox Backup Server effectively does that too, but independent from
>>>> the source storage. We always get the last backup index and only upload
>>>> the chunks which changed. For running VMs the dirty-bitmap is on to
>>>> improve this (avoids reading of unchanged blocks), but it's only an
>>>> optimization - the backup is incremental either way.
>>>
>>> So there is exactly one dirty-bitmap that gets nulled after a
>>> backup?
>>>
>>> I'm asking because I have backup setups with 2 backup servers at
>>> different locations, backing up (file-level, incremental) on odd days
>>> to server1 and on even days to server2.
>>>
>>> Such a setup wouldn't work with the block-level incremental backup and
>>> the dirty-bitmap for PVE VMs + PBS, right?
>>>
>>> Regards,
>>> Tom
>>
>> right now, this would not work, since for each backup the bitmap would
>> be invalidated: the last backup returned by the server does not match
>> the locally stored value. theoretically we could track multiple backup
>> storages, but bitmaps are not free and the handling would quickly
>> become unwieldy.
>>
>> probably you are better off backing up to one server and syncing that
>> to your second one - you can define both as storage on the PVE side and
>> switch over the backup job targets if the primary one fails.
>>
>> theoretically[1]
>>
>> 1.) backup to A
>> 2.) sync A->B
>> 3.) backup to B
>> 4.) sync B->A
>> 5.) repeat
>>
>> works as well and keeps the bitmap valid, but you need to carefully
>> lock-step the backup and sync jobs, so it's probably less robust than:
>>
>> 1.) backup to A
>> 2.) sync A->B
>>
>> where missing a sync is not ideal, but does not invalidate the
>> bitmap.
>>
>> note that your backup will still be incremental in any case w.r.t.
>> client <-> server traffic; the client just has to re-read all disks to
>> decide whether it has to upload each chunk or not, if the bitmap is not
>> valid or does not exist.
>>
>> 1: theoretically, as you probably run into
>> https://bugzilla.proxmox.com/show_bug.cgi?id=2864 unless you do your
>> backups as 'backup@pam', which is not recommended ;)
>>
>
> thanks for the very detailed answer :)
>
> I was already thinking that this wouldn't work like my current setup.
>
> Once the bitmap on the source side of the backup gets corrupted for
> whatever reason, incremental backups wouldn't work and would break.
> Is there some way the system would report such a "corrupted" bitmap?
> I'm thinking of a manual / test / accidental backup run to a different
> backup server, which could ruin all further regular incremental backups
> undetected.
If a backup fails, or the last backup index we get does not match the
checksum we cache in the VM's QEMU process, we drop the bitmap and read
everything again (the upload is still incremental against the index we
just got), and set up a new bitmap from that point.
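To make that decision concrete, here is a toy sketch of the logic
described above - not the actual QEMU/PBS code, all names and structure
are made up for illustration:

```python
# Hypothetical sketch of the client-side decision on backup start.
# The cached checksum of the previous backup index is compared against
# the one the server reports; on mismatch (or missing bitmap) all
# blocks are re-read, but the upload stays incremental against the
# server's last index.

def plan_backup(cached_index_csum, server_index_csum, dirty_bitmap):
    """Return (blocks_to_read, reason) for the next backup run."""
    if dirty_bitmap is not None and cached_index_csum == server_index_csum:
        # Bitmap is trusted: read only the blocks marked dirty.
        return sorted(dirty_bitmap), "incremental read (bitmap valid)"
    # Bitmap invalid or missing: re-read every block; chunks already
    # present in the server's index are still not uploaded again.
    return "all", "full read, incremental upload"
```

Either way a fresh bitmap is then set up against the new index, so the
next run is fast again.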
>
>
> about my setup scenario - a bit off topic - backing up to 2 different
> locations every other day basically doubles my backup space and reduces
> the risk of one failing backup server - of course by taking a 50:50
> chance of needing to go back 2 days in a worst case scenario.
> Syncing the backup servers would require twice the space capacity (and
> additional bw).
I do not think it would require twice as much space. With the odd/even
setup you already store two copies of what would normally be used for a
single backup target. So even if deduplication between backups were way
off, syncing remotes would need no more than that. And normally you
should need less, as deduplication reduces the per-server storage space,
so the doubled space usage from syncing is actually smaller than the
doubled space usage from the odd/even backups - or am I missing
something?
Note that remotes sync only the delta since the last sync, so bandwidth
correlates with that churn. And as long as the churn stays below 50% of
the size of a full backup, you still need less total bandwidth than the
odd/even full-backup approach - at least when averaged over time.
cheers,
Thomas