From: Tom Weber <pve@junkyard.4t2.com>
To: pve-user@lists.proxmox.com
Subject: Re: [PVE-User] Proxmox Backup Server (beta)
Date: Sat, 18 Jul 2020 16:59:20 +0200
Message-ID: <0c115f31d30593e8f169ef6c6418128bbdb1435a.camel@junkyard.4t2.com>
In-Reply-To: <97c35a8e-eeb6-1d49-6a09-bc7f367f89dc@proxmox.com>

On Friday, 17 Jul 2020 at 19:43 +0200, Thomas Lamprecht wrote:
> On 17.07.20 15:23, Tom Weber wrote:
> > thanks for the very detailed answer :)
> > 
> > I was already thinking that this wouldn't work like my current
> > setup.
> > 
> > Once the bitmap on the source side of the backup gets corrupted
> > for whatever reason, incremental backups wouldn't work and would
> > break.
> > Is there some way the system would notify about such a "corrupted"
> > bitmap?
> > I'm thinking of a manual / test / accidental backup run to a
> > different backup server, which could ruin all further regular
> > incremental backups undetected.
> 
> If a backup fails, or the last backup index we get doesn't match the
> checksum we cache in the VM's QEMU process, we drop the bitmap, read
> everything (it's still sent incrementally against the index we got
> now), and set up a new bitmap from that point.

ah, I think I'm starting to understand (I've read a bit about the QEMU
side now too) :)

So you keep a checksum/signature of a successful backup run together
with the (non-persistent) dirty bitmap in QEMU.
The next backup run checks this and only makes use of the bitmap if it
matches; otherwise it falls back to reading all QEMU blocks and
comparing them against the ones in the backup, saving only the changed
ones?

If that's the case, it's the answer I was looking for :)
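
If it helps, here's roughly how I picture that check - just a sketch in
made-up Rust, not the real QEMU/PBS code, and all the names are invented:

// Rough sketch of the decision as I picture it -- NOT the actual
// QEMU/PBS code; struct and function names are made up for illustration.
struct BitmapState {
    dirty_blocks: Vec<u64>,     // block indexes marked dirty since the last run
    last_backup_csum: [u8; 32], // checksum of the index of the last successful backup
}

fn plan_backup(
    state: Option<&BitmapState>,
    server_index_csum: [u8; 32],
) -> &'static str {
    match state {
        // bitmap exists and the server still has the backup it was built
        // on: read and send only the dirty blocks
        Some(s) if s.last_backup_csum == server_index_csum => {
            "incremental: read only dirty blocks"
        }
        // bitmap missing, or built against a different backup (e.g. a run
        // to another server in between): read everything, compare against
        // the server's index, upload only changed chunks, start a new bitmap
        _ => "full read, upload only changed chunks, start new bitmap",
    }
}

fn main() {
    // hypothetical checksums, just to exercise both branches
    let csum_a = [0u8; 32];
    let csum_b = [1u8; 32];
    let state = BitmapState { dirty_blocks: vec![3, 7], last_backup_csum: csum_a };

    println!("{}", plan_backup(Some(&state), csum_a)); // checksums match
    println!("{}", plan_backup(Some(&state), csum_b)); // backup changed underneath
    println!("{}", plan_backup(None, csum_a));         // no bitmap yet
}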


> > About my setup scenario - a bit off topic - backing up to 2
> > different locations every other day basically doubles my backup
> > space and reduces the risk of one failing backup server - of course
> > by taking a 50:50 chance of needing to go back 2 days in a worst
> > case scenario.
> > Syncing the backup servers would require twice the space capacity
> > (and additional bandwidth).
> 
> I do not think it would require twice as much space. You already have
> two copies now of what would normally be used for a single backup
> target. So even if deduplication between backups is way off, you'd
> still only need that much if you sync remotes. And normally you should
> need less, as deduplication should reduce the per-server storage
> space, and thus the doubled space usage from syncing is actually
> smaller than the doubled space usage from the odd/even backups - or?

First of all, the backup scenario I described was not designed for
block-level incremental backups the way PBS is intended. I don't know
yet whether I'd do it like this for PBS, but it probably helps to
understand why it raised the above question.

Say the same "area" of data, about 1 GB, changes every day, I do
incremental backups, and I have roughly 10 GB of space for them on 2
independent servers.
Doing those incremental backups odd/even against the 2 backup servers,
I end up with 20 days of history, whereas with 2 synchronized backup
servers only 10 days of history are possible (one could also translate
this into doubled backup space ;) ).

And then there are bandwidth considerations between these 3 locations.

> Note that remotes sync only the delta since the last sync, so
> bandwidth correlates to that delta churn. And as long as that churn
> stays below 50% of the size of a full backup, you still need less
> total bandwidth than the odd/even full-backup approach. At least if
> averaged over time.

Ohh... I think that's the misunderstanding: I wasn't talking about
odd/even FULL backups!
Right now I'm doing odd/even incremental backups - incremental against
the last state of the backup server I'm backing up to (i.e. backing up
what changed in the last 2 days).

Best,
  Tom




