public inbox for pve-devel@lists.proxmox.com
From: "DERUMIER, Alexandre" <Alexandre.DERUMIER@groupe-cyllene.com>
To: "pve-devel@lists.proxmox.com" <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] qemu 7.0 : fleecing backup (aka: local temp write cache)
Date: Mon, 1 Aug 2022 08:10:48 +0000	[thread overview]
Message-ID: <8e48bfaa-8923-cece-6daf-844e7f76ff64@groupe-cyllene.com> (raw)
In-Reply-To: <b099114e-c185-637f-f454-a75cad7485e9@groupe-cyllene.com>

On 31/07/22 at 18:49, DERUMIER, Alexandre wrote:
> On 31/07/22 at 18:19, Dietmar Maurer wrote:
>>> This is really a blocker for me. I can't use PBS because I'm using
>>> nvme in production, and a 7200rpm hdd backup in a remote site 200km
>>> away with 5ms latency.
>> Why don't you use a local (fast) PBS instance, then sync to the slow remote?
>>
> Hi Dietmar.
>
> Can I use a small, fast local PBS instance without needing to keep the
> full datastore chunks?
>
> I have 300TB of nvme in production; I don't want to buy 300TB of nvme for backup.
>
> I know that I can keep more retention on the slow remote storage, but
> what about the fast local PBS?
>
>
Also, currently, if your PBS server crashes/shuts down/halts while a
backup is running,

the VM's writes are totally frozen.

A network problem can hang the VM too.

That's why I think a local cache (it could be optional) would be a
great improvement.



I found documentation about fleecing.

Technically, it just exposes a new blockdev inside qemu, like a virtual
frozen snapshot.

So I think it could work with the proxmox backup code too.

https://www.mail-archive.com/qemu-devel@nongnu.org/msg876056.html


#### create the fleecing device

qmp: transaction [
    block-dirty-bitmap-add {node: disk0, name: bitmap0, persistent: true}
    blockdev-add* {node-name: tmp-protocol, driver: file, filename: temp.qcow2}
    blockdev-add {node-name: tmp, driver: qcow2, file: tmp-protocol}
    blockdev-add {node-name: cbw, driver: copy-before-write, file: disk0, target: tmp}
    blockdev-replace** {parent-type: qdev, qdev-id: sda, new-child: cbw}
    blockdev-add {node-name: acc, driver: snapshot-access, file: cbw}
]
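The transaction above can be encoded literally as QMP JSON. A minimal Python sketch, assuming the node names from the example (disk0, sda, temp.qcow2, bitmap0); note that blockdev-replace is a command proposed by the linked series (not merged upstream at the time), and upstream qemu does not accept every blockdev-add as a transaction action, so this only mirrors the sketch as written:

```python
import json

def fleecing_transaction(disk_node, qdev_id, tmp_path):
    """Build the QMP 'transaction' command that sets up the fleecing chain.

    Node names follow the example from the mail; blockdev-replace is a
    proposed command from the linked qemu-devel series, not upstream QMP.
    """
    actions = [
        {"type": "block-dirty-bitmap-add",
         "data": {"node": disk_node, "name": "bitmap0", "persistent": True}},
        {"type": "blockdev-add",
         "data": {"node-name": "tmp-protocol", "driver": "file",
                  "filename": tmp_path}},
        {"type": "blockdev-add",
         "data": {"node-name": "tmp", "driver": "qcow2",
                  "file": "tmp-protocol"}},
        {"type": "blockdev-add",
         "data": {"node-name": "cbw", "driver": "copy-before-write",
                  "file": disk_node, "target": "tmp"}},
        {"type": "blockdev-replace",
         "data": {"parent-type": "qdev", "qdev-id": qdev_id,
                  "new-child": "cbw"}},
        {"type": "blockdev-add",
         "data": {"node-name": "acc", "driver": "snapshot-access",
                  "file": "cbw"}},
    ]
    return {"execute": "transaction", "arguments": {"actions": actions}}

# The full wire message, ready to send on the QMP socket:
cmd = fleecing_transaction("disk0", "sda", "temp.qcow2")
wire = json.dumps(cmd)
```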



#### launch the qemu backup (push model) --> the proxmox backup code
should be used here instead


# Add the target node. A qcow2 node is added here, but it could be an
# nbd node or something else
     blockdev-add {node-name: target-protocol, driver: file, filename: target.qcow2}
     blockdev-add {node-name: target, driver: qcow2, file: target-protocol}

# Start the backup
     blockdev-backup {device: acc, target: target, ...}
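These backup commands go over qemu's QMP socket (greeting, then qmp_capabilities, then commands). A minimal client sketch, assuming the socket path /run/qemu/vm0.qmp and a sync=full option standing in for the "..." above (both are my assumptions, not from the original mail):

```python
import json
import socket

def qmp_message(command, arguments=None):
    """Frame a QMP command as a JSON-serializable dict."""
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    return msg

class QMPClient:
    """Minimal client for QEMU's line-delimited JSON QMP unix socket.

    The socket path is hypothetical; start the VM with e.g.
    -qmp unix:/run/qemu/vm0.qmp,server,nowait to get one.
    """

    def __init__(self, path):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(path)
        self.rfile = self.sock.makefile("r")
        json.loads(self.rfile.readline())     # consume the server greeting
        self.execute("qmp_capabilities")      # leave capabilities-negotiation mode

    def execute(self, command, arguments=None):
        wire = json.dumps(qmp_message(command, arguments)).encode() + b"\n"
        self.sock.sendall(wire)
        while True:                           # skip async events until the reply
            reply = json.loads(self.rfile.readline())
            if "return" in reply or "error" in reply:
                return reply

# Usage against a running VM (sync=full fills in the "..." as an assumption):
# qmp = QMPClient("/run/qemu/vm0.qmp")
# qmp.execute("blockdev-add", {"node-name": "target-protocol",
#                              "driver": "file", "filename": "target.qcow2"})
# qmp.execute("blockdev-add", {"node-name": "target", "driver": "qcow2",
#                              "file": "target-protocol"})
# qmp.execute("blockdev-backup", {"device": "acc", "target": "target",
#                                 "sync": "full"})
```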




Thread overview: 8+ messages
2022-07-31 16:19 Dietmar Maurer
2022-07-31 16:49 ` DERUMIER, Alexandre
2022-07-31 17:47   ` Laurent GUERBY
2022-07-31 18:24     ` DERUMIER, Alexandre
2022-08-01  8:10   ` DERUMIER, Alexandre [this message]
  -- strict thread matches above, loose matches on Subject: below --
2022-08-01  8:31 Dietmar Maurer
2022-08-01 14:36 ` DERUMIER, Alexandre
2022-07-31 15:58 DERUMIER, Alexandre
