* Re: [PVE-User] Proxmox Backup Server (beta)
@ 2020-10-09 12:10 Lee Lists
0 siblings, 0 replies; 47+ messages in thread
From: Lee Lists @ 2020-10-09 12:10 UTC (permalink / raw)
To: Thomas Lamprecht
Cc: Proxmox VE user list, PVE User List, pve-devel, pbs-devel
Hi Thomas,
Thank you, it was indeed a missing clang installation.
I finally managed to compile PBS on Armbian / aarch64 (https://kobol.io/).
First tests give good results on this RK3399 platform.
┌───────────────────────────────────┬───────────────────┐
│ Name │ Value │
╞═══════════════════════════════════╪═══════════════════╡
│ TLS (maximal backup upload speed) │ not tested │
├───────────────────────────────────┼───────────────────┤
│ SHA256 checksum computation speed │ 885.79 MB/s (44%) │
├───────────────────────────────────┼───────────────────┤
│ ZStd level 1 compression speed │ 139.33 MB/s (19%) │
├───────────────────────────────────┼───────────────────┤
│ ZStd level 1 decompression speed │ 326.64 MB/s (27%) │
├───────────────────────────────────┼───────────────────┤
│ Chunk verification speed │ 271.91 MB/s (36%) │
├───────────────────────────────────┼───────────────────┤
│ AES256 GCM encryption speed │ 561.27 MB/s (15%) │
└───────────────────────────────────┴───────────────────┘
Regards,
Lee
----- Original Message -----
From: "Thomas Lamprecht" <t.lamprecht@proxmox.com>
To: "Proxmox VE user list" <pve-user@lists.proxmox.com>, "Lee Lists" <lists@jave.fr>
Cc: "PVE User List" <pve-user@pve.proxmox.com>, "pbs-devel" <pbs-devel@lists.proxmox.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Thursday, 8 October 2020 10:21:47
Subject: Re: [PVE-User] Proxmox Backup Server (beta)
On 06.10.20 15:12, Lee Lists wrote:
> I'm trying to build proxmox backup server from source,
> but the build failed in compiling zstd lib bindings.
>
> Any clues ?
Some more hints about the build environment and the executed steps would
be great.
Are all build dependencies installed? This error sometimes comes up if
clang isn't correctly installed.
* [PVE-User] Proxmox Backup Server (beta)
@ 2020-07-10 10:56 Martin Maurer
2020-07-10 11:42 ` Roland
` (7 more replies)
0 siblings, 8 replies; 47+ messages in thread
From: Martin Maurer @ 2020-07-10 10:56 UTC (permalink / raw)
To: PVE User List, pve-devel, pbs-devel
We are proud to announce the first beta release of our new Proxmox Backup Server.
It's enterprise-class client-server backup software that backs up virtual machines, containers, and physical hosts. It is specially optimized for the Proxmox Virtual Environment platform and allows you to back up and replicate your data securely. It provides easy management with a command-line and web-based user interface, and is licensed under the GNU Affero General Public License v3 (GNU AGPL, v3).
Proxmox Backup Server supports incremental backups, deduplication, compression and authenticated encryption. Using Rust https://www.rust-lang.org/ as the implementation language guarantees high performance, low resource usage, and a safe, high-quality code base. It features strong encryption done on the client side. Thus, it's possible to back up data to targets that are not fully trusted.
Main Features
Support for Proxmox VE:
The Proxmox Virtual Environment is fully supported and you can easily backup virtual machines (supporting QEMU dirty bitmaps - https://www.qemu.org/docs/master/interop/bitmaps.html) and containers.
Performance:
The whole software stack is written in Rust https://www.rust-lang.org/, to provide high speed and memory efficiency.
Deduplication:
Periodic backups produce large amounts of duplicate data. The deduplication layer avoids redundancy and minimizes the used storage space.
Incremental backups:
Changes between backups are typically small. Reading and sending only the delta reduces the storage and network impact of backups.
Data Integrity:
The built-in SHA-256 https://en.wikipedia.org/wiki/SHA-2 checksum algorithm ensures the accuracy and consistency of your backups.
Remote Sync:
It is possible to efficiently synchronize data to remote sites. Only deltas containing new data are transferred.
Compression:
The ultra-fast Zstandard compression can compress several gigabytes of data per second.
Encryption:
Backups can be encrypted on the client side using AES-256 in Galois/Counter Mode (GCM https://en.wikipedia.org/wiki/Galois/Counter_Mode). This authenticated encryption mode provides very high performance on modern hardware.
Web interface:
Manage Proxmox backups with the integrated web-based user interface.
Open Source:
No secrets. Proxmox Backup Server is free and open-source software. The source code is licensed under AGPL, v3.
Support:
Enterprise support will be available from Proxmox.
And of course - Backups can be restored!
Release notes
https://pbs.proxmox.com/wiki/index.php/Roadmap
Download
https://www.proxmox.com/downloads
Alternate ISO download:
http://download.proxmox.com/iso
Documentation
https://pbs.proxmox.com
Community Forum
https://forum.proxmox.com
Source Code
https://git.proxmox.com
Bugtracker
https://bugzilla.proxmox.com
FAQ
Q: How does this integrate into Proxmox VE?
A: Just add your Proxmox Backup Server storage as a new backup target to your Proxmox VE. Make sure that you have at least pve-manager 6.2-9 installed.
Q: What will happen with the existing Proxmox VE backup (vzdump)?
A: You can still use vzdump. The new backup is an additional but very powerful way to back up and restore your VMs and containers.
Q: Can I already backup my other Debian servers (file backup agent)?
A: Yes, just install the Proxmox Backup Client (https://pbs.proxmox.com/docs/installation.html#install-proxmox-backup-client-on-debian).
Q: Are there already backup agents for other distributions?
A: Not packaged yet, but a statically linked binary should work in most cases on modern Linux systems (work in progress).
Q: Is there any recommended server hardware for the Proxmox Backup Server?
A: Use enterprise-class server hardware with enough disks for the (big) ZFS pool holding your backup data. The Backup Server should be in the same datacenter as your Proxmox VE hosts.
Q: Where can I get more information about coming feature updates?
A: Follow the announcement forum, pbs-devel mailing list https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel, and subscribe to our newsletter https://www.proxmox.com/news.
Please help us reach the final release date by testing this beta and providing feedback via https://forum.proxmox.com
--
Best Regards,
Martin Maurer
martin@proxmox.com
https://www.proxmox.com
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 10:56 Martin Maurer
@ 2020-07-10 11:42 ` Roland
2020-07-10 12:09 ` Dietmar Maurer
[not found] ` <mailman.77.1594381090.12071.pve-user@lists.proxmox.com>
` (6 subsequent siblings)
7 siblings, 1 reply; 47+ messages in thread
From: Roland @ 2020-07-10 11:42 UTC (permalink / raw)
To: pve-user, martin
great to hear! :)
one technical/performance question - will delta backups be I/O efficient
(like VMware CBT)?
regards
roland
On 10.07.20 at 12:56, Martin Maurer wrote:
> We are proud to announce the first beta release of our new Proxmox
> Backup Server.
> [...]
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 11:42 ` Roland
@ 2020-07-10 12:09 ` Dietmar Maurer
2020-07-10 12:24 ` Roland
0 siblings, 1 reply; 47+ messages in thread
From: Dietmar Maurer @ 2020-07-10 12:09 UTC (permalink / raw)
To: Proxmox VE user list, Roland, martin
> one technical/performance question - will delta backups be I/O efficient
> (like VMware CBT)?
yes
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 12:09 ` Dietmar Maurer
@ 2020-07-10 12:24 ` Roland
2020-07-10 13:43 ` Thomas Lamprecht
2020-07-10 13:44 ` Dietmar Maurer
0 siblings, 2 replies; 47+ messages in thread
From: Roland @ 2020-07-10 12:24 UTC (permalink / raw)
To: Dietmar Maurer, Proxmox VE user list, martin
fantastic! :)
but - how does it work?
On 10.07.20 at 14:09, Dietmar Maurer wrote:
>> one technical/performance question - will delta backups be I/O efficient
>> (like VMware CBT)?
> yes
>
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 12:24 ` Roland
@ 2020-07-10 13:43 ` Thomas Lamprecht
2020-07-10 14:06 ` Roland
2020-07-10 13:44 ` Dietmar Maurer
1 sibling, 1 reply; 47+ messages in thread
From: Thomas Lamprecht @ 2020-07-10 13:43 UTC (permalink / raw)
To: Proxmox VE user list, Roland, Dietmar Maurer, martin
On 10.07.20 14:24, Roland wrote:
> fantastic! :)
>
> but - how does it work ?
It uses content-addressable storage to save the data chunks.
Effectively, the same data chunk doesn't use additional storage if saved more than once.
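A minimal sketch of that idea, assuming a plain directory of chunk files (PBS keys chunks by SHA-256 digests; this dependency-free example substitutes std's hasher as a stand-in):

use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::io;
use std::path::Path;

/// Store a chunk under a name derived from its content; identical
/// chunks map to the same path, so they are stored only once.
fn store_chunk(store: &Path, chunk: &[u8]) -> io::Result<bool> {
    let mut hasher = DefaultHasher::new();
    chunk.hash(&mut hasher);
    // Stand-in digest; the real datastore keys chunks by SHA-256.
    let path = store.join(format!("{:016x}.chunk", hasher.finish()));
    if path.exists() {
        return Ok(false); // deduplicated: chunk already present
    }
    fs::write(&path, chunk)?;
    Ok(true)
}

fn main() -> io::Result<()> {
    let store = Path::new("/tmp/chunk-store"); // placeholder location
    fs::create_dir_all(store)?;
    let data = vec![0u8; 4096];
    println!("first write stored: {}", store_chunk(store, &data)?); // true
    println!("second write stored: {}", store_chunk(store, &data)?); // false
    Ok(())
}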
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 13:43 ` Thomas Lamprecht
@ 2020-07-10 14:06 ` Roland
2020-07-10 14:15 ` Thomas Lamprecht
0 siblings, 1 reply; 47+ messages in thread
From: Roland @ 2020-07-10 14:06 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE user list, Dietmar Maurer, martin
I think there may be a misunderstanding here, or I was not clear enough
to express what I meant.
I guess in terms of backup storage, PBS does something similar to what
borgbackup does - so indeed that IS I/O and storage efficient, but that
refers to the backup target side.
But what about the backup source?
I was referring to VMware CBT as that is a means of avoiding I/O on the
VM storage, i.e. the backup source.
AFAIK, Proxmox/KVM does not (yet) have something like that!?
If you have lots of terabytes of VM disks, each incremental backup run
will hog the VM's storage (the same as a full backup).
In VMware, this is addressed with "changed block tracking", as a backup
agent can determine which blocks of a VM's disks have changed between
incremental backups, so it won't need to scan through the whole VM's
disks on each differential/incremental backup run.
see:
https://kb.vmware.com/s/article/1020128
https://helpcenter.veeam.com/docs/backup/vsphere/changed_block_tracking.html?ver=100
I don't want to criticize Proxmox - I think Proxmox is fantastic - I just
want to know what we get (and what we don't get).
regards
roland
On 10.07.20 at 15:43, Thomas Lamprecht wrote:
> On 10.07.20 14:24, Roland wrote:
>> fantastic! :)
>>
>> but - how does it work ?
> It uses content-addressable storage to save the data chunks.
> Effectively, the same data chunk doesn't use additional storage if saved more than once.
>
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 14:06 ` Roland
@ 2020-07-10 14:15 ` Thomas Lamprecht
2020-07-10 14:46 ` Roland
0 siblings, 1 reply; 47+ messages in thread
From: Thomas Lamprecht @ 2020-07-10 14:15 UTC (permalink / raw)
To: Proxmox VE user list, Roland, Dietmar Maurer, martin
On 10.07.20 16:06, Roland wrote:
> I think there may be a misunderstanding here, or I was not clear enough
> to express what I meant.
>
> I guess in terms of backup storage, PBS does something similar to what
> borgbackup does - so indeed that IS I/O and storage efficient, but that
> refers to the backup target side.
>
> But what about the backup source?
>
> I was referring to VMware CBT as that is a means of avoiding I/O on the
> VM storage, i.e. the backup source.
>
> AFAIK, Proxmox/KVM does not (yet) have something like that!?
Proxmox Backup Server and Proxmox VE support tracking what changed with
dirty-bitmaps; this avoids reading anything from the storage and sending
anything over the network that has not changed.
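As a rough illustration of the dirty-bitmap mechanism (toy types only; the real bitmap lives in QEMU's block layer):

/// One bit per fixed-size block: guest writes mark blocks dirty, a
/// backup reads only dirty blocks and then clears the map.
struct DirtyBitmap {
    block_size: usize,
    bits: Vec<bool>,
}

impl DirtyBitmap {
    fn new(disk_len: usize, block_size: usize) -> Self {
        let blocks = (disk_len + block_size - 1) / block_size;
        DirtyBitmap { block_size, bits: vec![false; blocks] }
    }

    /// Called for every guest write touching [offset, offset + len).
    fn mark_write(&mut self, offset: usize, len: usize) {
        let first = offset / self.block_size;
        let last = (offset + len - 1) / self.block_size;
        for b in first..=last {
            self.bits[b] = true;
        }
    }

    /// The only blocks the next incremental backup has to read at all.
    fn dirty_blocks(&self) -> Vec<usize> {
        self.bits.iter().enumerate()
            .filter(|&(_, &d)| d)
            .map(|(i, _)| i)
            .collect()
    }

    fn clear(&mut self) {
        self.bits.iter_mut().for_each(|b| *b = false);
    }
}

fn main() {
    // 20 GiB disk with 4 MiB blocks; sizes assumed for the example.
    let mut map = DirtyBitmap::new(20 << 30, 4 << 20);
    map.mark_write(5 << 20, 64 << 10); // guest writes 64 KiB at offset 5 MiB
    println!("blocks to back up: {:?}", map.dirty_blocks()); // [1]
    map.clear(); // after a successful backup
}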
>
> If you have lots of terabytes of VM disks, each incremental backup run
> will hog the VM's storage (the same as a full backup).
>
> In VMware, this is addressed with "changed block tracking", as a backup
> agent can determine which blocks of a VM's disks have changed between
> incremental backups, so it won't need to scan through the whole VM's
> disks on each differential/incremental backup run.
see above, we effectively support both - deduplication to reduce target
storage impact and incremental backups to reduce source storage and
network impact.
https://pbs.proxmox.com/docs/introduction.html#main-features
>
> see:
> https://kb.vmware.com/s/article/1020128
> https://helpcenter.veeam.com/docs/backup/vsphere/changed_block_tracking.html?ver=100
>
> I don't want to criticize Proxmox - I think Proxmox is fantastic - I just
> want to know what we get (and what we don't get).
>
No worries, no offense taken ;)
cheers,
Thomas
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 14:15 ` Thomas Lamprecht
@ 2020-07-10 14:46 ` Roland
2020-07-10 17:31 ` Roland
0 siblings, 1 reply; 47+ messages in thread
From: Roland @ 2020-07-10 14:46 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE user list, Dietmar Maurer, martin
wow, this is great to hear, thanks!
On 10.07.20 at 16:15, Thomas Lamprecht wrote:
> [...]
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 14:46 ` Roland
@ 2020-07-10 17:31 ` Roland
0 siblings, 0 replies; 47+ messages in thread
From: Roland @ 2020-07-10 17:31 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE user list, Dietmar Maurer, martin
Works like a charm.
2 seconds to finish an incremental backup job. Works with qcow2,
works with zvol. (Did not test restore yet.)
I'm impressed. Congratulations!
roland
INFO: starting new backup job: vzdump 101 --node pve1.local --storage pbs.local --quiet 1 --mailnotification always --all 0 --compress zstd --mode snapshot
INFO: Starting Backup of VM 101 (qemu)
INFO: Backup started at 2020-07-10 19:16:03
INFO: status = running
INFO: VM Name: grafana.local
INFO: include disk 'scsi0' 'local-zfs-files:101/vm-101-disk-0.qcow2' 20G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/101/2020-07-10T17:16:03Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '5a0a0ef3-2802-42e0-acc3-06147ad1549f'
INFO: resuming VM again
INFO: using fast incremental mode (dirty-bitmap), 48.0 MiB dirty of 20.0 GiB total
INFO: status: 100% (48.0 MiB of 48.0 MiB), duration 1, read: 48.0 MiB/s, write: 48.0 MiB/s
INFO: backup was done incrementally, reused 19.95 GiB (99%)
INFO: transferred 48.00 MiB in 1 seconds (48.0 MiB/s)
INFO: run: /usr/bin/proxmox-backup-client prune vm/101 --quiet 1 --keep-last 2 --repository root@pam@172.16.37.106:ds_backup1
INFO: vm/101/2020-07-10T17:13:29Z Fri Jul 10 19:13:29 2020 remove
INFO: Finished Backup of VM 101 (00:00:02)
INFO: Backup finished at 2020-07-10 19:16:05
INFO: Backup job finished successfully
TASK OK
On 10.07.20 at 16:46, Roland wrote:
> [...]
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 12:24 ` Roland
2020-07-10 13:43 ` Thomas Lamprecht
@ 2020-07-10 13:44 ` Dietmar Maurer
1 sibling, 0 replies; 47+ messages in thread
From: Dietmar Maurer @ 2020-07-10 13:44 UTC (permalink / raw)
To: Roland, Proxmox VE user list, martin
> fantastic! :)
>
> but - how does it work ?
see: https://pbs.proxmox.com/wiki/index.php/Main_Page
[parent not found: <mailman.77.1594381090.12071.pve-user@lists.proxmox.com>]
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 10:56 Martin Maurer
2020-07-10 11:42 ` Roland
[not found] ` <mailman.77.1594381090.12071.pve-user@lists.proxmox.com>
@ 2020-07-10 12:03 ` Lindsay Mathieson
2020-07-10 12:13 ` Dietmar Maurer
2020-07-10 12:45 ` Iztok Gregori
` (4 subsequent siblings)
7 siblings, 1 reply; 47+ messages in thread
From: Lindsay Mathieson @ 2020-07-10 12:03 UTC (permalink / raw)
To: pve-user
On 10/07/2020 8:56 pm, Martin Maurer wrote:
> We are proud to announce the first beta release of our new Proxmox
> Backup Server.
Oh excellent, the backup system really needed some love and this looks
interesting. Since I have no life I'll be testing this on a VM tonight :)
Before I get into it - does the backup server support copying the
backups to an external device such as a USB drive so I can rotate backups
offsite?
--
Lindsay
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 12:03 ` Lindsay Mathieson
@ 2020-07-10 12:13 ` Dietmar Maurer
2020-07-10 15:41 ` Dietmar Maurer
0 siblings, 1 reply; 47+ messages in thread
From: Dietmar Maurer @ 2020-07-10 12:13 UTC (permalink / raw)
To: Proxmox VE user list, Lindsay Mathieson
> Before I get into it - does the backup server support copying the
> backups to an external device such as a USB drive so I can rotate backups
> offsite?
I guess you can simply use rsync to copy the datastore to the USB stick.
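A sketch of that suggestion; the datastore path and USB mount point below are placeholder assumptions, and the rsync flags are just one reasonable choice:

use std::process::Command;

fn main() -> std::io::Result<()> {
    let status = Command::new("rsync")
        .args([
            "-a",                          // archive mode: keep perms, times, ...
            "--delete",                    // mirror prunes/deletions too
            "/mnt/datastore/backup1/",     // assumed PBS datastore directory
            "/media/usb-offsite/backup1/", // assumed USB mount point
        ])
        .status()?;
    // A trailing slash on the source syncs the directory contents.
    assert!(status.success(), "rsync failed");
    Ok(())
}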
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 12:13 ` Dietmar Maurer
@ 2020-07-10 15:41 ` Dietmar Maurer
2020-07-11 11:03 ` mj
0 siblings, 1 reply; 47+ messages in thread
From: Dietmar Maurer @ 2020-07-10 15:41 UTC (permalink / raw)
To: Proxmox VE user list, Lindsay Mathieson
> On 07/10/2020 2:13 PM Dietmar Maurer <dietmar@proxmox.com> wrote:
>
>
> > Before I get into it - does the backup server support copying the
> > backups to an external device such as a USB drive so I can rotate backups
> > offsite?
>
> I guess you can simply use rsync to copy the datastore to the usb stick.
Also, we already have plans to add tape support, so we may support USB drives
as backup media when we implement that. But that is work for the future ...
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 15:41 ` Dietmar Maurer
@ 2020-07-11 11:03 ` mj
2020-07-11 11:38 ` Thomas Lamprecht
0 siblings, 1 reply; 47+ messages in thread
From: mj @ 2020-07-11 11:03 UTC (permalink / raw)
To: pve-user
On 7/10/20 5:41 PM, Dietmar Maurer wrote:
> Also, we already have plans to add tape support, so we may support USB drives
> as backup media when we implement that. But that is work for the futures ...
Tape support would be truly fantastic! We are still using Storix for our
tape backups, and have been looking for an alternative for a couple of
years now.
Being able to use Proxmox Backup Server as a Storix replacement would be
great.
We have one bare-metal Linux server that we are also backing up to tape
using Storix. I guess when adopting Proxmox Backup Server, we would need
to find a new solution for that bare-metal server?
(as in: Proxmox Backup Server is *only* capable of backing up VMs, right..?)
Proxmox Backup Server is a great addition to the Proxmox line of
products! Thanks a lot!
MJ
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-11 11:03 ` mj
@ 2020-07-11 11:38 ` Thomas Lamprecht
2020-07-11 13:34 ` mj
0 siblings, 1 reply; 47+ messages in thread
From: Thomas Lamprecht @ 2020-07-11 11:38 UTC (permalink / raw)
To: Proxmox VE user list, mj
On 11.07.20 13:03, mj wrote:
> On 7/10/20 5:41 PM, Dietmar Maurer wrote:
>> Also, we already have plans to add tape support, so we may support USB drives
>> as backup media when we implement that. But that is work for the futures ...
>
> Tape support would be truly fantastic! We are still using Storix for our tape backups, and have been looking for an alternative for a couple of years now.
>
> Being able to use Proxmox Backup Server as a Storix replacement would be great.
>
> We have one bare-metal Linux server that we are also backing up to tape using Storix. I guess when adopting Proxmox Backup Server, we would need to find a new solution for that bare-metal server?
>
> (as in: Proxmox Backup Server is *only* capable of backing up VMs, right..?)
Nope, can do everything[0][1][2]! You can do file-based backups also. The client
is a statically linked binary and runs on every relatively current Linux with
an amd64-based CPU, so it doesn't even have to be a Debian server.
cheers,
Thomas
[0]: besides file-based backups of filesystems not accessible in Linux, yet ;)
[1]: https://pbs.proxmox.com/docs/administration-guide.html#creating-backups
[2]: https://pbs.proxmox.com/docs/introduction.html#main-features
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-11 11:38 ` Thomas Lamprecht
@ 2020-07-11 13:34 ` mj
2020-07-11 13:47 ` Thomas Lamprecht
2020-07-11 14:40 ` Dietmar Maurer
0 siblings, 2 replies; 47+ messages in thread
From: mj @ 2020-07-11 13:34 UTC (permalink / raw)
To: Thomas Lamprecht, Proxmox VE user list
Hi Thomas,
On 7/11/20 1:38 PM, Thomas Lamprecht wrote:
> Nope, can do everything[0][1][2]! You can do file-based backups also. The client
> is a statically linked binary and runs on every relatively current Linux with
> an amd64-based CPU, so it doesn't even have to be a Debian server.
That is great.
And then some follow-up questions, if I may...:
- I don't see any 'DR' options, right? As in: bare-metal disaster
recovery restores, using a recovery boot ISO, and restoring a system from
scratch to a bootable state. It's not a tool for that, right?
- I guess with VMs etc. the backup will use the available VM options
(Ceph, ZFS, LVM) to snapshot a VM, in order to get consistent backups,
like the current PVE backup does.
But how does that work with a non-VM client? (Some non-VM client systems
run LVM, so LVM could be used to create a snapshot and back that up, for
example. Does it do that? Will my non-VM MySQL backups be consistent?)
- Any timeframe for adding LTO tape support..?
We're really excited, and time permitting I will try to play around with
this Monday/Tuesday. :-)
MJ
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-11 13:34 ` mj
@ 2020-07-11 13:47 ` Thomas Lamprecht
2020-07-11 14:40 ` Dietmar Maurer
1 sibling, 0 replies; 47+ messages in thread
From: Thomas Lamprecht @ 2020-07-11 13:47 UTC (permalink / raw)
To: Proxmox VE user list, mj
Hi MJ,
On 11.07.20 15:34, mj wrote:
> On 7/11/20 1:38 PM, Thomas Lamprecht wrote:
>> Nope, can do everything[0][1][2]! You can do file-based backups also. The client
>> is a statically linked binary and runs on every relatively current Linux with
>> an amd64-based CPU, so it doesn't even have to be a Debian server.
>
> That is great.
>
> And then some follow-up questions, if I may...:
>
> - I don't see any 'DR' options, right? As in: bare-metal disaster recovery restores, using a recovery boot ISO, and restoring a system from scratch to a bootable state. It's not a tool for that, right?
Currently there's no such integrated tool, but honestly I do not think that
would be *that* hard to make. We have a similar process planned for VMs, i.e.,
boot a VM with a live system and the backup disks plugged in as read-only
disks.
Note also that the client already has support for mounting an archive of a backup
locally over a FUSE filesystem implementation - maybe that would help already.
>
> - I guess with VMs etc. the backup will use the available VM options (Ceph, ZFS, LVM) to snapshot a VM, in order to get consistent backups, like the current PVE backup does.
Yes.
> But how does that work with a non-VM client? (Some non-VM client systems run LVM, so LVM could be used to create a snapshot and back that up, for example. Does it do that? Will my non-VM MySQL backups be consistent?)
So here I do not have all the details in mind, but AFAIK: not yet. It detects
some file changes where inconsistencies could have happened, but doesn't
yet try to detect whether the underlying storage supports snapshots and use
that to get a more consistent state. For containers we do that explicitly
through the vzdump tooling.
>
> - Any timeframe for adding LTO tape support..?
No, currently I do not have any, I'm afraid.
> We're really excited, and time permitting I will try to play around with this Monday/Tuesday. :-)
>
Great, hope it fits your use case(s).
cheers,
Thomas
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-11 13:34 ` mj
2020-07-11 13:47 ` Thomas Lamprecht
@ 2020-07-11 14:40 ` Dietmar Maurer
2020-07-14 14:30 ` Alexandre DERUMIER
1 sibling, 1 reply; 47+ messages in thread
From: Dietmar Maurer @ 2020-07-11 14:40 UTC (permalink / raw)
To: Proxmox VE user list, mj, Thomas Lamprecht
> But how does that work with a non-VM client? (Some non-VM client systems
> run LVM, so LVM could be used to create a snapshot and back that up, for
> example. Does it do that? Will my non-VM MySQL backups be consistent?)
Currently not, but there are plans to add that for ZFS (and maybe btrfs).
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-11 14:40 ` Dietmar Maurer
@ 2020-07-14 14:30 ` Alexandre DERUMIER
2020-07-14 15:52 ` Thomas Lamprecht
0 siblings, 1 reply; 47+ messages in thread
From: Alexandre DERUMIER @ 2020-07-14 14:30 UTC (permalink / raw)
To: Proxmox VE user list; +Cc: mj, Thomas Lamprecht
Hi,
I haven't tested it yet or read the full docs,
but is it possible to do Ceph-to-Ceph backup with Ceph snapshots (instead of QEMU bitmap tracking)?
Currently in production we are backing up like that, with incremental snapshots;
we keep X snapshots per VM on the Ceph backup storage, and the production Ceph cluster only keeps the last snapshot.
The main advantage is that we only do a full backup once, then incremental backups forever.
(And we have checksum verification, encryption, ... on the Ceph backup.)
We can restore a full block volume, but also selected files by mounting the volume with NBD.
----- Original Message -----
From: "dietmar" <dietmar@proxmox.com>
To: "Proxmox VE user list" <pve-user@lists.proxmox.com>, "mj" <lists@merit.unu.edu>, "Thomas Lamprecht" <t.lamprecht@proxmox.com>
Sent: Saturday, 11 July 2020 16:40:04
Subject: Re: [PVE-User] Proxmox Backup Server (beta)
> But how does that work with a non-VM client? (Some non-VM client systems
> run LVM, so LVM could be used to create a snapshot and back that up, for
> example. Does it do that? Will my non-VM MySQL backups be consistent?)
Currently not, but there are plans to add that for ZFS (and maybe btrfs).
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-14 14:30 ` Alexandre DERUMIER
@ 2020-07-14 15:52 ` Thomas Lamprecht
2020-07-14 21:17 ` Alexandre DERUMIER
2020-07-16 13:03 ` Tom Weber
0 siblings, 2 replies; 47+ messages in thread
From: Thomas Lamprecht @ 2020-07-14 15:52 UTC (permalink / raw)
To: Alexandre DERUMIER, Proxmox VE user list
Hi,
On 14.07.20 16:30, Alexandre DERUMIER wrote:
> I haven't tested it yet or read the full docs,
The following gives a quick overview:
https://pbs.proxmox.com/docs/introduction.html#main-features
>
> but is it possible to do Ceph-to-Ceph backup with Ceph snapshots (instead of QEMU bitmap tracking)?
No. Ceph (or other storage) snapshots are not used for backups in PBS.
>
> Currently in production we are backing up like that, with incremental snapshots;
>
> we keep X snapshots per VM on the Ceph backup storage, and the production Ceph cluster only keeps the last snapshot.
>
> The main advantage is that we only do a full backup once, then incremental backups forever.
> (And we have checksum verification, encryption, ... on the Ceph backup.)
Proxmox Backup Server effectively does that too, but independently of the
source storage. We always get the last backup index and only upload the chunks
which changed. For running VMs the dirty-bitmap is on to improve this (it avoids
reading unchanged blocks), but it's only an optimization - the backup is
incremental either way.
> We can restore a full block volume, but also selected files by mounting the volume with NBD.
There's a block driver for Proxmox Backup Server, so that should work just
the same way.
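The "fetch the last index, upload only new chunks" loop could be sketched like this (names and types are invented for illustration; the real client/server protocol differs):

use std::collections::HashSet;

type Digest = u64; // stand-in; the real index stores SHA-256 digests

/// Upload only chunks whose digest is missing from the previous
/// backup's index; everything else is referenced, not re-sent.
fn incremental_upload(
    previous_index: &HashSet<Digest>,
    chunks: impl Iterator<Item = (Digest, Vec<u8>)>,
    mut upload: impl FnMut(Digest, Vec<u8>),
) -> (usize, usize) {
    let (mut sent, mut reused) = (0, 0);
    for (digest, data) in chunks {
        if previous_index.contains(&digest) {
            reused += 1; // known chunk: just reference it in the new index
        } else {
            upload(digest, data); // new data: transfer it
            sent += 1;
        }
    }
    (sent, reused)
}

fn main() {
    let previous: HashSet<Digest> = (0..90).collect();
    let chunks = (0..100u64).map(|d| (d, vec![0u8; 16]));
    let (sent, reused) = incremental_upload(&previous, chunks, |_, _| {});
    println!("sent {sent}, reused {reused}"); // sent 10, reused 90
}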
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-14 15:52 ` Thomas Lamprecht
@ 2020-07-14 21:17 ` Alexandre DERUMIER
2020-07-15 4:52 ` Thomas Lamprecht
2020-07-16 13:03 ` Tom Weber
1 sibling, 1 reply; 47+ messages in thread
From: Alexandre DERUMIER @ 2020-07-14 21:17 UTC (permalink / raw)
To: Thomas Lamprecht; +Cc: Proxmox VE user list
>>Proxmox Backup Server effectively does that too, but independently of the
>>source storage. We always get the last backup index and only upload the chunks
>>which changed. For running VMs the dirty-bitmap is on to improve this (it avoids
>>reading unchanged blocks), but it's only an optimization - the backup is
>>incremental either way.
What happens if a VM or host crashes? (I think on clean shutdown the dirty-bitmap is saved, but on failure?)
Does it need to re-read all blocks to find the diff, or make a new full backup?
Is it possible to read files inside a VM backup without restoring it first?
(I haven't checked the VMA format recently, but I think it was not possible because of out-of-order blocks.)
I really think it could be great to add some storage snapshot feature in the future.
For Ceph, the backup speed is much faster because it's done with a bigger block than 64K. (I think it's a 4MB object.)
And also, I really need a lot of space for my backups, and I can't fit them in a single local storage. (I don't want to play with multiple datastores.)
Bonus: it could also be used for disaster recovery management :)
But this seems really great for now; I know a lot of people who will be happy with PBS :)
Congrats to the whole Proxmox team!
----- Original Message -----
From: "Thomas Lamprecht" <t.lamprecht@proxmox.com>
To: "aderumier" <aderumier@odiso.com>, "Proxmox VE user list" <pve-user@lists.proxmox.com>
Sent: Tuesday, 14 July 2020 17:52:41
Subject: Re: [PVE-User] Proxmox Backup Server (beta)
[...]
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-14 21:17 ` Alexandre DERUMIER
@ 2020-07-15 4:52 ` Thomas Lamprecht
[not found] ` <176392164.4390.1594849016963.JavaMail.zimbra@numberall.com>
[not found] ` <mailman.204.1594849027.12071.pve-user@lists.proxmox.com>
0 siblings, 2 replies; 47+ messages in thread
From: Thomas Lamprecht @ 2020-07-15 4:52 UTC (permalink / raw)
To: Alexandre DERUMIER; +Cc: Proxmox VE user list
On 14.07.20 23:17, Alexandre DERUMIER wrote:
>>> Proxmox Backup Server effectively does that too, but independently of the
>>> source storage. We always get the last backup index and only upload the chunks
>>> which changed. For running VMs the dirty-bitmap is on to improve this (it avoids
>>> reading unchanged blocks), but it's only an optimization - the backup is
>>> incremental either way.
>
> What happens if a VM or host crashes? (I think on clean shutdown the dirty-bitmap is saved, but on failure?)
> Does it need to re-read all blocks to find the diff, or make a new full backup?
There's never a new "full backup" as long as the PBS has at least one.
But yes, it needs to re-read everything to get the diff for the first
backup after the VM process starts; from then on the tracking is active again.
>
> Is it possible to read files inside a VM backup without restoring it first?
> (I haven't checked the VMA format recently, but I think it was not possible because of out-of-order blocks.)
There's support for block- and file-level backups; CTs use a file-level
backup, and you can then even browse the backup on the server (if it's not encrypted).
As said, there's a block backend driver for it in QEMU; Stefan made it with
Dietmar's libproxmox-backup-qemu0 library. So you should be able to get a backup
as a block device over NBD and mount it, I guess (have not fully tried that
myself yet).
>
> I really think it could be great to add some storage snapshot feature in the future.
The storage would need to allow us diffing from the outside between the previous
snapshot and the current state though; not sure where that's possible in such
a way that it could be integrated into PBS in a reasonable way.
The Ceph RBD diff format wouldn't seem too bad, though:
https://docs.ceph.com/docs/master/dev/rbd-diff/
> For Ceph, the backup speed is much faster because it's done with a bigger block than 64K. (I think it's a 4MB object.)
We use 4MiB chunks for block-level backup by default too, for file-level they're
dynamic and scale between 64KiB and 4MiB.
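A generic content-defined chunking sketch using the 64 KiB / 4 MiB bounds from this thread (a toy rolling hash, not the actual PBS chunker):

const MIN_CHUNK: usize = 64 * 1024;
const MAX_CHUNK: usize = 4 * 1024 * 1024;
const MASK: u64 = (1 << 21) - 1; // ~2 MiB average chunk size

/// Split data at content-defined boundaries, so an insertion early in
/// the stream does not shift every later chunk boundary.
fn chunk_boundaries(data: &[u8]) -> Vec<usize> {
    let mut boundaries = Vec::new();
    let (mut start, mut hash) = (0usize, 0u64);
    for (i, &b) in data.iter().enumerate() {
        // Toy hash; real chunkers use a proper rolling hash (e.g. Buzhash).
        hash = hash.wrapping_mul(31).wrapping_add(b as u64);
        let len = i + 1 - start;
        if (len >= MIN_CHUNK && (hash & MASK) == 0) || len >= MAX_CHUNK {
            boundaries.push(i + 1); // cut a chunk here
            start = i + 1;
            hash = 0;
        }
    }
    if start < data.len() {
        boundaries.push(data.len()); // trailing partial chunk
    }
    boundaries
}

fn main() {
    let data = vec![7u8; 10 * 1024 * 1024]; // 10 MiB of dummy data
    println!("chunk ends: {:?}", chunk_boundaries(&data));
}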
> And also, I really need a lot of space for my backups, and I can't fit them in a single local storage. (I don't want to play with multiple datastores.)
What are your (rough) space requirements?
You could always attach a big CephFS or RBD device with local FS as a storage too.
Theoretically PBS could live on your separate "backup only" Ceph cluster node, or
be directly attached to it over 25 to 100G.
> Bonus, it could also be used for disaster recovery management :)
Something like that would be nice - what do you have in mind for your use case?
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-14 15:52 ` Thomas Lamprecht
2020-07-14 21:17 ` Alexandre DERUMIER
@ 2020-07-16 13:03 ` Tom Weber
2020-07-17 7:31 ` Fabian Grünbichler
1 sibling, 1 reply; 47+ messages in thread
From: Tom Weber @ 2020-07-16 13:03 UTC (permalink / raw)
To: pve-user
On Tuesday, 14.07.2020 at 17:52 +0200, Thomas Lamprecht wrote:
>
> Proxmox Backup Server effectively does that too, but independently of
> the source storage. We always get the last backup index and only upload
> the chunks which changed. For running VMs the dirty-bitmap is on to
> improve this (it avoids reading unchanged blocks), but it's only an
> optimization - the backup is incremental either way.
So there is exactly one dirty-bitmap that gets nulled after a backup?
I'm asking because I have backup setups with 2 backup servers at
different locations, backing up (file-level, incremental) on odd days
to server1 and on even days to server2.
Such a setup wouldn't work with the block-level incremental backup and
the dirty-bitmap for PVE VMs + PBS, right?
Regards,
Tom
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-16 13:03 ` Tom Weber
@ 2020-07-17 7:31 ` Fabian Grünbichler
2020-07-17 13:23 ` Tom Weber
0 siblings, 1 reply; 47+ messages in thread
From: Fabian Grünbichler @ 2020-07-17 7:31 UTC (permalink / raw)
To: Proxmox VE user list
On July 16, 2020 3:03 pm, Tom Weber wrote:
> On Tuesday, 14.07.2020 at 17:52 +0200, Thomas Lamprecht wrote:
>> [...]
>
> So there is exactly one dirty-bitmap that gets nulled after a backup?
>
> I'm asking because I have backup setups with 2 backup servers at
> different locations, backing up (file-level, incremental) on odd days
> to server1 and on even days to server2.
>
> Such a setup wouldn't work with the block-level incremental backup and
> the dirty-bitmap for PVE VMs + PBS, right?
>
> Regards,
> Tom
Right now this would not work, since for each backup the bitmap would
be invalidated because the last backup returned by the server does not
match the locally stored value. Theoretically we could track multiple
backup storages, but bitmaps are not free and the handling would quickly
become unwieldy.
Probably you are better off backing up to one server and syncing
that to your second one - you can define both as storage on the PVE side
and switch over the backup job targets if the primary one fails.
Theoretically[1],
1.) backup to A
2.) sync A->B
3.) backup to B
4.) sync B->A
5.) repeat
works as well and keeps the bitmap valid, but you carefully need to
lock-step backup and sync jobs, so it's probably less robust than:
1.) backup to A
2.) sync A->B
where missing a sync is not ideal, but does not invalidate the bitmap.
Note that your backup will still be incremental in any case w.r.t.
client <-> server traffic; the client just has to re-read all disks to
decide whether it has to upload each chunk or not if the bitmap is not
valid or does not exist.
1: theoretically, as you probably run into
https://bugzilla.proxmox.com/show_bug.cgi?id=2864 unless you do your
backups as 'backup@pam', which is not recommended ;)
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-17 7:31 ` Fabian Grünbichler
@ 2020-07-17 13:23 ` Tom Weber
2020-07-17 17:43 ` Thomas Lamprecht
0 siblings, 1 reply; 47+ messages in thread
From: Tom Weber @ 2020-07-17 13:23 UTC (permalink / raw)
To: pve-user
On Friday, 17.07.2020 at 09:31 +0200, Fabian Grünbichler wrote:
> [...]
Thanks for the very detailed answer :)
I was already thinking that this wouldn't work like my current setup.
Once the bitmap on the source side of the backup gets corrupted for
whatever reason, incremental backups wouldn't work and would break.
Is there some way the system would notice such a "corrupted" bitmap?
I'm thinking of a manual / test / accidental backup run to a different
backup server which could ruin all further regular incremental
backups undetected.
About my setup scenario - a bit off topic - backing up to 2 different
locations every other day basically doubles my backup space and reduces
the risk of one failing backup server - of course by taking a 50:50
chance of needing to go back 2 days in a worst-case scenario.
Syncing the backup servers would require twice the space capacity (and
additional bandwidth).
For now I'm just trying to understand the features and limits of PBS -
which really looks nice so far!
Regards,
Tom
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-17 13:23 ` Tom Weber
@ 2020-07-17 17:43 ` Thomas Lamprecht
2020-07-18 14:59 ` Tom Weber
0 siblings, 1 reply; 47+ messages in thread
From: Thomas Lamprecht @ 2020-07-17 17:43 UTC (permalink / raw)
To: Proxmox VE user list, Tom Weber
On 17.07.20 15:23, Tom Weber wrote:
> On Friday, 17.07.2020 at 09:31 +0200, Fabian Grünbichler wrote:
>> [...]
>
> Thanks for the very detailed answer :)
>
> I was already thinking that this wouldn't work like my current setup.
>
> Once the bitmap on the source side of the backup gets corrupted for
> whatever reason, incremental backups wouldn't work and would break.
> Is there some way the system would notice such a "corrupted" bitmap?
> I'm thinking of a manual / test / accidental backup run to a different
> backup server which could ruin all further regular incremental
> backups undetected.
If a backup fails, or the last backup index we get doesn't match the
checksum we cache in the VM QEMU process, we drop the bitmap and read
everything (it's still sent incrementally against the index we got now),
and set up a new bitmap from that point.
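In pseudo-Rust, with made-up names, that decision might look like:

/// Cached in the VM's QEMU process alongside the dirty bitmap.
struct BackupState {
    last_index_csum: Option<u64>, // checksum of the last known backup index
    bitmap_valid: bool,
}

impl BackupState {
    /// `server_csum` is the checksum of the last backup index the
    /// server reports when a new backup starts.
    fn plan_backup(&mut self, server_csum: Option<u64>) -> &'static str {
        match (self.last_index_csum, server_csum) {
            (Some(ours), Some(theirs)) if ours == theirs && self.bitmap_valid => {
                "read only dirty blocks (bitmap trusted)"
            }
            (_, Some(_)) => {
                // Unknown or mismatching index: drop the bitmap and re-read
                // everything, but still upload only the chunks the server
                // lacks - the transfer itself stays incremental.
                self.bitmap_valid = false;
                "re-read all blocks, send incrementally, start a new bitmap"
            }
            (_, None) => "no previous backup: read and send everything",
        }
    }
}

fn main() {
    let mut st = BackupState { last_index_csum: Some(42), bitmap_valid: true };
    println!("{}", st.plan_backup(Some(42))); // bitmap trusted
    println!("{}", st.plan_backup(Some(99))); // mismatch: bitmap dropped
}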
>
>
> About my setup scenario - a bit off topic - backing up to 2 different
> locations every other day basically doubles my backup space and reduces
> the risk of one failing backup server - of course by taking a 50:50
> chance of needing to go back 2 days in a worst-case scenario.
> Syncing the backup servers would require twice the space capacity (and
> additional bandwidth).
I do not think it would require twice as much space. You already have
two copies of what would normally be used for a single backup target.
So even if deduplication between backups is way off, you'd still only need
that much if you sync remotes. And normally you should need less, as
deduplication should reduce the per-server storage space, and thus
the doubled space usage from syncing is actually smaller than the doubled
space usage from the odd/even backups - or?
Note that remotes sync only the delta since the last sync, so bandwidth
correlates with that delta churn. And as long as that churn stays below 50%
of the size of a full backup, you still need less total bandwidth than the
odd/even full-backup approach, at least averaged over time.
cheers,
Thomas
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-17 17:43 ` Thomas Lamprecht
@ 2020-07-18 14:59 ` Tom Weber
2020-07-18 18:07 ` Thomas Lamprecht
0 siblings, 1 reply; 47+ messages in thread
From: Tom Weber @ 2020-07-18 14:59 UTC (permalink / raw)
To: pve-user
On Friday, 17.07.2020 at 19:43 +0200, Thomas Lamprecht wrote:
> On 17.07.20 15:23, Tom Weber wrote:
>> [...]
>
> If a backup fails, or the last backup index we get doesn't match the
> checksum we cache in the VM QEMU process, we drop the bitmap and read
> everything (it's still sent incrementally against the index we got
> now), and set up a new bitmap from that point.
Ah, I think I'm starting to understand (read a bit about the QEMU side too
now) :)
So you keep some checksum/signature of a successful backup run with
the one (non-persistent) dirty bitmap in QEMU.
The next backup run can check this and only makes use of the bitmap if
it matches; otherwise it will fall back to reading and comparing all QEMU
blocks against the ones in the backup - saving only the changed ones?
If that's the case, it's the answer I was looking for :)
> > [...]
>
> I do not think it would require twice as much space. You already have
> two copies of what would normally be used for a single backup target.
> So even if deduplication between backups is way off, you'd still only
> need that much if you sync remotes. And normally you should need less,
> as deduplication should reduce the per-server storage space, and thus
> the doubled space usage from syncing is actually smaller than the
> doubled space usage from the odd/even backups - or?
First of all, that backup scenario was not designed with block-level
incremental backup, as PBS does it, in mind. I don't know yet if I'd do
it like this for PBS. But it probably helps to understand why it raised
the above question.
If the same "area" of data changes every day, say 1GB, and I do
incremental backups with about 10GB of space for them on 2 independent
servers, then doing those incremental backups odd/even to the 2 backup
servers gives me 20 days of history, whereas with 2 synchronized backup
servers only 10 days of history are possible (one could also translate
this into doubled backup space ;) ).
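In numbers: with the same 1 GB region rewritten daily and 10 GB of
backup space per server, odd/even gives each server 10 increments taken
every second day (roughly 20 days back), while synced servers both hold
the same 10 daily increments (roughly 10 days back).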
And then there are bandwidth considerations between these 3 locations.
> Note that remotes sync only the delta since the last sync, so
> bandwidth correlates with that delta churn. And as long as that churn
> stays below 50% of the size of a full backup, you still need less
> total bandwidth than the odd/even full-backup approach, at least
> averaged over time.
ohh... I think there's the misunderstanding: I wasn't talking about
odd/even FULL backups!
Right now I'm doing odd/even incremental backups! Incremental against
the last state of the backup server I'm backing up to (backing up what
changed in 2 days).
Best,
Tom
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-18 14:59 ` Tom Weber
@ 2020-07-18 18:07 ` Thomas Lamprecht
0 siblings, 0 replies; 47+ messages in thread
From: Thomas Lamprecht @ 2020-07-18 18:07 UTC (permalink / raw)
To: Proxmox VE user list, Tom Weber
On 18.07.20 16:59, Tom Weber wrote:
> On Friday, 17.07.2020, 19:43 +0200, Thomas Lamprecht wrote:
>> If a backup fails, or the last backup index we get doesn't match the
>> checksum we cache in the VM QEMU process, we drop the bitmap and read
>> everything (it's still sent incrementally against the index we got),
>> and set up a new bitmap from that point.
>
> ah, I think I'm starting to understand (read a bit about the qemu side
> too now) :)
>
> So you keep some checksum/signature of a successful backup run with
> the one (non-persistent) dirty bitmap in qemu.
> The next backup run can check this and only makes use of the bitmap if
> it matches; otherwise it falls back to reading all qemu blocks and
> comparing them against the ones in the backup, saving only the changed
> ones?
Exactly.
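Schematically, the client-side decision looks something like the sketch
below. This is a made-up illustration, not the actual QEMU/PBS code; all
names and types are invented:

    struct BitmapState {
        // checksum of the last successfully uploaded backup index;
        // updated after every successful run
        last_index_csum: Option<[u8; 32]>,
    }

    enum Plan {
        UseBitmap,        // read only blocks flagged dirty since last run
        FullReadThenSend, // read all blocks, upload only chunks the server lacks
    }

    fn plan_backup(state: &mut BitmapState, server_csum: [u8; 32]) -> Plan {
        match state.last_index_csum {
            // trust the bitmap only if the server still has the index we remember
            Some(csum) if csum == server_csum => Plan::UseBitmap,
            _ => {
                // drop the stale bitmap; a fresh one is tracked from this run on
                state.last_index_csum = None;
                Plan::FullReadThenSend
            }
        }
    }

Either way the upload stays incremental; what changes is only how much
the client has to read locally.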
> First of all, that backup scenario was not designed with block-level
> incremental backup, as PBS does it, in mind. I don't know yet if I'd
> do it like this for PBS. But it probably helps to understand why it
> raised the above question.
>
> If the same "area" of data changes every day, say 1GB, and I do
> incremental backups with about 10GB of space for them on 2 independent
> servers, then doing those incremental backups odd/even to the 2 backup
> servers gives me 20 days of history, whereas with 2 synchronized
> backup servers only 10 days of history are possible (one could also
> translate this into doubled backup space ;) ).
Yeah, if only the same disk blocks are touched, the math works out.
But you've doubled the risk of losing the most recent backup; that's
the price.
But you do you - I'd honestly just try it out and test around a bit to
see how it really behaves for your use case and setup, including its
limitations.
cheers,
Thomas
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 10:56 Martin Maurer
` (2 preceding siblings ...)
2020-07-10 12:03 ` Lindsay Mathieson
@ 2020-07-10 12:45 ` Iztok Gregori
2020-07-10 13:41 ` Dietmar Maurer
[not found] ` <a1e5f8dd-efd5-f8e2-50c1-683d42b0f61b@truelite.it>
` (3 subsequent siblings)
7 siblings, 1 reply; 47+ messages in thread
From: Iztok Gregori @ 2020-07-10 12:45 UTC (permalink / raw)
To: pve-user
On 10/07/20 12:56, Martin Maurer wrote:
> We are proud to announce the first beta release of our new Proxmox
> Backup Server.
>
Great to hear!
Are you planning to also support Ceph (or other distributed file
systems) as a destination storage backend?
Iztok Gregori
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 12:45 ` Iztok Gregori
@ 2020-07-10 13:41 ` Dietmar Maurer
2020-07-10 15:20 ` Iztok Gregori
0 siblings, 1 reply; 47+ messages in thread
From: Dietmar Maurer @ 2020-07-10 13:41 UTC (permalink / raw)
To: Proxmox VE user list
> Are you planning to also support Ceph (or other distributed file
> systems) as a destination storage backend?
It is already possible to put the datastore on a mounted CephFS, or on
anything you can mount on the host.
But this means that you copy data over the network multiple times,
so this is not the best option performance-wise...
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 13:41 ` Dietmar Maurer
@ 2020-07-10 15:20 ` Iztok Gregori
2020-07-10 15:31 ` Dietmar Maurer
0 siblings, 1 reply; 47+ messages in thread
From: Iztok Gregori @ 2020-07-10 15:20 UTC (permalink / raw)
To: pve-user
On 10/07/20 15:41, Dietmar Maurer wrote:
>> Are you planning to also support Ceph (or other distributed file
>> systems) as a destination storage backend?
>
> It is already possible to put the datastore on a mounted CephFS, or on
> anything you can mount on the host.
Is this "mount" managed by PBS, or do you have to mount it "manually"
outside PBS?
>
> But this means that you copy data over the network multiple times,
> so this is not the best option performance-wise...
True, PBS will act as a gateway to the backing storage cluster, but the
data will only be re-routed to the final destination (in this case an
OSD), not copied over (putting aside the Ceph replication policy). So
performance-wise you are limited by the bandwidth of the PBS network
interfaces (as you would be for a local network storage server) and by
the speed of the backing Ceph cluster. Maybe you will lose something in
raw performance (though depending on the Ceph cluster you could also
gain something), but you gain "easily" expandable storage space and no
single point of failure.
Thanks a lot for your work!
Iztok Gregori
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 15:20 ` Iztok Gregori
@ 2020-07-10 15:31 ` Dietmar Maurer
2020-07-10 16:29 ` Iztok Gregori
0 siblings, 1 reply; 47+ messages in thread
From: Dietmar Maurer @ 2020-07-10 15:31 UTC (permalink / raw)
To: Proxmox VE user list
> On 10/07/20 15:41, Dietmar Maurer wrote:
> >> Are you planning to also support Ceph (or other distributed file
> >> systems) as a destination storage backend?
> >
> > It is already possible to put the datastore on a mounted CephFS, or
> > on anything you can mount on the host.
>
> Is this "mount" managed by PBS, or do you have to mount it "manually"
> outside PBS?
Not sure what kind of management you need for that? Usually people
mount filesystems using /etc/fstab or by creating systemd mount units.
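For example, a CephFS mount via /etc/fstab plus a datastore on top could
look like this (monitors, user, and paths are placeholders, not a tested
recipe):

    # /etc/fstab - kernel CephFS mount on the PBS host
    192.168.1.21,192.168.1.22:/ /mnt/cephfs ceph name=backup,secretfile=/etc/ceph/backup.secret,noatime,_netdev 0 0

    # then point a datastore at a directory on that mount
    proxmox-backup-manager datastore create cephstore /mnt/cephfs/pbs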
> > But this means that you copy data over the network multiple times,
> > so this is not the best option performance-wise...
>
> True, PBS will act as a gateway to the backing storage cluster, but
> the data will only be re-routed to the final destination (in this case
> an OSD), not copied over (putting aside the Ceph replication policy).
That is probably a very simplistic view of things. It involves copying
data multiple times, so it will affect performance for sure.
Note: we are talking about huge amounts of data.
> So performance-wise you are limited by the bandwidth of the PBS
> network interfaces (as you would be for a local network storage
> server) and by the speed of the backing Ceph cluster. Maybe you will
> lose something in raw performance (though depending on the Ceph
> cluster you could also gain something), but you gain "easily"
> expandable storage space and no single point of failure.
Sure, that's true. Would be interesting to get some performance stats
for such a setup...
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 15:31 ` Dietmar Maurer
@ 2020-07-10 16:29 ` Iztok Gregori
2020-07-10 16:46 ` Dietmar Maurer
0 siblings, 1 reply; 47+ messages in thread
From: Iztok Gregori @ 2020-07-10 16:29 UTC (permalink / raw)
To: pve-user
On 10/07/20 17:31, Dietmar Maurer wrote:
>> On 10/07/20 15:41, Dietmar Maurer wrote:
>>>> Are you planning to also support Ceph (or other distributed file
>>>> systems) as a destination storage backend?
>>>
>>> It is already possible to put the datastore on a mounted CephFS, or
>>> on anything you can mount on the host.
>>
>> Is this "mount" managed by PBS, or do you have to mount it "manually"
>> outside PBS?
>
> Not sure what kind of management you need for that? Usually people
> mount filesystems using /etc/fstab or by creating systemd mount units.
In PVE you can add a storage (like NFS, for example) via the GUI (or
directly via the config file) and, if I'm not mistaken, PVE will then
"manage" the storage (mount it under /mnt/pve, not perform a backup if
the storage is not ready, and so on).
>
>>> But this means that you copy data over the network multiple times,
>>> so this is not the best option performance-wise...
>>
>> True, PBS will act as a gateway to the backing storage cluster, but
>> the data will only be re-routed to the final destination (in this
>> case an OSD), not copied over (putting aside the Ceph replication
>> policy).
>
> That is probably a very simplistic view of things. It involves
> copying data multiple times, so it will affect performance for sure.
The replication, you mean? Yes, it "copies"/distributes the same data to
multiple targets/disks (more or less as RAID or ZFS does). But I'm not
aware of the internals of PBS, so maybe my reasoning really is too
simplistic.
>
> Note: we are talking about huge amounts of data.
We back up 2TB of data daily with vzdump over NFS. Clearly, because all
of the backups are full backups, we need a lot of space to keep a
reasonable retention (8 daily backups + 3 weekly). I resorted to cycling
through 5 relatively huge NFS servers, but that involved a complex
backup schedule. And because the amount of data keeps growing, we are
looking for a backup solution that integrates with PVE and can be easily
expanded.
>
>> So performance-wise you are limited by the bandwidth of the PBS
>> network interfaces (as you would be for a local network storage
>> server) and by the speed of the backing Ceph cluster. Maybe you will
>> lose something in raw performance (though depending on the Ceph
>> cluster you could also gain something), but you gain "easily"
>> expandable storage space and no single point of failure.
>
> Sure, that's true. Would be interesting to get some performance stats
> for such a setup...
You mean performance stats about Ceph, or about PBS backed by CephFS?
For the latter we could try something in autumn, when some servers
become available.
Cheers
Iztok Gregori
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 16:29 ` Iztok Gregori
@ 2020-07-10 16:46 ` Dietmar Maurer
0 siblings, 0 replies; 47+ messages in thread
From: Dietmar Maurer @ 2020-07-10 16:46 UTC (permalink / raw)
To: Proxmox VE user list
> >> Is this "mount" managed by PBS, or do you have to mount it
> >> "manually" outside PBS?
> >
> > Not sure what kind of management you need for that? Usually people
> > mount filesystems using /etc/fstab or by creating systemd mount units.
>
> In PVE you can add a storage (like NFS, for example) via the GUI (or
> directly via the config file) and, if I'm not mistaken, PVE will then
> "manage" the storage (mount it under /mnt/pve, not perform a backup if
> the storage is not ready, and so on).
Ah, yes. We currently restrict ourselves to local disks (because of the
performance implications).
> >>> But this means that you copy data over the network multiple times,
> >>> so this is not the best option performance-wise...
> >>
> >> True, PBS will act as a gateway to the backing storage cluster, but
> >> the data will only be re-routed to the final destination (in this
> >> case an OSD), not copied over (putting aside the Ceph replication
> >> policy).
> >
> > That is probably a very simplistic view of things. It involves
> > copying data multiple times, so it will affect performance for sure.
>
> The replication, you mean? Yes, it "copies"/distributes the same data
> to multiple targets/disks (more or less as RAID or ZFS does). But I'm
> not aware of the internals of PBS, so maybe my reasoning really is too
> simplistic.
>
> >
> > Note: we are talking about huge amounts of data.
>
> We back up 2TB of data daily with vzdump over NFS. Clearly, because
> all of the backups are full backups, we need a lot of space to keep a
> reasonable retention (8 daily backups + 3 weekly). I resorted to
> cycling through 5 relatively huge NFS servers, but that involved a
> complex backup schedule. And because the amount of data keeps growing,
> we are looking for a backup solution that integrates with PVE and can
> be easily expanded.
I would start using Proxmox Backup Server the way it is designed to be
used: with a local ZFS storage pool for the backups. This is high
performance and future-proof.
To get redundancy, you can use a second backup server and sync the
backups. This also makes recovery much simpler, because there is no need
to get Ceph storage online first (always plan for recovery...).
But sure, you can also use CephFS if it meets your performance
requirements and you have enough network bandwidth.
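As a rough sketch (device and datastore names are made up, adapt them to
your hardware):

    # local ZFS pool on the backup server
    zpool create -o ashift=12 backup raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # create a datastore on top of it
    proxmox-backup-manager datastore create store1 /backup/store1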
^ permalink raw reply [flat|nested] 47+ messages in thread
[parent not found: <a1e5f8dd-efd5-f8e2-50c1-683d42b0f61b@truelite.it>]
* Re: [PVE-User] Proxmox Backup Server (beta)
[not found] ` <a1e5f8dd-efd5-f8e2-50c1-683d42b0f61b@truelite.it>
@ 2020-07-10 14:23 ` Thomas Lamprecht
0 siblings, 0 replies; 47+ messages in thread
From: Thomas Lamprecht @ 2020-07-10 14:23 UTC (permalink / raw)
To: Simone Piccardi, pve-user
On 10.07.20 16:01, Simone Piccardi wrote:
> On 10/07/20 12:56, Martin Maurer wrote:
>> We are proud to announce the first beta release of our new Proxmox Backup Server.
>>
>
>
> Thanks for the effort, that's very interesting.
> Two questions:
>
> 1. Having two independent Proxmox servers, can I install it on both to do cross backups?
You can add remotes to a Proxmox Backup Server, which can be synced
efficiently and also automatically on a set schedule.
And you can also use it as a target for multiple separate Proxmox VE
clusters, albeit some optimizations are still planned here:
https://pbs.proxmox.com/wiki/index.php/Roadmap
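For example, on the second server (host, credentials, fingerprint and
store names are placeholders; check proxmox-backup-manager help for the
exact syntax of your version):

    proxmox-backup-manager remote create pbs1 --host 192.168.1.10 \
        --userid sync@pbs --password 'secret' --fingerprint '64:d3:ff:...'
    proxmox-backup-manager sync-job create pbs1-pull --remote pbs1 \
        --remote-store store1 --store store1 --schedule daily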
> 2. There is an emphasis on ZFS support in the kernel, and the documentation has a chapter on managing it; it's not clear to me whether this is needed just for better performance, or whether I can also use an installation that has just LVM.
>
Effectively you can use whatever is supported on the system where
Proxmox Backup Server is installed; it just needs to be a filesystem.
The web interface of PBS also supports creating ext4- or XFS-backed
datastores besides ZFS.
We recommend ZFS mainly because it has built-in support for getting some
redundancy easily and can work with really huge datasets (hundreds of
TB), which makes it ideal for a future-proof backup server that hundreds
to thousands of hosts back up to.
If you're happier with another filesystem backing the datastore, you can
naturally use that :)
cheers,
Thomas
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 10:56 Martin Maurer
` (4 preceding siblings ...)
[not found] ` <a1e5f8dd-efd5-f8e2-50c1-683d42b0f61b@truelite.it>
@ 2020-07-10 15:59 ` Lindsay Mathieson
[not found] ` <mailman.86.1594396120.12071.pve-user@lists.proxmox.com>
2020-10-06 13:12 ` Lee Lists
7 siblings, 0 replies; 47+ messages in thread
From: Lindsay Mathieson @ 2020-07-10 15:59 UTC (permalink / raw)
To: pve-user
Have been reading through the PDF docs, concise and well written, thanks.
--
Lindsay
^ permalink raw reply [flat|nested] 47+ messages in thread
[parent not found: <mailman.86.1594396120.12071.pve-user@lists.proxmox.com>]
* Re: [PVE-User] Proxmox Backup Server (beta)
[not found] ` <mailman.86.1594396120.12071.pve-user@lists.proxmox.com>
@ 2020-07-10 16:32 ` Dietmar Maurer
0 siblings, 0 replies; 47+ messages in thread
From: Dietmar Maurer @ 2020-07-10 16:32 UTC (permalink / raw)
To: Proxmox VE user list
> I was planning on doing something like what's described below, and I'm wondering if PBS can do this.
>
> We have one hypervisor (we are very small).
>
> I have two ZFS storage pools:
> dpool - for running VMs
> bpool - for backing up our VMs
> Both are SAS-attached HDDs.
>
> Right now I use pve-zsync to back up the VMs to bpool on a 15-minute, daily, weekly, and monthly basis.
>
> I WANT to send the weekly snapshots to an offsite pool. I was going to use zfs send to do this.
>
> Can PBS back up to our local zfs pool,
that works
> and then sync to the remote server.
yes, if the remote site is a Proxmox Backup Server
> If so, does it use zfs send?
no.
> Finally, can I somehow move the snapshots that pve-zsync is currently creating to PBS? Versus destroying them and starting over again?
We currently do not have any tools for pve-zsync/proxmox-backup-server
interaction. So far, I thought those were completely different concepts...
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-07-10 10:56 Martin Maurer
` (6 preceding siblings ...)
[not found] ` <mailman.86.1594396120.12071.pve-user@lists.proxmox.com>
@ 2020-10-06 13:12 ` Lee Lists
2020-10-08 8:21 ` Thomas Lamprecht
7 siblings, 1 reply; 47+ messages in thread
From: Lee Lists @ 2020-10-06 13:12 UTC (permalink / raw)
To: Proxmox VE user list; +Cc: PVE User List, pve-devel, pbs-devel
Hi,
I'm trying to build proxmox backup server from source,
but the build failed while compiling the zstd lib bindings.
Any clues?
Thanks,
Jurgen
Fresh pxar v0.6.1 (/root/pxar)
Fresh proxmox-fuse v0.1.0 (/root/proxmox-fuse)
Fresh hyper v0.13.8
Compiling proxmox v0.4.2 (/root/proxmox/proxmox)
Fresh bindgen v0.49.4
Running `rustc --crate-name proxmox --edition=2018 /root/proxmox/proxmox/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debuginfo=2 --cfg 'feature="api-macro"' --cfg 'feature="cli"' --cfg 'feature="default"' --cfg 'feature="futures"' --cfg 'feature="hyper"' --cfg 'feature="openssl"' --cfg 'feature="proxmox-api-macro"' --cfg 'feature="proxmox-sortable-macro"' --cfg 'feature="router"' --cfg 'feature="sortable-macro"' --cfg 'feature="tokio"' --cfg 'feature="websocket"' -C metadata=494ea91d58d02b19 -C extra-filename=-494ea91d58d02b19 --out-dir /root/proxmox-backup/target/debug/deps -C incremental=/root/proxmox-backup/target/debug/incremental -L dependency=/root/proxmox-backup/target/debug/deps --extern anyhow=/root/proxmox-backup/target/debug/deps/libanyhow-547643689d8f1fe1.rmeta --extern base64=/root/proxmox-backup/target/debug/deps/libbase64-75b6df1cdb5dcedb.rmeta --extern bytes=/root/proxmox-backup/target/debug/deps/libbytes-de79ab6ad237b260.rmeta --extern endian_trait=/root/proxmox-backup/target/debug/deps/libendian_trait-8410c7a3f7fc6de5.rmeta --extern futures=/root/proxmox-backup/target/debug/deps/libfutures-68cbc13a6c4e5d08.rmeta --extern http=/root/proxmox-backup/target/debug/deps/libhttp-4f2085239d8db6c5.rmeta --extern hyper=/root/proxmox-backup/target/debug/deps/libhyper-33e2f11afaf2d6cd.rmeta --extern lazy_static=/root/proxmox-backup/target/debug/deps/liblazy_static-9441bed367485869.rmeta --extern libc=/root/proxmox-backup/target/debug/deps/liblibc-85afcfd6d5dd745a.rmeta --extern nix=/root/proxmox-backup/target/debug/deps/libnix-19850f768394dcc5.rmeta --extern openssl=/root/proxmox-backup/target/debug/deps/libopenssl-a45d25e9645a7846.rmeta --extern percent_encoding=/root/proxmox-backup/target/debug/deps/libpercent_encoding-00fe2006917413e4.rmeta --extern proxmox_api_macro=/root/proxmox-backup/target/debug/deps/libproxmox_api_macro-45b1df18057a8628.so --extern proxmox_sortable_macro=/root/proxmox-backup/target/debug/deps/libproxmox_sortable_macro-e0ce43c23fa4803c.so --extern regex=/root/proxmox-backup/target/debug/deps/libregex-37b4c1de7b101096.rmeta --extern rustyline=/root/proxmox-backup/target/debug/deps/librustyline-201c56bc71ec2bb7.rmeta --extern serde=/root/proxmox-backup/target/debug/deps/libserde-75724b33e89dcb58.rmeta --extern serde_derive=/root/proxmox-backup/target/debug/deps/libserde_derive-e6d2c9cdac5acf10.so --extern serde_json=/root/proxmox-backup/target/debug/deps/libserde_json-caf74f34e0a23558.rmeta --extern textwrap=/root/proxmox-backup/target/debug/deps/libtextwrap-8dccd2a72ee64e9e.rmeta --extern tokio=/root/proxmox-backup/target/debug/deps/libtokio-8c2cdd714cabf70e.rmeta --extern url=/root/proxmox-backup/target/debug/deps/liburl-0b6b3b5adf147475.rmeta`
Compiling zstd-sys v1.4.13+zstd.1.4.3
Running `/root/proxmox-backup/target/debug/build/zstd-sys-0efc8671c6ad61e7/build-script-build`
error: failed to run custom build command for `zstd-sys v1.4.13+zstd.1.4.3`
Caused by:
process didn't exit successfully: `/root/proxmox-backup/target/debug/build/zstd-sys-0efc8671c6ad61e7/build-script-build` (exit code: 101)
...
...
...
running: "ar" "crs" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/libzstd.a" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/common/debug.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/common/entropy_common.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/common/error_private.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/common/fse_decompress.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/common/pool.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/common/threading.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/common/xxhash.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/common/zstd_common.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/fse_compress.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/hist.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/huf_compress.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/zstd_compress.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/zstd_compress_literals.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/zstd_compress_sequences.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/zstd_double_fast.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/zstd_fast.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/zstd_lazy.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/zstd_ldm.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/zstd_opt.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/compress/zstdmt_compress.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/decompress/huf_decompress.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/decompress/zstd_ddict.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/decompress/zstd_decompress.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/decompress/zstd_decompress_block.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/legacy/zstd_v01.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/legacy/zstd_v02.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/legacy/zstd_v03.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/legacy/zstd_v04.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/legacy/zstd_v05.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/legacy/zstd_v06.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/legacy/zstd_v07.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/dictBuilder/cover.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/dictBuilder/divsufsort.o" 
"/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/dictBuilder/fastcover.o" "/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out/zstd/lib/dictBuilder/zdict.o"
exit code: 0
cargo:rustc-link-lib=static=zstd
cargo:rustc-link-search=native=/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out
cargo:root=/root/proxmox-backup/target/debug/build/zstd-sys-c18e259b2d671f1b/out
--- stderr
./zstd/lib/zstd.h:18:10: fatal error: 'stddef.h' file not found
./zstd/lib/zstd.h:18:10: fatal error: 'stddef.h' file not found, err: true
thread 'main' panicked at 'Unable to generate bindings: ()', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/zstd-sys-1.4.13+zstd.1.4.3/build.rs:33:40
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-10-06 13:12 ` Lee Lists
@ 2020-10-08 8:21 ` Thomas Lamprecht
2020-10-09 9:27 ` Lee Lists
0 siblings, 1 reply; 47+ messages in thread
From: Thomas Lamprecht @ 2020-10-08 8:21 UTC (permalink / raw)
To: Proxmox VE user list, Lee Lists; +Cc: PVE User List, pbs-devel, pve-devel
On 06.10.20 15:12, Lee Lists wrote:
> I'm trying to build proxmox backup server from source,
> but the build failed while compiling the zstd lib bindings.
>
> Any clues?
Some more hints about the build environment and the executed steps would
be great.
Are all build dependencies installed? This error sometimes comes up if
clang isn't correctly installed.
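On Debian-based systems something like this usually covers it (package
names may differ per release):

    apt update
    apt install build-essential clang libclang-dev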
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PVE-User] Proxmox Backup Server (beta)
2020-10-08 8:21 ` Thomas Lamprecht
@ 2020-10-09 9:27 ` Lee Lists
0 siblings, 0 replies; 47+ messages in thread
From: Lee Lists @ 2020-10-09 9:27 UTC (permalink / raw)
To: Thomas Lamprecht
Cc: Proxmox VE user list, PVE User List, pbs-devel, pve-devel
Hi Thomas,
Thank you, it was indeed a missing clang installation.
I finally managed to compile PBS on Armbian / aarch64 (https://kobol.io/).
First tests give good results on this rk3399 platform.
┌───────────────────────────────────┬───────────────────┐
│ Name │ Value │
╞═══════════════════════════════════╪═══════════════════╡
│ TLS (maximal backup upload speed) │ not tested │
├───────────────────────────────────┼───────────────────┤
│ SHA256 checksum computation speed │ 885.79 MB/s (44%) │
├───────────────────────────────────┼───────────────────┤
│ ZStd level 1 compression speed │ 139.33 MB/s (19%) │
├───────────────────────────────────┼───────────────────┤
│ ZStd level 1 decompression speed │ 326.64 MB/s (27%) │
├───────────────────────────────────┼───────────────────┤
│ Chunk verification speed │ 271.91 MB/s (36%) │
├───────────────────────────────────┼───────────────────┤
│ AES256 GCM encryption speed │ 561.27 MB/s (15%) │
└───────────────────────────────────┴───────────────────┘
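For reference, that table is the output of proxmox-backup-client
benchmark; as far as I can tell, the percentages compare against
Proxmox's reference hardware. The TLS row needs a reachable datastore,
e.g. (repository name is a placeholder):

    proxmox-backup-client benchmark
    proxmox-backup-client benchmark --repository backup@pbs@localhost:store1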
Regards,
Lee
----- Original Message -----
From: "Thomas Lamprecht" <t.lamprecht@proxmox.com>
To: "Proxmox VE user list" <pve-user@lists.proxmox.com>, "Lee Lists" <lists@jave.fr>
Cc: "PVE User List" <pve-user@pve.proxmox.com>, "pbs-devel" <pbs-devel@lists.proxmox.com>, "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Thursday, 8 October 2020 10:21:47
Subject: Re: [PVE-User] Proxmox Backup Server (beta)
On 06.10.20 15:12, Lee Lists wrote:
> I'm trying to build proxmox backup server from source,
> but the build failed while compiling the zstd lib bindings.
>
> Any clues?
Some more hints about the build environment and the executed steps would
be great.
Are all build dependencies installed? This error sometimes comes up if
clang isn't correctly installed.
^ permalink raw reply [flat|nested] 47+ messages in thread
end of thread, other threads:[~2020-10-09 12:10 UTC | newest]
Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-09 12:10 [PVE-User] Proxmox Backup Server (beta) Lee Lists
-- strict thread matches above, loose matches on Subject: below --
2020-07-10 10:56 Martin Maurer
2020-07-10 11:42 ` Roland
2020-07-10 12:09 ` Dietmar Maurer
2020-07-10 12:24 ` Roland
2020-07-10 13:43 ` Thomas Lamprecht
2020-07-10 14:06 ` Roland
2020-07-10 14:15 ` Thomas Lamprecht
2020-07-10 14:46 ` Roland
2020-07-10 17:31 ` Roland
2020-07-10 13:44 ` Dietmar Maurer
[not found] ` <mailman.77.1594381090.12071.pve-user@lists.proxmox.com>
2020-07-10 11:45 ` Dietmar Maurer
[not found] ` <a92c7f1d-f492-2d43-00b9-15bdb0e805ec@binovo.es>
2020-07-10 13:50 ` Thomas Lamprecht
2020-07-10 12:03 ` Lindsay Mathieson
2020-07-10 12:13 ` Dietmar Maurer
2020-07-10 15:41 ` Dietmar Maurer
2020-07-11 11:03 ` mj
2020-07-11 11:38 ` Thomas Lamprecht
2020-07-11 13:34 ` mj
2020-07-11 13:47 ` Thomas Lamprecht
2020-07-11 14:40 ` Dietmar Maurer
2020-07-14 14:30 ` Alexandre DERUMIER
2020-07-14 15:52 ` Thomas Lamprecht
2020-07-14 21:17 ` Alexandre DERUMIER
2020-07-15 4:52 ` Thomas Lamprecht
[not found] ` <176392164.4390.1594849016963.JavaMail.zimbra@numberall.com>
2020-07-16 7:33 ` Thomas Lamprecht
[not found] ` <mailman.204.1594849027.12071.pve-user@lists.proxmox.com>
2020-07-16 10:17 ` Wolfgang Bumiller
2020-07-16 14:36 ` Mark Schouten
2020-07-16 17:04 ` Thomas Lamprecht
2020-07-16 13:03 ` Tom Weber
2020-07-17 7:31 ` Fabian Grünbichler
2020-07-17 13:23 ` Tom Weber
2020-07-17 17:43 ` Thomas Lamprecht
2020-07-18 14:59 ` Tom Weber
2020-07-18 18:07 ` Thomas Lamprecht
2020-07-10 12:45 ` Iztok Gregori
2020-07-10 13:41 ` Dietmar Maurer
2020-07-10 15:20 ` Iztok Gregori
2020-07-10 15:31 ` Dietmar Maurer
2020-07-10 16:29 ` Iztok Gregori
2020-07-10 16:46 ` Dietmar Maurer
[not found] ` <a1e5f8dd-efd5-f8e2-50c1-683d42b0f61b@truelite.it>
2020-07-10 14:23 ` Thomas Lamprecht
2020-07-10 15:59 ` Lindsay Mathieson
[not found] ` <mailman.86.1594396120.12071.pve-user@lists.proxmox.com>
2020-07-10 16:32 ` Dietmar Maurer
2020-10-06 13:12 ` Lee Lists
2020-10-08 8:21 ` Thomas Lamprecht
2020-10-09 9:27 ` Lee Lists