* [PVE-User] Backup broken?
From: Bernhard Dick @ 2020-10-31 14:11 UTC
To: pve-user

Hi,

I have a weekly backup job on my Proxmox (community version 6.2) infrastructure. Yesterday I updated to the latest available package versions, and during the night the backup job ran. But it decided to first create a backup and then delete _all_ backups on the backup location, which is now empty apart from some log files. Here is an excerpt of one of the mailed logs:

101: 2020-10-31 02:04:31 INFO: creating vzdump archive '/mnt/pve/backup-fsn0/dump/vzdump-lxc-101-2020_10_31-02_04_30.tar.lzo'
101: 2020-10-31 02:05:15 INFO: Total bytes written: 838041600 (800MiB, 19MiB/s)
101: 2020-10-31 02:05:16 INFO: archive file size: 383MB
101: 2020-10-31 02:05:16 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_14-00_57_00.tar.zst'
101: 2020-10-31 02:05:16 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_16-19_20_41.tar.lzo'
101: 2020-10-31 02:05:16 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_17-02_10_31.tar.lzo'
101: 2020-10-31 02:05:17 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_24-02_03_29.tar.lzo'
101: 2020-10-31 02:05:25 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_31-02_04_30.tar.lzo'
101: 2020-10-31 02:05:25 INFO: remove vzdump snapshot
101: 2020-10-31 02:05:26 INFO: Finished Backup of VM 101 (00:00:56)

Is it possible that I've triggered a very evil bug here?

Best regards
  Bernhard
* Re: [PVE-User] Backup broken?
From: Thomas Lamprecht @ 2020-11-01 12:48 UTC
To: Proxmox VE user list, Bernhard Dick

Hi,

On 31.10.20 15:11, Bernhard Dick wrote:
> I have a weekly backup job on my Proxmox (community version 6.2) infrastructure. Yesterday I updated to the latest available package versions, and during the night the backup job ran. But it decided to first create a backup and then delete _all_ backups on the backup location, which is now empty apart from some log files.
> Here is an excerpt of one of the mailed logs:
>
> 101: 2020-10-31 02:04:31 INFO: creating vzdump archive '/mnt/pve/backup-fsn0/dump/vzdump-lxc-101-2020_10_31-02_04_30.tar.lzo'
> 101: 2020-10-31 02:05:15 INFO: Total bytes written: 838041600 (800MiB, 19MiB/s)
> 101: 2020-10-31 02:05:16 INFO: archive file size: 383MB
> 101: 2020-10-31 02:05:16 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_14-00_57_00.tar.zst'
> 101: 2020-10-31 02:05:16 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_16-19_20_41.tar.lzo'
> 101: 2020-10-31 02:05:16 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_17-02_10_31.tar.lzo'
> 101: 2020-10-31 02:05:17 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_24-02_03_29.tar.lzo'
> 101: 2020-10-31 02:05:25 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_31-02_04_30.tar.lzo'
> 101: 2020-10-31 02:05:25 INFO: remove vzdump snapshot
> 101: 2020-10-31 02:05:26 INFO: Finished Backup of VM 101 (00:00:56)
>

It'd be great if you could provide the full task log, not only an excerpt.

> Is it possible that I've triggered a very evil bug here?
>

Can you please post the storage config and the backup job config? I.e.,

# cat /etc/pve/storage.cfg
# cat /etc/pve/vzdump.cron

thanks!
- Thomas
* Re: [PVE-User] Backup broken?
From: Bernhard Dick @ 2020-11-01 13:14 UTC
To: Proxmox VE user list

Hi,

Am 01.11.2020 um 13:48 schrieb Thomas Lamprecht:
> Hi,
>
> On 31.10.20 15:11, Bernhard Dick wrote:
>> I have a weekly backup job on my Proxmox (community version 6.2) infrastructure. Yesterday I updated to the latest available package versions, and during the night the backup job ran. But it decided to first create a backup and then delete _all_ backups on the backup location, which is now empty apart from some log files.
>> Here is an excerpt of one of the mailed logs:
>>
>> [..]
>
> It'd be great if you could provide the full task log, not only an excerpt.

Here is a full log of two systems (one container, one VM) from the job:

vzdump --quiet 1 --mailnotification always --mailto bernhard@bdick.de --compress lzo --mode snapshot --storage backup-fsn0 --node he2 --all 1

101: 2020-10-31 20:43:26 INFO: Starting Backup of VM 101 (lxc)
101: 2020-10-31 20:43:26 INFO: status = running
101: 2020-10-31 20:43:26 INFO: CT Name: ns0.bdick.de
101: 2020-10-31 20:43:26 INFO: including mount point rootfs ('/') in backup
101: 2020-10-31 20:43:26 INFO: backup mode: snapshot
101: 2020-10-31 20:43:26 INFO: ionice priority: 7
101: 2020-10-31 20:43:26 INFO: create storage snapshot 'vzdump'
101: 2020-10-31 20:43:27 INFO: creating vzdump archive '/mnt/pve/backup-fsn0/dump/vzdump-lxc-101-2020_10_31-20_43_26.tar.lzo'
101: 2020-10-31 20:44:05 INFO: Total bytes written: 838082560 (800MiB, 21MiB/s)
101: 2020-10-31 20:44:06 INFO: archive file size: 383MB
101: 2020-10-31 20:44:06 INFO: removing backup 'backup-fsn0:backup/vzdump-lxc-101-2020_10_31-20_43_26.tar.lzo'
101: 2020-10-31 20:44:06 INFO: remove vzdump snapshot
101: 2020-10-31 20:44:06 INFO: Finished Backup of VM 101 (00:00:40)
102: 2020-10-31 20:44:06 INFO: Starting Backup of VM 102 (qemu)
102: 2020-10-31 20:44:06 INFO: status = running
102: 2020-10-31 20:44:06 INFO: VM Name: db.bdick.de
102: 2020-10-31 20:44:06 INFO: include disk 'sata0' 'local-zfs:vm-102-disk-0' 8G
102: 2020-10-31 20:44:06 INFO: backup mode: snapshot
102: 2020-10-31 20:44:06 INFO: ionice priority: 7
102: 2020-10-31 20:44:07 INFO: creating vzdump archive '/mnt/pve/backup-fsn0/dump/vzdump-qemu-102-2020_10_31-20_44_06.vma.lzo'
102: 2020-10-31 20:44:07 INFO: started backup task 'edde0ee1-047d-49a4-8b2d-22da79c5b540'
102: 2020-10-31 20:44:07 INFO: resuming VM again
102: 2020-10-31 20:44:10 INFO: 9% (738.9 MiB of 8.0 GiB) in 3s, read: 246.3 MiB/s, write: 45.4 MiB/s
102: 2020-10-31 20:44:13 INFO: 12% (1.0 GiB of 8.0 GiB) in 6s, read: 102.4 MiB/s, write: 89.2 MiB/s
102: 2020-10-31 20:44:16 INFO: 16% (1.3 GiB of 8.0 GiB) in 9s, read: 96.8 MiB/s, write: 82.3 MiB/s
102: 2020-10-31 20:44:19 INFO: 17% (1.4 GiB of 8.0 GiB) in 12s, read: 35.8 MiB/s, write: 26.7 MiB/s
102: 2020-10-31 20:44:22 INFO: 21% (1.7 GiB of 8.0 GiB) in 15s, read: 115.6 MiB/s, write: 77.3 MiB/s
102: 2020-10-31 20:44:25 INFO: 24% (2.0 GiB of 8.0 GiB) in 18s, read: 72.1 MiB/s, write: 71.7 MiB/s
102: 2020-10-31 20:44:28 INFO: 26% (2.1 GiB of 8.0 GiB) in 21s, read: 53.6 MiB/s, write: 53.6 MiB/s
102: 2020-10-31 20:44:31 INFO: 27% (2.2 GiB of 8.0 GiB) in 24s, read: 16.1 MiB/s, write: 16.1 MiB/s
102: 2020-10-31 20:44:34 INFO: 29% (2.4 GiB of 8.0 GiB) in 27s, read: 78.6 MiB/s, write: 73.4 MiB/s
102: 2020-10-31 20:44:37 INFO: 33% (2.7 GiB of 8.0 GiB) in 30s, read: 102.6 MiB/s, write: 71.5 MiB/s
102: 2020-10-31 20:44:40 INFO: 36% (2.9 GiB of 8.0 GiB) in 33s, read: 80.7 MiB/s, write: 80.6 MiB/s
102: 2020-10-31 20:44:43 INFO: 39% (3.1 GiB of 8.0 GiB) in 36s, read: 71.3 MiB/s, write: 71.3 MiB/s
102: 2020-10-31 20:44:46 INFO: 41% (3.3 GiB of 8.0 GiB) in 39s, read: 66.5 MiB/s, write: 66.5 MiB/s
102: 2020-10-31 20:44:49 INFO: 44% (3.5 GiB of 8.0 GiB) in 42s, read: 72.2 MiB/s, write: 72.2 MiB/s
102: 2020-10-31 20:44:52 INFO: 47% (3.8 GiB of 8.0 GiB) in 45s, read: 97.6 MiB/s, write: 97.3 MiB/s
102: 2020-10-31 20:44:55 INFO: 52% (4.2 GiB of 8.0 GiB) in 48s, read: 116.8 MiB/s, write: 55.6 MiB/s
102: 2020-10-31 20:44:58 INFO: 89% (7.2 GiB of 8.0 GiB) in 51s, read: 1023.2 MiB/s, write: 0 B/s
102: 2020-10-31 20:44:59 INFO: 100% (8.0 GiB of 8.0 GiB) in 52s, read: 846.9 MiB/s, write: 0 B/s
102: 2020-10-31 20:44:59 INFO: backup is sparse: 4.92 GiB (61%) total zero data
102: 2020-10-31 20:44:59 INFO: transferred 8.00 GiB in 52 seconds (157.5 MiB/s)
102: 2020-10-31 20:44:59 INFO: archive file size: 1.40GB
102: 2020-10-31 20:44:59 INFO: removing backup 'backup-fsn0:backup/vzdump-qemu-102-2020_10_31-20_44_06.vma.lzo'
102: 2020-10-31 20:44:59 INFO: Finished Backup of VM 102 (00:00:53)

>
>> Is it possible that I've triggered a very evil bug here?
>>
>
> Can you please post the storage config and the backup job config? I.e.,
>
> # cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

cifs: backup-fsn0
        path /mnt/pve/backup-fsn0
        server uXXXXXX-sub1.your-storagebox.de
        share uXXXXXX-sub1
        content backup
        maxfiles 0
        username uXXXXXX-sub1

>
> # cat /etc/pve/vzdump.cron

# cluster wide vzdump cron schedule
# Automatically generated file - do not edit
PATH="/usr/sbin:/usr/bin:/sbin:/bin"

0 2 * * 6 root vzdump --mailto bernhard@bdick.de --all 1 --quiet 1 --compress lzo --mode snapshot --mailnotification always --storage backup-fsn0

I can even reproduce this behaviour by triggering the global backup job from the web console. If I back up single VMs/containers from the host part of the web console it runs fine; the global job, however, also removes those backups while it is running.

Regards,
  Bernhard

>
> thanks!
> - Thomas
* Re: [PVE-User] Backup broken?
From: Thomas Lamprecht @ 2020-11-01 19:48 UTC
To: Proxmox VE user list, Bernhard Dick

On 01.11.20 14:14, Bernhard Dick wrote:
> I can even reproduce this behaviour by triggering the global backup job from the web console. If I back up single VMs/containers from the host part of the web console it runs fine; the global job, however, also removes those backups while it is running.

Yes, there was a regression with this when adopting the newer prune "keep-daily", "keep-weekly", ... logic. It acts quite differently internally, and the storage special case for maxfiles==0 was handled rather implicitly, so this did not ring any alarm bells. I transformed it into a more explicit logic, and we'll add some more extensive tests for this special case so that it won't happen again.

The fix is packaged in pve-manager version 6.2-15, currently available on pvetest. You can either add the pvetest repository[0], run `apt update && apt install pve-manager`, then drop the test repo again, or manually download and install the package; when using `apt install`, the package is still checked for validity (i.e., signed by a trusted key):

# wget http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/pve-manager_6.2-15_amd64.deb
# apt install ./pve-manager_6.2-15_amd64.deb

thanks for your report!

regards,
Thomas

[0]: https://pve.proxmox.com/wiki/Package_Repositories
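The regression Thomas describes is easiest to see with a small sketch. The following Python pseudocode is not the actual pve-manager implementation (which is written in Perl); the function names and the "keep-all"/"keep-last" option keys are invented here purely for illustration. The point is that the legacy setting maxfiles==0 traditionally means "keep an unlimited number of backups", so it needs an explicit special case when translated into count-based keep-* prune options, where a value of 0 would otherwise mean "keep none":

    # Illustrative sketch only; names and structure are hypothetical,
    # not the real pve-manager (Perl) code.

    def prune_options_from_maxfiles(maxfiles):
        """Translate the legacy 'maxfiles' storage setting into keep-* options."""
        if maxfiles == 0:
            # Explicit special case: 0 traditionally means "unlimited",
            # so no pruning may happen at all.
            return {"keep-all": True}
        # Otherwise keep only the N most recent backups.
        return {"keep-last": maxfiles}

    def select_backups_to_remove(backups, keep_opts):
        """Return the backups that would be pruned (newest are kept first)."""
        if keep_opts.get("keep-all"):
            return []
        keep_last = keep_opts.get("keep-last", 0)
        ordered = sorted(backups, key=lambda b: b["ctime"], reverse=True)
        return ordered[keep_last:]

    if __name__ == "__main__":
        backups = [{"volid": f"vzdump-lxc-101-{i}", "ctime": i} for i in range(5)]
        # maxfiles 0, as in the storage.cfg above: nothing should be removed.
        print(select_backups_to_remove(backups, prune_options_from_maxfiles(0)))  # []
        # A naive implicit mapping of maxfiles straight to keep-last would
        # instead select every backup for removal:
        print(len(select_backups_to_remove(backups, {"keep-last": 0})))  # 5

With the implicit mapping, maxfiles 0 effectively becomes "keep-last: 0", every backup is selected for removal, and the job deletes even the archive it has just written, which matches the "removing backup ..." lines in the logs above.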
* Re: [PVE-User] Backup broken?
From: Thomas Lamprecht @ 2020-11-01 20:17 UTC
To: Proxmox VE user list, Bernhard Dick

On 01.11.20 20:48, Thomas Lamprecht wrote:
> when using `apt install`, the package is still checked for validity (i.e., signed by a trusted key):
>
> # wget http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/pve-manager_6.2-15_amd64.deb
> # apt install ./pve-manager_6.2-15_amd64.deb

That said, I have a feeling I was confused here (it's late Sunday, after all), so to be safe, check the SHA256 sum:

# sha256sum pve-manager_6.2-15_amd64.deb
3b5a7377406ae9d85757a5ac31d9e2688a87acf878d6bc0ccde2e5475cffa651  pve-manager_6.2-15_amd64.deb

cheers,
Thomas
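If you would rather script that check than eyeball the output, a minimal sketch (plain Python using only the standard library, assuming the .deb has already been downloaded into the current directory, and using the digest posted above) could look like this:

    # Minimal sketch: verify the downloaded package against the posted SHA256 digest.
    import hashlib
    import sys

    EXPECTED = "3b5a7377406ae9d85757a5ac31d9e2688a87acf878d6bc0ccde2e5475cffa651"
    PATH = "pve-manager_6.2-15_amd64.deb"

    with open(PATH, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    if digest != EXPECTED:
        sys.exit(f"checksum mismatch: got {digest}")
    print("checksum OK")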
* Re: [PVE-User] Backup broken?
From: Bernhard Dick @ 2020-11-02 9:17 UTC
To: Thomas Lamprecht, Proxmox VE user list

Hi,

Am 01.11.2020 um 20:48 schrieb Thomas Lamprecht:
> On 01.11.20 14:14, Bernhard Dick wrote:
>> I can even reproduce this behaviour by triggering the global backup job from the web console. If I back up single VMs/containers from the host part of the web console it runs fine; the global job, however, also removes those backups while it is running.
>
> Yes, there was a regression with this when adopting the newer prune "keep-daily", "keep-weekly", ... logic. It acts quite differently internally, and the storage special case for maxfiles==0 was handled rather implicitly, so this did not ring any alarm bells. I transformed it into a more explicit logic, and we'll add some more extensive tests for this special case so that it won't happen again.
>
> The fix is packaged in pve-manager version 6.2-15, currently available on pvetest. You can either add the pvetest repository[0], run `apt update && apt install pve-manager`, then drop the test repo again, or manually download and install the package; when using `apt install`, the package is still checked for validity (i.e., signed by a trusted key):
>
> # wget http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/pve-manager_6.2-15_amd64.deb
> # apt install ./pve-manager_6.2-15_amd64.deb

I tried it now and it works as expected, so thanks for fixing this so quickly.

Regards
  Bernhard

> thanks for your report!
>
> regards,
> Thomas
>
> [0]: https://pve.proxmox.com/wiki/Package_Repositories