* [pbs-devel] [PATCH proxmox-backup 1/6] .gitignore: add build directory
2020-07-21 13:03 [pbs-devel] [PATCH proxmox-backup 0/6] various sync fixes/improvements Fabian Grünbichler
@ 2020-07-21 13:03 ` Fabian Grünbichler
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 2/6] fix #2860: skip in-progress snapshots when syncing Fabian Grünbichler
` (5 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Fabian Grünbichler @ 2020-07-21 13:03 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
.gitignore | 1 +
1 file changed, 1 insertion(+)
diff --git a/.gitignore b/.gitignore
index 17ee61cb..e9ee6784 100644
--- a/.gitignore
+++ b/.gitignore
@@ -3,3 +3,4 @@ local.mak
**/*.rs.bk
/etc/proxmox-backup.service
/etc/proxmox-backup-proxy.service
+build/
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 2/6] fix #2860: skip in-progress snapshots when syncing
2020-07-21 13:03 [pbs-devel] [PATCH proxmox-backup 0/6] various sync fixes/improvements Fabian Grünbichler
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 1/6] .gitignore: add build directory Fabian Grünbichler
@ 2020-07-21 13:03 ` Fabian Grünbichler
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 3/6] api: translate ENOTFOUND to 404 for downloads Fabian Grünbichler
` (4 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Fabian Grünbichler @ 2020-07-21 13:03 UTC (permalink / raw)
To: pbs-devel
they are not finished yet and thus have no final manifest, so they can't be
synced either.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/client/pull.rs | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/src/client/pull.rs b/src/client/pull.rs
index 5cf0dd1f..bbe01969 100644
--- a/src/client/pull.rs
+++ b/src/client/pull.rs
@@ -6,7 +6,6 @@ use std::convert::TryFrom;
use std::sync::Arc;
use std::collections::HashMap;
use std::io::{Seek, SeekFrom};
-use chrono::{Utc, TimeZone};
use crate::server::{WorkerTask};
use crate::backup::*;
@@ -302,7 +301,16 @@ pub async fn pull_group(
let mut remote_snapshots = std::collections::HashSet::new();
for item in list {
- let backup_time = Utc.timestamp(item.backup_time, 0);
+ let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time);
+
+ // in-progress backups can't be synced
+ if let None = item.size {
+ worker.log(format!("skipping snapshot {} - in-progress backup", snapshot));
+ continue;
+ }
+
+ let backup_time = snapshot.backup_time();
+
remote_snapshots.insert(backup_time);
if let Some(last_sync_time) = last_sync {
@@ -319,14 +327,12 @@ pub async fn pull_group(
new_client,
None,
src_repo.store(),
- &item.backup_type,
- &item.backup_id,
+ snapshot.group().backup_type(),
+ snapshot.group().backup_id(),
backup_time,
true,
).await?;
- let snapshot = BackupDir::new(item.backup_type, item.backup_id, item.backup_time);
-
pull_snapshot_from(worker, reader, tgt_store.clone(), &snapshot).await?;
}
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 3/6] api: translate ENOTFOUND to 404 for downloads
2020-07-21 13:03 [pbs-devel] [PATCH proxmox-backup 0/6] various sync fixes/improvements Fabian Grünbichler
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 1/6] .gitignore: add build directory Fabian Grünbichler
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 2/6] fix #2860: skip in-progress snapshots when syncing Fabian Grünbichler
@ 2020-07-21 13:03 ` Fabian Grünbichler
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 4/6] fix #2865: detect and skip vanished snapshots Fabian Grünbichler
` (3 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Fabian Grünbichler @ 2020-07-21 13:03 UTC (permalink / raw)
To: pbs-devel
and percolate the HttpError back up on the client side
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/api2/helpers.rs | 7 ++++++-
src/client/http_client.rs | 5 +++--
2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/src/api2/helpers.rs b/src/api2/helpers.rs
index 9dd4ecba..a3c68087 100644
--- a/src/api2/helpers.rs
+++ b/src/api2/helpers.rs
@@ -6,7 +6,12 @@ use proxmox::http_err;
pub async fn create_download_response(path: PathBuf) -> Result<Response<Body>, Error> {
let file = tokio::fs::File::open(path.clone())
- .map_err(move |err| http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path.clone(), err)))
+ .map_err(move |err| {
+ match err.kind() {
+ std::io::ErrorKind::NotFound => http_err!(NOT_FOUND, format!("open file {:?} failed - not found", path.clone())),
+ _ => http_err!(BAD_REQUEST, format!("open file {:?} failed: {}", path.clone(), err)),
+ }
+ })
.await?;
let payload = tokio_util::codec::FramedRead::new(file, tokio_util::codec::BytesCodec::new())
diff --git a/src/client/http_client.rs b/src/client/http_client.rs
index 3886c40f..a930ea56 100644
--- a/src/client/http_client.rs
+++ b/src/client/http_client.rs
@@ -16,6 +16,7 @@ use percent_encoding::percent_encode;
use xdg::BaseDirectories;
use proxmox::{
+ api::error::HttpError,
sys::linux::tty,
tools::{
fs::{file_get_json, replace_file, CreateOptions},
@@ -606,7 +607,7 @@ impl HttpClient {
Ok(value)
}
} else {
- bail!("HTTP Error {}: {}", status, text);
+ Err(Error::from(HttpError::new(status, text)))
}
}
@@ -819,7 +820,7 @@ impl H2Client {
bail!("got result without data property");
}
} else {
- bail!("HTTP Error {}: {}", status, text);
+ Err(Error::from(HttpError::new(status, text)))
}
}
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 4/6] fix #2865: detect and skip vanished snapshots
2020-07-21 13:03 [pbs-devel] [PATCH proxmox-backup 0/6] various sync fixes/improvements Fabian Grünbichler
` (2 preceding siblings ...)
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 3/6] api: translate ENOTFOUND to 404 for downloads Fabian Grünbichler
@ 2020-07-21 13:03 ` Fabian Grünbichler
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 5/6] fix #2871: close FDs when scanning backup group Fabian Grünbichler
` (2 subsequent siblings)
6 siblings, 0 replies; 9+ messages in thread
From: Fabian Grünbichler @ 2020-07-21 13:03 UTC (permalink / raw)
To: pbs-devel
i.e. skip them also when they have been removed/forgotten since we retrieved
the snapshot list for the backup group currently being synced.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
there is probably a more elegant way to do this?
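On the "more elegant" question: the downcast itself seems hard to avoid, but
the nested matches could be folded into a small helper. A non-authoritative
sketch, assuming the same anyhow Error/bail! and proxmox::api::error::HttpError
(with public `code`/`message` fields) that the patch already imports; the
helper name is made up:

    use anyhow::{bail, Error};
    use proxmox::api::error::{HttpError, StatusCode};

    /// Ok(true) means "snapshot vanished (HTTP 404), skip it".
    fn check_vanished(err: Error) -> Result<bool, Error> {
        match err.downcast_ref::<HttpError>() {
            Some(HttpError { code, .. }) if *code == StatusCode::NOT_FOUND => Ok(true),
            Some(HttpError { code, message }) => bail!("HTTP error {} - {}", code, message),
            None => Err(err),
        }
    }

The call-site in pull_snapshot() would then map Ok(true) to the "vanished
since start of sync" log-and-return path and bubble everything else up.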
src/client/pull.rs | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)
diff --git a/src/client/pull.rs b/src/client/pull.rs
index bbe01969..421260a3 100644
--- a/src/client/pull.rs
+++ b/src/client/pull.rs
@@ -7,6 +7,7 @@ use std::sync::Arc;
use std::collections::HashMap;
use std::io::{Seek, SeekFrom};
+use proxmox::api::error::{StatusCode, HttpError};
use crate::server::{WorkerTask};
use crate::backup::*;
use crate::api2::types::*;
@@ -151,7 +152,28 @@ async fn pull_snapshot(
let mut tmp_manifest_name = manifest_name.clone();
tmp_manifest_name.set_extension("tmp");
- let mut tmp_manifest_file = download_manifest(&reader, &tmp_manifest_name).await?;
+ let download_res = download_manifest(&reader, &tmp_manifest_name).await;
+ let mut tmp_manifest_file = match download_res {
+ Ok(manifest_file) => manifest_file,
+ Err(err) => {
+ match err.downcast_ref::<HttpError>() {
+ Some(HttpError { code, message }) => {
+ match code {
+ &StatusCode::NOT_FOUND => {
+ worker.log(format!("skipping snapshot {} - vanished since start of sync", snapshot));
+ return Ok(());
+ },
+ _ => {
+ bail!("HTTP error {} - {}", code, message);
+ },
+ }
+ },
+ None => {
+ return Err(err);
+ },
+ };
+ },
+ };
let tmp_manifest_blob = DataBlob::load(&mut tmp_manifest_file)?;
tmp_manifest_blob.verify_crc()?;
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 5/6] fix #2871: close FDs when scanning backup group
2020-07-21 13:03 [pbs-devel] [PATCH proxmox-backup 0/6] various sync fixes/improvements Fabian Grünbichler
` (3 preceding siblings ...)
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 4/6] fix #2865: detect and skip vanished snapshots Fabian Grünbichler
@ 2020-07-21 13:03 ` Fabian Grünbichler
2020-07-21 13:03 ` [pbs-devel] [RFC proxmox-backup 6/6] sync: improve log output Fabian Grünbichler
2020-07-22 7:32 ` [pbs-devel] partially-applied: [PATCH proxmox-backup 0/6] various sync fixes/improvements Thomas Lamprecht
6 siblings, 0 replies; 9+ messages in thread
From: Fabian Grünbichler @ 2020-07-21 13:03 UTC (permalink / raw)
To: pbs-devel
otherwise we leak those descriptors and run into EMFILE when a backup
group contains many snapshots.
fcntl::openat and Dir::openat are not the same ;)
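For reference, a minimal sketch of that difference, assuming the RawFd-based
nix API used in the tree at this point (the probed path is illustrative):

    use std::os::unix::io::RawFd;

    use nix::dir::Dir;
    use nix::fcntl::{openat, OFlag};
    use nix::sys::stat::Mode;
    use nix::unistd::close;

    fn probe(dir_fd: RawFd) -> nix::Result<()> {
        // fcntl::openat() hands back a plain RawFd - nothing closes it for
        // us, so every call without a matching close() leaks one descriptor
        let raw_fd = openat(dir_fd, "index.json.blob", OFlag::O_RDONLY, Mode::empty())?;
        close(raw_fd)?;

        // dir::Dir::openat() returns an owning handle instead - its
        // descriptor is closed automatically when the Dir is dropped
        let _subdir = Dir::openat(dir_fd, ".", OFlag::O_RDONLY | OFlag::O_DIRECTORY, Mode::empty())?;

        Ok(())
    }

The patch below keeps the RawFd variant and just closes the descriptor
explicitly, which is the minimal fix.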
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/backup/backup_info.rs | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/src/backup/backup_info.rs b/src/backup/backup_info.rs
index 352d81e4..f8ea11bd 100644
--- a/src/backup/backup_info.rs
+++ b/src/backup/backup_info.rs
@@ -106,7 +106,11 @@ impl BackupGroup {
use nix::fcntl::{openat, OFlag};
match openat(l2_fd, &manifest_path, OFlag::O_RDONLY, nix::sys::stat::Mode::empty()) {
- Ok(_) => { /* manifest exists --> assume backup was successful */ },
+ Ok(rawfd) => {
+ /* manifest exists --> assume backup was successful */
+ /* close else this leaks! */
+ nix::unistd::close(rawfd)?;
+ },
Err(nix::Error::Sys(nix::errno::Errno::ENOENT)) => { return Ok(()); }
Err(err) => {
bail!("last_successful_backup: unexpected error - {}", err);
--
2.20.1
* [pbs-devel] [RFC proxmox-backup 6/6] sync: improve log output
2020-07-21 13:03 [pbs-devel] [PATCH proxmox-backup 0/6] various sync fixes/improvements Fabian Grünbichler
` (4 preceding siblings ...)
2020-07-21 13:03 ` [pbs-devel] [PATCH proxmox-backup 5/6] fix #2871: close FDs when scanning backup group Fabian Grünbichler
@ 2020-07-21 13:03 ` Fabian Grünbichler
2020-07-22 7:31 ` Thomas Lamprecht
2020-07-22 7:32 ` [pbs-devel] partially-applied: [PATCH proxmox-backup 0/6] various sync fixes/improvements Thomas Lamprecht
6 siblings, 1 reply; 9+ messages in thread
From: Fabian Grünbichler @ 2020-07-21 13:03 UTC (permalink / raw)
To: pbs-devel
add a bit more info and simplify some log statements.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
I am not sure whether equating 'last successful sync' with 'last successful
backup' is such a good idea in any case...
this also might be too verbose for busy datastores - maybe just printing a
count of skipped, previously synced snapshots would be better?
src/client/pull.rs | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/src/client/pull.rs b/src/client/pull.rs
index 421260a3..1718c752 100644
--- a/src/client/pull.rs
+++ b/src/client/pull.rs
@@ -310,8 +310,10 @@ pub async fn pull_group(
"backup-id": group.backup_id(),
});
+ worker.log(format!("sync group {}", group));
let mut result = client.get(&path, Some(args)).await?;
let mut list: Vec<SnapshotListItem> = serde_json::from_value(result["data"].take())?;
+ worker.log(format!("remote returned {} snapshots", list.len()));
list.sort_unstable_by(|a, b| a.backup_time.cmp(&b.backup_time));
@@ -336,7 +338,10 @@ pub async fn pull_group(
remote_snapshots.insert(backup_time);
if let Some(last_sync_time) = last_sync {
- if last_sync_time > backup_time { continue; }
+ if last_sync_time > backup_time {
+ worker.log(format!("skipping snapshot {} - already synced", snapshot));
+ continue;
+ }
}
let options = HttpClientOptions::new()
@@ -411,14 +416,14 @@ pub async fn pull_store(
let owner = tgt_store.create_backup_group(&group, &username)?;
// permission check
if owner != username { // only the owner is allowed to create additional snapshots
- worker.log(format!("sync group {}/{} failed - owner check failed ({} != {})",
- item.backup_type, item.backup_id, username, owner));
+ worker.log(format!("sync group {} failed - owner check failed ({} != {})",
+ group, username, owner));
errors = true;
continue; // do not stop here, instead continue
}
if let Err(err) = pull_group(worker, client, src_repo, tgt_store.clone(), &group, delete).await {
- worker.log(format!("sync group {}/{} failed - {}", item.backup_type, item.backup_id, err));
+ worker.log(format!("sync group {} failed - {}", group, err));
errors = true;
continue; // do not stop here, instead continue
}
@@ -429,7 +434,7 @@ pub async fn pull_store(
let local_groups = BackupGroup::list_groups(&tgt_store.base_path())?;
for local_group in local_groups {
if new_groups.contains(&local_group) { continue; }
- worker.log(format!("delete vanished group '{}/{}'", local_group.backup_type(), local_group.backup_id()));
+ worker.log(format!("delete vanished group '{}'", local_group));
if let Err(err) = tgt_store.remove_backup_group(&local_group) {
worker.log(err.to_string());
errors = true;
--
2.20.1
* Re: [pbs-devel] [RFC proxmox-backup 6/6] sync: improve log output
2020-07-21 13:03 ` [pbs-devel] [RFC proxmox-backup 6/6] sync: improve log output Fabian Grünbichler
@ 2020-07-22 7:31 ` Thomas Lamprecht
0 siblings, 0 replies; 9+ messages in thread
From: Thomas Lamprecht @ 2020-07-22 7:31 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 21.07.20 15:03, Fabian Grünbichler wrote:
> add a bit more info and simplify some log statements.
>
> Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
> ---
>
> Notes:
> I am not sure whether this idea of 'last successful sync' == 'last successful
> backup' is such a good idea in any case..
>
> this also might be too verbose for busy datastores - maybe just printing a
> count of skipped, already previously synced snapshots is better?
yeah, IMO the "skipping snapshot {} - already synced" line is too much, it can
easily generate some thousands to hundreds of thousands of log lines in a big
and "old" datastore.
I'd then also fold the count of skipped, previously synced snapshots and the
"remote returned {} snapshots" message into one single line.
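A rough sketch of that aggregation (stand-in types, the exact wording of the
summary line is made up):

    /// `snapshot_times` stands in for the remote snapshot list,
    /// `last_sync` for the timestamp of the last synced snapshot.
    fn sync_summary(snapshot_times: &[i64], last_sync: Option<i64>) -> String {
        let skipped = snapshot_times
            .iter()
            .filter(|t| matches!(last_sync, Some(last) if last > **t))
            .count();

        // one summary line instead of one "skipping snapshot ..." entry each
        format!(
            "remote returned {} snapshots, {} skipped (already synced)",
            snapshot_times.len(),
            skipped
        )
    }

pull_group() would then log this once after sorting the list instead of
logging inside the per-snapshot loop.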
* [pbs-devel] partially-applied: [PATCH proxmox-backup 0/6] various sync fixes/improvements
2020-07-21 13:03 [pbs-devel] [PATCH proxmox-backup 0/6] various sync fixes/improvements Fabian Grünbichler
` (5 preceding siblings ...)
2020-07-21 13:03 ` [pbs-devel] [RFC proxmox-backup 6/6] sync: improve log output Fabian Grünbichler
@ 2020-07-22 7:32 ` Thomas Lamprecht
6 siblings, 0 replies; 9+ messages in thread
From: Thomas Lamprecht @ 2020-07-22 7:32 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 21.07.20 15:03, Fabian Grünbichler wrote:
> Fabian Grünbichler (6):
> .gitignore: add build directory
> fix #2860: skip in-progress snapshots when syncing
> api: translate ENOTFOUND to 404 for downloads
> fix #2865: detect and skip vanished snapshots
> fix #2871: close FDs when scanning backup group
> sync: improve log output
>
> .gitignore | 1 +
> src/api2/helpers.rs | 7 ++++-
> src/backup/backup_info.rs | 6 ++++-
> src/client/http_client.rs | 5 ++--
> src/client/pull.rs | 57 ++++++++++++++++++++++++++++++---------
> 5 files changed, 60 insertions(+), 16 deletions(-)
>
applied all but the 6/6 RFC, thanks!