* [pbs-devel] [PATCH v3 proxmox 1/1] updater: impl UpdaterType for Vec
2021-10-28 13:00 [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Fabian Grünbichler
@ 2021-10-28 13:00 ` Fabian Grünbichler
2021-11-09 8:31 ` [pbs-devel] applied: " Dietmar Maurer
2021-10-28 13:00 ` [pbs-devel] [PATCH v3 proxmox-backup 01/11] api-types: add schema for backup group Fabian Grünbichler
` (12 subsequent siblings)
13 siblings, 1 reply; 16+ messages in thread
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
by replacing the whole Vec.
if we ever want to support adding/removing/modifying elements of a Vec
via the Updater, we'd need to extend it anyway (or use a custom
updater).
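as a minimal std-only sketch of the resulting semantics (the trait shape mirrors proxmox-schema, but `apply_vec_update` is a hypothetical helper added only to illustrate the full-replacement behaviour):

```rust
// stand-in for the proxmox-schema trait: each type declares what its
// updater looks like
trait UpdaterType {
    type Updater;
}

// the impl this patch adds: the whole Vec is the unit of update, so the
// updater is simply an optional replacement Vec
impl<T> UpdaterType for Vec<T> {
    type Updater = Option<Self>;
}

// hypothetical helper showing the semantics: None leaves the Vec
// untouched, Some(v) replaces it wholesale (no per-element merge)
fn apply_vec_update<T>(current: &mut Vec<T>, update: <Vec<T> as UpdaterType>::Updater) {
    if let Some(new) = update {
        *current = new;
    }
}

fn main() {
    let mut groups = vec!["vm/100".to_string(), "ct/200".to_string()];
    apply_vec_update(&mut groups, None);
    assert_eq!(groups.len(), 2);
    apply_vec_update(&mut groups, Some(vec!["host/foo".to_string()]));
    assert_eq!(groups, vec!["host/foo".to_string()]);
}
```

adding/removing/modifying single elements would indeed need a richer Updater type, as noted above.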
Suggested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
proxmox-schema/src/schema.rs | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/proxmox-schema/src/schema.rs b/proxmox-schema/src/schema.rs
index 34135f4..5e6e818 100644
--- a/proxmox-schema/src/schema.rs
+++ b/proxmox-schema/src/schema.rs
@@ -1179,6 +1179,11 @@ where
type Updater = T::Updater;
}
+// this will replace the whole Vec
+impl<T> UpdaterType for Vec<T> {
+ type Updater = Option<Self>;
+}
+
pub trait ApiType {
const API_SCHEMA: Schema;
}
--
2.30.2
^ permalink raw reply [flat|nested] 16+ messages in thread
* [pbs-devel] applied: [PATCH v3 proxmox 1/1] updater: impl UpdaterType for Vec
From: Dietmar Maurer @ 2021-11-09 8:31 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
applied
* [pbs-devel] [PATCH v3 proxmox-backup 01/11] api-types: add schema for backup group
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
the regex was already there, and we need a simple type/schema for
passing in multiple groups as Vec/Array via the API.
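roughly, such a schema accepts "type/id" paths; a std-only approximation (the allowed id characters here are an assumption, not the exact GROUP_PATH_REGEX pattern):

```rust
// approximate stand-in for GROUP_PATH_REGEX: a backup group is
// "<type>/<id>" with type one of vm/ct/host (the id character class
// below is an assumption, not the exact upstream pattern)
fn is_valid_group(s: &str) -> bool {
    match s.split_once('/') {
        Some((ty, id)) => {
            matches!(ty, "vm" | "ct" | "host")
                && !id.is_empty()
                && id.chars().all(|c| c.is_ascii_alphanumeric() || "_-.".contains(c))
        }
        None => false,
    }
}

fn main() {
    assert!(is_valid_group("vm/100"));
    assert!(is_valid_group("host/backup-server.local"));
    assert!(!is_valid_group("vm"));
    assert!(!is_valid_group("lxc/100"));
}
```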
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
pbs-api-types/src/datastore.rs | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index 77c1258f..b1dd09d4 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -40,6 +40,7 @@ pub const BACKUP_ARCHIVE_NAME_SCHEMA: Schema = StringSchema::new("Backup archive
.schema();
pub const BACKUP_ID_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_ID_REGEX);
+pub const BACKUP_GROUP_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&GROUP_PATH_REGEX);
pub const BACKUP_ID_SCHEMA: Schema = StringSchema::new("Backup ID.")
.format(&BACKUP_ID_FORMAT)
@@ -57,6 +58,10 @@ pub const BACKUP_TIME_SCHEMA: Schema = IntegerSchema::new("Backup time (Unix epo
.minimum(1_547_797_308)
.schema();
+pub const BACKUP_GROUP_SCHEMA: Schema = StringSchema::new("Backup Group")
+ .format(&BACKUP_GROUP_FORMAT)
+ .schema();
+
pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
.format(&PROXMOX_SAFE_ID_FORMAT)
.min_length(3)
--
2.30.2
* [pbs-devel] [PATCH v3 proxmox-backup 02/11] api: add GroupFilter(List) type
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
at the API level, this is a simple (wrapped) Vec of Strings with a
verifier function. all users should use the provided helper to get the
actual GroupFilter enum values, which can't be directly used in the API
schema because of restrictions of the api macro.
validation of the schema + parsing into the proper type intentionally
use the same fn to avoid the two getting out of sync, even if it means
compiling the REs twice.
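the prefix dispatch can be sketched std-only like this (schema validation via parse_simple_value is replaced by inline checks, and the regex pattern is kept as a plain String instead of a compiled Regex):

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum GroupFilter {
    BackupType(String),
    Group(String),
    Regex(String), // pattern kept as text; the real type compiles it
}

impl FromStr for GroupFilter {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // split_once keeps everything after the first ':' intact, so
        // group identifiers containing '/' pass through unchanged
        match s.split_once(':') {
            Some(("group", value)) => Ok(GroupFilter::Group(value.to_string())),
            Some(("type", value)) => {
                if matches!(value, "vm" | "ct" | "host") {
                    Ok(GroupFilter::BackupType(value.to_string()))
                } else {
                    Err(format!("'{}' is not a valid backup type", value))
                }
            }
            Some(("regex", value)) => Ok(GroupFilter::Regex(value.to_string())),
            Some((ty, _)) => Err(format!("expected 'group', 'type' or 'regex' prefix, got '{}'", ty)),
            None => Err(format!("'{}' has no 'prefix:value' form", s)),
        }
    }
}

fn main() {
    assert_eq!("type:vm".parse::<GroupFilter>(), Ok(GroupFilter::BackupType("vm".to_string())));
    assert_eq!("group:vm/100".parse::<GroupFilter>(), Ok(GroupFilter::Group("vm/100".to_string())));
    assert!("color:blue".parse::<GroupFilter>().is_err());
}
```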
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
v2->v3:
- use ArraySchema directly now that proxmox-schema supports updating Vecs
- use forward_(de)_serialize_to_..
Requires bumped proxmox-schema with UpdaterType impl for Vec<T>!
pbs-api-types/src/jobs.rs | 56 +++++++++++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/pbs-api-types/src/jobs.rs b/pbs-api-types/src/jobs.rs
index f47a294a..eda4ef95 100644
--- a/pbs-api-types/src/jobs.rs
+++ b/pbs-api-types/src/jobs.rs
@@ -1,3 +1,7 @@
+use anyhow::format_err;
+use std::str::FromStr;
+
+use regex::Regex;
use serde::{Deserialize, Serialize};
use proxmox_schema::*;
@@ -5,6 +9,7 @@ use proxmox_schema::*;
use crate::{
Userid, Authid, REMOTE_ID_SCHEMA, DRIVE_NAME_SCHEMA, MEDIA_POOL_NAME_SCHEMA,
SINGLE_LINE_COMMENT_SCHEMA, PROXMOX_SAFE_ID_FORMAT, DATASTORE_SCHEMA,
+ BACKUP_GROUP_SCHEMA, BACKUP_TYPE_SCHEMA,
};
const_regex!{
@@ -317,6 +322,57 @@ pub struct TapeBackupJobStatus {
pub next_media_label: Option<String>,
}
+#[derive(Clone, Debug)]
+/// Filter for matching `BackupGroup`s, for use with `BackupGroup::filter`.
+pub enum GroupFilter {
+ /// BackupGroup type - either `vm`, `ct`, or `host`.
+ BackupType(String),
+ /// Full identifier of BackupGroup, including type
+ Group(String),
+ /// A regular expression matched against the full identifier of the BackupGroup
+ Regex(Regex),
+}
+
+impl std::str::FromStr for GroupFilter {
+ type Err = anyhow::Error;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ match s.split_once(":") {
+ Some(("group", value)) => parse_simple_value(value, &BACKUP_GROUP_SCHEMA).map(|_| GroupFilter::Group(value.to_string())),
+ Some(("type", value)) => parse_simple_value(value, &BACKUP_TYPE_SCHEMA).map(|_| GroupFilter::BackupType(value.to_string())),
+ Some(("regex", value)) => Ok(GroupFilter::Regex(Regex::new(value)?)),
+ Some((ty, _value)) => Err(format_err!("expected 'group', 'type' or 'regex' prefix, got '{}'", ty)),
+ None => Err(format_err!("input doesn't match expected format '<group:GROUP|type:<vm|ct|host>|regex:REGEX>'")),
+ }.map_err(|err| format_err!("'{}' - {}", s, err))
+ }
+}
+
+// used for serializing below, caution!
+impl std::fmt::Display for GroupFilter {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ match self {
+ GroupFilter::BackupType(backup_type) => write!(f, "type:{}", backup_type),
+ GroupFilter::Group(backup_group) => write!(f, "group:{}", backup_group),
+ GroupFilter::Regex(regex) => write!(f, "regex:{}", regex.as_str()),
+ }
+ }
+}
+
+proxmox::forward_deserialize_to_from_str!(GroupFilter);
+proxmox::forward_serialize_to_display!(GroupFilter);
+
+fn verify_group_filter(input: &str) -> Result<(), anyhow::Error> {
+ GroupFilter::from_str(input).map(|_| ())
+}
+
+pub const GROUP_FILTER_SCHEMA: Schema = StringSchema::new(
+ "Group filter based on group identifier ('group:GROUP'), group type ('type:<vm|ct|host>'), or regex ('regex:RE').")
+ .format(&ApiStringFormat::VerifyFn(verify_group_filter))
+ .type_text("<type:<vm|ct|host>|group:GROUP|regex:RE>")
+ .schema();
+
+pub const GROUP_FILTER_LIST_SCHEMA: Schema = ArraySchema::new("List of group filters.", &GROUP_FILTER_SCHEMA).schema();
+
#[api(
properties: {
id: {
--
2.30.2
* [pbs-devel] [PATCH v3 proxmox-backup 03/11] BackupGroup: add filter helper
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
to have a single implementation of "group is matched by group filter".
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
v2->v3: rename `filter` fn to `matches`
there might be a better place for this if we want to support more complex
filters in the future (like, exists in local datastore, or has > X snapshots,
..)
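a reduced sketch of the helper (BackupGroup trimmed to the fields the filter needs; a plain substring check stands in for the real regex match):

```rust
// trimmed-down stand-ins for the real types
struct BackupGroup {
    backup_type: String, // "vm", "ct" or "host"
    backup_id: String,
}

enum GroupFilter {
    BackupType(String),
    Group(String),
    Regex(String), // pattern kept as text; substring match below
}

impl BackupGroup {
    fn path(&self) -> String {
        format!("{}/{}", self.backup_type, self.backup_id)
    }

    // the single place that decides "group is matched by group filter"
    fn matches(&self, filter: &GroupFilter) -> bool {
        match filter {
            GroupFilter::BackupType(ty) => &self.backup_type == ty,
            GroupFilter::Group(group) => self.path() == *group,
            GroupFilter::Regex(pattern) => self.path().contains(pattern.as_str()),
        }
    }
}

fn main() {
    let group = BackupGroup { backup_type: "vm".to_string(), backup_id: "100".to_string() };
    assert!(group.matches(&GroupFilter::BackupType("vm".to_string())));
    assert!(group.matches(&GroupFilter::Group("vm/100".to_string())));
    assert!(!group.matches(&GroupFilter::Group("ct/100".to_string())));
}
```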
pbs-datastore/src/backup_info.rs | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/pbs-datastore/src/backup_info.rs b/pbs-datastore/src/backup_info.rs
index b94b9779..9f56e924 100644
--- a/pbs-datastore/src/backup_info.rs
+++ b/pbs-datastore/src/backup_info.rs
@@ -1,5 +1,6 @@
use std::os::unix::io::RawFd;
use std::path::{Path, PathBuf};
+use std::str::FromStr;
use anyhow::{bail, format_err, Error};
@@ -10,6 +11,7 @@ use pbs_api_types::{
GROUP_PATH_REGEX,
SNAPSHOT_PATH_REGEX,
BACKUP_FILE_REGEX,
+ GroupFilter,
};
use super::manifest::MANIFEST_BLOB_NAME;
@@ -155,6 +157,17 @@ impl BackupGroup {
Ok(last)
}
+
+ pub fn matches(&self, filter: &GroupFilter) -> bool {
+ match filter {
+ GroupFilter::Group(backup_group) => match BackupGroup::from_str(&backup_group) {
+ Ok(group) => &group == self,
+ Err(_) => false, // shouldn't happen if value is schema-checked
+ },
+ GroupFilter::BackupType(backup_type) => self.backup_type() == backup_type,
+ GroupFilter::Regex(regex) => regex.is_match(&self.to_string()),
+ }
+ }
}
impl std::fmt::Display for BackupGroup {
--
2.30.2
* [pbs-devel] [PATCH v3 proxmox-backup 04/11] pull: use BackupGroup consistently
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
instead of `GroupListItem`s. we convert it anyway, so might as well do
that at the start.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/server/pull.rs | 25 ++++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 555f0a94..5c3f9a18 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -656,29 +656,32 @@ pub async fn pull_store(
}
});
+ let list:Vec<BackupGroup> = list
+ .into_iter()
+ .map(|item| BackupGroup::new(item.backup_type, item.backup_id))
+ .collect();
+
let mut errors = false;
let mut new_groups = std::collections::HashSet::new();
- for item in list.iter() {
- new_groups.insert(BackupGroup::new(&item.backup_type, &item.backup_id));
+ for group in list.iter() {
+ new_groups.insert(group.clone());
}
let mut progress = StoreProgress::new(list.len() as u64);
- for (done, item) in list.into_iter().enumerate() {
+ for (done, group) in list.into_iter().enumerate() {
progress.done_groups = done as u64;
progress.done_snapshots = 0;
progress.group_snapshots = 0;
- let group = BackupGroup::new(&item.backup_type, &item.backup_id);
-
let (owner, _lock_guard) = match tgt_store.create_locked_backup_group(&group, &auth_id) {
Ok(result) => result,
Err(err) => {
task_log!(
worker,
- "sync group {}/{} failed - group lock failed: {}",
- item.backup_type, item.backup_id, err
+ "sync group {} failed - group lock failed: {}",
+ &group, err
);
errors = true; // do not stop here, instead continue
continue;
@@ -690,8 +693,8 @@ pub async fn pull_store(
// only the owner is allowed to create additional snapshots
task_log!(
worker,
- "sync group {}/{} failed - owner check failed ({} != {})",
- item.backup_type, item.backup_id, auth_id, owner
+ "sync group {} failed - owner check failed ({} != {})",
+ &group, auth_id, owner
);
errors = true; // do not stop here, instead continue
} else if let Err(err) = pull_group(
@@ -707,8 +710,8 @@ pub async fn pull_store(
{
task_log!(
worker,
- "sync group {}/{} failed - {}",
- item.backup_type, item.backup_id, err,
+ "sync group {} failed - {}",
+ &group, err,
);
errors = true; // do not stop here, instead continue
}
--
2.30.2
* [pbs-devel] [PATCH v3 proxmox-backup 05/11] pull/sync: extract passed along vars into struct
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
this is basically the sync job config without the ID, with some values
already converted, plus a convenient helper to generate the http client
from it.
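the shape of the change, reduced to a std-only sketch (the structs are stripped down to a few fields; the defaults mirror the patch, though "root@pam" as the textual fallback owner is an assumption standing in for Authid::root_auth_id()):

```rust
use std::convert::TryFrom;

// trimmed-down stand-ins for the real config and parameter types
struct SyncJobConfig {
    store: String,
    owner: Option<String>,
    remove_vanished: Option<bool>,
}

struct PullParameters {
    store: String,
    owner: String,
    remove_vanished: bool,
}

impl TryFrom<&SyncJobConfig> for PullParameters {
    type Error = String;

    fn try_from(job: &SyncJobConfig) -> Result<Self, Self::Error> {
        Ok(PullParameters {
            store: job.store.clone(),
            // same defaults as the patch: fall back to the root auth id,
            // remove vanished snapshots unless disabled
            owner: job.owner.clone().unwrap_or_else(|| "root@pam".to_string()),
            remove_vanished: job.remove_vanished.unwrap_or(true),
        })
    }
}

fn main() {
    let job = SyncJobConfig { store: "tank".to_string(), owner: None, remove_vanished: None };
    let params = PullParameters::try_from(&job).unwrap();
    assert_eq!(params.store, "tank");
    assert_eq!(params.owner, "root@pam");
    assert!(params.remove_vanished);
}
```

bundling the values once up front means pull_store/pull_group take a single `&PullParameters` instead of five separate arguments.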
Suggested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/api2/config/remote.rs | 4 +-
src/api2/pull.rs | 61 +++++++++++++-------------
src/server/pull.rs | 91 ++++++++++++++++++++++++++-------------
3 files changed, 92 insertions(+), 64 deletions(-)
diff --git a/src/api2/config/remote.rs b/src/api2/config/remote.rs
index 29e638d7..4dffe6bb 100644
--- a/src/api2/config/remote.rs
+++ b/src/api2/config/remote.rs
@@ -277,7 +277,7 @@ pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error>
}
/// Helper to get client for remote.cfg entry
-pub async fn remote_client(remote: Remote) -> Result<HttpClient, Error> {
+pub async fn remote_client(remote: &Remote) -> Result<HttpClient, Error> {
let options = HttpClientOptions::new_non_interactive(remote.password.clone(), remote.config.fingerprint.clone());
let client = HttpClient::new(
@@ -322,7 +322,7 @@ pub async fn scan_remote_datastores(name: String) -> Result<Vec<DataStoreListIte
api_err)
};
- let client = remote_client(remote)
+ let client = remote_client(&remote)
.await
.map_err(map_remote_err)?;
let api_res = client
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 4280d922..5ae916ed 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -1,5 +1,5 @@
//! Sync datastore from remote server
-use std::sync::{Arc};
+use std::convert::TryFrom;
use anyhow::{format_err, Error};
use futures::{select, future::FutureExt};
@@ -7,18 +7,18 @@ use futures::{select, future::FutureExt};
use proxmox_schema::api;
use proxmox_router::{ApiMethod, Router, RpcEnvironment, Permission};
-use pbs_client::{HttpClient, BackupRepository};
use pbs_api_types::{
- Remote, Authid, SyncJobConfig,
+ Authid, SyncJobConfig,
DATASTORE_SCHEMA, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA,
PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ,
};
use pbs_tools::task_log;
use proxmox_rest_server::WorkerTask;
use pbs_config::CachedUserInfo;
-use pbs_datastore::DataStore;
-use crate::server::{jobstate::Job, pull::pull_store};
+use crate::server::pull::{PullParameters, pull_store};
+use crate::server::jobstate::Job;
+
pub fn check_pull_privs(
auth_id: &Authid,
@@ -40,27 +40,18 @@ pub fn check_pull_privs(
Ok(())
}
-pub async fn get_pull_parameters(
- store: &str,
- remote: &str,
- remote_store: &str,
-) -> Result<(HttpClient, BackupRepository, Arc<DataStore>), Error> {
-
- let tgt_store = DataStore::lookup_datastore(store)?;
-
- let (remote_config, _digest) = pbs_config::remote::config()?;
- let remote: Remote = remote_config.lookup("remote", remote)?;
-
- let src_repo = BackupRepository::new(
- Some(remote.config.auth_id.clone()),
- Some(remote.config.host.clone()),
- remote.config.port,
- remote_store.to_string(),
- );
-
- let client = crate::api2::config::remote::remote_client(remote).await?;
-
- Ok((client, src_repo, tgt_store))
+impl TryFrom<&SyncJobConfig> for PullParameters {
+ type Error = Error;
+
+ fn try_from(sync_job: &SyncJobConfig) -> Result<Self, Self::Error> {
+ PullParameters::new(
+ &sync_job.store,
+ &sync_job.remote,
+ &sync_job.remote_store,
+ sync_job.owner.as_ref().unwrap_or_else(|| Authid::root_auth_id()).clone(),
+ sync_job.remove_vanished,
+ )
+ }
}
pub fn do_sync_job(
@@ -94,9 +85,8 @@ pub fn do_sync_job(
let worker_future = async move {
- let delete = sync_job.remove_vanished.unwrap_or(true);
- let sync_owner = sync_job.owner.unwrap_or_else(|| Authid::root_auth_id().clone());
- let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
+ let pull_params = PullParameters::try_from(&sync_job)?;
+ let client = pull_params.client().await?;
task_log!(worker, "Starting datastore sync job '{}'", job_id);
if let Some(event_str) = schedule {
@@ -110,7 +100,7 @@ pub fn do_sync_job(
sync_job.remote_store,
);
- pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, sync_owner).await?;
+ pull_store(&worker, &client, &pull_params).await?;
task_log!(worker, "sync job '{}' end", &job_id);
@@ -187,14 +177,21 @@ async fn pull (
check_pull_privs(&auth_id, &store, &remote, &remote_store, delete)?;
- let (client, src_repo, tgt_store) = get_pull_parameters(&store, &remote, &remote_store).await?;
+ let pull_params = PullParameters::new(
+ &store,
+ &remote,
+ &remote_store,
+ auth_id.clone(),
+ remove_vanished,
+ )?;
+ let client = pull_params.client().await?;
// fixme: set to_stdout to false?
let upid_str = WorkerTask::spawn("sync", Some(store.clone()), auth_id.to_string(), true, move |worker| async move {
task_log!(worker, "sync datastore '{}' start", store);
- let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, auth_id);
+ let pull_future = pull_store(&worker, &client, &pull_params);
let future = select!{
success = pull_future.fuse() => success,
abort = worker.abort_future().map(|_| Err(format_err!("pull aborted"))) => abort,
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 5c3f9a18..2c454e2d 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -13,7 +13,7 @@ use http::StatusCode;
use proxmox_router::HttpError;
-use pbs_api_types::{Authid, SnapshotListItem, GroupListItem};
+use pbs_api_types::{Authid, GroupListItem, Remote, SnapshotListItem};
use pbs_datastore::{DataStore, BackupInfo, BackupDir, BackupGroup, StoreProgress};
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::dynamic_index::DynamicIndexReader;
@@ -33,6 +33,44 @@ use crate::tools::ParallelHandler;
// fixme: delete vanished groups
// Todo: correctly lock backup groups
+pub struct PullParameters {
+ remote: Remote,
+ source: BackupRepository,
+ store: Arc<DataStore>,
+ owner: Authid,
+ remove_vanished: bool,
+}
+
+impl PullParameters {
+ pub fn new(
+ store: &str,
+ remote: &str,
+ remote_store: &str,
+ owner: Authid,
+ remove_vanished: Option<bool>,
+ ) -> Result<Self, Error> {
+ let store = DataStore::lookup_datastore(store)?;
+
+ let (remote_config, _digest) = pbs_config::remote::config()?;
+ let remote: Remote = remote_config.lookup("remote", remote)?;
+
+ let remove_vanished = remove_vanished.unwrap_or(true);
+
+ let source = BackupRepository::new(
+ Some(remote.config.auth_id.clone()),
+ Some(remote.config.host.clone()),
+ remote.config.port,
+ remote_store.to_string(),
+ );
+
+ Ok(Self { remote, source, store, owner, remove_vanished })
+ }
+
+ pub async fn client(&self) -> Result<HttpClient, Error> {
+ crate::api2::config::remote::remote_client(&self.remote).await
+ }
+}
+
async fn pull_index_chunks<I: IndexFile>(
worker: &WorkerTask,
chunk_reader: RemoteChunkReader,
@@ -503,13 +541,11 @@ impl std::fmt::Display for SkipInfo {
pub async fn pull_group(
worker: &WorkerTask,
client: &HttpClient,
- src_repo: &BackupRepository,
- tgt_store: Arc<DataStore>,
+ params: &PullParameters,
group: &BackupGroup,
- delete: bool,
progress: &mut StoreProgress,
) -> Result<(), Error> {
- let path = format!("api2/json/admin/datastore/{}/snapshots", src_repo.store());
+ let path = format!("api2/json/admin/datastore/{}/snapshots", params.source.store());
let args = json!({
"backup-type": group.backup_type(),
@@ -525,7 +561,7 @@ pub async fn pull_group(
let fingerprint = client.fingerprint();
- let last_sync = tgt_store.last_successful_backup(group)?;
+ let last_sync = params.store.last_successful_backup(group)?;
let mut remote_snapshots = std::collections::HashSet::new();
@@ -566,16 +602,16 @@ pub async fn pull_group(
let options = HttpClientOptions::new_non_interactive(auth_info.ticket.clone(), fingerprint.clone());
let new_client = HttpClient::new(
- src_repo.host(),
- src_repo.port(),
- src_repo.auth_id(),
+ params.source.host(),
+ params.source.port(),
+ params.source.auth_id(),
options,
)?;
let reader = BackupReader::start(
new_client,
None,
- src_repo.store(),
+ params.source.store(),
snapshot.group().backup_type(),
snapshot.group().backup_id(),
backup_time,
@@ -586,7 +622,7 @@ pub async fn pull_group(
let result = pull_snapshot_from(
worker,
reader,
- tgt_store.clone(),
+ params.store.clone(),
&snapshot,
downloaded_chunks.clone(),
)
@@ -598,14 +634,14 @@ pub async fn pull_group(
result?; // stop on error
}
- if delete {
- let local_list = group.list_backups(&tgt_store.base_path())?;
+ if params.remove_vanished {
+ let local_list = group.list_backups(&params.store.base_path())?;
for info in local_list {
let backup_time = info.backup_dir.backup_time();
if remote_snapshots.contains(&backup_time) {
continue;
}
- if info.backup_dir.is_protected(tgt_store.base_path()) {
+ if info.backup_dir.is_protected(params.store.base_path()) {
task_log!(
worker,
"don't delete vanished snapshot {:?} (protected)",
@@ -614,7 +650,7 @@ pub async fn pull_group(
continue;
}
task_log!(worker, "delete vanished snapshot {:?}", info.backup_dir.relative_path());
- tgt_store.remove_backup_dir(&info.backup_dir, false)?;
+ params.store.remove_backup_dir(&info.backup_dir, false)?;
}
}
@@ -628,15 +664,12 @@ pub async fn pull_group(
pub async fn pull_store(
worker: &WorkerTask,
client: &HttpClient,
- src_repo: &BackupRepository,
- tgt_store: Arc<DataStore>,
- delete: bool,
- auth_id: Authid,
+ params: &PullParameters,
) -> Result<(), Error> {
// explicit create shared lock to prevent GC on newly created chunks
- let _shared_store_lock = tgt_store.try_shared_chunk_store_lock()?;
+ let _shared_store_lock = params.store.try_shared_chunk_store_lock()?;
- let path = format!("api2/json/admin/datastore/{}/groups", src_repo.store());
+ let path = format!("api2/json/admin/datastore/{}/groups", params.source.store());
let mut result = client
.get(&path, None)
@@ -675,7 +708,7 @@ pub async fn pull_store(
progress.done_snapshots = 0;
progress.group_snapshots = 0;
- let (owner, _lock_guard) = match tgt_store.create_locked_backup_group(&group, &auth_id) {
+ let (owner, _lock_guard) = match params.store.create_locked_backup_group(&group, &params.owner) {
Ok(result) => result,
Err(err) => {
task_log!(
@@ -689,21 +722,19 @@ pub async fn pull_store(
};
// permission check
- if auth_id != owner {
+ if params.owner != owner {
// only the owner is allowed to create additional snapshots
task_log!(
worker,
"sync group {} failed - owner check failed ({} != {})",
- &group, auth_id, owner
+ &group, params.owner, owner
);
errors = true; // do not stop here, instead continue
} else if let Err(err) = pull_group(
worker,
client,
- src_repo,
- tgt_store.clone(),
+ params,
&group,
- delete,
&mut progress,
)
.await
@@ -717,9 +748,9 @@ pub async fn pull_store(
}
}
- if delete {
+ if params.remove_vanished {
let result: Result<(), Error> = proxmox_lang::try_block!({
- let local_groups = BackupInfo::list_backup_groups(&tgt_store.base_path())?;
+ let local_groups = BackupInfo::list_backup_groups(&params.store.base_path())?;
for local_group in local_groups {
if new_groups.contains(&local_group) {
continue;
@@ -730,7 +761,7 @@ pub async fn pull_store(
local_group.backup_type(),
local_group.backup_id()
);
- match tgt_store.remove_backup_group(&local_group) {
+ match params.store.remove_backup_group(&local_group) {
Ok(true) => {},
Ok(false) => {
task_log!(worker, "kept some protected snapshots of group '{}'", local_group);
--
2.30.2
* [pbs-devel] [PATCH v3 proxmox-backup 06/11] pull: allow pulling groups selectively
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
without requiring workarounds based on ownership and limited
visibility/access.
if a group filter is set, remove_vanished will only consider filtered
groups for removal, to prevent concurrent disjoint filters from trashing
each other's synced groups.
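the two uses of the filter list can be sketched with plain strings standing in for groups and filters (exact match stands in for the real GroupFilter matching):

```rust
// a group is selected if ANY filter matches it
fn apply_filters(group: &str, filters: &[&str]) -> bool {
    filters.iter().any(|f| group == *f)
}

// with a filter set, only groups this job would have synced are
// candidates for vanished-group removal; other jobs' groups are left alone
fn vanished_candidates<'a>(
    local: &[&'a str],
    remote: &[&str],
    filters: Option<&[&str]>,
) -> Vec<&'a str> {
    local
        .iter()
        .filter(|g| !remote.contains(g))
        .filter(|g| match filters {
            None => true, // no filter: every vanished group may go
            Some(f) => apply_filters(g, f),
        })
        .copied()
        .collect()
}

fn main() {
    let local = ["vm/100", "ct/200", "host/a"];
    let remote = ["vm/100"];
    let filter: &[&str] = &["ct/200"];
    // a disjoint filter must not touch host/a, even though it vanished
    assert_eq!(vanished_candidates(&local, &remote, Some(filter)), vec!["ct/200"]);
    // without a filter, every vanished group is a removal candidate
    assert_eq!(vanished_candidates(&local, &remote, None), vec!["ct/200", "host/a"]);
}
```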
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
v2->v3: use Vec<GroupFilter> and GROUP_FILTER_LIST_SCHEMA
src/api2/pull.rs | 9 +++++++-
src/bin/proxmox-backup-manager.rs | 14 ++++++++++--
src/server/pull.rs | 38 +++++++++++++++++++++++++++----
3 files changed, 53 insertions(+), 8 deletions(-)
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 5ae916ed..d4a14f10 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -8,7 +8,7 @@ use proxmox_schema::api;
use proxmox_router::{ApiMethod, Router, RpcEnvironment, Permission};
use pbs_api_types::{
- Authid, SyncJobConfig,
+ Authid, SyncJobConfig, GroupFilter, GROUP_FILTER_LIST_SCHEMA,
DATASTORE_SCHEMA, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA,
PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ,
};
@@ -50,6 +50,7 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
&sync_job.remote_store,
sync_job.owner.as_ref().unwrap_or_else(|| Authid::root_auth_id()).clone(),
sync_job.remove_vanished,
+ None,
)
}
}
@@ -151,6 +152,10 @@ pub fn do_sync_job(
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
},
+ "groups": {
+ schema: GROUP_FILTER_LIST_SCHEMA,
+ optional: true,
+ },
},
},
access: {
@@ -168,6 +173,7 @@ async fn pull (
remote: String,
remote_store: String,
remove_vanished: Option<bool>,
+ groups: Option<Vec<GroupFilter>>,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<String, Error> {
@@ -183,6 +189,7 @@ async fn pull (
&remote_store,
auth_id.clone(),
remove_vanished,
+ groups,
)?;
let client = pull_params.client().await?;
diff --git a/src/bin/proxmox-backup-manager.rs b/src/bin/proxmox-backup-manager.rs
index 92e6bb2a..9e52a474 100644
--- a/src/bin/proxmox-backup-manager.rs
+++ b/src/bin/proxmox-backup-manager.rs
@@ -12,8 +12,9 @@ use pbs_client::{display_task_log, view_task_result};
use pbs_tools::percent_encoding::percent_encode_component;
use pbs_tools::json::required_string_param;
use pbs_api_types::{
- DATASTORE_SCHEMA, UPID_SCHEMA, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA,
- IGNORE_VERIFIED_BACKUPS_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA,
+ GroupFilter,
+ DATASTORE_SCHEMA, GROUP_FILTER_LIST_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA, REMOTE_ID_SCHEMA,
+ REMOVE_VANISHED_BACKUPS_SCHEMA, UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA,
};
use proxmox_rest_server::wait_for_local_worker;
@@ -238,6 +239,10 @@ fn task_mgmt_cli() -> CommandLineInterface {
schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
optional: true,
},
+ "groups": {
+ schema: GROUP_FILTER_LIST_SCHEMA,
+ optional: true,
+ },
"output-format": {
schema: OUTPUT_FORMAT,
optional: true,
@@ -251,6 +256,7 @@ async fn pull_datastore(
remote_store: String,
local_store: String,
remove_vanished: Option<bool>,
+ groups: Option<Vec<GroupFilter>>,
param: Value,
) -> Result<Value, Error> {
@@ -264,6 +270,10 @@ async fn pull_datastore(
"remote-store": remote_store,
});
+ if groups.is_some() {
+ args["groups"] = json!(groups);
+ }
+
if let Some(remove_vanished) = remove_vanished {
args["remove-vanished"] = Value::from(remove_vanished);
}
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 2c454e2d..63bf92b4 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -13,8 +13,9 @@ use http::StatusCode;
use proxmox_router::HttpError;
-use pbs_api_types::{Authid, GroupListItem, Remote, SnapshotListItem};
-use pbs_datastore::{DataStore, BackupInfo, BackupDir, BackupGroup, StoreProgress};
+use pbs_api_types::{Authid, GroupFilter, GroupListItem, Remote, SnapshotListItem};
+
+use pbs_datastore::{BackupDir, BackupInfo, BackupGroup, DataStore, StoreProgress};
use pbs_datastore::data_blob::DataBlob;
use pbs_datastore::dynamic_index::DynamicIndexReader;
use pbs_datastore::fixed_index::FixedIndexReader;
@@ -39,6 +40,7 @@ pub struct PullParameters {
store: Arc<DataStore>,
owner: Authid,
remove_vanished: bool,
+ group_filter: Option<Vec<GroupFilter>>,
}
impl PullParameters {
@@ -48,6 +50,7 @@ impl PullParameters {
remote_store: &str,
owner: Authid,
remove_vanished: Option<bool>,
+ group_filter: Option<Vec<GroupFilter>>,
) -> Result<Self, Error> {
let store = DataStore::lookup_datastore(store)?;
@@ -63,7 +66,7 @@ impl PullParameters {
remote_store.to_string(),
);
- Ok(Self { remote, source, store, owner, remove_vanished })
+ Ok(Self { remote, source, store, owner, remove_vanished, group_filter })
}
pub async fn client(&self) -> Result<HttpClient, Error> {
@@ -678,8 +681,7 @@ pub async fn pull_store(
let mut list: Vec<GroupListItem> = serde_json::from_value(result["data"].take())?;
- task_log!(worker, "found {} groups to sync", list.len());
-
+ let total_count = list.len();
list.sort_unstable_by(|a, b| {
let type_order = a.backup_type.cmp(&b.backup_type);
if type_order == std::cmp::Ordering::Equal {
@@ -689,11 +691,32 @@ pub async fn pull_store(
}
});
+ let apply_filters = |group: &BackupGroup, filters: &[GroupFilter]| -> bool {
+ filters
+ .iter()
+ .any(|filter| group.matches(filter))
+ };
+
let list:Vec<BackupGroup> = list
.into_iter()
.map(|item| BackupGroup::new(item.backup_type, item.backup_id))
.collect();
+ let list = if let Some(ref group_filter) = &params.group_filter {
+ let unfiltered_count = list.len();
+ let list:Vec<BackupGroup> = list
+ .into_iter()
+ .filter(|group| {
+ apply_filters(&group, group_filter)
+ })
+ .collect();
+ task_log!(worker, "found {} groups to sync (out of {} total)", list.len(), unfiltered_count);
+ list
+ } else {
+ task_log!(worker, "found {} groups to sync", total_count);
+ list
+ };
+
let mut errors = false;
let mut new_groups = std::collections::HashSet::new();
@@ -755,6 +778,11 @@ pub async fn pull_store(
if new_groups.contains(&local_group) {
continue;
}
+ if let Some(ref group_filter) = &params.group_filter {
+ if !apply_filters(&local_group, group_filter) {
+ continue;
+ }
+ }
task_log!(
worker,
"delete vanished group '{}/{}'",
--
2.30.2
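The any-match semantics of the `apply_filters` closure in the diff above can be illustrated with a small standalone sketch. The types here are hypothetical stand-ins (the real `GroupFilter` and `BackupGroup` live in pbs-api-types / pbs-datastore, and the real regex variant uses a compiled regex rather than the substring check used below):

```rust
// Hypothetical stand-ins, only to illustrate the logical-OR semantics:
// a group is kept if *any* configured filter matches it.
#[derive(Clone)]
enum GroupFilter {
    BackupType(String), // e.g. "type:vm"
    Group(String),      // e.g. "group:vm/100"
    Regex(String),      // substring match stands in for a real regex here
}

struct BackupGroup {
    backup_type: String,
    backup_id: String,
}

impl BackupGroup {
    fn matches(&self, filter: &GroupFilter) -> bool {
        let full = format!("{}/{}", self.backup_type, self.backup_id);
        match filter {
            GroupFilter::BackupType(t) => &self.backup_type == t,
            GroupFilter::Group(g) => &full == g,
            GroupFilter::Regex(r) => full.contains(r.as_str()),
        }
    }
}

// mirrors the `apply_filters` closure from the patch
fn apply_filters(group: &BackupGroup, filters: &[GroupFilter]) -> bool {
    filters.iter().any(|filter| group.matches(filter))
}

fn main() {
    let group = BackupGroup {
        backup_type: "vm".into(),
        backup_id: "100".into(),
    };
    // second filter matches, so the group is synced
    assert!(apply_filters(
        &group,
        &[
            GroupFilter::BackupType("ct".into()),
            GroupFilter::Group("vm/100".into()),
        ]
    ));
    // no filter matches -> group is skipped
    assert!(!apply_filters(&group, &[GroupFilter::BackupType("host".into())]));
}
```

Note that with a non-empty filter list and no matches, nothing is synced; only an absent `group_filter` (the `else` branch in the patch) means "sync everything".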
* [pbs-devel] [PATCH v3 proxmox-backup 07/11] sync: add group filtering
2021-10-28 13:00 [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Fabian Grünbichler
` (6 preceding siblings ...)
2021-10-28 13:00 ` [pbs-devel] [PATCH v3 proxmox-backup 06/11] pull: allow pulling groups selectively Fabian Grünbichler
@ 2021-10-28 13:00 ` Fabian Grünbichler
2021-10-28 13:00 ` [pbs-devel] [PATCH v3 proxmox-backup 08/11] remote: add backup group scanning Fabian Grünbichler
` (5 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
like for manual pulls, but persisted in the sync job config and visible
in the relevant GUI parts.
GUI is read-only for now (and defaults to no filtering on creation), as
this is a rather advanced feature that requires a complex GUI to be
user-friendly (regex-freeform, type-combobox, remote group scanning +
selector with additional freeform input).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
v2->v3:
- use Vec<GroupFilter> / GROUP_FILTER_LIST_SCHEMA
- simplify renderer
I did test the API manually though, and updating the filter list by
overwriting it with a new one passed in as multiple parameters works as
expected.
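The "overwriting with a new one" behaviour follows from the proxmox patch in this series, which sets `type Updater = Option<Self>` for `Vec<T>`. A minimal sketch of what that means for an update (not the actual proxmox-schema machinery):

```rust
// Sketch: with `Updater = Option<Vec<T>>`, an update either leaves the
// field alone (None) or overwrites it wholesale (Some(new_vec)); there
// is no element-wise add/remove/modify merge.
fn apply_update<T>(current: &mut Vec<T>, update: Option<Vec<T>>) {
    if let Some(new_value) = update {
        *current = new_value; // whole-Vec replacement
    }
    // None => keep the existing value unchanged
}

fn main() {
    let mut groups = vec!["type:vm".to_string(), "type:ct".to_string()];

    // None leaves the list untouched
    apply_update(&mut groups, None);
    assert_eq!(groups, vec!["type:vm", "type:ct"]);

    // Some(...) replaces the list, it does not append
    apply_update(&mut groups, Some(vec!["group:vm/100".to_string()]));
    assert_eq!(groups, vec!["group:vm/100"]);
}
```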
if we want to make this configurable over the GUI, we probably want to switch
the job edit window to a tabpanel and add a second grid tab for selecting the
groups.
pbs-api-types/src/jobs.rs | 6 ++++++
src/api2/config/sync.rs | 5 +++++
src/api2/pull.rs | 2 +-
www/config/SyncView.js | 13 ++++++++++++-
www/window/SyncJobEdit.js | 12 ++++++++++++
5 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/pbs-api-types/src/jobs.rs b/pbs-api-types/src/jobs.rs
index eda4ef95..c5d3bafe 100644
--- a/pbs-api-types/src/jobs.rs
+++ b/pbs-api-types/src/jobs.rs
@@ -403,6 +403,10 @@ pub const GROUP_FILTER_LIST_SCHEMA: Schema = ArraySchema::new("List of group fil
optional: true,
schema: SYNC_SCHEDULE_SCHEMA,
},
+ groups: {
+ schema: GROUP_FILTER_LIST_SCHEMA,
+ optional: true,
+ },
}
)]
#[derive(Serialize,Deserialize,Clone,Updater)]
@@ -422,6 +426,8 @@ pub struct SyncJobConfig {
pub comment: Option<String>,
#[serde(skip_serializing_if="Option::is_none")]
pub schedule: Option<String>,
+ #[serde(skip_serializing_if="Option::is_none")]
+ pub groups: Option<Vec<GroupFilter>>,
}
#[api(
diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index fba476da..890221e6 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -192,6 +192,8 @@ pub enum DeletableProperty {
schedule,
/// Delete the remove-vanished flag.
remove_vanished,
+ /// Delete the groups property.
+ groups,
}
#[api(
@@ -254,6 +256,7 @@ pub fn update_sync_job(
DeletableProperty::comment => { data.comment = None; },
DeletableProperty::schedule => { data.schedule = None; },
DeletableProperty::remove_vanished => { data.remove_vanished = None; },
+ DeletableProperty::groups => { data.groups = None; },
}
}
}
@@ -271,6 +274,7 @@ pub fn update_sync_job(
if let Some(remote) = update.remote { data.remote = remote; }
if let Some(remote_store) = update.remote_store { data.remote_store = remote_store; }
if let Some(owner) = update.owner { data.owner = Some(owner); }
+ if let Some(groups) = update.groups { data.groups = Some(groups); }
let schedule_changed = data.schedule != update.schedule;
if update.schedule.is_some() { data.schedule = update.schedule; }
@@ -390,6 +394,7 @@ acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
owner: Some(write_auth_id.clone()),
comment: None,
remove_vanished: None,
+ groups: None,
schedule: None,
};
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index d4a14f10..3d644202 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -50,7 +50,7 @@ impl TryFrom<&SyncJobConfig> for PullParameters {
&sync_job.remote_store,
sync_job.owner.as_ref().unwrap_or_else(|| Authid::root_auth_id()).clone(),
sync_job.remove_vanished,
- None,
+ sync_job.groups.clone(),
)
}
}
diff --git a/www/config/SyncView.js b/www/config/SyncView.js
index 7d7e751c..d2a3954f 100644
--- a/www/config/SyncView.js
+++ b/www/config/SyncView.js
@@ -1,7 +1,7 @@
Ext.define('pbs-sync-jobs-status', {
extend: 'Ext.data.Model',
fields: [
- 'id', 'owner', 'remote', 'remote-store', 'store', 'schedule',
+ 'id', 'owner', 'remote', 'remote-store', 'store', 'schedule', 'groups',
'next-run', 'last-run-upid', 'last-run-state', 'last-run-endtime',
{
name: 'duration',
@@ -214,6 +219,12 @@ Ext.define('PBS.config.SyncJobView', {
flex: 2,
sortable: true,
},
+ {
+ header: gettext('Backup Groups'),
+ dataIndex: 'groups',
+ renderer: v => v ? Ext.String.htmlEncode(v) : gettext('All'),
+ width: 80,
+ },
{
header: gettext('Schedule'),
dataIndex: 'schedule',
diff --git a/www/window/SyncJobEdit.js b/www/window/SyncJobEdit.js
index 47e65ae3..2399f11f 100644
--- a/www/window/SyncJobEdit.js
+++ b/www/window/SyncJobEdit.js
@@ -199,6 +199,15 @@ Ext.define('PBS.window.SyncJobEdit', {
],
columnB: [
+ {
+ fieldLabel: gettext('Backup Groups'),
+ xtype: 'displayfield',
+ name: 'groups',
+ renderer: v => v ? Ext.String.htmlEncode(v) : gettext('All'),
+ cbind: {
+ hidden: '{isCreate}',
+ },
+ },
{
fieldLabel: gettext('Comment'),
xtype: 'proxmoxtextfield',
--
2.30.2
* [pbs-devel] [PATCH v3 proxmox-backup 08/11] remote: add backup group scanning
2021-10-28 13:00 [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Fabian Grünbichler
` (7 preceding siblings ...)
2021-10-28 13:00 ` [pbs-devel] [PATCH v3 proxmox-backup 07/11] sync: add group filtering Fabian Grünbichler
@ 2021-10-28 13:00 ` Fabian Grünbichler
2021-10-28 13:00 ` [pbs-devel] [PATCH v3 proxmox-backup 09/11] manager: render group filter properly Fabian Grünbichler
` (4 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/api2/config/remote.rs | 73 ++++++++++++++++-
src/bin/proxmox-backup-manager.rs | 105 ++++++++++++++++++++++---
src/bin/proxmox_backup_manager/sync.rs | 2 +
3 files changed, 166 insertions(+), 14 deletions(-)
diff --git a/src/api2/config/remote.rs b/src/api2/config/remote.rs
index 4dffe6bb..8077b610 100644
--- a/src/api2/config/remote.rs
+++ b/src/api2/config/remote.rs
@@ -1,4 +1,7 @@
use anyhow::{bail, format_err, Error};
+use proxmox::sortable;
+use proxmox_router::SubdirMap;
+use proxmox_router::list_subdirs_api_method;
use serde_json::Value;
use ::serde::{Deserialize, Serialize};
@@ -8,8 +11,8 @@ use proxmox_schema::api;
use pbs_client::{HttpClient, HttpClientOptions};
use pbs_api_types::{
REMOTE_ID_SCHEMA, REMOTE_PASSWORD_SCHEMA, Remote, RemoteConfig, RemoteConfigUpdater,
- Authid, PROXMOX_CONFIG_DIGEST_SCHEMA, DataStoreListItem, SyncJobConfig,
- PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY,
+ Authid, PROXMOX_CONFIG_DIGEST_SCHEMA, DATASTORE_SCHEMA, GroupListItem,
+ DataStoreListItem, SyncJobConfig, PRIV_REMOTE_AUDIT, PRIV_REMOTE_MODIFY,
};
use pbs_config::sync;
@@ -340,8 +343,72 @@ pub async fn scan_remote_datastores(name: String) -> Result<Vec<DataStoreListIte
}
}
+#[api(
+ input: {
+ properties: {
+ name: {
+ schema: REMOTE_ID_SCHEMA,
+ },
+ store: {
+ schema: DATASTORE_SCHEMA,
+ },
+ },
+ },
+ access: {
+ permission: &Permission::Privilege(&["remote", "{name}"], PRIV_REMOTE_AUDIT, false),
+ },
+ returns: {
+ description: "Lists the accessible backup groups in a remote datastore.",
+ type: Array,
+ items: { type: GroupListItem },
+ },
+)]
+/// List groups of a remote.cfg entry's datastore
+pub async fn scan_remote_groups(name: String, store: String) -> Result<Vec<GroupListItem>, Error> {
+ let (remote_config, _digest) = pbs_config::remote::config()?;
+ let remote: Remote = remote_config.lookup("remote", &name)?;
+
+ let map_remote_err = |api_err| {
+ http_err!(INTERNAL_SERVER_ERROR,
+ "failed to scan remote '{}' - {}",
+ &name,
+ api_err)
+ };
+
+ let client = remote_client(&remote)
+ .await
+ .map_err(map_remote_err)?;
+ let api_res = client
+ .get(&format!("api2/json/admin/datastore/{}/groups", store), None)
+ .await
+ .map_err(map_remote_err)?;
+ let parse_res = match api_res.get("data") {
+ Some(data) => serde_json::from_value::<Vec<GroupListItem>>(data.to_owned()),
+ None => bail!("remote {} did not return any group list data", &name),
+ };
+
+ match parse_res {
+ Ok(parsed) => Ok(parsed),
+ Err(_) => bail!("Failed to parse remote scan api result."),
+ }
+}
+
+#[sortable]
+const DATASTORE_SCAN_SUBDIRS: SubdirMap = &[
+ (
+ "groups",
+ &Router::new()
+ .get(&API_METHOD_SCAN_REMOTE_GROUPS)
+ ),
+];
+
+const DATASTORE_SCAN_ROUTER: Router = Router::new()
+ .get(&list_subdirs_api_method!(DATASTORE_SCAN_SUBDIRS))
+ .subdirs(DATASTORE_SCAN_SUBDIRS);
+
const SCAN_ROUTER: Router = Router::new()
- .get(&API_METHOD_SCAN_REMOTE_DATASTORES);
+ .get(&API_METHOD_SCAN_REMOTE_DATASTORES)
+ .match_all("store", &DATASTORE_SCAN_ROUTER);
const ITEM_ROUTER: Router = Router::new()
.get(&API_METHOD_READ_REMOTE)
diff --git a/src/bin/proxmox-backup-manager.rs b/src/bin/proxmox-backup-manager.rs
index 9e52a474..637c04ad 100644
--- a/src/bin/proxmox-backup-manager.rs
+++ b/src/bin/proxmox-backup-manager.rs
@@ -1,7 +1,7 @@
use std::collections::HashMap;
use std::io::{self, Write};
-use anyhow::{format_err, Error};
+use anyhow::Error;
use serde_json::{json, Value};
use proxmox::tools::fs::CreateOptions;
@@ -9,10 +9,11 @@ use proxmox_router::{cli::*, RpcEnvironment};
use proxmox_schema::api;
use pbs_client::{display_task_log, view_task_result};
+use pbs_config::sync;
use pbs_tools::percent_encoding::percent_encode_component;
use pbs_tools::json::required_string_param;
use pbs_api_types::{
- GroupFilter,
+ GroupFilter, SyncJobConfig,
DATASTORE_SCHEMA, GROUP_FILTER_LIST_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA, REMOTE_ID_SCHEMA,
REMOVE_VANISHED_BACKUPS_SCHEMA, UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA,
};
@@ -398,6 +399,7 @@ async fn run() -> Result<(), Error> {
.completion_cb("local-store", pbs_config::datastore::complete_datastore_name)
.completion_cb("remote", pbs_config::remote::complete_remote_name)
.completion_cb("remote-store", complete_remote_datastore_name)
+ .completion_cb("groups", complete_remote_datastore_group_filter)
)
.insert(
"verify",
@@ -440,24 +442,105 @@ fn main() -> Result<(), Error> {
pbs_runtime::main(run())
}
+fn get_sync_job(id: &String) -> Result<SyncJobConfig, Error> {
+ let (config, _digest) = sync::config()?;
+
+ config.lookup("sync", id)
+}
+
+fn get_remote(param: &HashMap<String, String>) -> Option<String> {
+ param
+ .get("remote")
+ .map(|r| r.to_owned())
+ .or_else(|| {
+ if let Some(id) = param.get("id") {
+ if let Ok(job) = get_sync_job(id) {
+ return Some(job.remote.clone());
+ }
+ }
+ None
+ })
+}
+
+fn get_remote_store(param: &HashMap<String, String>) -> Option<(String, String)> {
+ let mut job: Option<SyncJobConfig> = None;
+
+ let remote = param
+ .get("remote")
+ .map(|r| r.to_owned())
+ .or_else(|| {
+ if let Some(id) = param.get("id") {
+ job = get_sync_job(id).ok();
+ if let Some(ref job) = job {
+ return Some(job.remote.clone());
+ }
+ }
+ None
+ });
+
+ if let Some(remote) = remote {
+ let store = param
+ .get("remote-store")
+ .map(|r| r.to_owned())
+ .or_else(|| job.map(|job| job.remote_store.clone()));
+
+ if let Some(store) = store {
+ return Some((remote, store))
+ }
+ }
+
+ None
+}
+
// shell completion helper
pub fn complete_remote_datastore_name(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
let mut list = Vec::new();
- let _ = proxmox_lang::try_block!({
- let remote = param.get("remote").ok_or_else(|| format_err!("no remote"))?;
+ if let Some(remote) = get_remote(param) {
+ if let Ok(data) = pbs_runtime::block_on(async move {
+ crate::api2::config::remote::scan_remote_datastores(remote).await
+ }) {
- let data = pbs_runtime::block_on(async move {
- crate::api2::config::remote::scan_remote_datastores(remote.clone()).await
- })?;
+ for item in data {
+ list.push(item.store);
+ }
+ }
+ }
- for item in data {
- list.push(item.store);
+ list
+}
+
+// shell completion helper
+pub fn complete_remote_datastore_group(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+
+ let mut list = Vec::new();
+
+ if let Some((remote, remote_store)) = get_remote_store(param) {
+ if let Ok(data) = pbs_runtime::block_on(async move {
+ crate::api2::config::remote::scan_remote_groups(remote.clone(), remote_store.clone()).await
+ }) {
+
+ for item in data {
+ list.push(format!("{}/{}", item.backup_type, item.backup_id));
+ }
}
+ }
+
+ list
+}
+
+// shell completion helper
+pub fn complete_remote_datastore_group_filter(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+
+ let mut list = Vec::new();
+
+ list.push("regex:".to_string());
+ list.push("type:ct".to_string());
+ list.push("type:host".to_string());
+ list.push("type:vm".to_string());
- Ok(())
- }).map_err(|_err: Error| { /* ignore */ });
+ list.extend(complete_remote_datastore_group(_arg, param).iter().map(|group| format!("group:{}", group)));
list
}
diff --git a/src/bin/proxmox_backup_manager/sync.rs b/src/bin/proxmox_backup_manager/sync.rs
index dfd8688d..8bf490ea 100644
--- a/src/bin/proxmox_backup_manager/sync.rs
+++ b/src/bin/proxmox_backup_manager/sync.rs
@@ -89,6 +89,7 @@ pub fn sync_job_commands() -> CommandLineInterface {
.completion_cb("store", pbs_config::datastore::complete_datastore_name)
.completion_cb("remote", pbs_config::remote::complete_remote_name)
.completion_cb("remote-store", crate::complete_remote_datastore_name)
+ .completion_cb("groups", crate::complete_remote_datastore_group_filter)
)
.insert("update",
CliCommand::new(&api2::config::sync::API_METHOD_UPDATE_SYNC_JOB)
@@ -97,6 +98,7 @@ pub fn sync_job_commands() -> CommandLineInterface {
.completion_cb("schedule", pbs_config::datastore::complete_calendar_event)
.completion_cb("store", pbs_config::datastore::complete_datastore_name)
.completion_cb("remote-store", crate::complete_remote_datastore_name)
+ .completion_cb("groups", crate::complete_remote_datastore_group_filter)
)
.insert("remove",
CliCommand::new(&api2::config::sync::API_METHOD_DELETE_SYNC_JOB)
--
2.30.2
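The completion helper in this patch suggests three filter prefixes (`regex:`, `type:`, `group:`). A hypothetical parser for them might look as follows; the actual `FromStr` implementation for `GroupFilter` lives in pbs-api-types and is not part of this message, so the exact variants and error handling here are assumptions:

```rust
// Hypothetical parser for the filter syntax hinted at by
// complete_remote_datastore_group_filter; the real pbs-api-types
// implementation may differ.
#[derive(Debug, PartialEq)]
enum GroupFilter {
    BackupType(String), // "type:vm", "type:ct", "type:host"
    Group(String),      // "group:vm/100"
    Regex(String),      // "regex:^vm/1.."
}

fn parse_group_filter(s: &str) -> Result<GroupFilter, String> {
    match s.split_once(':') {
        Some(("type", v)) => Ok(GroupFilter::BackupType(v.to_string())),
        Some(("group", v)) => Ok(GroupFilter::Group(v.to_string())),
        Some(("regex", v)) => Ok(GroupFilter::Regex(v.to_string())),
        _ => Err(format!("'{}' is not a valid group filter", s)),
    }
}

fn main() {
    assert_eq!(
        parse_group_filter("type:vm"),
        Ok(GroupFilter::BackupType("vm".into()))
    );
    assert_eq!(
        parse_group_filter("group:vm/100"),
        Ok(GroupFilter::Group("vm/100".into()))
    );
    assert!(parse_group_filter("bogus").is_err());
}
```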
* [pbs-devel] [PATCH v3 proxmox-backup 09/11] manager: render group filter properly
2021-10-28 13:00 [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Fabian Grünbichler
` (8 preceding siblings ...)
2021-10-28 13:00 ` [pbs-devel] [PATCH v3 proxmox-backup 08/11] remote: add backup group scanning Fabian Grünbichler
@ 2021-10-28 13:00 ` Fabian Grünbichler
2021-10-28 13:00 ` [pbs-devel] [PATCH v3 proxmox-backup 10/11] docs: mention group filter in sync docs Fabian Grünbichler
` (3 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/bin/proxmox_backup_manager/sync.rs | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/src/bin/proxmox_backup_manager/sync.rs b/src/bin/proxmox_backup_manager/sync.rs
index 8bf490ea..0fd89a9c 100644
--- a/src/bin/proxmox_backup_manager/sync.rs
+++ b/src/bin/proxmox_backup_manager/sync.rs
@@ -8,6 +8,18 @@ use pbs_api_types::JOB_ID_SCHEMA;
use proxmox_backup::api2;
+fn render_group_filter(value: &Value, _record: &Value) -> Result<String, Error> {
+ if let Some(group_filters) = value.as_array() {
+ let group_filters:Vec<&str> = group_filters
+ .iter()
+ .filter_map(Value::as_str)
+ .collect();
+ Ok(group_filters.join(" OR "))
+ } else {
+ Ok(String::from("all"))
+ }
+}
+
#[api(
input: {
properties: {
@@ -35,6 +47,7 @@ fn list_sync_jobs(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value
.column(ColumnConfig::new("remote"))
.column(ColumnConfig::new("remote-store"))
.column(ColumnConfig::new("schedule"))
+ .column(ColumnConfig::new("groups").renderer(render_group_filter))
.column(ColumnConfig::new("comment"));
format_and_print_result_full(&mut data, &info.returns, &output_format, &options);
@@ -66,6 +79,12 @@ fn show_sync_job(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value,
_ => unreachable!(),
};
+ if let Some(groups) = data.get_mut("groups") {
+ if let Ok(rendered) = render_group_filter(groups, groups) {
+ *groups = Value::String(rendered);
+ }
+ }
+
let options = default_table_format_options();
format_and_print_result_full(&mut data, &info.returns, &output_format, &options);
--
2.30.2
* [pbs-devel] [PATCH v3 proxmox-backup 10/11] docs: mention group filter in sync docs
2021-10-28 13:00 [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Fabian Grünbichler
` (9 preceding siblings ...)
2021-10-28 13:00 ` [pbs-devel] [PATCH v3 proxmox-backup 09/11] manager: render group filter properly Fabian Grünbichler
@ 2021-10-28 13:00 ` Fabian Grünbichler
2021-10-28 13:00 ` [pbs-devel] [RFC v3 proxmox-backup 11/11] fix #sync.cfg/pull: don't remove by default Fabian Grünbichler
` (2 subsequent siblings)
13 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
docs/managing-remotes.rst | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/docs/managing-remotes.rst b/docs/managing-remotes.rst
index ccb7313e..e3b13d52 100644
--- a/docs/managing-remotes.rst
+++ b/docs/managing-remotes.rst
@@ -90,6 +90,12 @@ the local datastore as well. If the ``owner`` option is not set (defaulting to
``root@pam``) or is set to something other than the configuring user,
``Datastore.Modify`` is required as well.
+If the ``groups`` option is set, only backup groups matching at least one of
+the specified criteria (backup type, full group identifier, or a regular
+expression matched against the full group identifier) are synced. The same
+filter is applied to local groups for handling of the ``remove-vanished``
+option.
+
.. note:: A sync job can only sync backup groups that the configured remote's
user/API token can read. If a remote is configured with a user/API token that
only has ``Datastore.Backup`` privileges, only the limited set of accessible
--
2.30.2
* [pbs-devel] [RFC v3 proxmox-backup 11/11] fix #sync.cfg/pull: don't remove by default
2021-10-28 13:00 [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Fabian Grünbichler
` (10 preceding siblings ...)
2021-10-28 13:00 ` [pbs-devel] [PATCH v3 proxmox-backup 10/11] docs: mention group filter in sync docs Fabian Grünbichler
@ 2021-10-28 13:00 ` Fabian Grünbichler
2021-11-04 9:57 ` [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Dominik Csapak
2021-11-18 10:11 ` [pbs-devel] applied: " Thomas Lamprecht
13 siblings, 0 replies; 16+ messages in thread
From: Fabian Grünbichler @ 2021-10-28 13:00 UTC (permalink / raw)
To: pbs-devel
and convert existing (manually created/edited) jobs to the previous
default value of 'true'. the GUI has always set this value and defaults
to 'false'.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
probably want to hold off on this until close to 7.1 bump, so that we
can notify potentially affected users of the `pull` API via release
notes, and only run the conversion once ;)
debian/postinst | 31 +++++++++++++++++++++++++++++++
pbs-api-types/src/jobs.rs | 2 +-
src/api2/pull.rs | 2 +-
src/server/pull.rs | 2 +-
4 files changed, 34 insertions(+), 3 deletions(-)
diff --git a/debian/postinst b/debian/postinst
index 83352853..af290069 100644
--- a/debian/postinst
+++ b/debian/postinst
@@ -4,6 +4,14 @@ set -e
#DEBHELPER#
+update_sync_job() {
+ job="$1"
+
+ echo "Updating sync job '$job' to make old 'remove-vanished' default explicit.."
+ proxmox-backup-manager sync-job update "$job" --remove-vanished true \
+ || echo "Failed, please check sync.cfg manually!"
+}
+
case "$1" in
configure)
# need to have user backup in the tape group
@@ -32,6 +40,29 @@ case "$1" in
echo "Fixing up termproxy user id in task log..."
flock -w 30 /var/log/proxmox-backup/tasks/active.lock sed -i 's/:termproxy::\([^@]\+\): /:termproxy::\1@pam: /' /var/log/proxmox-backup/tasks/active || true
fi
+
+ if dpkg --compare-versions "$2" 'lt' '7.1-1' && test -e /etc/proxmox-backup/sync.cfg; then
+ prev_job=""
+
+ # read from HERE doc because POSIX sh limitations
+ while read -r key value; do
+ if test "$key" = "sync:"; then
+ if test -n "$prev_job"; then
+ # previous job doesn't have an explicit value
+ update_sync_job "$prev_job"
+ fi
+ prev_job=$value
+ else
+ prev_job=""
+ fi
+ done <<EOF
+$(grep -e '^sync:' -e 'remove-vanished' /etc/proxmox-backup/sync.cfg)
+EOF
+ if test -n "$prev_job"; then
+ # last job doesn't have an explicit value
+ update_sync_job "$prev_job"
+ fi
+ fi
fi
;;
diff --git a/pbs-api-types/src/jobs.rs b/pbs-api-types/src/jobs.rs
index c5d3bafe..18c55dad 100644
--- a/pbs-api-types/src/jobs.rs
+++ b/pbs-api-types/src/jobs.rs
@@ -52,7 +52,7 @@ pub const VERIFICATION_SCHEDULE_SCHEMA: Schema = StringSchema::new(
pub const REMOVE_VANISHED_BACKUPS_SCHEMA: Schema = BooleanSchema::new(
"Delete vanished backups. This remove the local copy if the remote backup was deleted.")
- .default(true)
+ .default(false)
.schema();
#[api(
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 3d644202..acbd2884 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -179,7 +179,7 @@ async fn pull (
) -> Result<String, Error> {
let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
- let delete = remove_vanished.unwrap_or(true);
+ let delete = remove_vanished.unwrap_or(false);
check_pull_privs(&auth_id, &store, &remote, &remote_store, delete)?;
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 63bf92b4..e34ac891 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -57,7 +57,7 @@ impl PullParameters {
let (remote_config, _digest) = pbs_config::remote::config()?;
let remote: Remote = remote_config.lookup("remote", remote)?;
- let remove_vanished = remove_vanished.unwrap_or(true);
+ let remove_vanished = remove_vanished.unwrap_or(false);
let source = BackupRepository::new(
Some(remote.config.auth_id.clone()),
--
2.30.2
* Re: [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter
2021-10-28 13:00 [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Fabian Grünbichler
` (11 preceding siblings ...)
2021-10-28 13:00 ` [pbs-devel] [RFC v3 proxmox-backup 11/11] fix #sync.cfg/pull: don't remove by default Fabian Grünbichler
@ 2021-11-04 9:57 ` Dominik Csapak
2021-11-18 10:11 ` [pbs-devel] applied: " Thomas Lamprecht
13 siblings, 0 replies; 16+ messages in thread
From: Dominik Csapak @ 2021-11-04 9:57 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
series LGTM, I sent the tape filter series
on top of this series
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
On 10/28/21 15:00, Fabian Grünbichler wrote:
> this has been requested a few times on the forum, e.g. for a special
> sync job for the most important groups, or seeding of a new datastore
> with a partial view of an existing one.
>
> while it's possible to achieve similar results with hacky workarounds
> based on group ownership and reduced "visibility", implementing it
> properly is not that complex.
>
> possible future additions in a similar fashion:
> - exclude filters
> - filtering in other API calls (tape, listing groups/snapshots)
> - only sync/pull encrypted snapshots (less trusted off-site
> location)
> - only sync/pull latest snapshot in each group (fast seeding of new
> datastore)
>
> not changed: BackupGroup was not pulled into api-types - we can still
> re-evaluate this at some point if we want to switch the split api
> parameters we currently have, but at the moment this would mean lots of
> churn for a very small gain.
>
> changed since v2:
> - implement UpdaterType for Vec<T> in proxmox-schema, simplify proxmox-backup patches
> - pull parameters into struct
> - added patch for flipped remove-vanished default
>
> changed since v1:
> - reworked filter to support different types, rebased
> - dropped last patch
> - add docs patch
>
> proxmox:
>
> Fabian Grünbichler (1):
> updater: impl UpdaterType for Vec
>
> proxmox-schema/src/schema.rs | 5 +++++
> 1 file changed, 5 insertions(+)
>
> proxmox-backup:
>
> Fabian Grünbichler (11):
> api-types: add schema for backup group
> api: add GroupFilter(List) type
> BackupGroup: add filter helper
> pull: use BackupGroup consistently
> pull/sync: extract passed along vars into struct
> pull: allow pulling groups selectively
> sync: add group filtering
> remote: add backup group scanning
> manager: render group filter properly
> docs: mention group filter in sync docs
> fix #sync.cfg/pull: don't remove by default
>
> docs/managing-remotes.rst | 6 +
> debian/postinst | 31 ++++++
> pbs-api-types/src/datastore.rs | 5 +
> pbs-api-types/src/jobs.rs | 64 ++++++++++-
> pbs-datastore/src/backup_info.rs | 13 +++
> src/api2/config/remote.rs | 77 ++++++++++++-
> src/api2/config/sync.rs | 5 +
> src/api2/pull.rs | 70 ++++++------
> src/bin/proxmox-backup-manager.rs | 117 +++++++++++++++++--
> src/bin/proxmox_backup_manager/sync.rs | 21 ++++
> src/server/pull.rs | 148 ++++++++++++++++++-------
> www/config/SyncView.js | 13 ++-
> www/window/SyncJobEdit.js | 12 ++
> 13 files changed, 487 insertions(+), 95 deletions(-)
>
* [pbs-devel] applied: [PATCH v3 proxmox-backup 0/12] pull/sync group filter
2021-10-28 13:00 [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Fabian Grünbichler
` (12 preceding siblings ...)
2021-11-04 9:57 ` [pbs-devel] [PATCH v3 proxmox-backup 0/12] pull/sync group filter Dominik Csapak
@ 2021-11-18 10:11 ` Thomas Lamprecht
13 siblings, 0 replies; 16+ messages in thread
From: Thomas Lamprecht @ 2021-11-18 10:11 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Fabian Grünbichler
On 28.10.21 15:00, Fabian Grünbichler wrote:
> this has been requested a few times on the forum, e.g. for a special
> sync job for the most important groups, or seeding of a new datastore
> with a partial view of an existing one.
>
> while it's possible to achieve similar results with hacky workarounds
> based on group ownership and reduced "visibility", implementing it
> properly is not that complex.
>
> possible future additions in a similar fashion:
> - exclude filters
> - filtering in other API calls (tape, listing groups/snapshots)
> - only sync/pull encrypted snapshots (less trusted off-site
> location)
> - only sync/pull latest snapshot in each group (fast seeding of new
> datastore)
>
> not changed: BackupGroup was not pulled into api-types - we can still
> re-evaluate this at some point if we want to switch the split api
> parameters we currently have, but at the moment this would mean lots of
> churn for a very small gain.
>
> changed since v2:
> - implement UpdaterType for Vec<T> in proxmox-schema, simplify proxmox-backup patches
> - pull parameters into struct
> - added patch for flipped remove-vanished default
>
> changed since v1:
> - reworked filter to support different types, rebased
> - dropped last patch
> - add docs patch
>
> proxmox:
>
> Fabian Grünbichler (1):
> updater: impl UpdaterType for Vec
>
> proxmox-schema/src/schema.rs | 5 +++++
> 1 file changed, 5 insertions(+)
>
> proxmox-backup:
>
> Fabian Grünbichler (11):
> api-types: add schema for backup group
> api: add GroupFilter(List) type
> BackupGroup: add filter helper
> pull: use BackupGroup consistently
> pull/sync: extract passed along vars into struct
> pull: allow pulling groups selectively
> sync: add group filtering
> remote: add backup group scanning
> manager: render group filter properly
> docs: mention group filter in sync docs
> fix #sync.cfg/pull: don't remove by default
>
> docs/managing-remotes.rst | 6 +
> debian/postinst | 31 ++++++
> pbs-api-types/src/datastore.rs | 5 +
> pbs-api-types/src/jobs.rs | 64 ++++++++++-
> pbs-datastore/src/backup_info.rs | 13 +++
> src/api2/config/remote.rs | 77 ++++++++++++-
> src/api2/config/sync.rs | 5 +
> src/api2/pull.rs | 70 ++++++------
> src/bin/proxmox-backup-manager.rs | 117 +++++++++++++++++--
> src/bin/proxmox_backup_manager/sync.rs | 21 ++++
> src/server/pull.rs | 148 ++++++++++++++++++-------
> www/config/SyncView.js | 13 ++-
> www/window/SyncJobEdit.js | 12 ++
> 13 files changed, 487 insertions(+), 95 deletions(-)
>
applied series, thanks!
Made two follow ups:
- renamed `groups` to `group-filter` (please verify that nothing bogus was done)
- restructured the docs slightly and added a `type` and `group` example