public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter
@ 2021-07-22 14:35 Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 1/7] api-types: add schema for backup group Fabian Grünbichler
                   ` (7 more replies)
  0 siblings, 8 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-07-22 14:35 UTC (permalink / raw)
  To: pbs-devel

this has been requested a few times on the forum, e.g. for a special
sync job covering only the most important groups, or for seeding a new
datastore with a partial view of an existing one.

while it's possible to achieve similar results with hacky workarounds
based on group ownership and reduced "visibility", implementing it
properly is not that complex.
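
for illustration, a manual pull restricted to a few groups would then look
something like this (the exact CLI syntax for array parameters may differ,
group names are made up):

    proxmox-backup-manager pull <remote> <remote-store> <local-store> \
        --groups vm/100 --groups ct/200

the same filter can be persisted in a sync job via '--groups' on
'sync-job create/update'.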

possible future additions in a similar fashion:
- only sync/pull encrypted snapshots (less trusted off-site location)
- only sync/pull latest snapshot in each group (fast seeding of new
  datastore)

Fabian Grünbichler (7):
  api-types: add schema for backup group
  pull: allow pulling groups selectively
  sync: add group filtering
  remote: add backup group scanning
  manager: extend sync/pull completion
  manager: render group filter properly
  manager: don't complete sync job ID on creation

 pbs-api-types/src/lib.rs               |   4 +
 src/api2/config/remote.rs              |  69 +++++++++++++++-
 src/api2/config/sync.rs                |  22 ++++++
 src/api2/pull.rs                       |  36 ++++++++-
 src/bin/proxmox-backup-manager.rs      | 105 ++++++++++++++++++++++---
 src/bin/proxmox_backup_manager/sync.rs |  22 +++++-
 src/config/sync.rs                     |  10 +++
 src/server/pull.rs                     |  24 +++++-
 www/config/SyncView.js                 |  13 ++-
 www/window/SyncJobEdit.js              |  12 +++
 10 files changed, 295 insertions(+), 22 deletions(-)

-- 
2.30.2

* [pbs-devel] [PATCH proxmox-backup 1/7] api-types: add schema for backup group
  2021-07-22 14:35 [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Fabian Grünbichler
@ 2021-07-22 14:35 ` Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 2/7] pull: allow pulling groups selectively Fabian Grünbichler
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-07-22 14:35 UTC (permalink / raw)
  To: pbs-devel

the regex was already there, and we need a simple type/schema for
passing in multiple groups as Vec/Array via the API.
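
for reference, the strings accepted by this schema are the usual group
paths (assuming GROUP_PATH_REGEX still matches "<backup-type>/<backup-id>"),
e.g. "vm/100", "ct/1200" or "host/backup-server".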

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 pbs-api-types/src/lib.rs | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/pbs-api-types/src/lib.rs b/pbs-api-types/src/lib.rs
index 576099eb..5120e7c0 100644
--- a/pbs-api-types/src/lib.rs
+++ b/pbs-api-types/src/lib.rs
@@ -138,6 +138,9 @@ pub const BACKUP_TYPE_SCHEMA: Schema = StringSchema::new("Backup type.")
 pub const BACKUP_TIME_SCHEMA: Schema = IntegerSchema::new("Backup time (Unix epoch.)")
     .minimum(1_547_797_308)
     .schema();
+pub const BACKUP_GROUP_SCHEMA: Schema = StringSchema::new("Backup Group")
+    .format(&BACKUP_GROUP_FORMAT)
+    .schema();
 
 pub const DATASTORE_SCHEMA: Schema = StringSchema::new("Datastore name.")
     .format(&PROXMOX_SAFE_ID_FORMAT)
@@ -200,6 +203,7 @@ pub const PROXMOX_CONFIG_DIGEST_SCHEMA: Schema = StringSchema::new(
 .schema();
 
 pub const BACKUP_ID_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_ID_REGEX);
+pub const BACKUP_GROUP_FORMAT: ApiStringFormat = ApiStringFormat::Pattern(&GROUP_PATH_REGEX);
 
 /// API schema format definition for repository URLs
 pub const BACKUP_REPO_URL: ApiStringFormat = ApiStringFormat::Pattern(&BACKUP_REPO_URL_REGEX);
-- 
2.30.2

* [pbs-devel] [PATCH proxmox-backup 2/7] pull: allow pulling groups selectively
  2021-07-22 14:35 [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 1/7] api-types: add schema for backup group Fabian Grünbichler
@ 2021-07-22 14:35 ` Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 3/7] sync: add group filtering Fabian Grünbichler
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-07-22 14:35 UTC (permalink / raw)
  To: pbs-devel

without requiring workarounds based on ownership and limited
visibility/access.

if a group filter is set, remove_vanished will only consider filtered
groups for removal, to prevent concurrent disjoint filters from trashing
each other's synced groups.
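
to illustrate the remove_vanished semantics, roughly (just a sketch of the
logic, names made up, not the actual code in src/server/pull.rs):

    use std::collections::HashSet;
    use crate::backup::BackupGroup;

    /// a local group is only a removal candidate if it was not just
    /// synced and, when a group filter is set, if it is part of that
    /// filter.
    fn may_remove_vanished(
        local_group: &BackupGroup,
        new_groups: &HashSet<BackupGroup>,
        group_filter: &Option<HashSet<BackupGroup>>,
    ) -> bool {
        if new_groups.contains(local_group) {
            return false;
        }
        match group_filter {
            Some(filter) => filter.contains(local_group),
            None => true,
        }
    }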

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/api2/pull.rs                  | 29 +++++++++++++++++++++++++----
 src/bin/proxmox-backup-manager.rs | 14 ++++++++++++++
 src/server/pull.rs                | 24 ++++++++++++++++++++++--
 3 files changed, 61 insertions(+), 6 deletions(-)

diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 4893c9fb..36149761 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -1,5 +1,6 @@
 //! Sync datastore from remote server
 use std::sync::{Arc};
+use std::str::FromStr;
 
 use anyhow::{format_err, Error};
 use futures::{select, future::FutureExt};
@@ -10,9 +11,9 @@ use proxmox::api::{ApiMethod, Router, RpcEnvironment, Permission};
 use pbs_client::{HttpClient, BackupRepository};
 
 use crate::server::{WorkerTask, jobstate::Job, pull::pull_store};
-use crate::backup::DataStore;
+use crate::backup::{BackupGroup, DataStore};
 use crate::api2::types::{
-    DATASTORE_SCHEMA, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA, Authid,
+    BACKUP_GROUP_SCHEMA, DATASTORE_SCHEMA, REMOTE_ID_SCHEMA, REMOVE_VANISHED_BACKUPS_SCHEMA, Authid,
 };
 use crate::config::{
     remote,
@@ -101,7 +102,7 @@ pub fn do_sync_job(
                 worker.log(format!("Sync datastore '{}' from '{}/{}'",
                         sync_job.store, sync_job.remote, sync_job.remote_store));
 
-                pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, sync_owner).await?;
+                pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, sync_owner, None).await?;
 
                 worker.log(format!("sync job '{}' end", &job_id));
 
@@ -152,6 +153,14 @@ pub fn do_sync_job(
                 schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
                 optional: true,
             },
+            "groups": {
+                type: Array,
+                description: "List of group identifiers to filter for. All if unspecified.",
+                items: {
+                    schema: BACKUP_GROUP_SCHEMA,
+                },
+                optional: true,
+            },
         },
     },
     access: {
@@ -169,6 +178,7 @@ async fn pull (
     remote: String,
     remote_store: String,
     remove_vanished: Option<bool>,
+    groups: Option<Vec<String>>,
     _info: &ApiMethod,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<String, Error> {
@@ -185,7 +195,18 @@ async fn pull (
 
         worker.log(format!("sync datastore '{}' start", store));
 
-        let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, auth_id);
+        let groups = match groups {
+            Some(filter) => {
+                let mut groups = std::collections::HashSet::with_capacity(filter.len());
+                for group in filter {
+                    let group = BackupGroup::from_str(&group)?;
+                    groups.insert(group);
+                }
+                Some(groups)
+            },
+            None => None,
+        };
+        let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, auth_id, groups);
         let future = select!{
             success = pull_future.fuse() => success,
             abort = worker.abort_future().map(|_| Err(format_err!("pull aborted"))) => abort,
diff --git a/src/bin/proxmox-backup-manager.rs b/src/bin/proxmox-backup-manager.rs
index 93d6de57..6844a1ab 100644
--- a/src/bin/proxmox-backup-manager.rs
+++ b/src/bin/proxmox-backup-manager.rs
@@ -230,6 +230,15 @@ fn task_mgmt_cli() -> CommandLineInterface {
                 schema: REMOVE_VANISHED_BACKUPS_SCHEMA,
                 optional: true,
             },
+            "groups": {
+                type: Array,
+                description: "List of group identifiers to filter for. All if unspecified.",
+                items: {
+                    type: String,
+                    description: "Backup group identifier",
+                },
+                optional: true,
+            },
             "output-format": {
                 schema: OUTPUT_FORMAT,
                 optional: true,
@@ -243,6 +252,7 @@ async fn pull_datastore(
     remote_store: String,
     local_store: String,
     remove_vanished: Option<bool>,
+    groups: Option<Vec<String>>,
     param: Value,
 ) -> Result<Value, Error> {
 
@@ -256,6 +266,10 @@ async fn pull_datastore(
         "remote-store": remote_store,
     });
 
+    if groups.is_some() {
+        args["groups"] = json!(groups);
+    }
+
     if let Some(remove_vanished) = remove_vanished {
         args["remove-vanished"] = Value::from(remove_vanished);
     }
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 5214a218..2904c37f 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -631,6 +631,7 @@ pub async fn pull_store(
     tgt_store: Arc<DataStore>,
     delete: bool,
     auth_id: Authid,
+    group_filter: Option<HashSet<BackupGroup>>,
 ) -> Result<(), Error> {
     // explicit create shared lock to prevent GC on newly created chunks
     let _shared_store_lock = tgt_store.try_shared_chunk_store_lock()?;
@@ -644,8 +645,7 @@ pub async fn pull_store(
 
     let mut list: Vec<GroupListItem> = serde_json::from_value(result["data"].take())?;
 
-    worker.log(format!("found {} groups to sync", list.len()));
-
+    let total_count = list.len();
     list.sort_unstable_by(|a, b| {
         let type_order = a.backup_type.cmp(&b.backup_type);
         if type_order == std::cmp::Ordering::Equal {
@@ -655,6 +655,21 @@ pub async fn pull_store(
         }
     });
 
+
+    let list = if let Some(ref group_filter) = &group_filter {
+        let list:Vec<GroupListItem> = list
+            .into_iter()
+            .filter(|group| {
+                group_filter.contains(&BackupGroup::new(&group.backup_type, &group.backup_id))
+            })
+            .collect();
+        worker.log(format!("found {} groups to sync (out of {} requested by filter)", list.len(), group_filter.len()));
+        list
+    } else {
+        worker.log(format!("found {} groups to sync", total_count));
+        list
+    };
+
     let mut errors = false;
 
     let mut new_groups = std::collections::HashSet::new();
@@ -717,6 +732,11 @@ pub async fn pull_store(
                 if new_groups.contains(&local_group) {
                     continue;
                 }
+                if let Some(ref group_filter) = &group_filter {
+                    if !group_filter.contains(&local_group) {
+                        continue;
+                    }
+                }
                 worker.log(format!(
                     "delete vanished group '{}/{}'",
                     local_group.backup_type(),
-- 
2.30.2

* [pbs-devel] [PATCH proxmox-backup 3/7] sync: add group filtering
  2021-07-22 14:35 [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 1/7] api-types: add schema for backup group Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 2/7] pull: allow pulling groups selectively Fabian Grünbichler
@ 2021-07-22 14:35 ` Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 4/7] remote: add backup group scanning Fabian Grünbichler
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-07-22 14:35 UTC (permalink / raw)
  To: pbs-devel

like for manual pulls, but persisted in the sync job config and visible
in the relevant GUI parts.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---

Notes:
    if we want to make this configurable via the GUI, we probably want to switch
    the job edit window to a tabpanel and add a second grid tab for selecting
    the groups.
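
    for example, the filter on an existing job could then be set or cleared
    roughly like this (exact CLI syntax for the array/delete parameters may
    differ):

        proxmox-backup-manager sync-job update <id> --groups vm/100 --groups ct/200
        proxmox-backup-manager sync-job update <id> --delete groups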

 src/api2/config/sync.rs   | 22 ++++++++++++++++++++++
 src/api2/pull.rs          | 31 +++++++++++++++++++------------
 src/config/sync.rs        | 10 ++++++++++
 www/config/SyncView.js    | 13 ++++++++++++-
 www/window/SyncJobEdit.js | 12 ++++++++++++
 5 files changed, 75 insertions(+), 13 deletions(-)

diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index bc7b9f24..28c04179 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -137,6 +137,14 @@ pub fn list_sync_jobs(
                 optional: true,
                 schema: SYNC_SCHEDULE_SCHEMA,
             },
+            groups: {
+                type: Array,
+                description: "List of group identifiers to filter for. All if unspecified.",
+                items: {
+                    schema: BACKUP_GROUP_SCHEMA,
+                },
+                optional: true,
+            },
         },
     },
     access: {
@@ -222,6 +230,8 @@ pub enum DeletableProperty {
     schedule,
     /// Delete the remove-vanished flag.
     remove_vanished,
+    /// Delete the groups property.
+    groups,
 }
 
 #[api(
@@ -259,6 +269,14 @@ pub enum DeletableProperty {
                 optional: true,
                 schema: SYNC_SCHEDULE_SCHEMA,
             },
+            groups: {
+                type: Array,
+                description: "List of group identifiers to filter for. All if unspecified.",
+                items: {
+                    schema: BACKUP_GROUP_SCHEMA,
+                },
+                optional: true,
+            },
             delete: {
                 description: "List of properties to delete.",
                 type: Array,
@@ -289,6 +307,7 @@ pub fn update_sync_job(
     remove_vanished: Option<bool>,
     comment: Option<String>,
     schedule: Option<String>,
+    groups: Option<Vec<String>>,
     delete: Option<Vec<DeletableProperty>>,
     digest: Option<String>,
     rpcenv: &mut dyn RpcEnvironment,
@@ -315,6 +334,7 @@ pub fn update_sync_job(
                 DeletableProperty::comment => { data.comment = None; },
                 DeletableProperty::schedule => { data.schedule = None; },
                 DeletableProperty::remove_vanished => { data.remove_vanished = None; },
+                DeletableProperty::groups => { data.groups = None; },
             }
         }
     }
@@ -332,6 +352,7 @@ pub fn update_sync_job(
     if let Some(remote) = remote { data.remote = remote; }
     if let Some(remote_store) = remote_store { data.remote_store = remote_store; }
     if let Some(owner) = owner { data.owner = Some(owner); }
+    if let Some(groups) = groups { data.groups = Some(groups); }
 
     let schedule_changed = data.schedule != schedule;
     if schedule.is_some() { data.schedule = schedule; }
@@ -451,6 +472,7 @@ acl:1:/remote/remote1/remotestore1:write@pbs:RemoteSyncOperator
         owner: Some(write_auth_id.clone()),
         comment: None,
         remove_vanished: None,
+        groups: None,
         schedule: None,
     };
 
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 36149761..fbcabb11 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -1,4 +1,5 @@
 //! Sync datastore from remote server
+use std::collections::HashSet;
 use std::sync::{Arc};
 use std::str::FromStr;
 
@@ -61,6 +62,20 @@ pub async fn get_pull_parameters(
     Ok((client, src_repo, tgt_store))
 }
 
+fn convert_group_filter(groups: Option<Vec<String>>) -> Result<Option<HashSet<BackupGroup>>, Error> {
+    match groups {
+        Some(filter) => {
+            let mut groups = std::collections::HashSet::with_capacity(filter.len());
+            for group in filter {
+                let group = BackupGroup::from_str(&group)?;
+                groups.insert(group);
+            }
+            Ok(Some(groups))
+        },
+        None => Ok(None)
+    }
+}
+
 pub fn do_sync_job(
     mut job: Job,
     sync_job: SyncJobConfig,
@@ -102,7 +117,9 @@ pub fn do_sync_job(
                 worker.log(format!("Sync datastore '{}' from '{}/{}'",
                         sync_job.store, sync_job.remote, sync_job.remote_store));
 
-                pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, sync_owner, None).await?;
+                let sync_group_filter = convert_group_filter(sync_job.groups)?;
+
+                pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, sync_owner, sync_group_filter).await?;
 
                 worker.log(format!("sync job '{}' end", &job_id));
 
@@ -195,17 +212,7 @@ async fn pull (
 
         worker.log(format!("sync datastore '{}' start", store));
 
-        let groups = match groups {
-            Some(filter) => {
-                let mut groups = std::collections::HashSet::with_capacity(filter.len());
-                for group in filter {
-                    let group = BackupGroup::from_str(&group)?;
-                    groups.insert(group);
-                }
-                Some(groups)
-            },
-            None => None,
-        };
+        let groups = convert_group_filter(groups)?;
         let pull_future = pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, auth_id, groups);
         let future = select!{
             success = pull_future.fuse() => success,
diff --git a/src/config/sync.rs b/src/config/sync.rs
index 5d5b2060..c088e08a 100644
--- a/src/config/sync.rs
+++ b/src/config/sync.rs
@@ -49,6 +49,14 @@ lazy_static! {
             optional: true,
             schema: SYNC_SCHEDULE_SCHEMA,
         },
+        groups: {
+            type: Array,
+            description: "List of group identifiers to filter for. All if unspecified.",
+            items: {
+                schema: BACKUP_GROUP_SCHEMA,
+            },
+            optional: true,
+        },
     }
 )]
 #[derive(Serialize,Deserialize,Clone)]
@@ -67,6 +75,8 @@ pub struct SyncJobConfig {
     pub comment: Option<String>,
     #[serde(skip_serializing_if="Option::is_none")]
     pub schedule: Option<String>,
+    #[serde(skip_serializing_if="Option::is_none")]
+    pub groups: Option<Vec<String>>,
 }
 
 #[api(
diff --git a/www/config/SyncView.js b/www/config/SyncView.js
index 7d7e751c..d2a3954f 100644
--- a/www/config/SyncView.js
+++ b/www/config/SyncView.js
@@ -1,7 +1,7 @@
 Ext.define('pbs-sync-jobs-status', {
     extend: 'Ext.data.Model',
     fields: [
-	'id', 'owner', 'remote', 'remote-store', 'store', 'schedule',
+	'id', 'owner', 'remote', 'remote-store', 'store', 'schedule', 'groups',
 	'next-run', 'last-run-upid', 'last-run-state', 'last-run-endtime',
 	{
 	    name: 'duration',
@@ -106,6 +106,11 @@ Ext.define('PBS.config.SyncJobView', {
 	    return Ext.String.htmlEncode(value, metadata, record);
 	},
 
+	render_optional_groups: function(value, metadata, record) {
+	    if (!value) return gettext('All');
+	    return Ext.String.htmlEncode(value, metadata, record);
+	},
+
 	startStore: function() { this.getView().getStore().rstore.startUpdate(); },
 	stopStore: function() { this.getView().getStore().rstore.stopUpdate(); },
 
@@ -214,6 +219,12 @@ Ext.define('PBS.config.SyncJobView', {
 	    flex: 2,
 	    sortable: true,
 	},
+	{
+	    header: gettext('Backup Groups'),
+	    dataIndex: 'groups',
+	    renderer: 'render_optional_groups',
+	    width: 80,
+	},
 	{
 	    header: gettext('Schedule'),
 	    dataIndex: 'schedule',
diff --git a/www/window/SyncJobEdit.js b/www/window/SyncJobEdit.js
index 47e65ae3..2399f11f 100644
--- a/www/window/SyncJobEdit.js
+++ b/www/window/SyncJobEdit.js
@@ -199,6 +199,18 @@ Ext.define('PBS.window.SyncJobEdit', {
 	],
 
 	columnB: [
+	    {
+		fieldLabel: gettext('Backup Groups'),
+		xtype: 'displayfield',
+		name: 'groups',
+		renderer: function(value, metadata, record) {
+		    if (!value) return gettext('All');
+		    return Ext.String.htmlEncode(value, metadata, record);
+		},
+		cbind: {
+		    hidden: '{isCreate}',
+		},
+	    },
 	    {
 		fieldLabel: gettext('Comment'),
 		xtype: 'proxmoxtextfield',
-- 
2.30.2

* [pbs-devel] [PATCH proxmox-backup 4/7] remote: add backup group scanning
  2021-07-22 14:35 [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Fabian Grünbichler
                   ` (2 preceding siblings ...)
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 3/7] sync: add group filtering Fabian Grünbichler
@ 2021-07-22 14:35 ` Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 5/7] manager: extend sync/pull completion Fabian Grünbichler
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-07-22 14:35 UTC (permalink / raw)
  To: pbs-devel
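
this exposes the group list of a remote datastore below the existing
remote 'scan' API (presumably ending up at something like
GET /api2/json/config/remote/{name}/scan/{store}/groups, depending on
where SCAN_ROUTER is mounted), mainly so the CLI completion added later
in this series can query it.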

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/api2/config/remote.rs | 69 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 67 insertions(+), 2 deletions(-)

diff --git a/src/api2/config/remote.rs b/src/api2/config/remote.rs
index 24ef8702..9397afee 100644
--- a/src/api2/config/remote.rs
+++ b/src/api2/config/remote.rs
@@ -3,7 +3,8 @@ use serde_json::Value;
 use ::serde::{Deserialize, Serialize};
 
 use proxmox::api::{api, ApiMethod, Router, RpcEnvironment, Permission};
-use proxmox::http_err;
+use proxmox::api::router::SubdirMap;
+use proxmox::{http_err, list_subdirs_api_method, sortable};
 
 use pbs_client::{HttpClient, HttpClientOptions};
 
@@ -374,8 +375,72 @@ pub async fn scan_remote_datastores(name: String) -> Result<Vec<DataStoreListIte
     }
 }
 
+#[api(
+    input: {
+        properties: {
+            name: {
+                schema: REMOTE_ID_SCHEMA,
+            },
+            store: {
+                schema: DATASTORE_SCHEMA,
+            },
+        },
+    },
+    access: {
+        permission: &Permission::Privilege(&["remote", "{name}"], PRIV_REMOTE_AUDIT, false),
+    },
+    returns: {
+        description: "Lists the accessible backup groups in a remote datastore.",
+        type: Array,
+        items: { type: GroupListItem },
+    },
+)]
+/// List groups of a remote.cfg entry's datastore
+pub async fn scan_remote_groups(name: String, store: String) -> Result<Vec<GroupListItem>, Error> {
+    let (remote_config, _digest) = remote::config()?;
+    let remote: remote::Remote = remote_config.lookup("remote", &name)?;
+
+    let map_remote_err = |api_err| {
+        http_err!(INTERNAL_SERVER_ERROR,
+                  "failed to scan remote '{}' - {}",
+                  &name,
+                  api_err)
+    };
+
+    let client = remote_client(remote)
+        .await
+        .map_err(map_remote_err)?;
+    let api_res = client
+        .get(&format!("api2/json/admin/datastore/{}/groups", store), None)
+        .await
+        .map_err(map_remote_err)?;
+    let parse_res = match api_res.get("data") {
+        Some(data) => serde_json::from_value::<Vec<GroupListItem>>(data.to_owned()),
+        None => bail!("remote {} did not return any group list data", &name),
+    };
+
+    match parse_res {
+        Ok(parsed) => Ok(parsed),
+        Err(_) => bail!("Failed to parse remote scan api result."),
+    }
+}
+
+#[sortable]
+const DATASTORE_SCAN_SUBDIRS: SubdirMap = &[
+    (
+        "groups",
+        &Router::new()
+            .get(&API_METHOD_SCAN_REMOTE_GROUPS)
+    ),
+];
+
+const DATASTORE_SCAN_ROUTER: Router = Router::new()
+    .get(&list_subdirs_api_method!(DATASTORE_SCAN_SUBDIRS))
+    .subdirs(DATASTORE_SCAN_SUBDIRS);
+
 const SCAN_ROUTER: Router = Router::new()
-    .get(&API_METHOD_SCAN_REMOTE_DATASTORES);
+    .get(&API_METHOD_SCAN_REMOTE_DATASTORES)
+    .match_all("store", &DATASTORE_SCAN_ROUTER);
 
 const ITEM_ROUTER: Router = Router::new()
     .get(&API_METHOD_READ_REMOTE)
-- 
2.30.2

* [pbs-devel] [PATCH proxmox-backup 5/7] manager: extend sync/pull completion
  2021-07-22 14:35 [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Fabian Grünbichler
                   ` (3 preceding siblings ...)
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 4/7] remote: add backup group scanning Fabian Grünbichler
@ 2021-07-22 14:35 ` Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 6/7] manager: render group filter properly Fabian Grünbichler
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-07-22 14:35 UTC (permalink / raw)
  To: pbs-devel

complete groups by scanning the remote store if available, and query the
sync job config if remote or remote-store is not given on the current
command line (e.g., when updating a job config).

unfortunately, groups already given on the current command line are not
passed to the completion helper, so we can't filter those out.
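
in practice that means something like

    proxmox-backup-manager sync-job update <existing-id> --groups <TAB>

should offer the groups found on the job's configured remote datastore
(assuming the already-typed 'id' ends up in the completion helper's
parameter map, which is what get_remote_store() relies on).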

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/bin/proxmox-backup-manager.rs      | 91 ++++++++++++++++++++++----
 src/bin/proxmox_backup_manager/sync.rs |  2 +
 2 files changed, 81 insertions(+), 12 deletions(-)

diff --git a/src/bin/proxmox-backup-manager.rs b/src/bin/proxmox-backup-manager.rs
index 6844a1ab..1e257886 100644
--- a/src/bin/proxmox-backup-manager.rs
+++ b/src/bin/proxmox-backup-manager.rs
@@ -1,7 +1,7 @@
 use std::collections::HashMap;
 use std::io::{self, Write};
 
-use anyhow::{format_err, Error};
+use anyhow::Error;
 use serde_json::{json, Value};
 
 use proxmox::api::{api, cli::*, RpcEnvironment};
@@ -10,7 +10,7 @@ use pbs_client::{connect_to_localhost, display_task_log, view_task_result};
 use pbs_tools::percent_encoding::percent_encode_component;
 use pbs_tools::json::required_string_param;
 
-use proxmox_backup::config;
+use proxmox_backup::config::{self, sync, sync::SyncJobConfig};
 use proxmox_backup::api2::{self, types::* };
 use proxmox_backup::server::wait_for_local_worker;
 
@@ -396,6 +396,7 @@ fn main() {
                 .completion_cb("local-store", config::datastore::complete_datastore_name)
                 .completion_cb("remote", config::remote::complete_remote_name)
                 .completion_cb("remote-store", complete_remote_datastore_name)
+                .completion_cb("groups", complete_remote_datastore_group)
         )
         .insert(
             "verify",
@@ -418,24 +419,90 @@ fn main() {
    pbs_runtime::main(run_async_cli_command(cmd_def, rpcenv));
 }
 
+fn get_sync_job(id: &String) -> Result<SyncJobConfig, Error> {
+    let (config, _digest) = sync::config()?;
+
+    config.lookup("sync", id)
+}
+
+fn get_remote(param: &HashMap<String, String>) -> Option<String> {
+    param
+        .get("remote")
+        .map(|r| r.to_owned())
+        .or_else(|| {
+            if let Some(id) = param.get("id") {
+                if let Ok(job) = get_sync_job(id) {
+                    return Some(job.remote.clone());
+                }
+            }
+            None
+        })
+}
+
+fn get_remote_store(param: &HashMap<String, String>) -> Option<(String, String)> {
+    let mut job: Option<SyncJobConfig> = None;
+
+    let remote = param
+        .get("remote")
+        .map(|r| r.to_owned())
+        .or_else(|| {
+            if let Some(id) = param.get("id") {
+                job = get_sync_job(id).ok();
+                if let Some(ref job) = job {
+                    return Some(job.remote.clone());
+                }
+            }
+            None
+        });
+
+    if let Some(remote) = remote {
+        let store = param
+            .get("remote-store")
+            .map(|r| r.to_owned())
+            .or_else(|| job.map(|job| job.remote_store.clone()));
+
+        if let Some(store) = store {
+            return Some((remote, store))
+        }
+    }
+
+    None
+}
+
 // shell completion helper
 pub fn complete_remote_datastore_name(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
 
     let mut list = Vec::new();
 
-    let _ = proxmox::try_block!({
-        let remote = param.get("remote").ok_or_else(|| format_err!("no remote"))?;
+    if let Some(remote) = get_remote(param) {
+        if let Ok(data) = pbs_runtime::block_on(async move {
+                crate::api2::config::remote::scan_remote_datastores(remote).await
+            }) {
 
-        let data = pbs_runtime::block_on(async move {
-            crate::api2::config::remote::scan_remote_datastores(remote.clone()).await
-        })?;
-
-        for item in data {
-            list.push(item.store);
+            for item in data {
+                list.push(item.store);
+            }
         }
+    }
+
+    list
+}
+
+// shell completion helper
+pub fn complete_remote_datastore_group(_arg: &str, param: &HashMap<String, String>) -> Vec<String> {
+
+    let mut list = Vec::new();
+
+    if let Some((remote, remote_store)) = get_remote_store(param) {
+        if let Ok(data) = pbs_runtime::block_on(async move {
+            crate::api2::config::remote::scan_remote_groups(remote.clone(), remote_store.clone()).await
+        }) {
 
-        Ok(())
-    }).map_err(|_err: Error| { /* ignore */ });
+            for item in data {
+                list.push(format!("{}/{}", item.backup_type, item.backup_id));
+            }
+        }
+    }
 
     list
 }
diff --git a/src/bin/proxmox_backup_manager/sync.rs b/src/bin/proxmox_backup_manager/sync.rs
index f05f0c8d..0c9bac49 100644
--- a/src/bin/proxmox_backup_manager/sync.rs
+++ b/src/bin/proxmox_backup_manager/sync.rs
@@ -87,6 +87,7 @@ pub fn sync_job_commands() -> CommandLineInterface {
                 .completion_cb("store", config::datastore::complete_datastore_name)
                 .completion_cb("remote", config::remote::complete_remote_name)
                 .completion_cb("remote-store", crate::complete_remote_datastore_name)
+                .completion_cb("groups", crate::complete_remote_datastore_group)
         )
         .insert("update",
                 CliCommand::new(&api2::config::sync::API_METHOD_UPDATE_SYNC_JOB)
@@ -95,6 +96,7 @@ pub fn sync_job_commands() -> CommandLineInterface {
                 .completion_cb("schedule", config::datastore::complete_calendar_event)
                 .completion_cb("store", config::datastore::complete_datastore_name)
                 .completion_cb("remote-store", crate::complete_remote_datastore_name)
+                .completion_cb("groups", crate::complete_remote_datastore_group)
         )
         .insert("remove",
                 CliCommand::new(&api2::config::sync::API_METHOD_DELETE_SYNC_JOB)
-- 
2.30.2

* [pbs-devel] [PATCH proxmox-backup 6/7] manager: render group filter properly
  2021-07-22 14:35 [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Fabian Grünbichler
                   ` (4 preceding siblings ...)
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 5/7] manager: extend sync/pull completion Fabian Grünbichler
@ 2021-07-22 14:35 ` Fabian Grünbichler
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 7/7] manager: don't complete sync job ID on creation Fabian Grünbichler
  2021-07-26  8:01 ` [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Dietmar Maurer
  7 siblings, 0 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-07-22 14:35 UTC (permalink / raw)
  To: pbs-devel
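
with this, the 'groups' property in 'sync-job list' and 'sync-job show' is
rendered as a comma-separated list (e.g. "vm/100, ct/200"), or as "all"
when no filter is configured.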

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/bin/proxmox_backup_manager/sync.rs | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/src/bin/proxmox_backup_manager/sync.rs b/src/bin/proxmox_backup_manager/sync.rs
index 0c9bac49..a2c36854 100644
--- a/src/bin/proxmox_backup_manager/sync.rs
+++ b/src/bin/proxmox_backup_manager/sync.rs
@@ -6,6 +6,18 @@ use proxmox::api::{api, cli::*, RpcEnvironment, ApiHandler};
 use proxmox_backup::config;
 use proxmox_backup::api2::{self, types::* };
 
+fn render_groups(value: &Value, _record: &Value) -> Result<String, Error> {
+    if let Some(groups) = value.as_array() {
+        let groups:Vec<&str> = groups
+            .iter()
+            .filter_map(Value::as_str)
+            .collect();
+        Ok(groups.join(", "))
+    } else {
+        Ok(String::from("all"))
+    }
+}
+
 #[api(
     input: {
         properties: {
@@ -33,6 +45,7 @@ fn list_sync_jobs(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value
         .column(ColumnConfig::new("remote"))
         .column(ColumnConfig::new("remote-store"))
         .column(ColumnConfig::new("schedule"))
+        .column(ColumnConfig::new("groups").renderer(render_groups))
         .column(ColumnConfig::new("comment"));
 
     format_and_print_result_full(&mut data, &info.returns, &output_format, &options);
@@ -64,6 +77,12 @@ fn show_sync_job(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Value,
         _ => unreachable!(),
     };
 
+    if let Some(groups) = data.get_mut("groups") {
+        if let Ok(rendered) = render_groups(groups, groups) {
+            *groups = Value::String(rendered);
+        }
+    }
+
     let options = default_table_format_options();
     format_and_print_result_full(&mut data, &info.returns, &output_format, &options);
 
-- 
2.30.2

* [pbs-devel] [PATCH proxmox-backup 7/7] manager: don't complete sync job ID on creation
  2021-07-22 14:35 [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Fabian Grünbichler
                   ` (5 preceding siblings ...)
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 6/7] manager: render group filter properly Fabian Grünbichler
@ 2021-07-22 14:35 ` Fabian Grünbichler
  2021-07-26  8:01 ` [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Dietmar Maurer
  7 siblings, 0 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-07-22 14:35 UTC (permalink / raw)
  To: pbs-devel

that does not make sense, since re-using an existing one leads to an
error.

Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
 src/bin/proxmox_backup_manager/sync.rs | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/bin/proxmox_backup_manager/sync.rs b/src/bin/proxmox_backup_manager/sync.rs
index a2c36854..37eb8ba5 100644
--- a/src/bin/proxmox_backup_manager/sync.rs
+++ b/src/bin/proxmox_backup_manager/sync.rs
@@ -101,7 +101,6 @@ pub fn sync_job_commands() -> CommandLineInterface {
         .insert("create",
                 CliCommand::new(&api2::config::sync::API_METHOD_CREATE_SYNC_JOB)
                 .arg_param(&["id"])
-                .completion_cb("id", config::sync::complete_sync_job_id)
                 .completion_cb("schedule", config::datastore::complete_calendar_event)
                 .completion_cb("store", config::datastore::complete_datastore_name)
                 .completion_cb("remote", config::remote::complete_remote_name)
-- 
2.30.2

* Re: [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter
  2021-07-22 14:35 [pbs-devel] [PATCH proxmox-backup 0/7] pull/sync group filter Fabian Grünbichler
                   ` (6 preceding siblings ...)
  2021-07-22 14:35 ` [pbs-devel] [PATCH proxmox-backup 7/7] manager: don't complete sync job ID on creation Fabian Grünbichler
@ 2021-07-26  8:01 ` Dietmar Maurer
  7 siblings, 0 replies; 11+ messages in thread
From: Dietmar Maurer @ 2021-07-26  8:01 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Fabian Grünbichler

As discussed offline, we want to make the filter more flexible, e.g.:

-group vm/100 # this is what we have now

-group regex:vm/.* # a regex

-group type:ct # all containers

-exclude-groups true # exclude specified groups instead of include

...

On 7/22/21 4:35 PM, Fabian Grünbichler wrote:
> this has been requested a few times on the forum, e.g. for a special
> sync job for the most important groups, or seeding of a new datastore
> with a partial view of an existing one.
>
> while it's possible to achieve similar results with hacky workarounds
> based on group ownership and reduced "visibility", implementing it
> properly is not that complex.
>
> possible future additions in a similar fashion:
> - only sync/pull encrypted snapshots (less trusted off-site location)
> - only sync/pull latest snapshot in each group (fast seeding of new
>    datastore)
>
> Fabian Grünbichler (7):
>    api-types: add schema for backup group
>    pull: allow pulling groups selectively
>    sync: add group filtering
>    remote: add backup group scanning
>    manager: extend sync/pull completion
>    manager: render group filter properly
>    manager: don't complete sync job ID on creation
>
>   pbs-api-types/src/lib.rs               |   4 +
>   src/api2/config/remote.rs              |  69 +++++++++++++++-
>   src/api2/config/sync.rs                |  22 ++++++
>   src/api2/pull.rs                       |  36 ++++++++-
>   src/bin/proxmox-backup-manager.rs      | 105 ++++++++++++++++++++++---
>   src/bin/proxmox_backup_manager/sync.rs |  22 +++++-
>   src/config/sync.rs                     |  10 +++
>   src/server/pull.rs                     |  24 +++++-
>   www/config/SyncView.js                 |  13 ++-
>   www/window/SyncJobEdit.js              |  12 +++
>   10 files changed, 295 insertions(+), 22 deletions(-)
>

* Re: [pbs-devel] [PATCH proxmox-backup 7/7] manager: don't complete sync job ID on creation
  2021-07-22 15:46 [pbs-devel] [PATCH proxmox-backup 7/7] manager: don't complete sync job ID on creation Dietmar Maurer
@ 2021-07-23  6:17 ` Fabian Grünbichler
  0 siblings, 0 replies; 11+ messages in thread
From: Fabian Grünbichler @ 2021-07-23  6:17 UTC (permalink / raw)
  To: Dietmar Maurer, Proxmox Backup Server development discussion

On July 22, 2021 5:46 pm, Dietmar Maurer wrote:
>> On 07/22/2021 4:35 PM Fabian Grünbichler <f.gruenbichler@proxmox.com> wrote:
>> 
>>  
>> that does not make sense, 
> 
> makes sense to me,

ack, maybe that was a bit strongly worded ;)

>> since re-using an existing one leads to an
>> error.
> 
> because it really helps to see what already exists
> 

we don't have those semantics for other completions though, and for sync
jobs most users will have auto-generated names if they created their
jobs with the GUI. creating a datastore does not complete existing
names, updating/showing/removing/using does. creating a new remote does
not complete existing remote names, updating/showing/removing/using one
does. same for users and API tokens - the only other exception that
completes on creation is adding a new network interface.

but I don't have strong feelings about this (I literally stumbled upon
this when doing the other patches) - other than that we should be
consistent. so if we agree that we want to complete existing identifiers
when adding new entities, I'd send a separate series adding that to the
others where it's missing.

* Re: [pbs-devel] [PATCH proxmox-backup 7/7] manager: don't complete sync job ID on creation
@ 2021-07-22 15:46 Dietmar Maurer
  2021-07-23  6:17 ` Fabian Grünbichler
  0 siblings, 1 reply; 11+ messages in thread
From: Dietmar Maurer @ 2021-07-22 15:46 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Fabian Grünbichler

> On 07/22/2021 4:35 PM Fabian Grünbichler <f.gruenbichler@proxmox.com> wrote:
> 
>  
> that does not make sense, 

makes sense to me,

> since re-using an existing one leads to an
> error.

because it really helps to see what already exists
