* [pbs-devel] [PATCH proxmox{, -backup} 0/7] introduce streaming content api call
  7 replies; 21+ messages in thread

From: Dominik Csapak @ 2025-10-03  8:50 UTC
To: pbs-devel

This series introduces a new api call `content` on a datastore that
returns the whole content recursively, including namespaces, groups and
snapshots.

It does that using async-stream + proxmox-router::Stream, so a client
can receive the individual records without having to wait for the api
call to finish. This is intended to improve UI responsiveness, since we
can show entries while the rest is still loading, which can be very
useful for large datastores.

This series contains some refactorings that were necessary, but also
includes one performance improvement for the `groups` api call
(stumbled upon it during testing).

proxmox:

Dominik Csapak (1):
  pbs-api-types: add api types for streaming content api call

 pbs-api-types/Cargo.toml       |  1 +
 pbs-api-types/src/datastore.rs | 99 ++++++++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)

proxmox-backup:

Dominik Csapak (6):
  backup: hierarchy: add new can_access_any_namespace_in_range helper
  backup: hierarchy: reuse 'NS_PRIVS_OK' for namespace helper
  api: admin: datastore: refactor BackupGroup to GroupListItem conversion
  api: admin: datastore: factor out 'get_group_owner'
  api: admin: datastore: optimize `groups` api call
  api: admin: datastore: implement streaming content api call

 Cargo.toml                  |   2 +
 src/api2/admin/datastore.rs | 340 +++++++++++++++++++++++++++---------
 src/backup/hierarchy.rs     |  31 ++--
 3 files changed, 281 insertions(+), 92 deletions(-)

Summary over all repositories:
  5 files changed, 381 insertions(+), 92 deletions(-)

-- 
Generated by git-murpp 0.8.1

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
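The core idea of the cover letter — emit each record as soon as it is produced instead of buffering the full listing — can be sketched without proxmox-router or async-stream. The following is a hypothetical stand-in (the `Record` type and `stream_content` walk are invented for illustration; the real series streams `ContentListItem`s over an HTTP response): a producer walks the hierarchy and sends records through a channel while the consumer renders them as they arrive.

```rust
use std::sync::mpsc;
use std::thread;

// Simplified stand-in for the real api types (hypothetical names).
#[derive(Debug, PartialEq)]
enum Record {
    Namespace(String),
    Group(String),
    Snapshot(String),
}

// Producer: walks a (mock) datastore hierarchy and emits each record as
// soon as it is found, instead of collecting everything into one Vec first.
fn stream_content(tx: mpsc::Sender<Record>) {
    let namespaces = ["root", "root/vm"];
    for ns in namespaces {
        // Each send is visible to the consumer immediately.
        tx.send(Record::Namespace(ns.to_string())).unwrap();
        tx.send(Record::Group(format!("{ns}:vm/100"))).unwrap();
        tx.send(Record::Snapshot(format!("{ns}:vm/100/2025-10-03"))).unwrap();
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // In the real api call the producer runs concurrently with the response
    // being written out; a thread models that here.
    let producer = thread::spawn(move || stream_content(tx));
    let mut count = 0;
    for record in rx {
        // A UI could render each record here while later ones still load.
        println!("{record:?}");
        count += 1;
    }
    producer.join().unwrap();
    assert_eq!(count, 6);
}
```

The design point is the same as in the series: the consumer never waits for the whole traversal, which is what makes large datastores feel responsive.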
* [pbs-devel] [PATCH proxmox 1/1] pbs-api-types: add api types for streaming content api call

From: Dominik Csapak @ 2025-10-03  8:50 UTC
To: pbs-devel

The main points are the two new structs:
* ContentType: api parameter to select a specific type of content to
  be returned
* ContentListItem, which contains either
  - a namespace
  - a snapshot (including namespace)
  - a backup group (including namespace)

Since the enum's variants are newtypes, we need to annotate the 'serde
tag' and give them an 'id-property' and 'id-schema' even though we
don't need them, since they're only relevant for saving these on disk
with section config.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 pbs-api-types/Cargo.toml       |  1 +
 pbs-api-types/src/datastore.rs | 99 ++++++++++++++++++++++++++++++++++
 2 files changed, 100 insertions(+)

diff --git a/pbs-api-types/Cargo.toml b/pbs-api-types/Cargo.toml
index 358536d9..b3dee41d 100644
--- a/pbs-api-types/Cargo.toml
+++ b/pbs-api-types/Cargo.toml
@@ -23,5 +23,6 @@ proxmox-lang.workspace=true
 proxmox-s3-client = { workspace = true, features = [ "api-types" ] }
 proxmox-schema = { workspace = true, features = [ "api-macro" ] }
 proxmox-serde.workspace = true
+proxmox-section-config.workspace = true
 proxmox-time.workspace = true
 proxmox-uuid = { workspace = true, features = [ "serde" ] }
diff --git a/pbs-api-types/src/datastore.rs b/pbs-api-types/src/datastore.rs
index fe73cbc4..f2999007 100644
--- a/pbs-api-types/src/datastore.rs
+++ b/pbs-api-types/src/datastore.rs
@@ -1479,6 +1479,96 @@ pub struct NamespaceListItem {
     pub comment: Option<String>,
 }
 
+#[api]
+/// A type of content of a datastore
+#[derive(Serialize, Deserialize)]
+pub enum ContentType {
+    /// Namespace
+    Namespace,
+    /// Backup Group
+    Group,
+    /// Backup Snapshot
+    Snapshot,
+}
+
+#[api(
+    properties: {
+        ns: {
+            type: BackupNamespace,
+        },
+        snapshot: {
+            type: SnapshotListItem,
+            flatten: true,
+        },
+    },
+)]
+#[derive(Serialize, Deserialize)]
+/// Represents a snapshot in a datastore with namespace information
+pub struct SnapshotContentItem {
+    /// The namespace of the snapshot
+    ns: BackupNamespace,
+    #[serde(flatten)]
+    /// The snapshot information
+    snapshot: SnapshotListItem,
+}
+
+#[api(
+    properties: {
+        ns: {
+            type: BackupNamespace,
+        },
+        group: {
+            type: GroupListItem,
+            flatten: true,
+        },
+    },
+)]
+#[derive(Serialize, Deserialize)]
+/// Represents a snapshot in a datastore with namespace information
+pub struct GroupContentItem {
+    /// The namespace of the snapshot
+    ns: BackupNamespace,
+    #[serde(flatten)]
+    /// The backup group information
+    group: GroupListItem,
+}
+
+#[api(
+    "id-property": "id",
+    "id-schema": {
+        type: String,
+        description: "ID",
+    },
+)]
+#[derive(Serialize, Deserialize)]
+#[serde(rename_all = "kebab-case")]
+#[serde(tag = "type")]
+pub enum ContentListItem {
+    NameSpace(NamespaceListItem),
+    Group(GroupContentItem),
+    Snapshot(SnapshotContentItem),
+}
+
+impl From<NamespaceListItem> for ContentListItem {
+    fn from(value: NamespaceListItem) -> Self {
+        ContentListItem::NameSpace(value)
+    }
+}
+
+impl From<(BackupNamespace, GroupListItem)> for ContentListItem {
+    fn from(value: (BackupNamespace, GroupListItem)) -> Self {
+        let (ns, group) = value;
+        ContentListItem::Group(GroupContentItem { ns, group })
+    }
+}
+
+impl From<(BackupNamespace, SnapshotListItem)> for ContentListItem {
+    fn from(value: (BackupNamespace, SnapshotListItem)) -> Self {
+        let (ns, snapshot) = value;
+        ContentListItem::Snapshot(SnapshotContentItem { ns, snapshot })
+    }
+}
+
 #[api(
     properties: {
         "backup": { type: BackupDir },
@@ -1759,6 +1849,15 @@ pub const ADMIN_DATASTORE_LIST_NAMESPACE_RETURN_TYPE: ReturnType = ReturnType {
     .schema(),
 };
 
+pub const ADMIN_DATASTORE_LIST_CONTENT_RETURN_TYPE: ReturnType = ReturnType {
+    optional: false,
+    schema: &ArraySchema::new(
+        "Returns the list of namespaces, backup groups and snapshots of a datastore.",
+        &ContentListItem::API_SCHEMA,
+    )
+    .schema(),
+};
+
 pub const ADMIN_DATASTORE_PRUNE_RETURN_TYPE: ReturnType = ReturnType {
     optional: false,
     schema: &ArraySchema::new(
-- 
2.47.3
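The `From` impls at the end of the patch exist so that producers of namespaces, groups and snapshots can all push into one stream of `ContentListItem`s with a plain `.into()`. A simplified sketch of that pattern (the types here are cut-down stand-ins — the real ones carry many more fields plus serde/schema annotations, which are omitted so the sketch is self-contained):

```rust
// Cut-down stand-ins for the api types in the patch.
#[derive(Debug, Clone, PartialEq)]
struct BackupNamespace(String);
#[derive(Debug, Clone, PartialEq)]
struct NamespaceListItem { ns: BackupNamespace }
#[derive(Debug, Clone, PartialEq)]
struct GroupListItem { backup: String }

// One enum over the different record kinds, as in ContentListItem.
#[derive(Debug, PartialEq)]
enum ContentListItem {
    Namespace(NamespaceListItem),
    Group { ns: BackupNamespace, group: GroupListItem },
}

// Mirrors the patch: a namespace converts directly...
impl From<NamespaceListItem> for ContentListItem {
    fn from(value: NamespaceListItem) -> Self {
        ContentListItem::Namespace(value)
    }
}

// ...while a group is paired with the namespace it was found in.
impl From<(BackupNamespace, GroupListItem)> for ContentListItem {
    fn from((ns, group): (BackupNamespace, GroupListItem)) -> Self {
        ContentListItem::Group { ns, group }
    }
}

fn main() {
    let ns = BackupNamespace("root".to_string());
    // Call sites stay terse: tuple in, unified record out.
    let item: ContentListItem =
        (ns.clone(), GroupListItem { backup: "vm/100".to_string() }).into();
    assert_eq!(
        item,
        ContentListItem::Group { ns, group: GroupListItem { backup: "vm/100".to_string() } }
    );
    println!("{item:?}");
}
```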
* Re: [pbs-devel] [PATCH proxmox 1/1] pbs-api-types: add api types for streaming content api call

From: Wolfgang Bumiller @ 2025-10-07  8:59 UTC
To: Dominik Csapak; +Cc: pbs-devel

On Fri, Oct 03, 2025 at 10:50:33AM +0200, Dominik Csapak wrote:
> The main points are the two new structs:
> * ContentType: api parameter to select a specific type of content to
>   be returned
> * ContentListItem, which contains either
>   - a namespace
>   - a snapshot (including namespace)
>   - a backup group (including namespace)

Random thought: with an "item" in a list of "Things" not being a
"Thing", we still call it a "ThingListItem" 🙄.
Can we rename SnapshotListItem to just Snapshot, GroupListItem to Group
and NamespaceListItem to Namespace?

> Since the enum's variants are newtypes, we need to annotate the 'serde
> tag' and give them an 'id-property' and 'id-schema' even though we
> don't need them, since they're only relevant for saving these on disk
> with section config.

Need to update the api macro to make the section-config part opt-in
(with a deprecation period at first to not break stuff...)

> [...]
> +#[api]
> +/// A type of content of a datastore
> +#[derive(Serialize, Deserialize)]

Should this have a `#[serde(rename_all = "lowercase"/"kebab-case")]`?

> +pub enum ContentType {
> +    /// Namespace
> +    Namespace,
> +    /// Backup Group
> +    Group,
> +    /// Backup Snapshot
> +    Snapshot,
> +}
> [...]
> +#[derive(Serialize, Deserialize)]
> +/// Represents a snapshot in a datastore with namespace information
> +pub struct SnapshotContentItem {

We go from FooListItem to FooContentItem to mean that a namespace is
included.

NamespacedSnapshot?
SnapshotAndNamespace?

> [...]
> +#[derive(Serialize, Deserialize)]
> +/// Represents a snapshot in a datastore with namespace information
> +pub struct GroupContentItem {

NamespacedGroup?
GroupAndNamespace?

> [...]
> +#[derive(Serialize, Deserialize)]
> +#[serde(rename_all = "kebab-case")]
> +#[serde(tag = "type")]
> +pub enum ContentListItem {

Content?

> [...]
* Re: [pbs-devel] [PATCH proxmox 1/1] pbs-api-types: add api types for streaming content api call

From: Dominik Csapak @ 2025-10-08  6:41 UTC
To: Wolfgang Bumiller; +Cc: pbs-devel

On 10/7/25 10:59 AM, Wolfgang Bumiller wrote:
> On Fri, Oct 03, 2025 at 10:50:33AM +0200, Dominik Csapak wrote:
>> The main points are the two new structs:
>> * ContentType: api parameter to select a specific type of content to
>>   be returned
>> * ContentListItem, which contains either
>>   - a namespace
>>   - a snapshot (including namespace)
>>   - a backup group (including namespace)
>
> Random thought: with an "item" in a list of "Things" not being a
> "Thing", we still call it a "ThingListItem" 🙄.
> Can we rename SnapshotListItem to just Snapshot, GroupListItem to Group
> and NamespaceListItem to Namespace?

I tried to keep the current naming scheme ;)
Since I'm not the best at naming, I'll take every suggestion in that
regard ^^

>> Since the enum's variants are newtypes, we need to annotate the 'serde
>> tag' and give them an 'id-property' and 'id-schema' even though we
>> don't need them, since they're only relevant for saving these on disk
>> with section config.
>
> Need to update the api macro to make the section-config part opt-in
> (with a deprecation period at first to not break stuff...)

Yep, but as long as we're not actively using it for these enums, I
think there's no harm in doing it this way for now.

> [...]
> Should this have a `#[serde(rename_all = "lowercase"/"kebab-case")]`?

yep

> [...]
> We go from FooListItem to FooContentItem to mean that a namespace is
> included.
>
> NamespacedSnapshot?
> SnapshotAndNamespace?

IMHO both are a bit unwieldy, but '*ContentItem' is not better...
I'd probably go with SnapshotWithNamespace (IMO 'with' suggests a
stronger correlation than 'and', but no strong opinion here)?

> [...]
> NamespacedGroup?
> GroupAndNamespace?

Same as above.

> [...]
> +#[serde(rename_all = "kebab-case")]
> +#[serde(tag = "type")]
> +pub enum ContentListItem {
>
> Content?

'DatastoreContent'? I find Content alone a bit too generic, but I
didn't want to do 'DatastoreContentListItem' ^^

> [...]
* [pbs-devel] [PATCH proxmox-backup 1/6] backup: hierarchy: add new can_access_any_namespace_in_range helper

From: Dominik Csapak @ 2025-10-03  8:50 UTC
To: pbs-devel

Sometimes we need to check the permissions for a range starting from a
given namespace with a certain depth.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/backup/hierarchy.rs | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/src/backup/hierarchy.rs b/src/backup/hierarchy.rs
index 8dd71fcf7..438bc3ee3 100644
--- a/src/backup/hierarchy.rs
+++ b/src/backup/hierarchy.rs
@@ -68,19 +68,23 @@ pub fn check_ns_privs_full(
     );
 }
 
-pub fn can_access_any_namespace(
+/// Checks if the given user has read/access rights on any namespace on the given datastore,
+/// beginning with `start_ns` up to `max_depth` below.
+pub fn can_access_any_namespace_in_range(
     store: Arc<DataStore>,
     auth_id: &Authid,
     user_info: &CachedUserInfo,
+    start_ns: Option<BackupNamespace>,
+    max_depth: Option<usize>,
 ) -> bool {
+    let ns = start_ns.unwrap_or_default();
     // NOTE: traversing the datastore could be avoided if we had an "ACL tree: is there any priv
     // below /datastore/{store}" helper
-    let mut iter =
-        if let Ok(iter) = store.recursive_iter_backup_ns_ok(BackupNamespace::root(), None) {
-            iter
-        } else {
-            return false;
-        };
+    let mut iter = if let Ok(iter) = store.recursive_iter_backup_ns_ok(ns, max_depth) {
+        iter
+    } else {
+        return false;
+    };
     let wanted =
         PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP;
     let name = store.name();
@@ -90,6 +94,15 @@ pub fn can_access_any_namespace(
     })
 }
 
+/// Checks if the given user has read/access rights on any namespace on given datastore
+pub fn can_access_any_namespace(
+    store: Arc<DataStore>,
+    auth_id: &Authid,
+    user_info: &CachedUserInfo,
+) -> bool {
+    can_access_any_namespace_in_range(store, auth_id, user_info, None, None)
+}
+
 /// A privilege aware iterator for all backup groups in all Namespaces below an anchor namespace,
 /// most often that will be the `BackupNamespace::root()` one.
 ///
-- 
2.47.3
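The helper's core is a short-circuiting `any` over a depth-limited namespace walk, checking a bitmask of wanted privileges. A self-contained model of that logic (the privilege bit values, the flat namespace list and `lookup_privs` mapping are all illustrative stand-ins, not the real PBS ACL machinery):

```rust
// Privilege bits as in PBS (the bit positions here are illustrative).
const PRIV_DATASTORE_AUDIT: u64 = 1 << 0;
const PRIV_DATASTORE_MODIFY: u64 = 1 << 1;
const PRIV_DATASTORE_READ: u64 = 1 << 2;
const PRIV_DATASTORE_BACKUP: u64 = 1 << 3;

// Stand-in for CachedUserInfo::lookup_privs: maps a namespace path to privs.
fn lookup_privs(ns: &str) -> u64 {
    match ns {
        "root/vm/dev" => PRIV_DATASTORE_BACKUP,
        _ => 0,
    }
}

// Core of can_access_any_namespace_in_range: walk the namespaces below a
// parent (depth-limited) and short-circuit on the first one carrying any
// of the wanted privileges.
fn can_access_any_namespace_below(namespaces: &[&str], parent: &str, max_depth: usize) -> bool {
    let wanted =
        PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP;
    namespaces
        .iter()
        .filter(|ns| ns.starts_with(parent))
        // Depth below the parent, approximated by counting path separators.
        .filter(|ns| ns.matches('/').count() - parent.matches('/').count() <= max_depth)
        .any(|ns| lookup_privs(ns) & wanted != 0)
}

fn main() {
    let all = ["root", "root/vm", "root/vm/dev", "root/ct"];
    // "root/vm/dev" is 1 level below "root/vm" and carries a priv.
    assert!(can_access_any_namespace_below(&all, "root/vm", 1));
    // Nothing below "root/ct" carries any priv.
    assert!(!can_access_any_namespace_below(&all, "root/ct", 1));
    println!("checks passed");
}
```

Because `any` short-circuits, the walk stops at the first accessible namespace — the same reason the real helper avoids enumerating everything when the user has privileges near the top.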
* Re: [pbs-devel] [PATCH proxmox-backup 1/6] backup: hierarchy: add new can_access_any_namespace_in_range helper

From: Thomas Lamprecht @ 2025-10-03  9:52 UTC
To: Proxmox Backup Server development discussion, Dominik Csapak

On 03.10.25 10:50, Dominik Csapak wrote:
> Sometimes we need to check the permissions for a range starting from a
> given namespace with a certain depth.
>
> [...]
> -pub fn can_access_any_namespace(
> +/// Checks if the given user has read/access rights on any namespace on the given datastore,
> +/// beginning with `start_ns` up to `max_depth` below.
> +pub fn can_access_any_namespace_in_range(

I would interpret a range as being over a linear list, not a tree; the
"below" you use in the doc comment is already a much better fit, like:

can_access_any_namespace_below

>      store: Arc<DataStore>,
>      auth_id: &Authid,
>      user_info: &CachedUserInfo,
> +    start_ns: Option<BackupNamespace>,

nit: "start" is IMO slightly confusing for the tree-like nature of
namespaces; maybe parent_ns would be better suited?

> +    max_depth: Option<usize>,
>  ) -> bool {
> +    let ns = start_ns.unwrap_or_default();
>      // NOTE: traversing the datastore could be avoided if we had an "ACL tree: is there any priv
>      // below /datastore/{store}" helper
> -    let mut iter =
> -        if let Ok(iter) = store.recursive_iter_backup_ns_ok(BackupNamespace::root(), None) {
> -            iter
> -        } else {
> -            return false;
> -        };
> +    let mut iter = if let Ok(iter) = store.recursive_iter_backup_ns_ok(ns, max_depth) {
> +        iter
> +    } else {
> +        return false;
> +    };

This could use let-else, e.g. something like (untested):

let Ok(mut iter) = store.recursive_iter_backup_ns_ok(ns, max_depth) else {
    return false;
};

> [...]
* Re: [pbs-devel] [PATCH proxmox-backup 1/6] backup: hierarchy: add new can_access_any_namespace_in_range helper

From: Dominik Csapak @ 2025-10-03 10:10 UTC
To: Thomas Lamprecht, Proxmox Backup Server development discussion

On 10/3/25 11:52 AM, Thomas Lamprecht wrote:
> I would interpret a range as being over a linear list, not a tree; the
> "below" you use in the doc comment is already a much better fit, like:
>
> can_access_any_namespace_below

Actually what I had at first ^^
I opted for "range" because of the max-depth limiting, but you're
right, a range doesn't make much sense in a tree.

> nit: "start" is IMO slightly confusing for the tree-like nature of
> namespaces; maybe parent_ns would be better suited?

sounds good

> [...]
> This could use let-else, e.g. something like (untested):
>
> let Ok(mut iter) = store.recursive_iter_backup_ns_ok(ns, max_depth) else {
>     return false;
> };

Can do; I wanted to stay as close to the original code as possible so
it's easier to see that the code only moved.

Should I do that cleanup as an extra commit, or should I include it in
this one?
* Re: [pbs-devel] [PATCH proxmox-backup 1/6] backup: hierarchy: add new can_access_any_namespace_in_range helper

From: Thomas Lamprecht @ 2025-10-03 10:21 UTC
To: Dominik Csapak, Proxmox Backup Server development discussion

On 03.10.25 12:10, Dominik Csapak wrote:
>>> +    let mut iter = if let Ok(iter) = store.recursive_iter_backup_ns_ok(ns, max_depth) {
>>> +        iter
>>> +    } else {
>>> +        return false;
>>> +    };
>>
>> This could use let-else, e.g. something like (untested):
>>
>> let Ok(mut iter) = store.recursive_iter_backup_ns_ok(ns, max_depth) else {
>>     return false;
>> };
>
> Can do; I wanted to stay as close to the original code as possible so
> it's easier to see that the code only moved.
>
> Should I do that cleanup as an extra commit, or should I include it in
> this one?

Both are fine; personally I think it would be OK to change it in one
go, as it's not a complex transformation and `git show --color-words=.`
(or any advanced pager) will show that the expression returning the
iter stayed the same.
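The let-else refactor discussed above is purely syntactic. A self-contained sketch of the two equivalent forms (with a hypothetical `recursive_iter` standing in for `recursive_iter_backup_ns_ok`, which needs a real datastore):

```rust
// Stand-in for recursive_iter_backup_ns_ok: building the iterator may fail.
fn recursive_iter(ok: bool) -> Result<std::vec::IntoIter<&'static str>, ()> {
    if ok {
        Ok(vec!["root", "root/vm"].into_iter())
    } else {
        Err(())
    }
}

// if-let / else form, as in the original patch.
fn any_ns_if_let(ok: bool) -> bool {
    let mut iter = if let Ok(iter) = recursive_iter(ok) {
        iter
    } else {
        return false;
    };
    iter.any(|ns| ns.starts_with("root"))
}

// let-else form, as suggested in the review: the happy path stays
// unindented and the early return sits in the diverging else branch.
fn any_ns_let_else(ok: bool) -> bool {
    let Ok(mut iter) = recursive_iter(ok) else {
        return false;
    };
    iter.any(|ns| ns.starts_with("root"))
}

fn main() {
    // Both forms behave identically on success and on failure.
    assert_eq!(any_ns_if_let(true), any_ns_let_else(true));
    assert_eq!(any_ns_if_let(false), any_ns_let_else(false));
    println!("equivalent");
}
```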
* [pbs-devel] [PATCH proxmox-backup 2/6] backup: hierarchy: reuse 'NS_PRIVS_OK' for namespace helper

From: Dominik Csapak @ 2025-10-03  8:50 UTC
To: pbs-devel

Aside from the order of the privileges, it's identical to what we
already used here, so simply reuse it.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/backup/hierarchy.rs | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/src/backup/hierarchy.rs b/src/backup/hierarchy.rs
index 438bc3ee3..eec9d744b 100644
--- a/src/backup/hierarchy.rs
+++ b/src/backup/hierarchy.rs
@@ -85,12 +85,10 @@ pub fn can_access_any_namespace_in_range(
     } else {
         return false;
     };
-    let wanted =
-        PRIV_DATASTORE_AUDIT | PRIV_DATASTORE_MODIFY | PRIV_DATASTORE_READ | PRIV_DATASTORE_BACKUP;
     let name = store.name();
     iter.any(|ns| -> bool {
         let user_privs = user_info.lookup_privs(auth_id, &["datastore", name, &ns.to_string()]);
-        user_privs & wanted != 0
+        user_privs & NS_PRIVS_OK != 0
     })
 }
-- 
2.47.3
* [pbs-devel] [PATCH proxmox-backup 3/6] api: admin: datastore: refactor BackupGroup to GroupListItem conversion

From: Dominik Csapak @ 2025-10-03  8:50 UTC
To: pbs-devel

We will reuse this later. No functional change intended.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/api2/admin/datastore.rs | 107 +++++++++++++++++++++---------------
 1 file changed, 62 insertions(+), 45 deletions(-)

diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index a098aca8e..26799de6e 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -142,57 +142,74 @@ pub fn list_groups(
         .try_fold(Vec::new(), |mut group_info, group| {
             let group = group?;
 
-            let owner = match datastore.get_owner(&ns, group.as_ref()) {
-                Ok(auth_id) => auth_id,
-                Err(err) => {
-                    eprintln!(
-                        "Failed to get owner of group '{}' in {} - {}",
-                        group.group(),
-                        print_store_and_ns(&store, &ns),
-                        err
-                    );
-                    return Ok(group_info);
-                }
-            };
-            if !list_all && check_backup_owner(&owner, &auth_id).is_err() {
-                return Ok(group_info);
+            let item =
+                backup_group_to_group_list_item(datastore.clone(), group, &ns, &auth_id, list_all);
+
+            if let Some(item) = item {
+                group_info.push(item);
             }
 
-            let snapshots = match group.list_backups() {
-                Ok(snapshots) => snapshots,
-                Err(_) => return Ok(group_info),
-            };
+            Ok(group_info)
+        })
+}
 
-            let backup_count: u64 = snapshots.len() as u64;
-            if backup_count == 0 {
-                return Ok(group_info);
-            }
+fn backup_group_to_group_list_item(
+    datastore: Arc<DataStore>,
+    group: pbs_datastore::BackupGroup,
+    ns: &BackupNamespace,
+    auth_id: &Authid,
+    list_all: bool,
+) -> Option<GroupListItem> {
+    let owner = match datastore.get_owner(ns, group.as_ref()) {
+        Ok(auth_id) => auth_id,
+        Err(err) => {
+            eprintln!(
+                "Failed to get owner of group '{}' in {} - {}",
+                group.group(),
+                print_store_and_ns(datastore.name(), ns),
+                err
+            );
+            return None;
+        }
+    };
+    if !list_all && check_backup_owner(&owner, auth_id).is_err() {
+        return None;
+    }
 
-            let last_backup = snapshots
-                .iter()
-                .fold(&snapshots[0], |a, b| {
-                    if a.is_finished() && a.backup_dir.backup_time() > b.backup_dir.backup_time() {
-                        a
-                    } else {
-                        b
-                    }
-                })
-                .to_owned();
-
-            let notes_path = datastore.group_notes_path(&ns, group.as_ref());
-            let comment = file_read_firstline(notes_path).ok();
-
-            group_info.push(GroupListItem {
-                backup: group.into(),
-                last_backup: last_backup.backup_dir.backup_time(),
-                owner: Some(owner),
-                backup_count,
-                files: last_backup.files,
-                comment,
-            });
+    let snapshots = match group.list_backups() {
+        Ok(snapshots) => snapshots,
+        Err(_) => return None,
+    };
 
-            Ok(group_info)
+    let backup_count: u64 = snapshots.len() as u64;
+    if backup_count == 0 {
+        return None;
+    }
+
+    let last_backup = snapshots
+        .iter()
+        .fold(&snapshots[0], |a, b| {
+            if a.is_finished() && a.backup_dir.backup_time() > b.backup_dir.backup_time() {
+                a
+            } else {
+                b
+            }
        })
+        .to_owned();
+
+    let notes_path = datastore.group_notes_path(ns, group.as_ref());
+    let comment = file_read_firstline(notes_path).ok();
+
+    let item = GroupListItem {
+        backup: group.into(),
+        last_backup: last_backup.backup_dir.backup_time(),
+        owner: Some(owner),
+        backup_count,
+        files: last_backup.files,
+        comment,
+    };
+
+    Some(item)
 }
 
 #[api(
-- 
2.47.3
[flat|nested] 21+ messages in thread
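[Editor's note: the shape of this refactor — every "skip this group" condition in the fold body becomes a `return None` in a helper returning `Option<GroupListItem>` — can be sketched with simplified stand-in types (names illustrative, not the real pbs types):

```rust
// Simplified stand-ins for the real pbs_datastore types.
struct Group {
    owner: Option<String>,
    snapshot_times: Vec<i64>,
}

struct GroupListItem {
    owner: String,
    last_backup: i64,
    backup_count: u64,
}

// Mirrors the refactor: each skip condition is an early None, so callers
// can simply drop groups that are inaccessible or empty.
fn to_list_item(group: &Group) -> Option<GroupListItem> {
    let owner = group.owner.clone()?; // no readable owner -> skip
    let last_backup = *group.snapshot_times.iter().max()?; // empty group -> skip (pre-patch-5/6 behavior)
    Some(GroupListItem {
        owner,
        last_backup,
        backup_count: group.snapshot_times.len() as u64,
    })
}
```

The caller then only needs `if let Some(item) = … { group_info.push(item) }`, which is what makes the helper reusable for the streaming call in patch 6/6.]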
* [pbs-devel] [PATCH proxmox-backup 4/6] api: admin: datastore: factor out 'get_group_owner' 2025-10-03 8:50 [pbs-devel] [PATCH proxmox{, -backup} 0/7] introduce streaming content api call Dominik Csapak ` (3 preceding siblings ...) 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 3/6] api: admin: datastore: refactor BackupGroup to GroupListItem conversion Dominik Csapak @ 2025-10-03 8:50 ` Dominik Csapak 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 5/6] api: admin: datastore: optimize `groups` api call Dominik Csapak 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content " Dominik Csapak 6 siblings, 0 replies; 21+ messages in thread From: Dominik Csapak @ 2025-10-03 8:50 UTC (permalink / raw) To: pbs-devel and change the `eprintln` to a `log::warn` Signed-off-by: Dominik Csapak <d.csapak@proxmox.com> --- src/api2/admin/datastore.rs | 47 +++++++++++++++++++------------------ 1 file changed, 24 insertions(+), 23 deletions(-) diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs index 26799de6e..0b133d166 100644 --- a/src/api2/admin/datastore.rs +++ b/src/api2/admin/datastore.rs @@ -160,18 +160,8 @@ fn backup_group_to_group_list_item( auth_id: &Authid, list_all: bool, ) -> Option<GroupListItem> { - let owner = match datastore.get_owner(ns, group.as_ref()) { - Ok(auth_id) => auth_id, - Err(err) => { - eprintln!( - "Failed to get owner of group '{}' in {} - {}", - group.group(), - print_store_and_ns(datastore.name(), ns), - err - ); - return None; - } - }; + let owner = get_group_owner(datastore.name(), ns, &group)?; + if !list_all && check_backup_owner(&owner, auth_id).is_err() { return None; } @@ -439,6 +429,25 @@ pub async fn list_snapshots( .map_err(|err| format_err!("failed to await blocking task: {err}"))? 
} +fn get_group_owner( + store: &str, + ns: &BackupNamespace, + group: &pbs_datastore::BackupGroup, +) -> Option<Authid> { + match group.get_owner() { + Ok(auth_id) => Some(auth_id), + Err(err) => { + log::warn!( + "Failed to get owner of group '{}' in {} - {}", + group.group(), + print_store_and_ns(store, ns), + err + ); + None + } + } +} + /// This must not run in a main worker thread as it potentially does tons of I/O. unsafe fn list_snapshots_blocking( store: String, @@ -482,17 +491,9 @@ unsafe fn list_snapshots_blocking( }; groups.iter().try_fold(Vec::new(), |mut snapshots, group| { - let owner = match group.get_owner() { - Ok(auth_id) => auth_id, - Err(err) => { - eprintln!( - "Failed to get owner of group '{}' in {} - {}", - group.group(), - print_store_and_ns(&store, &ns), - err - ); - return Ok(snapshots); - } + let owner = match get_group_owner(&store, &ns, group) { + Some(auth_id) => auth_id, + None => return Ok(snapshots), }; if !list_all && check_backup_owner(&owner, &auth_id).is_err() { -- 2.47.3 _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
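[Editor's note: the extracted helper follows a common log-and-skip pattern — convert a `Result` into an `Option`, warning on the error path instead of propagating it. A minimal sketch with plain strings standing in for the real `Authid`/`BackupNamespace` types (and `eprintln!` standing in for `log::warn!`, which needs the `log` crate):

```rust
// Sketch of the extracted helper; parameter types are simplified stand-ins.
fn get_group_owner(
    owner_result: Result<String, String>,
    group: &str,
    store_and_ns: &str,
) -> Option<String> {
    match owner_result {
        Ok(auth_id) => Some(auth_id),
        Err(err) => {
            // the real helper uses log::warn! so the message lands in the
            // task/syslog instead of raw stderr
            eprintln!("Failed to get owner of group '{group}' in {store_and_ns} - {err}");
            None
        }
    }
}
```

Both call sites (`groups` and the snapshot listing) can then share one place where the warning text and log level are defined.]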
* [pbs-devel] [PATCH proxmox-backup 5/6] api: admin: datastore: optimize `groups` api call 2025-10-03 8:50 [pbs-devel] [PATCH proxmox{, -backup} 0/7] introduce streaming content api call Dominik Csapak ` (4 preceding siblings ...) 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 4/6] api: admin: datastore: factor out 'get_group_owner' Dominik Csapak @ 2025-10-03 8:50 ` Dominik Csapak 2025-10-03 10:18 ` Thomas Lamprecht 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content " Dominik Csapak 6 siblings, 1 reply; 21+ messages in thread From: Dominik Csapak @ 2025-10-03 8:50 UTC (permalink / raw) To: pbs-devel Currently we always touch all files for each snapshot in a group when listing them, even though we don't need all that info. We're only interested in getting either the last finished snapshot information, or the last unfinished one (which must the only one in normal use, we can't have multiple unfinished snapshots usually) Instead of getting all the information upgfront, use the snapshot iterator of the group to get only the id, sort them by time, and use the first we're interested in, getting the snapshot specific info only for those we want to check. In my (admittedly extreme) setup with ~600 groups with ~1000 snapshots each, this changes the time this api call needs from ~40s to <1s. (on a relatively fast disk). While at it, lift the restriction of only returning groups with snapshots in them, now returning also empty ones. To keep api compatibility, use a timestamp of 0 for those. 
(no valid backup could have been made at that time anyway) Signed-off-by: Dominik Csapak <d.csapak@proxmox.com> --- src/api2/admin/datastore.rs | 45 +++++++++++++++++++++++-------------- 1 file changed, 28 insertions(+), 17 deletions(-) diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs index 0b133d166..2252dcfa4 100644 --- a/src/api2/admin/datastore.rs +++ b/src/api2/admin/datastore.rs @@ -166,36 +166,47 @@ fn backup_group_to_group_list_item( return None; } - let snapshots = match group.list_backups() { - Ok(snapshots) => snapshots, + let mut snapshots: Vec<_> = match group.iter_snapshots() { + Ok(snapshots) => snapshots.collect::<Result<Vec<_>, Error>>().ok()?, Err(_) => return None, }; let backup_count: u64 = snapshots.len() as u64; - if backup_count == 0 { - return None; - } + let last = if backup_count == 1 { + // we may have only one unfinished snapshot + snapshots.pop().and_then(|dir| BackupInfo::new(dir).ok()) + } else { + // we either have no snapshots, or at least one finished one, since we cannot have + // multiple unfinished ones + snapshots.sort_by_key(|b| std::cmp::Reverse(b.backup_time())); + snapshots + .iter() + .filter_map(|backup| match BackupInfo::new(backup.clone()) { + Ok(info) => { + if info.is_finished() { + Some(info) + } else { + None + } + } + Err(_) => None, + }) + .next() + }; - let last_backup = snapshots - .iter() - .fold(&snapshots[0], |a, b| { - if a.is_finished() && a.backup_dir.backup_time() > b.backup_dir.backup_time() { - a - } else { - b - } - }) - .to_owned(); + let (last_backup, files) = last + .map(|info| (info.backup_dir.backup_time(), info.files)) + .unwrap_or((0, Vec::new())); let notes_path = datastore.group_notes_path(ns, group.as_ref()); let comment = file_read_firstline(notes_path).ok(); let item = GroupListItem { backup: group.into(), - last_backup: last_backup.backup_dir.backup_time(), + last_backup, owner: Some(owner), backup_count, - files: last_backup.files, + files, comment, }; -- 2.47.3 
_______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
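[Editor's note: the selection logic of this optimization — keep a lone snapshot as-is (it may be the single unfinished backup in progress), otherwise sort newest-first and take the first finished one — can be sketched independently of the datastore types:

```rust
// Simplified stand-in for a snapshot plus its loaded BackupInfo.
#[derive(Debug, PartialEq)]
struct Snap {
    time: i64,
    finished: bool,
}

// Sketch of the optimized selection: only snapshot ids/times are needed up
// front; full metadata would be loaded lazily per candidate in the real code.
fn last_usable(mut snaps: Vec<Snap>) -> Option<Snap> {
    if snaps.len() == 1 {
        // a single snapshot may be the one unfinished backup in progress
        return snaps.pop();
    }
    // newest first, then take the first finished snapshot
    snaps.sort_by_key(|s| std::cmp::Reverse(s.time));
    snaps.into_iter().find(|s| s.finished)
}
```

The win comes from no longer touching every snapshot's files: with ~1000 snapshots per group, only the ids are enumerated and, typically, only the newest one or two have their metadata read.]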
* Re: [pbs-devel] [PATCH proxmox-backup 5/6] api: admin: datastore: optimize `groups` api call 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 5/6] api: admin: datastore: optimize `groups` api call Dominik Csapak @ 2025-10-03 10:18 ` Thomas Lamprecht 2025-10-03 10:51 ` Dominik Csapak 0 siblings, 1 reply; 21+ messages in thread From: Thomas Lamprecht @ 2025-10-03 10:18 UTC (permalink / raw) To: Proxmox Backup Server development discussion, Dominik Csapak Am 03.10.25 um 10:50 schrieb Dominik Csapak: > Currently we always touch all files for each snapshot in a group when > listing them, even though we don't need all that info. > > We're only interested in getting either the last finished snapshot > information, or the last unfinished one (which must the only one in > normal use, we can't have multiple unfinished snapshots usually) > > Instead of getting all the information upgfront, use the snapshot s/upgfront/upfront/ > iterator of the group to get only the id, sort them by time, and > use the first we're interested in, getting the snapshot specific info > only for those we want to check. > > In my (admittedly extreme) setup with ~600 groups with ~1000 snapshots > each, this changes the time this api call needs from ~40s to <1s. > (on a relatively fast disk). nice! In the UI we mainly (only?) uses this for gathering the notes of a group, FWICT? > > While at it, lift the restriction of only returning groups with > snapshots in them, now returning also empty ones. > > To keep api compatibility, use a timestamp of 0 for those. > (no valid backup could have been made at that time anyway) lazy question: Is this dependent on the previous patches? From a quick check, it looks like it might be rather independent and one could apply this one already. 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com> > --- > src/api2/admin/datastore.rs | 45 +++++++++++++++++++++++-------------- > 1 file changed, 28 insertions(+), 17 deletions(-) > > diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs > index 0b133d166..2252dcfa4 100644 > --- a/src/api2/admin/datastore.rs > +++ b/src/api2/admin/datastore.rs > @@ -166,36 +166,47 @@ fn backup_group_to_group_list_item( > return None; > } > > - let snapshots = match group.list_backups() { > - Ok(snapshots) => snapshots, > + let mut snapshots: Vec<_> = match group.iter_snapshots() { > + Ok(snapshots) => snapshots.collect::<Result<Vec<_>, Error>>().ok()?, > Err(_) => return None, > }; > > let backup_count: u64 = snapshots.len() as u64; > - if backup_count == 0 { > - return None; > - } > + let last = if backup_count == 1 { > + // we may have only one unfinished snapshot > + snapshots.pop().and_then(|dir| BackupInfo::new(dir).ok()) > + } else { > + // we either have no snapshots, or at least one finished one, since we cannot have > + // multiple unfinished ones > + snapshots.sort_by_key(|b| std::cmp::Reverse(b.backup_time())); > + snapshots > + .iter() > + .filter_map(|backup| match BackupInfo::new(backup.clone()) { > + Ok(info) => { > + if info.is_finished() { > + Some(info) > + } else { > + None > + } > + } > + Err(_) => None, nit: Could be written as a bit more concise way: snapshots .iter() .filter_map(|backup| BackupInfo::new(backup.clone()).ok()) .filter(|info| info.is_finished()) .next() With splitting the conversion to BackupInfo and filtering for finished backups into two makes it a bit easier (or rather quicker) to follow to me. 
> + }) > + .next() > + }; > > - let last_backup = snapshots > - .iter() > - .fold(&snapshots[0], |a, b| { > - if a.is_finished() && a.backup_dir.backup_time() > b.backup_dir.backup_time() { > - a > - } else { > - b > - } > - }) > - .to_owned(); > + let (last_backup, files) = last > + .map(|info| (info.backup_dir.backup_time(), info.files)) > + .unwrap_or((0, Vec::new())); > > let notes_path = datastore.group_notes_path(ns, group.as_ref()); > let comment = file_read_firstline(notes_path).ok(); > > let item = GroupListItem { > backup: group.into(), > - last_backup: last_backup.backup_dir.backup_time(), > + last_backup, > owner: Some(owner), > backup_count, > - files: last_backup.files, > + files, > comment, > }; > _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
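[Editor's note: the reviewer's nit splits the fallible conversion from the "finished" filter into two pipeline stages. A runnable sketch of that shape, with a hypothetical `load_info` standing in for `BackupInfo::new` (which may fail per snapshot), and `.find(..)` as the idiomatic spelling of `.filter(..).next()`:

```rust
// Hypothetical stand-in for BackupInfo::new(): returns (time, finished),
// or Err when the metadata cannot be loaded.
fn load_info(id: i64) -> Result<(i64, bool), ()> {
    if id >= 0 { Ok((id, id % 2 == 0)) } else { Err(()) }
}

// Reviewer's suggested shape: conversion and filtering as separate steps.
fn first_finished(ids: &[i64]) -> Option<i64> {
    ids.iter()
        .filter_map(|&id| load_info(id).ok()) // step 1: convert, dropping load failures
        .find(|&(_, finished)| finished)      // step 2: keep finished only
        .map(|(time, _)| time)
}
```

Behavior is identical to the nested-match version in the patch; the two-stage chain just makes each filtering decision visible on its own line.]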
* Re: [pbs-devel] [PATCH proxmox-backup 5/6] api: admin: datastore: optimize `groups` api call 2025-10-03 10:18 ` Thomas Lamprecht @ 2025-10-03 10:51 ` Dominik Csapak 2025-10-03 12:37 ` Thomas Lamprecht 0 siblings, 1 reply; 21+ messages in thread From: Dominik Csapak @ 2025-10-03 10:51 UTC (permalink / raw) To: Thomas Lamprecht, Proxmox Backup Server development discussion On 10/3/25 12:18 PM, Thomas Lamprecht wrote: > Am 03.10.25 um 10:50 schrieb Dominik Csapak: >> Currently we always touch all files for each snapshot in a group when >> listing them, even though we don't need all that info. >> >> We're only interested in getting either the last finished snapshot >> information, or the last unfinished one (which must the only one in >> normal use, we can't have multiple unfinished snapshots usually) >> >> Instead of getting all the information upgfront, use the snapshot > > s/upgfront/upfront/ > >> iterator of the group to get only the id, sort them by time, and >> use the first we're interested in, getting the snapshot specific info >> only for those we want to check. >> >> In my (admittedly extreme) setup with ~600 groups with ~1000 snapshots >> each, this changes the time this api call needs from ~40s to <1s. >> (on a relatively fast disk). > > nice! In the UI we mainly (only?) uses this for gathering the notes of > a group, FWICT? > yes currently we only use it for the notes, but it's also exposed on the pbs client AFAIR, there we show the data ofc note that the improvements are only relevant for groups with many snapshots. >> >> While at it, lift the restriction of only returning groups with >> snapshots in them, now returning also empty ones. >> >> To keep api compatibility, use a timestamp of 0 for those. >> (no valid backup could have been made at that time anyway) > > lazy question: Is this dependent on the previous patches? From a quick > check, it looks like it might be rather independent and one could apply > this one already. 
> the way i ordered the patches the previous refactorings here in the api (3/6, 4/6) are necessary. If you want i can reverse them though (first the improvement, second the refactoring) >> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com> >> --- >> src/api2/admin/datastore.rs | 45 +++++++++++++++++++++++-------------- >> 1 file changed, 28 insertions(+), 17 deletions(-) >> >> diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs >> index 0b133d166..2252dcfa4 100644 >> --- a/src/api2/admin/datastore.rs >> +++ b/src/api2/admin/datastore.rs >> @@ -166,36 +166,47 @@ fn backup_group_to_group_list_item( >> return None; >> } >> >> - let snapshots = match group.list_backups() { >> - Ok(snapshots) => snapshots, >> + let mut snapshots: Vec<_> = match group.iter_snapshots() { >> + Ok(snapshots) => snapshots.collect::<Result<Vec<_>, Error>>().ok()?, >> Err(_) => return None, >> }; >> >> let backup_count: u64 = snapshots.len() as u64; >> - if backup_count == 0 { >> - return None; >> - } >> + let last = if backup_count == 1 { >> + // we may have only one unfinished snapshot >> + snapshots.pop().and_then(|dir| BackupInfo::new(dir).ok()) >> + } else { >> + // we either have no snapshots, or at least one finished one, since we cannot have >> + // multiple unfinished ones >> + snapshots.sort_by_key(|b| std::cmp::Reverse(b.backup_time())); >> + snapshots >> + .iter() >> + .filter_map(|backup| match BackupInfo::new(backup.clone()) { >> + Ok(info) => { >> + if info.is_finished() { >> + Some(info) >> + } else { >> + None >> + } >> + } >> + Err(_) => None, > > nit: Could be written as a bit more concise way: > > snapshots > .iter() > .filter_map(|backup| BackupInfo::new(backup.clone()).ok()) > .filter(|info| info.is_finished()) > .next() > > With splitting the conversion to BackupInfo and filtering for finished > backups into two makes it a bit easier (or rather quicker) to follow to me. 
fine with me, I didn't want to have multiple filter/map calls but yeah it makes it easier to read > >> + }) >> + .next() >> + }; >> >> - let last_backup = snapshots >> - .iter() >> - .fold(&snapshots[0], |a, b| { >> - if a.is_finished() && a.backup_dir.backup_time() > b.backup_dir.backup_time() { >> - a >> - } else { >> - b >> - } >> - }) >> - .to_owned(); >> + let (last_backup, files) = last >> + .map(|info| (info.backup_dir.backup_time(), info.files)) >> + .unwrap_or((0, Vec::new())); >> >> let notes_path = datastore.group_notes_path(ns, group.as_ref()); >> let comment = file_read_firstline(notes_path).ok(); >> >> let item = GroupListItem { >> backup: group.into(), >> - last_backup: last_backup.backup_dir.backup_time(), >> + last_backup, >> owner: Some(owner), >> backup_count, >> - files: last_backup.files, >> + files, >> comment, >> }; >> > _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [pbs-devel] [PATCH proxmox-backup 5/6] api: admin: datastore: optimize `groups` api call 2025-10-03 10:51 ` Dominik Csapak @ 2025-10-03 12:37 ` Thomas Lamprecht 0 siblings, 0 replies; 21+ messages in thread From: Thomas Lamprecht @ 2025-10-03 12:37 UTC (permalink / raw) To: Proxmox Backup Server development discussion, Dominik Csapak Am 03.10.25 um 12:51 schrieb Dominik Csapak: >> lazy question: Is this dependent on the previous patches? From a quick >> check, it looks like it might be rather independent and one could apply >> this one already. >> > > > the way i ordered the patches the previous refactorings here in the api > (3/6, 4/6) are necessary. > > If you want i can reverse them though (first the improvement, second > the refactoring) Ack, thanks for the info, and no, it's fine that way. _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content api call 2025-10-03 8:50 [pbs-devel] [PATCH proxmox{, -backup} 0/7] introduce streaming content api call Dominik Csapak ` (5 preceding siblings ...) 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 5/6] api: admin: datastore: optimize `groups` api call Dominik Csapak @ 2025-10-03 8:50 ` Dominik Csapak 2025-10-03 11:55 ` Thomas Lamprecht 2025-10-07 12:51 ` Wolfgang Bumiller 6 siblings, 2 replies; 21+ messages in thread From: Dominik Csapak @ 2025-10-03 8:50 UTC (permalink / raw) To: pbs-devel this is a new api call that utilizes `async-stream` together with `proxmox_router::Stream` to provide a streaming interface to querying the datastore content. This can be done when a client reuqests this api call with the `application/json-seq` Accept header. In contrast to the existing api calls, this one * returns all types of content items (namespaces, groups, snapshots; can be filtered with a parameter) * iterates over them recursively (with the range that is given with the parameter) The api call returns the data in the following order: * first all visible namespaces * then for each ns in order * each group * each snapshot This is done so that we can have a good way of building a tree view in the ui. Signed-off-by: Dominik Csapak <d.csapak@proxmox.com> --- This should be thouroughly checked for permission checks. I did it to the best of my ability, but of course some bug/issue could have crept in. interesting side node, in my rather large setup with ~600 groups and ~1000 snapshosts per group, streaming this is faster than using the current `snapshot` api (by a lot): * `snapshot` api -> ~3 min * `content` api with streaming -> ~2:11 min * `content` api without streaming -> ~3 min It seems that either collecting such a 'large' api response (~200MiB) is expensive. 
My guesses what happens here are either: * frequent (re)allocation of the resulting vec * or serde's serializing code but the cost seems still pretty high for that. LMK if i should further investigate this. Cargo.toml | 2 + src/api2/admin/datastore.rs | 201 +++++++++++++++++++++++++++++++----- 2 files changed, 176 insertions(+), 27 deletions(-) diff --git a/Cargo.toml b/Cargo.toml index b3f55b4db..21eb293ba 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -114,6 +114,7 @@ pbs-tools = { path = "pbs-tools" } # regular crates anyhow = "1.0" +async-stream = "0.3" async-trait = "0.1.56" apt-pkg-native = "0.3.2" bitflags = "2.4" @@ -168,6 +169,7 @@ zstd-safe = "7" [dependencies] anyhow.workspace = true async-trait.workspace = true +async-stream.workspace = true bytes.workspace = true cidr.workspace = true const_format.workspace = true diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs index 2252dcfa4..bf94f6400 100644 --- a/src/api2/admin/datastore.rs +++ b/src/api2/admin/datastore.rs @@ -23,7 +23,7 @@ use proxmox_compression::zstd::ZstdEncoder; use proxmox_log::LogContext; use proxmox_router::{ http_err, list_subdirs_api_method, ApiHandler, ApiMethod, ApiResponseFuture, Permission, - Router, RpcEnvironment, RpcEnvironmentType, SubdirMap, + Record, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap, }; use proxmox_rrd_api_types::{RrdMode, RrdTimeframe}; use proxmox_schema::*; @@ -39,15 +39,16 @@ use pxar::EntryKind; use pbs_api_types::{ print_ns_and_snapshot, print_store_and_ns, ArchiveType, Authid, BackupArchiveName, - BackupContent, BackupGroupDeleteStats, BackupNamespace, BackupType, Counts, CryptMode, - DataStoreConfig, DataStoreListItem, DataStoreMountStatus, DataStoreStatus, - GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, MaintenanceMode, - MaintenanceType, Operation, PruneJobOptions, SnapshotListItem, SyncJobConfig, - BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, - 
BACKUP_TYPE_SCHEMA, CATALOG_NAME, CLIENT_LOG_BLOB_NAME, DATASTORE_SCHEMA, - IGNORE_VERIFIED_BACKUPS_SCHEMA, MAX_NAMESPACE_DEPTH, NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, - PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, - PRIV_DATASTORE_VERIFY, PRIV_SYS_MODIFY, UPID, UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA, + BackupContent, BackupGroupDeleteStats, BackupNamespace, BackupType, ContentListItem, + ContentType, Counts, CryptMode, DataStoreConfig, DataStoreListItem, DataStoreMountStatus, + DataStoreStatus, GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, + MaintenanceMode, MaintenanceType, NamespaceListItem, Operation, PruneJobOptions, + SnapshotListItem, SyncJobConfig, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, + BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, CATALOG_NAME, + CLIENT_LOG_BLOB_NAME, DATASTORE_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA, MAX_NAMESPACE_DEPTH, + NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, + PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, PRIV_DATASTORE_VERIFY, PRIV_SYS_MODIFY, UPID, + UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA, }; use pbs_client::pxar::{create_tar, create_zip}; use pbs_config::CachedUserInfo; @@ -70,7 +71,10 @@ use proxmox_rest_server::{formatter, worker_is_active, WorkerTask}; use crate::api2::backup::optional_ns_param; use crate::api2::node::rrd::create_value_from_rrd; -use crate::backup::{check_ns_privs_full, ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK}; +use crate::backup::{ + can_access_any_namespace_in_range, check_ns_privs, check_ns_privs_full, + ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK, +}; use crate::server::jobstate::{compute_schedule_status, Job, JobState}; use crate::tools::{backup_info_to_snapshot_list_item, get_all_snapshot_files, read_backup_index}; @@ -396,7 +400,7 @@ pub async fn delete_snapshot( } #[api( - serializing: true, + stream: true, input: { 
properties: { store: { schema: DATASTORE_SCHEMA }, @@ -404,40 +408,137 @@ pub async fn delete_snapshot( type: BackupNamespace, optional: true, }, - "backup-type": { + "max-depth": { + schema: NS_MAX_DEPTH_SCHEMA, optional: true, - type: BackupType, }, - "backup-id": { + "content-type": { optional: true, - schema: BACKUP_ID_SCHEMA, + type: ContentType, }, }, }, - returns: pbs_api_types::ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE, access: { permission: &Permission::Anybody, description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_AUDIT for any \ or DATASTORE_BACKUP and being the owner of the group", }, )] -/// List backup snapshots. -pub async fn list_snapshots( +/// List datastore content, recursively through all namespaces. +pub async fn list_content( store: String, ns: Option<BackupNamespace>, - backup_type: Option<BackupType>, - backup_id: Option<String>, + max_depth: Option<usize>, + content_type: Option<ContentType>, _param: Value, _info: &ApiMethod, rpcenv: &mut dyn RpcEnvironment, -) -> Result<Vec<SnapshotListItem>, Error> { +) -> Result<proxmox_router::Stream, Error> { + let (sender, mut receiver) = tokio::sync::mpsc::channel(128); + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?; + let user_info = CachedUserInfo::new()?; - tokio::task::spawn_blocking(move || unsafe { - list_snapshots_blocking(store, ns, backup_type, backup_id, auth_id) - }) - .await - .map_err(|err| format_err!("failed to await blocking task: {err}"))? 
+ let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?; + if !can_access_any_namespace_in_range( + datastore.clone(), + &auth_id, + &user_info, + ns.clone(), + max_depth, + ) { + proxmox_router::http_bail!(FORBIDDEN, "permission check failed"); + } + + let ns = ns.unwrap_or_default(); + + let (list_ns, list_group, list_snapshots) = match content_type { + Some(ContentType::Namespace) => (true, false, false), + Some(ContentType::Group) => (false, true, false), + Some(ContentType::Snapshot) => (false, false, true), + None => (true, true, true), + }; + + tokio::spawn(async move { + tokio::task::spawn_blocking(move || { + if list_ns { + for ns in datastore.recursive_iter_backup_ns_ok(ns.clone(), max_depth)? { + match check_ns_privs(&store, &ns, &auth_id, NS_PRIVS_OK) { + Ok(_) => sender.blocking_send(Record::new(ContentListItem::from( + NamespaceListItem { ns, comment: None }, + )))?, + Err(_) => continue, + } + } + } + + if !list_group && !list_snapshots { + return Ok(()); + } + + for ns in datastore.recursive_iter_backup_ns_ok(ns, max_depth)? { + let list_all = match check_ns_privs_full( + &store, + &ns, + &auth_id, + PRIV_DATASTORE_AUDIT, + PRIV_DATASTORE_BACKUP, + ) { + Ok(requires_owner) => !requires_owner, + Err(_) => continue, + }; + if list_group { + for group in datastore.iter_backup_groups(ns.clone())? { + let group = group?; + let group = backup_group_to_group_list_item( + datastore.clone(), + group, + &ns, + &auth_id, + list_all, + ); + + if let Some(group) = group { + sender.blocking_send(Record::new(ContentListItem::from(( + ns.clone(), + group, + ))))?; + } + } + } + + if !list_snapshots { + continue; + } + + for group in datastore.iter_backup_groups(ns.clone())? { + let group = group?; + let owner = match get_group_owner(&store, &ns, &group) { + Some(auth_id) => auth_id, + None => continue, + }; + for snapshot in group.iter_snapshots()? 
{ + let snapshot = BackupInfo::new(snapshot?)?; + let snapshot = backup_info_to_snapshot_list_item(&snapshot, &owner); + sender.blocking_send(Record::new(ContentListItem::from(( + ns.clone(), + snapshot, + ))))?; + } + } + } + Ok::<_, Error>(()) + }) + .await??; + + Ok::<_, Error>(()) + }); + Ok(async_stream::try_stream! { + while let Some(elem) = receiver.recv().await { + yield elem; + } + } + .into()) } fn get_group_owner( @@ -523,6 +624,51 @@ unsafe fn list_snapshots_blocking( }) } +#[api( + serializing: true, + input: { + properties: { + store: { schema: DATASTORE_SCHEMA }, + ns: { + type: BackupNamespace, + optional: true, + }, + "backup-type": { + optional: true, + type: BackupType, + }, + "backup-id": { + optional: true, + schema: BACKUP_ID_SCHEMA, + }, + }, + }, + returns: pbs_api_types::ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE, + access: { + permission: &Permission::Anybody, + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_AUDIT for any \ + or DATASTORE_BACKUP and being the owner of the group", + }, +)] +/// List backup snapshots. +pub async fn list_snapshots( + store: String, + ns: Option<BackupNamespace>, + backup_type: Option<BackupType>, + backup_id: Option<String>, + _param: Value, + _info: &ApiMethod, + rpcenv: &mut dyn RpcEnvironment, +) -> Result<Vec<SnapshotListItem>, Error> { + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?; + + tokio::task::spawn_blocking(move || unsafe { + list_snapshots_blocking(store, ns, backup_type, backup_id, auth_id) + }) + .await + .map_err(|err| format_err!("failed to await blocking task: {err}"))? 
+} + async fn get_snapshots_count( store: &Arc<DataStore>, owner: Option<&Authid>, @@ -2773,6 +2919,7 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[ "change-owner", &Router::new().post(&API_METHOD_SET_BACKUP_OWNER), ), + ("content", &Router::new().get(&API_METHOD_LIST_CONTENT)), ( "download", &Router::new().download(&API_METHOD_DOWNLOAD_FILE), -- 2.47.3 _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
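[Editor's note: the core shape of this streaming call — a blocking producer pushing records into a bounded channel (capacity 128 in the patch) while the consumer forwards them to the client incrementally — can be sketched with std primitives; `std::sync::mpsc::sync_channel` and a thread stand in for tokio's `mpsc::channel` and `spawn_blocking` here:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Sketch only: a bounded channel gives backpressure, so a fast producer
// blocks instead of buffering the whole (potentially ~200 MiB) result.
fn stream_records(total: u32) -> Vec<String> {
    let (tx, rx) = sync_channel(8); // patch uses capacity 128
    let producer = thread::spawn(move || {
        for i in 0..total {
            // in the patch this is sender.blocking_send(Record::new(..))
            if tx.send(format!("record-{i}")).is_err() {
                break; // receiver dropped, e.g. the client disconnected
            }
        }
        // tx dropped here closes the stream for the consumer
    });
    let out: Vec<String> = rx.into_iter().collect(); // consumer side
    producer.join().unwrap();
    out
}
```

In the real api call the consumer side is the `async_stream::try_stream!` loop yielding each received `Record`, which is what lets a `application/json-seq` client render namespaces and groups before the snapshot enumeration has finished.]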
* Re: [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content api call 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content " Dominik Csapak @ 2025-10-03 11:55 ` Thomas Lamprecht 2025-10-07 12:51 ` Wolfgang Bumiller 1 sibling, 0 replies; 21+ messages in thread From: Thomas Lamprecht @ 2025-10-03 11:55 UTC (permalink / raw) To: Proxmox Backup Server development discussion, Dominik Csapak Am 03.10.25 um 10:51 schrieb Dominik Csapak: > this is a new api call that utilizes `async-stream` together with > `proxmox_router::Stream` to provide a streaming interface to querying > the datastore content. > > This can be done when a client reuqests this api call with the > `application/json-seq` Accept header. > > In contrast to the existing api calls, this one > * returns all types of content items (namespaces, groups, snapshots; can > be filtered with a parameter) > * iterates over them recursively (with the range that is given with the > parameter) > > The api call returns the data in the following order: > * first all visible namespaces > * then for each ns in order > * each group > * each snapshot > > This is done so that we can have a good way of building a tree view in > the ui. > > Signed-off-by: Dominik Csapak <d.csapak@proxmox.com> > --- > This should be thouroughly checked for permission checks. I did it to > the best of my ability, but of course some bug/issue could have crept in. > > interesting side node, in my rather large setup with ~600 groups and ~1000 > snapshosts per group, streaming this is faster than using the current > `snapshot` api (by a lot): > * `snapshot` api -> ~3 min > * `content` api with streaming -> ~2:11 min > * `content` api without streaming -> ~3 min > > It seems that either collecting such a 'large' api response (~200MiB) > is expensive. 
My guesses what happens here are either: > * frequent (re)allocation of the resulting vec > * or serde's serializing code You could compare peak (RSS) memory usage of the daemon as side-effect, and/or also use bpftrace to log bigger allocations. While I did use bpftrace lots of times, I did not try this specifically with Rust, but I found a short-ish article that describes doing just that for Rust, and it looks like it would not be _that_ much work (and could be a nice tool to have in the belt in the future): https://readyset.io/blog/tracing-large-memory-allocations-in-rust-with-bpftrace > but the cost seems still pretty high for that. > LMK if i should further investigate this. tbh. if this holds up in an in-depth review, especially at the priv checking like you mentioned, I'm fine with taking it as is; mostly mentioned above as it would be interesting for a deeper understanding, and in my experience especially bpftrace is often quite widely applicable, so it can be worth spending some time playing around with it during "calmer times" ^^ Am 03.10.25 um 10:51 schrieb Dominik Csapak: > diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs > index 2252dcfa4..bf94f6400 100644 > --- a/src/api2/admin/datastore.rs > +++ b/src/api2/admin/datastore.rs > @@ -23,7 +23,7 @@ use proxmox_compression::zstd::ZstdEncoder; > use proxmox_log::LogContext; > use proxmox_router::{ > http_err, list_subdirs_api_method, ApiHandler, ApiMethod, ApiResponseFuture, Permission, > - Router, RpcEnvironment, RpcEnvironmentType, SubdirMap, > + Record, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap, > }; > use proxmox_rrd_api_types::{RrdMode, RrdTimeframe}; > use proxmox_schema::*; > @@ -39,15 +39,16 @@ use pxar::EntryKind; > > use pbs_api_types::{ > print_ns_and_snapshot, print_store_and_ns, ArchiveType, Authid, BackupArchiveName, > - BackupContent, BackupGroupDeleteStats, BackupNamespace, BackupType, Counts, CryptMode, > - DataStoreConfig, DataStoreListItem, DataStoreMountStatus, 
DataStoreStatus, > - GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, MaintenanceMode, > - MaintenanceType, Operation, PruneJobOptions, SnapshotListItem, SyncJobConfig, > - BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, > - BACKUP_TYPE_SCHEMA, CATALOG_NAME, CLIENT_LOG_BLOB_NAME, DATASTORE_SCHEMA, > - IGNORE_VERIFIED_BACKUPS_SCHEMA, MAX_NAMESPACE_DEPTH, NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, > - PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, > - PRIV_DATASTORE_VERIFY, PRIV_SYS_MODIFY, UPID, UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA, > + BackupContent, BackupGroupDeleteStats, BackupNamespace, BackupType, ContentListItem, > + ContentType, Counts, CryptMode, DataStoreConfig, DataStoreListItem, DataStoreMountStatus, > + DataStoreStatus, GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, > + MaintenanceMode, MaintenanceType, NamespaceListItem, Operation, PruneJobOptions, > + SnapshotListItem, SyncJobConfig, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, > + BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, CATALOG_NAME, > + CLIENT_LOG_BLOB_NAME, DATASTORE_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA, MAX_NAMESPACE_DEPTH, > + NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, > + PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, PRIV_DATASTORE_VERIFY, PRIV_SYS_MODIFY, UPID, > + UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA, oof. Would probably be good to split this use statement into multiple ones, e.g. one for PRIV_*, one for other const UPPERCASE thingies, and then maybe one for Backup* types and one for DataStore* and the rest. While one can diff per word to see what's going on, this still causes lots of churn for applying/merging if anything happened in between and history (blame). But doesn't have to be the job of you and this patch series, I'm just venting it. 
> }; > use pbs_client::pxar::{create_tar, create_zip}; > use pbs_config::CachedUserInfo; > @@ -70,7 +71,10 @@ use proxmox_rest_server::{formatter, worker_is_active, WorkerTask}; > > use crate::api2::backup::optional_ns_param; > use crate::api2::node::rrd::create_value_from_rrd; > -use crate::backup::{check_ns_privs_full, ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK}; > +use crate::backup::{ > + can_access_any_namespace_in_range, check_ns_privs, check_ns_privs_full, > + ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK, > +}; > use crate::server::jobstate::{compute_schedule_status, Job, JobState}; > use crate::tools::{backup_info_to_snapshot_list_item, get_all_snapshot_files, read_backup_index}; > > @@ -396,7 +400,7 @@ pub async fn delete_snapshot( > } > > #[api( > - serializing: true, > + stream: true, > input: { > properties: { > store: { schema: DATASTORE_SCHEMA }, > @@ -404,40 +408,137 @@ pub async fn delete_snapshot( > type: BackupNamespace, > optional: true, > }, > - "backup-type": { > + "max-depth": { > + schema: NS_MAX_DEPTH_SCHEMA, > optional: true, > - type: BackupType, > }, > - "backup-id": { > + "content-type": { > optional: true, > - schema: BACKUP_ID_SCHEMA, > + type: ContentType, > }, > }, > }, > - returns: pbs_api_types::ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE, > access: { > permission: &Permission::Anybody, > description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_AUDIT for any \ > or DATASTORE_BACKUP and being the owner of the group", > }, > )] > -/// List backup snapshots. > -pub async fn list_snapshots( > +/// List datastore content, recursively through all namespaces. 
> +pub async fn list_content( > store: String, > ns: Option<BackupNamespace>, > - backup_type: Option<BackupType>, > - backup_id: Option<String>, > + max_depth: Option<usize>, > + content_type: Option<ContentType>, > _param: Value, > _info: &ApiMethod, > rpcenv: &mut dyn RpcEnvironment, > -) -> Result<Vec<SnapshotListItem>, Error> { > +) -> Result<proxmox_router::Stream, Error> { > + let (sender, mut receiver) = tokio::sync::mpsc::channel(128); > + > let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?; > + let user_info = CachedUserInfo::new()?; > > - tokio::task::spawn_blocking(move || unsafe { > - list_snapshots_blocking(store, ns, backup_type, backup_id, auth_id) > - }) > - .await > - .map_err(|err| format_err!("failed to await blocking task: {err}"))? > + let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?; > + if !can_access_any_namespace_in_range( > + datastore.clone(), > + &auth_id, > + &user_info, > + ns.clone(), > + max_depth, > + ) { > + proxmox_router::http_bail!(FORBIDDEN, "permission check failed"); > + } > + > + let ns = ns.unwrap_or_default(); > + > + let (list_ns, list_group, list_snapshots) = match content_type { > + Some(ContentType::Namespace) => (true, false, false), > + Some(ContentType::Group) => (false, true, false), > + Some(ContentType::Snapshot) => (false, false, true), > + None => (true, true, true), > + }; Hmm, might it make sense to have a filter param with a flag per type? So that one can choose to include groups and snapshots, but not namespaces? Albeit, here it's not really a filter in the classical sense, as besides for skipping snapshot it basically only affects namespaces or groups that are empty, otherwise the info is there indirectly anyway. OTOH. most use cases might just use max-depth to return everything from a single level and load the rest on demand/select. So might be an option to skip this param for now, but maybe someone else has better input or arguments here. 
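[Editorial note: a minimal stdlib-only sketch of the flag-per-type filter floated above. The parameter syntax, accepted values, and helper name are all illustrative, not the actual PBS API schema handling.]

```rust
// Sketch: parse a comma-separated set of content-type flags, e.g.
// "group,snapshot", so callers can combine types freely. A missing
// parameter keeps the current default of "everything".
use std::collections::HashSet;

#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
enum ContentType {
    Namespace,
    Group,
    Snapshot,
}

fn parse_content_filter(param: Option<&str>) -> Result<HashSet<ContentType>, String> {
    let mut set = HashSet::new();
    match param {
        None => {
            // no filter given -> include all types, like the optional
            // `content-type` parameter does today
            set.extend([
                ContentType::Namespace,
                ContentType::Group,
                ContentType::Snapshot,
            ]);
        }
        Some(s) => {
            for part in s.split(',') {
                match part.trim() {
                    "ns" | "namespace" => set.insert(ContentType::Namespace),
                    "group" => set.insert(ContentType::Group),
                    "snapshot" => set.insert(ContentType::Snapshot),
                    other => return Err(format!("invalid content type '{other}'")),
                };
            }
        }
    }
    Ok(set)
}

fn main() {
    let filter = parse_content_filter(Some("group,snapshot")).unwrap();
    assert!(filter.contains(&ContentType::Group));
    assert!(!filter.contains(&ContentType::Namespace));
    println!("{filter:?}");
}
```

A set-valued filter subsumes the single-variant `content-type` parameter while staying backwards compatible with "no parameter means everything".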
> + > + tokio::spawn(async move { Is this really needed? The spawn blocking below already moves the function to a thread dedicated for stuff that can block, so this seems like a needless indirection, or am I overlooking something? > + tokio::task::spawn_blocking(move || { Looked at the rest below more shallowly, nothing stuck out, but would indeed warrant a more in-depth review. _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
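[Editorial note: as a process-internal alternative to the bpftrace approach mentioned above, a counting global allocator can surface "bigger allocations" with no external tooling. This is a stdlib-only sketch, not anything PBS actually does; note the allocator hook only bumps atomics, since formatting or printing inside `alloc` could itself allocate and recurse.]

```rust
// Sketch: wrap the system allocator and count allocations at or above
// a size threshold, to see how often a growing response buffer hits
// the allocator.
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

struct CountingAlloc;

static BIG_ALLOCS: AtomicUsize = AtomicUsize::new(0);
static BIG_BYTES: AtomicUsize = AtomicUsize::new(0);
const THRESHOLD: usize = 64 * 1024; // count allocations >= 64 KiB

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        if layout.size() >= THRESHOLD {
            // atomics only -- no allocation, no printing, in here
            BIG_ALLOCS.fetch_add(1, Ordering::Relaxed);
            BIG_BYTES.fetch_add(layout.size(), Ordering::Relaxed);
        }
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static A: CountingAlloc = CountingAlloc;

fn main() {
    // Simulate collecting a large api response into one growing Vec;
    // each doubling past the threshold shows up in the counters
    // (Vec growth goes through GlobalAlloc's default realloc, which
    // calls alloc + copy + dealloc).
    let mut buf: Vec<u8> = Vec::new();
    for _ in 0..1_000_000 {
        buf.push(0u8);
    }
    println!(
        "large allocations: {} ({} bytes total)",
        BIG_ALLOCS.load(Ordering::Relaxed),
        BIG_BYTES.load(Ordering::Relaxed)
    );
}
```

Unlike bpftrace this needs a rebuild, but it also works on systems where attaching tracing tools is not an option.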
* Re: [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content api call 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content " Dominik Csapak 2025-10-03 11:55 ` Thomas Lamprecht @ 2025-10-07 12:51 ` Wolfgang Bumiller 2025-10-07 14:22 ` Thomas Lamprecht 1 sibling, 1 reply; 21+ messages in thread From: Wolfgang Bumiller @ 2025-10-07 12:51 UTC (permalink / raw) To: Dominik Csapak; +Cc: pbs-devel On Fri, Oct 03, 2025 at 10:50:39AM +0200, Dominik Csapak wrote: > this is a new api call that utilizes `async-stream` together with > `proxmox_router::Stream` to provide a streaming interface for querying > the datastore content. > > This can be done when a client requests this api call with the > `application/json-seq` Accept header. > > In contrast to the existing api calls, this one > * returns all types of content items (namespaces, groups, snapshots; can > be filtered with a parameter) > * iterates over them recursively (with the range that is given with the > parameter) > > The api call returns the data in the following order: > * first all visible namespaces > * then for each ns in order > * each group > * each snapshot > > This is done so that we can have a good way of building a tree view in > the ui. > > Signed-off-by: Dominik Csapak <d.csapak@proxmox.com> > --- > This should be thoroughly checked for permission checks. I did it to > the best of my ability, but of course some bug/issue could have crept in. > > interesting side note, in my rather large setup with ~600 groups and ~1000 > snapshots per group, streaming this is faster than using the current > `snapshot` api (by a lot): > * `snapshot` api -> ~3 min > * `content` api with streaming -> ~2:11 min > * `content` api without streaming -> ~3 min > > It seems that collecting such a 'large' api response (~200MiB) > is expensive. 
My guesses what happens here are either: > * frequent (re)allocation of the resulting vec > * or serde's serializing code > > but the cost seems still pretty high for that. > LMK if i should further investigate this. > > Cargo.toml | 2 + > src/api2/admin/datastore.rs | 201 +++++++++++++++++++++++++++++++----- > 2 files changed, 176 insertions(+), 27 deletions(-) > > diff --git a/Cargo.toml b/Cargo.toml > index b3f55b4db..21eb293ba 100644 > --- a/Cargo.toml > +++ b/Cargo.toml > @@ -114,6 +114,7 @@ pbs-tools = { path = "pbs-tools" } > > # regular crates > anyhow = "1.0" > +async-stream = "0.3" > async-trait = "0.1.56" > apt-pkg-native = "0.3.2" > bitflags = "2.4" > @@ -168,6 +169,7 @@ zstd-safe = "7" > [dependencies] > anyhow.workspace = true > async-trait.workspace = true > +async-stream.workspace = true > bytes.workspace = true > cidr.workspace = true > const_format.workspace = true > diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs > index 2252dcfa4..bf94f6400 100644 > --- a/src/api2/admin/datastore.rs > +++ b/src/api2/admin/datastore.rs > @@ -23,7 +23,7 @@ use proxmox_compression::zstd::ZstdEncoder; > use proxmox_log::LogContext; > use proxmox_router::{ > http_err, list_subdirs_api_method, ApiHandler, ApiMethod, ApiResponseFuture, Permission, > - Router, RpcEnvironment, RpcEnvironmentType, SubdirMap, > + Record, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap, > }; > use proxmox_rrd_api_types::{RrdMode, RrdTimeframe}; > use proxmox_schema::*; > @@ -39,15 +39,16 @@ use pxar::EntryKind; > > use pbs_api_types::{ > print_ns_and_snapshot, print_store_and_ns, ArchiveType, Authid, BackupArchiveName, > - BackupContent, BackupGroupDeleteStats, BackupNamespace, BackupType, Counts, CryptMode, > - DataStoreConfig, DataStoreListItem, DataStoreMountStatus, DataStoreStatus, > - GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, MaintenanceMode, > - MaintenanceType, Operation, PruneJobOptions, SnapshotListItem, SyncJobConfig, > 
- BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, > - BACKUP_TYPE_SCHEMA, CATALOG_NAME, CLIENT_LOG_BLOB_NAME, DATASTORE_SCHEMA, > - IGNORE_VERIFIED_BACKUPS_SCHEMA, MAX_NAMESPACE_DEPTH, NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, > - PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, > - PRIV_DATASTORE_VERIFY, PRIV_SYS_MODIFY, UPID, UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA, > + BackupContent, BackupGroupDeleteStats, BackupNamespace, BackupType, ContentListItem, > + ContentType, Counts, CryptMode, DataStoreConfig, DataStoreListItem, DataStoreMountStatus, > + DataStoreStatus, GarbageCollectionJobStatus, GroupListItem, JobScheduleStatus, KeepOptions, > + MaintenanceMode, MaintenanceType, NamespaceListItem, Operation, PruneJobOptions, > + SnapshotListItem, SyncJobConfig, BACKUP_ARCHIVE_NAME_SCHEMA, BACKUP_ID_SCHEMA, > + BACKUP_NAMESPACE_SCHEMA, BACKUP_TIME_SCHEMA, BACKUP_TYPE_SCHEMA, CATALOG_NAME, > + CLIENT_LOG_BLOB_NAME, DATASTORE_SCHEMA, IGNORE_VERIFIED_BACKUPS_SCHEMA, MAX_NAMESPACE_DEPTH, > + NS_MAX_DEPTH_SCHEMA, PRIV_DATASTORE_AUDIT, PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_MODIFY, > + PRIV_DATASTORE_PRUNE, PRIV_DATASTORE_READ, PRIV_DATASTORE_VERIFY, PRIV_SYS_MODIFY, UPID, > + UPID_SCHEMA, VERIFICATION_OUTDATED_AFTER_SCHEMA, And more reasons why [this][1] *really* needs at least *partial* stabilization so we can switch to `Item` level imports. 
🙄 [1] https://github.com/rust-lang/rustfmt/issues/4991#issuecomment-3369792412 > }; > use pbs_client::pxar::{create_tar, create_zip}; > use pbs_config::CachedUserInfo; > @@ -70,7 +71,10 @@ use proxmox_rest_server::{formatter, worker_is_active, WorkerTask}; > > use crate::api2::backup::optional_ns_param; > use crate::api2::node::rrd::create_value_from_rrd; > -use crate::backup::{check_ns_privs_full, ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK}; > +use crate::backup::{ > + can_access_any_namespace_in_range, check_ns_privs, check_ns_privs_full, > + ListAccessibleBackupGroups, VerifyWorker, NS_PRIVS_OK, > +}; > use crate::server::jobstate::{compute_schedule_status, Job, JobState}; > use crate::tools::{backup_info_to_snapshot_list_item, get_all_snapshot_files, read_backup_index}; > > @@ -396,7 +400,7 @@ pub async fn delete_snapshot( > } > > #[api( > - serializing: true, > + stream: true, > input: { > properties: { > store: { schema: DATASTORE_SCHEMA }, > @@ -404,40 +408,137 @@ pub async fn delete_snapshot( > type: BackupNamespace, > optional: true, > }, > - "backup-type": { > + "max-depth": { > + schema: NS_MAX_DEPTH_SCHEMA, > optional: true, > - type: BackupType, > }, > - "backup-id": { > + "content-type": { > optional: true, > - schema: BACKUP_ID_SCHEMA, > + type: ContentType, > }, > }, > }, > - returns: pbs_api_types::ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE, > access: { > permission: &Permission::Anybody, > description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_AUDIT for any \ > or DATASTORE_BACKUP and being the owner of the group", > }, > )] > -/// List backup snapshots. > -pub async fn list_snapshots( > +/// List datastore content, recursively through all namespaces. 
> +pub async fn list_content( > store: String, > ns: Option<BackupNamespace>, > - backup_type: Option<BackupType>, > - backup_id: Option<String>, > + max_depth: Option<usize>, > + content_type: Option<ContentType>, > _param: Value, > _info: &ApiMethod, > rpcenv: &mut dyn RpcEnvironment, > -) -> Result<Vec<SnapshotListItem>, Error> { > +) -> Result<proxmox_router::Stream, Error> { > + let (sender, mut receiver) = tokio::sync::mpsc::channel(128); > + > let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?; > + let user_info = CachedUserInfo::new()?; > > - tokio::task::spawn_blocking(move || unsafe { > - list_snapshots_blocking(store, ns, backup_type, backup_id, auth_id) > - }) > - .await > - .map_err(|err| format_err!("failed to await blocking task: {err}"))? > + let datastore = DataStore::lookup_datastore(&store, Some(Operation::Read))?; > + if !can_access_any_namespace_in_range( > + datastore.clone(), > + &auth_id, > + &user_info, > + ns.clone(), > + max_depth, > + ) { > + proxmox_router::http_bail!(FORBIDDEN, "permission check failed"); > + } > + > + let ns = ns.unwrap_or_default(); > + > + let (list_ns, list_group, list_snapshots) = match content_type { > + Some(ContentType::Namespace) => (true, false, false), > + Some(ContentType::Group) => (false, true, false), > + Some(ContentType::Snapshot) => (false, false, true), > + None => (true, true, true), > + }; > + > + tokio::spawn(async move { As Thomas already pointed out, you don't need to wrap the `spawn_blocking` future as what amounts to an `async { fut.await }`. But also, see the [tokio docs][2] on `JoinHandle`: A JoinHandle detaches the associated task when it is dropped so you can skip the entire above line. [2]: https://docs.rs/tokio/latest/tokio/task/struct.JoinHandle.html > + tokio::task::spawn_blocking(move || { > + if list_ns { > + for ns in datastore.recursive_iter_backup_ns_ok(ns.clone(), max_depth)? 
{ > + match check_ns_privs(&store, &ns, &auth_id, NS_PRIVS_OK) { > + Ok(_) => sender.blocking_send(Record::new(ContentListItem::from( > + NamespaceListItem { ns, comment: None }, > + )))?, > + Err(_) => continue, Note that in the current streaming architecture the records can also be error values, so you may want to consider passing errors through. There are various helpers: `Record::error` (for std errors), `Record::error_msg`, or `Record::error_value` if you need structured errors. > + } > + } > + } > + > + if !list_group && !list_snapshots { > + return Ok(()); > + } > + > + for ns in datastore.recursive_iter_backup_ns_ok(ns, max_depth)? { > + let list_all = match check_ns_privs_full( > + &store, > + &ns, > + &auth_id, > + PRIV_DATASTORE_AUDIT, > + PRIV_DATASTORE_BACKUP, > + ) { > + Ok(requires_owner) => !requires_owner, > + Err(_) => continue, > + }; > + if list_group { > + for group in datastore.iter_backup_groups(ns.clone())? { > + let group = group?; ^ The above 2 `?` may also be cases where it would be better to send a `Record::error`? Unless we want those to cancel the entire rest of the stream? > + let group = backup_group_to_group_list_item( > + datastore.clone(), > + group, > + &ns, > + &auth_id, > + list_all, > + ); > + > + if let Some(group) = group { > + sender.blocking_send(Record::new(ContentListItem::from(( > + ns.clone(), > + group, > + ))))?; > + } > + } > + } > + > + if !list_snapshots { > + continue; > + } > + > + for group in datastore.iter_backup_groups(ns.clone())? { > + let group = group?; ^ Same here > + let owner = match get_group_owner(&store, &ns, &group) { > + Some(auth_id) => auth_id, > + None => continue, > + }; > + for snapshot in group.iter_snapshots()? 
{ > + let snapshot = BackupInfo::new(snapshot?)?; > + let snapshot = backup_info_to_snapshot_list_item(&snapshot, &owner); > + sender.blocking_send(Record::new(ContentListItem::from(( > + ns.clone(), > + snapshot, > + ))))?; > + } > + } > + } > + Ok::<_, Error>(()) > + }) > + .await??; > + > + Ok::<_, Error>(()) > + }); > + Ok(async_stream::try_stream! { > + while let Some(elem) = receiver.recv().await { > + yield elem; > + } > + } > + .into()) You should be able to instead return `ReceiverStream(receiver).into()`, instead of this entire manual loop. ;-) > } > > fn get_group_owner( > @@ -523,6 +624,51 @@ unsafe fn list_snapshots_blocking( > }) > } > > +#[api( > + serializing: true, > + input: { > + properties: { > + store: { schema: DATASTORE_SCHEMA }, > + ns: { > + type: BackupNamespace, > + optional: true, > + }, > + "backup-type": { > + optional: true, > + type: BackupType, > + }, > + "backup-id": { > + optional: true, > + schema: BACKUP_ID_SCHEMA, > + }, > + }, > + }, > + returns: pbs_api_types::ADMIN_DATASTORE_LIST_SNAPSHOTS_RETURN_TYPE, > + access: { > + permission: &Permission::Anybody, > + description: "Requires on /datastore/{store}[/{namespace}] either DATASTORE_AUDIT for any \ > + or DATASTORE_BACKUP and being the owner of the group", > + }, > +)] > +/// List backup snapshots. > +pub async fn list_snapshots( > + store: String, > + ns: Option<BackupNamespace>, > + backup_type: Option<BackupType>, > + backup_id: Option<String>, > + _param: Value, > + _info: &ApiMethod, > + rpcenv: &mut dyn RpcEnvironment, > +) -> Result<Vec<SnapshotListItem>, Error> { > + let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?; > + > + tokio::task::spawn_blocking(move || unsafe { > + list_snapshots_blocking(store, ns, backup_type, backup_id, auth_id) > + }) > + .await > + .map_err(|err| format_err!("failed to await blocking task: {err}"))? 
> +} > + > async fn get_snapshots_count( > store: &Arc<DataStore>, > owner: Option<&Authid>, > @@ -2773,6 +2919,7 @@ const DATASTORE_INFO_SUBDIRS: SubdirMap = &[ > "change-owner", > &Router::new().post(&API_METHOD_SET_BACKUP_OWNER), > ), > + ("content", &Router::new().get(&API_METHOD_LIST_CONTENT)), > ( > "download", > &Router::new().download(&API_METHOD_DOWNLOAD_FILE), > -- > 2.47.3 _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
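[Editorial note: taken together, the review suggestions above amount to three changes — drop the outer `tokio::spawn` wrapper (the dropped `JoinHandle` detaches the task anyway), pass per-item failures through as error records instead of aborting the whole stream, and wrap the receiver in `tokio_stream::wrappers::ReceiverStream` instead of the manual `try_stream!` loop. The resulting producer/consumer shape is sketched here with stdlib threads and a bounded channel rather than tokio; all names are illustrative.]

```rust
// Sketch: a detached producer walks the datastore hierarchy and sends
// one record at a time through a bounded channel; the consumer simply
// drains the receiver until the producer drops its sender.
use std::sync::mpsc::{sync_channel, Receiver};
use std::thread;

#[derive(Debug, PartialEq)]
enum Record {
    Item(String),
    Error(String),
}

fn list_content_stream() -> Receiver<Record> {
    // bounded like tokio::sync::mpsc::channel(128): the producer blocks
    // once the consumer falls 128 records behind (backpressure)
    let (sender, receiver) = sync_channel(128);
    // Dropping the returned JoinHandle does not stop the thread,
    // mirroring tokio's detach-on-drop behaviour for spawned tasks.
    thread::spawn(move || {
        for ns in ["", "ns1", "ns1/inner"] {
            // Per-item failures become error records instead of
            // cancelling the rest of the stream (cf. Record::error
            // in proxmox-router).
            if ns == "ns1/inner" {
                let _ = sender.send(Record::Error(format!("no access to '{ns}'")));
                continue;
            }
            let _ = sender.send(Record::Item(format!("namespace '{ns}'")));
        }
        // sender dropped here -> the receiver's iteration ends
    });
    receiver
}

fn main() {
    let records: Vec<Record> = list_content_stream().into_iter().collect();
    for r in &records {
        println!("{r:?}");
    }
    assert_eq!(records.len(), 3);
}
```

Returning the receiver directly (wrapped in a stream adapter on the tokio side) means the first records reach the client while the directory walk is still running, which is where the latency win over the collecting `snapshot` api comes from.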
* Re: [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content api call 2025-10-07 12:51 ` Wolfgang Bumiller @ 2025-10-07 14:22 ` Thomas Lamprecht 2025-10-07 14:31 ` Wolfgang Bumiller 0 siblings, 1 reply; 21+ messages in thread From: Thomas Lamprecht @ 2025-10-07 14:22 UTC (permalink / raw) To: Proxmox Backup Server development discussion, Wolfgang Bumiller, Dominik Csapak Am 07.10.25 um 14:51 schrieb Wolfgang Bumiller: > And more reasons why [this][1] *really* needs at least *partial* > stabilization so we can switch to `Item` level imports. 🙄 > > [1] https://github.com/rust-lang/rustfmt/issues/4991#issuecomment-3369792412 In practice I'd prefer module level granularity though and instead question if crates like pbs-api-types really do need to re-export everything on a top-level instead of using actual modules there too. Which as a side effect then would also lessen the need for adding ThingListItem type names, which are IMO somewhat needed (or at least will always be something that devs go for) if we want to provide a single global type namespace. _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content api call 2025-10-07 14:22 ` Thomas Lamprecht @ 2025-10-07 14:31 ` Wolfgang Bumiller 2025-10-07 15:05 ` Thomas Lamprecht 0 siblings, 1 reply; 21+ messages in thread From: Wolfgang Bumiller @ 2025-10-07 14:31 UTC (permalink / raw) To: Thomas Lamprecht; +Cc: Proxmox Backup Server development discussion On Tue, Oct 07, 2025 at 04:22:25PM +0200, Thomas Lamprecht wrote: > Am 07.10.25 um 14:51 schrieb Wolfgang Bumiller: > > And more reasons why [this][1] *really* needs at least *partial* > > stabilization so we can switch to `Item` level imports. 🙄 > > > > [1] https://github.com/rust-lang/rustfmt/issues/4991#issuecomment-3369792412 > > In practice I'd prefer module level granularity though and instead > question if crates like pbs-api-types really do need to re-export > everything on a top-level instead of using actual modules there They definitely do *not*. > too. Which as a side effect then would also lessen the need for adding > ThingListItem type names, which are IMO somewhat needed (or at least > will always be something that devs go for) if we want to provide a > single global type namespace. Module level sounds nice in theory, and *could* look fine. I'd definitely love to undo the export mess we have, but as it is now, module level imports_granularity would be an *absolute nightmare* for our code base, while for Item level it wouldn't matter; it would produce the least friction with git merges. _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content api call 2025-10-07 14:31 ` Wolfgang Bumiller @ 2025-10-07 15:05 ` Thomas Lamprecht 0 siblings, 0 replies; 21+ messages in thread From: Thomas Lamprecht @ 2025-10-07 15:05 UTC (permalink / raw) To: Wolfgang Bumiller; +Cc: Proxmox Backup Server development discussion Am 07.10.25 um 16:31 schrieb Wolfgang Bumiller: > Module level sounds nice in theory, and *could* look fine. Great, so we agree on the essentials. > I'd definitely love to undo the export mess we have, but as it is now, > module level imports_granularity would be an *absolute nightmare* for > our code base, while for Item level it wouldn't matter, it would produce > the least friction with git merges. With about 100 to 1k use statement lines at the top of a lot of files though. FWIW, for better merge conflict handling one can also use language aware tooling like mergiraf [0] as git merge driver. Getting a more organized module export has more benefits than just being able to use the module import level granularity in rustfmt (once it's stable). So if mergiraf delivers what it promises, I'd see that as a better stop-gap until we get around to starting an organized effort to clean up the export mess we currently got. [0]: https://mergiraf.org/ _______________________________________________ pbs-devel mailing list pbs-devel@lists.proxmox.com https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel ^ permalink raw reply [flat|nested] 21+ messages in thread
end of thread, other threads:[~2025-10-08 6:42 UTC | newest] Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2025-10-03 8:50 [pbs-devel] [PATCH proxmox{, -backup} 0/7] introduce streaming content api call Dominik Csapak 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox 1/1] pbs-api-types: add api types for " Dominik Csapak 2025-10-07 8:59 ` Wolfgang Bumiller 2025-10-08 6:41 ` Dominik Csapak 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 1/6] backup: hierarchy: add new can_access_any_namespace_in_range helper Dominik Csapak 2025-10-03 9:52 ` Thomas Lamprecht 2025-10-03 10:10 ` Dominik Csapak 2025-10-03 10:21 ` Thomas Lamprecht 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 2/6] backup: hierarchy: reuse 'NS_PRIVS_OK' for namespace helper Dominik Csapak 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 3/6] api: admin: datastore: refactor BackupGroup to GroupListItem conversion Dominik Csapak 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 4/6] api: admin: datastore: factor out 'get_group_owner' Dominik Csapak 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 5/6] api: admin: datastore: optimize `groups` api call Dominik Csapak 2025-10-03 10:18 ` Thomas Lamprecht 2025-10-03 10:51 ` Dominik Csapak 2025-10-03 12:37 ` Thomas Lamprecht 2025-10-03 8:50 ` [pbs-devel] [PATCH proxmox-backup 6/6] api: admin: datastore: implement streaming content " Dominik Csapak 2025-10-03 11:55 ` Thomas Lamprecht 2025-10-07 12:51 ` Wolfgang Bumiller 2025-10-07 14:22 ` Thomas Lamprecht 2025-10-07 14:31 ` Wolfgang Bumiller 2025-10-07 15:05 ` Thomas Lamprecht
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox