* [pdm-devel] [PATCH proxmox-datacenter-manager v2 0/3] add helper to fetch from many remote nodes in parallel
From: Lukas Wagner @ 2025-08-29 14:10 UTC
To: pdm-devel
proxmox-datacenter-manager:
Lukas Wagner (3):
server: add convenience helper to fetch results from many remote nodes
in parallel
server: add helper for fetching from multiple remotes at once
remote task fetching: use ParallelFetcher helper
.../tasks/remote_tasks.rs | 238 +++++---------
server/src/lib.rs | 1 +
server/src/parallel_fetcher.rs | 292 ++++++++++++++++++
3 files changed, 368 insertions(+), 163 deletions(-)
create mode 100644 server/src/parallel_fetcher.rs
Summary over all repositories:
3 files changed, 368 insertions(+), 163 deletions(-)
--
Generated by murpp 0.9.0
* [pdm-devel] [PATCH proxmox-datacenter-manager v2 1/3] server: add convenience helper to fetch results from many remote nodes in parallel
From: Lukas Wagner @ 2025-08-29 14:10 UTC
To: pdm-devel
Many parts of PDM need to fetch some kind of resource from all remotes
and from all nodes of these remotes; implementing this over and over
again is a lot of duplicated work.
The general approach is the same as for remote task fetching (a JoinSet
plus semaphores to control concurrency); remote task fetching itself
will be refactored in a later commit to use this new helper.
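To illustrate, a minimal usage sketch (the unit context and the closure
body are made up for this example; the actual API is the one added in
the patch below):

    use anyhow::Error;
    use pdm_api_types::remotes::Remote;
    use server::parallel_fetcher::{NodeResults, ParallelFetcher};

    async fn example(remotes: Vec<Remote>) {
        // The context is generic and cloned into every fetch future;
        // use () if no shared state is needed.
        let fetcher = ParallelFetcher::new(());

        // The closure is invoked once per (remote, node) pair, with
        // concurrency bounded by the helper's semaphores.
        let results = fetcher
            .do_for_all_remote_nodes(remotes.into_iter(), |_ctx, remote, node| async move {
                Ok::<_, Error>(format!("fetched from {}/{node}", remote.id))
            })
            .await;

        for (remote_name, remote_result) in results.remote_results {
            match remote_result {
                Ok(remote_result) => {
                    for (node_name, node_result) in remote_result.node_results {
                        match node_result {
                            Ok(NodeResults { data, api_response_time }) => log::info!(
                                "{remote_name}/{node_name}: {data} ({api_response_time:?})"
                            ),
                            Err(err) => log::error!("{remote_name}/{node_name}: {err:#}"),
                        }
                    }
                }
                Err(err) => log::error!("{remote_name}: {err:#}"),
            }
        }
    }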
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
server/src/lib.rs | 1 +
server/src/parallel_fetcher.rs | 243 +++++++++++++++++++++++++++++++++
2 files changed, 244 insertions(+)
create mode 100644 server/src/parallel_fetcher.rs
diff --git a/server/src/lib.rs b/server/src/lib.rs
index 485bc792..3f8b7708 100644
--- a/server/src/lib.rs
+++ b/server/src/lib.rs
@@ -6,6 +6,7 @@ pub mod auth;
pub mod context;
pub mod env;
pub mod metric_collection;
+pub mod parallel_fetcher;
pub mod remote_cache;
pub mod remote_tasks;
pub mod resource_cache;
diff --git a/server/src/parallel_fetcher.rs b/server/src/parallel_fetcher.rs
new file mode 100644
index 00000000..e4a5c106
--- /dev/null
+++ b/server/src/parallel_fetcher.rs
@@ -0,0 +1,243 @@
+use std::{
+ collections::HashMap,
+ fmt::Debug,
+ future::Future,
+ sync::Arc,
+ time::{Duration, Instant},
+};
+
+use anyhow::Error;
+use pdm_api_types::remotes::{Remote, RemoteType};
+use pve_api_types::ClusterNodeIndexResponse;
+use tokio::{
+ sync::{OwnedSemaphorePermit, Semaphore},
+ task::JoinSet,
+};
+
+use crate::connection;
+
+pub const DEFAULT_MAX_CONNECTIONS: usize = 20;
+pub const DEFAULT_MAX_CONNECTIONS_PER_REMOTE: usize = 5;
+
+pub struct ParallelFetcher<C> {
+ pub max_connections: usize,
+ pub max_connections_per_remote: usize,
+ pub context: C,
+}
+
+pub struct FetchResults<T> {
+ /// Per-remote results. The key in the map is the remote name.
+ pub remote_results: HashMap<String, Result<RemoteResult<T>, Error>>,
+}
+
+impl<T> Default for FetchResults<T> {
+ fn default() -> Self {
+ Self {
+ remote_results: Default::default(),
+ }
+ }
+}
+
+#[derive(Debug)]
+pub struct RemoteResult<T> {
+ /// Per-node results. The key in the map is the node name.
+ pub node_results: HashMap<String, Result<NodeResults<T>, Error>>,
+}
+
+impl<T> Default for RemoteResult<T> {
+ fn default() -> Self {
+ Self {
+ node_results: Default::default(),
+ }
+ }
+}
+
+#[derive(Debug)]
+pub struct NodeResults<T> {
+ /// The data returned from the passed function.
+ pub data: T,
+ /// Time needed waiting for the passed function to return.
+ pub api_response_time: Duration,
+}
+
+impl<C: Clone + Send + 'static> ParallelFetcher<C> {
+ pub fn new(context: C) -> Self {
+ Self {
+ max_connections: DEFAULT_MAX_CONNECTIONS,
+ max_connections_per_remote: DEFAULT_MAX_CONNECTIONS_PER_REMOTE,
+ context,
+ }
+ }
+
+ pub async fn do_for_all_remote_nodes<A, F, T, Ft>(self, remotes: A, func: F) -> FetchResults<T>
+ where
+ A: Iterator<Item = Remote>,
+ F: Fn(C, Remote, String) -> Ft + Clone + Send + 'static,
+ Ft: Future<Output = Result<T, Error>> + Send + 'static,
+ T: Send + Debug + 'static,
+ {
+ let total_connections_semaphore = Arc::new(Semaphore::new(self.max_connections));
+
+ let mut remote_join_set = JoinSet::new();
+
+ for remote in remotes {
+ let semaphore = Arc::clone(&total_connections_semaphore);
+
+ let f = func.clone();
+
+ remote_join_set.spawn(Self::fetch_remote(
+ remote,
+ self.context.clone(),
+ semaphore,
+ f,
+ self.max_connections_per_remote,
+ ));
+ }
+
+ let mut results = FetchResults::default();
+
+ while let Some(a) = remote_join_set.join_next().await {
+ match a {
+ Ok((remote_name, remote_result)) => {
+ results.remote_results.insert(remote_name, remote_result);
+ }
+ Err(err) => {
+ log::error!("join error when waiting for future: {err}")
+ }
+ }
+ }
+
+ results
+ }
+
+ async fn fetch_remote<F, Ft, T>(
+ remote: Remote,
+ context: C,
+ semaphore: Arc<Semaphore>,
+ func: F,
+ max_connections_per_remote: usize,
+ ) -> (String, Result<RemoteResult<T>, Error>)
+ where
+ F: Fn(C, Remote, String) -> Ft + Clone + Send + 'static,
+ Ft: Future<Output = Result<T, Error>> + Send + 'static,
+ T: Send + Debug + 'static,
+ {
+ let mut per_remote_results = RemoteResult::default();
+
+ let mut permit = Some(Arc::clone(&semaphore).acquire_owned().await.unwrap());
+ let per_remote_semaphore = Arc::new(Semaphore::new(max_connections_per_remote));
+
+ match remote.ty {
+ RemoteType::Pve => {
+ let remote_clone = remote.clone();
+
+ let nodes = match async move {
+ let client = connection::make_pve_client(&remote_clone)?;
+ let nodes = client.list_nodes().await?;
+
+ Ok::<Vec<ClusterNodeIndexResponse>, Error>(nodes)
+ }
+ .await
+ {
+ Ok(nodes) => nodes,
+ Err(err) => return (remote.id.clone(), Err(err)),
+ };
+
+ let mut nodes_join_set = JoinSet::new();
+
+ for node in nodes {
+ let permit = if let Some(permit) = permit.take() {
+ permit
+ } else {
+ Arc::clone(&semaphore).acquire_owned().await.unwrap()
+ };
+
+ let per_remote_connections_permit = Arc::clone(&per_remote_semaphore)
+ .acquire_owned()
+ .await
+ .unwrap();
+
+ let func_clone = func.clone();
+ let remote_clone = remote.clone();
+ let node_name = node.node.clone();
+ let context_clone = context.clone();
+
+ nodes_join_set.spawn(Self::fetch_pve_node(
+ func_clone,
+ context_clone,
+ remote_clone,
+ node_name,
+ permit,
+ per_remote_connections_permit,
+ ));
+ }
+
+ while let Some(join_result) = nodes_join_set.join_next().await {
+ match join_result {
+ Ok((node_name, per_node_result)) => {
+ per_remote_results
+ .node_results
+ .insert(node_name, per_node_result);
+ }
+ Err(e) => {
+ log::error!("join error when waiting for future: {e}")
+ }
+ }
+ }
+ }
+ RemoteType::Pbs => {
+ let node = "localhost".to_string();
+
+ let now = Instant::now();
+ let result = func(context, remote.clone(), node.clone()).await;
+ let api_response_time = now.elapsed();
+
+ match result {
+ Ok(data) => {
+ per_remote_results.node_results.insert(
+ node,
+ Ok(NodeResults {
+ data,
+ api_response_time,
+ }),
+ );
+ }
+ Err(err) => {
+ per_remote_results.node_results.insert(node, Err(err));
+ }
+ }
+ }
+ }
+
+ (remote.id, Ok(per_remote_results))
+ }
+
+ async fn fetch_pve_node<F, Ft, T>(
+ func: F,
+ context: C,
+ remote: Remote,
+ node: String,
+ _permit: OwnedSemaphorePermit,
+ _per_remote_connections_permit: OwnedSemaphorePermit,
+ ) -> (String, Result<NodeResults<T>, Error>)
+ where
+ F: Fn(C, Remote, String) -> Ft + Clone + Send + 'static,
+ Ft: Future<Output = Result<T, Error>> + Send + 'static,
+ T: Send + Debug + 'static,
+ {
+ let now = Instant::now();
+ let result = func(context, remote.clone(), node.clone()).await;
+ let api_response_time = now.elapsed();
+
+ match result {
+ Ok(data) => (
+ node,
+ Ok(NodeResults {
+ data,
+ api_response_time,
+ }),
+ ),
+ Err(err) => (node, Err(err)),
+ }
+ }
+}
--
2.47.2
* [pdm-devel] [PATCH proxmox-datacenter-manager v2 2/3] server: add helper for fetching from multiple remotes at once
From: Lukas Wagner @ 2025-08-29 14:10 UTC
To: pdm-devel
There is already a helper for fetching results from all remotes and all
of their nodes. In some cases (e.g. SDN) it is sufficient to fetch the
result of the API call once per remote, from any single node. Add a
second helper that executes the API request only once per remote,
instead of once per node.
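For illustration, a minimal sketch of the intended usage, assuming a
`remotes: Vec<Remote>` in scope (the closure body is made up; per this
patch, the node name passed to the closure is always "localhost"):

    let results = ParallelFetcher::new(())
        .do_for_all_remotes(remotes.into_iter(), |_ctx, remote, _node| async move {
            // Exactly one request per remote, e.g. an SDN query (hypothetical).
            Ok::<_, Error>(remote.id)
        })
        .await;

    // As with do_for_all_remote_nodes, results.remote_results maps the
    // remote name to per-node results - here with a single "localhost" entry.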
Originally-by: Stefan Hanreich <s.hanreich@proxmox.com>
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
Notes:
Changes since Stefan's RFC:
- always use 'localhost' node, instead of getting a node list first and using
the first node
- Some minor changes to make the code nicer/shorter
server/src/parallel_fetcher.rs | 93 ++++++++++++++++++++++++++--------
1 file changed, 71 insertions(+), 22 deletions(-)
diff --git a/server/src/parallel_fetcher.rs b/server/src/parallel_fetcher.rs
index e4a5c106..58ca1f55 100644
--- a/server/src/parallel_fetcher.rs
+++ b/server/src/parallel_fetcher.rs
@@ -162,13 +162,13 @@ impl<C: Clone + Send + 'static> ParallelFetcher<C> {
let node_name = node.node.clone();
let context_clone = context.clone();
- nodes_join_set.spawn(Self::fetch_pve_node(
+ nodes_join_set.spawn(Self::fetch_node(
func_clone,
context_clone,
remote_clone,
node_name,
permit,
- per_remote_connections_permit,
+ Some(per_remote_connections_permit),
));
}
@@ -186,39 +186,33 @@ impl<C: Clone + Send + 'static> ParallelFetcher<C> {
}
}
RemoteType::Pbs => {
- let node = "localhost".to_string();
-
- let now = Instant::now();
- let result = func(context, remote.clone(), node.clone()).await;
- let api_response_time = now.elapsed();
+ let (nodename, result) = Self::fetch_node(
+ func,
+ context,
+ remote.clone(),
+ "localhost".into(),
+ permit.unwrap(), // Always set to `Some` at this point
+ None,
+ )
+ .await;
match result {
- Ok(data) => {
- per_remote_results.node_results.insert(
- node,
- Ok(NodeResults {
- data,
- api_response_time,
- }),
- );
- }
- Err(err) => {
- per_remote_results.node_results.insert(node, Err(err));
- }
- }
+ Ok(a) => per_remote_results.node_results.insert(nodename, Ok(a)),
+ Err(err) => per_remote_results.node_results.insert(nodename, Err(err)),
+ };
}
}
(remote.id, Ok(per_remote_results))
}
- async fn fetch_pve_node<F, Ft, T>(
+ async fn fetch_node<F, Ft, T>(
func: F,
context: C,
remote: Remote,
node: String,
_permit: OwnedSemaphorePermit,
- _per_remote_connections_permit: OwnedSemaphorePermit,
+ _per_remote_connections_permit: Option<OwnedSemaphorePermit>,
) -> (String, Result<NodeResults<T>, Error>)
where
F: Fn(C, Remote, String) -> Ft + Clone + Send + 'static,
@@ -240,4 +234,59 @@ impl<C: Clone + Send + 'static> ParallelFetcher<C> {
Err(err) => (node, Err(err)),
}
}
+
+ pub async fn do_for_all_remotes<A, F, T, Ft>(self, remotes: A, func: F) -> FetchResults<T>
+ where
+ A: Iterator<Item = Remote>,
+ F: Fn(C, Remote, String) -> Ft + Clone + Send + 'static,
+ Ft: Future<Output = Result<T, Error>> + Send + 'static,
+ T: Send + Debug + 'static,
+ {
+ let total_connections_semaphore = Arc::new(Semaphore::new(self.max_connections));
+
+ let mut node_join_set = JoinSet::new();
+ let mut results = FetchResults::default();
+
+ for remote in remotes {
+ let total_connections_semaphore = total_connections_semaphore.clone();
+
+ let remote_id = remote.id.clone();
+ let context = self.context.clone();
+ let func = func.clone();
+
+ node_join_set.spawn(async move {
+ let permit = total_connections_semaphore.acquire_owned().await.unwrap();
+
+ (
+ remote_id,
+ Self::fetch_node(func, context, remote, "localhost".into(), permit, None).await,
+ )
+ });
+ }
+
+ while let Some(a) = node_join_set.join_next().await {
+ match a {
+ Ok((remote_id, (node_id, node_result))) => {
+ let mut node_results = HashMap::new();
+ node_results.insert(node_id, node_result);
+
+ let remote_result = RemoteResult { node_results };
+
+ if results
+ .remote_results
+ .insert(remote_id, Ok(remote_result))
+ .is_some()
+ {
+ // should never happen, but log for good measure if it actually does
+ log::warn!("made multiple requests for a remote!");
+ }
+ }
+ Err(err) => {
+ log::error!("join error when waiting for future: {err}")
+ }
+ }
+ }
+
+ results
+ }
}
--
2.47.2
* [pdm-devel] [PATCH proxmox-datacenter-manager v2 3/3] remote task fetching: use ParallelFetcher helper
From: Lukas Wagner @ 2025-08-29 14:10 UTC
To: pdm-devel
This allows us to simplify the fetching logic quite a bit.
Signed-off-by: Lukas Wagner <l.wagner@proxmox.com>
---
Notes:
Changes since the RFC:
- drop the map_err/format_err, since the remote/node info is
handled elsewhere now
- Use a single filter_map instead of map *and* filter_map
- Chain the filter_map to the get_task_list call, which might save some
  allocations
.../tasks/remote_tasks.rs | 238 ++++++------------
1 file changed, 75 insertions(+), 163 deletions(-)
diff --git a/server/src/bin/proxmox-datacenter-api/tasks/remote_tasks.rs b/server/src/bin/proxmox-datacenter-api/tasks/remote_tasks.rs
index 04c51dac..e97f2a08 100644
--- a/server/src/bin/proxmox-datacenter-api/tasks/remote_tasks.rs
+++ b/server/src/bin/proxmox-datacenter-api/tasks/remote_tasks.rs
@@ -4,7 +4,7 @@ use std::{
time::{Duration, Instant},
};
-use anyhow::{format_err, Error};
+use anyhow::Error;
use nix::sys::stat::Mode;
use tokio::{sync::Semaphore, task::JoinSet};
@@ -17,6 +17,7 @@ use pve_api_types::{ListTasks, ListTasksResponse, ListTasksSource};
use server::{
api::pve,
+ parallel_fetcher::{NodeResults, ParallelFetcher},
remote_tasks::{
self,
task_cache::{NodeFetchSuccessMap, State, TaskCache, TaskCacheItem},
@@ -185,12 +186,7 @@ async fn do_tick(task_state: &mut TaskState) -> Result<(), Error> {
get_remotes_with_finished_tasks(&remote_config, &poll_results)
};
- let (all_tasks, update_state_for_remote) = fetch_remotes(
- remotes,
- Arc::new(cache_state),
- Arc::clone(&total_connections_semaphore),
- )
- .await;
+ let (all_tasks, update_state_for_remote) = fetch_remotes(remotes, Arc::new(cache_state)).await;
if !all_tasks.is_empty() {
update_task_cache(cache, all_tasks, update_state_for_remote, poll_results).await?;
@@ -221,42 +217,87 @@ async fn init_cache() -> Result<(), Error> {
async fn fetch_remotes(
remotes: Vec<Remote>,
cache_state: Arc<State>,
- total_connections_semaphore: Arc<Semaphore>,
) -> (Vec<TaskCacheItem>, NodeFetchSuccessMap) {
- let mut join_set = JoinSet::new();
+ let fetcher = ParallelFetcher {
+ max_connections: MAX_CONNECTIONS,
+ max_connections_per_remote: CONNECTIONS_PER_PVE_REMOTE,
+ context: cache_state,
+ };
- for remote in remotes {
- let semaphore = Arc::clone(&total_connections_semaphore);
- let state_clone = Arc::clone(&cache_state);
-
- join_set.spawn(async move {
- log::debug!("fetching remote tasks for '{}'", remote.id);
- fetch_tasks(&remote, state_clone, semaphore)
- .await
- .map_err(|err| {
- format_err!("could not fetch tasks from remote '{}': {err}", remote.id)
- })
- });
- }
+ let fetch_results = fetcher
+ .do_for_all_remote_nodes(remotes.into_iter(), fetch_tasks_from_single_node)
+ .await;
let mut all_tasks = Vec::new();
- let mut update_state_for_remote = NodeFetchSuccessMap::default();
+ let mut node_success_map = NodeFetchSuccessMap::default();
- while let Some(res) = join_set.join_next().await {
- match res {
- Ok(Ok(FetchedTasks {
- tasks,
- node_results,
- })) => {
- all_tasks.extend(tasks);
- update_state_for_remote.merge(node_results);
+ for (remote_name, result) in fetch_results.remote_results {
+ match result {
+ Ok(remote_result) => {
+ for (node_name, node_result) in remote_result.node_results {
+ match node_result {
+ Ok(NodeResults { data, .. }) => {
+ all_tasks.extend(data);
+ node_success_map.set_node_success(remote_name.clone(), node_name);
+ }
+ Err(err) => {
+ log::error!("could not fetch tasks from remote '{remote_name}', node {node_name}: {err:#}");
+ }
+ }
+ }
+ }
+ Err(err) => {
+ log::error!("could not fetch tasks from remote '{remote_name}': {err:#}");
}
- Ok(Err(err)) => log::error!("{err:#}"),
- Err(err) => log::error!("could not join task fetching future: {err:#}"),
}
}
- (all_tasks, update_state_for_remote)
+ (all_tasks, node_success_map)
+}
+
+async fn fetch_tasks_from_single_node(
+ context: Arc<State>,
+ remote: Remote,
+ node: String,
+) -> Result<Vec<TaskCacheItem>, Error> {
+ match remote.ty {
+ RemoteType::Pve => {
+ let since = context
+ .cutoff_timestamp(&remote.id, &node)
+ .unwrap_or_else(|| {
+ proxmox_time::epoch_i64() - (KEEP_OLD_FILES as u64 * ROTATE_AFTER) as i64
+ });
+
+ let params = ListTasks {
+ source: Some(ListTasksSource::Archive),
+ since: Some(since),
+ // If `limit` is not provided, we only receive 50 tasks
+ limit: Some(MAX_TASKS_TO_FETCH),
+ ..Default::default()
+ };
+
+ let client = pve::connect(&remote)?;
+
+ let task_list = client
+ .get_task_list(&node, params)
+ .await?
+ .into_iter()
+ .filter_map(|task| match map_pve_task(task, &remote.id) {
+ Ok(task) => Some(task),
+ Err(err) => {
+ log::error!("could not map PVE task: {err:#}");
+ None
+ }
+ })
+ .collect();
+
+ Ok(task_list)
+ }
+ RemoteType::Pbs => {
+ // TODO: Support PBS.
+ Ok(vec![])
+ }
+ }
}
/// Return all remotes from the given config.
@@ -301,135 +342,6 @@ async fn apply_journal(cache: TaskCache) -> Result<(), Error> {
tokio::task::spawn_blocking(move || cache.write()?.apply_journal()).await?
}
-/// Fetched tasks from a single remote.
-struct FetchedTasks {
- /// List of tasks.
- tasks: Vec<TaskCacheItem>,
- /// Contains whether a cluster node was fetched successfully.
- node_results: NodeFetchSuccessMap,
-}
-
-/// Fetch tasks (active and finished) from a remote.
-async fn fetch_tasks(
- remote: &Remote,
- state: Arc<State>,
- total_connections_semaphore: Arc<Semaphore>,
-) -> Result<FetchedTasks, Error> {
- let mut tasks = Vec::new();
-
- let mut node_results = NodeFetchSuccessMap::default();
-
- match remote.ty {
- RemoteType::Pve => {
- let client = pve::connect(remote)?;
-
- let nodes = {
- // This permit *must* be dropped before we acquire the permits for the
- // per-node connections - otherwise we risk a deadlock.
- let _permit = total_connections_semaphore.acquire().await.unwrap();
- client.list_nodes().await?
- };
-
- // This second semaphore is used to limit the number of concurrent connections
- // *per remote*, not in total.
- let per_remote_semaphore = Arc::new(Semaphore::new(CONNECTIONS_PER_PVE_REMOTE));
- let mut join_set = JoinSet::new();
-
- for node in nodes {
- let node_name = node.node.to_string();
-
- let since = state
- .cutoff_timestamp(&remote.id, &node_name)
- .unwrap_or_else(|| {
- proxmox_time::epoch_i64() - (KEEP_OLD_FILES as u64 * ROTATE_AFTER) as i64
- });
-
- let params = ListTasks {
- source: Some(ListTasksSource::Archive),
- since: Some(since),
- // If `limit` is not provided, we only receive 50 tasks
- limit: Some(MAX_TASKS_TO_FETCH),
- ..Default::default()
- };
-
- let per_remote_permit = Arc::clone(&per_remote_semaphore)
- .acquire_owned()
- .await
- .unwrap();
-
- let total_connections_permit = Arc::clone(&total_connections_semaphore)
- .acquire_owned()
- .await
- .unwrap();
-
- let remote_clone = remote.clone();
-
- join_set.spawn(async move {
- let res = async {
- let client = pve::connect(&remote_clone)?;
- let task_list =
- client
- .get_task_list(&node.node, params)
- .await
- .map_err(|err| {
- format_err!(
- "remote '{}', node '{}': {err}",
- remote_clone.id,
- node.node
- )
- })?;
- Ok::<Vec<_>, Error>(task_list)
- }
- .await;
-
- drop(total_connections_permit);
- drop(per_remote_permit);
-
- (node_name, res)
- });
- }
-
- while let Some(result) = join_set.join_next().await {
- match result {
- Ok((node_name, result)) => match result {
- Ok(task_list) => {
- let mapped =
- task_list.into_iter().filter_map(|task| {
- match map_pve_task(task, &remote.id) {
- Ok(task) => Some(task),
- Err(err) => {
- log::error!(
- "could not map task data, skipping: {err:#}"
- );
- None
- }
- }
- });
-
- tasks.extend(mapped);
- node_results.set_node_success(remote.id.clone(), node_name);
- }
- Err(error) => {
- log::error!("could not fetch tasks: {error:#}");
- }
- },
- Err(error) => {
- log::error!("could not join task fetching task: {error:#}");
- }
- }
- }
- }
- RemoteType::Pbs => {
- // TODO: Add code for PBS
- }
- }
-
- Ok(FetchedTasks {
- tasks,
- node_results,
- })
-}
-
#[derive(PartialEq, Debug)]
/// Outcome from polling a tracked task.
enum PollResult {
--
2.47.2
* [pdm-devel] applied: [PATCH proxmox-datacenter-manager v2 0/3] add helper to fetch from many remote nodes in parallel
From: Dominik Csapak @ 2025-09-01 10:04 UTC
To: Proxmox Datacenter Manager development discussion, Lukas Wagner
On 8/29/25 4:10 PM, Lukas Wagner wrote:
> Lukas Wagner (3):
>   server: add convenience helper to fetch results from many remote nodes
>     in parallel
>   server: add helper for fetching from multiple remotes at once
>   remote task fetching: use ParallelFetcher helper
> [...]
applied series, thanks!