public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox{, -backup} 0/5] parallelize chunk reads in verification
@ 2025-11-05 15:51 Nicolas Frey
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox 1/1] pbs-api-types: jobs: verify: add worker-threads to VerificationJobConfig Nicolas Frey
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Nicolas Frey @ 2025-11-05 15:51 UTC (permalink / raw)
  To: pbs-devel

This patch series expands on Dominik's series [0], written for
PBS 3, by parallelizing chunk reads in `VerifyWorker` with a thread
pool separate from the verification one.

The number of threads was previously hard-coded, but is now
configurable via the API and GUI through a new `worker-threads`
property, similar to tape backup jobs.
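
For illustration, with the new property set, a verify job could look
roughly like this in the job configuration (section-config style; the
job ID, store, and schedule below are made up):

    verification: daily-verify
        store datastore1
        schedule daily
        worker-threads 4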

For now, `worker-threads` only controls the thread pool used for
reading. If it turns out to make sense to reuse the setting for the
verification pool as well, it could be extended to cover that too.
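
As a condensed sketch of the resulting shape (adapted from the patches
below; error counting and abort checks are elided, and `decode_fn`
stands in for the existing decoder closure):

    // CPU-bound pool: decodes and digests chunks (unchanged, 4 threads)
    let decoder_pool = ParallelHandler::new("verify chunk decoder", 4, decode_fn);

    // IO-bound pool: loads chunks and forwards them to the decoder pool
    let reader_pool =
        ParallelHandler::new("read chunks", worker_threads.unwrap_or(4), {
            let to_decoder = decoder_pool.channel();
            let datastore = Arc::clone(&datastore);
            move |info: ChunkReadInfo| {
                let chunk = datastore.load_chunk(&info.digest)?; // blocking read
                to_decoder.send((chunk, info.digest, info.size()))
            }
        });

    for info in chunk_read_infos {
        reader_pool.send(info)?; // up to worker-threads reads in flight
    }
    reader_pool.complete()?;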

In my local tests I measured the following speed-up when verifying a
single snapshot of ~32 GiB (4x the RAM size) on 4 cores:

1 thread:    ~440 MiB/s
2 threads:   ~780 MiB/s
4 threads:   ~1140 MiB/s

[0] https://lore.proxmox.com/pbs-devel/20250707132706.2854973-1-d.csapak@proxmox.com/#t

proxmox:

Nicolas Frey (1):
  pbs-api-types: jobs: verify: add worker-threads to
    VerificationJobConfig

 pbs-api-types/src/jobs.rs | 10 ++++++++++
 1 file changed, 10 insertions(+)


proxmox-backup:

Nicolas Frey (4):
  api: verify: move chunk loading into parallel handler
  api: verify: use worker-threads to determine the number of threads to
    use
  api: verify: add worker-threads to update endpoint
  ui: verify: add option to set number of threads for job

 src/api2/admin/datastore.rs    |  13 +++-
 src/api2/backup/environment.rs |   2 +-
 src/api2/config/verify.rs      |   8 +++
 src/backup/verify.rs           | 123 +++++++++++++++++++++------------
 src/server/verify_job.rs       |   3 +-
 www/window/VerifyAll.js        |  12 ++++
 www/window/VerifyJobEdit.js    |  13 ++++
 7 files changed, 125 insertions(+), 49 deletions(-)


Summary over all repositories:
  8 files changed, 135 insertions(+), 49 deletions(-)

-- 
Generated by git-murpp 0.8.1


* [pbs-devel] [PATCH proxmox 1/1] pbs-api-types: jobs: verify: add worker-threads to VerificationJobConfig
  2025-11-05 15:51 [pbs-devel] [PATCH proxmox{, -backup} 0/5] parallelize chunk reads in verification Nicolas Frey
@ 2025-11-05 15:51 ` Nicolas Frey
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox-backup 1/4] api: verify: move chunk loading into parallel handler Nicolas Frey
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Nicolas Frey @ 2025-11-05 15:51 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
---
 pbs-api-types/src/jobs.rs | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/pbs-api-types/src/jobs.rs b/pbs-api-types/src/jobs.rs
index 4dbbef2b..d904f797 100644
--- a/pbs-api-types/src/jobs.rs
+++ b/pbs-api-types/src/jobs.rs
@@ -203,6 +203,13 @@ pub const VERIFICATION_OUTDATED_AFTER_SCHEMA: Schema =
             optional: true,
             schema: crate::NS_MAX_DEPTH_SCHEMA,
         },
+        "worker-threads": {
+            type: Integer,
+            optional: true,
+            minimum: 1,
+            maximum: 32,
+            default: 1,
+        },
     }
 )]
 #[derive(Serialize, Deserialize, Updater, Clone, PartialEq)]
@@ -221,6 +228,9 @@ pub struct VerificationJobConfig {
     #[serde(skip_serializing_if = "Option::is_none")]
     /// Reverify snapshots after X days, never if 0. Ignored if 'ignore_verified' is false.
     pub outdated_after: Option<i64>,
+    /// Set the number of worker threads to use for the job
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub worker_threads: Option<usize>,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub comment: Option<String>,
     #[serde(skip_serializing_if = "Option::is_none")]
-- 
2.47.3



* [pbs-devel] [PATCH proxmox-backup 1/4] api: verify: move chunk loading into parallel handler
  2025-11-05 15:51 [pbs-devel] [PATCH proxmox{, -backup} 0/5] parallelize chunk reads in verification Nicolas Frey
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox 1/1] pbs-api-types: jobs: verify: add worker-threads to VerificationJobConfig Nicolas Frey
@ 2025-11-05 15:51 ` Nicolas Frey
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox-backup 2/4] api: verify: use worker-threads to determine the number of threads to use Nicolas Frey
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Nicolas Frey @ 2025-11-05 15:51 UTC (permalink / raw)
  To: pbs-devel

This way, chunks are loaded in parallel in addition to being
verified in parallel.

Depending on the underlying storage, this can speed up reading chunks
from disk, especially when the storage benefits from a higher IO
depth and the CPU is faster than the storage.
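
The net effect on the hot loop is that the per-chunk work turns from a
direct call into a send onto the new reader pool (sketch; the actual
error handling is as in the diff below):

    // before: the chunk is loaded synchronously on the verify thread
    self.verify_chunk_by_backend(&info, &mut read_bytes, &mut decoded_bytes, ...)?;

    // after: the verify thread only queues the read; a reader thread
    // loads the chunk and forwards it to the decoder pool
    reader_pool.send(info)?;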

Originally-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
---
 src/backup/verify.rs | 120 +++++++++++++++++++++++++++----------------
 1 file changed, 75 insertions(+), 45 deletions(-)

diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index e33fdf50..7f91f38c 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -1,6 +1,6 @@
 use pbs_config::BackupLockGuard;
 use std::collections::HashSet;
-use std::sync::atomic::{AtomicUsize, Ordering};
+use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
 use std::sync::{Arc, Mutex};
 use std::time::Instant;
 
@@ -20,7 +20,7 @@ use pbs_datastore::index::{ChunkReadInfo, IndexFile};
 use pbs_datastore::manifest::{BackupManifest, FileInfo};
 use pbs_datastore::{DataBlob, DataStore, DatastoreBackend, StoreProgress};
 
-use crate::tools::parallel_handler::ParallelHandler;
+use crate::tools::parallel_handler::{ParallelHandler, SendHandle};
 
 use crate::backup::hierarchy::ListAccessibleBackupGroups;
 
@@ -156,23 +156,20 @@ impl VerifyWorker {
 
         let start_time = Instant::now();
 
-        let mut read_bytes = 0;
-        let mut decoded_bytes = 0;
+        let read_bytes = Arc::new(AtomicU64::new(0));
+        let decoded_bytes = Arc::new(AtomicU64::new(0));
 
-        let datastore2 = Arc::clone(&self.datastore);
-        let corrupt_chunks2 = Arc::clone(&self.corrupt_chunks);
-        let verified_chunks2 = Arc::clone(&self.verified_chunks);
-        let errors2 = Arc::clone(&errors);
-
-        let decoder_pool = ParallelHandler::new(
-            "verify chunk decoder",
-            4,
+        let decoder_pool = ParallelHandler::new("verify chunk decoder", 4, {
+            let datastore = Arc::clone(&self.datastore);
+            let corrupt_chunks = Arc::clone(&self.corrupt_chunks);
+            let verified_chunks = Arc::clone(&self.verified_chunks);
+            let errors = Arc::clone(&errors);
             move |(chunk, digest, size): (DataBlob, [u8; 32], u64)| {
                 let chunk_crypt_mode = match chunk.crypt_mode() {
                     Err(err) => {
-                        corrupt_chunks2.lock().unwrap().insert(digest);
+                        corrupt_chunks.lock().unwrap().insert(digest);
                         info!("can't verify chunk, unknown CryptMode - {err}");
-                        errors2.fetch_add(1, Ordering::SeqCst);
+                        errors.fetch_add(1, Ordering::SeqCst);
                         return Ok(());
                     }
                     Ok(mode) => mode,
@@ -182,21 +179,21 @@ impl VerifyWorker {
                     info!(
                     "chunk CryptMode {chunk_crypt_mode:?} does not match index CryptMode {crypt_mode:?}"
                 );
-                    errors2.fetch_add(1, Ordering::SeqCst);
+                    errors.fetch_add(1, Ordering::SeqCst);
                 }
 
                 if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
-                    corrupt_chunks2.lock().unwrap().insert(digest);
+                    corrupt_chunks.lock().unwrap().insert(digest);
                     info!("{err}");
-                    errors2.fetch_add(1, Ordering::SeqCst);
-                    Self::rename_corrupted_chunk(datastore2.clone(), &digest);
+                    errors.fetch_add(1, Ordering::SeqCst);
+                    Self::rename_corrupted_chunk(datastore.clone(), &digest);
                 } else {
-                    verified_chunks2.lock().unwrap().insert(digest);
+                    verified_chunks.lock().unwrap().insert(digest);
                 }
 
                 Ok(())
-            },
-        );
+            }
+        });
 
         let skip_chunk = |digest: &[u8; 32]| -> bool {
             if self.verified_chunks.lock().unwrap().contains(digest) {
@@ -223,6 +220,29 @@ impl VerifyWorker {
             .datastore
             .get_chunks_in_order(&*index, skip_chunk, check_abort)?;
 
+        let reader_pool = ParallelHandler::new("read chunks", 4, {
+            let decoder_pool = decoder_pool.channel();
+            let datastore = Arc::clone(&self.datastore);
+            let corrupt_chunks = Arc::clone(&self.corrupt_chunks);
+            let read_bytes = Arc::clone(&read_bytes);
+            let decoded_bytes = Arc::clone(&decoded_bytes);
+            let errors = Arc::clone(&errors);
+            let backend = self.backend.clone();
+
+            move |info: ChunkReadInfo| {
+                Self::verify_chunk_by_backend(
+                    &backend,
+                    Arc::clone(&datastore),
+                    Arc::clone(&corrupt_chunks),
+                    Arc::clone(&read_bytes),
+                    Arc::clone(&decoded_bytes),
+                    Arc::clone(&errors),
+                    &decoder_pool,
+                    &info,
+                )
+            }
+        });
+
         for (pos, _) in chunk_list {
             self.worker.check_abort()?;
             self.worker.fail_on_shutdown()?;
@@ -234,19 +254,16 @@ impl VerifyWorker {
                 continue; // already verified or marked corrupt
             }
 
-            self.verify_chunk_by_backend(
-                &info,
-                &mut read_bytes,
-                &mut decoded_bytes,
-                Arc::clone(&errors),
-                &decoder_pool,
-            )?;
+            reader_pool.send(info)?;
         }
 
-        decoder_pool.complete()?;
+        reader_pool.complete()?;
 
         let elapsed = start_time.elapsed().as_secs_f64();
 
+        let read_bytes = read_bytes.load(Ordering::SeqCst);
+        let decoded_bytes = decoded_bytes.load(Ordering::SeqCst);
+
         let read_bytes_mib = (read_bytes as f64) / (1024.0 * 1024.0);
         let decoded_bytes_mib = (decoded_bytes as f64) / (1024.0 * 1024.0);
 
@@ -266,26 +283,31 @@ impl VerifyWorker {
         Ok(())
     }
 
+    #[allow(clippy::too_many_arguments)]
     fn verify_chunk_by_backend(
-        &self,
-        info: &ChunkReadInfo,
-        read_bytes: &mut u64,
-        decoded_bytes: &mut u64,
+        backend: &DatastoreBackend,
+        datastore: Arc<DataStore>,
+        corrupt_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+        read_bytes: Arc<AtomicU64>,
+        decoded_bytes: Arc<AtomicU64>,
         errors: Arc<AtomicUsize>,
-        decoder_pool: &ParallelHandler<(DataBlob, [u8; 32], u64)>,
+        decoder_pool: &SendHandle<(DataBlob, [u8; 32], u64)>,
+        info: &ChunkReadInfo,
     ) -> Result<(), Error> {
-        match &self.backend {
-            DatastoreBackend::Filesystem => match self.datastore.load_chunk(&info.digest) {
-                Err(err) => self.add_corrupt_chunk(
+        match backend {
+            DatastoreBackend::Filesystem => match datastore.load_chunk(&info.digest) {
+                Err(err) => Self::add_corrupt_chunk(
+                    datastore,
+                    corrupt_chunks,
                     info.digest,
                     errors,
                     &format!("can't verify chunk, load failed - {err}"),
                 ),
                 Ok(chunk) => {
                     let size = info.size();
-                    *read_bytes += chunk.raw_size();
+                    read_bytes.fetch_add(chunk.raw_size(), Ordering::SeqCst);
                     decoder_pool.send((chunk, info.digest, size))?;
-                    *decoded_bytes += size;
+                    decoded_bytes.fetch_add(size, Ordering::SeqCst);
                 }
             },
             DatastoreBackend::S3(s3_client) => {
@@ -302,9 +324,9 @@ impl VerifyWorker {
                         match chunk_result {
                             Ok(chunk) => {
                                 let size = info.size();
-                                *read_bytes += chunk.raw_size();
+                                read_bytes.fetch_add(chunk.raw_size(), Ordering::SeqCst);
                                 decoder_pool.send((chunk, info.digest, size))?;
-                                *decoded_bytes += size;
+                                decoded_bytes.fetch_add(size, Ordering::SeqCst);
                             }
                             Err(err) => {
                                 errors.fetch_add(1, Ordering::SeqCst);
@@ -312,7 +334,9 @@ impl VerifyWorker {
                             }
                         }
                     }
-                    Ok(None) => self.add_corrupt_chunk(
+                    Ok(None) => Self::add_corrupt_chunk(
+                        datastore,
+                        corrupt_chunks,
                         info.digest,
                         errors,
                         &format!(
@@ -330,13 +354,19 @@ impl VerifyWorker {
         Ok(())
     }
 
-    fn add_corrupt_chunk(&self, digest: [u8; 32], errors: Arc<AtomicUsize>, message: &str) {
+    fn add_corrupt_chunk(
+        datastore: Arc<DataStore>,
+        corrupt_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
+        digest: [u8; 32],
+        errors: Arc<AtomicUsize>,
+        message: &str,
+    ) {
         // Panic on poisoned mutex
-        let mut corrupt_chunks = self.corrupt_chunks.lock().unwrap();
+        let mut corrupt_chunks = corrupt_chunks.lock().unwrap();
         corrupt_chunks.insert(digest);
         error!(message);
         errors.fetch_add(1, Ordering::SeqCst);
-        Self::rename_corrupted_chunk(self.datastore.clone(), &digest);
+        Self::rename_corrupted_chunk(datastore.clone(), &digest);
     }
 
     fn verify_fixed_index(&self, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
-- 
2.47.3



* [pbs-devel] [PATCH proxmox-backup 2/4] api: verify: use worker-threads to determine the number of threads to use
  2025-11-05 15:51 [pbs-devel] [PATCH proxmox{, -backup} 0/5] parallelize chunk reads in verification Nicolas Frey
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox 1/1] pbs-api-types: jobs: verify: add worker-threads to VerificationJobConfig Nicolas Frey
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox-backup 1/4] api: verify: move chunk loading into parallel handler Nicolas Frey
@ 2025-11-05 15:51 ` Nicolas Frey
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox-backup 3/4] api: verify: add worker-threads to update endpoint Nicolas Frey
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox-backup 4/4] ui: verify: add option to set number of threads for job Nicolas Frey
  4 siblings, 0 replies; 6+ messages in thread
From: Nicolas Frey @ 2025-11-05 15:51 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
---
 src/api2/admin/datastore.rs    | 13 +++++++++++--
 src/api2/backup/environment.rs |  2 +-
 src/backup/verify.rs           |  5 ++++-
 src/server/verify_job.rs       |  3 ++-
 4 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index d192ee39..69a09081 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -677,6 +677,14 @@ pub async fn status(
                 schema: NS_MAX_DEPTH_SCHEMA,
                 optional: true,
             },
+            "worker-threads": {
+                description: "Set the number of worker threads to use for the job",
+                type: Integer,
+                optional: true,
+                minimum: 1,
+                maximum: 32,
+                default: 1,
+            },
         },
     },
     returns: {
@@ -690,7 +698,7 @@ pub async fn status(
 )]
 /// Verify backups.
 ///
-/// This function can verify a single backup snapshot, all backup from a backup group,
+/// This function can verify a single backup snapshot, all backups from a backup group,
 /// or all backups in the datastore.
 #[allow(clippy::too_many_arguments)]
 pub fn verify(
@@ -702,6 +710,7 @@ pub fn verify(
     ignore_verified: Option<bool>,
     outdated_after: Option<i64>,
     max_depth: Option<usize>,
+    worker_threads: Option<usize>,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Value, Error> {
     let auth_id: Authid = rpcenv.get_auth_id().unwrap().parse()?;
@@ -781,7 +790,7 @@ pub fn verify(
         auth_id.to_string(),
         to_stdout,
         move |worker| {
-            let verify_worker = VerifyWorker::new(worker.clone(), datastore)?;
+            let verify_worker = VerifyWorker::new(worker.clone(), datastore, worker_threads)?;
             let failed_dirs = if let Some(backup_dir) = backup_dir {
                 let mut res = Vec::new();
                 if !verify_worker.verify_backup_dir(
diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index 0e8eab1b..5e6a73b9 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -812,7 +812,7 @@ impl BackupEnvironment {
             move |worker| {
                 worker.log_message("Automatically verifying newly added snapshot");
 
-                let verify_worker = VerifyWorker::new(worker.clone(), datastore)?;
+                let verify_worker = VerifyWorker::new(worker.clone(), datastore, None)?;
                 if !verify_worker.verify_backup_dir_with_lock(
                     &backup_dir,
                     worker.upid().clone(),
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index 7f91f38c..e11dba8e 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -32,6 +32,7 @@ pub struct VerifyWorker {
     verified_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
     corrupt_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
     backend: DatastoreBackend,
+    worker_threads: Option<usize>,
 }
 
 impl VerifyWorker {
@@ -39,6 +40,7 @@ impl VerifyWorker {
     pub fn new(
         worker: Arc<dyn WorkerTaskContext>,
         datastore: Arc<DataStore>,
+        worker_threads: Option<usize>,
     ) -> Result<Self, Error> {
         let backend = datastore.backend()?;
         Ok(Self {
@@ -49,6 +51,7 @@ impl VerifyWorker {
             // start with 64 chunks since we assume there are few corrupt ones
             corrupt_chunks: Arc::new(Mutex::new(HashSet::with_capacity(64))),
             backend,
+            worker_threads,
         })
     }
 
@@ -220,7 +223,7 @@ impl VerifyWorker {
             .datastore
             .get_chunks_in_order(&*index, skip_chunk, check_abort)?;
 
-        let reader_pool = ParallelHandler::new("read chunks", 4, {
+        let reader_pool = ParallelHandler::new("read chunks", self.worker_threads.unwrap_or(4), {
             let decoder_pool = decoder_pool.channel();
             let datastore = Arc::clone(&self.datastore);
             let corrupt_chunks = Arc::clone(&self.corrupt_chunks);
diff --git a/src/server/verify_job.rs b/src/server/verify_job.rs
index c8792174..9d790b07 100644
--- a/src/server/verify_job.rs
+++ b/src/server/verify_job.rs
@@ -41,7 +41,8 @@ pub fn do_verification_job(
                 None => Default::default(),
             };
 
-            let verify_worker = VerifyWorker::new(worker.clone(), datastore)?;
+            let verify_worker =
+                VerifyWorker::new(worker.clone(), datastore, verification_job.worker_threads)?;
             let result = verify_worker.verify_all_backups(
                 worker.upid(),
                 ns,
-- 
2.47.3



* [pbs-devel] [PATCH proxmox-backup 3/4] api: verify: add worker-threads to update endpoint
  2025-11-05 15:51 [pbs-devel] [PATCH proxmox{, -backup} 0/5] parallelize chunk reads in verification Nicolas Frey
                   ` (2 preceding siblings ...)
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox-backup 2/4] api: verify: use worker-threads to determine the number of threads to use Nicolas Frey
@ 2025-11-05 15:51 ` Nicolas Frey
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox-backup 4/4] ui: verify: add option to set number of threads for job Nicolas Frey
  4 siblings, 0 replies; 6+ messages in thread
From: Nicolas Frey @ 2025-11-05 15:51 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
---
 src/api2/config/verify.rs | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/src/api2/config/verify.rs b/src/api2/config/verify.rs
index e71e0c2e..2847c984 100644
--- a/src/api2/config/verify.rs
+++ b/src/api2/config/verify.rs
@@ -149,6 +149,8 @@ pub enum DeletableProperty {
     Ns,
     /// Delete max-depth property, defaulting to full recursion again
     MaxDepth,
+    /// Delete worker-threads property
+    WorkerThreads,
 }
 
 #[api(
@@ -229,6 +231,9 @@ pub fn update_verification_job(
                 DeletableProperty::MaxDepth => {
                     data.max_depth = None;
                 }
+                DeletableProperty::WorkerThreads => {
+                    data.worker_threads = None;
+                }
             }
         }
     }
@@ -266,6 +271,9 @@ pub fn update_verification_job(
             data.max_depth = Some(max_depth);
         }
     }
+    if update.worker_threads.is_some() {
+        data.worker_threads = update.worker_threads;
+    }
 
     // check new store and NS
     user_info.check_privs(&auth_id, &data.acl_path(), PRIV_DATASTORE_VERIFY, true)?;
-- 
2.47.3



* [pbs-devel] [PATCH proxmox-backup 4/4] ui: verify: add option to set number of threads for job
  2025-11-05 15:51 [pbs-devel] [PATCH proxmox{, -backup} 0/5] parallelize chunk reads in verification Nicolas Frey
                   ` (3 preceding siblings ...)
  2025-11-05 15:51 ` [pbs-devel] [PATCH proxmox-backup 3/4] api: verify: add worker-threads to update endpoint Nicolas Frey
@ 2025-11-05 15:51 ` Nicolas Frey
  4 siblings, 0 replies; 6+ messages in thread
From: Nicolas Frey @ 2025-11-05 15:51 UTC (permalink / raw)
  To: pbs-devel

Signed-off-by: Nicolas Frey <n.frey@proxmox.com>
---
 www/window/VerifyAll.js     | 12 ++++++++++++
 www/window/VerifyJobEdit.js | 13 +++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/www/window/VerifyAll.js b/www/window/VerifyAll.js
index 01bcd63d..82f62aae 100644
--- a/www/window/VerifyAll.js
+++ b/www/window/VerifyAll.js
@@ -80,6 +80,18 @@ Ext.define('PBS.window.VerifyAll', {
                         },
                     ],
                 },
+
+                {
+                    xtype: 'proxmoxintegerfield',
+                    name: 'worker-threads',
+                    fieldLabel: gettext('# of Worker Threads'),
+                    emptyText: '1',
+                    minValue: 1,
+                    maxValue: 32,
+                    cbind: {
+                        deleteEmpty: '{!isCreate}',
+                    },
+                },
             ],
         },
     ],
diff --git a/www/window/VerifyJobEdit.js b/www/window/VerifyJobEdit.js
index e87ca069..7b7a96c4 100644
--- a/www/window/VerifyJobEdit.js
+++ b/www/window/VerifyJobEdit.js
@@ -166,5 +166,18 @@ Ext.define('PBS.window.VerifyJobEdit', {
                 },
             },
         ],
+        advancedColumn2: [
+            {
+                xtype: 'proxmoxintegerfield',
+                name: 'worker-threads',
+                fieldLabel: gettext('# of Worker Threads'),
+                emptyText: '1',
+                minValue: 1,
+                maxValue: 32,
+                cbind: {
+                    deleteEmpty: '{!isCreate}',
+                },
+            },
+        ]
     },
 });
-- 
2.47.3


