public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling
@ 2020-09-28 13:32 Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 1/9] tools: add logrotate module Dominik Csapak
                   ` (9 more replies)
  0 siblings, 10 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

this series extends the task handling so that we can safely have more
than 1000 tasks and properly filter and read them

this also introduces a daily task that rotates the newly added task
archive once it is over 500k, keeping a maximum of 20 files

strictly speaking the widget-toolkit patch is not necessary, but it
makes the user interface a bit nicer to use

changes from v1:
* rebases on master
* move the logrotate to proxmox-backup
* use zstd in logrotate instead of gzip
* TaskListInfoIterator now has an option to only return the 'active' tasks
  (this is a performance optimization)

NOTE: I did not resend the widget-toolkit patch, but I would still
recommend that it gets applied

Dominik Csapak (9):
  tools: add logrotate module
  server/worker_task: refactor locking of the task list
  server/worker_task: split task list file into two
  server/worker_task: write older tasks into archive file
  server/worker_task: add TaskListInfoIterator
  api2/node/tasks: use TaskListInfoIterator instead of read_task_list
  api2/status: use the TaskListInfoIterator here
  server/worker_task: remove unnecessary read_task_list
  proxmox-backup-proxy: add task archive rotation

 src/api2/node/tasks.rs          |  52 +++---
 src/api2/status.rs              |  32 +++-
 src/bin/proxmox-backup-proxy.rs |  96 ++++++++++
 src/server/worker_task.rs       | 300 ++++++++++++++++++++++++--------
 src/tools.rs                    |   1 +
 src/tools/logrotate.rs          | 184 ++++++++++++++++++++
 6 files changed, 553 insertions(+), 112 deletions(-)
 create mode 100644 src/tools/logrotate.rs

-- 
2.20.1





^ permalink raw reply	[flat|nested] 11+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v2 1/9] tools: add logrotate module
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
@ 2020-09-28 13:32 ` Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 2/9] server/worker_task: refactor locking of the task list Dominik Csapak
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

this is a helper to rotate and iterate over log files;
there is an iterator for open file handles as well as
one for just the file names

it also has the ability to rotate the files;
zstd is used for compression
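
The file naming scheme the module implements can be sketched as follows
(standalone illustration; `rotated_names` is a hypothetical helper, not
part of the patch):

```rust
/// Hypothetical helper illustrating the on-disk naming scheme of the
/// logrotate module: the base file itself, then numbered rotations,
/// with a ".zst" suffix when compression is enabled.
fn rotated_names(base: &str, rotations: usize, compress: bool) -> Vec<String> {
    let mut names = vec![base.to_string()];
    for i in 1..=rotations {
        if compress {
            names.push(format!("{}.{}.zst", base, i));
        } else {
            names.push(format!("{}.{}", base, i));
        }
    }
    names
}

fn main() {
    // on rotation, foo.2.zst -> foo.3.zst, foo.1.zst -> foo.2.zst,
    // foo -> foo.1.zst, and a fresh empty foo is created
    assert_eq!(
        rotated_names("archive", 2, true),
        ["archive", "archive.1.zst", "archive.2.zst"]
    );
    assert_eq!(rotated_names("archive", 1, false), ["archive", "archive.1"]);
}
```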

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v1:
* small code reordering
* use zstd instead of gzip
 src/tools.rs           |   1 +
 src/tools/logrotate.rs | 184 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 185 insertions(+)
 create mode 100644 src/tools/logrotate.rs

diff --git a/src/tools.rs b/src/tools.rs
index c16fe785..5cf674fe 100644
--- a/src/tools.rs
+++ b/src/tools.rs
@@ -32,6 +32,7 @@ pub mod ticket;
 pub mod statistics;
 pub mod systemd;
 pub mod nom;
+pub mod logrotate;
 
 mod parallel_handler;
 pub use parallel_handler::*;
diff --git a/src/tools/logrotate.rs b/src/tools/logrotate.rs
new file mode 100644
index 00000000..ce311fbe
--- /dev/null
+++ b/src/tools/logrotate.rs
@@ -0,0 +1,184 @@
+use std::path::{Path, PathBuf};
+use std::fs::{File, rename};
+use std::os::unix::io::FromRawFd;
+use std::io::Read;
+
+use anyhow::{bail, Error};
+use nix::unistd;
+
+use proxmox::tools::fs::{CreateOptions, make_tmp_file, replace_file};
+
+/// Used for rotating log files and iterating over them
+pub struct LogRotate {
+    base_path: PathBuf,
+    compress: bool,
+}
+
+impl LogRotate {
+    /// Creates a new instance if the path given is a valid file name
+    /// (iow. does not end with ..)
+    /// 'compress' decides if compressed files will be created on
+    /// rotation, and if '.zst' files will be searched when iterating
+    pub fn new<P: AsRef<Path>>(path: P, compress: bool) -> Option<Self> {
+        if path.as_ref().file_name().is_some() {
+            Some(Self {
+                base_path: path.as_ref().to_path_buf(),
+                compress,
+            })
+        } else {
+            None
+        }
+    }
+
+    /// Returns an iterator over the logrotated file names that exist
+    pub fn file_names(&self) -> LogRotateFileNames {
+        LogRotateFileNames {
+            base_path: self.base_path.clone(),
+            count: 0,
+            compress: self.compress
+        }
+    }
+
+    /// Returns an iterator over the logrotated file handles
+    pub fn files(&self) -> LogRotateFiles {
+        LogRotateFiles {
+            file_names: self.file_names(),
+        }
+    }
+
+    /// Rotates the files up to 'max_files'
+    /// if the 'compress' option was given it will compress the newest file
+    ///
+    /// e.g. rotates
+    /// foo.2.zst => foo.3.zst
+    /// foo.1.zst => foo.2.zst
+    /// foo       => foo.1.zst
+    ///           => foo
+    pub fn rotate(&mut self, options: CreateOptions, max_files: Option<usize>) -> Result<(), Error> {
+        let mut filenames: Vec<PathBuf> = self.file_names().collect();
+        if filenames.is_empty() {
+            return Ok(()); // no file means nothing to rotate
+        }
+
+        let mut next_filename = self.base_path.clone().canonicalize()?.into_os_string();
+
+        if self.compress {
+            next_filename.push(format!(".{}.zst", filenames.len()));
+        } else {
+            next_filename.push(format!(".{}", filenames.len()));
+        }
+
+        filenames.push(PathBuf::from(next_filename));
+        let count = filenames.len();
+
+        // rotate all but the first file, which we may still have to compress
+        for i in (1..count-1).rev() {
+            rename(&filenames[i], &filenames[i+1])?;
+        }
+
+        if self.compress {
+            let mut source = File::open(&filenames[0])?;
+            let (fd, tmp_path) = make_tmp_file(&filenames[1], options.clone())?;
+            let target = unsafe { File::from_raw_fd(fd) };
+            let mut encoder = match zstd::stream::write::Encoder::new(target, 0) {
+                Ok(encoder) => encoder,
+                Err(err) => {
+                    let _ = unistd::unlink(&tmp_path);
+                    bail!("creating zstd encoder failed - {}", err);
+                }
+            };
+
+            if let Err(err) = std::io::copy(&mut source, &mut encoder) {
+                let _ = unistd::unlink(&tmp_path);
+                bail!("zstd encoding failed for file {:?} - {}", &filenames[1], err);
+            }
+
+            if let Err(err) = encoder.finish() {
+                let _ = unistd::unlink(&tmp_path);
+                bail!("zstd finish failed for file {:?} - {}", &filenames[1], err);
+            }
+
+            if let Err(err) = rename(&tmp_path, &filenames[1]) {
+                let _ = unistd::unlink(&tmp_path);
+                bail!("rename failed for file {:?} - {}", &filenames[1], err);
+            }
+
+            unistd::unlink(&filenames[0])?;
+        } else {
+            rename(&filenames[0], &filenames[1])?;
+        }
+
+        // create empty original file
+        replace_file(&filenames[0], b"", options)?;
+
+        if let Some(max_files) = max_files {
+            // delete all files > max_files
+            for file in filenames.iter().skip(max_files) {
+                if let Err(err) = unistd::unlink(file) {
+                    eprintln!("could not remove {:?}: {}", &file, err);
+                }
+            }
+        }
+
+        Ok(())
+    }
+}
+
+/// Iterator over logrotated file names
+pub struct LogRotateFileNames {
+    base_path: PathBuf,
+    count: usize,
+    compress: bool,
+}
+
+impl Iterator for LogRotateFileNames {
+    type Item = PathBuf;
+
+    fn next(&mut self) -> Option<Self::Item> {
+        if self.count > 0 {
+            let mut path: std::ffi::OsString = self.base_path.clone().into();
+
+            path.push(format!(".{}", self.count));
+            self.count += 1;
+
+            if Path::new(&path).is_file() {
+                Some(path.into())
+            } else if self.compress {
+                path.push(".zst");
+                if Path::new(&path).is_file() {
+                    Some(path.into())
+                } else {
+                    None
+                }
+            } else {
+                None
+            }
+        } else if self.base_path.is_file() {
+            self.count += 1;
+            Some(self.base_path.to_path_buf())
+        } else {
+            None
+        }
+    }
+}
+
+/// Iterator over logrotated files by returning a boxed reader
+pub struct LogRotateFiles {
+    file_names: LogRotateFileNames,
+}
+
+impl Iterator for LogRotateFiles {
+    type Item = Box<dyn Read + Send>;
+
+    fn next(&mut self) -> Option<Self::Item> {
+        let filename = self.file_names.next()?;
+        let file = File::open(&filename).ok()?;
+
+        if filename.extension().unwrap_or(std::ffi::OsStr::new("")) == "zst" {
+            let encoder = zstd::stream::read::Decoder::new(file).ok()?;
+            return Some(Box::new(encoder));
+        }
+
+        Some(Box::new(file))
+    }
+}
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup v2 2/9] server/worker_task: refactor locking of the task list
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 1/9] tools: add logrotate module Dominik Csapak
@ 2020-09-28 13:32 ` Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 3/9] server/worker_task: split task list file into two Dominik Csapak
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

also add the ability to take a 'shared' (read) lock on the list;
we will need this later

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v1:
* rebase on master (use open_file_locked)
 src/server/worker_task.rs | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index a24e59b4..34d31f99 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -325,6 +325,15 @@ pub struct TaskListInfo {
     pub state: Option<TaskState>, // endtime, status
 }
 
+fn lock_task_list_files(exclusive: bool) -> Result<std::fs::File, Error> {
+    let backup_user = crate::backup::backup_user()?;
+
+    let lock = open_file_locked(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0), exclusive)?;
+    nix::unistd::chown(PROXMOX_BACKUP_TASK_LOCK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
+
+    Ok(lock)
+}
+
 // atomically read/update the task list, update status of finished tasks
 // new_upid is added to the list when specified.
 // Returns a sorted list of known tasks,
@@ -332,8 +341,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
 
     let backup_user = crate::backup::backup_user()?;
 
-    let lock = open_file_locked(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0), true)?;
-    nix::unistd::chown(PROXMOX_BACKUP_TASK_LOCK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
+    let lock = lock_task_list_files(true)?;
 
     let reader = match File::open(PROXMOX_BACKUP_ACTIVE_TASK_FN) {
         Ok(f) => Some(BufReader::new(f)),
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup v2 3/9] server/worker_task: split task list file into two
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 1/9] tools: add logrotate module Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 2/9] server/worker_task: refactor locking of the task list Dominik Csapak
@ 2020-09-28 13:32 ` Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 4/9] server/worker_task: write older tasks into archive file Dominik Csapak
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

one for only the active tasks and one for up to 1000 finished tasks

factor out the parsing of a task file (we will need this again later)
and use iterator combinators for easier to read code

we now sort the tasks in ascending order (this will become important in
a later patch) but reverse the list (for now) to keep compatibility

this code also omits converting into an intermediate hash map, since we
cannot really end up with duplicate tasks in this list (the call is
guarded by a flock, and it is the only place where we write to the
lists)
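
The windowing into the index file (keep only the newest MAX_INDEX_TASKS
finished tasks) can be sketched like this; note the sketch uses
`saturating_sub` so the computation is also safe for lists shorter than
the limit:

```rust
const MAX_INDEX_TASKS: usize = 1000;

/// Sketch of the index window over the ascending-sorted finished list:
/// everything before `start` falls out of the index (and is later
/// archived), the slice `start..end` is written to the index file.
fn index_window(len: usize) -> (usize, usize) {
    let start = len.saturating_sub(MAX_INDEX_TASKS);
    (start, len)
}

fn main() {
    assert_eq!(index_window(1500), (500, 1500)); // oldest 500 fall out
    assert_eq!(index_window(10), (0, 10));       // short list kept whole
}
```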

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v1:
* improve comment that the active list has the old format after upgrade
 src/server/worker_task.rs | 144 +++++++++++++++++++++-----------------
 1 file changed, 80 insertions(+), 64 deletions(-)

diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 34d31f99..2ce71136 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -31,6 +31,9 @@ pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
 pub const PROXMOX_BACKUP_TASK_DIR: &str = PROXMOX_BACKUP_TASK_DIR_M!();
 pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/.active.lock");
 pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/active");
+pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/index");
+
+const MAX_INDEX_TASKS: usize = 1000;
 
 lazy_static! {
     static ref WORKER_TASK_LIST: Mutex<HashMap<usize, Arc<WorkerTask>>> = Mutex::new(HashMap::new());
@@ -343,76 +346,47 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
 
     let lock = lock_task_list_files(true)?;
 
-    let reader = match File::open(PROXMOX_BACKUP_ACTIVE_TASK_FN) {
-        Ok(f) => Some(BufReader::new(f)),
-        Err(err) => {
-            if err.kind() ==  std::io::ErrorKind::NotFound {
-                 None
-            } else {
-                bail!("unable to open active worker {:?} - {}", PROXMOX_BACKUP_ACTIVE_TASK_FN, err);
+    let mut finish_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN)?;
+    let mut active_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?
+        .into_iter()
+        .filter_map(|info| {
+            if info.state.is_some() {
+                // this can happen when the active file still includes finished tasks
+                finish_list.push(info);
+                return None;
             }
-        }
-    };
 
-    let mut active_list = vec![];
-    let mut finish_list = vec![];
-
-    if let Some(lines) = reader.map(|r| r.lines()) {
-
-        for line in lines {
-            let line = line?;
-            match parse_worker_status_line(&line) {
-                Err(err) => bail!("unable to parse active worker status '{}' - {}", line, err),
-                Ok((upid_str, upid, state)) => match state {
-                    None if worker_is_active_local(&upid) => {
-                        active_list.push(TaskListInfo { upid, upid_str, state: None });
-                    },
-                    None => {
-                        println!("Detected stopped UPID {}", upid_str);
-                        let now = proxmox::tools::time::epoch_i64();
-                        let status = upid_read_status(&upid)
-                            .unwrap_or_else(|_| TaskState::Unknown { endtime: now });
-                        finish_list.push(TaskListInfo {
-                            upid, upid_str, state: Some(status)
-                        });
-                    },
-                    Some(status) => {
-                        finish_list.push(TaskListInfo {
-                            upid, upid_str, state: Some(status)
-                        })
-                    }
-                }
+            if !worker_is_active_local(&info.upid) {
+                println!("Detected stopped UPID {}", &info.upid_str);
+                let now = proxmox::tools::time::epoch_i64();
+                let status = upid_read_status(&info.upid)
+                    .unwrap_or_else(|_| TaskState::Unknown { endtime: now });
+                finish_list.push(TaskListInfo {
+                    upid: info.upid,
+                    upid_str: info.upid_str,
+                    state: Some(status)
+                });
+                return None;
             }
-        }
-    }
+
+            Some(info)
+        }).collect();
 
     if let Some(upid) = new_upid {
         active_list.push(TaskListInfo { upid: upid.clone(), upid_str: upid.to_string(), state: None });
     }
 
-    // assemble list without duplicates
-    // we include all active tasks,
-    // and fill up to 1000 entries with finished tasks
+    let active_raw = render_task_list(&active_list);
 
-    let max = 1000;
-
-    let mut task_hash = HashMap::new();
-
-    for info in active_list {
-        task_hash.insert(info.upid_str.clone(), info);
-    }
-
-    for info in finish_list {
-        if task_hash.len() > max { break; }
-        if !task_hash.contains_key(&info.upid_str) {
-            task_hash.insert(info.upid_str.clone(), info);
-        }
-    }
-
-    let mut task_list: Vec<TaskListInfo> = vec![];
-    for (_, info) in task_hash { task_list.push(info); }
+    replace_file(
+        PROXMOX_BACKUP_ACTIVE_TASK_FN,
+        active_raw.as_bytes(),
+        CreateOptions::new()
+            .owner(backup_user.uid)
+            .group(backup_user.gid),
+    )?;
 
-    task_list.sort_unstable_by(|b, a| { // lastest on top
+    finish_list.sort_unstable_by(|a, b| {
         match (&a.state, &b.state) {
             (Some(s1), Some(s2)) => s1.cmp(&s2),
             (Some(_), None) => std::cmp::Ordering::Less,
@@ -421,11 +395,13 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
         }
     });
 
-    let raw = render_task_list(&task_list[..]);
+    let start = finish_list.len().saturating_sub(MAX_INDEX_TASKS);
+    let end = (start+MAX_INDEX_TASKS).min(finish_list.len());
+    let index_raw = render_task_list(&finish_list[start..end]);
 
     replace_file(
-        PROXMOX_BACKUP_ACTIVE_TASK_FN,
-        raw.as_bytes(),
+        PROXMOX_BACKUP_INDEX_TASK_FN,
+        index_raw.as_bytes(),
         CreateOptions::new()
             .owner(backup_user.uid)
             .group(backup_user.gid),
@@ -433,7 +409,9 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
 
     drop(lock);
 
-    Ok(task_list)
+    finish_list.append(&mut active_list);
+    finish_list.reverse();
+    Ok(finish_list)
 }
 
 /// Returns a sorted list of known tasks
@@ -463,6 +441,44 @@ fn render_task_list(list: &[TaskListInfo]) -> String {
     raw
 }
 
+// note this is not locked, caller has to make sure it is
+// this will skip (and log) lines that are not valid status lines
+fn read_task_file<R: Read>(reader: R) -> Result<Vec<TaskListInfo>, Error>
+{
+    let reader = BufReader::new(reader);
+    let mut list = Vec::new();
+    for line in reader.lines() {
+        let line = line?;
+        match parse_worker_status_line(&line) {
+            Ok((upid_str, upid, state)) => list.push(TaskListInfo {
+                upid_str,
+                upid,
+                state
+            }),
+            Err(err) => {
+                eprintln!("unable to parse worker status '{}' - {}", line, err);
+                continue;
+            }
+        };
+    }
+
+    Ok(list)
+}
+
+// note this is not locked, caller has to make sure it is
+fn read_task_file_from_path<P>(path: P) -> Result<Vec<TaskListInfo>, Error>
+where
+    P: AsRef<std::path::Path> + std::fmt::Debug,
+{
+    let file = match File::open(&path) {
+        Ok(f) => f,
+        Err(err) if err.kind() == std::io::ErrorKind::NotFound => return Ok(Vec::new()),
+        Err(err) => bail!("unable to open task list {:?} - {}", path, err),
+    };
+
+    read_task_file(file)
+}
+
 /// Launch long running worker tasks.
 ///
 /// A worker task can either be a whole thread, or a simply tokio
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup v2 4/9] server/worker_task: write older tasks into archive file
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
                   ` (2 preceding siblings ...)
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 3/9] server/worker_task: split task list file into two Dominik Csapak
@ 2020-09-28 13:32 ` Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 5/9] server/worker_task: add TaskListInfoIterator Dominik Csapak
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

instead of removing the tasks beyond the 1000 kept in the index, append
them to an archive file; this way we can still read them later
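
A minimal sketch of the append-only write this patch performs
(hypothetical path and helper; the real code renders the lines with
`render_task_line` and chowns the file to the backup user):

```rust
use std::fs::OpenOptions;
use std::io::Write;

/// Append pre-rendered task lines to an archive file, creating it on
/// first use; existing content is never rewritten.
fn append_lines(path: &std::path::Path, lines: &[String]) -> std::io::Result<()> {
    let mut writer = OpenOptions::new().append(true).create(true).open(path)?;
    for line in lines {
        writer.write_all(line.as_bytes())?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("task_archive_sketch");
    let _ = std::fs::remove_file(&path);
    append_lines(&path, &["task1 OK\n".to_string()])?;
    append_lines(&path, &["task2 OK\n".to_string()])?;
    // both writes landed, in order, without truncating earlier content
    assert_eq!(std::fs::read_to_string(&path)?, "task1 OK\ntask2 OK\n");
    Ok(())
}
```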

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/server/worker_task.rs | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 2ce71136..4a4406e1 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -1,6 +1,6 @@
 use std::collections::HashMap;
 use std::fs::File;
-use std::io::{Read, BufRead, BufReader};
+use std::io::{Read, Write, BufRead, BufReader};
 use std::panic::UnwindSafe;
 use std::sync::atomic::{AtomicBool, Ordering};
 use std::sync::{Arc, Mutex};
@@ -32,6 +32,7 @@ pub const PROXMOX_BACKUP_TASK_DIR: &str = PROXMOX_BACKUP_TASK_DIR_M!();
 pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/.active.lock");
 pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/active");
 pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/index");
+pub const PROXMOX_BACKUP_ARCHIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/archive");
 
 const MAX_INDEX_TASKS: usize = 1000;
 
@@ -407,6 +408,19 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
             .group(backup_user.gid),
     )?;
 
+    if !finish_list.is_empty() && start > 0 {
+        match std::fs::OpenOptions::new().append(true).create(true).open(PROXMOX_BACKUP_ARCHIVE_TASK_FN) {
+            Ok(mut writer) => {
+                for info in &finish_list[0..start] {
+                    writer.write_all(render_task_line(&info).as_bytes())?;
+                }
+            },
+            Err(err) => bail!("could not write task archive - {}", err),
+        }
+
+        nix::unistd::chown(PROXMOX_BACKUP_ARCHIVE_TASK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
+    }
+
     drop(lock);
 
     finish_list.append(&mut active_list);
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup v2 5/9] server/worker_task: add TaskListInfoIterator
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
                   ` (3 preceding siblings ...)
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 4/9] server/worker_task: write older tasks into archive file Dominik Csapak
@ 2020-09-28 13:32 ` Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 6/9] api2/node/tasks: use TaskListInfoIterator instead of read_task_list Dominik Csapak
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

this is an iterator that reads/parses/updates the task list as
necessary and returns the tasks in descending order (newest first)

it does this by using our logrotate iterator and a VecDeque

we can use this to iterate over all tasks, even those in the archive
(and even if the archive has been rotated), while only reading as much
as we need
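
The layering idea can be sketched generically: drain a VecDeque of
already-parsed entries (newest at the back, so pop from there) and
refill it lazily from the next, older source once it runs empty. This
is a standalone illustration, not the patch's exact types:

```rust
use std::collections::VecDeque;

/// Each inner Vec stands in for one task file (entries in ascending
/// order); `sources` is ordered oldest first, so `pop()` yields the
/// next-newer file to drain.
struct Layered<T> {
    buf: VecDeque<T>,
    sources: Vec<Vec<T>>,
}

impl<T> Iterator for Layered<T> {
    type Item = T;
    fn next(&mut self) -> Option<T> {
        loop {
            if let Some(item) = self.buf.pop_back() {
                return Some(item); // newest remaining entry
            }
            // buffer exhausted: load the next (older) source, or stop
            self.buf = self.sources.pop()?.into();
        }
    }
}

fn main() {
    // the "active" entries [3, 4] are drained first, then "index" [1, 2]
    let it = Layered { buf: VecDeque::from(vec![3, 4]), sources: vec![vec![1, 2]] };
    assert_eq!(it.collect::<Vec<_>>(), vec![4, 3, 2, 1]); // newest first
}
```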

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v1:
* add option 'active_only' which skips the index and archive
  this is useful when we only want the active ones, previously we would
  read/parse the index file just to see there is no active task anymore

* drop lock after the last archive

 src/server/worker_task.rs | 98 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 97 insertions(+), 1 deletion(-)

diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 4a4406e1..a2189596 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -1,4 +1,4 @@
-use std::collections::HashMap;
+use std::collections::{HashMap, VecDeque};
 use std::fs::File;
 use std::io::{Read, Write, BufRead, BufReader};
 use std::panic::UnwindSafe;
@@ -19,6 +19,7 @@ use proxmox::tools::fs::{create_path, open_file_locked, replace_file, CreateOpti
 
 use super::UPID;
 
+use crate::tools::logrotate::{LogRotate, LogRotateFiles};
 use crate::tools::FileLogger;
 use crate::api2::types::Userid;
 
@@ -493,6 +494,101 @@ where
     read_task_file(file)
 }
 
+enum TaskFile {
+    Active,
+    Index,
+    Archive,
+    End,
+}
+
+pub struct TaskListInfoIterator {
+    list: VecDeque<TaskListInfo>,
+    file: TaskFile,
+    archive: Option<LogRotateFiles>,
+    lock: Option<File>,
+}
+
+impl TaskListInfoIterator {
+    pub fn new(active_only: bool) -> Result<Self, Error> {
+        let (read_lock, active_list) = {
+            let lock = lock_task_list_files(false)?;
+            let active_list = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?;
+
+            let needs_update = active_list
+                .iter()
+                .any(|info| info.state.is_none() && !worker_is_active_local(&info.upid));
+
+            if needs_update {
+                drop(lock);
+                update_active_workers(None)?;
+                let lock = lock_task_list_files(false)?;
+                let active_list = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?;
+                (lock, active_list)
+            } else {
+                (lock, active_list)
+            }
+        };
+
+        let archive = if active_only {
+            None
+        } else {
+            let logrotate = LogRotate::new(PROXMOX_BACKUP_ARCHIVE_TASK_FN, true).ok_or_else(|| format_err!("could not get archive file names"))?;
+            Some(logrotate.files())
+        };
+
+        let file = if active_only { TaskFile::End } else { TaskFile::Active };
+        let lock = if active_only { None } else { Some(read_lock) };
+
+        Ok(Self {
+            list: active_list.into(),
+            file,
+            archive,
+            lock,
+        })
+    }
+}
+
+impl Iterator for TaskListInfoIterator {
+    type Item = Result<TaskListInfo, Error>;
+
+    fn next(&mut self) -> Option<Self::Item> {
+        loop {
+            if let Some(element) = self.list.pop_back() {
+                return Some(Ok(element));
+            } else {
+                match self.file {
+                    TaskFile::Active => {
+                        let index = match read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN) {
+                            Ok(index) => index,
+                            Err(err) => return Some(Err(err)),
+                        };
+                        self.list.append(&mut index.into());
+                        self.file = TaskFile::Index;
+                    },
+                    TaskFile::Index | TaskFile::Archive => {
+                        if let Some(mut archive) = self.archive.take() {
+                            if let Some(file) = archive.next() {
+                                let list = match read_task_file(file) {
+                                    Ok(list) => list,
+                                    Err(err) => return Some(Err(err)),
+                                };
+                                self.list.append(&mut list.into());
+                                self.archive = Some(archive);
+                                self.file = TaskFile::Archive;
+                                continue;
+                            }
+                        }
+                        self.file = TaskFile::End;
+                        self.lock.take();
+                        return None;
+                    }
+                    TaskFile::End => return None,
+                }
+            }
+        }
+    }
+}
+
 /// Launch long running worker tasks.
 ///
 /// A worker task can either be a whole thread, or a simply tokio
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup v2 6/9] api2/node/tasks: use TaskListInfoIterator instead of read_task_list
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
                   ` (4 preceding siblings ...)
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 5/9] server/worker_task: add TaskListInfoIterator Dominik Csapak
@ 2020-09-28 13:32 ` Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 7/9] api2/status: use the TaskListInfoIterator here Dominik Csapak
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

this makes the filtering/limiting much nicer and more readable

since we now iterate over a potentially 'infinite' number of tasks and
cannot know beforehand how many there are, we return the total count as
one higher than requested iff we are not at the end (we are at the end
when the number of returned entries is smaller than the requested limit)
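
The resulting count logic can be sketched as follows (hypothetical
helper mirroring the description above, not the patch's exact code):

```rust
/// With start/limit paging over a potentially endless iterator, report
/// the total as one more than what we have seen iff the page came back
/// full, i.e. there may be further entries.
fn total_count(returned: usize, start: usize, limit: usize) -> usize {
    let mut count = returned + start;
    if returned > 0 && returned >= limit {
        count += 1; // 'virtual' extra entry: not at the end yet
    }
    count
}

fn main() {
    assert_eq!(total_count(50, 0, 50), 51);    // full page: maybe more
    assert_eq!(total_count(10, 0, 50), 10);    // short page: at the end
    assert_eq!(total_count(50, 100, 50), 151); // offset counts too
}
```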

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v1:
* filter running with new Iterator option 'active_only'
* add a 'take_while' at the beginning to stop on the first error
 src/api2/node/tasks.rs | 52 ++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 27 deletions(-)

diff --git a/src/api2/node/tasks.rs b/src/api2/node/tasks.rs
index e6a58a82..80384ea8 100644
--- a/src/api2/node/tasks.rs
+++ b/src/api2/node/tasks.rs
@@ -10,7 +10,7 @@ use proxmox::{identity, list_subdirs_api_method, sortable};
 
 use crate::tools;
 use crate::api2::types::*;
-use crate::server::{self, UPID, TaskState};
+use crate::server::{self, UPID, TaskState, TaskListInfoIterator};
 use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
 use crate::config::cached_user_info::CachedUserInfo;
 
@@ -316,56 +316,54 @@ pub fn list_tasks(
 
     let store = param["store"].as_str();
 
+    let list = TaskListInfoIterator::new(running)?;
 
-    let list = server::read_task_list()?;
-
-    let mut result = vec![];
-
-    let mut count = 0;
-
-    for info in list {
-        if !list_all && info.upid.userid != userid { continue; }
+    let result: Vec<TaskListItem> = list
+        .take_while(|info| !info.is_err())
+        .filter_map(|info| {
+        let info = match info {
+            Ok(info) => info,
+            Err(_) => return None,
+        };
 
+        if !list_all && info.upid.userid != userid { return None; }
 
         if let Some(userid) = &userfilter {
-            if !info.upid.userid.as_str().contains(userid) { continue; }
+            if !info.upid.userid.as_str().contains(userid) { return None; }
         }
 
         if let Some(store) = store {
             // Note: useful to select all tasks spawned by proxmox-backup-client
             let worker_id = match &info.upid.worker_id {
                 Some(w) => w,
-                None => continue, // skip
+                None => return None, // skip
             };
 
             if info.upid.worker_type == "backup" || info.upid.worker_type == "restore" ||
                 info.upid.worker_type == "prune"
             {
                 let prefix = format!("{}_", store);
-                if !worker_id.starts_with(&prefix) { continue; }
+                if !worker_id.starts_with(&prefix) { return None; }
             } else if info.upid.worker_type == "garbage_collection" {
-                if worker_id != store { continue; }
+                if worker_id != store { return None; }
             } else {
-                continue; // skip
+                return None; // skip
             }
         }
 
-        if let Some(ref state) = info.state {
-            if running { continue; }
-            match state {
-                crate::server::TaskState::OK { .. } if errors => continue,
-                _ => {},
-            }
+        match info.state {
+            Some(crate::server::TaskState::OK { .. }) if errors => return None,
+            _ => {},
         }
 
-        if (count as u64) < start {
-            count += 1;
-            continue;
-        } else {
-            count += 1;
-        }
+        Some(info.into())
+    }).skip(start as usize)
+        .take(limit as usize)
+        .collect();
 
-        if (result.len() as u64) < limit { result.push(info.into()); };
+    let mut count = result.len() + start as usize;
+    if result.len() > 0 && result.len() >= limit as usize { // we have a 'virtual' entry as long as we have any new
+        count += 1;
     }
 
     rpcenv["total"] = Value::from(count);
-- 
2.20.1

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v2 7/9] api2/status: use the TaskListInfoIterator here
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
                   ` (5 preceding siblings ...)
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 6/9] api2/node/tasks: use TaskListInfoIterator instead of read_task_list Dominik Csapak
@ 2020-09-28 13:32 ` Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 8/9] server/worker_task: remove unnecessary read_task_list Dominik Csapak
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

this means that limiting by epoch now works correctly

also change the api type to i64, since that is the type the starttime
is saved as
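The take_while/collect combination in the hunk below relies on a standard Rust pattern: an iterator of `Result` items can be collected directly into a `Result<Vec<_>, _>`, short-circuiting on the first `Err`. A minimal standalone sketch (the types here are placeholders, not the PBS ones):

```rust
// Sketch of the error-aware iteration used by this patch: stop at the
// first entry that is not newer than 'since' or that failed to parse,
// then collect the remaining values. Placeholder types, not the PBS ones.
fn starttimes_after(items: Vec<Result<i64, String>>, since: i64) -> Result<Vec<i64>, String> {
    items
        .into_iter()
        .take_while(|item| match item {
            Ok(starttime) => *starttime > since,
            Err(_) => false, // stop iterating on the first error
        })
        // collect() on Result items would short-circuit on the first Err
        .collect()
}
```

As in the patch, `take_while` drops the element that ends the iteration, so under the assumption that errors terminate the scan, the collect in practice only sees `Ok` values.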

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
changes from v1:
* adapt to new TaskListInfoIterator::new signature
 src/api2/status.rs | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/src/api2/status.rs b/src/api2/status.rs
index eb8c43cd..fd40f14d 100644
--- a/src/api2/status.rs
+++ b/src/api2/status.rs
@@ -182,7 +182,7 @@ fn datastore_status(
     input: {
         properties: {
             since: {
-                type: u64,
+                type: i64,
                 description: "Only list tasks since this UNIX epoch.",
                 optional: true,
             },
@@ -200,6 +200,7 @@ fn datastore_status(
 )]
 /// List tasks.
 pub fn list_tasks(
+    since: Option<i64>,
     _param: Value,
     rpcenv: &mut dyn RpcEnvironment,
 ) -> Result<Vec<TaskListItem>, Error> {
@@ -209,13 +210,28 @@ pub fn list_tasks(
     let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
 
     let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
-
-    // TODO: replace with call that gets all task since 'since' epoch
-    let list: Vec<TaskListItem> = server::read_task_list()?
-        .into_iter()
-        .map(TaskListItem::from)
-        .filter(|entry| list_all || entry.user == userid)
-        .collect();
+    let since = since.unwrap_or_else(|| 0);
+
+    let list: Vec<TaskListItem> = server::TaskListInfoIterator::new(false)?
+        .take_while(|info| {
+            match info {
+                Ok(info) => info.upid.starttime > since,
+                Err(_) => false
+            }
+        })
+        .filter_map(|info| {
+            match info {
+                Ok(info) => {
+                    if list_all || info.upid.userid == userid {
+                        Some(Ok(TaskListItem::from(info)))
+                    } else {
+                        None
+                    }
+                }
+                Err(err) => Some(Err(err))
+            }
+        })
+        .collect::<Result<Vec<TaskListItem>, Error>>()?;
 
     Ok(list.into())
 }
-- 
2.20.1


* [pbs-devel] [PATCH proxmox-backup v2 8/9] server/worker_task: remove unnecessary read_task_list
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
                   ` (6 preceding siblings ...)
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 7/9] api2/status: use the TaskListInfoIterator here Dominik Csapak
@ 2020-09-28 13:32 ` Dominik Csapak
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 9/9] proxmox-backup-proxy: add task archive rotation Dominik Csapak
  2020-09-29  7:16 ` [pbs-devel] applied: [PATCH proxmox-backup v2 0/9] improve task list handling Dietmar Maurer
  9 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

since there are no users of this anymore and we now have the nicer
TaskListInfoIterator to use, we can drop this function

this also means that 'update_active_workers' does not need to return
a list anymore, since we never used that result anywhere besides
read_task_list

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/server/worker_task.rs | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index a2189596..2a343709 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -341,8 +341,7 @@ fn lock_task_list_files(exclusive: bool) -> Result<std::fs::File, Error> {
 
 // atomically read/update the task list, update status of finished tasks
 // new_upid is added to the list when specified.
-// Returns a sorted list of known tasks,
-fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, Error> {
+fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
 
     let backup_user = crate::backup::backup_user()?;
 
@@ -424,16 +423,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
 
     drop(lock);
 
-    finish_list.append(&mut active_list);
-    finish_list.reverse();
-    Ok(finish_list)
-}
-
-/// Returns a sorted list of known tasks
-///
-/// The list is sorted by `(starttime, endtime)` in ascending order
-pub fn read_task_list() -> Result<Vec<TaskListInfo>, Error> {
-    update_active_workers(None)
+    Ok(())
 }
 
 fn render_task_line(info: &TaskListInfo) -> String {
-- 
2.20.1


* [pbs-devel] [PATCH proxmox-backup v2 9/9] proxmox-backup-proxy: add task archive rotation
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
                   ` (7 preceding siblings ...)
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 8/9] server/worker_task: remove unnecessary read_task_list Dominik Csapak
@ 2020-09-28 13:32 ` Dominik Csapak
  2020-09-29  7:16 ` [pbs-devel] applied: [PATCH proxmox-backup v2 0/9] improve task list handling Dietmar Maurer
  9 siblings, 0 replies; 11+ messages in thread
From: Dominik Csapak @ 2020-09-28 13:32 UTC (permalink / raw)
  To: pbs-devel

this starts a task once a day at "00:00" that rotates the task log
archive if it is bigger than 500k

if we want, we can make the schedule/size limit/etc. configurable
later, but for now fixed values are fine
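The size check the daily task performs can be sketched with the standard library alone; the real code additionally takes the task-list lock and delegates the actual rotation to the LogRotate helper from patch 1/9, so this is illustration only:

```rust
// Standalone sketch of the rotation trigger: rotate only once the
// archive file has grown past 'size_threshold' bytes (~500k here,
// i.e. at least ~5000 task entries at ~100-150 bytes each).
use std::path::Path;

fn needs_rotation(path: &Path, size_threshold: u64) -> std::io::Result<bool> {
    let metadata = path.metadata()?; // errors if the archive does not exist yet
    Ok(metadata.len() > size_threshold)
}
```

Checking the size under the lock before touching any files keeps the common case (archive still small) cheap for the daily job.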

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/bin/proxmox-backup-proxy.rs | 96 +++++++++++++++++++++++++++++++++
 src/server/worker_task.rs       | 22 ++++++++
 2 files changed, 118 insertions(+)

diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 3272fe72..67fbc541 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -198,6 +198,7 @@ async fn schedule_tasks() -> Result<(), Error> {
     schedule_datastore_prune().await;
     schedule_datastore_verification().await;
     schedule_datastore_sync_jobs().await;
+    schedule_task_log_rotate().await;
 
     Ok(())
 }
@@ -655,6 +656,101 @@ async fn schedule_datastore_sync_jobs() {
     }
 }
 
+async fn schedule_task_log_rotate() {
+    use proxmox_backup::{
+        config::jobstate::{self, Job},
+        server::rotate_task_log_archive,
+    };
+    use proxmox_backup::server::WorkerTask;
+    use proxmox_backup::tools::systemd::time::{
+        parse_calendar_event, compute_next_event};
+
+    let worker_type = "logrotate";
+    let job_id = "task-archive";
+
+    let last = match jobstate::last_run_time(worker_type, job_id) {
+        Ok(time) => time,
+        Err(err) => {
+            eprintln!("could not get last run time of task log archive rotation: {}", err);
+            return;
+        }
+    };
+
+    // schedule daily at 00:00 like normal logrotate
+    let schedule = "00:00";
+
+    let event = match parse_calendar_event(schedule) {
+        Ok(event) => event,
+        Err(err) => {
+            // should not happen?
+            eprintln!("unable to parse schedule '{}' - {}", schedule, err);
+            return;
+        }
+    };
+
+    let next = match compute_next_event(&event, last, false) {
+        Ok(Some(next)) => next,
+        Ok(None) => return,
+        Err(err) => {
+            eprintln!("compute_next_event for '{}' failed - {}", schedule, err);
+            return;
+        }
+    };
+
+    let now = proxmox::tools::time::epoch_i64();
+
+    if next > now {
+        // if we never ran the rotation, schedule instantly
+        match jobstate::JobState::load(worker_type, job_id) {
+            Ok(state) => match state {
+                jobstate::JobState::Created { .. } => {},
+                _ => return,
+            },
+            _ => return,
+        }
+    }
+
+    let mut job = match Job::new(worker_type, job_id) {
+        Ok(job) => job,
+        Err(_) => return, // could not get lock
+    };
+
+    if let Err(err) = WorkerTask::new_thread(
+        worker_type,
+        Some(job_id.to_string()),
+        Userid::backup_userid().clone(),
+        false,
+        move |worker| {
+            job.start(&worker.upid().to_string())?;
+            worker.log(format!("starting task log rotation"));
+            // one entry has normally about ~100-150 bytes
+            let max_size = 500000; // at least 5000 entries
+            let max_files = 20; // at least 100000 entries
+            let result = try_block!({
+                let has_rotated = rotate_task_log_archive(max_size, true, Some(max_files))?;
+                if has_rotated {
+                    worker.log(format!("task log archive was rotated"));
+                } else {
+                    worker.log(format!("task log archive was not rotated"));
+                }
+
+                Ok(())
+            });
+
+            let status = worker.create_state(&result);
+
+            if let Err(err) = job.finish(status) {
+                eprintln!("could not finish job state for {}: {}", worker_type, err);
+            }
+
+            result
+        },
+    ) {
+        eprintln!("unable to start task log rotation: {}", err);
+    }
+
+}
+
 async fn run_stat_generator() {
 
     let mut count = 0;
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 2a343709..2b517a79 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -1,5 +1,6 @@
 use std::collections::{HashMap, VecDeque};
 use std::fs::File;
+use std::path::Path;
 use std::io::{Read, Write, BufRead, BufReader};
 use std::panic::UnwindSafe;
 use std::sync::atomic::{AtomicBool, Ordering};
@@ -339,6 +340,27 @@ fn lock_task_list_files(exclusive: bool) -> Result<std::fs::File, Error> {
     Ok(lock)
 }
 
+/// checks if the Task Archive is bigger than 'size_threshold' bytes, and
+/// rotates it if it is
+pub fn rotate_task_log_archive(size_threshold: u64, compress: bool, max_files: Option<usize>) -> Result<bool, Error> {
+    let _lock = lock_task_list_files(true)?;
+    let path = Path::new(PROXMOX_BACKUP_ARCHIVE_TASK_FN);
+    let metadata = path.metadata()?;
+    if metadata.len() > size_threshold {
+        let mut logrotate = LogRotate::new(PROXMOX_BACKUP_ARCHIVE_TASK_FN, compress).ok_or_else(|| format_err!("could not get archive file names"))?;
+        let backup_user = crate::backup::backup_user()?;
+        logrotate.rotate(
+            CreateOptions::new()
+                .owner(backup_user.uid)
+                .group(backup_user.gid),
+            max_files,
+        )?;
+        Ok(true)
+    } else {
+        Ok(false)
+    }
+}
+
 // atomically read/update the task list, update status of finished tasks
 // new_upid is added to the list when specified.
 fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
-- 
2.20.1


* [pbs-devel] applied: [PATCH proxmox-backup v2 0/9] improve task list handling
  2020-09-28 13:32 [pbs-devel] [PATCH proxmox-backup v2 0/9] improve task list handling Dominik Csapak
                   ` (8 preceding siblings ...)
  2020-09-28 13:32 ` [pbs-devel] [PATCH proxmox-backup v2 9/9] proxmox-backup-proxy: add task archive rotation Dominik Csapak
@ 2020-09-29  7:16 ` Dietmar Maurer
  9 siblings, 0 replies; 11+ messages in thread
From: Dietmar Maurer @ 2020-09-29  7:16 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

applied

