* [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling
@ 2020-09-25 14:13 Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 1/3] proxmox/tools/fs: add shared lock helper Dominik Csapak
` (13 more replies)
0 siblings, 14 replies; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
this series extends the task handling so that we can safely have more
than 1000 tasks and still filter and read them properly
it also introduces a daily task that rotates the newly introduced
task archive once it grows beyond 500k, keeping at most 20 files
strictly speaking the widget-toolkit patch is not necessary, but it
makes the user interface a bit nicer to use
proxmox:
Dominik Csapak (3):
proxmox/tools/fs: add shared lock helper
proxmox/tools/fs: create tmpfile helper
proxmox/tools: add logrotate module
proxmox/Cargo.toml | 1 +
proxmox/src/tools/fs.rs | 52 ++++++---
proxmox/src/tools/logrotate.rs | 188 +++++++++++++++++++++++++++++++++
proxmox/src/tools/mod.rs | 1 +
4 files changed, 226 insertions(+), 16 deletions(-)
create mode 100644 proxmox/src/tools/logrotate.rs
proxmox-backup:
Dominik Csapak (10):
api2/node/tasks: move userfilter to function signature
server/worker_task: refactor locking of the task list
server/worker_task: factor out task list rendering
server/worker_task: split task list file into two
server/worker_task: write older tasks into archive file
server/worker_task: add TaskListInfoIterator
api2/node/tasks: use TaskListInfoIterator instead of read_task_list
api2/status: use the TaskListInfoIterator here
server/worker_task: remove unnecessary read_task_list
proxmox-backup-proxy: add task archive rotation
src/api2/node/tasks.rs | 49 +++--
src/api2/status.rs | 32 +++-
src/bin/proxmox-backup-proxy.rs | 96 ++++++++++
src/server/worker_task.rs | 313 +++++++++++++++++++++++---------
4 files changed, 373 insertions(+), 117 deletions(-)
proxmox-widget-toolkit:
Dominik Csapak (1):
node/Tasks: improve scroller behaviour on datastore loading
src/node/Tasks.js | 8 ++++++++
1 file changed, 8 insertions(+)
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox 1/3] proxmox/tools/fs: add shared lock helper
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-28 5:10 ` [pbs-devel] applied: " Dietmar Maurer
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 2/3] proxmox/tools/fs: create tmpfile helper Dominik Csapak
` (12 subsequent siblings)
13 siblings, 1 reply; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
proxmox/src/tools/fs.rs | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/proxmox/src/tools/fs.rs b/proxmox/src/tools/fs.rs
index b1a95b5..2cd1a1b 100644
--- a/proxmox/src/tools/fs.rs
+++ b/proxmox/src/tools/fs.rs
@@ -505,15 +505,26 @@ pub fn lock_file<F: AsRawFd>(
Ok(())
}
+/// Open or create a lock file (append mode). Then try to
+/// acquire a shared lock using `lock_file()`.
+pub fn open_file_locked_shared<P: AsRef<Path>>(path: P, timeout: Duration) -> Result<File, Error> {
+ open_file_locked_impl(path, timeout, false)
+}
+
/// Open or create a lock file (append mode). Then try to
/// acquire a lock using `lock_file()`.
pub fn open_file_locked<P: AsRef<Path>>(path: P, timeout: Duration) -> Result<File, Error> {
+ open_file_locked_impl(path, timeout, true)
+}
+
+fn open_file_locked_impl<P: AsRef<Path>>(path: P, timeout: Duration, exclusive: bool) -> Result<File, Error> {
+
let path = path.as_ref();
let mut file = match OpenOptions::new().create(true).append(true).open(path) {
Ok(file) => file,
Err(err) => bail!("Unable to open lock {:?} - {}", path, err),
};
- match lock_file(&mut file, true, Some(timeout)) {
+ match lock_file(&mut file, exclusive, Some(timeout)) {
Ok(_) => Ok(file),
Err(err) => bail!("Unable to acquire lock {:?} - {}", path, err),
}
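A minimal usage sketch of the two helpers (not part of the patch); the lock path and the read_guard/write_guard function names are made up for illustration:
use std::fs::File;
use std::time::Duration;
use anyhow::Error;
use proxmox::tools::fs::{open_file_locked, open_file_locked_shared};
// hypothetical lock file path, only for illustration
const EXAMPLE_LOCK_FN: &str = "/run/example/.list.lock";
// readers take the shared lock, so they do not block each other
fn read_guard() -> Result<File, Error> {
    open_file_locked_shared(EXAMPLE_LOCK_FN, Duration::new(10, 0))
}
// a writer takes the exclusive lock and waits until all readers are done
fn write_guard() -> Result<File, Error> {
    open_file_locked(EXAMPLE_LOCK_FN, Duration::new(10, 0))
}
The returned File keeps the flock alive, so holding on to the guard is what keeps the lock.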
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox 2/3] proxmox/tools/fs: create tmpfile helper
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 1/3] proxmox/tools/fs: add shared lock helper Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-28 5:10 ` [pbs-devel] applied: " Dietmar Maurer
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 3/3] proxmox/tools: add logrotate module Dominik Csapak
` (11 subsequent siblings)
13 siblings, 1 reply; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
by factoring out the code we had in 'replace_file'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
proxmox/src/tools/fs.rs | 39 ++++++++++++++++++++++++---------------
1 file changed, 24 insertions(+), 15 deletions(-)
diff --git a/proxmox/src/tools/fs.rs b/proxmox/src/tools/fs.rs
index 2cd1a1b..3f96d9e 100644
--- a/proxmox/src/tools/fs.rs
+++ b/proxmox/src/tools/fs.rs
@@ -4,7 +4,7 @@ use std::ffi::CStr;
use std::fs::{File, OpenOptions};
use std::io::{self, BufRead, BufReader, Write};
use std::os::unix::io::{AsRawFd, FromRawFd, RawFd};
-use std::path::Path;
+use std::path::{Path, PathBuf};
use std::time::Duration;
use anyhow::{bail, format_err, Error};
@@ -121,14 +121,12 @@ pub fn file_read_firstline<P: AsRef<Path>>(path: P) -> Result<String, Error> {
.map_err(|err: Error| format_err!("unable to read {:?} - {}", path, err))
}
-/// Atomically replace a file.
-///
-/// This first creates a temporary file and then rotates it in place.
-pub fn replace_file<P: AsRef<Path>>(
+/// Takes a Path and CreateOptions, creates a tmpfile from it and returns
+/// a RawFd and PathBuf for it
+pub fn make_tmp_file<P: AsRef<Path>>(
path: P,
- data: &[u8],
options: CreateOptions,
-) -> Result<(), Error> {
+) -> Result<(RawFd, PathBuf), Error> {
let path = path.as_ref();
// Note: we use mkstemp here, because this works with different
@@ -140,8 +138,6 @@ pub fn replace_file<P: AsRef<Path>>(
Err(err) => bail!("mkstemp {:?} failed: {}", template, err),
};
- let tmp_path = tmp_path.as_path();
-
// clippy bug?: from_bits_truncate is actually a const fn...
#[allow(clippy::or_fun_call)]
let mode: stat::Mode = options
@@ -149,27 +145,40 @@ pub fn replace_file<P: AsRef<Path>>(
.unwrap_or(stat::Mode::from_bits_truncate(0o644));
if let Err(err) = stat::fchmod(fd, mode) {
- let _ = unistd::unlink(tmp_path);
+ let _ = unistd::unlink(&tmp_path);
bail!("fchmod {:?} failed: {}", tmp_path, err);
}
if options.owner.is_some() || options.group.is_some() {
if let Err(err) = fchown(fd, options.owner, options.group) {
- let _ = unistd::unlink(tmp_path);
+ let _ = unistd::unlink(&tmp_path);
bail!("fchown {:?} failed: {}", tmp_path, err);
}
}
+ Ok((fd, tmp_path))
+}
+
+/// Atomically replace a file.
+///
+/// This first creates a temporary file and then rotates it in place.
+pub fn replace_file<P: AsRef<Path>>(
+ path: P,
+ data: &[u8],
+ options: CreateOptions,
+) -> Result<(), Error> {
+ let (fd, tmp_path) = make_tmp_file(&path, options)?;
+
let mut file = unsafe { File::from_raw_fd(fd) };
if let Err(err) = file.write_all(data) {
- let _ = unistd::unlink(tmp_path);
+ let _ = unistd::unlink(&tmp_path);
bail!("write failed: {}", err);
}
- if let Err(err) = std::fs::rename(tmp_path, path) {
- let _ = unistd::unlink(tmp_path);
- bail!("Atomic rename failed for file {:?} - {}", path, err);
+ if let Err(err) = std::fs::rename(&tmp_path, &path) {
+ let _ = unistd::unlink(&tmp_path);
+ bail!("Atomic rename failed for file {:?} - {}", path.as_ref(), err);
}
Ok(())
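A minimal sketch (not part of the patch) of calling make_tmp_file directly, mirroring the cleanup pattern replace_file uses above; write_atomically and the target path are made up for illustration:
use std::fs::File;
use std::io::Write;
use std::os::unix::io::FromRawFd;
use anyhow::{bail, Error};
use nix::unistd;
use proxmox::tools::fs::{make_tmp_file, CreateOptions};
fn write_atomically(target: &str, data: &[u8]) -> Result<(), Error> {
    // creates "<target>.tmp_XXXXXX" with mode/owner from the CreateOptions already applied
    let (fd, tmp_path) = make_tmp_file(target, CreateOptions::new())?;
    let mut file = unsafe { File::from_raw_fd(fd) };
    if let Err(err) = file.write_all(data) {
        let _ = unistd::unlink(&tmp_path); // clean up the temp file on failure
        bail!("write failed: {}", err);
    }
    if let Err(err) = std::fs::rename(&tmp_path, target) {
        let _ = unistd::unlink(&tmp_path);
        bail!("atomic rename to {:?} failed: {}", target, err);
    }
    Ok(())
}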
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox 3/3] proxmox/tools: add logrotate module
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 1/3] proxmox/tools/fs: add shared lock helper Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 2/3] proxmox/tools/fs: create tmpfile helper Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-28 5:12 ` Dietmar Maurer
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 01/10] api2/node/tasks: move userfilter to function signature Dominik Csapak
` (10 subsequent siblings)
13 siblings, 1 reply; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
this is a helper to rotate and iterate over log files
there is an iterator over the open file handles as well as one over
just the file names
it also has the ability to rotate the files
since those files can be gzipped, we have to include flate2 as a dependency
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
proxmox/Cargo.toml | 1 +
proxmox/src/tools/logrotate.rs | 188 +++++++++++++++++++++++++++++++++
proxmox/src/tools/mod.rs | 1 +
3 files changed, 190 insertions(+)
create mode 100644 proxmox/src/tools/logrotate.rs
diff --git a/proxmox/Cargo.toml b/proxmox/Cargo.toml
index 68606da..aab77dd 100644
--- a/proxmox/Cargo.toml
+++ b/proxmox/Cargo.toml
@@ -21,6 +21,7 @@ nix = "0.16"
# tools module:
base64 = "0.12"
endian_trait = { version = "0.6", features = ["arrays"] }
+flate2 = "1.0"
regex = "1.2"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
diff --git a/proxmox/src/tools/logrotate.rs b/proxmox/src/tools/logrotate.rs
new file mode 100644
index 0000000..6d6c7ab
--- /dev/null
+++ b/proxmox/src/tools/logrotate.rs
@@ -0,0 +1,188 @@
+use std::path::{Path, PathBuf};
+use std::fs::{File, rename};
+use std::os::unix::io::FromRawFd;
+use std::io::Read;
+
+use anyhow::{bail, Error};
+use flate2::{
+ read::GzDecoder,
+ write::GzEncoder,
+ Compression,
+};
+use nix::unistd;
+
+use crate::tools::fs::{CreateOptions, make_tmp_file, replace_file};
+
+/// Used for rotating log files and iterating over them
+pub struct LogRotate {
+ base_path: PathBuf,
+ compress: bool,
+}
+
+impl LogRotate {
+ /// Creates a new instance if the path given is a valid file name
+ /// (iow. does not end with ..)
+ /// 'compress' decides whether compressed files will be created on
+ /// rotation, and whether '.gz' files will be searched when iterating
+ pub fn new<P: AsRef<Path>>(path: P, compress: bool) -> Option<Self> {
+ if path.as_ref().file_name().is_some() {
+ Some(Self {
+ base_path: path.as_ref().to_path_buf(),
+ compress,
+ })
+ } else {
+ None
+ }
+ }
+
+ /// Returns an iterator over the logrotated file names that exist
+ pub fn file_names(&self) -> LogRotateFileNames {
+ LogRotateFileNames {
+ base_path: self.base_path.clone(),
+ count: 0,
+ compress: self.compress
+ }
+ }
+
+ /// Returns an iterator over the logrotated file handles
+ pub fn files(&self) -> LogRotateFiles {
+ LogRotateFiles {
+ file_names: self.file_names(),
+ }
+ }
+
+ /// Rotates the files up to 'max_files'
+ /// if the 'compress' option was given it will compress the newest file
+ ///
+ /// e.g. rotates
+ /// foo.2.gz => foo.3.gz
+ /// foo.1.gz => foo.2.gz
+ /// foo => foo.1.gz
+ /// => foo
+ pub fn rotate(&mut self, options: CreateOptions, max_files: Option<usize>) -> Result<(), Error> {
+ let mut filenames: Vec<PathBuf> = self.file_names().collect();
+ if filenames.is_empty() {
+ return Ok(()); // no file means nothing to rotate
+ }
+
+ let mut next_filename = self.base_path.clone().canonicalize()?.into_os_string();
+
+ if self.compress {
+ next_filename.push(format!(".{}.gz", filenames.len()));
+ } else {
+ next_filename.push(format!(".{}", filenames.len()));
+ }
+
+ filenames.push(PathBuf::from(next_filename));
+ let count = filenames.len();
+
+ // rotate all but the first, that we maybe have to compress
+ for i in (1..count-1).rev() {
+ rename(&filenames[i], &filenames[i+1])?;
+ }
+
+ if self.compress {
+ let mut source = File::open(&filenames[0])?;
+ let (fd, tmp_path) = make_tmp_file(&filenames[1], options.clone())?;
+ let target = unsafe { File::from_raw_fd(fd) };
+ let mut encoder = GzEncoder::new(target, Compression::default());
+
+ if let Err(err) = std::io::copy(&mut source, &mut encoder) {
+ let _ = unistd::unlink(&tmp_path);
+ bail!("gzip encoding failed for file {:?} - {}", &filenames[1], err);
+ }
+
+ if let Err(err) = encoder.try_finish() {
+ let _ = unistd::unlink(&tmp_path);
+ bail!("gzip finish failed for file {:?} - {}", &filenames[1], err);
+ }
+
+ if let Err(err) = rename(&tmp_path, &filenames[1]) {
+ let _ = unistd::unlink(&tmp_path);
+ bail!("rename failed for file {:?} - {}", &filenames[1], err);
+ }
+
+ unistd::unlink(&filenames[0])?;
+ } else {
+ rename(&filenames[0], &filenames[1])?;
+ }
+
+ // create empty original file
+ replace_file(&filenames[0], b"", options)?;
+
+ if let Some(max_files) = max_files {
+ // delete all files > max_files
+ for file in filenames.iter().skip(max_files) {
+ if let Err(err) = unistd::unlink(file) {
+ eprintln!("could not remove {:?}: {}", &file, err);
+ }
+ }
+ }
+
+ Ok(())
+ }
+}
+
+/// Iterator over logrotated file names
+pub struct LogRotateFileNames {
+ base_path: PathBuf,
+ count: usize,
+ compress: bool,
+}
+
+impl Iterator for LogRotateFileNames {
+ type Item = PathBuf;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ if self.count > 0 {
+ let mut path: std::ffi::OsString = self.base_path.clone().into();
+
+ path.push(format!(".{}", self.count));
+ self.count += 1;
+
+ if Path::new(&path).is_file() {
+ Some(path.into())
+ } else if self.compress {
+ path.push(".gz");
+ if Path::new(&path).is_file() {
+ Some(path.into())
+ } else {
+ None
+ }
+ } else {
+ None
+ }
+ } else if self.base_path.is_file() {
+ self.count += 1;
+ Some(self.base_path.to_path_buf())
+ } else {
+ None
+ }
+ }
+}
+
+/// Iterator over logrotated files by returning a boxed reader
+pub struct LogRotateFiles {
+ file_names: LogRotateFileNames,
+}
+
+impl Iterator for LogRotateFiles {
+ type Item = Box<dyn Read + Send + Sync>;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ let filename = self.file_names.next()?;
+
+ if let Some(extension) = filename.extension() {
+ if extension == "gz" {
+ return match File::open(filename) {
+ Ok(file) => Some(Box::new(GzDecoder::new(file))),
+ Err(_) => None
+ }
+ }
+ }
+ match File::open(filename) {
+ Ok(file) => Some(Box::new(file)),
+ Err(_) => None
+ }
+ }
+}
diff --git a/proxmox/src/tools/mod.rs b/proxmox/src/tools/mod.rs
index df6c429..03261b1 100644
--- a/proxmox/src/tools/mod.rs
+++ b/proxmox/src/tools/mod.rs
@@ -20,6 +20,7 @@ pub mod serde;
pub mod time;
pub mod uuid;
pub mod vec;
+pub mod logrotate;
#[cfg(feature = "websocket")]
pub mod websocket;
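A minimal usage sketch (not part of the patch), assuming a made-up log path; rotate() shifts the base file into foo.1(.gz) and so on, while files() yields readers newest-first and transparently decompresses '.gz' rotations:
use std::io::Read;
use anyhow::{format_err, Error};
use proxmox::tools::fs::CreateOptions;
use proxmox::tools::logrotate::LogRotate;
fn rotate_and_dump(path: &str) -> Result<(), Error> {
    let mut logrotate = LogRotate::new(path, true)
        .ok_or_else(|| format_err!("invalid base path {:?}", path))?;
    // keep at most 5 files in total; older rotations get unlinked
    logrotate.rotate(CreateOptions::new(), Some(5))?;
    // iterate newest-first over the (now empty) base file and all rotations
    for mut reader in logrotate.files() {
        let mut content = String::new();
        reader.read_to_string(&mut content)?;
        print!("{}", content);
    }
    Ok(())
}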
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 01/10] api2/node/tasks: move userfilter to function signature
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (2 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 3/3] proxmox/tools: add logrotate module Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-28 5:18 ` [pbs-devel] applied: " Dietmar Maurer
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 02/10] server/worker_task: refactor locking of the task list Dominik Csapak
` (9 subsequent siblings)
13 siblings, 1 reply; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/node/tasks.rs | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/api2/node/tasks.rs b/src/api2/node/tasks.rs
index c8add6b4..e6a58a82 100644
--- a/src/api2/node/tasks.rs
+++ b/src/api2/node/tasks.rs
@@ -303,6 +303,7 @@ pub fn list_tasks(
limit: u64,
errors: bool,
running: bool,
+ userfilter: Option<String>,
param: Value,
mut rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
@@ -315,7 +316,6 @@ pub fn list_tasks(
let store = param["store"].as_str();
- let userfilter = param["userfilter"].as_str();
let list = server::read_task_list()?;
@@ -327,7 +327,7 @@ pub fn list_tasks(
if !list_all && info.upid.userid != userid { continue; }
- if let Some(userid) = userfilter {
+ if let Some(userid) = &userfilter {
if !info.upid.userid.as_str().contains(userid) { continue; }
}
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 02/10] server/worker_task: refactor locking of the task list
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (3 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 01/10] api2/node/tasks: move userfilter to function signature Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-28 5:28 ` Dietmar Maurer
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 03/10] server/worker_task: factor out task list rendering Dominik Csapak
` (8 subsequent siblings)
13 siblings, 1 reply; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
also add the functionality of having a 'shared' (read) lock for the list
we will need this later
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/server/worker_task.rs | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 771e8447..d220e755 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -15,7 +15,7 @@ use tokio::sync::oneshot;
use proxmox::sys::linux::procfs;
use proxmox::try_block;
-use proxmox::tools::fs::{create_path, open_file_locked, replace_file, CreateOptions};
+use proxmox::tools::fs::{create_path, open_file_locked, open_file_locked_shared, replace_file, CreateOptions};
use super::UPID;
@@ -325,6 +325,19 @@ pub struct TaskListInfo {
pub state: Option<TaskState>, // endtime, status
}
+fn lock_task_list_files(exclusive: bool) -> Result<std::fs::File, Error> {
+ let backup_user = crate::backup::backup_user()?;
+
+ let lock = if exclusive {
+ open_file_locked(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0))?
+ } else {
+ open_file_locked_shared(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0))?
+ };
+ nix::unistd::chown(PROXMOX_BACKUP_TASK_LOCK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
+
+ Ok(lock)
+}
+
// atomically read/update the task list, update status of finished tasks
// new_upid is added to the list when specified.
// Returns a sorted list of known tasks,
@@ -332,8 +345,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
let backup_user = crate::backup::backup_user()?;
- let lock = open_file_locked(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0))?;
- nix::unistd::chown(PROXMOX_BACKUP_TASK_LOCK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
+ let lock = lock_task_list_files(true)?;
let reader = match File::open(PROXMOX_BACKUP_ACTIVE_TASK_FN) {
Ok(f) => Some(BufReader::new(f)),
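A rough sketch (not part of the patch) of how the two lock modes are meant to be used inside worker_task.rs once the read-only iterator from a later patch lands; the example functions are made up and lock_task_list_files stays private to this module:
fn example_read_tasks() -> Result<(), Error> {
    // readers only need the shared lock, so they do not block each other
    let _lock = lock_task_list_files(false)?;
    // ... read the active/index task files here ...
    Ok(())
}
fn example_update_tasks() -> Result<(), Error> {
    // a writer takes the exclusive lock before rewriting the list files
    let _lock = lock_task_list_files(true)?;
    // ... rewrite the task files here ...
    Ok(())
}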
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 03/10] server/worker_task: factor out task list rendering
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (4 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 02/10] server/worker_task: refactor locking of the task list Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-28 5:31 ` [pbs-devel] applied: " Dietmar Maurer
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 04/10] server/worker_task: split task list file into two Dominik Csapak
` (7 subsequent siblings)
13 siblings, 1 reply; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
we will need this later again
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/server/worker_task.rs | 30 +++++++++++++++++++++---------
1 file changed, 21 insertions(+), 9 deletions(-)
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index d220e755..3b47dfb6 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -425,15 +425,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
}
});
- let mut raw = String::new();
- for info in &task_list {
- if let Some(status) = &info.state {
- raw.push_str(&format!("{} {:08X} {}\n", info.upid_str, status.endtime(), status));
- } else {
- raw.push_str(&info.upid_str);
- raw.push('\n');
- }
- }
+ let raw = render_task_list(&task_list[..]);
replace_file(
PROXMOX_BACKUP_ACTIVE_TASK_FN,
@@ -455,6 +447,26 @@ pub fn read_task_list() -> Result<Vec<TaskListInfo>, Error> {
update_active_workers(None)
}
+fn render_task_line(info: &TaskListInfo) -> String {
+ let mut raw = String::new();
+ if let Some(status) = &info.state {
+ raw.push_str(&format!("{} {:08X} {}\n", info.upid_str, status.endtime(), status));
+ } else {
+ raw.push_str(&info.upid_str);
+ raw.push('\n');
+ }
+
+ raw
+}
+
+fn render_task_list(list: &[TaskListInfo]) -> String {
+ let mut raw = String::new();
+ for info in list {
+ raw.push_str(&render_task_line(&info));
+ }
+ raw
+}
+
/// Launch long running worker tasks.
///
/// A worker task can either be a whole thread, or a simply tokio
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 04/10] server/worker_task: split task list file into two
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (5 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 03/10] server/worker_task: factor out task list rendering Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-28 5:43 ` Dietmar Maurer
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 05/10] server/worker_task: write older tasks into archive file Dominik Csapak
` (6 subsequent siblings)
13 siblings, 1 reply; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
one for only the active tasks and one for up to 1000 finished tasks
factor out the parsing of a task file (we will need this again later)
and use iterator combinators for easier-to-read code
we now sort the tasks in ascending order (this will become important in
a later patch) but reverse the result (for now) to keep compatibility
this code also omits converting into an intermediate hash map,
since we cannot really end up with duplicate tasks in this list
(the call is protected by a flock, and it is the only place where we
write into the lists)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/server/worker_task.rs | 143 +++++++++++++++++++++-----------------
1 file changed, 79 insertions(+), 64 deletions(-)
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 3b47dfb6..9152609e 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -31,6 +31,9 @@ pub const PROXMOX_BACKUP_LOG_DIR: &str = PROXMOX_BACKUP_LOG_DIR_M!();
pub const PROXMOX_BACKUP_TASK_DIR: &str = PROXMOX_BACKUP_TASK_DIR_M!();
pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/.active.lock");
pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/active");
+pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/index");
+
+const MAX_INDEX_TASKS: usize = 1000;
lazy_static! {
static ref WORKER_TASK_LIST: Mutex<HashMap<usize, Arc<WorkerTask>>> = Mutex::new(HashMap::new());
@@ -347,76 +350,46 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
let lock = lock_task_list_files(true)?;
- let reader = match File::open(PROXMOX_BACKUP_ACTIVE_TASK_FN) {
- Ok(f) => Some(BufReader::new(f)),
- Err(err) => {
- if err.kind() == std::io::ErrorKind::NotFound {
- None
- } else {
- bail!("unable to open active worker {:?} - {}", PROXMOX_BACKUP_ACTIVE_TASK_FN, err);
+ let mut finish_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN)?;
+ let mut active_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?
+ .into_iter()
+ .filter_map(|info| {
+ if info.state.is_some() { // should not happen?
+ finish_list.push(info);
+ return None;
}
- }
- };
- let mut active_list = vec![];
- let mut finish_list = vec![];
-
- if let Some(lines) = reader.map(|r| r.lines()) {
-
- for line in lines {
- let line = line?;
- match parse_worker_status_line(&line) {
- Err(err) => bail!("unable to parse active worker status '{}' - {}", line, err),
- Ok((upid_str, upid, state)) => match state {
- None if worker_is_active_local(&upid) => {
- active_list.push(TaskListInfo { upid, upid_str, state: None });
- },
- None => {
- println!("Detected stopped UPID {}", upid_str);
- let now = proxmox::tools::time::epoch_i64();
- let status = upid_read_status(&upid)
- .unwrap_or_else(|_| TaskState::Unknown { endtime: now });
- finish_list.push(TaskListInfo {
- upid, upid_str, state: Some(status)
- });
- },
- Some(status) => {
- finish_list.push(TaskListInfo {
- upid, upid_str, state: Some(status)
- })
- }
- }
+ if !worker_is_active_local(&info.upid) {
+ println!("Detected stopped UPID {}", &info.upid_str);
+ let now = proxmox::tools::time::epoch_i64();
+ let status = upid_read_status(&info.upid)
+ .unwrap_or_else(|_| TaskState::Unknown { endtime: now });
+ finish_list.push(TaskListInfo {
+ upid: info.upid,
+ upid_str: info.upid_str,
+ state: Some(status)
+ });
+ return None;
}
- }
- }
+
+ Some(info)
+ }).collect();
if let Some(upid) = new_upid {
active_list.push(TaskListInfo { upid: upid.clone(), upid_str: upid.to_string(), state: None });
}
- // assemble list without duplicates
- // we include all active tasks,
- // and fill up to 1000 entries with finished tasks
+ let active_raw = render_task_list(&active_list);
- let max = 1000;
-
- let mut task_hash = HashMap::new();
-
- for info in active_list {
- task_hash.insert(info.upid_str.clone(), info);
- }
-
- for info in finish_list {
- if task_hash.len() > max { break; }
- if !task_hash.contains_key(&info.upid_str) {
- task_hash.insert(info.upid_str.clone(), info);
- }
- }
-
- let mut task_list: Vec<TaskListInfo> = vec![];
- for (_, info) in task_hash { task_list.push(info); }
+ replace_file(
+ PROXMOX_BACKUP_ACTIVE_TASK_FN,
+ active_raw.as_bytes(),
+ CreateOptions::new()
+ .owner(backup_user.uid)
+ .group(backup_user.gid),
+ )?;
- task_list.sort_unstable_by(|b, a| { // lastest on top
+ finish_list.sort_unstable_by(|a, b| {
match (&a.state, &b.state) {
(Some(s1), Some(s2)) => s1.cmp(&s2),
(Some(_), None) => std::cmp::Ordering::Less,
@@ -425,11 +398,13 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
}
});
- let raw = render_task_list(&task_list[..]);
+ let start = finish_list.len().saturating_sub(MAX_INDEX_TASKS);
+ let end = (start+MAX_INDEX_TASKS).min(finish_list.len());
+ let index_raw = render_task_list(&finish_list[start..end]);
replace_file(
- PROXMOX_BACKUP_ACTIVE_TASK_FN,
- raw.as_bytes(),
+ PROXMOX_BACKUP_INDEX_TASK_FN,
+ index_raw.as_bytes(),
CreateOptions::new()
.owner(backup_user.uid)
.group(backup_user.gid),
@@ -437,7 +412,9 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
drop(lock);
- Ok(task_list)
+ finish_list.append(&mut active_list);
+ finish_list.reverse();
+ Ok(finish_list)
}
/// Returns a sorted list of known tasks
@@ -467,6 +444,44 @@ fn render_task_list(list: &[TaskListInfo]) -> String {
raw
}
+// note this is not locked, caller has to make sure it is
+// this will skip (and log) lines that are not valid status lines
+fn read_task_file<R: Read>(reader: R) -> Result<Vec<TaskListInfo>, Error>
+{
+ let reader = BufReader::new(reader);
+ let mut list = Vec::new();
+ for line in reader.lines() {
+ let line = line?;
+ match parse_worker_status_line(&line) {
+ Ok((upid_str, upid, state)) => list.push(TaskListInfo {
+ upid_str,
+ upid,
+ state
+ }),
+ Err(err) => {
+ eprintln!("unable to parse worker status '{}' - {}", line, err);
+ continue;
+ }
+ };
+ }
+
+ Ok(list)
+}
+
+// note this is not locked, caller has to make sure it is
+fn read_task_file_from_path<P>(path: P) -> Result<Vec<TaskListInfo>, Error>
+where
+ P: AsRef<std::path::Path> + std::fmt::Debug,
+{
+ let file = match File::open(&path) {
+ Ok(f) => f,
+ Err(err) if err.kind() == std::io::ErrorKind::NotFound => return Ok(Vec::new()),
+ Err(err) => bail!("unable to open task list {:?} - {}", path, err),
+ };
+
+ read_task_file(file)
+}
+
/// Launch long running worker tasks.
///
/// A worker task can either be a whole thread, or a simply tokio
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 05/10] server/worker_task: write older tasks into archive file
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (6 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 04/10] server/worker_task: split task list file into two Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 06/10] server/worker_task: add TaskListInfoIterator Dominik Csapak
` (5 subsequent siblings)
13 siblings, 0 replies; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
instead of removing tasks beyond the 1000 that fit into the index,
append them to an archive file
this way we can still read them later
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/server/worker_task.rs | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 9152609e..98047814 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -1,6 +1,6 @@
use std::collections::HashMap;
use std::fs::File;
-use std::io::{Read, BufRead, BufReader};
+use std::io::{Read, Write, BufRead, BufReader};
use std::panic::UnwindSafe;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
@@ -32,6 +32,7 @@ pub const PROXMOX_BACKUP_TASK_DIR: &str = PROXMOX_BACKUP_TASK_DIR_M!();
pub const PROXMOX_BACKUP_TASK_LOCK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/.active.lock");
pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/active");
pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/index");
+pub const PROXMOX_BACKUP_ARCHIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/archive");
const MAX_INDEX_TASKS: usize = 1000;
@@ -410,6 +411,19 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
.group(backup_user.gid),
)?;
+ if !finish_list.is_empty() && start > 0 {
+ match std::fs::OpenOptions::new().append(true).create(true).open(PROXMOX_BACKUP_ARCHIVE_TASK_FN) {
+ Ok(mut writer) => {
+ for info in &finish_list[0..start] {
+ writer.write_all(render_task_line(&info).as_bytes())?;
+ }
+ },
+ Err(err) => bail!("could not write task archive - {}", err),
+ }
+
+ nix::unistd::chown(PROXMOX_BACKUP_ARCHIVE_TASK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
+ }
+
drop(lock);
finish_list.append(&mut active_list);
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 06/10] server/worker_task: add TaskListInfoIterator
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (7 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 05/10] server/worker_task: write older tasks into archive file Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 07/10] api2/node/tasks: use TaskListInfoIterator instead of read_task_list Dominik Csapak
` (4 subsequent siblings)
13 siblings, 0 replies; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
this is an iterator that reads/parses/updates the task list as
necessary and returns the tasks in descending order (newest first)
it does this by using our logrotate iterator and a vecdeque
we can use this to iterate over all tasks, even if they are in the
archive and even if the archive has been rotated, while only reading
as much as we need
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/server/worker_task.rs | 82 ++++++++++++++++++++++++++++++++++++++-
1 file changed, 81 insertions(+), 1 deletion(-)
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 98047814..622453a1 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -1,4 +1,4 @@
-use std::collections::HashMap;
+use std::collections::{HashMap, VecDeque};
use std::fs::File;
use std::io::{Read, Write, BufRead, BufReader};
use std::panic::UnwindSafe;
@@ -16,6 +16,7 @@ use tokio::sync::oneshot;
use proxmox::sys::linux::procfs;
use proxmox::try_block;
use proxmox::tools::fs::{create_path, open_file_locked, open_file_locked_shared, replace_file, CreateOptions};
+use proxmox::tools::logrotate::{LogRotate, LogRotateFiles};
use super::UPID;
@@ -496,6 +497,85 @@ where
read_task_file(file)
}
+enum TaskFile {
+ Active,
+ Index,
+ Archive,
+}
+
+pub struct TaskListInfoIterator {
+ list: VecDeque<TaskListInfo>,
+ file: TaskFile,
+ archive: LogRotateFiles,
+ _lock: File,
+}
+
+impl TaskListInfoIterator {
+ pub fn new() -> Result<Self, Error> {
+ let (read_lock, active_list) = {
+ let lock = lock_task_list_files(false)?;
+ let active_list = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?;
+
+ let needs_update = active_list
+ .iter()
+ .any(|info| info.state.is_none() && !worker_is_active_local(&info.upid));
+
+ if needs_update {
+ drop(lock);
+ update_active_workers(None)?;
+ let lock = lock_task_list_files(false)?;
+ let active_list = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?;
+ (lock, active_list)
+ } else {
+ (lock, active_list)
+ }
+ };
+ let logrotate = LogRotate::new(PROXMOX_BACKUP_ARCHIVE_TASK_FN, true).ok_or_else(|| format_err!("could not get archive file names"))?;
+
+ Ok(Self {
+ list: active_list.into(),
+ file: TaskFile::Active,
+ archive: logrotate.files(),
+ _lock: read_lock,
+ })
+ }
+}
+
+impl Iterator for TaskListInfoIterator {
+ type Item = Result<TaskListInfo, Error>;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ loop {
+ if let Some(element) = self.list.pop_back() {
+ return Some(Ok(element));
+ } else {
+ match self.file {
+ TaskFile::Active => {
+ let index = match read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN) {
+ Ok(index) => index,
+ Err(err) => return Some(Err(err)),
+ };
+ self.list.append(&mut index.into());
+ self.file = TaskFile::Index;
+ },
+ TaskFile::Index | TaskFile::Archive => {
+ if let Some(file) = self.archive.next() {
+ let archive = match read_task_file(file) {
+ Ok(archive) => archive,
+ Err(err) => return Some(Err(err)),
+ };
+ self.list.append(&mut archive.into());
+ } else {
+ return None;
+ }
+ self.file = TaskFile::Archive;
+ }
+ }
+ }
+ }
+ }
+}
+
/// Launch long running worker tasks.
///
/// A worker task can either be a whole thread, or a simply tokio
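A minimal sketch (not part of the patch) of consuming the iterator; it walks from the active tasks through the index into the possibly rotated archive, newest first, and stops reading files once enough entries were taken:
use anyhow::Error;
use proxmox_backup::server::TaskListInfoIterator;
// print the 100 most recent tasks, no matter which file they currently live in
fn print_recent_tasks() -> Result<(), Error> {
    for info in TaskListInfoIterator::new()?.take(100) {
        let info = info?;
        match &info.state {
            Some(state) => println!("{} -> {}", info.upid_str, state),
            None => println!("{} (still running)", info.upid_str),
        }
    }
    Ok(())
}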
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 07/10] api2/node/tasks: use TaskListInfoIterator instead of read_task_list
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (8 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 06/10] server/worker_task: add TaskListInfoIterator Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 08/10] api2/status: use the TaskListInfoIterator here Dominik Csapak
` (3 subsequent siblings)
13 siblings, 0 replies; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
this makes the filtering/limiting much nicer and more readable
since we now iterate over a potentially 'infinite' amount of tasks and
cannot know beforehand how many there are, we return the total count
as one higher than requested iff we are not at the end (we are at the
end when the number of returned entries is smaller than the requested limit)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/node/tasks.rs | 45 +++++++++++++++++++++---------------------
1 file changed, 22 insertions(+), 23 deletions(-)
diff --git a/src/api2/node/tasks.rs b/src/api2/node/tasks.rs
index e6a58a82..683a413b 100644
--- a/src/api2/node/tasks.rs
+++ b/src/api2/node/tasks.rs
@@ -10,7 +10,7 @@ use proxmox::{identity, list_subdirs_api_method, sortable};
use crate::tools;
use crate::api2::types::*;
-use crate::server::{self, UPID, TaskState};
+use crate::server::{self, UPID, TaskState, TaskListInfoIterator};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
@@ -316,56 +316,55 @@ pub fn list_tasks(
let store = param["store"].as_str();
+ let list = TaskListInfoIterator::new()?;
- let list = server::read_task_list()?;
-
- let mut result = vec![];
-
- let mut count = 0;
-
- for info in list {
- if !list_all && info.upid.userid != userid { continue; }
+ let result: Vec<TaskListItem> = list.filter_map(|info| {
+ let info = match info {
+ Ok(info) => info,
+ Err(_) => return None,
+ };
+ if !list_all && info.upid.userid != userid { return None; }
if let Some(userid) = &userfilter {
- if !info.upid.userid.as_str().contains(userid) { continue; }
+ if !info.upid.userid.as_str().contains(userid) { return None; }
}
if let Some(store) = store {
// Note: useful to select all tasks spawned by proxmox-backup-client
let worker_id = match &info.upid.worker_id {
Some(w) => w,
- None => continue, // skip
+ None => return None, // skip
};
if info.upid.worker_type == "backup" || info.upid.worker_type == "restore" ||
info.upid.worker_type == "prune"
{
let prefix = format!("{}_", store);
- if !worker_id.starts_with(&prefix) { continue; }
+ if !worker_id.starts_with(&prefix) { return None; }
} else if info.upid.worker_type == "garbage_collection" {
- if worker_id != store { continue; }
+ if worker_id != store { return None; }
} else {
- continue; // skip
+ return None; // skip
}
}
if let Some(ref state) = info.state {
- if running { continue; }
+ if running { return None; }
match state {
- crate::server::TaskState::OK { .. } if errors => continue,
+ crate::server::TaskState::OK { .. } if errors => return None,
_ => {},
}
}
- if (count as u64) < start {
- count += 1;
- continue;
- } else {
- count += 1;
- }
+ Some(info.into())
+ }).skip(start as usize)
+ .take(limit as usize)
+ .collect();
- if (result.len() as u64) < limit { result.push(info.into()); };
+ let mut count = result.len() + start as usize;
+ if result.len() > 0 && result.len() >= limit as usize { // we have a 'virtual' entry as long as we have any new
+ count += 1;
}
rpcenv["total"] = Value::from(count);
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 08/10] api2/status: use the TaskListInfoIterator here
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (9 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 07/10] api2/node/tasks: use TaskListInfoIterator instead of read_task_list Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 09/10] server/worker_task: remove unnecessary read_task_list Dominik Csapak
` (2 subsequent siblings)
13 siblings, 0 replies; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
this means that limiting with epoch now works correctly
also change the api type to i64, since that is what the starttime is
saved as
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/status.rs | 32 ++++++++++++++++++++++++--------
1 file changed, 24 insertions(+), 8 deletions(-)
diff --git a/src/api2/status.rs b/src/api2/status.rs
index eb8c43cd..89bc4adc 100644
--- a/src/api2/status.rs
+++ b/src/api2/status.rs
@@ -182,7 +182,7 @@ fn datastore_status(
input: {
properties: {
since: {
- type: u64,
+ type: i64,
description: "Only list tasks since this UNIX epoch.",
optional: true,
},
@@ -200,6 +200,7 @@ fn datastore_status(
)]
/// List tasks.
pub fn list_tasks(
+ since: Option<i64>,
_param: Value,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Vec<TaskListItem>, Error> {
@@ -209,13 +210,28 @@ pub fn list_tasks(
let user_privs = user_info.lookup_privs(&userid, &["system", "tasks"]);
let list_all = (user_privs & PRIV_SYS_AUDIT) != 0;
-
- // TODO: replace with call that gets all task since 'since' epoch
- let list: Vec<TaskListItem> = server::read_task_list()?
- .into_iter()
- .map(TaskListItem::from)
- .filter(|entry| list_all || entry.user == userid)
- .collect();
+ let since = since.unwrap_or_else(|| 0);
+
+ let list: Vec<TaskListItem> = server::TaskListInfoIterator::new()?
+ .take_while(|info| {
+ match info {
+ Ok(info) => info.upid.starttime > since,
+ Err(_) => false
+ }
+ })
+ .filter_map(|info| {
+ match info {
+ Ok(info) => {
+ if list_all || info.upid.userid == userid {
+ Some(Ok(TaskListItem::from(info)))
+ } else {
+ None
+ }
+ }
+ Err(err) => Some(Err(err))
+ }
+ })
+ .collect::<Result<Vec<TaskListItem>, Error>>()?;
Ok(list.into())
}
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 09/10] server/worker_task: remove unnecessary read_task_list
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (10 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 08/10] api2/status: use the TaskListInfoIterator here Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 10/10] proxmox-backup-proxy: add task archive rotation Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH widget-toolkit 1/1] node/Tasks: improve scroller behaviour on datastore loading Dominik Csapak
13 siblings, 0 replies; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
since there are no users of this anymore and we now have a nicer
TaskListInfoIterator to use, we can drop this function
this also means that 'update_active_workers' does not need to return
a list anymore, since we never used that result except in
read_task_list
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/server/worker_task.rs | 14 ++------------
1 file changed, 2 insertions(+), 12 deletions(-)
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 622453a1..8838db38 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -345,8 +345,7 @@ fn lock_task_list_files(exclusive: bool) -> Result<std::fs::File, Error> {
// atomically read/update the task list, update status of finished tasks
// new_upid is added to the list when specified.
-// Returns a sorted list of known tasks,
-fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, Error> {
+fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
let backup_user = crate::backup::backup_user()?;
@@ -427,16 +426,7 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
drop(lock);
- finish_list.append(&mut active_list);
- finish_list.reverse();
- Ok(finish_list)
-}
-
-/// Returns a sorted list of known tasks
-///
-/// The list is sorted by `(starttime, endtime)` in ascending order
-pub fn read_task_list() -> Result<Vec<TaskListInfo>, Error> {
- update_active_workers(None)
+ Ok(())
}
fn render_task_line(info: &TaskListInfo) -> String {
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 10/10] proxmox-backup-proxy: add task archive rotation
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (11 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 09/10] server/worker_task: remove unnecessary read_task_list Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-25 14:13 ` [pbs-devel] [PATCH widget-toolkit 1/1] node/Tasks: improve scroller behaviour on datastore loading Dominik Csapak
13 siblings, 0 replies; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
this starts a task once a day at "00:00" that rotates the task log
archive if it is bigger than 500k
if we want, we can make the schedule/size limit/etc. configurable,
but for now it's ok to set fixed values for that
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/bin/proxmox-backup-proxy.rs | 96 +++++++++++++++++++++++++++++++++
src/server/worker_task.rs | 22 ++++++++
2 files changed, 118 insertions(+)
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 3272fe72..67fbc541 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -198,6 +198,7 @@ async fn schedule_tasks() -> Result<(), Error> {
schedule_datastore_prune().await;
schedule_datastore_verification().await;
schedule_datastore_sync_jobs().await;
+ schedule_task_log_rotate().await;
Ok(())
}
@@ -655,6 +656,101 @@ async fn schedule_datastore_sync_jobs() {
}
}
+async fn schedule_task_log_rotate() {
+ use proxmox_backup::{
+ config::jobstate::{self, Job},
+ server::rotate_task_log_archive,
+ };
+ use proxmox_backup::server::WorkerTask;
+ use proxmox_backup::tools::systemd::time::{
+ parse_calendar_event, compute_next_event};
+
+ let worker_type = "logrotate";
+ let job_id = "task-archive";
+
+ let last = match jobstate::last_run_time(worker_type, job_id) {
+ Ok(time) => time,
+ Err(err) => {
+ eprintln!("could not get last run time of task log archive rotation: {}", err);
+ return;
+ }
+ };
+
+ // schedule daily at 00:00 like normal logrotate
+ let schedule = "00:00";
+
+ let event = match parse_calendar_event(schedule) {
+ Ok(event) => event,
+ Err(err) => {
+ // should not happen?
+ eprintln!("unable to parse schedule '{}' - {}", schedule, err);
+ return;
+ }
+ };
+
+ let next = match compute_next_event(&event, last, false) {
+ Ok(Some(next)) => next,
+ Ok(None) => return,
+ Err(err) => {
+ eprintln!("compute_next_event for '{}' failed - {}", schedule, err);
+ return;
+ }
+ };
+
+ let now = proxmox::tools::time::epoch_i64();
+
+ if next > now {
+ // if we never ran the rotation, schedule instantly
+ match jobstate::JobState::load(worker_type, job_id) {
+ Ok(state) => match state {
+ jobstate::JobState::Created { .. } => {},
+ _ => return,
+ },
+ _ => return,
+ }
+ }
+
+ let mut job = match Job::new(worker_type, job_id) {
+ Ok(job) => job,
+ Err(_) => return, // could not get lock
+ };
+
+ if let Err(err) = WorkerTask::new_thread(
+ worker_type,
+ Some(job_id.to_string()),
+ Userid::backup_userid().clone(),
+ false,
+ move |worker| {
+ job.start(&worker.upid().to_string())?;
+ worker.log(format!("starting task log rotation"));
+ // one entry has normally about ~100-150 bytes
+ let max_size = 500000; // at least 5000 entries
+ let max_files = 20; // at least 100000 entries
+ let result = try_block!({
+ let has_rotated = rotate_task_log_archive(max_size, true, Some(max_files))?;
+ if has_rotated {
+ worker.log(format!("task log archive was rotated"));
+ } else {
+ worker.log(format!("task log archive was not rotated"));
+ }
+
+ Ok(())
+ });
+
+ let status = worker.create_state(&result);
+
+ if let Err(err) = job.finish(status) {
+ eprintln!("could not finish job state for {}: {}", worker_type, err);
+ }
+
+ result
+ },
+ ) {
+ eprintln!("unable to start task log rotation: {}", err);
+ }
+
+}
+
async fn run_stat_generator() {
let mut count = 0;
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 8838db38..c567e6be 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -1,5 +1,6 @@
use std::collections::{HashMap, VecDeque};
use std::fs::File;
+use std::path::Path;
use std::io::{Read, Write, BufRead, BufReader};
use std::panic::UnwindSafe;
use std::sync::atomic::{AtomicBool, Ordering};
@@ -343,6 +344,27 @@ fn lock_task_list_files(exclusive: bool) -> Result<std::fs::File, Error> {
Ok(lock)
}
+/// checks if the Task Archive is bigger that 'size_threshold' bytes, and
+/// rotates it if it is
+pub fn rotate_task_log_archive(size_threshold: u64, compress: bool, max_files: Option<usize>) -> Result<bool, Error> {
+ let _lock = lock_task_list_files(true)?;
+ let path = Path::new(PROXMOX_BACKUP_ARCHIVE_TASK_FN);
+ let metadata = path.metadata()?;
+ if metadata.len() > size_threshold {
+ let mut logrotate = LogRotate::new(PROXMOX_BACKUP_ARCHIVE_TASK_FN, compress).ok_or_else(|| format_err!("could not get archive file names"))?;
+ let backup_user = crate::backup::backup_user()?;
+ logrotate.rotate(
+ CreateOptions::new()
+ .owner(backup_user.uid)
+ .group(backup_user.gid),
+ max_files,
+ )?;
+ Ok(true)
+ } else {
+ Ok(false)
+ }
+}
+
// atomically read/update the task list, update status of finished tasks
// new_upid is added to the list when specified.
fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] [PATCH widget-toolkit 1/1] node/Tasks: improve scroller behaviour on datastore loading
2020-09-25 14:13 [pbs-devel] [PATCH proxmox/proxmox-backup/widget-toolkit] improve task list handling Dominik Csapak
` (12 preceding siblings ...)
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 10/10] proxmox-backup-proxy: add task archive rotation Dominik Csapak
@ 2020-09-25 14:13 ` Dominik Csapak
2020-09-29 7:19 ` [pbs-devel] applied: " Thomas Lamprecht
13 siblings, 1 reply; 23+ messages in thread
From: Dominik Csapak @ 2020-09-25 14:13 UTC (permalink / raw)
To: pbs-devel
when we have a fixed totalcount, the scrollbar behaves nicely,
but sometimes we do not have a fixed total, so the api will return
'+1' if there are more elements to load
when that happens, the scrollbar behaves weirdly and changes
size frequently the more one scrolls down
instead, tell the grid to update its layout as soon as the store
prefetches the data, which triggers an update of the scrollbar
this still lets it jump around a little, but only once per load and
more consistently
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/node/Tasks.js | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/src/node/Tasks.js b/src/node/Tasks.js
index c41f0b5..2d87300 100644
--- a/src/node/Tasks.js
+++ b/src/node/Tasks.js
@@ -28,6 +28,14 @@ Ext.define('Proxmox.node.Tasks', {
},
});
+ store.on('prefetch', function() {
+ // we want to update the scrollbar on every store load
+ // since the total count might be different
+ // the buffered grid plugin does this only on scrolling itself
+ // and even reduces the scrollheight again when scrolling up
+ me.updateLayout();
+ });
+
let userfilter = '';
let filter_errors = 0;
--
2.20.1
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] applied: [PATCH proxmox 1/3] proxmox/tools/fs: add shared lock helper
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 1/3] proxmox/tools/fs: add shared lock helper Dominik Csapak
@ 2020-09-28 5:10 ` Dietmar Maurer
0 siblings, 0 replies; 23+ messages in thread
From: Dietmar Maurer @ 2020-09-28 5:10 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
applied
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] applied: [PATCH proxmox 2/3] proxmox/tools/fs: create tmpfile helper
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 2/3] proxmox/tools/fs: create tmpfile helper Dominik Csapak
@ 2020-09-28 5:10 ` Dietmar Maurer
0 siblings, 0 replies; 23+ messages in thread
From: Dietmar Maurer @ 2020-09-28 5:10 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
applied
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [pbs-devel] [PATCH proxmox 3/3] proxmox/tools: add logrotate module
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox 3/3] proxmox/tools: add logrotate module Dominik Csapak
@ 2020-09-28 5:12 ` Dietmar Maurer
0 siblings, 0 replies; 23+ messages in thread
From: Dietmar Maurer @ 2020-09-28 5:12 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
> since those files can be gzipped, we have to include flate2 as dependency
This should be fast, so why do we want to use slow gzip?
I would really prefer zstd.
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] applied: [PATCH proxmox-backup 01/10] api2/node/tasks: move userfilter to function signature
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 01/10] api2/node/tasks: move userfilter to function signature Dominik Csapak
@ 2020-09-28 5:18 ` Dietmar Maurer
0 siblings, 0 replies; 23+ messages in thread
From: Dietmar Maurer @ 2020-09-28 5:18 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
applied
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [pbs-devel] [PATCH proxmox-backup 02/10] server/worker_task: refactor locking of the task list
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 02/10] server/worker_task: refactor locking of the task list Dominik Csapak
@ 2020-09-28 5:28 ` Dietmar Maurer
0 siblings, 0 replies; 23+ messages in thread
From: Dietmar Maurer @ 2020-09-28 5:28 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
> +fn lock_task_list_files(exclusive: bool) -> Result<std::fs::File, Error> {
> + let backup_user = crate::backup::backup_user()?;
> +
> + let lock = if exclusive {
> + open_file_locked(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0))?
> + } else {
> + open_file_locked_shared(PROXMOX_BACKUP_TASK_LOCK_FN, std::time::Duration::new(10, 0))?
> + };
> + nix::unistd::chown(PROXMOX_BACKUP_TASK_LOCK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
You should use fchown here on the open file handle.
> +
> + Ok(lock)
> +}
This starts to get clumsy. We have:
pub fn open_file_locked_shared<P: AsRef<Path>>(path: P, timeout: Duration) -> Result<File, Error>
and
pub fn open_file_locked<P: AsRef<Path>>(path: P, timeout: Duration) -> Result<File, Error>
and private:
fn open_file_locked_impl<P: AsRef<Path>>(path: P, timeout: Duration, exclusive: bool) -> Result<File, Error>
But we want:
pub fn open_file_locked<P: AsRef<Path>>(path: P, timeout: Duration, exclusive: bool) -> Result<File, Error>
...
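A rough sketch of the consolidated helper suggested above, merging both public wrappers into one function with an explicit 'exclusive' flag; this only illustrates the proposal (inside proxmox/src/tools/fs.rs, where OpenOptions, Path, Duration and lock_file are in scope) and is not code from the series:
pub fn open_file_locked<P: AsRef<Path>>(
    path: P,
    timeout: Duration,
    exclusive: bool,
) -> Result<File, Error> {
    let path = path.as_ref();
    // open or create the lock file in append mode, as the current helpers do
    let mut file = match OpenOptions::new().create(true).append(true).open(path) {
        Ok(file) => file,
        Err(err) => bail!("Unable to open lock {:?} - {}", path, err),
    };
    // shared or exclusive flock, depending on the caller
    match lock_file(&mut file, exclusive, Some(timeout)) {
        Ok(_) => Ok(file),
        Err(err) => bail!("Unable to acquire lock {:?} - {}", path, err),
    }
}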
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] applied: [PATCH proxmox-backup 03/10] server/worker_task: factor out task list rendering
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 03/10] server/worker_task: factor out task list rendering Dominik Csapak
@ 2020-09-28 5:31 ` Dietmar Maurer
0 siblings, 0 replies; 23+ messages in thread
From: Dietmar Maurer @ 2020-09-28 5:31 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
applied
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [pbs-devel] [PATCH proxmox-backup 04/10] server/worker_task: split task list file into two
2020-09-25 14:13 ` [pbs-devel] [PATCH proxmox-backup 04/10] server/worker_task: split task list file into two Dominik Csapak
@ 2020-09-28 5:43 ` Dietmar Maurer
0 siblings, 0 replies; 23+ messages in thread
From: Dietmar Maurer @ 2020-09-28 5:43 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
> + let mut finish_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN)?;
> + let mut active_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?
> + .into_iter()
> + .filter_map(|info| {
> + if info.state.is_some() { // should not happen?
> + finish_list.push(info);
> + return None;
> }
Those files have different formats now, so I would use a different parser. That way
we can avoid that "should not happen?" code...
^ permalink raw reply [flat|nested] 23+ messages in thread
* [pbs-devel] applied: [PATCH widget-toolkit 1/1] node/Tasks: improve scroller behaviour on datastore loading
2020-09-25 14:13 ` [pbs-devel] [PATCH widget-toolkit 1/1] node/Tasks: improve scroller behaviour on datastore loading Dominik Csapak
@ 2020-09-29 7:19 ` Thomas Lamprecht
0 siblings, 0 replies; 23+ messages in thread
From: Thomas Lamprecht @ 2020-09-29 7:19 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Dominik Csapak
On 25.09.20 16:13, Dominik Csapak wrote:
> when we have a fixed totalcount, the scrollbar behaves nicely,
> but sometimes we do not have a fixed total, so the api will return
> '+1' if there are more elements to load
>
> when doing this, the scrollbar has a weird behaviour where it changes
> size frequently the more one scrolls down.
>
> instead, tell the grid to update its layout as soon as the store
> prefetches the data, which triggers an update of the scrollbar
>
> this still lets it jump around a little, but only once per load and
> more consistent
>
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> src/node/Tasks.js | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
>
applied, thanks!
^ permalink raw reply [flat|nested] 23+ messages in thread