* [pbs-devel] [PATCH proxmox-backup v3 0/9] improve syncjob handling
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
saving the state in a separate file instead of parsing the task log
should be faster and more correct, since we no longer lose the last
sync's state when too many other tasks have been started in the meantime
we still have to do the same for all other task types that can be scheduled
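as a rough sketch of the lookup this enables (illustrative only; it uses the
last_run_time helper added in patch 3/9, and the job id is whatever sync job
is configured):

    use anyhow::Error;
    use proxmox_backup::config::jobstate;

    // instead of scanning the whole task archive for the newest matching
    // worker task, callers can read one small per-job state file
    fn last_sync_run(job_id: &str) -> Result<i64, Error> {
        // "syncjob" is the job type string used throughout this series
        jobstate::last_run_time("syncjob", job_id)
    }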
changes from v2:
* change upid_read_status to also return the endtime (parsed from the log)
* use wrapper struct that holds the state and a lock (Job)
* use that wrapper to hold the lock from loading until the worker is finished
* better comments and documentation
* squashed commits for schedules
* polling the abort_future in do_sync_jobs worker
* fix a bug in 8/9 (Proxmox.Utils vs Promxox.Utils)
changes from v1:
* rebase on master (new Userid struct)
* use an enum instead of a struct for jobstate since it
better represents what we want
(this way we do not have to save the starttime twice for example)
* add an enum for TaskState and use that (instead of strings)
(by doing this we can better parse the state for the syncview)
* incorporate Dietmar's suggestions
* refactor do_sync_job first and extend after that
* improve the state display in the gui
* make some columns smaller in the gui
Dominik Csapak (9):
server: change status of a task from a string to an enum
server/worker_task: let upid_read_status also return the endtime
config: add JobState helper
api2/admin/sync: use JobState for faster access to state info
api/{pull,sync}: refactor to do_sync_job
api2/pull: extend do_sync_job to also handle schedule and jobstate
syncjob: use do_sync_job also for scheduled sync jobs
ui: syncjob: improve task text rendering
ui: syncjob: make some columns smaller
src/api2/admin/sync.rs | 76 ++++------
src/api2/config/sync.rs | 2 +
src/api2/node/tasks.rs | 8 +-
src/api2/pull.rs | 66 ++++++++
src/api2/types/mod.rs | 2 +-
src/bin/proxmox-backup-api.rs | 1 +
src/bin/proxmox-backup-proxy.rs | 83 ++--------
src/config.rs | 1 +
src/config/jobstate.rs | 258 ++++++++++++++++++++++++++++++++
src/server/worker_task.rs | 111 ++++++++++----
www/config/SyncView.js | 30 +++-
11 files changed, 482 insertions(+), 156 deletions(-)
create mode 100644 src/config/jobstate.rs
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 1/9] server: change status of a task from a string to an enum
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
representing a state via an enum makes more sense in this case
we also implement FromStr and Display to make it easy to convert from/to
a string
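a small usage sketch (illustrative only, based on the impls in the diff
below; the string forms are the ones written to the task log):

    use proxmox_backup::server::TaskState;

    fn example() -> Result<(), anyhow::Error> {
        // FromStr parses the status strings found in the task log ...
        let state: TaskState = "WARNINGS: 3".parse()?;
        assert_eq!(state, TaskState::Warning { count: 3 });
        // ... and Display renders them back in the same format
        assert_eq!(state.to_string(), "WARNINGS: 3");
        Ok(())
    }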
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/admin/sync.rs | 2 +-
src/api2/node/tasks.rs | 8 ++--
src/api2/types/mod.rs | 2 +-
src/server/worker_task.rs | 88 ++++++++++++++++++++++++++++++---------
4 files changed, 74 insertions(+), 26 deletions(-)
diff --git a/src/api2/admin/sync.rs b/src/api2/admin/sync.rs
index f06625f6..79af03bc 100644
--- a/src/api2/admin/sync.rs
+++ b/src/api2/admin/sync.rs
@@ -56,7 +56,7 @@ pub fn list_sync_jobs(
if let Some(task) = last_tasks.get(&job.id) {
job.last_run_upid = Some(task.upid_str.clone());
if let Some((endtime, status)) = &task.state {
- job.last_run_state = Some(String::from(status));
+ job.last_run_state = Some(status.to_string());
job.last_run_endtime = Some(*endtime);
last = *endtime;
}
diff --git a/src/api2/node/tasks.rs b/src/api2/node/tasks.rs
index 6917869e..0bb043d9 100644
--- a/src/api2/node/tasks.rs
+++ b/src/api2/node/tasks.rs
@@ -10,7 +10,7 @@ use proxmox::{identity, list_subdirs_api_method, sortable};
use crate::tools;
use crate::api2::types::*;
-use crate::server::{self, UPID};
+use crate::server::{self, UPID, TaskState};
use crate::config::acl::{PRIV_SYS_AUDIT, PRIV_SYS_MODIFY};
use crate::config::cached_user_info::CachedUserInfo;
@@ -105,9 +105,9 @@ async fn get_task_status(
if crate::server::worker_is_active(&upid).await? {
result["status"] = Value::from("running");
} else {
- let exitstatus = crate::server::upid_read_status(&upid).unwrap_or(String::from("unknown"));
+ let exitstatus = crate::server::upid_read_status(&upid).unwrap_or(TaskState::Unknown);
result["status"] = Value::from("stopped");
- result["exitstatus"] = Value::from(exitstatus);
+ result["exitstatus"] = Value::from(exitstatus.to_string());
};
Ok(result)
@@ -352,7 +352,7 @@ pub fn list_tasks(
if let Some(ref state) = info.state {
if running { continue; }
- if errors && state.1 == "OK" {
+ if errors && state.1 == crate::server::TaskState::OK {
continue;
}
}
diff --git a/src/api2/types/mod.rs b/src/api2/types/mod.rs
index fb386d46..a619810d 100644
--- a/src/api2/types/mod.rs
+++ b/src/api2/types/mod.rs
@@ -595,7 +595,7 @@ impl From<crate::server::TaskListInfo> for TaskListItem {
fn from(info: crate::server::TaskListInfo) -> Self {
let (endtime, status) = info
.state
- .map_or_else(|| (None, None), |(a,b)| (Some(a), Some(b)));
+ .map_or_else(|| (None, None), |(a,b)| (Some(a), Some(b.to_string())));
TaskListItem {
upid: info.upid_str,
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index ba83de60..62cceb3d 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -11,6 +11,7 @@ use futures::*;
use lazy_static::lazy_static;
use nix::unistd::Pid;
use serde_json::{json, Value};
+use serde::{Serialize, Deserialize};
use tokio::sync::oneshot;
use proxmox::sys::linux::procfs;
@@ -190,8 +191,8 @@ pub fn create_task_log_dirs() -> Result<(), Error> {
}
/// Read exits status from task log file
-pub fn upid_read_status(upid: &UPID) -> Result<String, Error> {
- let mut status = String::from("unknown");
+pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> {
+ let mut status = TaskState::Unknown;
let path = upid.log_path();
@@ -212,12 +213,8 @@ pub fn upid_read_status(upid: &UPID) -> Result<String, Error> {
match iter.next() {
None => continue,
Some(rest) => {
- if rest == "OK" {
- status = String::from(rest);
- } else if rest.starts_with("WARNINGS: ") {
- status = String::from(rest);
- } else if rest.starts_with("ERROR: ") {
- status = String::from(&rest[7..]);
+ if let Ok(state) = rest.parse() {
+ status = state;
}
}
}
@@ -226,6 +223,59 @@ pub fn upid_read_status(upid: &UPID) -> Result<String, Error> {
Ok(status)
}
+/// Task State
+#[derive(Debug, PartialEq, Serialize, Deserialize)]
+pub enum TaskState {
+ /// The Task ended with an undefined state
+ Unknown,
+ /// The Task ended and there were no errors or warnings
+ OK,
+ /// The Task had 'count' amount of warnings and no errors
+ Warning { count: u64 },
+ /// The Task ended with the error described in 'message'
+ Error { message: String },
+}
+
+impl TaskState {
+ fn result_text(&self) -> String {
+ match self {
+ TaskState::Error { message } => format!("TASK ERROR: {}", message),
+ other => format!("TASK {}", other),
+ }
+ }
+}
+
+impl std::fmt::Display for TaskState {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ match self {
+ TaskState::Unknown => write!(f, "unknown"),
+ TaskState::OK => write!(f, "OK"),
+ TaskState::Warning { count } => write!(f, "WARNINGS: {}", count),
+ TaskState::Error { message } => write!(f, "{}", message),
+ }
+ }
+}
+
+impl std::str::FromStr for TaskState {
+ type Err = Error;
+
+ fn from_str(s: &str) -> Result<Self, Self::Err> {
+ if s == "unknown" {
+ Ok(TaskState::Unknown)
+ } else if s == "OK" {
+ Ok(TaskState::OK)
+ } else if s.starts_with("WARNINGS: ") {
+ let count: u64 = s[10..].parse()?;
+ Ok(TaskState::Warning{ count })
+ } else if s.len() > 0 {
+ let message = if s.starts_with("ERROR: ") { &s[7..] } else { s }.to_string();
+ Ok(TaskState::Error{ message })
+ } else {
+ bail!("unable to parse Task Status '{}'", s);
+ }
+ }
+}
+
/// Task details including parsed UPID
///
/// If there is no `state`, the task is still running.
@@ -236,9 +286,7 @@ pub struct TaskListInfo {
/// UPID string representation
pub upid_str: String,
/// Task `(endtime, status)` if already finished
- ///
- /// The `status` is either `unknown`, `OK`, `WARN`, or `ERROR: ...`
- pub state: Option<(i64, String)>, // endtime, status
+ pub state: Option<(i64, TaskState)>, // endtime, status
}
// atomically read/update the task list, update status of finished tasks
@@ -278,14 +326,14 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
None => {
println!("Detected stopped UPID {}", upid_str);
let status = upid_read_status(&upid)
- .unwrap_or_else(|_| String::from("unknown"));
+ .unwrap_or_else(|_| TaskState::Unknown);
finish_list.push(TaskListInfo {
upid, upid_str, state: Some((Local::now().timestamp(), status))
});
},
Some((endtime, status)) => {
finish_list.push(TaskListInfo {
- upid, upid_str, state: Some((endtime, status))
+ upid, upid_str, state: Some((endtime, status.parse()?))
})
}
}
@@ -498,23 +546,23 @@ impl WorkerTask {
Ok(upid_str)
}
- /// get the Text of the result
- pub fn get_log_text(&self, result: &Result<(), Error>) -> String {
-
+ /// create state from self and a result
+ pub fn create_state(&self, result: &Result<(), Error>) -> TaskState {
let warn_count = self.data.lock().unwrap().warn_count;
if let Err(err) = result {
- format!("ERROR: {}", err)
+ TaskState::Error { message: err.to_string() }
} else if warn_count > 0 {
- format!("WARNINGS: {}", warn_count)
+ TaskState::Warning { count: warn_count }
} else {
- "OK".to_string()
+ TaskState::OK
}
}
/// Log task result, remove task from running list
pub fn log_result(&self, result: &Result<(), Error>) {
- self.log(format!("TASK {}", self.get_log_text(result)));
+ let state = self.create_state(result);
+ self.log(state.result_text());
WORKER_TASK_LIST.lock().unwrap().remove(&self.upid.task_id);
let _ = update_active_workers(None);
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 2/9] server/worker_task: let upid_read_status also return the endtime
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
the endtime should be the timestamp of the last log line
or if there is no log at all, the starttime
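for illustration, a minimal sketch of the timestamp handling added below
(it assumes, as the patch does, that a task log line starts with an
RFC 3339 timestamp followed by ': '; the sample line is made up):

    use anyhow::{format_err, Error};

    fn line_timestamp(line: &str) -> Result<i64, Error> {
        // e.g. "2020-08-13T08:29:00+02:00: TASK OK"
        let mut iter = line.splitn(2, ": ");
        let time_str = iter.next().unwrap_or("");
        Ok(chrono::DateTime::parse_from_rfc3339(time_str)
            .map_err(|err| format_err!("cannot parse '{}': {}", time_str, err))?
            .timestamp())
    }
    // the timestamp of the last line that parses becomes the endtime;
    // if nothing parses, upid.starttime is used instead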
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/node/tasks.rs | 2 +-
src/server/worker_task.rs | 27 ++++++++++++++++++---------
2 files changed, 19 insertions(+), 10 deletions(-)
diff --git a/src/api2/node/tasks.rs b/src/api2/node/tasks.rs
index 0bb043d9..1e9643ec 100644
--- a/src/api2/node/tasks.rs
+++ b/src/api2/node/tasks.rs
@@ -105,7 +105,7 @@ async fn get_task_status(
if crate::server::worker_is_active(&upid).await? {
result["status"] = Value::from("running");
} else {
- let exitstatus = crate::server::upid_read_status(&upid).unwrap_or(TaskState::Unknown);
+ let (_, exitstatus) = crate::server::upid_read_status(&upid).unwrap_or((0, TaskState::Unknown));
result["status"] = Value::from("stopped");
result["exitstatus"] = Value::from(exitstatus.to_string());
};
diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 62cceb3d..da1a877e 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -190,9 +190,12 @@ pub fn create_task_log_dirs() -> Result<(), Error> {
Ok(())
}
-/// Read exits status from task log file
-pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> {
+/// Read endtime (time of last log line) and exitstatus from task log file
+/// If there is not a single line with a valid datetime, we assume the
+/// starttime to be the endtime
+pub fn upid_read_status(upid: &UPID) -> Result<(i64, TaskState), Error> {
let mut status = TaskState::Unknown;
+ let mut time = upid.starttime;
let path = upid.log_path();
@@ -208,9 +211,15 @@ pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> {
for line in reader.lines() {
let line = line?;
- let mut iter = line.splitn(2, ": TASK ");
- if iter.next() == None { continue; }
- match iter.next() {
+ let mut iter = line.splitn(2, ": ");
+ if let Some(time_str) = iter.next() {
+ time = chrono::DateTime::parse_from_rfc3339(time_str)
+ .map_err(|err| format_err!("cannot parse '{}': {}", time_str, err))?
+ .timestamp();
+ } else {
+ continue;
+ }
+ match iter.next().and_then(|rest| rest.strip_prefix("TASK ")) {
None => continue,
Some(rest) => {
if let Ok(state) = rest.parse() {
@@ -220,7 +229,7 @@ pub fn upid_read_status(upid: &UPID) -> Result<TaskState, Error> {
}
}
- Ok(status)
+ Ok((time, status))
}
/// Task State
@@ -325,10 +334,10 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<Vec<TaskListInfo>, E
},
None => {
println!("Detected stopped UPID {}", upid_str);
- let status = upid_read_status(&upid)
- .unwrap_or_else(|_| TaskState::Unknown);
+ let (time, status) = upid_read_status(&upid)
+ .unwrap_or_else(|_| (Local::now().timestamp(), TaskState::Unknown));
finish_list.push(TaskListInfo {
- upid, upid_str, state: Some((Local::now().timestamp(), status))
+ upid, upid_str, state: Some((time, status))
});
},
Some((endtime, status)) => {
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 3/9] config: add JobState helper
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
this is intended to be a generic helper to (de)serialize job states
(e.g., sync, verify, and so on)
it writes a JSON file to '/var/lib/proxmox-backup/jobstates/TYPE-ID.json'
the API daemon creates the directory with the correct permissions, just
like the rrd directory
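for illustration, roughly what such a state file could contain (a sketch
only, derived from the serde attributes in the new jobstate.rs below:
externally tagged enum, kebab-case variant names; the UPID string is a
placeholder and the timestamp is a sample value):

    use proxmox_backup::config::jobstate::JobState;
    use proxmox_backup::server::TaskState;

    fn example() -> Result<(), anyhow::Error> {
        let state = JobState::Finished {
            upid: "UPID:...".to_string(), // placeholder, not a real UPID
            endtime: 1597307340,
            state: TaskState::OK,
        };
        // e.g. /var/lib/proxmox-backup/jobstates/syncjob-myfirstsyncjob.json:
        // {"finished":{"upid":"UPID:...","endtime":1597307340,"state":"OK"}}
        println!("{}", serde_json::to_string(&state)?);
        Ok(())
    }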
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/bin/proxmox-backup-api.rs | 1 +
src/config.rs | 1 +
src/config/jobstate.rs | 258 ++++++++++++++++++++++++++++++++++
3 files changed, 260 insertions(+)
create mode 100644 src/config/jobstate.rs
diff --git a/src/bin/proxmox-backup-api.rs b/src/bin/proxmox-backup-api.rs
index 9dde46c0..ea306cf0 100644
--- a/src/bin/proxmox-backup-api.rs
+++ b/src/bin/proxmox-backup-api.rs
@@ -37,6 +37,7 @@ async fn run() -> Result<(), Error> {
config::update_self_signed_cert(false)?;
proxmox_backup::rrd::create_rrdb_dir()?;
+ proxmox_backup::config::jobstate::create_jobstate_dir()?;
if let Err(err) = generate_auth_key() {
bail!("unable to generate auth key - {}", err);
diff --git a/src/config.rs b/src/config.rs
index 2aeccaec..c2ac6da1 100644
--- a/src/config.rs
+++ b/src/config.rs
@@ -18,6 +18,7 @@ use crate::buildcfg;
pub mod acl;
pub mod cached_user_info;
pub mod datastore;
+pub mod jobstate;
pub mod network;
pub mod remote;
pub mod sync;
diff --git a/src/config/jobstate.rs b/src/config/jobstate.rs
new file mode 100644
index 00000000..45672cea
--- /dev/null
+++ b/src/config/jobstate.rs
@@ -0,0 +1,258 @@
+//! Generic JobState handling
+//!
+//! A 'Job' can have 3 states
+//! - Created, when a schedule was created but never executed
+//! - Started, when a job is running right now
+//! - Finished, when a job was running in the past
+//!
+//! and is identified by 2 values: jobtype and jobname (e.g. 'syncjob' and 'myfirstsyncjob')
+//!
+//! This module provides 2 helper structs to handle those conditions
+//! 'Job' which handles locking and writing to a file
+//! 'JobState' which is the actual state
+//!
+//! an example usage would be
+//! ```no_run
+//! # use anyhow::{bail, Error};
+//! # use proxmox_backup::server::TaskState;
+//! # use proxmox_backup::config::jobstate::*;
+//! # fn some_code() -> TaskState { TaskState::OK }
+//! # fn code() -> Result<(), Error> {
+//! // locks the correct file under /var/lib
+//! // or fails if someone else holds the lock
+//! let mut job = match Job::new("jobtype", "jobname") {
+//! Ok(job) => job,
+//! Err(err) => bail!("could not lock jobstate"),
+//! };
+//!
+//! // job holds the lock
+//! match job.load() {
+//! Ok(()) => {},
+//! Err(err) => bail!("could not load state {}", err),
+//! }
+//!
+//! // now the job is loaded;
+//! job.start("someupid")?;
+//! // do something
+//! let task_state = some_code();
+//! job.finish(task_state)?;
+//!
+//! // release the lock
+//! drop(job);
+//! # Ok(())
+//! # }
+//!
+//! ```
+use std::fs::File;
+use std::path::{Path, PathBuf};
+use std::time::Duration;
+
+use serde::{Serialize, Deserialize};
+use anyhow::{bail, Error, format_err};
+use proxmox::tools::fs::{file_read_optional_string, replace_file, create_path, CreateOptions, open_file_locked};
+
+use crate::tools::epoch_now_u64;
+use crate::server::{TaskState, UPID, worker_is_active_local, upid_read_status};
+
+#[serde(rename_all="kebab-case")]
+#[derive(Serialize,Deserialize)]
+/// Represents the State of a specific Job
+pub enum JobState {
+ /// A job was created at 'time', but never started/finished
+ Created { time: i64 },
+ /// The Job was last started in 'upid',
+ Started { upid: String },
+ /// The Job was last started in 'upid', which finished with 'state' at 'endtime'
+ Finished { upid: String, endtime: i64, state: TaskState }
+}
+
+/// Represents a Job and holds the correct lock
+pub struct Job {
+ jobtype: String,
+ jobname: String,
+ /// The State of the job
+ pub state: JobState,
+ _lock: File,
+}
+
+const JOB_STATE_BASEDIR: &str = "/var/lib/proxmox-backup/jobstates";
+
+/// Create jobstate stat dir with correct permission
+pub fn create_jobstate_dir() -> Result<(), Error> {
+ let backup_user = crate::backup::backup_user()?;
+ let opts = CreateOptions::new()
+ .owner(backup_user.uid)
+ .group(backup_user.gid);
+
+ create_path(JOB_STATE_BASEDIR, None, Some(opts))
+ .map_err(|err: Error| format_err!("unable to create rrdb stat dir - {}", err))?;
+
+ Ok(())
+}
+
+fn get_path(jobtype: &str, jobname: &str) -> PathBuf {
+ let mut path = PathBuf::from(JOB_STATE_BASEDIR);
+ path.push(format!("{}-{}.json", jobtype, jobname));
+ path
+}
+
+fn get_lock<P>(path: P) -> Result<File, Error>
+where
+ P: AsRef<Path>
+{
+ let mut path = path.as_ref().to_path_buf();
+ path.set_extension("lck");
+ open_file_locked(path, Duration::new(10, 0))
+}
+
+/// Removes the statefile of a job, this is useful if we delete a job
+pub fn remove_state_file(jobtype: &str, jobname: &str) -> Result<(), Error> {
+ let path = get_path(jobtype, jobname);
+ let _lock = get_lock(&path)?;
+ std::fs::remove_file(&path).map_err(|err|
+ format_err!("cannot remove statefile for {} - {}: {}", jobtype, jobname, err)
+ )
+}
+
+/// Returns the last run time of a job by reading the statefile
+/// Note that this is not locked
+pub fn last_run_time(jobtype: &str, jobname: &str) -> Result<i64, Error> {
+ match JobState::load(jobtype, jobname)? {
+ JobState::Created { time } => Ok(time),
+ JobState::Started { upid } | JobState::Finished { upid, .. } => {
+ let upid: UPID = upid.parse().map_err(|err|
+ format_err!("could not parse upid from state: {}", err)
+ )?;
+ Ok(upid.starttime)
+ }
+ }
+}
+
+impl JobState {
+ /// Loads and deserializes the jobstate from type and name.
+ /// When the loaded state indicates a started UPID,
+ /// we go and check if it has already stopped, and
+ /// returning the correct state.
+ ///
+ /// This does not update the state in the file.
+ pub fn load(jobtype: &str, jobname: &str) -> Result<Self, Error> {
+ if let Some(state) = file_read_optional_string(get_path(jobtype, jobname))? {
+ match serde_json::from_str(&state)? {
+ JobState::Started { upid } => {
+ let parsed: UPID = upid.parse()
+ .map_err(|err| format_err!("error parsing upid: {}", err))?;
+
+ if !worker_is_active_local(&parsed) {
+ let (endtime, state) = upid_read_status(&parsed)
+ .map_err(|err| format_err!("error reading upid log status: {}", err))?;
+
+ Ok(JobState::Finished {
+ upid,
+ endtime,
+ state
+ })
+ } else {
+ Ok(JobState::Started { upid })
+ }
+ }
+ other => Ok(other),
+ }
+ } else {
+ Ok(JobState::Created {
+ time: epoch_now_u64()? as i64
+ })
+ }
+ }
+}
+
+impl Job {
+ /// Creates a new instance of a job with the correct lock held
+ /// (will be hold until the job is dropped again).
+ ///
+ /// This does not load the state from the file, to do that,
+ /// 'load' must be called
+ pub fn new(jobtype: &str, jobname: &str) -> Result<Self, Error> {
+ let path = get_path(jobtype, jobname);
+
+ let _lock = get_lock(&path)?;
+
+ Ok(Self{
+ jobtype: jobtype.to_string(),
+ jobname: jobname.to_string(),
+ state: JobState::Created {
+ time: epoch_now_u64()? as i64
+ },
+ _lock,
+ })
+ }
+
+ /// Loads the state from the statefile if it exists.
+ /// If not, it gets created. Updates 'Started' State to 'Finished'
+ /// if we detect the UPID already stopped
+ pub fn load(&mut self) -> Result<(), Error> {
+ self.state = JobState::load(&self.jobtype, &self.jobname)?;
+
+ if let Err(err) = self.write_state() {
+ bail!("could not write statefile: {}", err);
+ }
+
+ Ok(())
+ }
+
+ /// Start the job and update the statefile accordingly
+ /// Fails if the job was already started
+ pub fn start(&mut self, upid: &str) -> Result<(), Error> {
+ match self.state {
+ JobState::Started { .. } => {
+ bail!("cannot start job that is started!");
+ }
+ _ => {}
+ }
+
+ self.state = JobState::Started{
+ upid: upid.to_string(),
+ };
+
+ self.write_state()
+ }
+
+ /// Finish the job and update the statefile accordingly with the given taskstate
+ /// Fails if the job was not yet started
+ pub fn finish(&mut self, state: TaskState) -> Result<(), Error> {
+ let upid = match &self.state {
+ JobState::Created { .. } => bail!("cannot finish when not started"),
+ JobState::Started { upid } => upid,
+ JobState::Finished { upid, .. } => upid,
+ }.to_string();
+
+ let endtime: i64 = epoch_now_u64()? as i64;
+
+ self.state = JobState::Finished {
+ upid,
+ endtime,
+ state,
+ };
+
+ self.write_state()
+ }
+
+ fn write_state(&mut self) -> Result<(), Error> {
+ let serialized = serde_json::to_string(&self.state)?;
+ let path = get_path(&self.jobtype, &self.jobname);
+
+ let backup_user = crate::backup::backup_user()?;
+ let mode = nix::sys::stat::Mode::from_bits_truncate(0o0644);
+ // set the correct owner/group/permissions while saving file
+ // owner(rw) = backup, group(r)= backup
+ let options = CreateOptions::new()
+ .perm(mode)
+ .owner(backup_user.uid)
+ .group(backup_user.gid);
+
+ replace_file(
+ path,
+ serialized.as_bytes(),
+ options,
+ )
+ }
+}
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 4/9] api2/admin/sync: use JobState for faster access to state info
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
and delete the statefile again on syncjob removal
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/admin/sync.rs | 50 +++++++++++++++++------------------------
src/api2/config/sync.rs | 2 ++
2 files changed, 23 insertions(+), 29 deletions(-)
diff --git a/src/api2/admin/sync.rs b/src/api2/admin/sync.rs
index 79af03bc..2d8810dc 100644
--- a/src/api2/admin/sync.rs
+++ b/src/api2/admin/sync.rs
@@ -1,6 +1,4 @@
-use std::collections::HashMap;
-
-use anyhow::{Error};
+use anyhow::{format_err, Error};
use serde_json::Value;
use proxmox::api::{api, ApiMethod, Router, RpcEnvironment};
@@ -10,7 +8,8 @@ use proxmox::{list_subdirs_api_method, sortable};
use crate::api2::types::*;
use crate::api2::pull::{get_pull_parameters};
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
-use crate::server::{self, TaskListInfo, WorkerTask};
+use crate::server::UPID;
+use crate::config::jobstate::JobState;
use crate::tools::systemd::time::{
parse_calendar_event, compute_next_event};
@@ -34,33 +33,26 @@ pub fn list_sync_jobs(
let mut list: Vec<SyncJobStatus> = config.convert_to_typed_array("sync")?;
- let mut last_tasks: HashMap<String, &TaskListInfo> = HashMap::new();
- let tasks = server::read_task_list()?;
-
- for info in tasks.iter() {
- let worker_id = match &info.upid.worker_id {
- Some(id) => id,
- _ => { continue; },
+ for job in &mut list {
+ let last_state = JobState::load("syncjob", &job.id)
+ .map_err(|err| format_err!("could not open statefile for {}: {}", &job.id, err))?;
+ let (upid, endtime, state, starttime) = match last_state {
+ JobState::Created { time } => (None, None, None, time),
+ JobState::Started { upid } => {
+ let parsed_upid: UPID = upid.parse()?;
+ (Some(upid), None, None, parsed_upid.starttime)
+ },
+ JobState::Finished { upid, endtime, state } => {
+ let parsed_upid: UPID = upid.parse()?;
+ (Some(upid), Some(endtime), Some(state.to_string()), parsed_upid.starttime)
+ },
};
- if let Some(last) = last_tasks.get(worker_id) {
- if last.upid.starttime < info.upid.starttime {
- last_tasks.insert(worker_id.to_string(), &info);
- }
- } else {
- last_tasks.insert(worker_id.to_string(), &info);
- }
- }
- for job in &mut list {
- let mut last = 0;
- if let Some(task) = last_tasks.get(&job.id) {
- job.last_run_upid = Some(task.upid_str.clone());
- if let Some((endtime, status)) = &task.state {
- job.last_run_state = Some(status.to_string());
- job.last_run_endtime = Some(*endtime);
- last = *endtime;
- }
- }
+ job.last_run_upid = upid;
+ job.last_run_state = state;
+ job.last_run_endtime = endtime;
+
+ let last = job.last_run_endtime.unwrap_or_else(|| starttime);
job.next_run = (|| -> Option<i64> {
let schedule = job.schedule.as_ref()?;
diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index 8273a4b4..8b16192c 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -264,6 +264,8 @@ pub fn delete_sync_job(id: String, digest: Option<String>) -> Result<(), Error>
sync::save_config(&config)?;
+ crate::config::jobstate::remove_state_file("syncjob", &id)?;
+
Ok(())
}
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 5/9] api/{pull, sync}: refactor to do_sync_job
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
and move the pull parameters into the worker, so that the task log
contains the error if there is one
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/admin/sync.rs | 25 +++----------------------
src/api2/pull.rs | 32 ++++++++++++++++++++++++++++++++
2 files changed, 35 insertions(+), 22 deletions(-)
diff --git a/src/api2/admin/sync.rs b/src/api2/admin/sync.rs
index 2d8810dc..47824887 100644
--- a/src/api2/admin/sync.rs
+++ b/src/api2/admin/sync.rs
@@ -6,7 +6,7 @@ use proxmox::api::router::SubdirMap;
use proxmox::{list_subdirs_api_method, sortable};
use crate::api2::types::*;
-use crate::api2::pull::{get_pull_parameters};
+use crate::api2::pull::do_sync_job;
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
use crate::server::UPID;
use crate::config::jobstate::JobState;
@@ -76,7 +76,7 @@ pub fn list_sync_jobs(
}
)]
/// Runs the sync jobs manually.
-async fn run_sync_job(
+fn run_sync_job(
id: String,
_info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
@@ -87,26 +87,7 @@ async fn run_sync_job(
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
- let delete = sync_job.remove_vanished.unwrap_or(true);
- let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
-
- let upid_str = WorkerTask::spawn("syncjob", Some(id.clone()), userid, false, move |worker| async move {
-
- worker.log(format!("sync job '{}' start", &id));
-
- crate::client::pull::pull_store(
- &worker,
- &client,
- &src_repo,
- tgt_store.clone(),
- delete,
- Userid::backup_userid().clone(),
- ).await?;
-
- worker.log(format!("sync job '{}' end", &id));
-
- Ok(())
- })?;
+ let upid_str = do_sync_job(&id, sync_job, &userid)?;
Ok(upid_str)
}
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index b0db45c9..3265853b 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -12,6 +12,7 @@ use crate::client::{HttpClient, HttpClientOptions, BackupRepository, pull::pull_
use crate::api2::types::*;
use crate::config::{
remote,
+ sync::SyncJobConfig,
acl::{PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ},
cached_user_info::CachedUserInfo,
};
@@ -62,6 +63,37 @@ pub async fn get_pull_parameters(
Ok((client, src_repo, tgt_store))
}
+pub fn do_sync_job(
+ id: &str,
+ sync_job: SyncJobConfig,
+ userid: &Userid,
+) -> Result<String, Error> {
+
+ let job_id = id.to_string();
+
+ let upid_str = WorkerTask::spawn("syncjob", Some(id.to_string()), userid.clone(), false, move |worker| async move {
+ let delete = sync_job.remove_vanished.unwrap_or(true);
+ let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
+
+ worker.log(format!("sync job '{}' start", &job_id));
+
+ crate::client::pull::pull_store(
+ &worker,
+ &client,
+ &src_repo,
+ tgt_store.clone(),
+ delete,
+ Userid::backup_userid().clone(),
+ ).await?;
+
+ worker.log(format!("sync job '{}' end", &job_id));
+
+ Ok(())
+ })?;
+
+ Ok(upid_str)
+}
+
#[api(
input: {
properties: {
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 6/9] api2/pull: extend do_sync_job to also handle schedule and jobstate
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
so that we can log whether it was triggered by a schedule, and write to a
jobstate file
it also now correctly polls the abort_future of the worker, so that
users can stop a running sync
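the abort handling boils down to racing the sync against the worker's abort
signal; a minimal, self-contained sketch of that pattern (generic futures
only, not the actual WorkerTask API):

    use anyhow::{format_err, Error};
    use futures::{future::FutureExt, select};
    use std::future::Future;

    // run `work`, but bail out early if `abort` fires first; whichever
    // future completes first decides the overall result
    async fn run_with_abort<W, A>(work: W, abort: A) -> Result<(), Error>
    where
        W: Future<Output = Result<(), Error>>,
        A: Future<Output = ()>,
    {
        // map the abort signal to an error so both branches share a type,
        // then fuse + pin so select! can poll them by reference
        let mut work = Box::pin(work.fuse());
        let mut abort = Box::pin(
            abort.map(|_| -> Result<(), Error> { Err(format_err!("aborted")) }).fuse(),
        );
        select! {
            res = work => res,
            res = abort => res,
        }
    }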
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/api2/admin/sync.rs | 7 +++--
src/api2/pull.rs | 64 ++++++++++++++++++++++++++++++++----------
2 files changed, 54 insertions(+), 17 deletions(-)
diff --git a/src/api2/admin/sync.rs b/src/api2/admin/sync.rs
index 47824887..c09bea4f 100644
--- a/src/api2/admin/sync.rs
+++ b/src/api2/admin/sync.rs
@@ -9,7 +9,7 @@ use crate::api2::types::*;
use crate::api2::pull::do_sync_job;
use crate::config::sync::{self, SyncJobStatus, SyncJobConfig};
use crate::server::UPID;
-use crate::config::jobstate::JobState;
+use crate::config::jobstate::{Job, JobState};
use crate::tools::systemd::time::{
parse_calendar_event, compute_next_event};
@@ -87,7 +87,10 @@ fn run_sync_job(
let userid: Userid = rpcenv.get_user().unwrap().parse()?;
- let upid_str = do_sync_job(&id, sync_job, &userid)?;
+ let mut job = Job::new("syncjob", &id)?;
+ job.load()?;
+
+ let upid_str = do_sync_job(&id, sync_job, &userid, None, job)?;
Ok(upid_str)
}
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 3265853b..e8eb35e1 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -2,6 +2,7 @@
use std::sync::{Arc};
use anyhow::{format_err, Error};
+use futures::{select, future::FutureExt};
use proxmox::api::api;
use proxmox::api::{ApiMethod, Router, RpcEnvironment, Permission};
@@ -13,6 +14,7 @@ use crate::api2::types::*;
use crate::config::{
remote,
sync::SyncJobConfig,
+ jobstate::Job,
acl::{PRIV_DATASTORE_BACKUP, PRIV_DATASTORE_PRUNE, PRIV_REMOTE_READ},
cached_user_info::CachedUserInfo,
};
@@ -67,29 +69,61 @@ pub fn do_sync_job(
id: &str,
sync_job: SyncJobConfig,
userid: &Userid,
+ schedule: Option<String>,
+ mut job: Job,
) -> Result<String, Error> {
let job_id = id.to_string();
+ let worker_type = "syncjob";
- let upid_str = WorkerTask::spawn("syncjob", Some(id.to_string()), userid.clone(), false, move |worker| async move {
- let delete = sync_job.remove_vanished.unwrap_or(true);
- let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
+ let upid_str = WorkerTask::spawn(
+ worker_type,
+ Some(id.to_string()),
+ userid.clone(),
+ false,
+ move |worker| async move {
- worker.log(format!("sync job '{}' start", &job_id));
+ job.start(&worker.upid().to_string())?;
- crate::client::pull::pull_store(
- &worker,
- &client,
- &src_repo,
- tgt_store.clone(),
- delete,
- Userid::backup_userid().clone(),
- ).await?;
+ let worker2 = worker.clone();
- worker.log(format!("sync job '{}' end", &job_id));
+ let worker_future = async move {
- Ok(())
- })?;
+ let delete = sync_job.remove_vanished.unwrap_or(true);
+ let (client, src_repo, tgt_store) = get_pull_parameters(&sync_job.store, &sync_job.remote, &sync_job.remote_store).await?;
+
+ worker.log(format!("Starting datastore sync job '{}'", job_id));
+ if let Some(event_str) = schedule {
+ worker.log(format!("task triggered by schedule '{}'", event_str));
+ }
+ worker.log(format!("Sync datastore '{}' from '{}/{}'",
+ sync_job.store, sync_job.remote, sync_job.remote_store));
+
+ crate::client::pull::pull_store(&worker, &client, &src_repo, tgt_store.clone(), delete, Userid::backup_userid().clone()).await?;
+
+ worker.log(format!("sync job '{}' end", &job_id));
+
+ Ok(())
+ };
+
+ let mut abort_future = worker2.abort_future().map(|_| Err(format_err!("sync aborted")));
+
+ let res = select!{
+ worker = worker_future.fuse() => worker,
+ abort = abort_future => abort,
+ };
+
+ let status = worker2.create_state(&res);
+
+ match job.finish(status) {
+ Ok(_) => {},
+ Err(err) => {
+ eprintln!("could not finish job state: {}", err);
+ }
+ }
+
+ res
+ })?;
Ok(upid_str)
}
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 7/9] syncjob: use do_sync_job also for scheduled sync jobs
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
and determine the last runtime with the jobstate
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
src/bin/proxmox-backup-proxy.rs | 83 +++++++--------------------------
1 file changed, 16 insertions(+), 67 deletions(-)
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 3f7bf3ec..c7f8bc79 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -18,6 +18,8 @@ use proxmox_backup::server::{ApiConfig, rest::*};
use proxmox_backup::auth_helpers::*;
use proxmox_backup::tools::disks::{ DiskManage, zfs_pool_stats };
+use proxmox_backup::api2::pull::do_sync_job;
+
fn main() {
proxmox_backup::tools::setup_safe_path_env();
@@ -472,10 +474,7 @@ async fn schedule_datastore_prune() {
async fn schedule_datastore_sync_jobs() {
use proxmox_backup::{
- backup::DataStore,
- client::{ HttpClient, HttpClientOptions, BackupRepository, pull::pull_store },
- server::{ WorkerTask },
- config::{ sync::{self, SyncJobConfig}, remote::{self, Remote} },
+ config::{ sync::{self, SyncJobConfig}, jobstate::{self, Job} },
tools::systemd::time::{ parse_calendar_event, compute_next_event },
};
@@ -487,14 +486,6 @@ async fn schedule_datastore_sync_jobs() {
Ok((config, _digest)) => config,
};
- let remote_config = match remote::config() {
- Err(err) => {
- eprintln!("unable to read remote config - {}", err);
- return;
- }
- Ok((config, _digest)) => config,
- };
-
for (job_id, (_, job_config)) in config.sections {
let job_config: SyncJobConfig = match serde_json::from_value(job_config) {
Ok(c) => c,
@@ -519,16 +510,10 @@ async fn schedule_datastore_sync_jobs() {
let worker_type = "syncjob";
- let last = match lookup_last_worker(worker_type, &job_id) {
- Ok(Some(upid)) => {
- if proxmox_backup::server::worker_is_active_local(&upid) {
- continue;
- }
- upid.starttime
- },
- Ok(None) => 0,
+ let last = match jobstate::last_run_time(worker_type, &job_id) {
+ Ok(time) => time,
Err(err) => {
- eprintln!("lookup_last_job_start failed: {}", err);
+ eprintln!("could not get last run time of {} {}: {}", worker_type, job_id, err);
continue;
}
};
@@ -550,57 +535,21 @@ async fn schedule_datastore_sync_jobs() {
};
if next > now { continue; }
-
- let job_id2 = job_id.clone();
-
- let tgt_store = match DataStore::lookup_datastore(&job_config.store) {
- Ok(datastore) => datastore,
- Err(err) => {
- eprintln!("lookup_datastore '{}' failed - {}", job_config.store, err);
- continue;
- }
- };
-
- let remote: Remote = match remote_config.lookup("remote", &job_config.remote) {
- Ok(remote) => remote,
- Err(err) => {
- eprintln!("remote_config lookup failed: {}", err);
- continue;
+ let job = match Job::new(worker_type, &job_id) {
+ Ok(mut job) => match job.load() {
+ Ok(_) => job,
+ Err(err) => {
+ eprintln!("error loading jobstate for {} - {}: {}", worker_type, &job_id, err);
+ continue;
+ }
}
+ Err(_) => continue, // could not get lock
};
let userid = Userid::backup_userid().clone();
- let delete = job_config.remove_vanished.unwrap_or(true);
-
- if let Err(err) = WorkerTask::spawn(
- worker_type,
- Some(job_id.clone()),
- userid.clone(),
- false,
- move |worker| async move {
- worker.log(format!("Starting datastore sync job '{}'", job_id));
- worker.log(format!("task triggered by schedule '{}'", event_str));
- worker.log(format!("Sync datastore '{}' from '{}/{}'",
- job_config.store, job_config.remote, job_config.remote_store));
-
- let options = HttpClientOptions::new()
- .password(Some(remote.password.clone()))
- .fingerprint(remote.fingerprint.clone());
-
- let client = HttpClient::new(&remote.host, &remote.userid, options)?;
- let _auth_info = client.login() // make sure we can auth
- .await
- .map_err(|err| format_err!("remote connection to '{}' failed - {}", remote.host, err))?;
-
- let src_repo = BackupRepository::new(Some(remote.userid), Some(remote.host), job_config.remote_store);
-
- pull_store(&worker, &client, &src_repo, tgt_store, delete, userid).await?;
-
- Ok(())
- }
- ) {
- eprintln!("unable to start datastore sync job {} - {}", job_id2, err);
+ if let Err(err) = do_sync_job(&job_id, job_config, &userid, Some(event_str), job) {
+ eprintln!("unable to start datastore sync job {} - {}", &job_id, err);
}
}
}
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 8/9] ui: syncjob: improve task text rendering
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
to also have the correct icons for warnings and unknown tasks
the text here is now "ERROR: ...", so leave the 'Error' out
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/config/SyncView.js | 22 +++++++++++++++++++---
1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/www/config/SyncView.js b/www/config/SyncView.js
index 634977c4..94e40d03 100644
--- a/www/config/SyncView.js
+++ b/www/config/SyncView.js
@@ -107,11 +107,27 @@ Ext.define('PBS.config.SyncJobView', {
return '';
}
- if (value === 'OK') {
- return `<i class="fa fa-check good"></i> ${gettext("OK")}`;
+ let parsed = Proxmox.Utils.parse_task_status(value);
+ let text = value;
+ let icon = '';
+ switch (parsed) {
+ case 'unknown':
+ icon = 'question faded';
+ text = Proxmox.Utils.unknownText;
+ break;
+ case 'error':
+ icon = 'times critical';
+ text = Proxmox.Utils.errorText + ': ' + value;
+ break;
+ case 'warning':
+ icon = 'exclamation warning';
+ break;
+ case 'ok':
+ icon = 'check good';
+ text = gettext("OK");
}
- return `<i class="fa fa-times critical"></i> ${gettext("Error")}:${value}`;
+ return `<i class="fa fa-${icon}"></i> ${text}`;
},
render_next_run: function(value, metadat, record) {
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup v3 9/9] ui: syncjob: make some columns smaller
From: Dominik Csapak @ 2020-08-13 8:29 UTC
To: pbs-devel
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
www/config/SyncView.js | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/www/config/SyncView.js b/www/config/SyncView.js
index 94e40d03..55a940da 100644
--- a/www/config/SyncView.js
+++ b/www/config/SyncView.js
@@ -214,26 +214,26 @@ Ext.define('PBS.config.SyncJobView', {
columns: [
{
header: gettext('Sync Job'),
- width: 200,
+ width: 100,
sortable: true,
renderer: Ext.String.htmlEncode,
dataIndex: 'id',
},
{
header: gettext('Remote'),
- width: 200,
+ width: 100,
sortable: true,
dataIndex: 'remote',
},
{
header: gettext('Remote Store'),
- width: 200,
+ width: 100,
sortable: true,
dataIndex: 'remote-store',
},
{
header: gettext('Local Store'),
- width: 200,
+ width: 100,
sortable: true,
dataIndex: 'store',
},
--
2.20.1
* [pbs-devel] applied: [PATCH proxmox-backup v3 0/9] improve syncjob handling
From: Dietmar Maurer @ 2020-08-13 9:52 UTC
To: Proxmox Backup Server development discussion, Dominik Csapak
applied