* [pbs-devel] [PATCH proxmox-backup 0/2] proxmox-log introduction
@ 2023-11-03 11:28 Gabriel Goller
2023-11-03 11:28 ` [pbs-devel] [PATCH proxmox 1/2] proxmox-log: added tracing infra Gabriel Goller
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Gabriel Goller @ 2023-11-03 11:28 UTC (permalink / raw)
To: pbs-devel
Removed the task_log! macro (and friends: task_warn!, task_debug!, etc.)
and introduced the `tracing` crate. We now initialize tracing
with a single layer, which logs to both the syslog and the task log.
It uses the `syslog` crate and the original `FileLogger`.
To write to the task logs from the worker threads and tasks, we now
have a task-local logger (and warning counter), which
is instantiated when a task/thread is created. This means that
when we call `info!` or any other `tracing` log macro inside a
worker task, it gets the `FileLogger` from TLS and writes to the file.
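The dispatch mechanism can be sketched with a std-only analogue (a `thread_local!` standing in for the `tokio::task_local!` from the patch, and `Vec<String>` sinks standing in for `FileLogger` and syslog — all names here are illustrative, not the actual API):

```rust
use std::cell::RefCell;

// Stand-in for the task-local FileLogger: present inside a worker
// task/thread, absent everywhere else.
thread_local! {
    static TASK_LOG: RefCell<Option<Vec<String>>> = RefCell::new(None);
}

/// Dispatch one log message: if a task-local logger is installed, write
/// to it; otherwise fall back to the syslog sink. Returns which sink
/// was used, mirroring the layer's "tasklog first, syslog fallback" rule.
fn dispatch(msg: &str, syslog: &mut Vec<String>) -> &'static str {
    TASK_LOG.with(|l| match l.borrow_mut().as_mut() {
        Some(task_log) => {
            task_log.push(msg.to_string());
            "tasklog"
        }
        None => {
            syslog.push(msg.to_string());
            "syslog"
        }
    })
}

fn main() {
    let mut syslog = Vec::new();

    // Outside a worker task: no task-local logger, message goes to syslog.
    assert_eq!(dispatch("starting daemon", &mut syslog), "syslog");

    // "Entering" a worker task installs the task-local logger;
    // subsequent messages land in the task log instead.
    TASK_LOG.with(|l| *l.borrow_mut() = Some(Vec::new()));
    assert_eq!(dispatch("task progress 50%", &mut syslog), "tasklog");
}
```

In the real patch the install step is `LOGGER.scope(...)` / `LOGGER.sync_scope(...)` around the worker future or thread, and the lookup is `LOGGER.try_with(...)` in the layer's `on_event`.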
Changes since RFC v2:
- Combined syslog and tasklog to single layer
- Infer the logging target from the FileLogger TLS
Changes since RFC v1:
- Remove the 'tasklog = true' attribute and infer the context
- Wrap the worker_thread or worker_task in a span with name
'worker_task'
- All events in the span with name 'worker_task' get logged to the
file_logger, everything else goes to syslog (Error events go to
both)
- Remove the `Option<>` around the `FileLogger` in TLS
- Clippy fixes
Gabriel Goller (1):
proxmox-log: added tracing infra
Cargo.toml | 6 +
proxmox-log/Cargo.toml | 23 ++++
proxmox-log/debian/changelog | 5 +
proxmox-log/debian/control | 53 +++++++++
proxmox-log/debian/copyright | 18 +++
proxmox-log/debian/debcargo.toml | 7 ++
.../src/file_logger.rs | 2 +-
proxmox-log/src/lib.rs | 34 ++++++
proxmox-log/src/syslog_tasklog_layer.rs | 106 +++++++++++++++++
proxmox-rest-server/Cargo.toml | 2 +
proxmox-rest-server/src/api_config.rs | 3 +-
proxmox-rest-server/src/lib.rs | 3 -
proxmox-rest-server/src/rest.rs | 4 +-
proxmox-rest-server/src/worker_task.rs | 108 +++++++++---------
proxmox-sys/src/worker_task_context.rs | 47 --------
15 files changed, 315 insertions(+), 106 deletions(-)
create mode 100644 proxmox-log/Cargo.toml
create mode 100644 proxmox-log/debian/changelog
create mode 100644 proxmox-log/debian/control
create mode 100644 proxmox-log/debian/copyright
create mode 100644 proxmox-log/debian/debcargo.toml
rename {proxmox-rest-server => proxmox-log}/src/file_logger.rs (98%)
create mode 100644 proxmox-log/src/lib.rs
create mode 100644 proxmox-log/src/syslog_tasklog_layer.rs
Gabriel Goller (1):
log: removed task_log! macro and moved to tracing
Cargo.toml | 61 ++++----
pbs-api-types/src/metrics.rs | 1 -
pbs-api-types/src/node.rs | 3 +-
pbs-datastore/Cargo.toml | 1 +
pbs-datastore/src/chunk_store.rs | 30 +---
pbs-datastore/src/datastore.rs | 70 ++++-----
src/api2/admin/datastore.rs | 24 ++--
src/api2/admin/metrics.rs | 2 +-
src/api2/config/acme.rs | 20 +--
src/api2/config/datastore.rs | 12 +-
src/api2/node/apt.rs | 13 +-
src/api2/node/certificates.rs | 64 ++++-----
src/api2/node/disks/directory.rs | 13 +-
src/api2/node/disks/mod.rs | 6 +-
src/api2/node/disks/zfs.rs | 30 ++--
src/api2/node/mod.rs | 11 +-
src/api2/pull.rs | 28 ++--
src/api2/tape/backup.rs | 77 ++++------
src/api2/tape/drive.rs | 135 +++++++----------
src/api2/tape/restore.rs | 240 ++++++++++---------------------
src/backup/verify.rs | 103 ++++---------
src/bin/proxmox-backup-api.rs | 10 +-
src/bin/proxmox-backup-proxy.rs | 42 ++----
src/server/gc_job.rs | 6 +-
src/server/prune_job.rs | 26 ++--
src/server/pull.rs | 166 ++++++++-------------
src/server/realm_sync_job.rs | 40 ++----
src/server/verify_job.rs | 10 +-
src/tape/drive/mod.rs | 34 ++---
src/tape/pool_writer/mod.rs | 85 +++++------
tests/worker-task-abort.rs | 9 +-
31 files changed, 502 insertions(+), 870 deletions(-)
--
2.39.2
^ permalink raw reply [flat|nested] 8+ messages in thread
* [pbs-devel] [PATCH proxmox 1/2] proxmox-log: added tracing infra
2023-11-03 11:28 [pbs-devel] [PATCH proxmox-backup 0/2] proxmox-log introduction Gabriel Goller
@ 2023-11-03 11:28 ` Gabriel Goller
2023-12-01 9:56 ` Lukas Wagner
2023-11-03 11:28 ` [pbs-devel] [PATCH proxmox-backup 2/2] log: removed task_log! macro and moved to tracing Gabriel Goller
2023-12-15 13:56 ` [pbs-devel] [PATCH proxmox-backup 0/2] proxmox-log introduction Gabriel Goller
2 siblings, 1 reply; 8+ messages in thread
From: Gabriel Goller @ 2023-11-03 11:28 UTC (permalink / raw)
To: pbs-devel
Added the `proxmox_log` crate, which contains the new logging infra.
It exports the `init_logger` function, which creates the `tracing` logger
consisting of the default subscriber and a single layer. The layer
checks whether a `tokio::task_local!` variable is set; if it is,
it logs to the task log using the `FileLogger`. If it isn't, the message
goes to the syslog (using the `syslog` crate).
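The level-selection part of `init_logger` (default level, overridable via an environment variable) boils down to the following std-only sketch; the `PBS_TEST_LOG` variable name and string levels are placeholders, the real code parses into `tracing_subscriber::filter::LevelFilter`:

```rust
use std::env;

/// Pick the effective log level: start from the default, let the given
/// environment variable override it when it parses to a valid level.
/// Invalid or unset values silently keep the default, as in the patch.
fn effective_level(env_var_name: &str, default: &str) -> String {
    const VALID: [&str; 6] = ["off", "error", "warn", "info", "debug", "trace"];
    match env::var(env_var_name) {
        Ok(v) if VALID.contains(&v.to_lowercase().as_str()) => v.to_lowercase(),
        _ => default.to_string(),
    }
}

fn main() {
    env::set_var("PBS_TEST_LOG", "debug");
    // Env var set and valid: it wins over the default.
    assert_eq!(effective_level("PBS_TEST_LOG", "info"), "debug");

    env::remove_var("PBS_TEST_LOG");
    // Env var unset: fall back to the caller-supplied default.
    assert_eq!(effective_level("PBS_TEST_LOG", "info"), "info");
}
```

After choosing the level, the real function registers the layer as the global default subscriber and routes the `log` crate's records through `LogTracer`.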
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
---
Cargo.toml | 6 +
proxmox-log/Cargo.toml | 23 ++++
proxmox-log/debian/changelog | 5 +
proxmox-log/debian/control | 53 +++++++++
proxmox-log/debian/copyright | 18 +++
proxmox-log/debian/debcargo.toml | 7 ++
.../src/file_logger.rs | 2 +-
proxmox-log/src/lib.rs | 34 ++++++
proxmox-log/src/syslog_tasklog_layer.rs | 106 +++++++++++++++++
proxmox-rest-server/Cargo.toml | 2 +
proxmox-rest-server/src/api_config.rs | 3 +-
proxmox-rest-server/src/lib.rs | 3 -
proxmox-rest-server/src/rest.rs | 4 +-
proxmox-rest-server/src/worker_task.rs | 108 +++++++++---------
proxmox-sys/src/worker_task_context.rs | 47 --------
15 files changed, 315 insertions(+), 106 deletions(-)
create mode 100644 proxmox-log/Cargo.toml
create mode 100644 proxmox-log/debian/changelog
create mode 100644 proxmox-log/debian/control
create mode 100644 proxmox-log/debian/copyright
create mode 100644 proxmox-log/debian/debcargo.toml
rename {proxmox-rest-server => proxmox-log}/src/file_logger.rs (98%)
create mode 100644 proxmox-log/src/lib.rs
create mode 100644 proxmox-log/src/syslog_tasklog_layer.rs
diff --git a/Cargo.toml b/Cargo.toml
index 265f5c9..246d6e4 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -13,6 +13,7 @@ members = [
"proxmox-io",
"proxmox-lang",
"proxmox-ldap",
+ "proxmox-log",
"proxmox-login",
"proxmox-metrics",
"proxmox-notify",
@@ -81,10 +82,14 @@ tokio = "1.6"
tokio-openssl = "0.6.1"
tokio-stream = "0.1.0"
tower-service = "0.3.0"
+tracing = "0.1"
+tracing-log = { version = "0.1.3", default-features = false }
+tracing-subscriber = "0.3.16"
url = "2.2"
walkdir = "2"
webauthn-rs = "0.3"
zstd = { version = "0.12", features = [ "bindgen" ] }
+syslog = "6"
# workspace dependencies
proxmox-api-macro = { version = "1.0.6", path = "proxmox-api-macro" }
@@ -95,6 +100,7 @@ proxmox-http-error = { version = "0.1.0", path = "proxmox-http-error" }
proxmox-human-byte = { version = "0.1.0", path = "proxmox-human-byte" }
proxmox-io = { version = "1.0.0", path = "proxmox-io" }
proxmox-lang = { version = "1.1", path = "proxmox-lang" }
+proxmox-log= { version = "0.1.0", path = "proxmox-log" }
proxmox-login = { version = "0.1.0", path = "proxmox-login" }
proxmox-rest-server = { version = "0.4.0", path = "proxmox-rest-server" }
proxmox-router = { version = "2.1.1", path = "proxmox-router" }
diff --git a/proxmox-log/Cargo.toml b/proxmox-log/Cargo.toml
new file mode 100644
index 0000000..e05b0be
--- /dev/null
+++ b/proxmox-log/Cargo.toml
@@ -0,0 +1,23 @@
+[package]
+name = "proxmox-log"
+version = "0.1.0"
+authors.workspace = true
+edition.workspace = true
+license.workspace = true
+repository.workspace = true
+description = "Logging infrastructure for proxmox"
+
+exclude.workspace = true
+
+[dependencies]
+anyhow.workspace = true
+syslog.workspace = true
+nix.workspace = true
+log.workspace = true
+tracing.workspace = true
+tracing-subscriber.workspace = true
+tracing-log.workspace = true
+tokio = { workspace = true, features = ["rt-multi-thread"] }
+proxmox-time.workspace = true
+proxmox-sys.workspace = true
+
diff --git a/proxmox-log/debian/changelog b/proxmox-log/debian/changelog
new file mode 100644
index 0000000..aaf8073
--- /dev/null
+++ b/proxmox-log/debian/changelog
@@ -0,0 +1,5 @@
+rust-proxmox-log (0.1.0-3) UNRELEASED; urgency=medium
+
+ * Initial release
+
+ -- Gabriel Goller <ggoller@luna.proxmox.com> Wed, 11 Oct 2023 15:13:58 +0200
diff --git a/proxmox-log/debian/control b/proxmox-log/debian/control
new file mode 100644
index 0000000..4f22ecb
--- /dev/null
+++ b/proxmox-log/debian/control
@@ -0,0 +1,53 @@
+Source: rust-proxmox-log
+Section: rust
+Priority: optional
+Build-Depends: debhelper (>= 12),
+ dh-cargo (>= 25),
+ cargo:native <!nocheck>,
+ rustc:native <!nocheck>,
+ libstd-rust-dev <!nocheck>,
+ librust-anyhow-1+default-dev <!nocheck>,
+ librust-log-0.4+default-dev (>= 0.4.17-~~) <!nocheck>,
+ librust-nix-0.26+default-dev (>= 0.26.1-~~) <!nocheck>,
+ librust-proxmox-sys-0.5+default-dev <!nocheck>,
+ librust-proxmox-time-1+default-dev (>= 1.1.4-~~) <!nocheck>,
+ librust-syslog-6+default-dev <!nocheck>,
+ librust-tokio-1+default-dev (>= 1.6-~~) <!nocheck>,
+ librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~) <!nocheck>,
+ librust-tracing-0.1+default-dev <!nocheck>,
+ librust-tracing-log-0.1-dev (>= 0.1.3-~~) <!nocheck>,
+ librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~) <!nocheck>
+Maintainer: Proxmox Support Team <support@proxmox.com>
+Standards-Version: 4.6.1
+Vcs-Git: git://git.proxmox.com/git/proxmox.git
+Vcs-Browser: https://git.proxmox.com/?p=proxmox.git
+X-Cargo-Crate: proxmox-log
+Rules-Requires-Root: no
+
+Package: librust-proxmox-log-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-anyhow-1+default-dev,
+ librust-log-0.4+default-dev (>= 0.4.17-~~),
+ librust-nix-0.26+default-dev (>= 0.26.1-~~),
+ librust-proxmox-sys-0.5+default-dev,
+ librust-proxmox-time-1+default-dev (>= 1.1.4-~~),
+ librust-syslog-6+default-dev,
+ librust-tokio-1+default-dev (>= 1.6-~~),
+ librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~),
+ librust-tracing-0.1+default-dev,
+ librust-tracing-log-0.1-dev (>= 0.1.3-~~),
+ librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~)
+Provides:
+ librust-proxmox-log+default-dev (= ${binary:Version}),
+ librust-proxmox-log-0-dev (= ${binary:Version}),
+ librust-proxmox-log-0+default-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1+default-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1.0-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1.0+default-dev (= ${binary:Version})
+Description: Logging infrastructure for proxmox - Rust source code
+ This package contains the source for the Rust proxmox-log crate, packaged by
+ debcargo for use with cargo and dh-cargo.
diff --git a/proxmox-log/debian/copyright b/proxmox-log/debian/copyright
new file mode 100644
index 0000000..0d9eab3
--- /dev/null
+++ b/proxmox-log/debian/copyright
@@ -0,0 +1,18 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+
+Files:
+ *
+Copyright: 2019 - 2023 Proxmox Server Solutions GmbH <support@proxmox.com>
+License: AGPL-3.0-or-later
+ This program is free software: you can redistribute it and/or modify it under
+ the terms of the GNU Affero General Public License as published by the Free
+ Software Foundation, either version 3 of the License, or (at your option) any
+ later version.
+ .
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU Affero General Public License along
+ with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/proxmox-log/debian/debcargo.toml b/proxmox-log/debian/debcargo.toml
new file mode 100644
index 0000000..b7864cd
--- /dev/null
+++ b/proxmox-log/debian/debcargo.toml
@@ -0,0 +1,7 @@
+overlay = "."
+crate_src_path = ".."
+maintainer = "Proxmox Support Team <support@proxmox.com>"
+
+[source]
+vcs_git = "git://git.proxmox.com/git/proxmox.git"
+vcs_browser = "https://git.proxmox.com/?p=proxmox.git"
diff --git a/proxmox-rest-server/src/file_logger.rs b/proxmox-log/src/file_logger.rs
similarity index 98%
rename from proxmox-rest-server/src/file_logger.rs
rename to proxmox-log/src/file_logger.rs
index 2bb1fac..c7e1d64 100644
--- a/proxmox-rest-server/src/file_logger.rs
+++ b/proxmox-log/src/file_logger.rs
@@ -30,7 +30,7 @@ pub struct FileLogOptions {
/// #### Example:
/// ```
/// # use anyhow::{bail, format_err, Error};
-/// use proxmox_rest_server::{flog, FileLogger, FileLogOptions};
+/// use proxmox_log::{flog, FileLogger, FileLogOptions};
///
/// # std::fs::remove_file("test.log");
/// let options = FileLogOptions {
diff --git a/proxmox-log/src/lib.rs b/proxmox-log/src/lib.rs
new file mode 100644
index 0000000..15fa22d
--- /dev/null
+++ b/proxmox-log/src/lib.rs
@@ -0,0 +1,34 @@
+use std::{cell::RefCell, env};
+use syslog_tasklog_layer::SyslogAndTasklogLayer;
+use tracing_log::{AsLog, LogTracer};
+use tracing_subscriber::filter::LevelFilter;
+use tracing_subscriber::prelude::*;
+
+mod file_logger;
+pub use file_logger::{FileLogOptions, FileLogger};
+
+mod syslog_tasklog_layer;
+
+tokio::task_local! {
+ pub static LOGGER: RefCell<FileLogger>;
+ pub static WARN_COUNTER: RefCell<u64>;
+}
+
+pub fn init_logger(
+ env_var_name: &str,
+ default_log_level: LevelFilter,
+ application_name: &str,
+) -> Result<(), anyhow::Error> {
+ let mut log_level = default_log_level;
+ if let Ok(v) = env::var(env_var_name) {
+ if let Ok(l) = v.parse::<LevelFilter>() {
+ log_level = l;
+ }
+ }
+ let registry = tracing_subscriber::registry()
+ .with(SyslogAndTasklogLayer::new(application_name.to_string()).with_filter(log_level));
+
+ tracing::subscriber::set_global_default(registry)?;
+ LogTracer::init_with_filter(log_level.as_log())?;
+ Ok(())
+}
diff --git a/proxmox-log/src/syslog_tasklog_layer.rs b/proxmox-log/src/syslog_tasklog_layer.rs
new file mode 100644
index 0000000..344a514
--- /dev/null
+++ b/proxmox-log/src/syslog_tasklog_layer.rs
@@ -0,0 +1,106 @@
+use std::fmt::Write as _;
+use std::sync::Arc;
+use std::sync::Mutex;
+
+use syslog::{Formatter3164, LoggerBackend};
+use tracing::field::Field;
+use tracing::field::Visit;
+use tracing::Event;
+use tracing::Level;
+use tracing::Subscriber;
+use tracing_subscriber::layer::Context;
+use tracing_subscriber::Layer;
+
+use crate::FileLogger;
+use crate::LOGGER;
+use crate::WARN_COUNTER;
+
+pub struct SyslogAndTasklogLayer {
+ syslog_logger: Arc<Mutex<syslog::Logger<LoggerBackend, Formatter3164>>>,
+}
+
+impl SyslogAndTasklogLayer {
+ pub fn new(application_name: String) -> Self {
+ let formatter = Formatter3164 {
+ facility: syslog::Facility::LOG_DAEMON,
+ process: application_name,
+ ..Formatter3164::default()
+ };
+
+ // we panic here if we can't initialize the syslogger
+ let logger = syslog::unix(formatter)
+ .map_err(|e| {
+ anyhow::Error::new(std::io::Error::new(
+ std::io::ErrorKind::Other,
+ e.description(),
+ ))
+ })
+ .unwrap();
+
+ let logger = Arc::new(Mutex::new(logger));
+
+ Self {
+ syslog_logger: logger,
+ }
+ }
+}
+
+impl<S: Subscriber> Layer<S> for SyslogAndTasklogLayer {
+ fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
+ let mut buf = String::new();
+ event.record(&mut EventVisitor::new(&mut buf));
+ let level = event.metadata().level();
+
+ let result = LOGGER.try_with(|logger| {
+ log_to_file(&mut logger.borrow_mut(), level, &buf);
+ });
+ if result.is_err() || *level == Level::ERROR {
+ log_to_syslog(&mut self.syslog_logger.lock().unwrap(), level, &buf);
+ }
+ }
+}
+
+fn log_to_syslog(
+ logger: &mut syslog::Logger<LoggerBackend, Formatter3164>,
+ level: &Level,
+ buf: &String,
+) {
+ let _ = match *level {
+ Level::ERROR => logger.err(buf),
+ Level::WARN => logger.warning(buf),
+ Level::INFO => logger.info(buf),
+ Level::DEBUG => logger.debug(buf),
+ Level::TRACE => logger.debug(buf),
+ };
+}
+fn log_to_file(logger: &mut FileLogger, level: &Level, buf: &String) {
+ match *level {
+ Level::ERROR | Level::WARN => {
+ WARN_COUNTER.with(|counter| {
+ counter.replace_with(|c| c.to_owned() + 1);
+ });
+ logger.log(buf);
+ }
+ Level::INFO => logger.log(buf),
+ Level::DEBUG => logger.log(format!("DEBUG: {}", buf)),
+ Level::TRACE => logger.log(format!("TRACE: {}", buf)),
+ };
+}
+
+struct EventVisitor<'a> {
+ buf: &'a mut String,
+}
+
+impl<'a> EventVisitor<'a> {
+ fn new(buf: &'a mut String) -> Self {
+ Self { buf }
+ }
+}
+
+impl Visit for EventVisitor<'_> {
+ fn record_debug(&mut self, field: &Field, value: &dyn std::fmt::Debug) {
+ if field.name() == "message" {
+ let _ = write!(self.buf, "{value:?}");
+ }
+ }
+}
diff --git a/proxmox-rest-server/Cargo.toml b/proxmox-rest-server/Cargo.toml
index 8daf1d3..a444941 100644
--- a/proxmox-rest-server/Cargo.toml
+++ b/proxmox-rest-server/Cargo.toml
@@ -34,6 +34,7 @@ tokio-openssl.workspace = true
tokio-stream.workspace = true
tower-service.workspace = true
url.workspace = true
+tracing.workspace = true
proxmox-async.workspace = true
proxmox-compression.workspace = true
@@ -44,6 +45,7 @@ proxmox-router.workspace = true
proxmox-schema = { workspace = true, features = [ "api-macro", "upid-api-impl" ] }
proxmox-sys = { workspace = true, features = [ "logrotate", "timer" ] }
proxmox-time.workspace = true
+proxmox-log.workspace = true
[features]
default = []
diff --git a/proxmox-rest-server/src/api_config.rs b/proxmox-rest-server/src/api_config.rs
index ad9a811..cf14c8f 100644
--- a/proxmox-rest-server/src/api_config.rs
+++ b/proxmox-rest-server/src/api_config.rs
@@ -9,11 +9,12 @@ use http::{HeaderMap, Method};
use hyper::http::request::Parts;
use hyper::{Body, Response};
+use proxmox_log::{FileLogOptions, FileLogger};
use proxmox_router::{Router, RpcEnvironmentType, UserInformation};
use proxmox_sys::fs::{create_path, CreateOptions};
use crate::rest::Handler;
-use crate::{CommandSocket, FileLogOptions, FileLogger, RestEnvironment};
+use crate::{CommandSocket, RestEnvironment};
/// REST server configuration
pub struct ApiConfig {
diff --git a/proxmox-rest-server/src/lib.rs b/proxmox-rest-server/src/lib.rs
index bc5be01..9c16065 100644
--- a/proxmox-rest-server/src/lib.rs
+++ b/proxmox-rest-server/src/lib.rs
@@ -41,9 +41,6 @@ pub use state::*;
mod command_socket;
pub use command_socket::*;
-mod file_logger;
-pub use file_logger::{FileLogOptions, FileLogger};
-
mod api_config;
pub use api_config::{ApiConfig, AuthError, AuthHandler, IndexHandler};
diff --git a/proxmox-rest-server/src/rest.rs b/proxmox-rest-server/src/rest.rs
index 2ccd4d5..9ad01a5 100644
--- a/proxmox-rest-server/src/rest.rs
+++ b/proxmox-rest-server/src/rest.rs
@@ -31,10 +31,10 @@ use proxmox_schema::{ObjectSchemaType, ParameterSchema};
use proxmox_async::stream::AsyncReaderStream;
use proxmox_compression::{DeflateEncoder, Level};
+use proxmox_log::FileLogger;
use crate::{
- formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, FileLogger,
- RestEnvironment,
+ formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, RestEnvironment,
};
extern "C" {
diff --git a/proxmox-rest-server/src/worker_task.rs b/proxmox-rest-server/src/worker_task.rs
index 54b6bc2..d9665bf 100644
--- a/proxmox-rest-server/src/worker_task.rs
+++ b/proxmox-rest-server/src/worker_task.rs
@@ -1,3 +1,4 @@
+use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use std::fs::File;
use std::io::{BufRead, BufReader, Read, Write};
@@ -12,21 +13,23 @@ use futures::*;
use lazy_static::lazy_static;
use nix::fcntl::OFlag;
use once_cell::sync::OnceCell;
+use proxmox_log::{LOGGER, WARN_COUNTER};
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
use tokio::signal::unix::SignalKind;
use tokio::sync::oneshot;
+use tracing::{info, warn};
use proxmox_lang::try_block;
+use proxmox_log::{FileLogOptions, FileLogger};
use proxmox_schema::upid::UPID;
use proxmox_sys::fs::{atomic_open_or_create_file, create_path, replace_file, CreateOptions};
use proxmox_sys::linux::procfs;
-use proxmox_sys::task_warn;
use proxmox_sys::logrotate::{LogRotate, LogRotateFiles};
use proxmox_sys::WorkerTaskContext;
-use crate::{CommandSocket, FileLogOptions, FileLogger};
+use crate::CommandSocket;
struct TaskListLockGuard(File);
@@ -274,7 +277,7 @@ pub fn rotate_task_log_archive(
/// removes all task logs that are older than the oldest task entry in the
/// task archive
-pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Result<(), Error> {
+pub fn cleanup_old_tasks(compressed: bool) -> Result<(), Error> {
let setup = worker_task_setup()?;
let _lock = setup.lock_task_list_files(true)?;
@@ -312,7 +315,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
Ok(files) => files,
Err(err) if err.kind() == std::io::ErrorKind::NotFound => continue,
Err(err) => {
- task_warn!(worker, "could not check task logs in '{:02X}': {}", i, err);
+ warn!(
+ tasklog = true,
+ "could not check task logs in '{:02X}': {}", i, err
+ );
continue;
}
};
@@ -320,11 +326,9 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
let file = match file {
Ok(file) => file,
Err(err) => {
- task_warn!(
- worker,
- "could not check some task log in '{:02X}': {}",
- i,
- err
+ warn!(
+ tasklog = true,
+ "could not check some task log in '{:02X}': {}", i, err
);
continue;
}
@@ -334,7 +338,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
let modified = match get_modified(file) {
Ok(modified) => modified,
Err(err) => {
- task_warn!(worker, "error getting mtime for '{:?}': {}", path, err);
+ warn!(
+ tasklog = true,
+ "error getting mtime for '{:?}': {}", path, err
+ );
continue;
}
};
@@ -344,7 +351,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
Ok(()) => {}
Err(err) if err.kind() == std::io::ErrorKind::NotFound => {}
Err(err) => {
- task_warn!(worker, "could not remove file '{:?}': {}", path, err)
+ warn!(
+ tasklog = true,
+ "could not remove file '{:?}': {}", path, err
+ )
}
}
}
@@ -796,9 +806,7 @@ impl std::fmt::Display for WorkerTask {
}
struct WorkerTaskData {
- logger: FileLogger,
progress: f64, // 0..1
- warn_count: u64,
pub abort_listeners: Vec<oneshot::Sender<()>>,
}
@@ -808,7 +816,7 @@ impl WorkerTask {
worker_id: Option<String>,
auth_id: String,
to_stdout: bool,
- ) -> Result<Arc<Self>, Error> {
+ ) -> Result<(Arc<Self>, FileLogger), Error> {
let setup = worker_task_setup()?;
let upid = UPID::new(worker_type, worker_id, auth_id)?;
@@ -842,9 +850,7 @@ impl WorkerTask {
upid: upid.clone(),
abort_requested: AtomicBool::new(false),
data: Mutex::new(WorkerTaskData {
- logger,
progress: 0.0,
- warn_count: 0,
abort_listeners: vec![],
}),
});
@@ -858,7 +864,7 @@ impl WorkerTask {
setup.update_active_workers(Some(&upid))?;
- Ok(worker)
+ Ok((worker, logger))
}
/// Spawn a new tokio task/future.
@@ -873,13 +879,20 @@ impl WorkerTask {
F: Send + 'static + FnOnce(Arc<WorkerTask>) -> T,
T: Send + 'static + Future<Output = Result<(), Error>>,
{
- let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
+ let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
let upid_str = worker.upid.to_string();
let f = f(worker.clone());
- tokio::spawn(async move {
- let result = f.await;
- worker.log_result(&result);
- });
+
+ let logger = RefCell::new(logger);
+ let counter = RefCell::new(0);
+ tokio::spawn(LOGGER.scope(logger, async move {
+ WARN_COUNTER
+ .scope(counter, async move {
+ let result = f.await;
+ worker.log_result(&result);
+ })
+ .await;
+ }));
Ok(upid_str)
}
@@ -895,22 +908,27 @@ impl WorkerTask {
where
F: Send + UnwindSafe + 'static + FnOnce(Arc<WorkerTask>) -> Result<(), Error>,
{
- let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
+ let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
let upid_str = worker.upid.to_string();
let _child = std::thread::Builder::new()
.name(upid_str.clone())
.spawn(move || {
- let worker1 = worker.clone();
- let result = match std::panic::catch_unwind(move || f(worker1)) {
- Ok(r) => r,
- Err(panic) => match panic.downcast::<&str>() {
- Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
- Err(_) => Err(format_err!("worker panicked: unknown type.")),
- },
- };
+ LOGGER.sync_scope(RefCell::new(logger), || {
+ WARN_COUNTER.sync_scope(RefCell::new(0), || {
+ let worker1 = worker.clone();
+
+ let result = match std::panic::catch_unwind(move || f(worker1)) {
+ Ok(r) => r,
+ Err(panic) => match panic.downcast::<&str>() {
+ Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
+ Err(_) => Err(format_err!("worker panicked: unknown type.")),
+ },
+ };
- worker.log_result(&result);
+ worker.log_result(&result);
+ });
+ });
});
Ok(upid_str)
@@ -918,7 +936,11 @@ impl WorkerTask {
/// create state from self and a result
pub fn create_state(&self, result: &Result<(), Error>) -> TaskState {
- let warn_count = self.data.lock().unwrap().warn_count;
+ let mut warn_count: u64 = 0;
+
+ let _ = WARN_COUNTER.try_with(|counter| {
+ warn_count = counter.borrow_mut().to_owned();
+ });
let endtime = proxmox_time::epoch_i64();
@@ -949,15 +971,7 @@ impl WorkerTask {
/// Log a message.
pub fn log_message<S: AsRef<str>>(&self, msg: S) {
- let mut data = self.data.lock().unwrap();
- data.logger.log(msg);
- }
-
- /// Log a message as warning.
- pub fn log_warning<S: AsRef<str>>(&self, msg: S) {
- let mut data = self.data.lock().unwrap();
- data.logger.log(format!("WARN: {}", msg.as_ref()));
- data.warn_count += 1;
+ info!(tasklog = true, "{}", msg.as_ref());
}
/// Set progress indicator
@@ -1020,16 +1034,6 @@ impl WorkerTaskContext for WorkerTask {
fn fail_on_shutdown(&self) -> Result<(), Error> {
crate::fail_on_shutdown()
}
-
- fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
- match level {
- log::Level::Error => self.log_warning(message.to_string()),
- log::Level::Warn => self.log_warning(message.to_string()),
- log::Level::Info => self.log_message(message.to_string()),
- log::Level::Debug => self.log_message(format!("DEBUG: {}", message)),
- log::Level::Trace => self.log_message(format!("TRACE: {}", message)),
- }
- }
}
/// Wait for a locally spanned worker task
diff --git a/proxmox-sys/src/worker_task_context.rs b/proxmox-sys/src/worker_task_context.rs
index 2c86857..743ae53 100644
--- a/proxmox-sys/src/worker_task_context.rs
+++ b/proxmox-sys/src/worker_task_context.rs
@@ -26,9 +26,6 @@ pub trait WorkerTaskContext: Send + Sync {
}
Ok(())
}
-
- /// Create a log message for this task.
- fn log(&self, level: log::Level, message: &std::fmt::Arguments);
}
/// Convenience implementation:
@@ -48,48 +45,4 @@ impl<T: WorkerTaskContext + ?Sized> WorkerTaskContext for std::sync::Arc<T> {
fn fail_on_shutdown(&self) -> Result<(), Error> {
<T as WorkerTaskContext>::fail_on_shutdown(self)
}
-
- fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
- <T as WorkerTaskContext>::log(self, level, message)
- }
-}
-
-/// Log an error to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_error {
- ($task:expr, $($fmt:tt)+) => {{
- $crate::WorkerTaskContext::log(&*$task, log::Level::Error, &format_args!($($fmt)+))
- }};
-}
-
-/// Log a warning to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_warn {
- ($task:expr, $($fmt:tt)+) => {{
- $crate::WorkerTaskContext::log(&*$task, log::Level::Warn, &format_args!($($fmt)+))
- }};
-}
-
-/// Log a message to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_log {
- ($task:expr, $($fmt:tt)+) => {{
- $crate::WorkerTaskContext::log(&*$task, log::Level::Info, &format_args!($($fmt)+))
- }};
-}
-
-/// Log a debug message to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_debug {
- ($task:expr, $($fmt:tt)+) => {{
- $crate::WorkerTaskContext::log(&*$task, log::Level::Debug, &format_args!($($fmt)+))
- }};
-}
-
-/// Log a trace message to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_trace {
- ($task:expr, $($fmt:tt)+) => {{
- $crate::WorkerTaskContext::log(&*$task, log::Level::Trace, &format_args!($($fmt)+))
- }};
}
--
2.39.2
* [pbs-devel] [PATCH proxmox-backup 2/2] log: removed task_log! macro and moved to tracing
2023-11-03 11:28 [pbs-devel] [PATCH proxmox-backup 0/2] proxmox-log introduction Gabriel Goller
2023-11-03 11:28 ` [pbs-devel] [PATCH proxmox 1/2] proxmox-log: added tracing infra Gabriel Goller
@ 2023-11-03 11:28 ` Gabriel Goller
2024-01-22 13:30 ` Lukas Wagner
2023-12-15 13:56 ` [pbs-devel] [PATCH proxmox-backup 0/2] proxmox-log introduction Gabriel Goller
2 siblings, 1 reply; 8+ messages in thread
From: Gabriel Goller @ 2023-11-03 11:28 UTC (permalink / raw)
To: pbs-devel
Imported `proxmox-log` and replaced all `task_log!`
(and `task_warn!`) invocations with tracing calls (`info!`,
`warn!`, etc.). Removed the `worker` parameter where it is no
longer necessary.
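The conversion is mechanical: the old macros took the worker handle as the first argument, the tracing macros do not, since the layer finds the task log via TLS. A compilable illustration of the call-shape change, using a hypothetical local `warn_msg!` macro in place of tracing's `warn!` so the snippet builds without the tracing crate:

```rust
// Hypothetical stand-in for tracing's warn! that just renders the
// message, so the before/after call shapes can be compared directly.
macro_rules! warn_msg {
    ($($arg:tt)*) => { format!($($arg)*) };
}

fn main() {
    let path = "/var/log/task";
    let err = "permission denied";

    // Before: task_warn!(worker, "could not remove file '{:?}': {}", path, err);
    // After: same format string and arguments, no worker handle needed.
    let line = warn_msg!("could not remove file '{:?}': {}", path, err);
    println!("{}", line);
}
```

Call sites that only held a `worker` reference for logging can therefore drop it entirely, which is where most of the deleted lines in the diffstat below come from.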
Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
---
Cargo.toml | 61 ++++----
pbs-api-types/src/metrics.rs | 1 -
pbs-api-types/src/node.rs | 3 +-
pbs-datastore/Cargo.toml | 1 +
pbs-datastore/src/chunk_store.rs | 30 +---
pbs-datastore/src/datastore.rs | 70 ++++-----
src/api2/admin/datastore.rs | 24 ++--
src/api2/admin/metrics.rs | 2 +-
src/api2/config/acme.rs | 20 +--
src/api2/config/datastore.rs | 12 +-
src/api2/node/apt.rs | 13 +-
src/api2/node/certificates.rs | 64 ++++-----
src/api2/node/disks/directory.rs | 13 +-
src/api2/node/disks/mod.rs | 6 +-
src/api2/node/disks/zfs.rs | 30 ++--
src/api2/node/mod.rs | 11 +-
src/api2/pull.rs | 28 ++--
src/api2/tape/backup.rs | 77 ++++------
src/api2/tape/drive.rs | 135 +++++++----------
src/api2/tape/restore.rs | 240 ++++++++++---------------------
src/backup/verify.rs | 103 ++++---------
src/bin/proxmox-backup-api.rs | 10 +-
src/bin/proxmox-backup-proxy.rs | 42 ++----
src/server/gc_job.rs | 6 +-
src/server/prune_job.rs | 26 ++--
src/server/pull.rs | 166 ++++++++-------------
src/server/realm_sync_job.rs | 40 ++----
src/server/verify_job.rs | 10 +-
src/tape/drive/mod.rs | 34 ++---
src/tape/pool_writer/mod.rs | 85 +++++------
tests/worker-task-abort.rs | 9 +-
31 files changed, 502 insertions(+), 870 deletions(-)
diff --git a/Cargo.toml b/Cargo.toml
index db18cd45..46b0babd 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -64,6 +64,7 @@ proxmox-http = { version = "0.9.0", features = [ "client", "http-helpers", "webs
proxmox-human-byte = "0.1"
proxmox-io = "1.0.1" # tools and client use "tokio" feature
proxmox-lang = "1.1"
+proxmox-log = "0.1"
proxmox-ldap = "0.2.1"
proxmox-metrics = "0.3"
proxmox-openid = "0.10.0"
@@ -144,6 +145,8 @@ tokio = "1.6"
tokio-openssl = "0.6.1"
tokio-stream = "0.1.0"
tokio-util = { version = "0.7", features = [ "io" ] }
+tracing = "0.1"
+tracing-subscriber = "0.3.16"
tower-service = "0.3.0"
udev = "0.4"
url = "2.1"
@@ -189,6 +192,8 @@ tokio = { workspace = true, features = [ "fs", "io-util", "io-std", "macros", "n
tokio-openssl.workspace = true
tokio-stream.workspace = true
tokio-util = { workspace = true, features = [ "codec" ] }
+tracing.workspace = true
+tracing-subscriber.workspace = true
tower-service.workspace = true
udev.workspace = true
url.workspace = true
@@ -207,6 +212,7 @@ proxmox-http = { workspace = true, features = [ "client-trait", "proxmox-async",
proxmox-human-byte.workspace = true
proxmox-io.workspace = true
proxmox-lang.workspace = true
+proxmox-log.workspace = true
proxmox-ldap.workspace = true
proxmox-metrics.workspace = true
proxmox-openid.workspace = true
@@ -243,35 +249,36 @@ proxmox-rrd.workspace = true
# NOTE: You must run `cargo update` after changing this for it to take effect!
[patch.crates-io]
-#proxmox-apt = { path = "../proxmox/proxmox-apt" }
-#proxmox-async = { path = "../proxmox/proxmox-async" }
-#proxmox-auth-api = { path = "../proxmox/proxmox-auth-api" }
-#proxmox-borrow = { path = "../proxmox/proxmox-borrow" }
-#proxmox-compression = { path = "../proxmox/proxmox-compression" }
-#proxmox-fuse = { path = "../proxmox-fuse" }
-#proxmox-http = { path = "../proxmox/proxmox-http" }
-#proxmox-human-byte = { path = "../proxmox/proxmox-human-byte" }
-#proxmox-io = { path = "../proxmox/proxmox-io" }
-#proxmox-lang = { path = "../proxmox/proxmox-lang" }
-#proxmox-ldap = { path = "../proxmox/proxmox-ldap" }
-#proxmox-metrics = { path = "../proxmox/proxmox-metrics" }
-#proxmox-openid = { path = "../proxmox/proxmox-openid" }
-#proxmox-rest-server = { path = "../proxmox/proxmox-rest-server" }
-#proxmox-router = { path = "../proxmox/proxmox-router" }
-#proxmox-schema = { path = "../proxmox/proxmox-schema" }
-#proxmox-section-config = { path = "../proxmox/proxmox-section-config" }
-#proxmox-serde = { path = "../proxmox/proxmox-serde" }
-#proxmox-shared-memory = { path = "../proxmox/proxmox-shared-memory" }
-#proxmox-sortable-macro = { path = "../proxmox/proxmox-sortable-macro" }
-#proxmox-subscription = { path = "../proxmox/proxmox-subscription" }
-#proxmox-sys = { path = "../proxmox/proxmox-sys" }
-#proxmox-tfa = { path = "../proxmox/proxmox-tfa" }
-#proxmox-time = { path = "../proxmox/proxmox-time" }
-#proxmox-uuid = { path = "../proxmox/proxmox-uuid" }
+proxmox-apt = { path = "../proxmox/proxmox-apt" }
+proxmox-async = { path = "../proxmox/proxmox-async" }
+proxmox-auth-api = { path = "../proxmox/proxmox-auth-api" }
+proxmox-borrow = { path = "../proxmox/proxmox-borrow" }
+proxmox-compression = { path = "../proxmox/proxmox-compression" }
+proxmox-fuse = { path = "../proxmox-fuse" }
+proxmox-http = { path = "../proxmox/proxmox-http" }
+proxmox-human-byte = { path = "../proxmox/proxmox-human-byte" }
+proxmox-io = { path = "../proxmox/proxmox-io" }
+proxmox-lang = { path = "../proxmox/proxmox-lang" }
+proxmox-log = { path = "../proxmox/proxmox-log" }
+proxmox-ldap = { path = "../proxmox/proxmox-ldap" }
+proxmox-metrics = { path = "../proxmox/proxmox-metrics" }
+proxmox-openid = { path = "../proxmox/proxmox-openid" }
+proxmox-rest-server = { path = "../proxmox/proxmox-rest-server" }
+proxmox-router = { path = "../proxmox/proxmox-router" }
+proxmox-schema = { path = "../proxmox/proxmox-schema" }
+proxmox-section-config = { path = "../proxmox/proxmox-section-config" }
+proxmox-serde = { path = "../proxmox/proxmox-serde" }
+proxmox-shared-memory = { path = "../proxmox/proxmox-shared-memory" }
+proxmox-sortable-macro = { path = "../proxmox/proxmox-sortable-macro" }
+proxmox-subscription = { path = "../proxmox/proxmox-subscription" }
+proxmox-sys = { path = "../proxmox/proxmox-sys" }
+proxmox-tfa = { path = "../proxmox/proxmox-tfa" }
+proxmox-time = { path = "../proxmox/proxmox-time" }
+proxmox-uuid = { path = "../proxmox/proxmox-uuid" }
-#proxmox-acme-rs = { path = "../proxmox-acme-rs" }
+proxmox-acme-rs = { path = "../proxmox-acme-rs" }
#pathpatterns = {path = "../pathpatterns" }
-#pxar = { path = "../pxar" }
+pxar = { path = "../pxar" }
[features]
default = []
diff --git a/pbs-api-types/src/metrics.rs b/pbs-api-types/src/metrics.rs
index be5ff27a..99df88a7 100644
--- a/pbs-api-types/src/metrics.rs
+++ b/pbs-api-types/src/metrics.rs
@@ -147,7 +147,6 @@ pub struct InfluxDbHttp {
pub comment: Option<String>,
}
-
#[api]
#[derive(Copy, Clone, Deserialize, Serialize, PartialEq, Eq, PartialOrd, Ord)]
/// Type of the metric server
diff --git a/pbs-api-types/src/node.rs b/pbs-api-types/src/node.rs
index 704215bb..9033d586 100644
--- a/pbs-api-types/src/node.rs
+++ b/pbs-api-types/src/node.rs
@@ -1,9 +1,8 @@
-use serde::{Deserialize, Serialize};
use proxmox_schema::*;
+use serde::{Deserialize, Serialize};
use crate::StorageStatus;
-
#[api]
#[derive(Serialize, Deserialize, Default)]
#[serde(rename_all = "kebab-case")]
diff --git a/pbs-datastore/Cargo.toml b/pbs-datastore/Cargo.toml
index b793dc5b..fcb11ac4 100644
--- a/pbs-datastore/Cargo.toml
+++ b/pbs-datastore/Cargo.toml
@@ -20,6 +20,7 @@ openssl.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio = { workspace = true, features = [] }
+tracing.workspace = true
walkdir.workspace = true
zstd.workspace = true
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index fb282749..43f2c4c7 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -3,6 +3,7 @@ use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error};
+use tracing::info;
use pbs_api_types::{DatastoreFSyncLevel, GarbageCollectionStatus};
use proxmox_io::ReadExt;
@@ -10,7 +11,6 @@ use proxmox_sys::fs::{create_dir, create_path, file_type_from_file_stat, CreateO
use proxmox_sys::process_locker::{
ProcessLockExclusiveGuard, ProcessLockSharedGuard, ProcessLocker,
};
-use proxmox_sys::task_log;
use proxmox_sys::WorkerTaskContext;
use crate::file_formats::{
@@ -92,7 +92,6 @@ impl ChunkStore {
path: P,
uid: nix::unistd::Uid,
gid: nix::unistd::Gid,
- worker: Option<&dyn WorkerTaskContext>,
sync_level: DatastoreFSyncLevel,
) -> Result<Self, Error>
where
@@ -143,9 +142,7 @@ impl ChunkStore {
}
let percentage = (i * 100) / (64 * 1024);
if percentage != last_percentage {
- if let Some(worker) = worker {
- task_log!(worker, "Chunkstore create: {}%", percentage)
- }
+ info!("Chunkstore create: {}%", percentage);
last_percentage = percentage;
}
}
@@ -374,7 +371,7 @@ impl ChunkStore {
for (entry, percentage, bad) in self.get_chunk_iterator()? {
if last_percentage != percentage {
last_percentage = percentage;
- task_log!(worker, "processed {}% ({} chunks)", percentage, chunk_count,);
+ info!("processed {}% ({} chunks)", percentage, chunk_count,);
}
worker.check_abort()?;
@@ -578,15 +575,8 @@ fn test_chunk_store1() {
let user = nix::unistd::User::from_uid(nix::unistd::Uid::current())
.unwrap()
.unwrap();
- let chunk_store = ChunkStore::create(
- "test",
- &path,
- user.uid,
- user.gid,
- None,
- DatastoreFSyncLevel::None,
- )
- .unwrap();
+ let chunk_store =
+ ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None).unwrap();
let (chunk, digest) = crate::data_blob::DataChunkBuilder::new(&[0u8, 1u8])
.build()
@@ -598,14 +588,8 @@ fn test_chunk_store1() {
let (exists, _) = chunk_store.insert_chunk(&chunk, &digest).unwrap();
assert!(exists);
- let chunk_store = ChunkStore::create(
- "test",
- &path,
- user.uid,
- user.gid,
- None,
- DatastoreFSyncLevel::None,
- );
+ let chunk_store =
+ ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None);
assert!(chunk_store.is_err());
if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
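[Note: the chunk-store changes above drop the explicit `worker` parameter because the logger now lives in task-local storage, as described in the cover letter. A std-only sketch of that pattern, using thread-local storage as a stand-in — `TASK_LOG`, `log_info`, and `with_task_logger` are illustrative names, not the real `FileLogger` API:]

```rust
use std::cell::RefCell;

// Hypothetical stand-in for the per-task FileLogger: a buffer held in
// thread-local storage. While a worker task is running, log lines go to
// its buffer; outside any task they fall through to "syslog" (stderr here).
thread_local! {
    static TASK_LOG: RefCell<Option<Vec<String>>> = RefCell::new(None);
}

fn log_info(msg: &str) {
    TASK_LOG.with(|l| match &mut *l.borrow_mut() {
        Some(buf) => buf.push(msg.to_string()), // inside a worker task
        None => eprintln!("syslog: {msg}"),     // no task context
    });
}

// Run `f` with a fresh task logger installed, returning the captured lines.
fn with_task_logger<F: FnOnce()>(f: F) -> Vec<String> {
    TASK_LOG.with(|l| *l.borrow_mut() = Some(Vec::new()));
    f();
    TASK_LOG.with(|l| l.borrow_mut().take().unwrap())
}

fn main() {
    let lines = with_task_logger(|| log_info("Chunkstore create: 50%"));
    assert_eq!(lines, ["Chunkstore create: 50%"]);
    log_info("outside any task"); // routed to "syslog" instead
}
```

This is why call sites can shrink from `task_log!(worker, ...)` to a bare `info!(...)`: the destination is resolved from ambient context instead of a threaded-through parameter.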
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 41c9f4f2..b941a55e 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -7,6 +7,7 @@ use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error};
use lazy_static::lazy_static;
use nix::unistd::{unlinkat, UnlinkatFlags};
+use tracing::{info, warn};
use proxmox_human_byte::HumanByte;
use proxmox_schema::ApiType;
@@ -16,7 +17,6 @@ use proxmox_sys::fs::{file_read_optional_string, replace_file, CreateOptions};
use proxmox_sys::fs::{lock_dir_noblock, DirLockGuard};
use proxmox_sys::process_locker::ProcessLockSharedGuard;
use proxmox_sys::WorkerTaskContext;
-use proxmox_sys::{task_log, task_warn};
use pbs_api_types::{
Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig, DatastoreFSyncLevel,
@@ -925,8 +925,7 @@ impl DataStore {
let digest = index.index_digest(pos).unwrap();
if !self.inner.chunk_store.cond_touch_chunk(digest, false)? {
let hex = hex::encode(digest);
- task_warn!(
- worker,
+ warn!(
"warning: unable to access non-existent chunk {hex}, required by {file_name:?}"
);
@@ -992,8 +991,7 @@ impl DataStore {
let percentage = (i + 1) * 100 / image_count;
if percentage > last_percentage {
- task_log!(
- worker,
+ info!(
"marked {}% ({} of {} index files)",
percentage,
i + 1,
@@ -1004,8 +1002,7 @@ impl DataStore {
}
if strange_paths_count > 0 {
- task_log!(
- worker,
+ info!(
"found (and marked) {} index files outside of expected directory scheme",
strange_paths_count,
);
@@ -1045,11 +1042,11 @@ impl DataStore {
..Default::default()
};
- task_log!(worker, "Start GC phase1 (mark used chunks)");
+ info!("Start GC phase1 (mark used chunks)");
self.mark_used_chunks(&mut gc_status, worker)?;
- task_log!(worker, "Start GC phase2 (sweep unused chunks)");
+ info!("Start GC phase2 (sweep unused chunks)");
self.inner.chunk_store.sweep_unused_chunks(
oldest_writer,
phase1_start_time,
@@ -1057,30 +1054,27 @@ impl DataStore {
worker,
)?;
- task_log!(
- worker,
+ info!(
"Removed garbage: {}",
HumanByte::from(gc_status.removed_bytes),
);
- task_log!(worker, "Removed chunks: {}", gc_status.removed_chunks);
+ info!("Removed chunks: {}", gc_status.removed_chunks);
if gc_status.pending_bytes > 0 {
- task_log!(
- worker,
+ info!(
"Pending removals: {} (in {} chunks)",
HumanByte::from(gc_status.pending_bytes),
gc_status.pending_chunks,
);
}
if gc_status.removed_bad > 0 {
- task_log!(worker, "Removed bad chunks: {}", gc_status.removed_bad);
+ info!("Removed bad chunks: {}", gc_status.removed_bad);
}
if gc_status.still_bad > 0 {
- task_log!(worker, "Leftover bad chunks: {}", gc_status.still_bad);
+ info!("Leftover bad chunks: {}", gc_status.still_bad);
}
- task_log!(
- worker,
+ info!(
"Original data usage: {}",
HumanByte::from(gc_status.index_data_bytes),
);
@@ -1088,15 +1082,14 @@ impl DataStore {
if gc_status.index_data_bytes > 0 {
let comp_per =
(gc_status.disk_bytes as f64 * 100.) / gc_status.index_data_bytes as f64;
- task_log!(
- worker,
+ info!(
"On-Disk usage: {} ({:.2}%)",
HumanByte::from(gc_status.disk_bytes),
comp_per,
);
}
- task_log!(worker, "On-Disk chunks: {}", gc_status.disk_chunks);
+ info!("On-Disk chunks: {}", gc_status.disk_chunks);
let deduplication_factor = if gc_status.disk_bytes > 0 {
(gc_status.index_data_bytes as f64) / (gc_status.disk_bytes as f64)
@@ -1104,11 +1097,11 @@ impl DataStore {
1.0
};
- task_log!(worker, "Deduplication factor: {:.2}", deduplication_factor);
+ info!("Deduplication factor: {:.2}", deduplication_factor);
if gc_status.disk_chunks > 0 {
let avg_chunk = gc_status.disk_bytes / (gc_status.disk_chunks as u64);
- task_log!(worker, "Average chunk size: {}", HumanByte::from(avg_chunk));
+ info!("Average chunk size: {}", HumanByte::from(avg_chunk));
}
if let Ok(serialized) = serde_json::to_string(&gc_status) {
@@ -1340,11 +1333,7 @@ impl DataStore {
/// Destroy a datastore. This requires that there are no active operations on the datastore.
///
/// This is a synchronous operation and should be run in a worker-thread.
- pub fn destroy(
- name: &str,
- destroy_data: bool,
- worker: &dyn WorkerTaskContext,
- ) -> Result<(), Error> {
+ pub fn destroy(name: &str, destroy_data: bool) -> Result<(), Error> {
let config_lock = pbs_config::datastore::lock_config()?;
let (mut config, _digest) = pbs_config::datastore::config()?;
@@ -1368,13 +1357,13 @@ impl DataStore {
let remove = |subdir, ok: &mut bool| {
if let Err(err) = std::fs::remove_dir_all(base.join(subdir)) {
if err.kind() != io::ErrorKind::NotFound {
- task_warn!(worker, "failed to remove {subdir:?} subdirectory: {err}");
+ warn!("failed to remove {subdir:?} subdirectory: {err}");
*ok = false;
}
}
};
- task_log!(worker, "Deleting datastore data...");
+ info!("Deleting datastore data...");
remove("ns", &mut ok); // ns first
remove("ct", &mut ok);
remove("vm", &mut ok);
@@ -1383,7 +1372,7 @@ impl DataStore {
if ok {
if let Err(err) = std::fs::remove_file(base.join(".gc-status")) {
if err.kind() != io::ErrorKind::NotFound {
- task_warn!(worker, "failed to remove .gc-status file: {err}");
+ warn!("failed to remove .gc-status file: {err}");
ok = false;
}
}
@@ -1397,7 +1386,7 @@ impl DataStore {
// now the config
if ok {
- task_log!(worker, "Removing datastore from config...");
+ info!("Removing datastore from config...");
let _lock = pbs_config::datastore::lock_config()?;
let _ = config.sections.remove(name);
pbs_config::datastore::save_config(&config)?;
@@ -1408,35 +1397,32 @@ impl DataStore {
if ok {
if let Err(err) = std::fs::remove_file(base.join(".lock")) {
if err.kind() != io::ErrorKind::NotFound {
- task_warn!(worker, "failed to remove .lock file: {err}");
+ warn!("failed to remove .lock file: {err}");
ok = false;
}
}
}
if ok {
- task_log!(worker, "Finished deleting data.");
+ info!("Finished deleting data.");
match std::fs::remove_dir(base) {
- Ok(()) => task_log!(worker, "Removed empty datastore directory."),
+ Ok(()) => info!("Removed empty datastore directory."),
Err(err) if err.kind() == io::ErrorKind::NotFound => {
// weird, but ok
}
Err(err) if err.is_errno(nix::errno::Errno::EBUSY) => {
- task_warn!(
- worker,
- "Cannot delete datastore directory (is it a mount point?)."
- )
+ warn!("Cannot delete datastore directory (is it a mount point?).")
}
Err(err) if err.is_errno(nix::errno::Errno::ENOTEMPTY) => {
- task_warn!(worker, "Datastore directory not empty, not deleting.")
+ warn!("Datastore directory not empty, not deleting.")
}
Err(err) => {
- task_warn!(worker, "Failed to remove datastore directory: {err}");
+ warn!("Failed to remove datastore directory: {err}");
}
}
} else {
- task_log!(worker, "There were errors deleting data.");
+ info!("There were errors deleting data.");
}
}
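[Note: the GC summary hunks above compute an on-disk compression percentage, a deduplication factor, and an average chunk size. The arithmetic, restated with made-up numbers to make the relationships explicit:]

```rust
// Sketch of the GC summary arithmetic from the patch, with made-up sizes.
fn main() {
    let index_data_bytes: u64 = 4 * 1024 * 1024 * 1024; // logical data referenced by indexes
    let disk_bytes: u64 = 1024 * 1024 * 1024;           // deduplicated on-disk usage
    let disk_chunks: u64 = 512;

    // "On-Disk usage: ... ({:.2}%)"
    let comp_per = (disk_bytes as f64 * 100.0) / index_data_bytes as f64;
    // "Deduplication factor: {:.2}"
    let dedup_factor = index_data_bytes as f64 / disk_bytes as f64;
    // "Average chunk size: ..."
    let avg_chunk = disk_bytes / disk_chunks;

    assert_eq!(comp_per, 25.0);
    assert_eq!(dedup_factor, 4.0);
    assert_eq!(avg_chunk, 2 * 1024 * 1024);
}
```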
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index a95031e7..4de3453d 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -13,6 +13,7 @@ use hyper::{header, Body, Response, StatusCode};
use serde::Deserialize;
use serde_json::{json, Value};
use tokio_stream::wrappers::ReceiverStream;
+use tracing::{info, warn};
use proxmox_async::blocking::WrappedReaderStream;
use proxmox_async::{io::AsyncChannelWriter, stream::AsyncReaderStream};
@@ -26,7 +27,6 @@ use proxmox_sortable_macro::sortable;
use proxmox_sys::fs::{
file_read_firstline, file_read_optional_string, replace_file, CreateOptions,
};
-use proxmox_sys::{task_log, task_warn};
use pxar::accessor::aio::Accessor;
use pxar::EntryKind;
@@ -907,9 +907,9 @@ pub fn verify(
)?
};
if !failed_dirs.is_empty() {
- task_log!(worker, "Failed to verify the following snapshots/groups:");
+ info!("Failed to verify the following snapshots/groups:");
for dir in failed_dirs {
- task_log!(worker, "\t{}", dir);
+ info!("\t{}", dir);
}
bail!("verification failed - please check the log for details");
}
@@ -1009,10 +1009,10 @@ pub fn prune(
}
// We use a WorkerTask just to have a task log, but run synchronously
- let worker = WorkerTask::new("prune", Some(worker_id), auth_id.to_string(), true)?;
+ let (worker, _) = WorkerTask::new("prune", Some(worker_id), auth_id.to_string(), true)?;
if keep_all {
- task_log!(worker, "No prune selection - keeping all files.");
+ info!("No prune selection - keeping all files.");
} else {
let mut opts = Vec::new();
if !ns.is_root() {
@@ -1020,9 +1020,8 @@ pub fn prune(
}
crate::server::cli_keep_options(&mut opts, &keep_options);
- task_log!(worker, "retention options: {}", opts.join(" "));
- task_log!(
- worker,
+ info!("retention options: {}", opts.join(" "));
+ info!(
"Starting prune on {} group \"{}\"",
print_store_and_ns(&store, &ns),
group.group(),
@@ -1038,7 +1037,7 @@ pub fn prune(
let msg = format!("{}/{}/{} {}", group.ty, group.id, timestamp, mark,);
- task_log!(worker, "{}", msg);
+ info!("{}", msg);
prune_result.push(json!({
"backup-type": group.ty,
@@ -1050,8 +1049,7 @@ pub fn prune(
if !(dry_run || keep) {
if let Err(err) = info.backup_dir.destroy(false) {
- task_warn!(
- worker,
+ warn!(
"failed to remove dir {:?}: {}",
info.backup_dir.relative_path(),
err,
@@ -1121,9 +1119,7 @@ pub fn prune_datastore(
Some(worker_id),
auth_id.to_string(),
to_stdout,
- move |worker| {
- crate::server::prune_datastore(worker, auth_id, prune_options, datastore, dry_run)
- },
+ move |_worker| crate::server::prune_datastore(auth_id, prune_options, datastore, dry_run),
)?;
Ok(upid_str)
diff --git a/src/api2/admin/metrics.rs b/src/api2/admin/metrics.rs
index d72bd98b..c299485f 100644
--- a/src/api2/admin/metrics.rs
+++ b/src/api2/admin/metrics.rs
@@ -4,7 +4,7 @@ use serde_json::Value;
use proxmox_router::{Permission, Router, RpcEnvironment};
use proxmox_schema::api;
-use pbs_api_types::{PRIV_SYS_AUDIT, MetricServerInfo};
+use pbs_api_types::{MetricServerInfo, PRIV_SYS_AUDIT};
use pbs_config::metrics;
#[api(
diff --git a/src/api2/config/acme.rs b/src/api2/config/acme.rs
index 1954318b..a9aefcd7 100644
--- a/src/api2/config/acme.rs
+++ b/src/api2/config/acme.rs
@@ -9,12 +9,12 @@ use hex::FromHex;
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
+use tracing::{info, warn};
use proxmox_router::{
http_bail, list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap,
};
use proxmox_schema::{api, param_bail};
-use proxmox_sys::{task_log, task_warn};
use proxmox_acme_rs::account::AccountData as AcmeAccountData;
use proxmox_acme_rs::Account;
@@ -219,19 +219,15 @@ fn register_account(
Some(name.to_string()),
auth_id.to_string(),
true,
- move |worker| async move {
+ move |_worker| async move {
let mut client = AcmeClient::new(directory);
- task_log!(worker, "Registering ACME account '{}'...", &name);
+ info!("Registering ACME account '{}'...", &name);
let account =
do_register_account(&mut client, &name, tos_url.is_some(), contact, None).await?;
- task_log!(
- worker,
- "Registration successful, account URL: {}",
- account.location
- );
+ info!("Registration successful, account URL: {}", account.location);
Ok(())
},
@@ -325,7 +321,7 @@ pub fn deactivate_account(
Some(name.to_string()),
auth_id.to_string(),
true,
- move |worker| async move {
+ move |_worker| async move {
match AcmeClient::load(&name)
.await?
.update_account(&json!({"status": "deactivated"}))
@@ -334,11 +330,9 @@ pub fn deactivate_account(
Ok(_account) => (),
Err(err) if !force => return Err(err),
Err(err) => {
- task_warn!(
- worker,
+ warn!(
"error deactivating account {}, proceeding anyway - {}",
- name,
- err,
+ name, err,
);
}
}
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index 5e013c39..222ba7bd 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -4,11 +4,11 @@ use ::serde::{Deserialize, Serialize};
use anyhow::Error;
use hex::FromHex;
use serde_json::Value;
+use tracing::warn;
use proxmox_router::{http_bail, Permission, Router, RpcEnvironment, RpcEnvironmentType};
use proxmox_schema::{api, param_bail, ApiType};
use proxmox_section_config::SectionConfigData;
-use proxmox_sys::{task_warn, WorkerTaskContext};
use pbs_api_types::{
Authid, DataStoreConfig, DataStoreConfigUpdater, DatastoreNotify, DatastoreTuning,
@@ -69,7 +69,6 @@ pub(crate) fn do_create_datastore(
_lock: BackupLockGuard,
mut config: SectionConfigData,
datastore: DataStoreConfig,
- worker: Option<&dyn WorkerTaskContext>,
) -> Result<(), Error> {
let path: PathBuf = datastore.path.clone().into();
@@ -83,7 +82,6 @@ pub(crate) fn do_create_datastore(
path,
backup_user.uid,
backup_user.gid,
- worker,
tuning.sync_level.unwrap_or_default(),
)?;
@@ -132,7 +130,7 @@ pub fn create_datastore(
Some(config.name.to_string()),
auth_id.to_string(),
to_stdout,
- move |worker| do_create_datastore(lock, section_config, config, Some(&worker)),
+ move |_worker| do_create_datastore(lock, section_config, config),
)
}
@@ -460,8 +458,8 @@ pub async fn delete_datastore(
Some(name.clone()),
auth_id.to_string(),
to_stdout,
- move |worker| {
- pbs_datastore::DataStore::destroy(&name, destroy_data, &worker)?;
+ move |_worker| {
+ pbs_datastore::DataStore::destroy(&name, destroy_data)?;
// ignore errors
let _ = jobstate::remove_state_file("prune", &name);
@@ -470,7 +468,7 @@ pub async fn delete_datastore(
if let Err(err) =
proxmox_async::runtime::block_on(crate::server::notify_datastore_removed())
{
- task_warn!(worker, "failed to notify after datastore removal: {err}");
+ warn!("failed to notify after datastore removal: {err}");
}
Ok(())
diff --git a/src/api2/node/apt.rs b/src/api2/node/apt.rs
index 8e4f150d..277659b4 100644
--- a/src/api2/node/apt.rs
+++ b/src/api2/node/apt.rs
@@ -2,6 +2,7 @@ use anyhow::{bail, format_err, Error};
use serde_json::{json, Value};
use std::collections::HashMap;
use std::os::unix::prelude::OsStrExt;
+use tracing::{info, warn};
use proxmox_router::{
list_subdirs_api_method, Permission, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap,
@@ -85,9 +86,9 @@ fn read_and_update_proxy_config() -> Result<Option<ProxyConfig>, Error> {
Ok(proxy_config)
}
-fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
+fn do_apt_update(quiet: bool) -> Result<(), Error> {
if !quiet {
- worker.log_message("starting apt-get update")
+ info!(log = "file", "starting apt-get update")
}
read_and_update_proxy_config()?;
@@ -101,7 +102,7 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
.map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
if !quiet {
- worker.log_message(String::from_utf8(output.stdout)?);
+ info!("{}", String::from_utf8(output.stdout)?);
}
// TODO: improve run_command to allow outputting both, stderr and stdout
@@ -116,7 +117,7 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
}
})
.unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
- worker.log_warning(msg);
+ warn!(msg);
} else {
bail!("terminated by signal");
}
@@ -162,8 +163,8 @@ pub fn apt_update_database(
let auth_id = rpcenv.get_auth_id().unwrap();
let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
- let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |worker| {
- do_apt_update(&worker, quiet)?;
+ let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |_worker| {
+ do_apt_update(quiet)?;
let mut cache = apt::update_cache()?;
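[Note: the apt hunk above forwards command output through `String::from_utf8` and falls back to a fixed message when stderr is not valid UTF-8. A minimal std-only illustration of that fallback (the byte values are made up):]

```rust
fn main() {
    // Well-formed stdout passes through unchanged.
    let ok = String::from_utf8(b"all fine".to_vec()).unwrap();
    assert_eq!(ok, "all fine");

    // Invalid UTF-8 is replaced rather than aborting the task, matching
    // the patch's unwrap_or_else fallback for non-UTF-8 error output.
    let bad = String::from_utf8(vec![0x66, 0x6f, 0xff])
        .unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
    assert_eq!(bad, "non utf8 error message (suppressed)");
}
```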
diff --git a/src/api2/node/certificates.rs b/src/api2/node/certificates.rs
index d514001a..d7c7b969 100644
--- a/src/api2/node/certificates.rs
+++ b/src/api2/node/certificates.rs
@@ -5,16 +5,17 @@ use anyhow::{bail, format_err, Error};
use openssl::pkey::PKey;
use openssl::x509::X509;
use serde::{Deserialize, Serialize};
+use tracing::info;
use proxmox_router::list_subdirs_api_method;
use proxmox_router::SubdirMap;
use proxmox_router::{Permission, Router, RpcEnvironment};
use proxmox_schema::api;
-use proxmox_sys::{task_log, task_warn};
use pbs_api_types::{NODE_SCHEMA, PRIV_SYS_MODIFY};
use pbs_buildcfg::configdir;
use pbs_tools::cert;
+use tracing::warn;
use crate::acme::AcmeClient;
use crate::api2::types::AcmeDomain;
@@ -305,10 +306,7 @@ async fn order_certificate(
};
if domains.is_empty() {
- task_log!(
- worker,
- "No domains configured to be ordered from an ACME server."
- );
+ info!("No domains configured to be ordered from an ACME server.");
return Ok(None);
}
@@ -316,11 +314,11 @@ async fn order_certificate(
let mut acme = node_config.acme_client().await?;
- task_log!(worker, "Placing ACME order");
+ info!("Placing ACME order");
let order = acme
.new_order(domains.iter().map(|d| d.domain.to_ascii_lowercase()))
.await?;
- task_log!(worker, "Order URL: {}", order.location);
+ info!("Order URL: {}", order.location);
let identifiers: Vec<String> = order
.data
@@ -332,7 +330,7 @@ async fn order_certificate(
.collect();
for auth_url in &order.data.authorizations {
- task_log!(worker, "Getting authorization details from '{}'", auth_url);
+ info!("Getting authorization details from '{}'", auth_url);
let mut auth = acme.get_authorization(auth_url).await?;
let domain = match &mut auth.identifier {
@@ -340,11 +338,11 @@ async fn order_certificate(
};
if auth.status == Status::Valid {
- task_log!(worker, "{} is already validated!", domain);
+ info!("{} is already validated!", domain);
continue;
}
- task_log!(worker, "The validation for {} is pending", domain);
+ info!("The validation for {} is pending", domain);
let domain_config: &AcmeDomain = get_domain_config(&domain)?;
let plugin_id = domain_config.plugin.as_deref().unwrap_or("standalone");
let mut plugin_cfg =
@@ -352,31 +350,28 @@ async fn order_certificate(
format_err!("plugin '{}' for domain '{}' not found!", plugin_id, domain)
})?;
- task_log!(worker, "Setting up validation plugin");
+ info!("Setting up validation plugin");
let validation_url = plugin_cfg
.setup(&mut acme, &auth, domain_config, Arc::clone(&worker))
.await?;
- let result = request_validation(&worker, &mut acme, auth_url, validation_url).await;
+ let result = request_validation(&mut acme, auth_url, validation_url).await;
if let Err(err) = plugin_cfg
.teardown(&mut acme, &auth, domain_config, Arc::clone(&worker))
.await
{
- task_warn!(
- worker,
+ warn!(
"Failed to teardown plugin '{}' for domain '{}' - {}",
- plugin_id,
- domain,
- err
+ plugin_id, domain, err
);
}
result?;
}
- task_log!(worker, "All domains validated");
- task_log!(worker, "Creating CSR");
+ info!("All domains validated");
+ info!("Creating CSR");
let csr = proxmox_acme_rs::util::Csr::generate(&identifiers, &Default::default())?;
let mut finalize_error_cnt = 0u8;
@@ -389,7 +384,7 @@ async fn order_certificate(
match order.status {
Status::Pending => {
- task_log!(worker, "still pending, trying to finalize anyway");
+ info!("still pending, trying to finalize anyway");
let finalize = order
.finalize
.as_deref()
@@ -404,7 +399,7 @@ async fn order_certificate(
tokio::time::sleep(Duration::from_secs(5)).await;
}
Status::Ready => {
- task_log!(worker, "order is ready, finalizing");
+ info!("order is ready, finalizing");
let finalize = order
.finalize
.as_deref()
@@ -413,18 +408,18 @@ async fn order_certificate(
tokio::time::sleep(Duration::from_secs(5)).await;
}
Status::Processing => {
- task_log!(worker, "still processing, trying again in 30 seconds");
+ info!("still processing, trying again in 30 seconds");
tokio::time::sleep(Duration::from_secs(30)).await;
}
Status::Valid => {
- task_log!(worker, "valid");
+ info!("valid");
break;
}
other => bail!("order status: {:?}", other),
}
}
- task_log!(worker, "Downloading certificate");
+ info!("Downloading certificate");
let certificate = acme
.get_certificate(
order
@@ -441,15 +436,14 @@ async fn order_certificate(
}
async fn request_validation(
- worker: &WorkerTask,
acme: &mut AcmeClient,
auth_url: &str,
validation_url: &str,
) -> Result<(), Error> {
- task_log!(worker, "Triggering validation");
+ info!("Triggering validation");
acme.request_challenge_validation(validation_url).await?;
- task_log!(worker, "Sleeping for 5 seconds");
+ info!("Sleeping for 5 seconds");
tokio::time::sleep(Duration::from_secs(5)).await;
loop {
@@ -458,10 +452,7 @@ async fn request_validation(
let auth = acme.get_authorization(auth_url).await?;
match auth.status {
Status::Pending => {
- task_log!(
- worker,
- "Status is still 'pending', trying again in 10 seconds"
- );
+ info!("Status is still 'pending', trying again in 10 seconds");
tokio::time::sleep(Duration::from_secs(10)).await;
}
Status::Valid => return Ok(()),
@@ -586,15 +577,12 @@ pub fn revoke_acme_cert(rpcenv: &mut dyn RpcEnvironment) -> Result<String, Error
None,
auth_id,
true,
- move |worker| async move {
- task_log!(worker, "Loading ACME account");
+ move |_worker| async move {
+ info!("Loading ACME account");
let mut acme = node_config.acme_client().await?;
- task_log!(worker, "Revoking old certificate");
+ info!("Revoking old certificate");
acme.revoke_certificate(cert_pem.as_bytes(), None).await?;
- task_log!(
- worker,
- "Deleting certificate and regenerating a self-signed one"
- );
+ info!("Deleting certificate and regenerating a self-signed one");
delete_custom_certificate().await?;
Ok(())
},
diff --git a/src/api2/node/disks/directory.rs b/src/api2/node/disks/directory.rs
index d5a16be7..2c7f7a13 100644
--- a/src/api2/node/disks/directory.rs
+++ b/src/api2/node/disks/directory.rs
@@ -1,11 +1,11 @@
use ::serde::{Deserialize, Serialize};
use anyhow::{bail, Error};
use serde_json::json;
+use tracing::info;
use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
use proxmox_schema::api;
use proxmox_section_config::SectionConfigData;
-use proxmox_sys::task_log;
use pbs_api_types::{
DataStoreConfig, BLOCKDEVICE_NAME_SCHEMA, DATASTORE_SCHEMA, NODE_SCHEMA, PRIV_SYS_AUDIT,
@@ -170,8 +170,8 @@ pub fn create_datastore_disk(
Some(name.clone()),
auth_id,
to_stdout,
- move |worker| {
- task_log!(worker, "create datastore '{}' on disk {}", name, disk);
+ move |_worker| {
+ info!("create datastore '{}' on disk {}", name, disk);
let add_datastore = add_datastore.unwrap_or(false);
let filesystem = filesystem.unwrap_or(FileSystemType::Ext4);
@@ -204,12 +204,7 @@ pub fn create_datastore_disk(
bail!("datastore '{}' already exists.", datastore.name);
}
- crate::api2::config::datastore::do_create_datastore(
- lock,
- config,
- datastore,
- Some(&worker),
- )?;
+ crate::api2::config::datastore::do_create_datastore(lock, config, datastore)?;
}
Ok(())
diff --git a/src/api2/node/disks/mod.rs b/src/api2/node/disks/mod.rs
index 5ee959cd..53a9c536 100644
--- a/src/api2/node/disks/mod.rs
+++ b/src/api2/node/disks/mod.rs
@@ -6,7 +6,7 @@ use proxmox_router::{
};
use proxmox_schema::api;
use proxmox_sortable_macro::sortable;
-use proxmox_sys::task_log;
+use tracing::info;
use pbs_api_types::{
BLOCKDEVICE_NAME_SCHEMA, NODE_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY, UPID_SCHEMA,
@@ -163,8 +163,8 @@ pub fn initialize_disk(
Some(disk.clone()),
auth_id,
to_stdout,
- move |worker| {
- task_log!(worker, "initialize disk {}", disk);
+ move |_worker| {
+ info!("initialize disk {}", disk);
let disk_manager = DiskManage::new();
let disk_info = disk_manager.disk_by_name(&disk)?;
diff --git a/src/api2/node/disks/zfs.rs b/src/api2/node/disks/zfs.rs
index c8d221cc..2c2769d1 100644
--- a/src/api2/node/disks/zfs.rs
+++ b/src/api2/node/disks/zfs.rs
@@ -1,9 +1,9 @@
use anyhow::{bail, Error};
use serde_json::{json, Value};
+use tracing::{error, info};
use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
use proxmox_schema::api;
-use proxmox_sys::{task_error, task_log};
use pbs_api_types::{
DataStoreConfig, ZfsCompressionType, ZfsRaidLevel, ZpoolListItem, DATASTORE_SCHEMA,
@@ -228,13 +228,10 @@ pub fn create_zpool(
Some(name.clone()),
auth_id,
to_stdout,
- move |worker| {
- task_log!(
- worker,
+ move |_worker| {
+ info!(
"create {:?} zpool '{}' on devices '{}'",
- raidlevel,
- name,
- devices_text
+ raidlevel, name, devices_text
);
let mut command = std::process::Command::new("zpool");
@@ -275,12 +272,12 @@ pub fn create_zpool(
}
}
- task_log!(worker, "# {:?}", command);
+ info!("# {:?}", command);
match proxmox_sys::command::run_command(command, None) {
- Ok(output) => task_log!(worker, "{output}"),
+ Ok(output) => info!("{output}"),
Err(err) => {
- task_error!(worker, "{err}");
+ error!("{err}");
bail!("Error during 'zpool create', see task log for more details");
}
};
@@ -299,11 +296,11 @@ pub fn create_zpool(
command.arg(&format!("compression={}", compression));
}
command.args(["relatime=on", &name]);
- task_log!(worker, "# {:?}", command);
+ info!("# {:?}", command);
match proxmox_sys::command::run_command(command, None) {
- Ok(output) => task_log!(worker, "{output}"),
+ Ok(output) => info!("{output}"),
Err(err) => {
- task_error!(worker, "{err}");
+ error!("{err}");
bail!("Error during 'zfs set', see task log for more details");
}
};
@@ -319,12 +316,7 @@ pub fn create_zpool(
bail!("datastore '{}' already exists.", datastore.name);
}
- crate::api2::config::datastore::do_create_datastore(
- lock,
- config,
- datastore,
- Some(&worker),
- )?;
+ crate::api2::config::datastore::do_create_datastore(lock, config, datastore)?;
}
Ok(())
diff --git a/src/api2/node/mod.rs b/src/api2/node/mod.rs
index 931143fd..7055a683 100644
--- a/src/api2/node/mod.rs
+++ b/src/api2/node/mod.rs
@@ -25,6 +25,7 @@ use proxmox_sortable_macro::sortable;
use proxmox_sys::fd::fd_change_cloexec;
use pbs_api_types::{NODE_SCHEMA, PRIV_SYS_CONSOLE};
+use tracing::{info, warn};
use crate::auth::{private_auth_keyring, public_auth_keyring};
use crate::tools;
@@ -181,20 +182,18 @@ async fn termproxy(cmd: Option<String>, rpcenv: &mut dyn RpcEnvironment) -> Resu
let stdout = child.stdout.take().expect("no child stdout handle");
let stderr = child.stderr.take().expect("no child stderr handle");
- let worker_stdout = worker.clone();
let stdout_fut = async move {
let mut reader = BufReader::new(stdout).lines();
while let Some(line) = reader.next_line().await? {
- worker_stdout.log_message(line);
+ info!(line);
}
Ok::<(), Error>(())
};
- let worker_stderr = worker.clone();
let stderr_fut = async move {
let mut reader = BufReader::new(stderr).lines();
while let Some(line) = reader.next_line().await? {
- worker_stderr.log_warning(line);
+ warn!(line);
}
Ok::<(), Error>(())
};
@@ -226,9 +225,9 @@ async fn termproxy(cmd: Option<String>, rpcenv: &mut dyn RpcEnvironment) -> Resu
}
if let Err(err) = child.kill().await {
- worker.log_warning(format!("error killing termproxy: {}", err));
+ warn!("error killing termproxy: {}", err);
} else if let Err(err) = child.wait().await {
- worker.log_warning(format!("error awaiting termproxy: {}", err));
+ warn!("error awaiting termproxy: {}", err);
}
}
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index daeba7cf..b5c23117 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -1,10 +1,10 @@
//! Sync datastore from remote server
use anyhow::{format_err, Error};
use futures::{future::FutureExt, select};
+use tracing::info;
use proxmox_router::{Permission, Router, RpcEnvironment};
use proxmox_schema::api;
-use proxmox_sys::task_log;
use pbs_api_types::{
Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, DATASTORE_SCHEMA,
@@ -116,21 +116,18 @@ pub fn do_sync_job(
let pull_params = PullParameters::try_from(&sync_job)?;
let client = pull_params.client().await?;
- task_log!(worker, "Starting datastore sync job '{}'", job_id);
+ info!("Starting datastore sync job '{}'", job_id);
if let Some(event_str) = schedule {
- task_log!(worker, "task triggered by schedule '{}'", event_str);
+ info!("task triggered by schedule '{}'", event_str);
}
- task_log!(
- worker,
+ info!(
"sync datastore '{}' from '{}/{}'",
- sync_job.store,
- sync_job.remote,
- sync_job.remote_store,
+ sync_job.store, sync_job.remote, sync_job.remote_store,
);
- pull_store(&worker, &client, pull_params).await?;
+ pull_store(&client, pull_params).await?;
- task_log!(worker, "sync job '{}' end", &job_id);
+ info!("sync job '{}' end", &job_id);
Ok(())
};
@@ -276,21 +273,18 @@ async fn pull(
auth_id.to_string(),
true,
move |worker| async move {
- task_log!(
- worker,
+ info!(
"pull datastore '{}' from '{}/{}'",
- store,
- remote,
- remote_store,
+ store, remote, remote_store,
);
- let pull_future = pull_store(&worker, &client, pull_params);
+ let pull_future = pull_store(&client, pull_params);
(select! {
success = pull_future.fuse() => success,
abort = worker.abort_future().map(|_| Err(format_err!("pull aborted"))) => abort,
})?;
- task_log!(worker, "pull datastore '{}' end", store);
+ info!("pull datastore '{}' end", store);
Ok(())
},
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 2f9385a7..9dfd2d11 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -2,11 +2,12 @@ use std::sync::{Arc, Mutex};
use anyhow::{bail, format_err, Error};
use serde_json::Value;
+use tracing::{info, warn};
use proxmox_lang::try_block;
use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
use proxmox_schema::api;
-use proxmox_sys::{task_log, task_warn, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
use pbs_api_types::{
print_ns_and_snapshot, print_store_and_ns, Authid, GroupFilter, MediaPoolConfig, Operation,
@@ -181,7 +182,7 @@ pub fn do_tape_backup_job(
let job_result = try_block!({
if schedule.is_some() {
// for scheduled tape backup jobs, we wait indefinitely for the lock
- task_log!(worker, "waiting for drive lock...");
+ info!("waiting for drive lock...");
loop {
worker.check_abort()?;
match lock_tape_device(&drive_config, &setup.drive) {
@@ -196,9 +197,9 @@ pub fn do_tape_backup_job(
}
set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
- task_log!(worker, "Starting tape backup job '{}'", job_id);
+ info!("Starting tape backup job '{}'", job_id);
if let Some(event_str) = schedule {
- task_log!(worker, "task triggered by schedule '{}'", event_str);
+ info!("task triggered by schedule '{}'", event_str);
}
backup_worker(
@@ -392,7 +393,7 @@ fn backup_worker(
) -> Result<(), Error> {
let start = std::time::Instant::now();
- task_log!(worker, "update media online status");
+ info!("update media online status");
let changer_name = update_media_online_status(&setup.drive)?;
let root_namespace = setup.ns.clone().unwrap_or_default();
@@ -400,8 +401,7 @@ fn backup_worker(
let pool = MediaPool::with_config(TAPE_STATUS_DIR, pool_config, changer_name, false)?;
- let mut pool_writer =
- PoolWriter::new(pool, &setup.drive, worker, email, force_media_set, ns_magic)?;
+ let mut pool_writer = PoolWriter::new(pool, &setup.drive, email, force_media_set, ns_magic)?;
let mut group_list = Vec::new();
let namespaces = datastore.recursive_iter_backup_ns_ok(root_namespace, setup.max_depth)?;
@@ -422,16 +422,14 @@ fn backup_worker(
.filter(|group| filter_fn(group, group_filters))
.collect();
let group_count = list.len();
- task_log!(
- worker,
+ info!(
"found {} groups (out of {} total)",
- group_count,
- group_count_full
+ group_count, group_count_full
);
(list, group_count)
} else {
let group_count = group_list.len();
- task_log!(worker, "found {} groups", group_count);
+ info!("found {} groups", group_count);
(group_list, group_count)
};
@@ -440,10 +438,7 @@ fn backup_worker(
let latest_only = setup.latest_only.unwrap_or(false);
if latest_only {
- task_log!(
- worker,
- "latest-only: true (only considering latest snapshots)"
- );
+ info!("latest-only: true (only considering latest snapshots)");
}
let datastore_name = datastore.name();
@@ -466,8 +461,7 @@ fn backup_worker(
.collect();
if snapshot_list.is_empty() {
- task_log!(
- worker,
+ info!(
"{}, group {} was empty",
print_store_and_ns(datastore_name, group.backup_ns()),
group.group()
@@ -487,7 +481,7 @@ fn backup_worker(
info.backup_dir.backup_ns(),
info.backup_dir.as_ref(),
) {
- task_log!(worker, "skip snapshot {}", rel_path);
+ info!("skip snapshot {}", rel_path);
continue;
}
@@ -500,7 +494,7 @@ fn backup_worker(
SnapshotBackupResult::Ignored => {}
}
progress.done_snapshots = 1;
- task_log!(worker, "percentage done: {}", progress);
+ info!("percentage done: {}", progress);
}
} else {
progress.group_snapshots = snapshot_list.len() as u64;
@@ -513,7 +507,7 @@ fn backup_worker(
info.backup_dir.backup_ns(),
info.backup_dir.as_ref(),
) {
- task_log!(worker, "skip snapshot {}", rel_path);
+ info!("skip snapshot {}", rel_path);
continue;
}
@@ -526,7 +520,7 @@ fn backup_worker(
SnapshotBackupResult::Ignored => {}
}
progress.done_snapshots = snapshot_number as u64 + 1;
- task_log!(worker, "percentage done: {}", progress);
+ info!("percentage done: {}", progress);
}
}
}
@@ -534,18 +528,15 @@ fn backup_worker(
pool_writer.commit()?;
if need_catalog {
- task_log!(worker, "append media catalog");
+ info!("append media catalog");
let uuid = pool_writer.load_writable_media(worker)?;
- let done = pool_writer.append_catalog_archive(worker)?;
+ let done = pool_writer.append_catalog_archive()?;
if !done {
- task_log!(
- worker,
- "catalog does not fit on tape, writing to next volume"
- );
+ info!("catalog does not fit on tape, writing to next volume");
pool_writer.set_media_status_full(&uuid)?;
pool_writer.load_writable_media(worker)?;
- let done = pool_writer.append_catalog_archive(worker)?;
+ let done = pool_writer.append_catalog_archive()?;
if !done {
bail!("write_catalog_archive failed on second media");
}
@@ -553,9 +544,9 @@ fn backup_worker(
}
if setup.export_media_set.unwrap_or(false) {
- pool_writer.export_media_set(worker)?;
+ pool_writer.export_media_set()?;
} else if setup.eject_media.unwrap_or(false) {
- pool_writer.eject_media(worker)?;
+ pool_writer.eject_media()?;
}
if errors {
@@ -565,7 +556,7 @@ fn backup_worker(
summary.used_tapes = match pool_writer.get_used_media_labels() {
Ok(tapes) => Some(tapes),
Err(err) => {
- task_warn!(worker, "could not collect list of used tapes: {err}");
+ warn!("could not collect list of used tapes: {err}");
None
}
};
@@ -599,7 +590,7 @@ fn backup_snapshot(
snapshot: BackupDir,
) -> Result<SnapshotBackupResult, Error> {
let snapshot_path = snapshot.relative_path();
- task_log!(worker, "backup snapshot {:?}", snapshot_path);
+ info!("backup snapshot {:?}", snapshot_path);
let snapshot_reader = match snapshot.locked_reader() {
Ok(reader) => reader,
@@ -607,15 +598,10 @@ fn backup_snapshot(
if !snapshot.full_path().exists() {
// we got an error and the dir does not exist,
// it probably just vanished, so continue
- task_log!(worker, "snapshot {:?} vanished, skipping", snapshot_path);
+ info!("snapshot {:?} vanished, skipping", snapshot_path);
return Ok(SnapshotBackupResult::Ignored);
}
- task_warn!(
- worker,
- "failed opening snapshot {:?}: {}",
- snapshot_path,
- err
- );
+ warn!("failed opening snapshot {:?}: {}", snapshot_path, err);
return Ok(SnapshotBackupResult::Error);
}
};
@@ -661,7 +647,7 @@ fn backup_snapshot(
let snapshot_reader = snapshot_reader.lock().unwrap();
- let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
+ let (done, _bytes) = pool_writer.append_snapshot_archive(&snapshot_reader)?;
if !done {
// does not fit on tape, so we try on next volume
@@ -670,19 +656,14 @@ fn backup_snapshot(
worker.check_abort()?;
pool_writer.load_writable_media(worker)?;
- let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
+ let (done, _bytes) = pool_writer.append_snapshot_archive(&snapshot_reader)?;
if !done {
bail!("write_snapshot_archive failed on second media");
}
}
- task_log!(
- worker,
- "end backup {}:{:?}",
- datastore.name(),
- snapshot_path
- );
+ info!("end backup {}:{:?}", datastore.name(), snapshot_path);
Ok(SnapshotBackupResult::Success)
}
diff --git a/src/api2/tape/drive.rs b/src/api2/tape/drive.rs
index 5306e605..4c94f0d4 100644
--- a/src/api2/tape/drive.rs
+++ b/src/api2/tape/drive.rs
@@ -4,6 +4,7 @@ use std::sync::Arc;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
+use tracing::{info, warn};
use proxmox_router::{
list_subdirs_api_method, Permission, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap,
@@ -11,7 +12,6 @@ use proxmox_router::{
use proxmox_schema::api;
use proxmox_section_config::SectionConfigData;
use proxmox_sortable_macro::sortable;
-use proxmox_sys::{task_log, task_warn};
use proxmox_uuid::Uuid;
use pbs_api_types::{
@@ -131,13 +131,8 @@ pub fn load_media(
drive.clone(),
"load-media",
Some(job_id),
- move |worker, config| {
- task_log!(
- worker,
- "loading media '{}' into drive '{}'",
- label_text,
- drive
- );
+ move |_worker, config| {
+ info!("loading media '{}' into drive '{}'", label_text, drive);
let (mut changer, _) = required_media_changer(&config, &drive)?;
changer.load_media(&label_text)?;
Ok(())
@@ -250,8 +245,8 @@ pub fn unload(
drive.clone(),
"unload-media",
Some(drive.clone()),
- move |worker, config| {
- task_log!(worker, "unloading media from drive '{}'", drive);
+ move |_worker, config| {
+ info!("unloading media from drive '{}'", drive);
let (mut changer, _) = required_media_changer(&config, &drive)?;
changer.unload_media(target_slot)?;
@@ -299,9 +294,9 @@ pub fn format_media(
drive.clone(),
"format-media",
Some(drive.clone()),
- move |worker, config| {
+ move |_worker, config| {
if let Some(ref label) = label_text {
- task_log!(worker, "try to load media '{}'", label);
+ info!("try to load media '{}'", label);
if let Some((mut changer, _)) = media_changer(&config, &drive)? {
changer.load_media(label)?;
}
@@ -315,15 +310,15 @@ pub fn format_media(
bail!("expected label '{}', found unrelated data", label);
}
/* assume drive contains no or unrelated data */
- task_log!(worker, "unable to read media label: {}", err);
- task_log!(worker, "format anyways");
+ info!("unable to read media label: {}", err);
+ info!("format anyways");
handle.format_media(fast.unwrap_or(true))?;
}
Ok((None, _)) => {
if let Some(label) = label_text {
bail!("expected label '{}', found empty tape", label);
}
- task_log!(worker, "found empty media - format anyways");
+ info!("found empty media - format anyways");
handle.format_media(fast.unwrap_or(true))?;
}
Ok((Some(media_id), _key_config)) => {
@@ -337,11 +332,9 @@ pub fn format_media(
}
}
- task_log!(
- worker,
+ info!(
"found media '{}' with uuid '{}'",
- media_id.label.label_text,
- media_id.label.uuid,
+ media_id.label.label_text, media_id.label.uuid,
);
let mut inventory = Inventory::new(TAPE_STATUS_DIR);
@@ -489,7 +482,7 @@ pub fn label_media(
drive.clone(),
"label-media",
Some(drive.clone()),
- move |worker, config| {
+ move |_worker, config| {
let mut drive = open_drive(&config, &drive)?;
drive.rewind()?;
@@ -511,7 +504,7 @@ pub fn label_media(
pool: pool.clone(),
};
- write_media_label(worker, &mut drive, label, pool)
+ write_media_label(&mut drive, label, pool)
},
)?;
@@ -519,25 +512,15 @@ pub fn label_media(
}
fn write_media_label(
- worker: Arc<WorkerTask>,
drive: &mut Box<dyn TapeDriver>,
label: MediaLabel,
pool: Option<String>,
) -> Result<(), Error> {
drive.label_tape(&label)?;
if let Some(ref pool) = pool {
- task_log!(
- worker,
- "Label media '{}' for pool '{}'",
- label.label_text,
- pool
- );
+ info!("Label media '{}' for pool '{}'", label.label_text, pool);
} else {
- task_log!(
- worker,
- "Label media '{}' (no pool assignment)",
- label.label_text
- );
+ info!("Label media '{}' (no pool assignment)", label.label_text);
}
let media_id = MediaId {
@@ -729,10 +712,10 @@ pub fn clean_drive(drive: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
drive.clone(),
"clean-drive",
Some(drive.clone()),
- move |worker, config| {
+ move |_worker, config| {
let (mut changer, _changer_name) = required_media_changer(&config, &drive)?;
- task_log!(worker, "Starting drive clean");
+ info!("Starting drive clean");
changer.clean_drive()?;
@@ -743,7 +726,7 @@ pub fn clean_drive(drive: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
// test for critical tape alert flags
if let Ok(alert_flags) = handle.tape_alert_flags() {
if !alert_flags.is_empty() {
- task_log!(worker, "TapeAlertFlags: {:?}", alert_flags);
+ info!("TapeAlertFlags: {:?}", alert_flags);
if tape_alert_flags_critical(alert_flags) {
bail!("found critical tape alert flags: {:?}", alert_flags);
}
@@ -752,13 +735,13 @@ pub fn clean_drive(drive: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
// test wearout (max. 50 mounts)
if let Ok(volume_stats) = handle.volume_statistics() {
- task_log!(worker, "Volume mounts: {}", volume_stats.volume_mounts);
+ info!("Volume mounts: {}", volume_stats.volume_mounts);
let wearout = volume_stats.volume_mounts * 2; // (*100.0/50.0);
- task_log!(worker, "Cleaning tape wearout: {}%", wearout);
+ info!("Cleaning tape wearout: {}%", wearout);
}
}
- task_log!(worker, "Drive cleaned successfully");
+ info!("Drive cleaned successfully");
Ok(())
},
@@ -880,12 +863,12 @@ pub fn update_inventory(
drive.clone(),
"inventory-update",
Some(drive.clone()),
- move |worker, config| {
+ move |_worker, config| {
let (mut changer, changer_name) = required_media_changer(&config, &drive)?;
let label_text_list = changer.online_media_label_texts()?;
if label_text_list.is_empty() {
- task_log!(worker, "changer device does not list any media labels");
+ info!("changer device does not list any media labels");
}
let mut inventory = Inventory::load(TAPE_STATUS_DIR)?;
@@ -894,7 +877,7 @@ pub fn update_inventory(
for label_text in label_text_list.iter() {
if label_text.starts_with("CLN") {
- task_log!(worker, "skip cleaning unit '{}'", label_text);
+ info!("skip cleaning unit '{}'", label_text);
continue;
}
@@ -903,45 +886,36 @@ pub fn update_inventory(
if !read_all_labels {
if let Some(media_id) = inventory.find_media_by_label_text(&label_text) {
if !catalog || MediaCatalog::exists(TAPE_STATUS_DIR, &media_id.label.uuid) {
- task_log!(worker, "media '{}' already inventoried", label_text);
+ info!("media '{}' already inventoried", label_text);
continue;
}
}
}
if let Err(err) = changer.load_media(&label_text) {
- task_warn!(worker, "unable to load media '{}' - {}", label_text, err);
+ warn!("unable to load media '{}' - {}", label_text, err);
continue;
}
let mut drive = open_drive(&config, &drive)?;
match drive.read_label() {
Err(err) => {
- task_warn!(
- worker,
- "unable to read label form media '{}' - {}",
- label_text,
- err
- );
+ warn!("unable to read label from media '{}' - {}", label_text, err);
}
Ok((None, _)) => {
- task_log!(worker, "media '{}' is empty", label_text);
+ info!("media '{}' is empty", label_text);
}
Ok((Some(media_id), _key_config)) => {
if label_text != media_id.label.label_text {
- task_warn!(
- worker,
+ warn!(
"label text mismatch ({} != {})",
- label_text,
- media_id.label.label_text
+ label_text, media_id.label.label_text
);
continue;
}
- task_log!(
- worker,
+ info!(
"inventorize media '{}' with uuid '{}'",
- label_text,
- media_id.label.uuid
+ label_text, media_id.label.uuid
);
let _pool_lock = if let Some(pool) = media_id.pool() {
@@ -962,15 +936,11 @@ pub fn update_inventory(
if catalog {
let media_set = inventory.compute_media_set_members(&set.uuid)?;
if let Err(err) = fast_catalog_restore(
- &worker,
&mut drive,
&media_set,
&media_id.label.uuid,
) {
- task_warn!(
- worker,
- "could not restore catalog for {label_text}: {err}"
- );
+ warn!("could not restore catalog for {label_text}: {err}");
}
}
} else {
@@ -1026,14 +996,13 @@ pub fn barcode_label_media(
drive.clone(),
"barcode-label-media",
Some(drive.clone()),
- move |worker, config| barcode_label_media_worker(worker, drive, &config, pool),
+ move |_worker, config| barcode_label_media_worker(drive, &config, pool),
)?;
Ok(upid_str.into())
}
fn barcode_label_media_worker(
- worker: Arc<WorkerTask>,
drive: String,
drive_config: &SectionConfigData,
pool: Option<String>,
@@ -1065,18 +1034,17 @@ fn barcode_label_media_worker(
inventory.reload()?;
if inventory.find_media_by_label_text(&label_text).is_some() {
- task_log!(
- worker,
+ info!(
"media '{}' already inventoried (already labeled)",
label_text
);
continue;
}
- task_log!(worker, "checking/loading media '{}'", label_text);
+ info!("checking/loading media '{}'", label_text);
if let Err(err) = changer.load_media(&label_text) {
- task_warn!(worker, "unable to load media '{}' - {}", label_text, err);
+ warn!("unable to load media '{}' - {}", label_text, err);
continue;
}
@@ -1085,18 +1053,13 @@ fn barcode_label_media_worker(
match drive.read_next_file() {
Ok(_reader) => {
- task_log!(
- worker,
- "media '{}' is not empty (format it first)",
- label_text
- );
+ info!("media '{}' is not empty (format it first)", label_text);
continue;
}
Err(BlockReadError::EndOfFile) => { /* EOF mark at BOT, assume tape is empty */ }
Err(BlockReadError::EndOfStream) => { /* tape is empty */ }
Err(_err) => {
- task_warn!(
- worker,
+ warn!(
"media '{}' read error (maybe not empty - format it first)",
label_text
);
@@ -1112,7 +1075,7 @@ fn barcode_label_media_worker(
pool: pool.clone(),
};
- write_media_label(worker.clone(), &mut drive, label, pool.clone())?
+ write_media_label(&mut drive, label, pool.clone())?
}
Ok(())
@@ -1271,14 +1234,12 @@ pub fn catalog_media(
let media_id = match drive.read_label()? {
(Some(media_id), key_config) => {
- task_log!(
- worker,
+ info!(
"found media label: {}",
serde_json::to_string_pretty(&serde_json::to_value(&media_id)?)?
);
if key_config.is_some() {
- task_log!(
- worker,
+ info!(
"encryption key config: {}",
serde_json::to_string_pretty(&serde_json::to_value(&key_config)?)?
);
@@ -1292,7 +1253,7 @@ pub fn catalog_media(
let (_media_set_lock, media_set_uuid) = match media_id.media_set_label {
None => {
- task_log!(worker, "media is empty");
+ info!("media is empty");
let _pool_lock = if let Some(pool) = media_id.pool() {
lock_media_pool(TAPE_STATUS_DIR, &pool)?
} else {
@@ -1305,7 +1266,7 @@ pub fn catalog_media(
Some(ref set) => {
if set.unassigned() {
// media is empty
- task_log!(worker, "media is empty");
+ info!("media is empty");
let _lock = lock_unassigned_media_pool(TAPE_STATUS_DIR)?;
MediaCatalog::destroy(TAPE_STATUS_DIR, &media_id.label.uuid)?;
inventory.store(media_id.clone(), false)?;
@@ -1336,14 +1297,14 @@ pub fn catalog_media(
if !scan {
let media_set = inventory.compute_media_set_members(media_set_uuid)?;
- if fast_catalog_restore(&worker, &mut drive, &media_set, &media_id.label.uuid)? {
+ if fast_catalog_restore(&mut drive, &media_set, &media_id.label.uuid)? {
return Ok(());
}
- task_log!(worker, "no catalog found");
+ info!("no catalog found");
}
- task_log!(worker, "scanning entire media to reconstruct catalog");
+ info!("scanning entire media to reconstruct catalog");
drive.rewind()?;
drive.read_label()?; // skip over labels - we already read them above
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 7b6c8978..0d17020b 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -6,6 +6,7 @@ use std::sync::Arc;
use anyhow::{bail, format_err, Error};
use serde_json::Value;
+use tracing::{info, warn};
use proxmox_human_byte::HumanByte;
use proxmox_io::ReadExt;
@@ -13,7 +14,7 @@ use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
use proxmox_schema::{api, ApiType};
use proxmox_section_config::SectionConfigData;
use proxmox_sys::fs::{replace_file, CreateOptions};
-use proxmox_sys::{task_log, task_warn, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
use proxmox_uuid::Uuid;
use pbs_api_types::{
@@ -399,8 +400,8 @@ pub fn restore(
.and_then(lookup_user_email)
.or_else(|| lookup_user_email(&auth_id.clone().into()));
- task_log!(worker, "Mediaset '{media_set}'");
- task_log!(worker, "Pool: {pool}");
+ info!("Mediaset '{media_set}'");
+ info!("Pool: {pool}");
let res = if snapshots.is_some() || namespaces {
restore_list_worker(
@@ -430,10 +431,10 @@ pub fn restore(
)
};
if res.is_ok() {
- task_log!(worker, "Restore mediaset '{media_set}' done");
+ info!("Restore mediaset '{media_set}' done");
}
if let Err(err) = set_tape_device_state(&drive, "") {
- task_log!(worker, "could not unset drive state for {drive}: {err}");
+ info!("could not unset drive state for {drive}: {err}");
}
res
@@ -484,7 +485,7 @@ fn restore_full_worker(
}
if let Some(fingerprint) = encryption_key_fingerprint {
- task_log!(worker, "Encryption key fingerprint: {fingerprint}");
+ info!("Encryption key fingerprint: {fingerprint}");
}
let used_datastores = store_map.used_datastores();
@@ -493,13 +494,9 @@ fn restore_full_worker(
.map(|(t, _)| String::from(t.name()))
.collect::<Vec<String>>()
.join(", ");
- task_log!(worker, "Datastore(s): {datastore_list}",);
- task_log!(worker, "Drive: {drive_name}");
- log_required_tapes(
- &worker,
- &inventory,
- media_id_list.iter().map(|id| &id.label.uuid),
- );
+ info!("Datastore(s): {datastore_list}");
+ info!("Drive: {drive_name}");
+ log_required_tapes(&inventory, media_id_list.iter().map(|id| &id.label.uuid));
let mut datastore_locks = Vec::new();
for (target, _) in used_datastores.values() {
@@ -529,7 +526,6 @@ fn restore_full_worker(
#[allow(clippy::too_many_arguments)]
fn check_snapshot_restorable(
- worker: &WorkerTask,
store_map: &DataStoreMap,
store: &str,
snapshot: &str,
@@ -570,7 +566,7 @@ fn check_snapshot_restorable(
auth_id,
Some(restore_owner),
) {
- task_warn!(worker, "cannot restore {store}:{snapshot} to {ns}: '{err}'");
+ warn!("cannot restore {store}:{snapshot} to {ns}: '{err}'");
continue;
}
@@ -578,8 +574,7 @@ fn check_snapshot_restorable(
if let Ok(owner) = datastore.get_owner(&ns, dir.as_ref()) {
if restore_owner != &owner {
// only the owner is allowed to create additional snapshots
- task_warn!(
- worker,
+ warn!(
"restore of '{snapshot}' to {ns} failed, owner check failed ({restore_owner} \
!= {owner})",
);
@@ -590,10 +585,7 @@ fn check_snapshot_restorable(
have_some_permissions = true;
if datastore.snapshot_path(&ns, dir).exists() {
- task_warn!(
- worker,
- "found snapshot {snapshot} on target datastore/namespace, skipping...",
- );
+ warn!("found snapshot {snapshot} on target datastore/namespace, skipping...");
continue;
}
can_restore_some = true;
@@ -606,11 +598,7 @@ fn check_snapshot_restorable(
Ok(can_restore_some)
}
-fn log_required_tapes<'a>(
- worker: &WorkerTask,
- inventory: &Inventory,
- list: impl Iterator<Item = &'a Uuid>,
-) {
+fn log_required_tapes<'a>(inventory: &Inventory, list: impl Iterator<Item = &'a Uuid>) {
let mut tape_list = list
.map(|uuid| {
inventory
@@ -622,7 +610,7 @@ fn log_required_tapes<'a>(
})
.collect::<Vec<&str>>();
tape_list.sort_unstable();
- task_log!(worker, "Required media list: {}", tape_list.join(";"));
+ info!("Required media list: {}", tape_list.join(";"));
}
#[allow(clippy::too_many_arguments)]
@@ -654,14 +642,13 @@ fn restore_list_worker(
let (ns, dir) = match parse_ns_and_snapshot(snapshot) {
Ok((ns, dir)) if store_map.has_full_mapping(store, &ns) => (ns, dir),
Err(err) => {
- task_warn!(worker, "couldn't parse snapshot {snapshot} - {err}");
+ warn!("couldn't parse snapshot {snapshot} - {err}");
continue;
}
_ => continue,
};
let snapshot = print_ns_and_snapshot(&ns, &dir);
match check_snapshot_restorable(
- &worker,
&store_map,
store,
&snapshot,
@@ -675,7 +662,7 @@ fn restore_list_worker(
Ok(true) => restorable.push((store.to_string(), snapshot.to_string(), ns, dir)),
Ok(false) => {}
Err(err) => {
- task_warn!(worker, "{err}");
+ warn!("{err}");
skipped.push(format!("{store}:{snapshot}"));
}
}
@@ -693,7 +680,6 @@ fn restore_list_worker(
match parse_ns_and_snapshot(snapshot) {
Ok((ns, dir)) => {
match check_snapshot_restorable(
- &worker,
&store_map,
store,
snapshot,
@@ -709,14 +695,14 @@ fn restore_list_worker(
}
Ok(false) => None,
Err(err) => {
- task_warn!(worker, "{err}");
+ warn!("{err}");
skipped.push(format!("{store}:{snapshot}"));
None
}
}
}
Err(err) => {
- task_warn!(worker, "could not restore {store_snapshot}: {err}");
+ warn!("could not restore {store_snapshot}: {err}");
skipped.push(store_snapshot);
None
}
@@ -734,10 +720,7 @@ fn restore_list_worker(
let media_id = inventory.lookup_media(media_uuid).unwrap();
(media_id, file_num)
} else {
- task_warn!(
- worker,
- "did not find snapshot '{store}:{snapshot}' in media set",
- );
+ warn!("did not find snapshot '{store}:{snapshot}' in media set");
skipped.push(format!("{store}:{snapshot}"));
continue;
};
@@ -750,26 +733,25 @@ fn restore_list_worker(
.or_insert_with(Vec::new);
file_list.push(file_num);
- task_log!(
- worker,
+ info!(
"found snapshot {snapshot} on {}: file {file_num}",
media_id.label.label_text,
);
}
if snapshot_file_hash.is_empty() {
- task_log!(worker, "nothing to restore, skipping remaining phases...");
+ info!("nothing to restore, skipping remaining phases...");
if !skipped.is_empty() {
- task_log!(worker, "skipped the following snapshots:");
+ info!("skipped the following snapshots:");
for snap in skipped {
- task_log!(worker, " {snap}");
+ info!(" {snap}");
}
}
return Ok(());
}
- task_log!(worker, "Phase 1: temporarily restore snapshots to temp dir");
- log_required_tapes(&worker, &inventory, snapshot_file_hash.keys());
+ info!("Phase 1: temporarily restore snapshots to temp dir");
+ log_required_tapes(&inventory, snapshot_file_hash.keys());
let mut datastore_chunk_map: HashMap<String, HashSet<[u8; 32]>> = HashMap::new();
let mut tmp_paths = Vec::new();
for (media_uuid, file_list) in snapshot_file_hash.iter_mut() {
@@ -822,10 +804,10 @@ fn restore_list_worker(
drop(catalog);
if !media_file_chunk_map.is_empty() {
- task_log!(worker, "Phase 2: restore chunks to datastores");
- log_required_tapes(&worker, &inventory, media_file_chunk_map.keys());
+ info!("Phase 2: restore chunks to datastores");
+ log_required_tapes(&inventory, media_file_chunk_map.keys());
} else {
- task_log!(worker, "All chunks are already present, skip phase 2...");
+ info!("All chunks are already present, skip phase 2...");
}
for (media_uuid, file_chunk_map) in media_file_chunk_map.iter_mut() {
@@ -840,10 +822,7 @@ fn restore_list_worker(
restore_file_chunk_map(worker.clone(), &mut drive, &store_map, file_chunk_map)?;
}
- task_log!(
- worker,
- "Phase 3: copy snapshots from temp dir to datastores"
- );
+ info!("Phase 3: copy snapshots from temp dir to datastores");
let mut errors = false;
for (source_datastore, snapshot, source_ns, backup_dir) in snapshots.into_iter() {
if let Err(err) = proxmox_lang::try_block!({
@@ -900,20 +879,14 @@ fn restore_list_worker(
Ok(())
}) {
- task_warn!(
- worker,
- "could not restore {source_datastore}:{snapshot}: '{err}'"
- );
+ warn!("could not restore {source_datastore}:{snapshot}: '{err}'");
skipped.push(format!("{source_datastore}:{snapshot}"));
}
}
- task_log!(worker, "Restore snapshot '{}' done", snapshot);
+ info!("Restore snapshot '{}' done", snapshot);
Ok::<_, Error>(())
}) {
- task_warn!(
- worker,
- "could not copy {source_datastore}:{snapshot}: {err}"
- );
+ warn!("could not copy {source_datastore}:{snapshot}: {err}");
errors = true;
}
}
@@ -923,7 +896,7 @@ fn restore_list_worker(
std::fs::remove_dir_all(&tmp_path)
.map_err(|err| format_err!("remove_dir_all failed - {err}"))
}) {
- task_warn!(worker, "could not clean up temp dir {tmp_path:?}: {err}");
+ warn!("could not clean up temp dir {tmp_path:?}: {err}");
errors = true;
};
}
@@ -932,19 +905,16 @@ fn restore_list_worker(
bail!("errors during copy occurred");
}
if !skipped.is_empty() {
- task_log!(worker, "(partially) skipped the following snapshots:");
+ info!("(partially) skipped the following snapshots:");
for snap in skipped {
- task_log!(worker, " {snap}");
+ info!(" {snap}");
}
}
Ok(())
});
if res.is_err() {
- task_warn!(
- worker,
- "Error during restore, partially restored snapshots will NOT be cleaned up"
- );
+ warn!("Error during restore, partially restored snapshots will NOT be cleaned up");
}
for (datastore, _) in store_map.used_datastores().values() {
@@ -952,7 +922,7 @@ fn restore_list_worker(
match std::fs::remove_dir_all(tmp_path) {
Ok(()) => {}
Err(err) if err.kind() == std::io::ErrorKind::NotFound => {}
- Err(err) => task_warn!(worker, "error cleaning up: {}", err),
+ Err(err) => warn!("error cleaning up: {}", err),
}
}
@@ -1030,7 +1000,7 @@ fn restore_snapshots_to_tmpdir(
);
}
let encrypt_fingerprint = set.encryption_key_fingerprint.clone().map(|fp| {
- task_log!(worker, "Encryption key fingerprint: {}", fp);
+ info!("Encryption key fingerprint: {}", fp);
(fp, set.uuid.clone())
});
@@ -1041,13 +1011,10 @@ fn restore_snapshots_to_tmpdir(
for file_num in file_list {
let current_file_number = drive.current_file_number()?;
if current_file_number != *file_num {
- task_log!(
- worker,
- "was at file {current_file_number}, moving to {file_num}"
- );
+ info!("was at file {current_file_number}, moving to {file_num}");
drive.move_to_file(*file_num)?;
let current_file_number = drive.current_file_number()?;
- task_log!(worker, "now at file {}", current_file_number);
+ info!("now at file {}", current_file_number);
}
let mut reader = drive.read_next_file()?;
@@ -1069,20 +1036,14 @@ fn restore_snapshots_to_tmpdir(
let source_datastore = archive_header.store;
let snapshot = archive_header.snapshot;
- task_log!(
- worker,
- "File {file_num}: snapshot archive {source_datastore}:{snapshot}",
- );
+ info!("File {file_num}: snapshot archive {source_datastore}:{snapshot}");
let mut decoder = pxar::decoder::sync::Decoder::from_std(reader)?;
let target_datastore = match store_map.target_store(&source_datastore) {
Some(datastore) => datastore,
None => {
- task_warn!(
- worker,
- "could not find target datastore for {source_datastore}:{snapshot}",
- );
+ warn!("could not find target datastore for {source_datastore}:{snapshot}");
continue;
}
};
@@ -1136,10 +1097,10 @@ fn restore_file_chunk_map(
for (nr, chunk_map) in file_chunk_map.iter_mut() {
let current_file_number = drive.current_file_number()?;
if current_file_number != *nr {
- task_log!(worker, "was at file {current_file_number}, moving to {nr}");
+ info!("was at file {current_file_number}, moving to {nr}");
drive.move_to_file(*nr)?;
let current_file_number = drive.current_file_number()?;
- task_log!(worker, "now at file {}", current_file_number);
+ info!("now at file {}", current_file_number);
}
let mut reader = drive.read_next_file()?;
let header: MediaContentHeader = unsafe { reader.read_le_value()? };
@@ -1156,10 +1117,7 @@ fn restore_file_chunk_map(
let source_datastore = archive_header.store;
- task_log!(
- worker,
- "File {nr}: chunk archive for datastore '{source_datastore}'",
- );
+ info!("File {nr}: chunk archive for datastore '{source_datastore}'");
let datastore = store_map.target_store(&source_datastore).ok_or_else(|| {
format_err!("unexpected chunk archive for store: {source_datastore}")
@@ -1171,7 +1129,7 @@ fn restore_file_chunk_map(
datastore.clone(),
chunk_map,
)?;
- task_log!(worker, "restored {count} chunks");
+ info!("restored {count} chunks");
}
_ => bail!("unexpected content magic {:?}", header.content_magic),
}
@@ -1231,8 +1189,7 @@ fn restore_partial_chunk_archive<'a>(
let elapsed = start_time.elapsed()?.as_secs_f64();
let bytes = bytes.load(std::sync::atomic::Ordering::SeqCst) as f64;
- task_log!(
- worker,
+ info!(
"restored {} ({:.2}/s)",
HumanByte::new_decimal(bytes),
HumanByte::new_decimal(bytes / elapsed),
@@ -1317,15 +1274,11 @@ pub fn restore_media(
let current_file_number = drive.current_file_number()?;
let reader = match drive.read_next_file() {
Err(BlockReadError::EndOfFile) => {
- task_log!(
- worker,
- "skip unexpected filemark at pos {}",
- current_file_number
- );
+ info!("skip unexpected filemark at pos {}", current_file_number);
continue;
}
Err(BlockReadError::EndOfStream) => {
- task_log!(worker, "detected EOT after {} files", current_file_number);
+ info!("detected EOT after {} files", current_file_number);
break;
}
Err(BlockReadError::Error(err)) => {
@@ -1389,12 +1342,9 @@ fn restore_archive<'a>(
let datastore_name = archive_header.store;
let snapshot = archive_header.snapshot;
- task_log!(
- worker,
+ info!(
"File {}: snapshot archive {}:{}",
- current_file_number,
- datastore_name,
- snapshot
+ current_file_number, datastore_name, snapshot
);
let (backup_ns, backup_dir) = parse_ns_and_snapshot(&snapshot)?;
@@ -1429,7 +1379,7 @@ fn restore_archive<'a>(
path.push(rel_path);
if is_new {
- task_log!(worker, "restore snapshot {}", backup_dir);
+ info!("restore snapshot {}", backup_dir);
match restore_snapshot_archive(worker.clone(), reader, &path) {
Err(err) => {
@@ -1438,7 +1388,7 @@ fn restore_archive<'a>(
}
Ok(false) => {
std::fs::remove_dir_all(&path)?;
- task_log!(worker, "skip incomplete snapshot {}", backup_dir);
+ info!("skip incomplete snapshot {}", backup_dir);
}
Ok(true) => {
catalog.register_snapshot(
@@ -1454,7 +1404,7 @@ fn restore_archive<'a>(
return Ok(());
}
} else {
- task_log!(worker, "skipping...");
+ info!("skipping...");
}
}
@@ -1481,11 +1431,9 @@ fn restore_archive<'a>(
let source_datastore = archive_header.store;
- task_log!(
- worker,
+ info!(
"File {}: chunk archive for datastore '{}'",
- current_file_number,
- source_datastore
+ current_file_number, source_datastore
);
let datastore = target
.as_ref()
@@ -1521,12 +1469,12 @@ fn restore_archive<'a>(
&source_datastore,
&chunks[..],
)?;
- task_log!(worker, "register {} chunks", chunks.len());
+ info!("register {} chunks", chunks.len());
catalog.commit_if_large()?;
}
return Ok(());
} else if target.is_some() {
- task_log!(worker, "skipping...");
+ info!("skipping...");
}
reader.skip_data()?; // read all data
@@ -1537,11 +1485,9 @@ fn restore_archive<'a>(
let archive_header: CatalogArchiveHeader = serde_json::from_slice(&header_data)
.map_err(|err| format_err!("unable to parse catalog archive header - {}", err))?;
- task_log!(
- worker,
+ info!(
"File {}: skip catalog '{}'",
- current_file_number,
- archive_header.uuid
+ current_file_number, archive_header.uuid
);
reader.skip_data()?; // read all data
@@ -1576,7 +1522,7 @@ fn scan_chunk_archive<'a>(
// check if this is an aborted stream without end marker
if let Ok(false) = reader.has_end_marker() {
- task_log!(worker, "missing stream end marker");
+ info!("missing stream end marker");
return Ok(None);
}
@@ -1588,7 +1534,7 @@ fn scan_chunk_archive<'a>(
worker.check_abort()?;
if verbose {
- task_log!(worker, "Found chunk: {}", hex::encode(digest));
+ info!("Found chunk: {}", hex::encode(digest));
}
chunks.push(digest);
@@ -1612,8 +1558,6 @@ fn restore_chunk_archive<'a>(
let bytes = Arc::new(std::sync::atomic::AtomicU64::new(0));
let bytes2 = bytes.clone();
- let worker2 = worker.clone();
-
let writer_pool = ParallelHandler::new(
"tape restore chunk writer",
4,
@@ -1621,7 +1565,7 @@ fn restore_chunk_archive<'a>(
let chunk_exists = datastore.cond_touch_chunk(&digest, false)?;
if !chunk_exists {
if verbose {
- task_log!(worker2, "Insert chunk: {}", hex::encode(digest));
+ info!("Insert chunk: {}", hex::encode(digest));
}
bytes2.fetch_add(chunk.raw_size(), std::sync::atomic::Ordering::SeqCst);
// println!("verify and write {}", hex::encode(&digest));
@@ -1632,7 +1576,7 @@ fn restore_chunk_archive<'a>(
datastore.insert_chunk(&chunk, &digest)?;
} else if verbose {
- task_log!(worker2, "Found existing chunk: {}", hex::encode(digest));
+ info!("Found existing chunk: {}", hex::encode(digest));
}
Ok(())
},
@@ -1654,7 +1598,7 @@ fn restore_chunk_archive<'a>(
// check if this is an aborted stream without end marker
if let Ok(false) = reader.has_end_marker() {
- task_log!(worker, "missing stream end marker");
+ info!("missing stream end marker");
return Ok(None);
}
@@ -1678,8 +1622,7 @@ fn restore_chunk_archive<'a>(
let elapsed = start_time.elapsed()?.as_secs_f64();
let bytes = bytes.load(std::sync::atomic::Ordering::SeqCst) as f64;
- task_log!(
- worker,
+ info!(
"restored {} ({:.2}/s)",
HumanByte::new_decimal(bytes),
HumanByte::new_decimal(bytes / elapsed),
@@ -1830,7 +1773,6 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
/// Try to restore media catalogs (form catalog_archives)
pub fn fast_catalog_restore(
- worker: &WorkerTask,
drive: &mut Box<dyn TapeDriver>,
media_set: &MediaSet,
uuid: &Uuid, // current media Uuid
@@ -1851,14 +1793,11 @@ pub fn fast_catalog_restore(
// limit reader scope
let mut reader = match drive.read_next_file() {
Err(BlockReadError::EndOfFile) => {
- task_log!(
- worker,
- "skip unexpected filemark at pos {current_file_number}"
- );
+ info!("skip unexpected filemark at pos {current_file_number}");
continue;
}
Err(BlockReadError::EndOfStream) => {
- task_log!(worker, "detected EOT after {current_file_number} files");
+ info!("detected EOT after {current_file_number} files");
break;
}
Err(BlockReadError::Error(err)) => {
@@ -1875,7 +1814,7 @@ pub fn fast_catalog_restore(
if header.content_magic == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0
|| header.content_magic == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_1
{
- task_log!(worker, "found catalog at pos {}", current_file_number);
+ info!("found catalog at pos {}", current_file_number);
let header_data = reader.read_exact_allocated(header.size as usize)?;
@@ -1885,11 +1824,7 @@ pub fn fast_catalog_restore(
})?;
if &archive_header.media_set_uuid != media_set.uuid() {
- task_log!(
- worker,
- "skipping unrelated catalog at pos {}",
- current_file_number
- );
+ info!("skipping unrelated catalog at pos {}", current_file_number);
reader.skip_data()?; // read all data
continue;
}
@@ -1902,8 +1837,7 @@ pub fn fast_catalog_restore(
});
if !wanted {
- task_log!(
- worker,
+ info!(
"skip catalog because media '{}' not inventarized",
catalog_uuid
);
@@ -1916,11 +1850,7 @@ pub fn fast_catalog_restore(
} else {
// only restore if catalog does not exist
if MediaCatalog::exists(TAPE_STATUS_DIR, catalog_uuid) {
- task_log!(
- worker,
- "catalog for media '{}' already exists",
- catalog_uuid
- );
+ info!("catalog for media '{}' already exists", catalog_uuid);
reader.skip_data()?; // read all data
continue;
}
@@ -1936,19 +1866,11 @@ pub fn fast_catalog_restore(
match MediaCatalog::parse_catalog_header(&mut file)? {
(true, Some(media_uuid), Some(media_set_uuid)) => {
if &media_uuid != catalog_uuid {
- task_log!(
- worker,
- "catalog uuid mismatch at pos {}",
- current_file_number
- );
+ info!("catalog uuid mismatch at pos {}", current_file_number);
continue;
}
if media_set_uuid != archive_header.media_set_uuid {
- task_log!(
- worker,
- "catalog media_set mismatch at pos {}",
- current_file_number
- );
+ info!("catalog media_set mismatch at pos {}", current_file_number);
continue;
}
@@ -1959,18 +1881,14 @@ pub fn fast_catalog_restore(
)?;
if catalog_uuid == uuid {
- task_log!(worker, "successfully restored catalog");
+ info!("successfully restored catalog");
found_catalog = true
} else {
- task_log!(
- worker,
- "successfully restored related catalog {}",
- media_uuid
- );
+ info!("successfully restored related catalog {}", media_uuid);
}
}
_ => {
- task_warn!(worker, "got incomplete catalog header - skip file");
+ warn!("got incomplete catalog header - skip file");
continue;
}
}
@@ -1984,7 +1902,7 @@ pub fn fast_catalog_restore(
}
moved_to_eom = true;
- task_log!(worker, "searching for catalog at EOT (moving to EOT)");
+ info!("searching for catalog at EOT (moving to EOT)");
drive.move_to_last_file()?;
let new_file_number = drive.current_file_number()?;
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index c972e532..dfad8824 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -5,8 +5,9 @@ use std::sync::{Arc, Mutex};
use std::time::Instant;
use anyhow::{bail, format_err, Error};
+use tracing::info;
-use proxmox_sys::{task_log, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
use pbs_api_types::{
print_ns_and_snapshot, print_store_and_ns, Authid, BackupNamespace, BackupType, CryptMode,
@@ -69,11 +70,7 @@ fn verify_blob(backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
}
}
-fn rename_corrupted_chunk(
- datastore: Arc<DataStore>,
- digest: &[u8; 32],
- worker: &dyn WorkerTaskContext,
-) {
+fn rename_corrupted_chunk(datastore: Arc<DataStore>, digest: &[u8; 32]) {
let (path, digest_str) = datastore.chunk_path(digest);
let mut counter = 0;
@@ -89,17 +86,12 @@ fn rename_corrupted_chunk(
match std::fs::rename(&path, &new_path) {
Ok(_) => {
- task_log!(worker, "corrupted chunk renamed to {:?}", &new_path);
+ info!("corrupted chunk renamed to {:?}", &new_path);
}
Err(err) => {
match err.kind() {
std::io::ErrorKind::NotFound => { /* ignored */ }
- _ => task_log!(
- worker,
- "could not rename corrupted chunk {:?} - {}",
- &path,
- err
- ),
+ _ => info!("could not rename corrupted chunk {:?} - {}", &path, err),
}
}
};
@@ -117,7 +109,6 @@ fn verify_index_chunks(
let mut read_bytes = 0;
let mut decoded_bytes = 0;
- let worker2 = Arc::clone(&verify_worker.worker);
let datastore2 = Arc::clone(&verify_worker.datastore);
let corrupt_chunks2 = Arc::clone(&verify_worker.corrupt_chunks);
let verified_chunks2 = Arc::clone(&verify_worker.verified_chunks);
@@ -130,7 +121,7 @@ fn verify_index_chunks(
let chunk_crypt_mode = match chunk.crypt_mode() {
Err(err) => {
corrupt_chunks2.lock().unwrap().insert(digest);
- task_log!(worker2, "can't verify chunk, unknown CryptMode - {}", err);
+ info!("can't verify chunk, unknown CryptMode - {}", err);
errors2.fetch_add(1, Ordering::SeqCst);
return Ok(());
}
@@ -138,20 +129,18 @@ fn verify_index_chunks(
};
if chunk_crypt_mode != crypt_mode {
- task_log!(
- worker2,
+ info!(
"chunk CryptMode {:?} does not match index CryptMode {:?}",
- chunk_crypt_mode,
- crypt_mode
+ chunk_crypt_mode, crypt_mode
);
errors2.fetch_add(1, Ordering::SeqCst);
}
if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
corrupt_chunks2.lock().unwrap().insert(digest);
- task_log!(worker2, "{}", err);
+ info!("{}", err);
errors2.fetch_add(1, Ordering::SeqCst);
- rename_corrupted_chunk(datastore2.clone(), &digest, &worker2);
+ rename_corrupted_chunk(datastore2.clone(), &digest);
} else {
verified_chunks2.lock().unwrap().insert(digest);
}
@@ -175,11 +164,7 @@ fn verify_index_chunks(
.contains(digest)
{
let digest_str = hex::encode(digest);
- task_log!(
- verify_worker.worker,
- "chunk {} was marked as corrupt",
- digest_str
- );
+ info!("chunk {} was marked as corrupt", digest_str);
errors.fetch_add(1, Ordering::SeqCst);
true
} else {
@@ -218,17 +203,9 @@ fn verify_index_chunks(
.lock()
.unwrap()
.insert(info.digest);
- task_log!(
- verify_worker.worker,
- "can't verify chunk, load failed - {}",
- err
- );
+ info!("can't verify chunk, load failed - {}", err);
errors.fetch_add(1, Ordering::SeqCst);
- rename_corrupted_chunk(
- verify_worker.datastore.clone(),
- &info.digest,
- &verify_worker.worker,
- );
+ rename_corrupted_chunk(verify_worker.datastore.clone(), &info.digest);
}
Ok(chunk) => {
let size = info.size();
@@ -251,15 +228,9 @@ fn verify_index_chunks(
let error_count = errors.load(Ordering::SeqCst);
- task_log!(
- verify_worker.worker,
+ info!(
" verified {:.2}/{:.2} MiB in {:.2} seconds, speed {:.2}/{:.2} MiB/s ({} errors)",
- read_bytes_mib,
- decoded_bytes_mib,
- elapsed,
- read_speed,
- decode_speed,
- error_count,
+ read_bytes_mib, decoded_bytes_mib, elapsed, read_speed, decode_speed, error_count,
);
if errors.load(Ordering::SeqCst) > 0 {
@@ -329,8 +300,7 @@ pub fn verify_backup_dir(
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<bool, Error> {
if !backup_dir.full_path().exists() {
- task_log!(
- verify_worker.worker,
+ info!(
"SKIPPED: verify {}:{} - snapshot does not exist (anymore).",
verify_worker.datastore.name(),
backup_dir.dir(),
@@ -348,8 +318,7 @@ pub fn verify_backup_dir(
verify_backup_dir_with_lock(verify_worker, backup_dir, upid, filter, snap_lock)
}
Err(err) => {
- task_log!(
- verify_worker.worker,
+ info!(
"SKIPPED: verify {}:{} - could not acquire snapshot lock: {}",
verify_worker.datastore.name(),
backup_dir.dir(),
@@ -371,8 +340,7 @@ pub fn verify_backup_dir_with_lock(
let manifest = match backup_dir.load_manifest() {
Ok((manifest, _)) => manifest,
Err(err) => {
- task_log!(
- verify_worker.worker,
+ info!(
"verify {}:{} - manifest load error: {}",
verify_worker.datastore.name(),
backup_dir.dir(),
@@ -384,8 +352,7 @@ pub fn verify_backup_dir_with_lock(
if let Some(filter) = filter {
if !filter(&manifest) {
- task_log!(
- verify_worker.worker,
+ info!(
"SKIPPED: verify {}:{} (recently verified)",
verify_worker.datastore.name(),
backup_dir.dir(),
@@ -394,8 +361,7 @@ pub fn verify_backup_dir_with_lock(
}
}
- task_log!(
- verify_worker.worker,
+ info!(
"verify {}:{}",
verify_worker.datastore.name(),
backup_dir.dir()
@@ -406,7 +372,7 @@ pub fn verify_backup_dir_with_lock(
let mut verify_result = VerifyState::Ok;
for info in manifest.files() {
let result = proxmox_lang::try_block!({
- task_log!(verify_worker.worker, " check {}", info.filename);
+ info!(" check {}", info.filename);
match archive_type(&info.filename)? {
ArchiveType::FixedIndex => verify_fixed_index(verify_worker, backup_dir, info),
ArchiveType::DynamicIndex => verify_dynamic_index(verify_worker, backup_dir, info),
@@ -418,8 +384,7 @@ pub fn verify_backup_dir_with_lock(
verify_worker.worker.fail_on_shutdown()?;
if let Err(err) = result {
- task_log!(
- verify_worker.worker,
+ info!(
"verify {}:{}/{} failed: {}",
verify_worker.datastore.name(),
backup_dir.dir(),
@@ -463,8 +428,7 @@ pub fn verify_backup_group(
let mut list = match group.list_backups() {
Ok(list) => list,
Err(err) => {
- task_log!(
- verify_worker.worker,
+ info!(
"verify {}, group {} - unable to list backups: {}",
print_store_and_ns(verify_worker.datastore.name(), group.backup_ns()),
group.group(),
@@ -475,8 +439,7 @@ pub fn verify_backup_group(
};
let snapshot_count = list.len();
- task_log!(
- verify_worker.worker,
+ info!(
"verify group {}:{} ({} snapshots)",
verify_worker.datastore.name(),
group.group(),
@@ -494,9 +457,8 @@ pub fn verify_backup_group(
));
}
progress.done_snapshots = pos as u64 + 1;
- task_log!(verify_worker.worker, "percentage done: {}", progress);
+ info!("percentage done: {}", progress);
}
-
Ok(errors)
}
@@ -516,16 +478,11 @@ pub fn verify_all_backups(
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
- let worker = Arc::clone(&verify_worker.worker);
- task_log!(
- worker,
- "verify datastore {}",
- verify_worker.datastore.name()
- );
+ info!("verify datastore {}", verify_worker.datastore.name());
let owner_filtered = if let Some(owner) = &owner {
- task_log!(worker, "limiting to backups owned by {}", owner);
+ info!("limiting to backups owned by {}", owner);
true
} else {
false
@@ -553,7 +510,7 @@ pub fn verify_all_backups(
}
Err(err) => {
// we don't filter by owner, but we want to log the error
- task_log!(worker, "error on iterating groups in ns '{ns}' - {err}");
+ info!("error on iterating groups in ns '{ns}' - {err}");
errors.push(err.to_string());
None
}
@@ -563,7 +520,7 @@ pub fn verify_all_backups(
})
.collect::<Vec<BackupGroup>>(),
Err(err) => {
- task_log!(worker, "unable to list backups: {}", err,);
+ info!("unable to list backups: {}", err);
return Ok(errors);
}
};
@@ -571,7 +528,7 @@ pub fn verify_all_backups(
list.sort_unstable_by(|a, b| a.group().cmp(b.group()));
let group_count = list.len();
- task_log!(worker, "found {} groups", group_count);
+ info!("found {} groups", group_count);
let mut progress = StoreProgress::new(group_count as u64);
diff --git a/src/bin/proxmox-backup-api.rs b/src/bin/proxmox-backup-api.rs
index c6c24449..993d1d98 100644
--- a/src/bin/proxmox-backup-api.rs
+++ b/src/bin/proxmox-backup-api.rs
@@ -7,6 +7,7 @@ use http::Response;
use hyper::{Body, StatusCode};
use proxmox_lang::try_block;
+use proxmox_log::init_logger;
use proxmox_router::RpcEnvironmentType;
use proxmox_sys::fs::CreateOptions;
@@ -15,6 +16,7 @@ use proxmox_rest_server::{daemon, ApiConfig, RestServer};
use proxmox_backup::auth_helpers::*;
use proxmox_backup::config;
use proxmox_backup::server::auth::check_pbs_auth;
+use tracing_subscriber::filter::LevelFilter;
fn main() {
pbs_tools::setup_libc_malloc_opts();
@@ -40,13 +42,7 @@ fn get_index() -> Pin<Box<dyn Future<Output = Response<Body>> + Send>> {
}
async fn run() -> Result<(), Error> {
- if let Err(err) = syslog::init(
- syslog::Facility::LOG_DAEMON,
- log::LevelFilter::Info,
- Some("proxmox-backup-api"),
- ) {
- bail!("unable to inititialize syslog - {}", err);
- }
+ init_logger("PBS_LOG", LevelFilter::INFO, "proxmox-backup-api")?;
config::create_configdir()?;
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index f38a02bd..cdfcf40e 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -7,6 +7,9 @@ use http::request::Parts;
use http::Response;
use hyper::header;
use hyper::{Body, StatusCode};
+use proxmox_log::init_logger;
+use tracing::{info, warn};
+use tracing_subscriber::filter::LevelFilter;
use url::form_urlencoded;
use openssl::ssl::SslAcceptor;
@@ -18,7 +21,6 @@ use proxmox_router::{RpcEnvironment, RpcEnvironmentType};
use proxmox_sys::fs::{CreateOptions, FileSystemInformation};
use proxmox_sys::linux::procfs::{Loadavg, ProcFsMemInfo, ProcFsNetDev, ProcFsStat};
use proxmox_sys::logrotate::LogRotate;
-use proxmox_sys::{task_log, task_warn};
use pbs_datastore::DataStore;
@@ -181,21 +183,7 @@ async fn get_index_future(env: RestEnvironment, parts: Parts) -> Response<Body>
}
async fn run() -> Result<(), Error> {
- // Note: To debug early connection error use
- // PROXMOX_DEBUG=1 ./target/release/proxmox-backup-proxy
- let debug = std::env::var("PROXMOX_DEBUG").is_ok();
-
- if let Err(err) = syslog::init(
- syslog::Facility::LOG_DAEMON,
- if debug {
- log::LevelFilter::Debug
- } else {
- log::LevelFilter::Info
- },
- Some("proxmox-backup-proxy"),
- ) {
- bail!("unable to inititialize syslog - {err}");
- }
+ init_logger("PBS_LOG", LevelFilter::INFO, "proxmox-backup-proxy")?;
proxmox_backup::auth_helpers::setup_auth_context(false);
@@ -289,7 +277,7 @@ async fn run() -> Result<(), Error> {
})?;
let connections = proxmox_rest_server::connection::AcceptBuilder::with_acceptor(acceptor)
- .debug(debug)
+ .debug(tracing::enabled!(tracing::Level::DEBUG))
.rate_limiter_lookup(Arc::new(lookup_rate_limiter))
.tcp_keepalive_time(PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
let server = daemon::create_daemon(
@@ -714,7 +702,7 @@ async fn schedule_task_log_rotate() {
false,
move |worker| {
job.start(&worker.upid().to_string())?;
- task_log!(worker, "starting task log rotation");
+ info!("starting task log rotation");
let result = try_block!({
let max_size = 512 * 1024 - 1; // an entry has ~ 100b, so > 5000 entries/file
@@ -739,9 +727,9 @@ async fn schedule_task_log_rotate() {
)?;
if has_rotated {
- task_log!(worker, "task log archive was rotated");
+ info!("task log archive was rotated");
} else {
- task_log!(worker, "task log archive was not rotated");
+ info!("task log archive was not rotated");
}
let max_size = 32 * 1024 * 1024 - 1;
@@ -757,9 +745,9 @@ async fn schedule_task_log_rotate() {
if logrotate.rotate(max_size)? {
println!("rotated access log, telling daemons to re-open log file");
proxmox_async::runtime::block_on(command_reopen_access_logfiles())?;
- task_log!(worker, "API access log was rotated");
+ info!("API access log was rotated");
} else {
- task_log!(worker, "API access log was not rotated");
+ info!("API access log was not rotated");
}
let mut logrotate = LogRotate::new(
@@ -772,15 +760,15 @@ async fn schedule_task_log_rotate() {
if logrotate.rotate(max_size)? {
println!("rotated auth log, telling daemons to re-open log file");
proxmox_async::runtime::block_on(command_reopen_auth_logfiles())?;
- task_log!(worker, "API authentication log was rotated");
+ info!("API authentication log was rotated");
} else {
- task_log!(worker, "API authentication log was not rotated");
+ info!("API authentication log was not rotated");
}
if has_rotated {
- task_log!(worker, "cleaning up old task logs");
- if let Err(err) = cleanup_old_tasks(&worker, true) {
- task_warn!(worker, "could not completely cleanup old tasks: {err}");
+ info!("cleaning up old task logs");
+ if let Err(err) = cleanup_old_tasks(true) {
+ warn!("could not completely cleanup old tasks: {err}");
}
}
diff --git a/src/server/gc_job.rs b/src/server/gc_job.rs
index 41375d72..da3b002c 100644
--- a/src/server/gc_job.rs
+++ b/src/server/gc_job.rs
@@ -1,7 +1,7 @@
use anyhow::Error;
use std::sync::Arc;
-use proxmox_sys::task_log;
+use tracing::info;
use pbs_api_types::Authid;
use pbs_datastore::DataStore;
@@ -30,9 +30,9 @@ pub fn do_garbage_collection_job(
move |worker| {
job.start(&worker.upid().to_string())?;
- task_log!(worker, "starting garbage collection on store {store}");
+ info!("starting garbage collection on store {store}");
if let Some(event_str) = schedule {
- task_log!(worker, "task triggered by schedule '{event_str}'");
+ info!("task triggered by schedule '{event_str}'");
}
let result = datastore.garbage_collection(&*worker, worker.upid());
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index 2de34973..83bcd7c3 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -1,8 +1,7 @@
use std::sync::Arc;
use anyhow::Error;
-
-use proxmox_sys::{task_log, task_warn};
+use tracing::{info, warn};
use pbs_api_types::{
print_store_and_ns, Authid, KeepOptions, Operation, PruneJobOptions, MAX_NAMESPACE_DEPTH,
@@ -16,7 +15,6 @@ use crate::backup::ListAccessibleBackupGroups;
use crate::server::jobstate::Job;
pub fn prune_datastore(
- worker: Arc<WorkerTask>,
auth_id: Authid,
prune_options: PruneJobOptions,
datastore: Arc<DataStore>,
@@ -31,19 +29,19 @@ pub fn prune_datastore(
};
let ns = prune_options.ns.clone().unwrap_or_default();
let store_ns = print_store_and_ns(store, &ns);
- task_log!(worker, "Starting datastore prune on {store_ns}, {depth}");
+ info!("Starting datastore prune on {store_ns}, {depth}");
if dry_run {
- task_log!(worker, "(dry test run)");
+ info!("(dry test run)");
}
let keep_all = !prune_options.keeps_something();
if keep_all {
- task_log!(worker, "No prune selection - keeping all files.");
+ info!("No prune selection - keeping all files.");
} else {
let rendered_options = cli_prune_options_string(&prune_options);
- task_log!(worker, "retention options: {rendered_options}");
+ info!("retention options: {rendered_options}");
}
for group in ListAccessibleBackupGroups::new_with_privs(
@@ -61,8 +59,7 @@ pub fn prune_datastore(
let mut prune_info = compute_prune_info(list, &prune_options.keep)?;
prune_info.reverse(); // delete older snapshots first
- task_log!(
- worker,
+ info!(
"Pruning group {ns}:\"{}/{}\"",
group.backup_type(),
group.backup_id()
@@ -70,8 +67,7 @@ pub fn prune_datastore(
for (info, mark) in prune_info {
let keep = keep_all || mark.keep();
- task_log!(
- worker,
+ info!(
"{}{} {}/{}/{}",
if dry_run { "would " } else { "" },
mark,
@@ -82,7 +78,7 @@ pub fn prune_datastore(
if !keep && !dry_run {
if let Err(err) = datastore.remove_backup_dir(ns, info.backup_dir.as_ref(), false) {
let path = info.backup_dir.relative_path();
- task_warn!(worker, "failed to remove dir {path:?}: {err}");
+ warn!("failed to remove dir {path:?}: {err}");
}
}
}
@@ -150,13 +146,13 @@ pub fn do_prune_job(
move |worker| {
job.start(&worker.upid().to_string())?;
- task_log!(worker, "prune job '{}'", job.jobname());
+ info!("prune job '{}'", job.jobname());
if let Some(event_str) = schedule {
- task_log!(worker, "task triggered by schedule '{event_str}'");
+ info!("task triggered by schedule '{event_str}'");
}
- let result = prune_datastore(worker.clone(), auth_id, prune_options, datastore, false);
+ let result = prune_datastore(auth_id, prune_options, datastore, false);
let status = worker.create_state(&result);
diff --git a/src/server/pull.rs b/src/server/pull.rs
index a973a10e..7fd6a803 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -10,9 +10,9 @@ use anyhow::{bail, format_err, Error};
use http::StatusCode;
use pbs_config::CachedUserInfo;
use serde_json::json;
+use tracing::info;
use proxmox_router::HttpError;
-use proxmox_sys::task_log;
use pbs_api_types::{
print_store_and_ns, Authid, BackupNamespace, GroupFilter, GroupListItem, NamespaceListItem,
@@ -32,7 +32,6 @@ use pbs_datastore::manifest::{
};
use pbs_datastore::{check_backup_owner, DataStore, StoreProgress};
use pbs_tools::sha::sha256;
-use proxmox_rest_server::WorkerTask;
use crate::backup::{check_ns_modification_privs, check_ns_privs};
use crate::tools::parallel_handler::ParallelHandler;
@@ -123,7 +122,6 @@ impl PullParameters {
}
async fn pull_index_chunks<I: IndexFile>(
- worker: &WorkerTask,
chunk_reader: RemoteChunkReader,
target: Arc<DataStore>,
index: I,
@@ -176,10 +174,10 @@ async fn pull_index_chunks<I: IndexFile>(
target.cond_touch_chunk(&info.digest, false)
})?;
if chunk_exists {
- //task_log!(worker, "chunk {} exists {}", pos, hex::encode(digest));
+ //info!("chunk {} exists {}", pos, hex::encode(digest));
return Ok::<_, Error>(());
}
- //task_log!(worker, "sync {} chunk {}", pos, hex::encode(digest));
+ //info!("sync {} chunk {}", pos, hex::encode(digest));
let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
let raw_size = chunk.raw_size() as usize;
@@ -205,8 +203,7 @@ async fn pull_index_chunks<I: IndexFile>(
let bytes = bytes.load(Ordering::SeqCst);
- task_log!(
- worker,
+ info!(
"downloaded {} bytes ({:.2} MiB/s)",
bytes,
(bytes as f64) / (1024.0 * 1024.0 * elapsed)
@@ -261,7 +258,6 @@ fn verify_archive(info: &FileInfo, csum: &[u8; 32], size: u64) -> Result<(), Err
/// - if archive is an index, pull referenced chunks
/// - Rename tmp file into real path
async fn pull_single_archive(
- worker: &WorkerTask,
reader: &BackupReader,
chunk_reader: &mut RemoteChunkReader,
snapshot: &pbs_datastore::BackupDir,
@@ -275,7 +271,7 @@ async fn pull_single_archive(
let mut tmp_path = path.clone();
tmp_path.set_extension("tmp");
- task_log!(worker, "sync archive {}", archive_name);
+ info!("sync archive {}", archive_name);
let mut tmpfile = std::fs::OpenOptions::new()
.write(true)
@@ -294,7 +290,6 @@ async fn pull_single_archive(
verify_archive(archive_info, &csum, size)?;
pull_index_chunks(
- worker,
chunk_reader.clone(),
snapshot.datastore().clone(),
index,
@@ -310,7 +305,6 @@ async fn pull_single_archive(
verify_archive(archive_info, &csum, size)?;
pull_index_chunks(
- worker,
chunk_reader.clone(),
snapshot.datastore().clone(),
index,
@@ -333,7 +327,6 @@ async fn pull_single_archive(
// Note: The client.log.blob is uploaded after the backup, so it is
// not mentioned in the manifest.
async fn try_client_log_download(
- worker: &WorkerTask,
reader: Arc<BackupReader>,
path: &std::path::Path,
) -> Result<(), Error> {
@@ -351,7 +344,7 @@ async fn try_client_log_download(
if let Err(err) = std::fs::rename(&tmp_path, path) {
bail!("Atomic rename file {:?} failed - {}", path, err);
}
- task_log!(worker, "got backup log file {:?}", CLIENT_LOG_BLOB_NAME);
+ info!("got backup log file {:?}", CLIENT_LOG_BLOB_NAME);
}
Ok(())
@@ -367,7 +360,6 @@ async fn try_client_log_download(
/// -- if not, pull it from the remote
/// - Download log if not already existing
async fn pull_snapshot(
- worker: &WorkerTask,
reader: Arc<BackupReader>,
snapshot: &pbs_datastore::BackupDir,
downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
@@ -388,8 +380,7 @@ async fn pull_snapshot(
match err.downcast_ref::<HttpError>() {
Some(HttpError { code, message }) => match *code {
StatusCode::NOT_FOUND => {
- task_log!(
- worker,
+ info!(
"skipping snapshot {} - vanished since start of sync",
snapshot.dir(),
);
@@ -422,9 +413,9 @@ async fn pull_snapshot(
if manifest_blob.raw_data() == tmp_manifest_blob.raw_data() {
if !client_log_name.exists() {
- try_client_log_download(worker, reader, &client_log_name).await?;
+ try_client_log_download(reader, &client_log_name).await?;
}
- task_log!(worker, "no data changes");
+ info!("no data changes");
let _ = std::fs::remove_file(&tmp_manifest_name);
return Ok(()); // nothing changed
}
@@ -444,7 +435,7 @@ async fn pull_snapshot(
match manifest.verify_file(&item.filename, &csum, size) {
Ok(_) => continue,
Err(err) => {
- task_log!(worker, "detected changed file {:?} - {}", path, err);
+ info!("detected changed file {:?} - {}", path, err);
}
}
}
@@ -454,7 +445,7 @@ async fn pull_snapshot(
match manifest.verify_file(&item.filename, &csum, size) {
Ok(_) => continue,
Err(err) => {
- task_log!(worker, "detected changed file {:?} - {}", path, err);
+ info!("detected changed file {:?} - {}", path, err);
}
}
}
@@ -464,7 +455,7 @@ async fn pull_snapshot(
match manifest.verify_file(&item.filename, &csum, size) {
Ok(_) => continue,
Err(err) => {
- task_log!(worker, "detected changed file {:?} - {}", path, err);
+ info!("detected changed file {:?} - {}", path, err);
}
}
}
@@ -479,7 +470,6 @@ async fn pull_snapshot(
);
pull_single_archive(
- worker,
&reader,
&mut chunk_reader,
snapshot,
@@ -494,7 +484,7 @@ async fn pull_snapshot(
}
if !client_log_name.exists() {
- try_client_log_download(worker, reader, &client_log_name).await?;
+ try_client_log_download(reader, &client_log_name).await?;
}
snapshot
@@ -509,7 +499,6 @@ async fn pull_snapshot(
/// The `reader` is configured to read from the remote / source namespace, while the `snapshot` is
/// pointing to the local datastore and target namespace.
async fn pull_snapshot_from(
- worker: &WorkerTask,
reader: Arc<BackupReader>,
snapshot: &pbs_datastore::BackupDir,
downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
@@ -519,22 +508,22 @@ async fn pull_snapshot_from(
.create_locked_backup_dir(snapshot.backup_ns(), snapshot.as_ref())?;
if is_new {
- task_log!(worker, "sync snapshot {}", snapshot.dir());
+ info!("sync snapshot {}", snapshot.dir());
- if let Err(err) = pull_snapshot(worker, reader, snapshot, downloaded_chunks).await {
+ if let Err(err) = pull_snapshot(reader, snapshot, downloaded_chunks).await {
if let Err(cleanup_err) = snapshot.datastore().remove_backup_dir(
snapshot.backup_ns(),
snapshot.as_ref(),
true,
) {
- task_log!(worker, "cleanup error - {}", cleanup_err);
+ info!("cleanup error - {}", cleanup_err);
}
return Err(err);
}
- task_log!(worker, "sync snapshot {} done", snapshot.dir());
+ info!("sync snapshot {} done", snapshot.dir());
} else {
- task_log!(worker, "re-sync snapshot {}", snapshot.dir());
- pull_snapshot(worker, reader, snapshot, downloaded_chunks).await?;
+ info!("re-sync snapshot {}", snapshot.dir());
+ pull_snapshot(reader, snapshot, downloaded_chunks).await?;
}
Ok(())
@@ -638,14 +627,13 @@ impl std::fmt::Display for SkipInfo {
/// - remote snapshot access is checked by remote (twice: query and opening the backup reader)
/// - local group owner is already checked by pull_store
async fn pull_group(
- worker: &WorkerTask,
client: &HttpClient,
params: &PullParameters,
group: &pbs_api_types::BackupGroup,
remote_ns: BackupNamespace,
progress: &mut StoreProgress,
) -> Result<(), Error> {
- task_log!(worker, "sync group {}", group);
+ info!("sync group {}", group);
let path = format!(
"api2/json/admin/datastore/{}/snapshots",
@@ -697,11 +685,7 @@ async fn pull_group(
// in-progress backups can't be synced
if item.size.is_none() {
- task_log!(
- worker,
- "skipping snapshot {} - in-progress backup",
- snapshot
- );
+ info!("skipping snapshot {} - in-progress backup", snapshot);
continue;
}
@@ -711,7 +695,7 @@ async fn pull_group(
already_synced_skip_info.update(snapshot.time);
continue;
} else if already_synced_skip_info.count > 0 {
- task_log!(worker, "{}", already_synced_skip_info);
+ info!("{}", already_synced_skip_info);
already_synced_skip_info.reset();
}
@@ -719,7 +703,7 @@ async fn pull_group(
transfer_last_skip_info.update(snapshot.time);
continue;
} else if transfer_last_skip_info.count > 0 {
- task_log!(worker, "{}", transfer_last_skip_info);
+ info!("{}", transfer_last_skip_info);
transfer_last_skip_info.reset();
}
@@ -749,10 +733,10 @@ async fn pull_group(
let snapshot = params.store.backup_dir(target_ns.clone(), snapshot)?;
- let result = pull_snapshot_from(worker, reader, &snapshot, downloaded_chunks.clone()).await;
+ let result = pull_snapshot_from(reader, &snapshot, downloaded_chunks.clone()).await;
progress.done_snapshots = pos as u64 + 1;
- task_log!(worker, "percentage done: {}", progress);
+ info!("percentage done: {}", progress);
result?; // stop on error
}
@@ -766,14 +750,13 @@ async fn pull_group(
continue;
}
if snapshot.is_protected() {
- task_log!(
- worker,
+ info!(
"don't delete vanished snapshot {} (protected)",
snapshot.dir()
);
continue;
}
- task_log!(worker, "delete vanished snapshot {}", snapshot.dir());
+ info!("delete vanished snapshot {}", snapshot.dir());
params
.store
.remove_backup_dir(&target_ns, snapshot.as_ref(), false)?;
@@ -785,7 +768,6 @@ async fn pull_group(
// will modify params if switching to backwards mode for lack of NS support on remote end
async fn query_namespaces(
- worker: &WorkerTask,
client: &HttpClient,
params: &mut PullParameters,
) -> Result<Vec<BackupNamespace>, Error> {
@@ -808,8 +790,8 @@ async fn query_namespaces(
Some(HttpError { code, message }) => match *code {
StatusCode::NOT_FOUND => {
if params.remote_ns.is_root() && params.max_depth.is_none() {
- task_log!(worker, "Could not query remote for namespaces (404) -> temporarily switching to backwards-compat mode");
- task_log!(worker, "Either make backwards-compat mode explicit (max-depth == 0) or upgrade remote system.");
+ info!("Could not query remote for namespaces (404) -> temporarily switching to backwards-compat mode");
+ info!("Either make backwards-compat mode explicit (max-depth == 0) or upgrade remote system.");
params.max_depth = Some(0);
} else {
bail!("Remote namespace set/recursive sync requested, but remote does not support namespaces.")
@@ -875,7 +857,6 @@ fn check_and_remove_ns(params: &PullParameters, local_ns: &BackupNamespace) -> R
}
fn check_and_remove_vanished_ns(
- worker: &WorkerTask,
params: &PullParameters,
synced_ns: HashSet<BackupNamespace>,
) -> Result<bool, Error> {
@@ -913,14 +894,13 @@ fn check_and_remove_vanished_ns(
continue;
}
match check_and_remove_ns(params, &local_ns) {
- Ok(true) => task_log!(worker, "Removed namespace {}", local_ns),
- Ok(false) => task_log!(
- worker,
+ Ok(true) => info!("Removed namespace {}", local_ns),
+ Ok(false) => info!(
"Did not remove namespace {} - protected snapshots remain",
local_ns
),
Err(err) => {
- task_log!(worker, "Failed to remove namespace {} - {}", local_ns, err);
+ info!("Failed to remove namespace {} - {}", local_ns, err);
errors = true;
}
}
@@ -947,7 +927,6 @@ fn check_and_remove_vanished_ns(
/// - creation and removal of sub-NS checked here
/// - access to sub-NS checked here
pub(crate) async fn pull_store(
- worker: &WorkerTask,
client: &HttpClient,
mut params: PullParameters,
) -> Result<(), Error> {
@@ -959,7 +938,7 @@ pub(crate) async fn pull_store(
let namespaces = if params.remote_ns.is_root() && params.max_depth == Some(0) {
vec![params.remote_ns.clone()] // backwards compat - don't query remote namespaces!
} else {
- query_namespaces(worker, client, &mut params).await?
+ query_namespaces(client, &mut params).await?
};
errors |= old_max_depth != params.max_depth; // fail job if we switched to backwards-compat mode
@@ -972,62 +951,52 @@ pub(crate) async fn pull_store(
let target_ns = namespace.map_prefix(¶ms.remote_ns, ¶ms.ns)?;
let target_store_ns_str = print_store_and_ns(params.store.name(), &target_ns);
- task_log!(worker, "----");
- task_log!(
- worker,
+ info!("----");
+ info!(
"Syncing {} into {}",
- source_store_ns_str,
- target_store_ns_str
+ source_store_ns_str, target_store_ns_str
);
synced_ns.insert(target_ns.clone());
match check_and_create_ns(¶ms, &target_ns) {
- Ok(true) => task_log!(worker, "Created namespace {}", target_ns),
+ Ok(true) => info!("Created namespace {}", target_ns),
Ok(false) => {}
Err(err) => {
- task_log!(
- worker,
+ info!(
"Cannot sync {} into {} - {}",
- source_store_ns_str,
- target_store_ns_str,
- err,
+ source_store_ns_str, target_store_ns_str, err,
);
errors = true;
continue;
}
}
- match pull_ns(worker, client, ¶ms, namespace.clone(), target_ns).await {
+ match pull_ns(client, ¶ms, namespace.clone(), target_ns).await {
Ok((ns_progress, ns_errors)) => {
errors |= ns_errors;
if params.max_depth != Some(0) {
groups += ns_progress.done_groups;
snapshots += ns_progress.done_snapshots;
- task_log!(
- worker,
+ info!(
"Finished syncing namespace {}, current progress: {} groups, {} snapshots",
- namespace,
- groups,
- snapshots,
+ namespace, groups, snapshots,
);
}
}
Err(err) => {
errors = true;
- task_log!(
- worker,
+ info!(
"Encountered errors while syncing namespace {} - {}",
- namespace,
- err,
+ namespace, err,
);
}
};
}
if params.remove_vanished {
- errors |= check_and_remove_vanished_ns(worker, ¶ms, synced_ns)?;
+ errors |= check_and_remove_vanished_ns(¶ms, synced_ns)?;
}
if errors {
@@ -1050,7 +1019,6 @@ pub(crate) async fn pull_store(
/// - remote namespaces are filtered by remote
/// - owner check for vanished groups done here
pub(crate) async fn pull_ns(
- worker: &WorkerTask,
client: &HttpClient,
params: &PullParameters,
source_ns: BackupNamespace,
@@ -1096,15 +1064,14 @@ pub(crate) async fn pull_ns(
.into_iter()
.filter(|group| apply_filters(group, group_filter))
.collect();
- task_log!(
- worker,
+ info!(
"found {} groups to sync (out of {} total)",
list.len(),
unfiltered_count
);
list
} else {
- task_log!(worker, "found {} groups to sync", total_count);
+ info!("found {} groups to sync", total_count);
list
};
@@ -1129,12 +1096,7 @@ pub(crate) async fn pull_ns(
{
Ok(result) => result,
Err(err) => {
- task_log!(
- worker,
- "sync group {} failed - group lock failed: {}",
- &group,
- err
- );
+ info!("sync group {} failed - group lock failed: {}", &group, err);
errors = true; // do not stop here, instead continue
continue;
}
@@ -1143,25 +1105,15 @@ pub(crate) async fn pull_ns(
// permission check
if params.owner != owner {
// only the owner is allowed to create additional snapshots
- task_log!(
- worker,
+ info!(
"sync group {} failed - owner check failed ({} != {})",
- &group,
- params.owner,
- owner
+ &group, params.owner, owner
);
errors = true; // do not stop here, instead continue
- } else if let Err(err) = pull_group(
- worker,
- client,
- params,
- &group,
- source_ns.clone(),
- &mut progress,
- )
- .await
+ } else if let Err(err) =
+ pull_group(client, params, &group, source_ns.clone(), &mut progress).await
{
- task_log!(worker, "sync group {} failed - {}", &group, err,);
+ info!("sync group {} failed - {}", &group, err,);
errors = true; // do not stop here, instead continue
}
}
@@ -1183,18 +1135,14 @@ pub(crate) async fn pull_ns(
continue;
}
}
- task_log!(worker, "delete vanished group '{local_group}'",);
+ info!("delete vanished group '{local_group}'",);
match params.store.remove_backup_group(&target_ns, local_group) {
Ok(true) => {}
Ok(false) => {
- task_log!(
- worker,
- "kept some protected snapshots of group '{}'",
- local_group
- );
+ info!("kept some protected snapshots of group '{}'", local_group);
}
Err(err) => {
- task_log!(worker, "{}", err);
+ info!("{}", err);
errors = true;
}
}
@@ -1202,7 +1150,7 @@ pub(crate) async fn pull_ns(
Ok(())
});
if let Err(err) = result {
- task_log!(worker, "error during cleanup: {}", err);
+ info!("error during cleanup: {}", err);
errors = true;
};
}
diff --git a/src/server/realm_sync_job.rs b/src/server/realm_sync_job.rs
index 1f92e843..b7b9c2c1 100644
--- a/src/server/realm_sync_job.rs
+++ b/src/server/realm_sync_job.rs
@@ -1,13 +1,14 @@
use anyhow::{bail, format_err, Context, Error};
+use tracing::{info, warn};
+
use pbs_config::{acl::AclTree, token_shadow, BackupLockGuard};
use proxmox_lang::try_block;
use proxmox_ldap::{Config, Connection, SearchParameters, SearchResult};
use proxmox_rest_server::WorkerTask;
use proxmox_schema::{ApiType, Schema};
use proxmox_section_config::SectionConfigData;
-use proxmox_sys::{task_log, task_warn};
-use std::{collections::HashSet, sync::Arc};
+use std::collections::HashSet;
use pbs_api_types::{
ApiToken, Authid, LdapRealmConfig, Realm, RemoveVanished, SyncAttributes as LdapSyncAttributes,
@@ -38,7 +39,7 @@ pub fn do_realm_sync_job(
move |worker| {
job.start(&worker.upid().to_string()).unwrap();
- task_log!(worker, "starting realm sync for {}", realm.as_str());
+ info!("starting realm sync for {}", realm.as_str());
let override_settings = GeneralSyncSettingsOverride {
remove_vanished,
@@ -46,7 +47,7 @@ pub fn do_realm_sync_job(
};
async move {
- let sync_job = LdapRealmSyncJob::new(worker, realm, &override_settings, dry_run)?;
+ let sync_job = LdapRealmSyncJob::new(realm, &override_settings, dry_run)?;
sync_job.sync().await
}
},
@@ -57,7 +58,6 @@ pub fn do_realm_sync_job(
/// Implemenation for syncing LDAP realms
struct LdapRealmSyncJob {
- worker: Arc<WorkerTask>,
realm: Realm,
general_sync_settings: GeneralSyncSettings,
ldap_sync_settings: LdapSyncSettings,
@@ -68,7 +68,6 @@ struct LdapRealmSyncJob {
impl LdapRealmSyncJob {
/// Create new LdapRealmSyncJob
fn new(
- worker: Arc<WorkerTask>,
realm: Realm,
override_settings: &GeneralSyncSettingsOverride,
dry_run: bool,
@@ -88,7 +87,6 @@ impl LdapRealmSyncJob {
let ldap_config = auth::LdapAuthenticator::api_type_to_config(&config)?;
Ok(Self {
- worker,
realm,
general_sync_settings: sync_settings,
ldap_sync_settings: sync_attributes,
@@ -100,10 +98,7 @@ impl LdapRealmSyncJob {
/// Perform realm synchronization
async fn sync(&self) -> Result<(), Error> {
if self.dry_run {
- task_log!(
- self.worker,
- "this is a DRY RUN - changes will not be persisted"
- );
+ info!("this is a DRY RUN - changes will not be persisted");
}
let ldap = Connection::new(self.ldap_config.clone());
@@ -185,7 +180,7 @@ impl LdapRealmSyncJob {
anyhow::Ok(())
});
if let Err(e) = result {
- task_log!(self.worker, "could not create/update user: {e}");
+ info!("could not create/update user: {e}");
}
}
@@ -204,18 +199,10 @@ impl LdapRealmSyncJob {
if let Some(existing_user) = existing_user {
if existing_user != new_or_updated_user {
- task_log!(
- self.worker,
- "updating user {}",
- new_or_updated_user.userid.as_str()
- );
+ info!("updating user {}", new_or_updated_user.userid.as_str());
}
} else {
- task_log!(
- self.worker,
- "creating user {}",
- new_or_updated_user.userid.as_str()
- );
+ info!("creating user {}", new_or_updated_user.userid.as_str());
}
user_config.set_data(
@@ -237,10 +224,7 @@ impl LdapRealmSyncJob {
let schema = schema.unwrap_string_schema();
if let Err(e) = schema.check_constraints(value) {
- task_warn!(
- self.worker,
- "{userid}: ignoring attribute `{attribute}`: {e}"
- );
+ warn!("{userid}: ignoring attribute `{attribute}`: {e}");
None
} else {
@@ -319,7 +303,7 @@ impl LdapRealmSyncJob {
to_delete: &[Userid],
) -> Result<(), Error> {
for userid in to_delete {
- task_log!(self.worker, "deleting user {}", userid.as_str());
+ info!("deleting user {}", userid.as_str());
// Delete the user
user_config.sections.remove(userid.as_str());
@@ -346,7 +330,7 @@ impl LdapRealmSyncJob {
if !self.dry_run {
if let Err(e) = token_shadow::delete_secret(&tokenid) {
- task_warn!(self.worker, "could not delete token for user {userid}: {e}",)
+ warn!("could not delete token for user {userid}: {e}",)
}
}
diff --git a/src/server/verify_job.rs b/src/server/verify_job.rs
index 8bf2a0c9..94fd0279 100644
--- a/src/server/verify_job.rs
+++ b/src/server/verify_job.rs
@@ -1,9 +1,9 @@
use anyhow::{format_err, Error};
+use tracing::info;
use pbs_api_types::{Authid, Operation, VerificationJobConfig};
use pbs_datastore::DataStore;
use proxmox_rest_server::WorkerTask;
-use proxmox_sys::task_log;
use crate::{
backup::{verify_all_backups, verify_filter},
@@ -36,9 +36,9 @@ pub fn do_verification_job(
move |worker| {
job.start(&worker.upid().to_string())?;
- task_log!(worker, "Starting datastore verify job '{}'", job_id);
+ info!("Starting datastore verify job '{}'", job_id);
if let Some(event_str) = schedule {
- task_log!(worker, "task triggered by schedule '{}'", event_str);
+ info!("task triggered by schedule '{}'", event_str);
}
let ns = match verification_job.ns {
@@ -60,9 +60,9 @@ pub fn do_verification_job(
let job_result = match result {
Ok(ref failed_dirs) if failed_dirs.is_empty() => Ok(()),
Ok(ref failed_dirs) => {
- task_log!(worker, "Failed to verify the following snapshots/groups:");
+ info!("Failed to verify the following snapshots/groups:");
for dir in failed_dirs {
- task_log!(worker, "\t{}", dir);
+ info!("\t{}", dir);
}
Err(format_err!(
diff --git a/src/tape/drive/mod.rs b/src/tape/drive/mod.rs
index 86f01201..0b5e3f41 100644
--- a/src/tape/drive/mod.rs
+++ b/src/tape/drive/mod.rs
@@ -12,6 +12,7 @@ use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
use serde::Deserialize;
use serde_json::Value;
+use tracing::info;
use proxmox_sys::fs::{
atomic_open_or_create_file, file_read_optional_string, lock_file, replace_file, CreateOptions,
@@ -19,7 +20,7 @@ use proxmox_sys::fs::{
use proxmox_io::ReadExt;
use proxmox_section_config::SectionConfigData;
-use proxmox_sys::{task_log, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
use proxmox_uuid::Uuid;
use pbs_api_types::{Fingerprint, LtoTapeDrive, VirtualTapeDrive};
@@ -343,11 +344,9 @@ pub fn request_and_load_media(
) -> Result<(Box<dyn TapeDriver>, MediaId), Error> {
let check_label = |handle: &mut dyn TapeDriver, uuid: &proxmox_uuid::Uuid| {
if let Ok((Some(media_id), _)) = handle.read_label() {
- task_log!(
- worker,
+ info!(
"found media label {} ({})",
- media_id.label.label_text,
- media_id.label.uuid,
+ media_id.label.label_text, media_id.label.uuid,
);
if media_id.label.uuid == *uuid {
@@ -385,19 +384,16 @@ pub fn request_and_load_media(
let update_and_log_request_error =
|old: &mut TapeRequestError, new: TapeRequestError| -> Result<(), Error> {
if new != *old {
- task_log!(worker, "{}", new);
+ info!("{}", new);
let (device_type, device) = if let Some(changer) = changer {
("changer", changer.as_str())
} else {
("drive", drive)
};
- task_log!(
- worker,
+ info!(
"Please insert media '{}' into {} '{}'",
- label_text,
- device_type,
- device
+ label_text, device_type, device
);
if let Some(to) = notify_email {
send_load_media_email(
@@ -423,18 +419,11 @@ pub fn request_and_load_media(
std::thread::sleep(std::time::Duration::from_millis(100));
}
} else if drive_config.changer.is_none() {
- task_log!(
- worker,
- "Checking for media '{}' in drive '{}'",
- label_text,
- drive
- );
+ info!("Checking for media '{}' in drive '{}'", label_text, drive);
} else {
- task_log!(
- worker,
+ info!(
"trying to load media '{}' into drive '{}'",
- label_text,
- drive
+ label_text, drive
);
}
@@ -462,8 +451,7 @@ pub fn request_and_load_media(
let request_error = match handle.read_label() {
Ok((Some(media_id), _)) if media_id.label.uuid == label.uuid => {
- task_log!(
- worker,
+ info!(
"found media label {} ({})",
media_id.label.label_text,
media_id.label.uuid.to_string(),
diff --git a/src/tape/pool_writer/mod.rs b/src/tape/pool_writer/mod.rs
index d1add203..1ae5218b 100644
--- a/src/tape/pool_writer/mod.rs
+++ b/src/tape/pool_writer/mod.rs
@@ -11,8 +11,8 @@ use std::sync::{Arc, Mutex};
use std::time::SystemTime;
use anyhow::{bail, Error};
+use tracing::{info, warn};
-use proxmox_sys::{task_log, task_warn};
use proxmox_uuid::Uuid;
use pbs_datastore::{DataStore, SnapshotReader};
@@ -57,7 +57,6 @@ impl PoolWriter {
pub fn new(
mut pool: MediaPool,
drive_name: &str,
- worker: &WorkerTask,
notify_email: Option<String>,
force_media_set: bool,
ns_magic: bool,
@@ -66,11 +65,11 @@ impl PoolWriter {
let new_media_set_reason = pool.start_write_session(current_time, force_media_set)?;
if let Some(reason) = new_media_set_reason {
- task_log!(worker, "starting new media set - reason: {}", reason,);
+ info!("starting new media set - reason: {}", reason,);
}
let media_set_uuid = pool.current_media_set().uuid();
- task_log!(worker, "media set uuid: {}", media_set_uuid);
+ info!("media set uuid: {}", media_set_uuid);
let mut catalog_set = CatalogSet::new();
@@ -125,7 +124,7 @@ impl PoolWriter {
}
/// Eject media and drop PoolWriterState (close drive)
- pub fn eject_media(&mut self, worker: &WorkerTask) -> Result<(), Error> {
+ pub fn eject_media(&mut self) -> Result<(), Error> {
let mut status = match self.status.take() {
Some(status) => status,
None => return Ok(()), // no media loaded
@@ -134,13 +133,13 @@ impl PoolWriter {
let (drive_config, _digest) = pbs_config::drive::config()?;
if let Some((mut changer, _)) = media_changer(&drive_config, &self.drive_name)? {
- task_log!(worker, "eject media");
+ info!("eject media");
status.drive.eject_media()?; // rewind and eject early, so that unload_media is faster
drop(status); // close drive
- task_log!(worker, "unload media");
+ info!("unload media");
changer.unload_media(None)?; //eject and unload
} else {
- task_log!(worker, "standalone drive - ejecting media");
+ info!("standalone drive - ejecting media");
status.drive.eject_media()?;
}
@@ -148,14 +147,14 @@ impl PoolWriter {
}
/// Export current media set and drop PoolWriterState (close drive)
- pub fn export_media_set(&mut self, worker: &WorkerTask) -> Result<(), Error> {
+ pub fn export_media_set(&mut self) -> Result<(), Error> {
let mut status = self.status.take();
let (drive_config, _digest) = pbs_config::drive::config()?;
if let Some((mut changer, _)) = media_changer(&drive_config, &self.drive_name)? {
if let Some(ref mut status) = status {
- task_log!(worker, "rewind media");
+ info!("rewind media");
// rewind first so that the unload command later does not run into a timeout
status.drive.rewind()?;
}
@@ -165,25 +164,19 @@ impl PoolWriter {
let media = self.pool.lookup_media(media_uuid)?;
let label_text = media.label_text();
if let Some(slot) = changer.export_media(label_text)? {
- task_log!(
- worker,
+ info!(
"exported media '{}' to import/export slot {}",
- label_text,
- slot
+ label_text, slot
);
} else {
- task_warn!(
- worker,
+ warn!(
"export failed - media '{}' is not online or in different drive",
label_text
);
}
}
} else if let Some(mut status) = status {
- task_log!(
- worker,
- "standalone drive - ejecting media instead of export"
- );
+ info!("standalone drive - ejecting media instead of export");
status.drive.eject_media()?;
}
@@ -224,15 +217,11 @@ impl PoolWriter {
return Ok(media_uuid);
}
- task_log!(
- worker,
- "allocated new writable media '{}'",
- media.label_text()
- );
+ info!("allocated new writable media '{}'", media.label_text());
if let Some(PoolWriterState { mut drive, .. }) = self.status.take() {
if last_media_uuid.is_some() {
- task_log!(worker, "eject current media");
+ info!("eject current media");
drive.eject_media()?;
}
}
@@ -250,7 +239,7 @@ impl PoolWriter {
// test for critical tape alert flags
if let Ok(alert_flags) = drive.tape_alert_flags() {
if !alert_flags.is_empty() {
- task_log!(worker, "TapeAlertFlags: {:?}", alert_flags);
+ info!("TapeAlertFlags: {:?}", alert_flags);
if tape_alert_flags_critical(alert_flags) {
self.pool.set_media_status_damaged(&media_uuid)?;
bail!(
@@ -261,12 +250,8 @@ impl PoolWriter {
}
}
- let (catalog, is_new_media) = update_media_set_label(
- worker,
- drive.as_mut(),
- old_media_id.media_set_label,
- media.id(),
- )?;
+ let (catalog, is_new_media) =
+ update_media_set_label(drive.as_mut(), old_media_id.media_set_label, media.id())?;
self.catalog_set.lock().unwrap().append_catalog(catalog)?;
@@ -288,7 +273,7 @@ impl PoolWriter {
if is_new_media {
// add catalogs from previous media
- self.append_media_set_catalogs(worker)?;
+ self.append_media_set_catalogs()?;
}
self.used_tapes.insert(media_uuid.clone());
@@ -308,12 +293,12 @@ impl PoolWriter {
// Check it tape is loaded, then move to EOM (if not already there)
//
// Returns the tape position at EOM.
- fn prepare_tape_write(status: &mut PoolWriterState, worker: &WorkerTask) -> Result<u64, Error> {
+ fn prepare_tape_write(status: &mut PoolWriterState) -> Result<u64, Error> {
if !status.at_eom {
- task_log!(worker, "moving to end of media");
+ info!("moving to end of media");
status.drive.move_to_eom(true)?;
status.at_eom = true;
- task_log!(worker, "arrived at end of media");
+ info!("arrived at end of media");
}
let current_file_number = status.drive.current_file_number()?;
@@ -334,7 +319,7 @@ impl PoolWriter {
/// on the media (return value 'Ok(false, _)'). In that case, the
/// archive is marked incomplete. The caller should mark the media
/// as full and try again using another media.
- pub fn append_catalog_archive(&mut self, worker: &WorkerTask) -> Result<bool, Error> {
+ pub fn append_catalog_archive(&mut self) -> Result<bool, Error> {
let catalog_magic = self.catalog_version();
let status = match self.status {
@@ -342,7 +327,7 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"),
};
- Self::prepare_tape_write(status, worker)?;
+ Self::prepare_tape_write(status)?;
let catalog_set = self.catalog_set.lock().unwrap();
@@ -385,7 +370,7 @@ impl PoolWriter {
}
// Append catalogs for all previous media in set (without last)
- fn append_media_set_catalogs(&mut self, worker: &WorkerTask) -> Result<(), Error> {
+ fn append_media_set_catalogs(&mut self) -> Result<(), Error> {
let media_set = self.pool.current_media_set();
let mut media_list = media_set.media_list();
@@ -401,7 +386,7 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"),
};
- Self::prepare_tape_write(status, worker)?;
+ Self::prepare_tape_write(status)?;
for (seq_nr, uuid) in media_list.iter().enumerate() {
let uuid = match uuid {
@@ -413,7 +398,7 @@ impl PoolWriter {
let mut file = Self::open_catalog_file(uuid)?;
- task_log!(worker, "write catalog for previous media: {}", uuid);
+ info!("write catalog for previous media: {}", uuid);
if tape_write_catalog(
writer.as_mut(),
@@ -444,7 +429,6 @@ impl PoolWriter {
/// media.
pub fn append_snapshot_archive(
&mut self,
- worker: &WorkerTask,
snapshot_reader: &SnapshotReader,
) -> Result<(bool, usize), Error> {
let status = match self.status {
@@ -452,7 +436,7 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"),
};
- let current_file_number = Self::prepare_tape_write(status, worker)?;
+ let current_file_number = Self::prepare_tape_write(status)?;
let (done, bytes_written) = {
let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
@@ -498,7 +482,7 @@ impl PoolWriter {
None => bail!("PoolWriter - no media loaded"),
};
- let current_file_number = Self::prepare_tape_write(status, worker)?;
+ let current_file_number = Self::prepare_tape_write(status)?;
let writer = status.drive.write_file()?;
@@ -510,8 +494,7 @@ impl PoolWriter {
status.bytes_written += bytes_written;
let elapsed = start_time.elapsed()?.as_secs_f64();
- task_log!(
- worker,
+ info!(
"wrote {} chunks ({:.2} MB at {:.2} MB/s)",
saved_chunks.len(),
bytes_written as f64 / 1_000_000.0,
@@ -591,7 +574,7 @@ fn write_chunk_archive<'a>(
}
if writer.bytes_written() > max_size {
- //task_log!(worker, "Chunk Archive max size reached, closing archive");
+ //info!("Chunk Archive max size reached, closing archive");
break;
}
}
@@ -605,7 +588,6 @@ fn write_chunk_archive<'a>(
// set label does not match the expected media set, overwrite the
// media set label.
fn update_media_set_label(
- worker: &WorkerTask,
drive: &mut dyn TapeDriver,
old_set: Option<MediaSetLabel>,
media_id: &MediaId,
@@ -634,7 +616,7 @@ fn update_media_set_label(
let new_media = match old_set {
None => {
- task_log!(worker, "writing new media set label");
+ info!("writing new media set label");
drive.write_media_set_label(new_set, key_config.as_ref())?;
media_catalog = MediaCatalog::overwrite(TAPE_STATUS_DIR, media_id, false)?;
true
@@ -658,8 +640,7 @@ fn update_media_set_label(
false
} else {
- task_log!(
- worker,
+ info!(
"writing new media set label (overwrite '{}/{}')",
media_set_label.uuid.to_string(),
media_set_label.seq_nr,
diff --git a/tests/worker-task-abort.rs b/tests/worker-task-abort.rs
index eda86f01..80243dbb 100644
--- a/tests/worker-task-abort.rs
+++ b/tests/worker-task-abort.rs
@@ -7,24 +7,25 @@ extern crate tokio;
use proxmox_lang::try_block;
use proxmox_sys::fs::CreateOptions;
-use proxmox_sys::{task_log, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
use pbs_api_types::{Authid, UPID};
use proxmox_rest_server::{CommandSocket, WorkerTask};
+use tracing::info;
fn garbage_collection(worker: &WorkerTask) -> Result<(), Error> {
- task_log!(worker, "start garbage collection");
+ info!("start garbage collection");
for i in 0..50 {
worker.check_abort()?;
- task_log!(worker, "progress {}", i);
+ info!("progress {}", i);
std::thread::sleep(std::time::Duration::from_millis(10));
}
- task_log!(worker, "end garbage collection");
+ info!("end garbage collection");
Ok(())
}
--
2.39.2
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [pbs-devel] [PATCH proxmox 1/2] proxmox-log: added tracing infra
2023-11-03 11:28 ` [pbs-devel] [PATCH proxmox 1/2] proxmox-log: added tracing infra Gabriel Goller
@ 2023-12-01 9:56 ` Lukas Wagner
2023-12-01 11:13 ` Wolfgang Bumiller
2023-12-01 14:48 ` Lukas Wagner
0 siblings, 2 replies; 8+ messages in thread
From: Lukas Wagner @ 2023-12-01 9:56 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Gabriel Goller
Cc: Wolfgang Bumiller
Hello,
I'm currently evaluating how this crate could be used from the
proxmox-perl-rs bindings.
The status quo there is that we initialize an envlogger, but that leads
to the problem that logs are only visible for tasks, since envlogger
does log to stdout (or was it stderr?).
To differentiate between 'daemon context' and 'fork worker context' we
obviously cannot rely on a tokio task-local var there.
Maybe it would make sense to have an alternative mode (gated via
feature-flags) with a static flag, which is then set in
RestEnvironment::fork_worker via a setter?
The dependency on tokio should then also be feature-gated, since we
don't want to have to pull in tokio as a dep for proxmox-perl-rs yet.
@Wolfgang what do you think about this?
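For illustration, the static-flag mode suggested above could look roughly like this (a minimal sketch; `IN_WORKER`, `set_worker_context`, and `in_worker_context` are hypothetical names, not part of the proposed proxmox-log API):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Process-wide flag instead of a tokio task-local: a forked worker sets it
// once, and the logging layer branches on it to pick tasklog vs. syslog.
static IN_WORKER: AtomicBool = AtomicBool::new(false);

/// Would be called from e.g. RestEnvironment::fork_worker after the fork.
fn set_worker_context() {
    IN_WORKER.store(true, Ordering::Relaxed);
}

/// The layer's on_event would consult this instead of LOGGER.try_with(..).
fn in_worker_context() -> bool {
    IN_WORKER.load(Ordering::Relaxed)
}

fn main() {
    // daemon context by default, worker context after the setter runs
    assert!(!in_worker_context());
    set_worker_context();
    assert!(in_worker_context());
}
```

Since a forked worker is a separate process, a plain atomic suffices here; no thread-local or task-local storage is needed.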
On 11/3/23 12:28, Gabriel Goller wrote:
> diff --git a/proxmox-log/src/lib.rs b/proxmox-log/src/lib.rs
> new file mode 100644
> index 0000000..15fa22d
> --- /dev/null
> +++ b/proxmox-log/src/lib.rs
> @@ -0,0 +1,34 @@
> +use std::{cell::RefCell, env};
> +use syslog_tasklog_layer::SyslogAndTasklogLayer;
> +use tracing_log::{AsLog, LogTracer};
> +use tracing_subscriber::filter::LevelFilter;
> +use tracing_subscriber::prelude::*;
> +
> +mod file_logger;
> +pub use file_logger::{FileLogOptions, FileLogger};
> +
> +mod syslog_tasklog_layer;
> +
> +tokio::task_local! {
> + pub static LOGGER: RefCell<FileLogger>;
> + pub static WARN_COUNTER: RefCell<u64>;
> +}
> +
> +pub fn init_logger(
> + env_var_name: &str,
> + default_log_level: LevelFilter,
> + application_name: &str,
> +) -> Result<(), anyhow::Error> {
> + let mut log_level = default_log_level;
> + if let Ok(v) = env::var(env_var_name) {
> + if let Ok(l) = v.parse::<LevelFilter>() {
> + log_level = l;
> + }
> + }
> + let registry = tracing_subscriber::registry()
> + .with(SyslogAndTasklogLayer::new(application_name.to_string()).with_filter(log_level));
> +
> + tracing::subscriber::set_global_default(registry)?;
> + LogTracer::init_with_filter(log_level.as_log())?;
> + Ok(())
> +}
> diff --git a/proxmox-log/src/syslog_tasklog_layer.rs b/proxmox-log/src/syslog_tasklog_layer.rs
> new file mode 100644
> index 0000000..344a514
> --- /dev/null
> +++ b/proxmox-log/src/syslog_tasklog_layer.rs
> @@ -0,0 +1,106 @@
> +use std::fmt::Write as _;
> +use std::sync::Arc;
> +use std::sync::Mutex;
> +
> +use syslog::{Formatter3164, LoggerBackend};
> +use tracing::field::Field;
> +use tracing::field::Visit;
> +use tracing::Event;
> +use tracing::Level;
> +use tracing::Subscriber;
> +use tracing_subscriber::layer::Context;
> +use tracing_subscriber::Layer;
> +
> +use crate::FileLogger;
> +use crate::LOGGER;
> +use crate::WARN_COUNTER;
> +
> +pub struct SyslogAndTasklogLayer {
> + syslog_logger: Arc<Mutex<syslog::Logger<LoggerBackend, Formatter3164>>>,
> +}
> +
> +impl SyslogAndTasklogLayer {
> + pub fn new(application_name: String) -> Self {
> + let formatter = Formatter3164 {
> + facility: syslog::Facility::LOG_DAEMON,
> + process: application_name,
> + ..Formatter3164::default()
> + };
> +
> + // we panic here if we can't initialize the syslogger
> + let logger = syslog::unix(formatter)
> + .map_err(|e| {
> + anyhow::Error::new(std::io::Error::new(
> + std::io::ErrorKind::Other,
> + e.description(),
> + ))
> + })
> + .unwrap();
> +
> + let logger = Arc::new(Mutex::new(logger));
> +
> + Self {
> + syslog_logger: logger,
> + }
> + }
> +}
> +
> +impl<S: Subscriber> Layer<S> for SyslogAndTasklogLayer {
> + fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
> + let mut buf = String::new();
> + event.record(&mut EventVisitor::new(&mut buf));
> + let level = event.metadata().level();
> +
> + let result = LOGGER.try_with(|logger| {
> + log_to_file(&mut logger.borrow_mut(), level, &buf);
> + });
> + if result.is_err() || *level == Level::ERROR {
^
Shouldn't this be configurable?
Or, alternatively, would it make sense to drop this comparison and only
rely on the env var (e.g. PBS_LOG)?
> + log_to_syslog(&mut self.syslog_logger.lock().unwrap(), level, &buf);
> + }
> + }
> +}
> +
> +fn log_to_syslog(
> + logger: &mut syslog::Logger<LoggerBackend, Formatter3164>,
> + level: &Level,
> + buf: &String,
> +) {
(...)
--
- Lukas
* Re: [pbs-devel] [PATCH proxmox 1/2] proxmox-log: added tracing infra
2023-12-01 9:56 ` Lukas Wagner
@ 2023-12-01 11:13 ` Wolfgang Bumiller
2023-12-01 14:48 ` Lukas Wagner
1 sibling, 0 replies; 8+ messages in thread
From: Wolfgang Bumiller @ 2023-12-01 11:13 UTC (permalink / raw)
To: Lukas Wagner; +Cc: Proxmox Backup Server development discussion, Gabriel Goller
On Fri, Dec 01, 2023 at 10:56:47AM +0100, Lukas Wagner wrote:
> Hello,
> I'm currently evaluating how this crate could be used from the
> proxmox-perl-rs bindings.
>
> The status quo there is that we initialize an envlogger, but that leads to
> the problem that logs are only visible for tasks, since env_logger logs
> to stdout (or was it stderr?).
> To differentiate between 'daemon context' and 'fork worker context' we
> obviously cannot rely on a tokio task-local var there.
I'm not convinced we'll be able to keep tokio out of there for long -
if only to block_on some futures via a current_thread runtime.
> Maybe it would make sense to have an alternative mode (gated via
> feature-flags) with a static flag, which is then set in
> RestEnvironment::fork_worker via a setter?
Sure. This can of course also be a follow up (I haven't looked over
this version yet).
> The dependency on tokio should then also be feature-gated, since we don't
> want to have to pull in tokio as a dep for proxmox-perl-rs yet.
👍
* Re: [pbs-devel] [PATCH proxmox 1/2] proxmox-log: added tracing infra
2023-12-01 9:56 ` Lukas Wagner
2023-12-01 11:13 ` Wolfgang Bumiller
@ 2023-12-01 14:48 ` Lukas Wagner
1 sibling, 0 replies; 8+ messages in thread
From: Lukas Wagner @ 2023-12-01 14:48 UTC (permalink / raw)
To: Proxmox Backup Server development discussion, Gabriel Goller
On 12/1/23 10:56, Lukas Wagner wrote:
>> +
>> + let result = LOGGER.try_with(|logger| {
>> + log_to_file(&mut logger.borrow_mut(), level, &buf);
>> + });
>> + if result.is_err() || *level == Level::ERROR {
> ^
> Shouldn't this be configurable?
> Or, alternatively, would it make sense to drop this comparison and only
> rely on the env var (e.g. PBS_LOG)?
>
Argh, misread this line before, sorry. I somehow read this as 'only
errors should ever go to syslog', but in reality it's 'if task and
error, it goes to both syslog and stdout'.
So it's fine as is, IMO.
--
- Lukas
* Re: [pbs-devel] [PATCH proxmox-backup 0/2] proxmox-log introduction
2023-11-03 11:28 [pbs-devel] [PATCH proxmox-backup 0/2] proxmox-log introduction Gabriel Goller
2023-11-03 11:28 ` [pbs-devel] [PATCH proxmox 1/2] proxmox-log: added tracing infra Gabriel Goller
2023-11-03 11:28 ` [pbs-devel] [PATCH proxmox-backup 2/2] log: removed task_log! macro and moved to tracing Gabriel Goller
@ 2023-12-15 13:56 ` Gabriel Goller
2 siblings, 0 replies; 8+ messages in thread
From: Gabriel Goller @ 2023-12-15 13:56 UTC (permalink / raw)
To: pbs-devel
bump.
On 11/3/23 12:28, Gabriel Goller wrote:
> Removed the task_log! (and friends task_warn!, task_debug!, etc.) macro
> and introduced the `tracing` crate. We now initialize the tracing crate
> using a single layer, which logs to the syslog and the tasklog.
> It uses the `syslog` crate and the original `FileLogger`.
>
> To write to the task logs from the worker threads and tasks, we now
> have a task_local logger (and warning counter), which
> will get instantiated when a task/thread is created. This means that
> when we call `info!` or any other `tracing` log macro (with the "tasklog"
> attribute), it will get the file_logger from TLS and write to the file.
>
>
> Changes since RFC v2:
> - Combined syslog and tasklog to single layer
> - Infer the logging target from the FileLogger TLS
>
> Changes since RFC v1:
> - Remove the 'tasklog = true' attribute and infer the context
> - Wrap the worker_thread or worker_task in a span with name
> 'worker_task'
> - All events in the span with name 'worker_task' get logged to the
> file_logger, everything else goes to syslog (Error events go to
> both)
> - Remove the `Option<>` around the `FileLogger` in TLS
> - Clippy fixes
>
> Gabriel Goller (1):
> proxmox-log: added tracing infra
>
> Cargo.toml | 6 +
> proxmox-log/Cargo.toml | 23 ++++
> proxmox-log/debian/changelog | 5 +
> proxmox-log/debian/control | 53 +++++++++
> proxmox-log/debian/copyright | 18 +++
> proxmox-log/debian/debcargo.toml | 7 ++
> .../src/file_logger.rs | 2 +-
> proxmox-log/src/lib.rs | 34 ++++++
> proxmox-log/src/syslog_tasklog_layer.rs | 106 +++++++++++++++++
> proxmox-rest-server/Cargo.toml | 2 +
> proxmox-rest-server/src/api_config.rs | 3 +-
> proxmox-rest-server/src/lib.rs | 3 -
> proxmox-rest-server/src/rest.rs | 4 +-
> proxmox-rest-server/src/worker_task.rs | 108 +++++++++---------
> proxmox-sys/src/worker_task_context.rs | 47 --------
> 15 files changed, 315 insertions(+), 106 deletions(-)
> create mode 100644 proxmox-log/Cargo.toml
> create mode 100644 proxmox-log/debian/changelog
> create mode 100644 proxmox-log/debian/control
> create mode 100644 proxmox-log/debian/copyright
> create mode 100644 proxmox-log/debian/debcargo.toml
> rename {proxmox-rest-server => proxmox-log}/src/file_logger.rs (98%)
> create mode 100644 proxmox-log/src/lib.rs
> create mode 100644 proxmox-log/src/syslog_tasklog_layer.rs
>
>
> Gabriel Goller (1):
> log: removed task_log! macro and moved to tracing
>
> Cargo.toml | 61 ++++----
> pbs-api-types/src/metrics.rs | 1 -
> pbs-api-types/src/node.rs | 3 +-
> pbs-datastore/Cargo.toml | 1 +
> pbs-datastore/src/chunk_store.rs | 30 +---
> pbs-datastore/src/datastore.rs | 70 ++++-----
> src/api2/admin/datastore.rs | 24 ++--
> src/api2/admin/metrics.rs | 2 +-
> src/api2/config/acme.rs | 20 +--
> src/api2/config/datastore.rs | 12 +-
> src/api2/node/apt.rs | 13 +-
> src/api2/node/certificates.rs | 64 ++++-----
> src/api2/node/disks/directory.rs | 13 +-
> src/api2/node/disks/mod.rs | 6 +-
> src/api2/node/disks/zfs.rs | 30 ++--
> src/api2/node/mod.rs | 11 +-
> src/api2/pull.rs | 28 ++--
> src/api2/tape/backup.rs | 77 ++++------
> src/api2/tape/drive.rs | 135 +++++++----------
> src/api2/tape/restore.rs | 240 ++++++++++---------------------
> src/backup/verify.rs | 103 ++++---------
> src/bin/proxmox-backup-api.rs | 10 +-
> src/bin/proxmox-backup-proxy.rs | 42 ++----
> src/server/gc_job.rs | 6 +-
> src/server/prune_job.rs | 26 ++--
> src/server/pull.rs | 166 ++++++++-------------
> src/server/realm_sync_job.rs | 40 ++----
> src/server/verify_job.rs | 10 +-
> src/tape/drive/mod.rs | 34 ++---
> src/tape/pool_writer/mod.rs | 85 +++++------
> tests/worker-task-abort.rs | 9 +-
> 31 files changed, 502 insertions(+), 870 deletions(-)
>
* Re: [pbs-devel] [PATCH proxmox-backup 2/2] log: removed task_log! macro and moved to tracing
2023-11-03 11:28 ` [pbs-devel] [PATCH proxmox-backup 2/2] log: removed task_log! macro and moved to tracing Gabriel Goller
@ 2024-01-22 13:30 ` Lukas Wagner
0 siblings, 0 replies; 8+ messages in thread
From: Lukas Wagner @ 2024-01-22 13:30 UTC (permalink / raw)
To: Proxmox Backup Server development discussion
This patch seems to need a rebase; it does not apply any more. :)
Anyway, 'bump' from me as well, as I would like to base some logging
improvements for the perl-rs bindings on the proxmox-log crate, if possible.
end of thread, other threads:[~2024-01-22 13:30 UTC | newest]
Thread overview: 8+ messages
2023-11-03 11:28 [pbs-devel] [PATCH proxmox-backup 0/2] proxmox-log introduction Gabriel Goller
2023-11-03 11:28 ` [pbs-devel] [PATCH proxmox 1/2] proxmox-log: added tracing infra Gabriel Goller
2023-12-01 9:56 ` Lukas Wagner
2023-12-01 11:13 ` Wolfgang Bumiller
2023-12-01 14:48 ` Lukas Wagner
2023-11-03 11:28 ` [pbs-devel] [PATCH proxmox-backup 2/2] log: removed task_log! macro and moved to tracing Gabriel Goller
2024-01-22 13:30 ` Lukas Wagner
2023-12-15 13:56 ` [pbs-devel] [PATCH proxmox-backup 0/2] proxmox-log introduction Gabriel Goller