public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction
@ 2024-06-13 13:56 Gabriel Goller
  2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox v5 1/4] proxmox-log: add tracing infrastructure Gabriel Goller
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Gabriel Goller @ 2024-06-13 13:56 UTC (permalink / raw)
  To: pbs-devel

Removed the task_log! macro (and friends: task_warn!, task_debug!,
etc.) and introduced the `tracing` crate. We now initialize tracing
with a single layer, which logs to both the syslog and the task log.
It uses the `syslog` crate and the original `FileLogger`.

To write to the task logs from worker threads and tasks, we now have
a task-local logger (and warning counter), which is instantiated when
a task/thread is created. This means that when we call `info!` or any
other `tracing` log macro, it gets the FileLogger from task-local
storage and writes to the file.
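The fallback mechanism can be sketched with a plain `std` thread-local
standing in for the `tokio::task_local!` logger (the names below are
illustrative, not the series' actual API): inside a task the log line
goes to the task log, outside it would go to syslog.

```rust
use std::cell::RefCell;

// Stand-in for the task-local FileLogger: only present inside a worker task.
thread_local! {
    static TASK_LOG: RefCell<Option<Vec<String>>> = RefCell::new(None);
}

// Mimics the layer's on_event: prefer the task log, fall back to syslog.
fn dispatch(msg: &str) -> &'static str {
    TASK_LOG.with(|log| match log.borrow_mut().as_mut() {
        Some(task_log) => {
            task_log.push(msg.to_string()); // worker context: task log
            "tasklog"
        }
        None => "syslog", // no worker context: would go to syslog
    })
}

fn main() {
    println!("{}", dispatch("outside any task")); // prints "syslog"
    TASK_LOG.with(|l| *l.borrow_mut() = Some(Vec::new()));
    println!("{}", dispatch("inside a task")); // prints "tasklog"
}
```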

v5:
 - minor rebase

v4:
 - rebase
 - reword commit messages (thanks @Thomas)
 - split into multiple patches

v3, thanks @Sterzy:
 - updated debian/control files
 - downgraded tracing-log

v2:
 - Rebase onto master
 - Split proxmox-backup commit into two

v1, thanks @Wolfgang, @Lukas:
 - Combined syslog and tasklog to single layer
 - Infer the logging target from the FileLogger TLS

RFC v2, thanks @Dominik, @Thomas:
 - Remove the 'tasklog = true' attribute and infer the context
    - Wrap the worker_thread or worker_task in a span with name
      'worker_task'
    - All events in the span with name 'worker_task' get logged to the
      file_logger, everything else goes to syslog (Error events go to
      both)
 - Remove the `Option<>` around the `FileLogger` in TLS
 - Clippy fixes

proxmox:

Gabriel Goller (2):
  proxmox-log: add tracing infrastructure
  enable tracing logger, remove task_log macros

 Cargo.toml                                    |   6 +
 proxmox-log/Cargo.toml                        |  23 ++++
 proxmox-log/debian/changelog                  |   5 +
 proxmox-log/debian/control                    |  52 +++++++++
 proxmox-log/debian/copyright                  |  18 +++
 proxmox-log/debian/debcargo.toml              |   7 ++
 .../src/file_logger.rs                        |   2 +-
 proxmox-log/src/lib.rs                        |  37 ++++++
 proxmox-log/src/syslog_tasklog_layer.rs       | 106 ++++++++++++++++++
 proxmox-rest-server/Cargo.toml                |   2 +
 proxmox-rest-server/src/api_config.rs         |   3 +-
 proxmox-rest-server/src/lib.rs                |   3 -
 proxmox-rest-server/src/rest.rs               |   4 +-
 proxmox-rest-server/src/worker_task.rs        | 104 ++++++++---------
 proxmox-sys/src/worker_task_context.rs        |  47 --------
 15 files changed, 313 insertions(+), 106 deletions(-)
 create mode 100644 proxmox-log/Cargo.toml
 create mode 100644 proxmox-log/debian/changelog
 create mode 100644 proxmox-log/debian/control
 create mode 100644 proxmox-log/debian/copyright
 create mode 100644 proxmox-log/debian/debcargo.toml
 rename {proxmox-rest-server => proxmox-log}/src/file_logger.rs (98%)
 create mode 100644 proxmox-log/src/lib.rs
 create mode 100644 proxmox-log/src/syslog_tasklog_layer.rs


proxmox-backup:

Gabriel Goller (2):
  switch from task_log! macro to tracing
  api: switch from task_log! macro to tracing

 Cargo.toml                       |   7 +
 debian/control                   |   2 +
 pbs-datastore/Cargo.toml         |   1 +
 pbs-datastore/src/chunk_store.rs |  30 +---
 pbs-datastore/src/datastore.rs   |  70 ++++-----
 src/api2/admin/datastore.rs      |  26 ++--
 src/api2/config/acme.rs          |  20 +--
 src/api2/config/datastore.rs     |  16 +-
 src/api2/config/prune.rs         |  14 +-
 src/api2/node/apt.rs             |  13 +-
 src/api2/node/certificates.rs    |  64 ++++----
 src/api2/node/disks/directory.rs |  13 +-
 src/api2/node/disks/mod.rs       |  12 +-
 src/api2/node/disks/zfs.rs       |  30 ++--
 src/api2/node/mod.rs             |  11 +-
 src/api2/pull.rs                 |  33 ++--
 src/api2/tape/backup.rs          |  70 ++++-----
 src/api2/tape/drive.rs           | 146 +++++++-----------
 src/api2/tape/restore.rs         | 254 ++++++++++---------------------
 src/backup/verify.rs             | 103 ++++---------
 src/bin/proxmox-backup-api.rs    |  10 +-
 src/bin/proxmox-backup-proxy.rs  |  42 ++---
 src/server/gc_job.rs             |   6 +-
 src/server/prune_job.rs          |  26 ++--
 src/server/pull.rs               | 230 +++++++++-------------------
 src/server/realm_sync_job.rs     |  44 ++----
 src/server/verify_job.rs         |  10 +-
 src/tape/drive/mod.rs            |  34 ++---
 src/tape/pool_writer/mod.rs      |  88 +++++------
 src/tools/disks/mod.rs           |  21 +--
 tests/worker-task-abort.rs       |   9 +-
 31 files changed, 515 insertions(+), 940 deletions(-)


Summary over all repositories:
  46 files changed, 828 insertions(+), 1046 deletions(-)

-- 
Generated by git-murpp 0.5.0


_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] [PATCH proxmox v5 1/4] proxmox-log: add tracing infrastructure
  2024-06-13 13:56 [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction Gabriel Goller
@ 2024-06-13 13:56 ` Gabriel Goller
  2024-06-24 11:10   ` Lukas Wagner
  2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox v5 2/4] enable tracing logger, remove task_log macros Gabriel Goller
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Gabriel Goller @ 2024-06-13 13:56 UTC (permalink / raw)
  To: pbs-devel

Add the `proxmox_log` crate, which contains the new logging
infrastructure. Export the `init_logger` function, which creates the
`tracing` logger consisting of the default subscriber and a single
custom layer.

On every event, this layer checks whether a FileLogger is set in
task-local storage: if it is, the event is logged to the specified
file, otherwise to syslog.
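A daemon would call this once at startup; the sketch below follows the
`init_logger` signature from the diff, while the environment-variable
name and process name are made up for illustration:

```rust
use proxmox_log::init_logger;
use tracing_subscriber::filter::LevelFilter;

fn main() -> Result<(), anyhow::Error> {
    // The level can be overridden at runtime via the given environment
    // variable (e.g. MY_DAEMON_LOG=debug); otherwise the default applies.
    init_logger("MY_DAEMON_LOG", LevelFilter::INFO, "my-daemon")?;

    // From here on, plain `tracing` macros go through the new layer;
    // outside of a worker task this ends up in syslog.
    tracing::info!("daemon starting up");
    Ok(())
}
```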

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
---
 Cargo.toml                              |   6 +
 proxmox-log/Cargo.toml                  |  23 ++++
 proxmox-log/debian/changelog            |   5 +
 proxmox-log/debian/control              |  52 +++++++++
 proxmox-log/debian/copyright            |  18 +++
 proxmox-log/debian/debcargo.toml        |   7 ++
 proxmox-log/src/file_logger.rs          | 147 ++++++++++++++++++++++++
 proxmox-log/src/lib.rs                  |  37 ++++++
 proxmox-log/src/syslog_tasklog_layer.rs | 106 +++++++++++++++++
 proxmox-rest-server/Cargo.toml          |   2 +
 10 files changed, 403 insertions(+)
 create mode 100644 proxmox-log/Cargo.toml
 create mode 100644 proxmox-log/debian/changelog
 create mode 100644 proxmox-log/debian/control
 create mode 100644 proxmox-log/debian/copyright
 create mode 100644 proxmox-log/debian/debcargo.toml
 create mode 100644 proxmox-log/src/file_logger.rs
 create mode 100644 proxmox-log/src/lib.rs
 create mode 100644 proxmox-log/src/syslog_tasklog_layer.rs

diff --git a/Cargo.toml b/Cargo.toml
index b3c97808..6e7bdb8d 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -17,6 +17,7 @@ members = [
     "proxmox-io",
     "proxmox-lang",
     "proxmox-ldap",
+    "proxmox-log",
     "proxmox-login",
     "proxmox-metrics",
     "proxmox-network-api",
@@ -97,11 +98,15 @@ serde_cbor = "0.11.1"
 serde_json = "1.0"
 serde_plain = "1.0"
 syn = { version = "2", features = [ "full", "visit-mut" ] }
+syslog = "6"
 tar = "0.4"
 tokio = "1.6"
 tokio-openssl = "0.6.1"
 tokio-stream = "0.1.0"
 tower-service = "0.3.0"
+tracing = "0.1"
+tracing-log = { version = "0.1.3", default-features = false }
+tracing-subscriber = "0.3.16"
 url = "2.2"
 walkdir = "2"
 webauthn-rs = "0.3"
@@ -117,6 +122,7 @@ proxmox-http-error = { version = "0.1.0", path = "proxmox-http-error" }
 proxmox-human-byte = { version = "0.1.0", path = "proxmox-human-byte" }
 proxmox-io = { version = "1.0.0", path = "proxmox-io" }
 proxmox-lang = { version = "1.1", path = "proxmox-lang" }
+proxmox-log = { version = "0.1.0", path = "proxmox-log" }
 proxmox-login = { version = "0.1.0", path = "proxmox-login" }
 proxmox-product-config = { version = "0.1.0", path = "proxmox-product-config" }
 proxmox-config-digest = { version = "0.1.0", path = "proxmox-config-digest" }
diff --git a/proxmox-log/Cargo.toml b/proxmox-log/Cargo.toml
new file mode 100644
index 00000000..14ca6c7b
--- /dev/null
+++ b/proxmox-log/Cargo.toml
@@ -0,0 +1,23 @@
+[package]
+name = "proxmox-log"
+version = "0.1.0"
+authors.workspace = true
+edition.workspace = true
+license.workspace = true
+repository.workspace = true
+description = "Logging infrastructure for proxmox"
+
+exclude.workspace = true
+
+[dependencies]
+anyhow.workspace = true
+syslog.workspace = true
+nix.workspace = true
+log.workspace = true
+tracing.workspace = true
+tracing-subscriber.workspace = true
+tracing-log = { workspace = true, features = ["std"] }
+tokio = { workspace = true, features = ["rt-multi-thread"] }
+proxmox-time.workspace = true
+proxmox-sys.workspace = true
+
diff --git a/proxmox-log/debian/changelog b/proxmox-log/debian/changelog
new file mode 100644
index 00000000..3661bae9
--- /dev/null
+++ b/proxmox-log/debian/changelog
@@ -0,0 +1,5 @@
+rust-proxmox-log (0.1.0-3) UNRELEASED; urgency=medium
+
+  * Initial release
+
+ -- Gabriel Goller <ggoller@luna.proxmox.com>  Wed, 11 Oct 2023 15:13:58 +0200
diff --git a/proxmox-log/debian/control b/proxmox-log/debian/control
new file mode 100644
index 00000000..14b2376d
--- /dev/null
+++ b/proxmox-log/debian/control
@@ -0,0 +1,52 @@
+Source: rust-proxmox-log
+Section: rust
+Priority: optional
+Build-Depends: debhelper (>= 12),
+ dh-cargo (>= 25),
+ cargo:native <!nocheck>,
+ rustc:native <!nocheck>,
+ libstd-rust-dev <!nocheck>,
+ librust-anyhow-1+default-dev <!nocheck>,
+ librust-log-0.4+default-dev (>= 0.4.17-~~) <!nocheck>,
+ librust-nix-0.26+default-dev (>= 0.26.1-~~) <!nocheck>,
+ librust-proxmox-sys-0.5+default-dev (>= 0.5.1-~~) <!nocheck>,
+ librust-proxmox-time-1+default-dev (>= 1.1.6-~~) <!nocheck>,
+ librust-syslog-6+default-dev <!nocheck>,
+ librust-tokio-1+default-dev (>= 1.6-~~) <!nocheck>,
+ librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~) <!nocheck>,
+ librust-tracing-0.1+default-dev <!nocheck>,
+ librust-tracing-log-0.1-dev (>= 0.1.3-~~) <!nocheck>,
+ librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~) <!nocheck>
+Maintainer: Proxmox Support Team <support@proxmox.com>
+Standards-Version: 4.6.2
+Vcs-Git: git://git.proxmox.com/git/proxmox.git
+Vcs-Browser: https://git.proxmox.com/?p=proxmox.git
+X-Cargo-Crate: proxmox-log
+Rules-Requires-Root: no
+
+Package: librust-proxmox-log-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-anyhow-1+default-dev,
+ librust-log-0.4+default-dev (>= 0.4.17-~~),
+ librust-nix-0.26+default-dev (>= 0.26.1-~~),
+ librust-proxmox-sys-0.5+default-dev (>= 0.5.1-~~),
+ librust-proxmox-time-1+default-dev (>= 1.1.6-~~),
+ librust-syslog-6+default-dev,
+ librust-tokio-1+default-dev (>= 1.6-~~),
+ librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~),
+ librust-tracing-0.1+default-dev,
+ librust-tracing-log-0.1-dev (>= 0.1.3-~~),
+ librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~)
+Provides:
+ librust-proxmox-log+default-dev (= ${binary:Version}),
+ librust-proxmox-log-0-dev (= ${binary:Version}),
+ librust-proxmox-log-0+default-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1+default-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1.0-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1.0+default-dev (= ${binary:Version})
+Description: Logging infrastructure for proxmox - Rust source code
+ Source code for Debianized Rust crate "proxmox-log"
diff --git a/proxmox-log/debian/copyright b/proxmox-log/debian/copyright
new file mode 100644
index 00000000..b227c290
--- /dev/null
+++ b/proxmox-log/debian/copyright
@@ -0,0 +1,18 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+
+Files:
+ *
+Copyright: 2019 - 2024 Proxmox Server Solutions GmbH <support@proxmox.com>
+License: AGPL-3.0-or-later
+ This program is free software: you can redistribute it and/or modify it under
+ the terms of the GNU Affero General Public License as published by the Free
+ Software Foundation, either version 3 of the License, or (at your option) any
+ later version.
+ .
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU Affero General Public License along
+ with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/proxmox-log/debian/debcargo.toml b/proxmox-log/debian/debcargo.toml
new file mode 100644
index 00000000..b7864cdb
--- /dev/null
+++ b/proxmox-log/debian/debcargo.toml
@@ -0,0 +1,7 @@
+overlay = "."
+crate_src_path = ".."
+maintainer = "Proxmox Support Team <support@proxmox.com>"
+
+[source]
+vcs_git = "git://git.proxmox.com/git/proxmox.git"
+vcs_browser = "https://git.proxmox.com/?p=proxmox.git"
diff --git a/proxmox-log/src/file_logger.rs b/proxmox-log/src/file_logger.rs
new file mode 100644
index 00000000..c7e1d644
--- /dev/null
+++ b/proxmox-log/src/file_logger.rs
@@ -0,0 +1,147 @@
+use std::io::Write;
+
+use anyhow::Error;
+use nix::fcntl::OFlag;
+
+use proxmox_sys::fs::{atomic_open_or_create_file, CreateOptions};
+
+/// Options to control the behavior of a [FileLogger] instance
+#[derive(Default)]
+pub struct FileLogOptions {
+    /// Open underlying log file in append mode, useful when multiple concurrent processes
+    /// want to log to the same file (e.g., HTTP access log). Note that it is only atomic
+    /// for writes smaller than the PIPE_BUF (4k on Linux).
+    /// Inside the same process you may still need to use a mutex for shared access.
+    pub append: bool,
+    /// Open underlying log file as readable
+    pub read: bool,
+    /// If set, ensure that the file is newly created or error out if already existing.
+    pub exclusive: bool,
+    /// Duplicate logged messages to STDOUT, like tee
+    pub to_stdout: bool,
+    /// Prefix messages logged to the file with the current local time as RFC 3339
+    pub prefix_time: bool,
+    /// File owner/group and mode
+    pub file_opts: CreateOptions,
+}
+
+/// Log messages with optional automatically added timestamps into files
+///
+/// #### Example:
+/// ```
+/// # use anyhow::{bail, format_err, Error};
+/// use proxmox_log::{flog, FileLogger, FileLogOptions};
+///
+/// # std::fs::remove_file("test.log");
+/// let options = FileLogOptions {
+///     to_stdout: true,
+///     exclusive: true,
+///     ..Default::default()
+/// };
+/// let mut log = FileLogger::new("test.log", options).unwrap();
+/// flog!(log, "A simple log: {}", "Hello!");
+/// # std::fs::remove_file("test.log");
+/// ```
+pub struct FileLogger {
+    file: std::fs::File,
+    file_name: std::path::PathBuf,
+    options: FileLogOptions,
+}
+
+/// Log messages to [FileLogger] - ``println`` like macro
+#[macro_export]
+macro_rules! flog {
+    ($log:expr, $($arg:tt)*) => ({
+        $log.log(format!($($arg)*));
+    })
+}
+
+impl FileLogger {
+    pub fn new<P: AsRef<std::path::Path>>(
+        file_name: P,
+        options: FileLogOptions,
+    ) -> Result<Self, Error> {
+        let file = Self::open(&file_name, &options)?;
+
+        let file_name: std::path::PathBuf = file_name.as_ref().to_path_buf();
+
+        Ok(Self {
+            file,
+            file_name,
+            options,
+        })
+    }
+
+    pub fn reopen(&mut self) -> Result<&Self, Error> {
+        let file = Self::open(&self.file_name, &self.options)?;
+        self.file = file;
+        Ok(self)
+    }
+
+    fn open<P: AsRef<std::path::Path>>(
+        file_name: P,
+        options: &FileLogOptions,
+    ) -> Result<std::fs::File, Error> {
+        let mut flags = OFlag::O_CLOEXEC;
+
+        if options.read {
+            flags |= OFlag::O_RDWR;
+        } else {
+            flags |= OFlag::O_WRONLY;
+        }
+
+        if options.append {
+            flags |= OFlag::O_APPEND;
+        }
+        if options.exclusive {
+            flags |= OFlag::O_EXCL;
+        }
+
+        let file =
+            atomic_open_or_create_file(&file_name, flags, &[], options.file_opts.clone(), false)?;
+
+        Ok(file)
+    }
+
+    pub fn log<S: AsRef<str>>(&mut self, msg: S) {
+        let msg = msg.as_ref();
+
+        if self.options.to_stdout {
+            let mut stdout = std::io::stdout();
+            stdout.write_all(msg.as_bytes()).unwrap();
+            stdout.write_all(b"\n").unwrap();
+        }
+
+        let line = if self.options.prefix_time {
+            let now = proxmox_time::epoch_i64();
+            let rfc3339 = match proxmox_time::epoch_to_rfc3339(now) {
+                Ok(rfc3339) => rfc3339,
+                Err(_) => "1970-01-01T00:00:00Z".into(), // for safety, should really not happen!
+            };
+            format!("{}: {}\n", rfc3339, msg)
+        } else {
+            format!("{}\n", msg)
+        };
+        if let Err(err) = self.file.write_all(line.as_bytes()) {
+            // avoid panicking, log methods should not do that
+            // FIXME: or, return result???
+            log::error!("error writing to log file - {}", err);
+        }
+    }
+}
+
+impl std::io::Write for FileLogger {
+    fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
+        if self.options.to_stdout {
+            let _ = std::io::stdout().write(buf);
+        }
+        self.file.write(buf)
+    }
+
+    fn flush(&mut self) -> Result<(), std::io::Error> {
+        if self.options.to_stdout {
+            let _ = std::io::stdout().flush();
+        }
+        self.file.flush()
+    }
+}
diff --git a/proxmox-log/src/lib.rs b/proxmox-log/src/lib.rs
new file mode 100644
index 00000000..daaaede5
--- /dev/null
+++ b/proxmox-log/src/lib.rs
@@ -0,0 +1,37 @@
+use std::{
+    cell::{Cell, RefCell},
+    env,
+};
+use syslog_tasklog_layer::SyslogAndTasklogLayer;
+use tracing_log::{AsLog, LogTracer};
+use tracing_subscriber::filter::LevelFilter;
+use tracing_subscriber::prelude::*;
+
+mod file_logger;
+pub use file_logger::{FileLogOptions, FileLogger};
+
+mod syslog_tasklog_layer;
+
+tokio::task_local! {
+    pub static LOGGER: RefCell<FileLogger>;
+    pub static WARN_COUNTER: Cell<u64>;
+}
+
+pub fn init_logger(
+    env_var_name: &str,
+    default_log_level: LevelFilter,
+    application_name: &str,
+) -> Result<(), anyhow::Error> {
+    let mut log_level = default_log_level;
+    if let Ok(v) = env::var(env_var_name) {
+        if let Ok(l) = v.parse::<LevelFilter>() {
+            log_level = l;
+        }
+    }
+    let registry = tracing_subscriber::registry()
+        .with(SyslogAndTasklogLayer::new(application_name.to_string()).with_filter(log_level));
+
+    tracing::subscriber::set_global_default(registry)?;
+    LogTracer::init_with_filter(log_level.as_log())?;
+    Ok(())
+}
diff --git a/proxmox-log/src/syslog_tasklog_layer.rs b/proxmox-log/src/syslog_tasklog_layer.rs
new file mode 100644
index 00000000..89437caf
--- /dev/null
+++ b/proxmox-log/src/syslog_tasklog_layer.rs
@@ -0,0 +1,106 @@
+use std::fmt::Write as _;
+use std::sync::Arc;
+use std::sync::Mutex;
+
+use syslog::{Formatter3164, LoggerBackend};
+use tracing::field::Field;
+use tracing::field::Visit;
+use tracing::Event;
+use tracing::Level;
+use tracing::Subscriber;
+use tracing_subscriber::layer::Context;
+use tracing_subscriber::Layer;
+
+use crate::FileLogger;
+use crate::LOGGER;
+use crate::WARN_COUNTER;
+
+pub struct SyslogAndTasklogLayer {
+    syslog_logger: Arc<Mutex<syslog::Logger<LoggerBackend, Formatter3164>>>,
+}
+
+impl SyslogAndTasklogLayer {
+    pub fn new(application_name: String) -> Self {
+        let formatter = Formatter3164 {
+            facility: syslog::Facility::LOG_DAEMON,
+            process: application_name,
+            ..Formatter3164::default()
+        };
+
+        // we panic here if we can't initialize the syslogger
+        let logger = syslog::unix(formatter)
+            .map_err(|e| {
+                anyhow::Error::new(std::io::Error::new(
+                    std::io::ErrorKind::Other,
+                    e.description(),
+                ))
+            })
+            .unwrap();
+
+        let logger = Arc::new(Mutex::new(logger));
+
+        Self {
+            syslog_logger: logger,
+        }
+    }
+}
+
+impl<S: Subscriber> Layer<S> for SyslogAndTasklogLayer {
+    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
+        let mut buf = String::new();
+        event.record(&mut EventVisitor::new(&mut buf));
+        let level = event.metadata().level();
+
+        let result = LOGGER.try_with(|logger| {
+            log_to_file(&mut logger.borrow_mut(), level, &buf);
+        });
+        if result.is_err() || *level == Level::ERROR {
+            log_to_syslog(&mut self.syslog_logger.lock().unwrap(), level, &buf);
+        }
+    }
+}
+
+fn log_to_syslog(
+    logger: &mut syslog::Logger<LoggerBackend, Formatter3164>,
+    level: &Level,
+    buf: &String,
+) {
+    let _ = match *level {
+        Level::ERROR => logger.err(buf),
+        Level::WARN => logger.warning(buf),
+        Level::INFO => logger.info(buf),
+        Level::DEBUG => logger.debug(buf),
+        Level::TRACE => logger.debug(buf),
+    };
+}
+fn log_to_file(logger: &mut FileLogger, level: &Level, buf: &String) {
+    match *level {
+        Level::ERROR | Level::WARN => {
+            WARN_COUNTER.with(|counter| {
+                counter.set(counter.get() + 1);
+            });
+            logger.log(buf);
+        }
+        Level::INFO => logger.log(buf),
+        Level::DEBUG => logger.log(format!("DEBUG: {}", buf)),
+        Level::TRACE => logger.log(format!("TRACE: {}", buf)),
+    };
+}
+
+struct EventVisitor<'a> {
+    buf: &'a mut String,
+}
+
+impl<'a> EventVisitor<'a> {
+    fn new(buf: &'a mut String) -> Self {
+        Self { buf }
+    }
+}
+
+impl Visit for EventVisitor<'_> {
+    fn record_debug(&mut self, field: &Field, value: &dyn std::fmt::Debug) {
+        if field.name() == "message" {
+            let _ = write!(self.buf, "{value:?}");
+        }
+    }
+}
diff --git a/proxmox-rest-server/Cargo.toml b/proxmox-rest-server/Cargo.toml
index 94330ff1..2bfea4a0 100644
--- a/proxmox-rest-server/Cargo.toml
+++ b/proxmox-rest-server/Cargo.toml
@@ -34,6 +34,7 @@ tokio-openssl.workspace = true
 tokio-stream.workspace = true
 tower-service.workspace = true
 url.workspace = true
+tracing.workspace = true
 
 proxmox-async.workspace = true
 proxmox-compression.workspace = true
@@ -44,6 +45,7 @@ proxmox-router.workspace = true
 proxmox-schema = { workspace = true, features = [ "api-macro", "upid-api-impl" ] }
 proxmox-sys = { workspace = true, features = [ "logrotate", "timer" ] }
 proxmox-time.workspace = true
+proxmox-log.workspace = true
 
 [features]
 default = []
-- 
2.43.0





^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] [PATCH proxmox v5 2/4] enable tracing logger, remove task_log macros
  2024-06-13 13:56 [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction Gabriel Goller
  2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox v5 1/4] proxmox-log: add tracing infrastructure Gabriel Goller
@ 2024-06-13 13:56 ` Gabriel Goller
  2024-06-24 11:09   ` Lukas Wagner
  2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox-backup v5 3/4] switch from task_log! macro to tracing Gabriel Goller
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Gabriel Goller @ 2024-06-13 13:56 UTC (permalink / raw)
  To: pbs-devel

Enable the tracing system by setting the LOGGER task-local variable to
an instance of a FileLogger and initializing the WARN_COUNTER. Remove
the task_log! macros and their remaining occurrences.
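In essence, the worker setup wraps the task body in the two task-local
scopes so that `tracing` macros inside it reach the task log. This is a
hedged sketch using `tokio`'s `LocalKey::scope`; the actual log path,
option set, and surrounding code in worker_task.rs differ:

```rust
use std::cell::{Cell, RefCell};

use proxmox_log::{FileLogOptions, FileLogger, LOGGER, WARN_COUNTER};

async fn run_worker_task() -> Result<(), anyhow::Error> {
    let logger = FileLogger::new(
        "/var/log/task/example.log", // illustrative path, not the real layout
        FileLogOptions {
            exclusive: true,
            prefix_time: true,
            ..Default::default()
        },
    )?;

    // Everything inside the scopes sees the task-local FileLogger and
    // warning counter, so `tracing::info!` etc. land in the task log.
    LOGGER
        .scope(
            RefCell::new(logger),
            WARN_COUNTER.scope(Cell::new(0), async {
                tracing::info!("starting worker");
                // ... actual task body ...
            }),
        )
        .await;
    Ok(())
}
```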

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
---
 proxmox-rest-server/src/api_config.rs  |   3 +-
 proxmox-rest-server/src/file_logger.rs | 147 -------------------------
 proxmox-rest-server/src/lib.rs         |   3 -
 proxmox-rest-server/src/rest.rs        |   4 +-
 proxmox-rest-server/src/worker_task.rs | 104 ++++++++---------
 proxmox-sys/src/worker_task_context.rs |  47 --------
 6 files changed, 56 insertions(+), 252 deletions(-)
 delete mode 100644 proxmox-rest-server/src/file_logger.rs

diff --git a/proxmox-rest-server/src/api_config.rs b/proxmox-rest-server/src/api_config.rs
index eaece05a..5b7a2a4f 100644
--- a/proxmox-rest-server/src/api_config.rs
+++ b/proxmox-rest-server/src/api_config.rs
@@ -12,11 +12,12 @@ use hyper::http::request::Parts;
 use hyper::{Body, Response};
 use tower_service::Service;
 
+use proxmox_log::{FileLogOptions, FileLogger};
 use proxmox_router::{Router, RpcEnvironmentType, UserInformation};
 use proxmox_sys::fs::{create_path, CreateOptions};
 
 use crate::rest::Handler;
-use crate::{CommandSocket, FileLogOptions, FileLogger, RestEnvironment};
+use crate::{CommandSocket, RestEnvironment};
 
 /// REST server configuration
 pub struct ApiConfig {
diff --git a/proxmox-rest-server/src/file_logger.rs b/proxmox-rest-server/src/file_logger.rs
deleted file mode 100644
index 2bb1fac6..00000000
--- a/proxmox-rest-server/src/file_logger.rs
+++ /dev/null
@@ -1,147 +0,0 @@
-use std::io::Write;
-
-use anyhow::Error;
-use nix::fcntl::OFlag;
-
-use proxmox_sys::fs::{atomic_open_or_create_file, CreateOptions};
-
-/// Options to control the behavior of a [FileLogger] instance
-#[derive(Default)]
-pub struct FileLogOptions {
-    /// Open underlying log file in append mode, useful when multiple concurrent processes
-    /// want to log to the same file (e.g., HTTP access log). Note that it is only atomic
-    /// for writes smaller than the PIPE_BUF (4k on Linux).
-    /// Inside the same process you may need to still use an mutex, for shared access.
-    pub append: bool,
-    /// Open underlying log file as readable
-    pub read: bool,
-    /// If set, ensure that the file is newly created or error out if already existing.
-    pub exclusive: bool,
-    /// Duplicate logged messages to STDOUT, like tee
-    pub to_stdout: bool,
-    /// Prefix messages logged to the file with the current local time as RFC 3339
-    pub prefix_time: bool,
-    /// File owner/group and mode
-    pub file_opts: CreateOptions,
-}
-
-/// Log messages with optional automatically added timestamps into files
-///
-/// #### Example:
-/// ```
-/// # use anyhow::{bail, format_err, Error};
-/// use proxmox_rest_server::{flog, FileLogger, FileLogOptions};
-///
-/// # std::fs::remove_file("test.log");
-/// let options = FileLogOptions {
-///     to_stdout: true,
-///     exclusive: true,
-///     ..Default::default()
-/// };
-/// let mut log = FileLogger::new("test.log", options).unwrap();
-/// flog!(log, "A simple log: {}", "Hello!");
-/// # std::fs::remove_file("test.log");
-/// ```
-pub struct FileLogger {
-    file: std::fs::File,
-    file_name: std::path::PathBuf,
-    options: FileLogOptions,
-}
-
-/// Log messages to [FileLogger] - ``println`` like macro
-#[macro_export]
-macro_rules! flog {
-    ($log:expr, $($arg:tt)*) => ({
-        $log.log(format!($($arg)*));
-    })
-}
-
-impl FileLogger {
-    pub fn new<P: AsRef<std::path::Path>>(
-        file_name: P,
-        options: FileLogOptions,
-    ) -> Result<Self, Error> {
-        let file = Self::open(&file_name, &options)?;
-
-        let file_name: std::path::PathBuf = file_name.as_ref().to_path_buf();
-
-        Ok(Self {
-            file,
-            file_name,
-            options,
-        })
-    }
-
-    pub fn reopen(&mut self) -> Result<&Self, Error> {
-        let file = Self::open(&self.file_name, &self.options)?;
-        self.file = file;
-        Ok(self)
-    }
-
-    fn open<P: AsRef<std::path::Path>>(
-        file_name: P,
-        options: &FileLogOptions,
-    ) -> Result<std::fs::File, Error> {
-        let mut flags = OFlag::O_CLOEXEC;
-
-        if options.read {
-            flags |= OFlag::O_RDWR;
-        } else {
-            flags |= OFlag::O_WRONLY;
-        }
-
-        if options.append {
-            flags |= OFlag::O_APPEND;
-        }
-        if options.exclusive {
-            flags |= OFlag::O_EXCL;
-        }
-
-        let file =
-            atomic_open_or_create_file(&file_name, flags, &[], options.file_opts.clone(), false)?;
-
-        Ok(file)
-    }
-
-    pub fn log<S: AsRef<str>>(&mut self, msg: S) {
-        let msg = msg.as_ref();
-
-        if self.options.to_stdout {
-            let mut stdout = std::io::stdout();
-            stdout.write_all(msg.as_bytes()).unwrap();
-            stdout.write_all(b"\n").unwrap();
-        }
-
-        let line = if self.options.prefix_time {
-            let now = proxmox_time::epoch_i64();
-            let rfc3339 = match proxmox_time::epoch_to_rfc3339(now) {
-                Ok(rfc3339) => rfc3339,
-                Err(_) => "1970-01-01T00:00:00Z".into(), // for safety, should really not happen!
-            };
-            format!("{}: {}\n", rfc3339, msg)
-        } else {
-            format!("{}\n", msg)
-        };
-        if let Err(err) = self.file.write_all(line.as_bytes()) {
-            // avoid panicking, log methods should not do that
-            // FIXME: or, return result???
-            log::error!("error writing to log file - {}", err);
-        }
-    }
-}
-
-impl std::io::Write for FileLogger {
-    fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
-        if self.options.to_stdout {
-            let _ = std::io::stdout().write(buf);
-        }
-        self.file.write(buf)
-    }
-
-    fn flush(&mut self) -> Result<(), std::io::Error> {
-        if self.options.to_stdout {
-            let _ = std::io::stdout().flush();
-        }
-        self.file.flush()
-    }
-}
diff --git a/proxmox-rest-server/src/lib.rs b/proxmox-rest-server/src/lib.rs
index ce9e4f15..e8866366 100644
--- a/proxmox-rest-server/src/lib.rs
+++ b/proxmox-rest-server/src/lib.rs
@@ -41,9 +41,6 @@ pub use state::*;
 mod command_socket;
 pub use command_socket::*;
 
-mod file_logger;
-pub use file_logger::{FileLogOptions, FileLogger};
-
 mod api_config;
 pub use api_config::{ApiConfig, AuthError, AuthHandler, IndexHandler, UnixAcceptor};
 
diff --git a/proxmox-rest-server/src/rest.rs b/proxmox-rest-server/src/rest.rs
index 39ae9a18..3a3c287c 100644
--- a/proxmox-rest-server/src/rest.rs
+++ b/proxmox-rest-server/src/rest.rs
@@ -31,10 +31,10 @@ use proxmox_schema::{ObjectSchemaType, ParameterSchema};
 
 use proxmox_async::stream::AsyncReaderStream;
 use proxmox_compression::{DeflateEncoder, Level};
+use proxmox_log::FileLogger;
 
 use crate::{
-    formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, FileLogger,
-    RestEnvironment,
+    formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, RestEnvironment,
 };
 
 extern "C" {
diff --git a/proxmox-rest-server/src/worker_task.rs b/proxmox-rest-server/src/worker_task.rs
index 5bab4cdc..58537f54 100644
--- a/proxmox-rest-server/src/worker_task.rs
+++ b/proxmox-rest-server/src/worker_task.rs
@@ -1,3 +1,4 @@
+use std::cell::{Cell, RefCell};
 use std::collections::{HashMap, VecDeque};
 use std::fs::File;
 use std::io::{BufRead, BufReader, Read, Write};
@@ -12,21 +13,23 @@ use futures::*;
 use lazy_static::lazy_static;
 use nix::fcntl::OFlag;
 use once_cell::sync::OnceCell;
+use proxmox_log::{LOGGER, WARN_COUNTER};
 use serde::{Deserialize, Serialize};
 use serde_json::{json, Value};
 use tokio::signal::unix::SignalKind;
 use tokio::sync::oneshot;
+use tracing::{info, warn};
 
 use proxmox_lang::try_block;
+use proxmox_log::{FileLogOptions, FileLogger};
 use proxmox_schema::upid::UPID;
 use proxmox_sys::fs::{atomic_open_or_create_file, create_path, replace_file, CreateOptions};
 use proxmox_sys::linux::procfs;
-use proxmox_sys::task_warn;
 
 use proxmox_sys::logrotate::{LogRotate, LogRotateFiles};
 use proxmox_sys::WorkerTaskContext;
 
-use crate::{CommandSocket, FileLogOptions, FileLogger};
+use crate::CommandSocket;
 
 struct TaskListLockGuard(File);
 
@@ -294,7 +297,7 @@ pub fn rotate_task_log_archive(
 
 /// removes all task logs that are older than the oldest task entry in the
 /// task archive
-pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Result<(), Error> {
+pub fn cleanup_old_tasks(compressed: bool) -> Result<(), Error> {
     let setup = worker_task_setup()?;
 
     let _lock = setup.lock_task_list_files(true)?;
@@ -332,7 +335,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
                 Ok(files) => files,
                 Err(err) if err.kind() == std::io::ErrorKind::NotFound => continue,
                 Err(err) => {
-                    task_warn!(worker, "could not check task logs in '{:02X}': {}", i, err);
+                    warn!(
+                        tasklog = true,
+                        "could not check task logs in '{:02X}': {}", i, err
+                    );
                     continue;
                 }
             };
@@ -340,11 +346,9 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
                 let file = match file {
                     Ok(file) => file,
                     Err(err) => {
-                        task_warn!(
-                            worker,
-                            "could not check some task log in '{:02X}': {}",
-                            i,
-                            err
+                        warn!(
+                            tasklog = true,
+                            "could not check some task log in '{:02X}': {}", i, err
                         );
                         continue;
                     }
@@ -354,7 +358,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
                 let modified = match get_modified(file) {
                     Ok(modified) => modified,
                     Err(err) => {
-                        task_warn!(worker, "error getting mtime for '{:?}': {}", path, err);
+                        warn!(
+                            tasklog = true,
+                            "error getting mtime for '{:?}': {}", path, err
+                        );
                         continue;
                     }
                 };
@@ -364,7 +371,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
                         Ok(()) => {}
                         Err(err) if err.kind() == std::io::ErrorKind::NotFound => {}
                         Err(err) => {
-                            task_warn!(worker, "could not remove file '{:?}': {}", path, err)
+                            warn!(
+                                tasklog = true,
+                                "could not remove file '{:?}': {}", path, err
+                            )
                         }
                     }
                 }
@@ -822,9 +832,7 @@ impl std::fmt::Display for WorkerTask {
 }
 
 struct WorkerTaskData {
-    logger: FileLogger,
     progress: f64, // 0..1
-    warn_count: u64,
     pub abort_listeners: Vec<oneshot::Sender<()>>,
 }
 
@@ -834,7 +842,7 @@ impl WorkerTask {
         worker_id: Option<String>,
         auth_id: String,
         to_stdout: bool,
-    ) -> Result<Arc<Self>, Error> {
+    ) -> Result<(Arc<Self>, FileLogger), Error> {
         let setup = worker_task_setup()?;
 
         let upid = UPID::new(worker_type, worker_id, auth_id)?;
@@ -857,9 +865,7 @@ impl WorkerTask {
             upid: upid.clone(),
             abort_requested: AtomicBool::new(false),
             data: Mutex::new(WorkerTaskData {
-                logger,
                 progress: 0.0,
-                warn_count: 0,
                 abort_listeners: vec![],
             }),
         });
@@ -873,7 +879,7 @@ impl WorkerTask {
 
         setup.update_active_workers(Some(&upid))?;
 
-        Ok(worker)
+        Ok((worker, logger))
     }
 
     /// Spawn a new tokio task/future.
@@ -888,13 +894,20 @@ impl WorkerTask {
         F: Send + 'static + FnOnce(Arc<WorkerTask>) -> T,
         T: Send + 'static + Future<Output = Result<(), Error>>,
     {
-        let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
+        let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
         let upid_str = worker.upid.to_string();
         let f = f(worker.clone());
-        tokio::spawn(async move {
-            let result = f.await;
-            worker.log_result(&result);
-        });
+
+        let logger = RefCell::new(logger);
+        let counter = Cell::new(0);
+        tokio::spawn(LOGGER.scope(logger, async move {
+            WARN_COUNTER
+                .scope(counter, async move {
+                    let result = f.await;
+                    worker.log_result(&result);
+                })
+                .await;
+        }));
 
         Ok(upid_str)
     }
@@ -910,22 +923,27 @@ impl WorkerTask {
     where
         F: Send + UnwindSafe + 'static + FnOnce(Arc<WorkerTask>) -> Result<(), Error>,
     {
-        let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
+        let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
         let upid_str = worker.upid.to_string();
 
         let _child = std::thread::Builder::new()
             .name(upid_str.clone())
             .spawn(move || {
-                let worker1 = worker.clone();
-                let result = match std::panic::catch_unwind(move || f(worker1)) {
-                    Ok(r) => r,
-                    Err(panic) => match panic.downcast::<&str>() {
-                        Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
-                        Err(_) => Err(format_err!("worker panicked: unknown type.")),
-                    },
-                };
+                LOGGER.sync_scope(RefCell::new(logger), || {
+                    WARN_COUNTER.sync_scope(Cell::new(0), || {
+                        let worker1 = worker.clone();
+
+                        let result = match std::panic::catch_unwind(move || f(worker1)) {
+                            Ok(r) => r,
+                            Err(panic) => match panic.downcast::<&str>() {
+                                Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
+                                Err(_) => Err(format_err!("worker panicked: unknown type.")),
+                            },
+                        };
 
-                worker.log_result(&result);
+                        worker.log_result(&result);
+                    });
+                });
             });
 
         Ok(upid_str)
@@ -933,7 +951,7 @@ impl WorkerTask {
 
     /// create state from self and a result
     pub fn create_state(&self, result: &Result<(), Error>) -> TaskState {
-        let warn_count = self.data.lock().unwrap().warn_count;
+        let warn_count = WARN_COUNTER.try_with(Cell::get).unwrap_or(0);
 
         let endtime = proxmox_time::epoch_i64();
 
@@ -964,15 +982,7 @@ impl WorkerTask {
 
     /// Log a message.
     pub fn log_message<S: AsRef<str>>(&self, msg: S) {
-        let mut data = self.data.lock().unwrap();
-        data.logger.log(msg);
-    }
-
-    /// Log a message as warning.
-    pub fn log_warning<S: AsRef<str>>(&self, msg: S) {
-        let mut data = self.data.lock().unwrap();
-        data.logger.log(format!("WARN: {}", msg.as_ref()));
-        data.warn_count += 1;
+        info!(tasklog = true, "{}", msg.as_ref());
     }
 
     /// Set progress indicator
@@ -1035,16 +1045,6 @@ impl WorkerTaskContext for WorkerTask {
     fn fail_on_shutdown(&self) -> Result<(), Error> {
         crate::fail_on_shutdown()
     }
-
-    fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
-        match level {
-            log::Level::Error => self.log_warning(message.to_string()),
-            log::Level::Warn => self.log_warning(message.to_string()),
-            log::Level::Info => self.log_message(message.to_string()),
-            log::Level::Debug => self.log_message(format!("DEBUG: {}", message)),
-            log::Level::Trace => self.log_message(format!("TRACE: {}", message)),
-        }
-    }
 }
 
 /// Wait for a locally spanned worker task
diff --git a/proxmox-sys/src/worker_task_context.rs b/proxmox-sys/src/worker_task_context.rs
index 2c86857c..743ae53f 100644
--- a/proxmox-sys/src/worker_task_context.rs
+++ b/proxmox-sys/src/worker_task_context.rs
@@ -26,9 +26,6 @@ pub trait WorkerTaskContext: Send + Sync {
         }
         Ok(())
     }
-
-    /// Create a log message for this task.
-    fn log(&self, level: log::Level, message: &std::fmt::Arguments);
 }
 
 /// Convenience implementation:
@@ -48,48 +45,4 @@ impl<T: WorkerTaskContext + ?Sized> WorkerTaskContext for std::sync::Arc<T> {
     fn fail_on_shutdown(&self) -> Result<(), Error> {
         <T as WorkerTaskContext>::fail_on_shutdown(self)
     }
-
-    fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
-        <T as WorkerTaskContext>::log(self, level, message)
-    }
-}
-
-/// Log an error to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_error {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Error, &format_args!($($fmt)+))
-    }};
-}
-
-/// Log a warning to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_warn {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Warn, &format_args!($($fmt)+))
-    }};
-}
-
-/// Log a message to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_log {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Info, &format_args!($($fmt)+))
-    }};
-}
-
-/// Log a debug message to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_debug {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Debug, &format_args!($($fmt)+))
-    }};
-}
-
-/// Log a trace message to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_trace {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Trace, &format_args!($($fmt)+))
-    }};
 }
-- 
2.43.0



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel



* [pbs-devel] [PATCH proxmox-backup v5 3/4] switch from task_log! macro to tracing
From: Gabriel Goller @ 2024-06-13 13:56 UTC (permalink / raw)
  To: pbs-devel

Import `proxmox-log` and substitute all `task_log!`
(and `task_warn!`, `task_error!`) invocations with the corresponding
tracing calls (`info!`, `warn!`, etc.). Remove worker references where
they are no longer necessary.
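
The mechanism this relies on (per the cover letter) is a logger stored in
task-local storage, so log macros no longer need a `worker` handle. A
minimal, self-contained sketch of that idea follows; it uses `thread_local!`
and a `Vec<String>` as stand-ins for the real `tokio::task_local!` state
(`proxmox_log::LOGGER` / `WARN_COUNTER`) and the `FileLogger`, and the
`info`/`warn` functions here are illustrative helpers, not the actual
`tracing` macros:

```rust
use std::cell::{Cell, RefCell};

// Simplified stand-ins for the patch's task-local state. The real code
// uses tokio's task_local! and writes through a FileLogger; a thread-local
// Vec<String> keeps this sketch self-contained.
thread_local! {
    static LOG_LINES: RefCell<Vec<String>> = RefCell::new(Vec::new());
    static WARN_COUNT: Cell<u64> = Cell::new(0);
}

// Analogue of what the tracing layer does for `info!`: append to whatever
// logger is found in (thread-/task-)local storage.
fn info(msg: &str) {
    LOG_LINES.with(|l| l.borrow_mut().push(msg.to_string()));
}

// Analogue of `warn!`: prefix with "WARN: " and bump the counter,
// mirroring the removed `log_warning`/`warn_count` logic.
fn warn(msg: &str) {
    LOG_LINES.with(|l| l.borrow_mut().push(format!("WARN: {msg}")));
    WARN_COUNT.with(|c| c.set(c.get() + 1));
}

fn main() {
    // A worker body no longer threads a `worker` reference through to log:
    info("Chunkstore create: 50%");
    warn("could not check task logs in '0A': permission denied");

    // The task result can later read the warning count from local storage,
    // as `create_state` does with WARN_COUNTER in the patch.
    let warn_count = WARN_COUNT.with(Cell::get);
    assert_eq!(warn_count, 1);
    LOG_LINES.with(|l| {
        let lines = l.borrow();
        assert_eq!(lines.len(), 2);
        assert!(lines[1].starts_with("WARN: "));
    });
    println!("logged 2 lines, {warn_count} warning(s)");
}
```

This is why call sites shrink from `task_warn!(worker, ...)` to plain
`warn!(...)`: the destination is resolved from local storage, not from a
parameter.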

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
---
 Cargo.toml                       |   7 +
 debian/control                   |   2 +
 pbs-datastore/Cargo.toml         |   1 +
 pbs-datastore/src/chunk_store.rs |  30 +---
 pbs-datastore/src/datastore.rs   |  70 ++++------
 src/backup/verify.rs             | 103 ++++----------
 src/bin/proxmox-backup-api.rs    |  10 +-
 src/bin/proxmox-backup-proxy.rs  |  42 ++----
 src/server/gc_job.rs             |   6 +-
 src/server/prune_job.rs          |  26 ++--
 src/server/pull.rs               | 230 ++++++++++---------------------
 src/server/realm_sync_job.rs     |  44 ++----
 src/server/verify_job.rs         |  10 +-
 src/tape/drive/mod.rs            |  34 ++---
 src/tape/pool_writer/mod.rs      |  88 +++++-------
 src/tools/disks/mod.rs           |  21 ++-
 tests/worker-task-abort.rs       |   9 +-
 17 files changed, 252 insertions(+), 481 deletions(-)

diff --git a/Cargo.toml b/Cargo.toml
index 5eddd493..21f07e67 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -63,6 +63,7 @@ proxmox-http = { version = "0.9.0", features = [ "client", "http-helpers", "webs
 proxmox-human-byte = "0.1"
 proxmox-io = "1.0.1" # tools and client use "tokio" feature
 proxmox-lang = "1.1"
+proxmox-log = "0.1"
 proxmox-ldap = "0.2.1"
 proxmox-metrics = "0.3.1"
 proxmox-notify = "0.4"
@@ -145,6 +146,8 @@ tokio = "1.6"
 tokio-openssl = "0.6.1"
 tokio-stream = "0.1.0"
 tokio-util = { version = "0.7", features = [ "io" ] }
+tracing = "0.1"
+tracing-subscriber = "0.3.16"
 tower-service = "0.3.0"
 udev = "0.4"
 url = "2.1"
@@ -192,6 +195,8 @@ tokio = { workspace = true, features = [ "fs", "io-util", "io-std", "macros", "n
 tokio-openssl.workspace = true
 tokio-stream.workspace = true
 tokio-util = { workspace = true, features = [ "codec" ] }
+tracing.workspace = true
+tracing-subscriber.workspace = true
 tower-service.workspace = true
 udev.workspace = true
 url.workspace = true
@@ -210,6 +215,7 @@ proxmox-http = { workspace = true, features = [ "client-trait", "proxmox-async",
 proxmox-human-byte.workspace = true
 proxmox-io.workspace = true
 proxmox-lang.workspace = true
+proxmox-log.workspace = true
 proxmox-ldap.workspace = true
 proxmox-metrics.workspace = true
 proxmox-notify = { workspace = true, features = [ "pbs-context" ] }
@@ -257,6 +263,7 @@ proxmox-rrd.workspace = true
 #proxmox-human-byte = { path = "../proxmox/proxmox-human-byte" }
 #proxmox-io = { path = "../proxmox/proxmox-io" }
 #proxmox-lang = { path = "../proxmox/proxmox-lang" }
+#proxmox-log = { path = "../proxmox/proxmox-log" }
 #proxmox-ldap = { path = "../proxmox/proxmox-ldap" }
 #proxmox-metrics = { path = "../proxmox/proxmox-metrics" }
 #proxmox-notify = { path = "../proxmox/proxmox-notify" }
diff --git a/debian/control b/debian/control
index cbf9fbd0..9a958888 100644
--- a/debian/control
+++ b/debian/control
@@ -134,6 +134,8 @@ Build-Depends: bash-completion,
                librust-tokio-util-0.7+default-dev,
                librust-tokio-util-0.7+io-dev,
                librust-tower-service-0.3+default-dev,
+               librust-tracing-0.1+default-dev,
+               librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~),
                librust-udev-0.4+default-dev,
                librust-url-2+default-dev (>= 2.1-~~),
                librust-walkdir-2+default-dev,
diff --git a/pbs-datastore/Cargo.toml b/pbs-datastore/Cargo.toml
index b793dc5b..fcb11ac4 100644
--- a/pbs-datastore/Cargo.toml
+++ b/pbs-datastore/Cargo.toml
@@ -20,6 +20,7 @@ openssl.workspace = true
 serde.workspace = true
 serde_json.workspace = true
 tokio = { workspace = true, features = [] }
+tracing.workspace = true
 walkdir.workspace = true
 zstd.workspace = true
 
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index 9f6289c9..8402d2fe 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -3,6 +3,7 @@ use std::path::{Path, PathBuf};
 use std::sync::{Arc, Mutex};
 
 use anyhow::{bail, format_err, Error};
+use tracing::info;
 
 use pbs_api_types::{DatastoreFSyncLevel, GarbageCollectionStatus};
 use proxmox_io::ReadExt;
@@ -10,7 +11,6 @@ use proxmox_sys::fs::{create_dir, create_path, file_type_from_file_stat, CreateO
 use proxmox_sys::process_locker::{
     ProcessLockExclusiveGuard, ProcessLockSharedGuard, ProcessLocker,
 };
-use proxmox_sys::task_log;
 use proxmox_sys::WorkerTaskContext;
 
 use crate::file_formats::{
@@ -92,7 +92,6 @@ impl ChunkStore {
         path: P,
         uid: nix::unistd::Uid,
         gid: nix::unistd::Gid,
-        worker: Option<&dyn WorkerTaskContext>,
         sync_level: DatastoreFSyncLevel,
     ) -> Result<Self, Error>
     where
@@ -143,9 +142,7 @@ impl ChunkStore {
             }
             let percentage = (i * 100) / (64 * 1024);
             if percentage != last_percentage {
-                if let Some(worker) = worker {
-                    task_log!(worker, "Chunkstore create: {}%", percentage)
-                }
+                info!("Chunkstore create: {}%", percentage);
                 last_percentage = percentage;
             }
         }
@@ -374,7 +371,7 @@ impl ChunkStore {
         for (entry, percentage, bad) in self.get_chunk_iterator()? {
             if last_percentage != percentage {
                 last_percentage = percentage;
-                task_log!(worker, "processed {}% ({} chunks)", percentage, chunk_count,);
+                info!("processed {}% ({} chunks)", percentage, chunk_count,);
             }
 
             worker.check_abort()?;
@@ -578,15 +575,8 @@ fn test_chunk_store1() {
     let user = nix::unistd::User::from_uid(nix::unistd::Uid::current())
         .unwrap()
         .unwrap();
-    let chunk_store = ChunkStore::create(
-        "test",
-        &path,
-        user.uid,
-        user.gid,
-        None,
-        DatastoreFSyncLevel::None,
-    )
-    .unwrap();
+    let chunk_store =
+        ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None).unwrap();
 
     let (chunk, digest) = crate::data_blob::DataChunkBuilder::new(&[0u8, 1u8])
         .build()
@@ -598,14 +588,8 @@ fn test_chunk_store1() {
     let (exists, _) = chunk_store.insert_chunk(&chunk, &digest).unwrap();
     assert!(exists);
 
-    let chunk_store = ChunkStore::create(
-        "test",
-        &path,
-        user.uid,
-        user.gid,
-        None,
-        DatastoreFSyncLevel::None,
-    );
+    let chunk_store =
+        ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None);
     assert!(chunk_store.is_err());
 
     if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index f95da761..8cb98f59 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -7,6 +7,7 @@ use std::sync::{Arc, Mutex};
 use anyhow::{bail, format_err, Error};
 use lazy_static::lazy_static;
 use nix::unistd::{unlinkat, UnlinkatFlags};
+use tracing::{info, warn};
 
 use proxmox_human_byte::HumanByte;
 use proxmox_schema::ApiType;
@@ -16,7 +17,6 @@ use proxmox_sys::fs::{file_read_optional_string, replace_file, CreateOptions};
 use proxmox_sys::fs::{lock_dir_noblock, DirLockGuard};
 use proxmox_sys::process_locker::ProcessLockSharedGuard;
 use proxmox_sys::WorkerTaskContext;
-use proxmox_sys::{task_log, task_warn};
 
 use pbs_api_types::{
     Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig, DatastoreFSyncLevel,
@@ -965,8 +965,7 @@ impl DataStore {
             let digest = index.index_digest(pos).unwrap();
             if !self.inner.chunk_store.cond_touch_chunk(digest, false)? {
                 let hex = hex::encode(digest);
-                task_warn!(
-                    worker,
+                warn!(
                     "warning: unable to access non-existent chunk {hex}, required by {file_name:?}"
                 );
 
@@ -1032,8 +1031,7 @@ impl DataStore {
 
             let percentage = (i + 1) * 100 / image_count;
             if percentage > last_percentage {
-                task_log!(
-                    worker,
+                info!(
                     "marked {}% ({} of {} index files)",
                     percentage,
                     i + 1,
@@ -1044,8 +1042,7 @@ impl DataStore {
         }
 
         if strange_paths_count > 0 {
-            task_log!(
-                worker,
+            info!(
                 "found (and marked) {} index files outside of expected directory scheme",
                 strange_paths_count,
             );
@@ -1085,11 +1082,11 @@ impl DataStore {
                 ..Default::default()
             };
 
-            task_log!(worker, "Start GC phase1 (mark used chunks)");
+            info!("Start GC phase1 (mark used chunks)");
 
             self.mark_used_chunks(&mut gc_status, worker)?;
 
-            task_log!(worker, "Start GC phase2 (sweep unused chunks)");
+            info!("Start GC phase2 (sweep unused chunks)");
             self.inner.chunk_store.sweep_unused_chunks(
                 oldest_writer,
                 phase1_start_time,
@@ -1097,30 +1094,27 @@ impl DataStore {
                 worker,
             )?;
 
-            task_log!(
-                worker,
+            info!(
                 "Removed garbage: {}",
                 HumanByte::from(gc_status.removed_bytes),
             );
-            task_log!(worker, "Removed chunks: {}", gc_status.removed_chunks);
+            info!("Removed chunks: {}", gc_status.removed_chunks);
             if gc_status.pending_bytes > 0 {
-                task_log!(
-                    worker,
+                info!(
                     "Pending removals: {} (in {} chunks)",
                     HumanByte::from(gc_status.pending_bytes),
                     gc_status.pending_chunks,
                 );
             }
             if gc_status.removed_bad > 0 {
-                task_log!(worker, "Removed bad chunks: {}", gc_status.removed_bad);
+                info!("Removed bad chunks: {}", gc_status.removed_bad);
             }
 
             if gc_status.still_bad > 0 {
-                task_log!(worker, "Leftover bad chunks: {}", gc_status.still_bad);
+                info!("Leftover bad chunks: {}", gc_status.still_bad);
             }
 
-            task_log!(
-                worker,
+            info!(
                 "Original data usage: {}",
                 HumanByte::from(gc_status.index_data_bytes),
             );
@@ -1128,15 +1122,14 @@ impl DataStore {
             if gc_status.index_data_bytes > 0 {
                 let comp_per =
                     (gc_status.disk_bytes as f64 * 100.) / gc_status.index_data_bytes as f64;
-                task_log!(
-                    worker,
+                info!(
                     "On-Disk usage: {} ({:.2}%)",
                     HumanByte::from(gc_status.disk_bytes),
                     comp_per,
                 );
             }
 
-            task_log!(worker, "On-Disk chunks: {}", gc_status.disk_chunks);
+            info!("On-Disk chunks: {}", gc_status.disk_chunks);
 
             let deduplication_factor = if gc_status.disk_bytes > 0 {
                 (gc_status.index_data_bytes as f64) / (gc_status.disk_bytes as f64)
@@ -1144,11 +1137,11 @@ impl DataStore {
                 1.0
             };
 
-            task_log!(worker, "Deduplication factor: {:.2}", deduplication_factor);
+            info!("Deduplication factor: {:.2}", deduplication_factor);
 
             if gc_status.disk_chunks > 0 {
                 let avg_chunk = gc_status.disk_bytes / (gc_status.disk_chunks as u64);
-                task_log!(worker, "Average chunk size: {}", HumanByte::from(avg_chunk));
+                info!("Average chunk size: {}", HumanByte::from(avg_chunk));
             }
 
             if let Ok(serialized) = serde_json::to_string(&gc_status) {
@@ -1380,11 +1373,7 @@ impl DataStore {
     /// Destroy a datastore. This requires that there are no active operations on the datastore.
     ///
     /// This is a synchronous operation and should be run in a worker-thread.
-    pub fn destroy(
-        name: &str,
-        destroy_data: bool,
-        worker: &dyn WorkerTaskContext,
-    ) -> Result<(), Error> {
+    pub fn destroy(name: &str, destroy_data: bool) -> Result<(), Error> {
         let config_lock = pbs_config::datastore::lock_config()?;
 
         let (mut config, _digest) = pbs_config::datastore::config()?;
@@ -1412,13 +1401,13 @@ impl DataStore {
             let remove = |subdir, ok: &mut bool| {
                 if let Err(err) = std::fs::remove_dir_all(base.join(subdir)) {
                     if err.kind() != io::ErrorKind::NotFound {
-                        task_warn!(worker, "failed to remove {subdir:?} subdirectory: {err}");
+                        warn!("failed to remove {subdir:?} subdirectory: {err}");
                         *ok = false;
                     }
                 }
             };
 
-            task_log!(worker, "Deleting datastore data...");
+            info!("Deleting datastore data...");
             remove("ns", &mut ok); // ns first
             remove("ct", &mut ok);
             remove("vm", &mut ok);
@@ -1427,7 +1416,7 @@ impl DataStore {
             if ok {
                 if let Err(err) = std::fs::remove_file(base.join(".gc-status")) {
                     if err.kind() != io::ErrorKind::NotFound {
-                        task_warn!(worker, "failed to remove .gc-status file: {err}");
+                        warn!("failed to remove .gc-status file: {err}");
                         ok = false;
                     }
                 }
@@ -1441,7 +1430,7 @@ impl DataStore {
 
         // now the config
         if ok {
-            task_log!(worker, "Removing datastore from config...");
+            info!("Removing datastore from config...");
             let _lock = pbs_config::datastore::lock_config()?;
             let _ = config.sections.remove(name);
             pbs_config::datastore::save_config(&config)?;
@@ -1452,35 +1441,32 @@ impl DataStore {
             if ok {
                 if let Err(err) = std::fs::remove_file(base.join(".lock")) {
                     if err.kind() != io::ErrorKind::NotFound {
-                        task_warn!(worker, "failed to remove .lock file: {err}");
+                        warn!("failed to remove .lock file: {err}");
                         ok = false;
                     }
                 }
             }
 
             if ok {
-                task_log!(worker, "Finished deleting data.");
+                info!("Finished deleting data.");
 
                 match std::fs::remove_dir(base) {
-                    Ok(()) => task_log!(worker, "Removed empty datastore directory."),
+                    Ok(()) => info!("Removed empty datastore directory."),
                     Err(err) if err.kind() == io::ErrorKind::NotFound => {
                         // weird, but ok
                     }
                     Err(err) if err.is_errno(nix::errno::Errno::EBUSY) => {
-                        task_warn!(
-                            worker,
-                            "Cannot delete datastore directory (is it a mount point?)."
-                        )
+                        warn!("Cannot delete datastore directory (is it a mount point?).")
                     }
                     Err(err) if err.is_errno(nix::errno::Errno::ENOTEMPTY) => {
-                        task_warn!(worker, "Datastore directory not empty, not deleting.")
+                        warn!("Datastore directory not empty, not deleting.")
                     }
                     Err(err) => {
-                        task_warn!(worker, "Failed to remove datastore directory: {err}");
+                        warn!("Failed to remove datastore directory: {err}");
                     }
                 }
             } else {
-                task_log!(worker, "There were errors deleting data.");
+                info!("There were errors deleting data.");
             }
         }
 
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index c972e532..dfad8824 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -5,8 +5,9 @@ use std::sync::{Arc, Mutex};
 use std::time::Instant;
 
 use anyhow::{bail, format_err, Error};
+use tracing::info;
 
-use proxmox_sys::{task_log, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 
 use pbs_api_types::{
     print_ns_and_snapshot, print_store_and_ns, Authid, BackupNamespace, BackupType, CryptMode,
@@ -69,11 +70,7 @@ fn verify_blob(backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
     }
 }
 
-fn rename_corrupted_chunk(
-    datastore: Arc<DataStore>,
-    digest: &[u8; 32],
-    worker: &dyn WorkerTaskContext,
-) {
+fn rename_corrupted_chunk(datastore: Arc<DataStore>, digest: &[u8; 32]) {
     let (path, digest_str) = datastore.chunk_path(digest);
 
     let mut counter = 0;
@@ -89,17 +86,12 @@ fn rename_corrupted_chunk(
 
     match std::fs::rename(&path, &new_path) {
         Ok(_) => {
-            task_log!(worker, "corrupted chunk renamed to {:?}", &new_path);
+            info!("corrupted chunk renamed to {:?}", &new_path);
         }
         Err(err) => {
             match err.kind() {
                 std::io::ErrorKind::NotFound => { /* ignored */ }
-                _ => task_log!(
-                    worker,
-                    "could not rename corrupted chunk {:?} - {}",
-                    &path,
-                    err
-                ),
+                _ => info!("could not rename corrupted chunk {:?} - {}", &path, err),
             }
         }
     };
@@ -117,7 +109,6 @@ fn verify_index_chunks(
     let mut read_bytes = 0;
     let mut decoded_bytes = 0;
 
-    let worker2 = Arc::clone(&verify_worker.worker);
     let datastore2 = Arc::clone(&verify_worker.datastore);
     let corrupt_chunks2 = Arc::clone(&verify_worker.corrupt_chunks);
     let verified_chunks2 = Arc::clone(&verify_worker.verified_chunks);
@@ -130,7 +121,7 @@ fn verify_index_chunks(
             let chunk_crypt_mode = match chunk.crypt_mode() {
                 Err(err) => {
                     corrupt_chunks2.lock().unwrap().insert(digest);
-                    task_log!(worker2, "can't verify chunk, unknown CryptMode - {}", err);
+                    info!("can't verify chunk, unknown CryptMode - {}", err);
                     errors2.fetch_add(1, Ordering::SeqCst);
                     return Ok(());
                 }
@@ -138,20 +129,18 @@ fn verify_index_chunks(
             };
 
             if chunk_crypt_mode != crypt_mode {
-                task_log!(
-                    worker2,
+                info!(
                     "chunk CryptMode {:?} does not match index CryptMode {:?}",
-                    chunk_crypt_mode,
-                    crypt_mode
+                    chunk_crypt_mode, crypt_mode
                 );
                 errors2.fetch_add(1, Ordering::SeqCst);
             }
 
             if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
                 corrupt_chunks2.lock().unwrap().insert(digest);
-                task_log!(worker2, "{}", err);
+                info!("{}", err);
                 errors2.fetch_add(1, Ordering::SeqCst);
-                rename_corrupted_chunk(datastore2.clone(), &digest, &worker2);
+                rename_corrupted_chunk(datastore2.clone(), &digest);
             } else {
                 verified_chunks2.lock().unwrap().insert(digest);
             }
@@ -175,11 +164,7 @@ fn verify_index_chunks(
             .contains(digest)
         {
             let digest_str = hex::encode(digest);
-            task_log!(
-                verify_worker.worker,
-                "chunk {} was marked as corrupt",
-                digest_str
-            );
+            info!("chunk {} was marked as corrupt", digest_str);
             errors.fetch_add(1, Ordering::SeqCst);
             true
         } else {
@@ -218,17 +203,9 @@ fn verify_index_chunks(
                     .lock()
                     .unwrap()
                     .insert(info.digest);
-                task_log!(
-                    verify_worker.worker,
-                    "can't verify chunk, load failed - {}",
-                    err
-                );
+                info!("can't verify chunk, load failed - {}", err);
                 errors.fetch_add(1, Ordering::SeqCst);
-                rename_corrupted_chunk(
-                    verify_worker.datastore.clone(),
-                    &info.digest,
-                    &verify_worker.worker,
-                );
+                rename_corrupted_chunk(verify_worker.datastore.clone(), &info.digest);
             }
             Ok(chunk) => {
                 let size = info.size();
@@ -251,15 +228,9 @@ fn verify_index_chunks(
 
     let error_count = errors.load(Ordering::SeqCst);
 
-    task_log!(
-        verify_worker.worker,
+    info!(
         "  verified {:.2}/{:.2} MiB in {:.2} seconds, speed {:.2}/{:.2} MiB/s ({} errors)",
-        read_bytes_mib,
-        decoded_bytes_mib,
-        elapsed,
-        read_speed,
-        decode_speed,
-        error_count,
+        read_bytes_mib, decoded_bytes_mib, elapsed, read_speed, decode_speed, error_count,
     );
 
     if errors.load(Ordering::SeqCst) > 0 {
@@ -329,8 +300,7 @@ pub fn verify_backup_dir(
     filter: Option<&dyn Fn(&BackupManifest) -> bool>,
 ) -> Result<bool, Error> {
     if !backup_dir.full_path().exists() {
-        task_log!(
-            verify_worker.worker,
+        info!(
             "SKIPPED: verify {}:{} - snapshot does not exist (anymore).",
             verify_worker.datastore.name(),
             backup_dir.dir(),
@@ -348,8 +318,7 @@ pub fn verify_backup_dir(
             verify_backup_dir_with_lock(verify_worker, backup_dir, upid, filter, snap_lock)
         }
         Err(err) => {
-            task_log!(
-                verify_worker.worker,
+            info!(
                 "SKIPPED: verify {}:{} - could not acquire snapshot lock: {}",
                 verify_worker.datastore.name(),
                 backup_dir.dir(),
@@ -371,8 +340,7 @@ pub fn verify_backup_dir_with_lock(
     let manifest = match backup_dir.load_manifest() {
         Ok((manifest, _)) => manifest,
         Err(err) => {
-            task_log!(
-                verify_worker.worker,
+            info!(
                 "verify {}:{} - manifest load error: {}",
                 verify_worker.datastore.name(),
                 backup_dir.dir(),
@@ -384,8 +352,7 @@ pub fn verify_backup_dir_with_lock(
 
     if let Some(filter) = filter {
         if !filter(&manifest) {
-            task_log!(
-                verify_worker.worker,
+            info!(
                 "SKIPPED: verify {}:{} (recently verified)",
                 verify_worker.datastore.name(),
                 backup_dir.dir(),
@@ -394,8 +361,7 @@ pub fn verify_backup_dir_with_lock(
         }
     }
 
-    task_log!(
-        verify_worker.worker,
+    info!(
         "verify {}:{}",
         verify_worker.datastore.name(),
         backup_dir.dir()
@@ -406,7 +372,7 @@ pub fn verify_backup_dir_with_lock(
     let mut verify_result = VerifyState::Ok;
     for info in manifest.files() {
         let result = proxmox_lang::try_block!({
-            task_log!(verify_worker.worker, "  check {}", info.filename);
+            info!("  check {}", info.filename);
             match archive_type(&info.filename)? {
                 ArchiveType::FixedIndex => verify_fixed_index(verify_worker, backup_dir, info),
                 ArchiveType::DynamicIndex => verify_dynamic_index(verify_worker, backup_dir, info),
@@ -418,8 +384,7 @@ pub fn verify_backup_dir_with_lock(
         verify_worker.worker.fail_on_shutdown()?;
 
         if let Err(err) = result {
-            task_log!(
-                verify_worker.worker,
+            info!(
                 "verify {}:{}/{} failed: {}",
                 verify_worker.datastore.name(),
                 backup_dir.dir(),
@@ -463,8 +428,7 @@ pub fn verify_backup_group(
     let mut list = match group.list_backups() {
         Ok(list) => list,
         Err(err) => {
-            task_log!(
-                verify_worker.worker,
+            info!(
                 "verify {}, group {} - unable to list backups: {}",
                 print_store_and_ns(verify_worker.datastore.name(), group.backup_ns()),
                 group.group(),
@@ -475,8 +439,7 @@ pub fn verify_backup_group(
     };
 
     let snapshot_count = list.len();
-    task_log!(
-        verify_worker.worker,
+    info!(
         "verify group {}:{} ({} snapshots)",
         verify_worker.datastore.name(),
         group.group(),
@@ -494,9 +457,8 @@ pub fn verify_backup_group(
             ));
         }
         progress.done_snapshots = pos as u64 + 1;
-        task_log!(verify_worker.worker, "percentage done: {}", progress);
+        info!("percentage done: {}", progress);
     }
-
     Ok(errors)
 }
 
@@ -516,16 +478,11 @@ pub fn verify_all_backups(
     filter: Option<&dyn Fn(&BackupManifest) -> bool>,
 ) -> Result<Vec<String>, Error> {
     let mut errors = Vec::new();
-    let worker = Arc::clone(&verify_worker.worker);
 
-    task_log!(
-        worker,
-        "verify datastore {}",
-        verify_worker.datastore.name()
-    );
+    info!("verify datastore {}", verify_worker.datastore.name());
 
     let owner_filtered = if let Some(owner) = &owner {
-        task_log!(worker, "limiting to backups owned by {}", owner);
+        info!("limiting to backups owned by {}", owner);
         true
     } else {
         false
@@ -553,7 +510,7 @@ pub fn verify_all_backups(
                 }
                 Err(err) => {
                     // we don't filter by owner, but we want to log the error
-                    task_log!(worker, "error on iterating groups in ns '{ns}' - {err}");
+                    info!("error on iterating groups in ns '{ns}' - {err}");
                     errors.push(err.to_string());
                     None
                 }
@@ -563,7 +520,7 @@ pub fn verify_all_backups(
             })
             .collect::<Vec<BackupGroup>>(),
         Err(err) => {
-            task_log!(worker, "unable to list backups: {}", err,);
+            info!("unable to list backups: {}", err,);
             return Ok(errors);
         }
     };
@@ -571,7 +528,7 @@ pub fn verify_all_backups(
     list.sort_unstable_by(|a, b| a.group().cmp(b.group()));
 
     let group_count = list.len();
-    task_log!(worker, "found {} groups", group_count);
+    info!("found {} groups", group_count);
 
     let mut progress = StoreProgress::new(group_count as u64);
 
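As context for the conversions above: the cover letter describes a task-local `FileLogger` that plain `info!`-style calls pick up from TLS, so no `worker` handle needs to be threaded through. A minimal std-only sketch of that idea (all names here are illustrative, not the real proxmox-log API):

```rust
use std::cell::RefCell;

// Each worker task installs its own log buffer in thread-local storage;
// a bare logging call writes to whichever buffer the current task owns,
// and falls back to "syslog" (stderr here) outside of a task.
thread_local! {
    static TASK_LOG: RefCell<Option<Vec<String>>> = RefCell::new(None);
}

fn task_info(msg: &str) {
    TASK_LOG.with(|l| match l.borrow_mut().as_mut() {
        // inside a worker task: append to the task log
        Some(buf) => buf.push(msg.to_string()),
        // outside any task: fall back to global logging
        None => eprintln!("{msg}"),
    });
}

// Install a fresh task log, run the task body, and return what it logged.
fn run_worker<F: FnOnce()>(body: F) -> Vec<String> {
    TASK_LOG.with(|l| *l.borrow_mut() = Some(Vec::new()));
    body();
    TASK_LOG.with(|l| l.borrow_mut().take().unwrap())
}

fn main() {
    let log = run_worker(|| {
        task_info("verify datastore store1");
        task_info("found 3 groups");
    });
    assert_eq!(log.len(), 2);
    assert_eq!(log[0], "verify datastore store1");
}
```

The real implementation hangs the logger off a tracing span/layer rather than a bare `thread_local!`, but the effect is the same: call sites shrink to `info!(...)` with no explicit logger argument.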
diff --git a/src/bin/proxmox-backup-api.rs b/src/bin/proxmox-backup-api.rs
index d5ce9f3b..1163cbfe 100644
--- a/src/bin/proxmox-backup-api.rs
+++ b/src/bin/proxmox-backup-api.rs
@@ -7,6 +7,7 @@ use http::Response;
 use hyper::{Body, StatusCode};
 
 use proxmox_lang::try_block;
+use proxmox_log::init_logger;
 use proxmox_router::RpcEnvironmentType;
 use proxmox_sys::fs::CreateOptions;
 
@@ -15,6 +16,7 @@ use proxmox_rest_server::{daemon, ApiConfig, RestServer};
 use proxmox_backup::auth_helpers::*;
 use proxmox_backup::config;
 use proxmox_backup::server::auth::check_pbs_auth;
+use tracing_subscriber::filter::LevelFilter;
 
 fn main() {
     pbs_tools::setup_libc_malloc_opts();
@@ -40,13 +42,7 @@ fn get_index() -> Pin<Box<dyn Future<Output = Response<Body>> + Send>> {
 }
 
 async fn run() -> Result<(), Error> {
-    if let Err(err) = syslog::init(
-        syslog::Facility::LOG_DAEMON,
-        log::LevelFilter::Info,
-        Some("proxmox-backup-api"),
-    ) {
-        bail!("unable to inititialize syslog - {}", err);
-    }
+    init_logger("PBS_LOG", LevelFilter::INFO, "proxmox-backup-api")?;
 
     config::create_configdir()?;
 
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index 15444685..d04e5ad0 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -7,6 +7,9 @@ use http::request::Parts;
 use http::Response;
 use hyper::header;
 use hyper::{Body, StatusCode};
+use proxmox_log::init_logger;
+use tracing::{info, warn};
+use tracing_subscriber::filter::LevelFilter;
 use url::form_urlencoded;
 
 use openssl::ssl::SslAcceptor;
@@ -18,7 +21,6 @@ use proxmox_router::{RpcEnvironment, RpcEnvironmentType};
 use proxmox_sys::fs::{CreateOptions, FileSystemInformation};
 use proxmox_sys::linux::procfs::{Loadavg, ProcFsMemInfo, ProcFsNetDev, ProcFsStat};
 use proxmox_sys::logrotate::LogRotate;
-use proxmox_sys::{task_log, task_warn};
 
 use pbs_datastore::DataStore;
 
@@ -181,21 +183,7 @@ async fn get_index_future(env: RestEnvironment, parts: Parts) -> Response<Body>
 }
 
 async fn run() -> Result<(), Error> {
-    // Note: To debug early connection error use
-    // PROXMOX_DEBUG=1 ./target/release/proxmox-backup-proxy
-    let debug = std::env::var("PROXMOX_DEBUG").is_ok();
-
-    if let Err(err) = syslog::init(
-        syslog::Facility::LOG_DAEMON,
-        if debug {
-            log::LevelFilter::Debug
-        } else {
-            log::LevelFilter::Info
-        },
-        Some("proxmox-backup-proxy"),
-    ) {
-        bail!("unable to inititialize syslog - {err}");
-    }
+    init_logger("PBS_LOG", LevelFilter::INFO, "proxmox-backup-proxy")?;
 
     proxmox_backup::auth_helpers::setup_auth_context(false);
     proxmox_backup::server::notifications::init()?;
@@ -301,7 +289,7 @@ async fn run() -> Result<(), Error> {
     })?;
 
     let connections = proxmox_rest_server::connection::AcceptBuilder::new()
-        .debug(debug)
+        .debug(tracing::enabled!(tracing::Level::DEBUG))
         .rate_limiter_lookup(Arc::new(lookup_rate_limiter))
         .tcp_keepalive_time(PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
 
@@ -750,7 +738,7 @@ async fn schedule_task_log_rotate() {
         false,
         move |worker| {
             job.start(&worker.upid().to_string())?;
-            task_log!(worker, "starting task log rotation");
+            info!("starting task log rotation");
 
             let result = try_block!({
                 let max_size = 512 * 1024 - 1; // an entry has ~ 100b, so > 5000 entries/file
@@ -775,9 +763,9 @@ async fn schedule_task_log_rotate() {
                 )?;
 
                 if has_rotated {
-                    task_log!(worker, "task log archive was rotated");
+                    info!("task log archive was rotated");
                 } else {
-                    task_log!(worker, "task log archive was not rotated");
+                    info!("task log archive was not rotated");
                 }
 
                 let max_size = 32 * 1024 * 1024 - 1;
@@ -793,9 +781,9 @@ async fn schedule_task_log_rotate() {
                 if logrotate.rotate(max_size)? {
                     println!("rotated access log, telling daemons to re-open log file");
                     proxmox_async::runtime::block_on(command_reopen_access_logfiles())?;
-                    task_log!(worker, "API access log was rotated");
+                    info!("API access log was rotated");
                 } else {
-                    task_log!(worker, "API access log was not rotated");
+                    info!("API access log was not rotated");
                 }
 
                 let mut logrotate = LogRotate::new(
@@ -808,15 +796,15 @@ async fn schedule_task_log_rotate() {
                 if logrotate.rotate(max_size)? {
                     println!("rotated auth log, telling daemons to re-open log file");
                     proxmox_async::runtime::block_on(command_reopen_auth_logfiles())?;
-                    task_log!(worker, "API authentication log was rotated");
+                    info!("API authentication log was rotated");
                 } else {
-                    task_log!(worker, "API authentication log was not rotated");
+                    info!("API authentication log was not rotated");
                 }
 
                 if has_rotated {
-                    task_log!(worker, "cleaning up old task logs");
-                    if let Err(err) = cleanup_old_tasks(&worker, true) {
-                        task_warn!(worker, "could not completely cleanup old tasks: {err}");
+                    info!("cleaning up old task logs");
+                    if let Err(err) = cleanup_old_tasks(true) {
+                        warn!("could not completely cleanup old tasks: {err}");
                     }
                 }
 
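The `init_logger("PBS_LOG", LevelFilter::INFO, ...)` calls above replace the old `PROXMOX_DEBUG` check with env-var-driven level selection. A hedged std-only sketch of that selection logic (the function and enum names are illustrative, not the proxmox-log implementation):

```rust
use std::env;

#[derive(Debug, PartialEq)]
enum Level {
    Error,
    Warn,
    Info,
    Debug,
    Trace,
}

// Map an optional env-var value onto a level, keeping the default when
// the variable is unset or unrecognized.
fn parse_level(val: Option<&str>, default: Level) -> Level {
    match val {
        Some("error") => Level::Error,
        Some("warn") => Level::Warn,
        Some("info") => Level::Info,
        Some("debug") => Level::Debug,
        Some("trace") => Level::Trace,
        _ => default,
    }
}

// Honor the named env var if set, otherwise fall back to the default --
// roughly what passing ("PBS_LOG", LevelFilter::INFO) accomplishes.
fn level_from_env(var: &str, default: Level) -> Level {
    parse_level(env::var(var).ok().as_deref(), default)
}

fn main() {
    assert_eq!(parse_level(Some("debug"), Level::Info), Level::Debug);
    assert_eq!(parse_level(None, Level::Info), Level::Info);
    assert_eq!(parse_level(Some("bogus"), Level::Warn), Level::Warn);
    println!("{:?}", level_from_env("PBS_LOG", Level::Info));
}
```

This also explains the `.debug(tracing::enabled!(tracing::Level::DEBUG))` hunk: instead of re-reading the env var, the code asks the installed subscriber whether debug-level events are enabled.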
diff --git a/src/server/gc_job.rs b/src/server/gc_job.rs
index ff5bdccf..64835028 100644
--- a/src/server/gc_job.rs
+++ b/src/server/gc_job.rs
@@ -1,7 +1,7 @@
 use anyhow::Error;
 use std::sync::Arc;
 
-use proxmox_sys::task_log;
+use tracing::info;
 
 use pbs_api_types::Authid;
 use pbs_datastore::DataStore;
@@ -28,9 +28,9 @@ pub fn do_garbage_collection_job(
         move |worker| {
             job.start(&worker.upid().to_string())?;
 
-            task_log!(worker, "starting garbage collection on store {store}");
+            info!("starting garbage collection on store {store}");
             if let Some(event_str) = schedule {
-                task_log!(worker, "task triggered by schedule '{event_str}'");
+                info!("task triggered by schedule '{event_str}'");
             }
 
             let result = datastore.garbage_collection(&*worker, worker.upid());
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index 2de34973..83bcd7c3 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -1,8 +1,7 @@
 use std::sync::Arc;
 
 use anyhow::Error;
-
-use proxmox_sys::{task_log, task_warn};
+use tracing::{info, warn};
 
 use pbs_api_types::{
     print_store_and_ns, Authid, KeepOptions, Operation, PruneJobOptions, MAX_NAMESPACE_DEPTH,
@@ -16,7 +15,6 @@ use crate::backup::ListAccessibleBackupGroups;
 use crate::server::jobstate::Job;
 
 pub fn prune_datastore(
-    worker: Arc<WorkerTask>,
     auth_id: Authid,
     prune_options: PruneJobOptions,
     datastore: Arc<DataStore>,
@@ -31,19 +29,19 @@ pub fn prune_datastore(
     };
     let ns = prune_options.ns.clone().unwrap_or_default();
     let store_ns = print_store_and_ns(store, &ns);
-    task_log!(worker, "Starting datastore prune on {store_ns}, {depth}");
+    info!("Starting datastore prune on {store_ns}, {depth}");
 
     if dry_run {
-        task_log!(worker, "(dry test run)");
+        info!("(dry test run)");
     }
 
     let keep_all = !prune_options.keeps_something();
 
     if keep_all {
-        task_log!(worker, "No prune selection - keeping all files.");
+        info!("No prune selection - keeping all files.");
     } else {
         let rendered_options = cli_prune_options_string(&prune_options);
-        task_log!(worker, "retention options: {rendered_options}");
+        info!("retention options: {rendered_options}");
     }
 
     for group in ListAccessibleBackupGroups::new_with_privs(
@@ -61,8 +59,7 @@ pub fn prune_datastore(
         let mut prune_info = compute_prune_info(list, &prune_options.keep)?;
         prune_info.reverse(); // delete older snapshots first
 
-        task_log!(
-            worker,
+        info!(
             "Pruning group {ns}:\"{}/{}\"",
             group.backup_type(),
             group.backup_id()
@@ -70,8 +67,7 @@ pub fn prune_datastore(
 
         for (info, mark) in prune_info {
             let keep = keep_all || mark.keep();
-            task_log!(
-                worker,
+            info!(
                 "{}{} {}/{}/{}",
                 if dry_run { "would " } else { "" },
                 mark,
@@ -82,7 +78,7 @@ pub fn prune_datastore(
             if !keep && !dry_run {
                 if let Err(err) = datastore.remove_backup_dir(ns, info.backup_dir.as_ref(), false) {
                     let path = info.backup_dir.relative_path();
-                    task_warn!(worker, "failed to remove dir {path:?}: {err}");
+                    warn!("failed to remove dir {path:?}: {err}");
                 }
             }
         }
@@ -150,13 +146,13 @@ pub fn do_prune_job(
         move |worker| {
             job.start(&worker.upid().to_string())?;
 
-            task_log!(worker, "prune job '{}'", job.jobname());
+            info!("prune job '{}'", job.jobname());
 
             if let Some(event_str) = schedule {
-                task_log!(worker, "task triggered by schedule '{event_str}'");
+                info!("task triggered by schedule '{event_str}'");
             }
 
-            let result = prune_datastore(worker.clone(), auth_id, prune_options, datastore, false);
+            let result = prune_datastore(auth_id, prune_options, datastore, false);
 
             let status = worker.create_state(&result);
 
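The pull code below keeps `SkipInfo` accumulators (`already_synced_skip_info`, `transfer_last_skip_info`) so that skipped snapshots produce one summary log line instead of one line each. A small sketch of that accumulator pattern, with illustrative field names:

```rust
// Count skipped snapshots and track their time range; the caller logs
// the accumulated summary once and then resets the counter.
struct SkipInfo {
    count: u64,
    oldest: i64,
    newest: i64,
}

impl SkipInfo {
    fn new() -> Self {
        SkipInfo { count: 0, oldest: i64::MAX, newest: i64::MIN }
    }

    fn update(&mut self, snapshot_time: i64) {
        self.count += 1;
        self.oldest = self.oldest.min(snapshot_time);
        self.newest = self.newest.max(snapshot_time);
    }

    fn reset(&mut self) {
        *self = SkipInfo::new();
    }
}

fn main() {
    let mut info = SkipInfo::new();
    for t in [100, 200, 300] {
        info.update(t);
    }
    assert_eq!(info.count, 3);
    assert_eq!((info.oldest, info.newest), (100, 300));
    info.reset();
    assert_eq!(info.count, 0);
}
```

With the tracing migration, the summary is emitted via a plain `info!("{}", already_synced_skip_info)` instead of `task_log!(worker, ...)`, so the accumulator itself is unchanged.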
diff --git a/src/server/pull.rs b/src/server/pull.rs
index 14744e9c..f5d2b837 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -10,10 +10,9 @@ use std::time::{Duration, SystemTime};
 use anyhow::{bail, format_err, Error};
 use http::StatusCode;
 use proxmox_human_byte::HumanByte;
-use proxmox_rest_server::WorkerTask;
 use proxmox_router::HttpError;
-use proxmox_sys::{task_log, task_warn};
 use serde_json::json;
+use tracing::{info, warn};
 
 use pbs_api_types::{
     print_store_and_ns, Authid, BackupDir, BackupGroup, BackupNamespace, CryptMode, GroupFilter,
@@ -122,7 +121,6 @@ trait PullSource: Send + Sync {
     async fn list_namespaces(
         &self,
         max_depth: &mut Option<usize>,
-        worker: &WorkerTask,
     ) -> Result<Vec<BackupNamespace>, Error>;
 
     /// Lists groups within a specific namespace from the source.
@@ -137,7 +135,6 @@ trait PullSource: Send + Sync {
         &self,
         namespace: &BackupNamespace,
         group: &BackupGroup,
-        worker: &WorkerTask,
     ) -> Result<Vec<BackupDir>, Error>;
     fn get_ns(&self) -> BackupNamespace;
     fn get_store(&self) -> &str;
@@ -155,7 +152,6 @@ impl PullSource for RemoteSource {
     async fn list_namespaces(
         &self,
         max_depth: &mut Option<usize>,
-        worker: &WorkerTask,
     ) -> Result<Vec<BackupNamespace>, Error> {
         if self.ns.is_root() && max_depth.map_or(false, |depth| depth == 0) {
             return Ok(vec![self.ns.clone()]);
@@ -178,8 +174,8 @@ impl PullSource for RemoteSource {
                 Some(HttpError { code, message }) => match code {
                     &StatusCode::NOT_FOUND => {
                         if self.ns.is_root() && max_depth.is_none() {
-                            task_warn!(worker, "Could not query remote for namespaces (404) -> temporarily switching to backwards-compat mode");
-                            task_warn!(worker, "Either make backwards-compat mode explicit (max-depth == 0) or upgrade remote system.");
+                            warn!("Could not query remote for namespaces (404) -> temporarily switching to backwards-compat mode");
+                            warn!("Either make backwards-compat mode explicit (max-depth == 0) or upgrade remote system.");
                             max_depth.replace(0);
                         } else {
                             bail!("Remote namespace set/recursive sync requested, but remote does not support namespaces.")
@@ -238,7 +234,6 @@ impl PullSource for RemoteSource {
         &self,
         namespace: &BackupNamespace,
         group: &BackupGroup,
-        worker: &WorkerTask,
     ) -> Result<Vec<BackupDir>, Error> {
         let path = format!("api2/json/admin/datastore/{}/snapshots", self.repo.store());
 
@@ -261,11 +256,7 @@ impl PullSource for RemoteSource {
                 let snapshot = item.backup;
                 // in-progress backups can't be synced
                 if item.size.is_none() {
-                    task_log!(
-                        worker,
-                        "skipping snapshot {} - in-progress backup",
-                        snapshot
-                    );
+                    info!("skipping snapshot {} - in-progress backup", snapshot);
                     return None;
                 }
 
@@ -301,7 +292,6 @@ impl PullSource for LocalSource {
     async fn list_namespaces(
         &self,
         max_depth: &mut Option<usize>,
-        _worker: &WorkerTask,
     ) -> Result<Vec<BackupNamespace>, Error> {
         ListNamespacesRecursive::new_max_depth(
             self.store.clone(),
@@ -333,7 +323,6 @@ impl PullSource for LocalSource {
         &self,
         namespace: &BackupNamespace,
         group: &BackupGroup,
-        _worker: &WorkerTask,
     ) -> Result<Vec<BackupDir>, Error> {
         Ok(self
             .store
@@ -381,19 +370,10 @@ trait PullReader: Send + Sync {
     /// Asynchronously loads a file from the source into a local file.
     /// `filename` is the name of the file to load from the source.
     /// `into` is the path of the local file to load the source file into.
-    async fn load_file_into(
-        &self,
-        filename: &str,
-        into: &Path,
-        worker: &WorkerTask,
-    ) -> Result<Option<DataBlob>, Error>;
+    async fn load_file_into(&self, filename: &str, into: &Path) -> Result<Option<DataBlob>, Error>;
 
     /// Tries to download the client log from the source and save it into a local file.
-    async fn try_download_client_log(
-        &self,
-        to_path: &Path,
-        worker: &WorkerTask,
-    ) -> Result<(), Error>;
+    async fn try_download_client_log(&self, to_path: &Path) -> Result<(), Error>;
 
     fn skip_chunk_sync(&self, target_store_name: &str) -> bool;
 }
@@ -409,12 +389,7 @@ impl PullReader for RemoteReader {
         ))
     }
 
-    async fn load_file_into(
-        &self,
-        filename: &str,
-        into: &Path,
-        worker: &WorkerTask,
-    ) -> Result<Option<DataBlob>, Error> {
+    async fn load_file_into(&self, filename: &str, into: &Path) -> Result<Option<DataBlob>, Error> {
         let mut tmp_file = std::fs::OpenOptions::new()
             .write(true)
             .create(true)
@@ -426,8 +401,7 @@ impl PullReader for RemoteReader {
             match err.downcast_ref::<HttpError>() {
                 Some(HttpError { code, message }) => match *code {
                     StatusCode::NOT_FOUND => {
-                        task_log!(
-                            worker,
+                        info!(
                             "skipping snapshot {} - vanished since start of sync",
                             &self.dir,
                         );
@@ -446,11 +420,7 @@ impl PullReader for RemoteReader {
         Ok(DataBlob::load_from_reader(&mut tmp_file).ok())
     }
 
-    async fn try_download_client_log(
-        &self,
-        to_path: &Path,
-        worker: &WorkerTask,
-    ) -> Result<(), Error> {
+    async fn try_download_client_log(&self, to_path: &Path) -> Result<(), Error> {
         let mut tmp_path = to_path.to_owned();
         tmp_path.set_extension("tmp");
 
@@ -469,7 +439,7 @@ impl PullReader for RemoteReader {
             if let Err(err) = std::fs::rename(&tmp_path, to_path) {
                 bail!("Atomic rename file {:?} failed - {}", to_path, err);
             }
-            task_log!(worker, "got backup log file {:?}", CLIENT_LOG_BLOB_NAME);
+            info!("got backup log file {:?}", CLIENT_LOG_BLOB_NAME);
         }
 
         Ok(())
@@ -490,12 +460,7 @@ impl PullReader for LocalReader {
         ))
     }
 
-    async fn load_file_into(
-        &self,
-        filename: &str,
-        into: &Path,
-        _worker: &WorkerTask,
-    ) -> Result<Option<DataBlob>, Error> {
+    async fn load_file_into(&self, filename: &str, into: &Path) -> Result<Option<DataBlob>, Error> {
         let mut tmp_file = std::fs::OpenOptions::new()
             .write(true)
             .create(true)
@@ -509,11 +474,7 @@ impl PullReader for LocalReader {
         Ok(DataBlob::load_from_reader(&mut tmp_file).ok())
     }
 
-    async fn try_download_client_log(
-        &self,
-        _to_path: &Path,
-        _worker: &WorkerTask,
-    ) -> Result<(), Error> {
+    async fn try_download_client_log(&self, _to_path: &Path) -> Result<(), Error> {
         Ok(())
     }
 
@@ -603,7 +564,6 @@ impl PullParameters {
 }
 
 async fn pull_index_chunks<I: IndexFile>(
-    worker: &WorkerTask,
     chunk_reader: Arc<dyn AsyncReadChunk>,
     target: Arc<DataStore>,
     index: I,
@@ -658,10 +618,10 @@ async fn pull_index_chunks<I: IndexFile>(
                     target.cond_touch_chunk(&info.digest, false)
                 })?;
                 if chunk_exists {
-                    //task_log!(worker, "chunk {} exists {}", pos, hex::encode(digest));
+                    //info!("chunk {} exists {}", pos, hex::encode(digest));
                     return Ok::<_, Error>(());
                 }
-                //task_log!(worker, "sync {} chunk {}", pos, hex::encode(digest));
+                //info!("sync {} chunk {}", pos, hex::encode(digest));
                 let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
                 let raw_size = chunk.raw_size() as usize;
 
@@ -689,8 +649,7 @@ async fn pull_index_chunks<I: IndexFile>(
     let bytes = bytes.load(Ordering::SeqCst);
     let chunk_count = chunk_count.load(Ordering::SeqCst);
 
-    task_log!(
-        worker,
+    info!(
         "downloaded {} ({}/s)",
         HumanByte::from(bytes),
         HumanByte::new_binary(bytes as f64 / elapsed.as_secs_f64()),
@@ -730,7 +689,6 @@ fn verify_archive(info: &FileInfo, csum: &[u8; 32], size: u64) -> Result<(), Err
 /// - if archive is an index, pull referenced chunks
 /// - Rename tmp file into real path
 async fn pull_single_archive<'a>(
-    worker: &'a WorkerTask,
     reader: Arc<dyn PullReader + 'a>,
     snapshot: &'a pbs_datastore::BackupDir,
     archive_info: &'a FileInfo,
@@ -745,11 +703,9 @@ async fn pull_single_archive<'a>(
 
     let mut pull_stats = PullStats::default();
 
-    task_log!(worker, "sync archive {}", archive_name);
+    info!("sync archive {}", archive_name);
 
-    reader
-        .load_file_into(archive_name, &tmp_path, worker)
-        .await?;
+    reader.load_file_into(archive_name, &tmp_path).await?;
 
     let mut tmpfile = std::fs::OpenOptions::new().read(true).open(&tmp_path)?;
 
@@ -762,10 +718,9 @@ async fn pull_single_archive<'a>(
             verify_archive(archive_info, &csum, size)?;
 
             if reader.skip_chunk_sync(snapshot.datastore().name()) {
-                task_log!(worker, "skipping chunk sync for same datastore");
+                info!("skipping chunk sync for same datastore");
             } else {
                 let stats = pull_index_chunks(
-                    worker,
                     reader.chunk_reader(archive_info.crypt_mode),
                     snapshot.datastore().clone(),
                     index,
@@ -783,10 +738,9 @@ async fn pull_single_archive<'a>(
             verify_archive(archive_info, &csum, size)?;
 
             if reader.skip_chunk_sync(snapshot.datastore().name()) {
-                task_log!(worker, "skipping chunk sync for same datastore");
+                info!("skipping chunk sync for same datastore");
             } else {
                 let stats = pull_index_chunks(
-                    worker,
                     reader.chunk_reader(archive_info.crypt_mode),
                     snapshot.datastore().clone(),
                     index,
@@ -818,7 +772,6 @@ async fn pull_single_archive<'a>(
 /// -- if not, pull it from the remote
 /// - Download log if not already existing
 async fn pull_snapshot<'a>(
-    worker: &'a WorkerTask,
     reader: Arc<dyn PullReader + 'a>,
     snapshot: &'a pbs_datastore::BackupDir,
     downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
@@ -834,7 +787,7 @@ async fn pull_snapshot<'a>(
     tmp_manifest_name.set_extension("tmp");
     let tmp_manifest_blob;
     if let Some(data) = reader
-        .load_file_into(MANIFEST_BLOB_NAME, &tmp_manifest_name, worker)
+        .load_file_into(MANIFEST_BLOB_NAME, &tmp_manifest_name)
         .await?
     {
         tmp_manifest_blob = data;
@@ -857,11 +810,9 @@ async fn pull_snapshot<'a>(
 
         if manifest_blob.raw_data() == tmp_manifest_blob.raw_data() {
             if !client_log_name.exists() {
-                reader
-                    .try_download_client_log(&client_log_name, worker)
-                    .await?;
+                reader.try_download_client_log(&client_log_name).await?;
             };
-            task_log!(worker, "no data changes");
+            info!("no data changes");
             let _ = std::fs::remove_file(&tmp_manifest_name);
             return Ok(pull_stats); // nothing changed
         }
@@ -881,7 +832,7 @@ async fn pull_snapshot<'a>(
                     match manifest.verify_file(&item.filename, &csum, size) {
                         Ok(_) => continue,
                         Err(err) => {
-                            task_log!(worker, "detected changed file {:?} - {}", path, err);
+                            info!("detected changed file {:?} - {}", path, err);
                         }
                     }
                 }
@@ -891,7 +842,7 @@ async fn pull_snapshot<'a>(
                     match manifest.verify_file(&item.filename, &csum, size) {
                         Ok(_) => continue,
                         Err(err) => {
-                            task_log!(worker, "detected changed file {:?} - {}", path, err);
+                            info!("detected changed file {:?} - {}", path, err);
                         }
                     }
                 }
@@ -901,21 +852,15 @@ async fn pull_snapshot<'a>(
                     match manifest.verify_file(&item.filename, &csum, size) {
                         Ok(_) => continue,
                         Err(err) => {
-                            task_log!(worker, "detected changed file {:?} - {}", path, err);
+                            info!("detected changed file {:?} - {}", path, err);
                         }
                     }
                 }
             }
         }
 
-        let stats = pull_single_archive(
-            worker,
-            reader.clone(),
-            snapshot,
-            item,
-            downloaded_chunks.clone(),
-        )
-        .await?;
+        let stats =
+            pull_single_archive(reader.clone(), snapshot, item, downloaded_chunks.clone()).await?;
         pull_stats.add(stats);
     }
 
@@ -924,9 +869,7 @@ async fn pull_snapshot<'a>(
     }
 
     if !client_log_name.exists() {
-        reader
-            .try_download_client_log(&client_log_name, worker)
-            .await?;
+        reader.try_download_client_log(&client_log_name).await?;
     };
     snapshot
         .cleanup_unreferenced_files(&manifest)
@@ -940,7 +883,6 @@ async fn pull_snapshot<'a>(
 /// The `reader` is configured to read from the source backup directory, while the
 /// `snapshot` is pointing to the local datastore and target namespace.
 async fn pull_snapshot_from<'a>(
-    worker: &'a WorkerTask,
     reader: Arc<dyn PullReader + 'a>,
     snapshot: &'a pbs_datastore::BackupDir,
     downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
@@ -950,27 +892,27 @@ async fn pull_snapshot_from<'a>(
         .create_locked_backup_dir(snapshot.backup_ns(), snapshot.as_ref())?;
 
     let pull_stats = if is_new {
-        task_log!(worker, "sync snapshot {}", snapshot.dir());
+        info!("sync snapshot {}", snapshot.dir());
 
-        match pull_snapshot(worker, reader, snapshot, downloaded_chunks).await {
+        match pull_snapshot(reader, snapshot, downloaded_chunks).await {
             Err(err) => {
                 if let Err(cleanup_err) = snapshot.datastore().remove_backup_dir(
                     snapshot.backup_ns(),
                     snapshot.as_ref(),
                     true,
                 ) {
-                    task_log!(worker, "cleanup error - {}", cleanup_err);
+                    info!("cleanup error - {}", cleanup_err);
                 }
                 return Err(err);
             }
             Ok(pull_stats) => {
-                task_log!(worker, "sync snapshot {} done", snapshot.dir());
+                info!("sync snapshot {} done", snapshot.dir());
                 pull_stats
             }
         }
     } else {
-        task_log!(worker, "re-sync snapshot {}", snapshot.dir());
-        pull_snapshot(worker, reader, snapshot, downloaded_chunks).await?
+        info!("re-sync snapshot {}", snapshot.dir());
+        pull_snapshot(reader, snapshot, downloaded_chunks).await?
     };
 
     Ok(pull_stats)
@@ -1073,7 +1015,6 @@ impl std::fmt::Display for SkipInfo {
 /// - remote snapshot access is checked by remote (twice: query and opening the backup reader)
 /// - local group owner is already checked by pull_store
 async fn pull_group(
-    worker: &WorkerTask,
     params: &PullParameters,
     source_namespace: &BackupNamespace,
     group: &BackupGroup,
@@ -1084,7 +1025,7 @@ async fn pull_group(
 
     let mut raw_list: Vec<BackupDir> = params
         .source
-        .list_backup_dirs(source_namespace, group, worker)
+        .list_backup_dirs(source_namespace, group)
         .await?;
     raw_list.sort_unstable_by(|a, b| a.time.cmp(&b.time));
 
@@ -1113,7 +1054,7 @@ async fn pull_group(
                 already_synced_skip_info.update(dir.time);
                 return false;
             } else if already_synced_skip_info.count > 0 {
-                task_log!(worker, "{}", already_synced_skip_info);
+                info!("{}", already_synced_skip_info);
                 already_synced_skip_info.reset();
                 return true;
             }
@@ -1122,7 +1063,7 @@ async fn pull_group(
                 transfer_last_skip_info.update(dir.time);
                 return false;
             } else if transfer_last_skip_info.count > 0 {
-                task_log!(worker, "{}", transfer_last_skip_info);
+                info!("{}", transfer_last_skip_info);
                 transfer_last_skip_info.reset();
             }
             true
@@ -1147,11 +1088,10 @@ async fn pull_group(
             .source
             .reader(source_namespace, &from_snapshot)
             .await?;
-        let result =
-            pull_snapshot_from(worker, reader, &to_snapshot, downloaded_chunks.clone()).await;
+        let result = pull_snapshot_from(reader, &to_snapshot, downloaded_chunks.clone()).await;
 
         progress.done_snapshots = pos as u64 + 1;
-        task_log!(worker, "percentage done: {}", progress);
+        info!("percentage done: {}", progress);
 
         let stats = result?; // stop on error
         pull_stats.add(stats);
@@ -1169,14 +1109,13 @@ async fn pull_group(
                 continue;
             }
             if snapshot.is_protected() {
-                task_log!(
-                    worker,
+                info!(
                     "don't delete vanished snapshot {} (protected)",
                     snapshot.dir()
                 );
                 continue;
             }
-            task_log!(worker, "delete vanished snapshot {}", snapshot.dir());
+            info!("delete vanished snapshot {}", snapshot.dir());
             params
                 .target
                 .store
@@ -1235,7 +1174,6 @@ fn check_and_remove_ns(params: &PullParameters, local_ns: &BackupNamespace) -> R
 }
 
 fn check_and_remove_vanished_ns(
-    worker: &WorkerTask,
     params: &PullParameters,
     synced_ns: HashSet<BackupNamespace>,
 ) -> Result<(bool, RemovedVanishedStats), Error> {
@@ -1276,16 +1214,15 @@ fn check_and_remove_vanished_ns(
         }
         match check_and_remove_ns(params, &local_ns) {
             Ok(true) => {
-                task_log!(worker, "Removed namespace {local_ns}");
+                info!("Removed namespace {local_ns}");
                 removed_stats.namespaces += 1;
             }
-            Ok(false) => task_log!(
-                worker,
+            Ok(false) => info!(
                 "Did not remove namespace {} - protected snapshots remain",
                 local_ns
             ),
             Err(err) => {
-                task_log!(worker, "Failed to remove namespace {} - {}", local_ns, err);
+                info!("Failed to remove namespace {} - {}", local_ns, err);
                 errors = true;
             }
         }
@@ -1311,10 +1248,7 @@ fn check_and_remove_vanished_ns(
 /// - remote namespaces are filtered by remote
 /// - creation and removal of sub-NS checked here
 /// - access to sub-NS checked here
-pub(crate) async fn pull_store(
-    worker: &WorkerTask,
-    mut params: PullParameters,
-) -> Result<PullStats, Error> {
+pub(crate) async fn pull_store(mut params: PullParameters) -> Result<PullStats, Error> {
     // explicit create shared lock to prevent GC on newly created chunks
     let _shared_store_lock = params.target.store.try_shared_chunk_store_lock()?;
     let mut errors = false;
@@ -1323,10 +1257,7 @@ pub(crate) async fn pull_store(
     let mut namespaces = if params.source.get_ns().is_root() && old_max_depth == Some(0) {
         vec![params.source.get_ns()] // backwards compat - don't query remote namespaces!
     } else {
-        params
-            .source
-            .list_namespaces(&mut params.max_depth, worker)
-            .await?
+        params.source.list_namespaces(&mut params.max_depth).await?
     };
 
     let ns_layers_to_be_pulled = namespaces
@@ -1358,33 +1289,28 @@ pub(crate) async fn pull_store(
         let target_ns = namespace.map_prefix(&params.source.get_ns(), &params.target.ns)?;
         let target_store_ns_str = print_store_and_ns(params.target.store.name(), &target_ns);
 
-        task_log!(worker, "----");
-        task_log!(
-            worker,
+        info!("----");
+        info!(
             "Syncing {} into {}",
-            source_store_ns_str,
-            target_store_ns_str
+            source_store_ns_str, target_store_ns_str
         );
 
         synced_ns.insert(target_ns.clone());
 
         match check_and_create_ns(&params, &target_ns) {
-            Ok(true) => task_log!(worker, "Created namespace {}", target_ns),
+            Ok(true) => info!("Created namespace {}", target_ns),
             Ok(false) => {}
             Err(err) => {
-                task_log!(
-                    worker,
+                info!(
                     "Cannot sync {} into {} - {}",
-                    source_store_ns_str,
-                    target_store_ns_str,
-                    err,
+                    source_store_ns_str, target_store_ns_str, err,
                 );
                 errors = true;
                 continue;
             }
         }
 
-        match pull_ns(worker, &namespace, &mut params).await {
+        match pull_ns(&namespace, &mut params).await {
             Ok((ns_progress, ns_pull_stats, ns_errors)) => {
                 errors |= ns_errors;
 
@@ -1393,29 +1319,24 @@ pub(crate) async fn pull_store(
                 if params.max_depth != Some(0) {
                     groups += ns_progress.done_groups;
                     snapshots += ns_progress.done_snapshots;
-                    task_log!(
-                        worker,
+                    info!(
                         "Finished syncing namespace {}, current progress: {} groups, {} snapshots",
-                        namespace,
-                        groups,
-                        snapshots,
+                        namespace, groups, snapshots,
                     );
                 }
             }
             Err(err) => {
                 errors = true;
-                task_log!(
-                    worker,
+                info!(
                     "Encountered errors while syncing namespace {} - {}",
-                    &namespace,
-                    err,
+                    &namespace, err,
                 );
             }
         };
     }
 
     if params.remove_vanished {
-        let (has_errors, stats) = check_and_remove_vanished_ns(worker, &params, synced_ns)?;
+        let (has_errors, stats) = check_and_remove_vanished_ns(&params, synced_ns)?;
         errors |= has_errors;
         pull_stats.add(PullStats::from(stats));
     }
@@ -1440,7 +1361,6 @@ pub(crate) async fn pull_store(
 /// - remote namespaces are filtered by remote
 /// - owner check for vanished groups done here
 pub(crate) async fn pull_ns(
-    worker: &WorkerTask,
     namespace: &BackupNamespace,
     params: &mut PullParameters,
 ) -> Result<(StoreProgress, PullStats, bool), Error> {
@@ -1460,8 +1380,7 @@ pub(crate) async fn pull_ns(
         .into_iter()
         .filter(|group| group.apply_filters(&params.group_filter))
         .collect();
-    task_log!(
-        worker,
+    info!(
         "found {} groups to sync (out of {} total)",
         list.len(),
         unfiltered_count
@@ -1492,15 +1411,10 @@ pub(crate) async fn pull_ns(
             {
                 Ok(result) => result,
                 Err(err) => {
-                    task_log!(
-                        worker,
-                        "sync group {} failed - group lock failed: {}",
-                        &group,
-                        err
-                    );
+                    info!("sync group {} failed - group lock failed: {}", &group, err);
                     errors = true;
                     // do not stop here, instead continue
-                    task_log!(worker, "create_locked_backup_group failed");
+                    info!("create_locked_backup_group failed");
                     continue;
                 }
             };
@@ -1508,19 +1422,16 @@ pub(crate) async fn pull_ns(
         // permission check
         if params.owner != owner {
             // only the owner is allowed to create additional snapshots
-            task_log!(
-                worker,
+            info!(
                 "sync group {} failed - owner check failed ({} != {})",
-                &group,
-                params.owner,
-                owner
+                &group, params.owner, owner
             );
             errors = true; // do not stop here, instead continue
         } else {
-            match pull_group(worker, params, namespace, &group, &mut progress).await {
+            match pull_group(params, namespace, &group, &mut progress).await {
                 Ok(stats) => pull_stats.add(stats),
                 Err(err) => {
-                    task_log!(worker, "sync group {} failed - {}", &group, err,);
+                    info!("sync group {} failed - {}", &group, err);
                     errors = true; // do not stop here, instead continue
                 }
             }
@@ -1542,7 +1453,7 @@ pub(crate) async fn pull_ns(
                 if !local_group.apply_filters(&params.group_filter) {
                     continue;
                 }
-                task_log!(worker, "delete vanished group '{local_group}'",);
+                info!("delete vanished group '{local_group}'");
                 let delete_stats_result = params
                     .target
                     .store
@@ -1551,10 +1462,7 @@ pub(crate) async fn pull_ns(
                 match delete_stats_result {
                     Ok(stats) => {
                         if !stats.all_removed() {
-                            task_log!(
-                                worker,
-                                "kept some protected snapshots of group '{local_group}'",
-                            );
+                            info!("kept some protected snapshots of group '{local_group}'");
                             pull_stats.add(PullStats::from(RemovedVanishedStats {
                                 snapshots: stats.removed_snapshots(),
                                 groups: 0,
@@ -1569,7 +1477,7 @@ pub(crate) async fn pull_ns(
                         }
                     }
                     Err(err) => {
-                        task_log!(worker, "{}", err);
+                        info!("{}", err);
                         errors = true;
                     }
                 }
@@ -1577,7 +1485,7 @@ pub(crate) async fn pull_ns(
             Ok(())
         });
         if let Err(err) = result {
-            task_log!(worker, "error during cleanup: {}", err);
+            info!("error during cleanup: {}", err);
             errors = true;
         };
     }
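
Side note, not part of the patch: the reason `worker` can be dropped from all of these signatures is the task-local `FileLogger` described in the cover letter; `info!()` finds its destination through task-local state instead of an explicit handle. A minimal std-only sketch of that pattern (names like `log_info` and `with_task_logger` are illustrative, not the actual proxmox-log API):

```rust
use std::cell::RefCell;

// Task-local log sink: Some(buffer) while a worker task is running,
// None otherwise (in which case messages fall through to "syslog").
thread_local! {
    static TASK_LOG: RefCell<Option<Vec<String>>> = RefCell::new(None);
}

// Free-standing logging function; call sites no longer need a worker handle.
fn log_info(msg: &str) {
    TASK_LOG.with(|l| match l.borrow_mut().as_mut() {
        Some(buf) => buf.push(msg.to_string()), // inside a task: task log
        None => eprintln!("syslog: {msg}"),     // outside a task: syslog fallback
    });
}

// Install the task-local sink for the duration of a closure and return
// what was logged, mimicking the scope of a worker task's FileLogger.
fn with_task_logger<F: FnOnce()>(f: F) -> Vec<String> {
    TASK_LOG.with(|l| *l.borrow_mut() = Some(Vec::new()));
    f();
    TASK_LOG.with(|l| l.borrow_mut().take().unwrap_or_default())
}

fn main() {
    let lines = with_task_logger(|| log_info("sync snapshot vm/100 done"));
    assert_eq!(lines, ["sync snapshot vm/100 done"]);
    log_info("not in a task"); // takes the syslog branch
    println!("task log captured: {:?}", lines);
}
```

The real implementation uses a tracing layer and task-local storage rather than a plain `thread_local!`, but the call-site effect is the same: logging becomes ambient, so `&WorkerTask` no longer has to be threaded through `pull_group`, `pull_snapshot` and friends.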
diff --git a/src/server/realm_sync_job.rs b/src/server/realm_sync_job.rs
index 31412be8..5f3832ab 100644
--- a/src/server/realm_sync_job.rs
+++ b/src/server/realm_sync_job.rs
@@ -1,13 +1,14 @@
 use anyhow::{bail, format_err, Context, Error};
+use tracing::{info, warn};
+
 use pbs_config::{acl::AclTree, token_shadow, BackupLockGuard};
 use proxmox_lang::try_block;
 use proxmox_ldap::{Config, Connection, SearchParameters, SearchResult};
 use proxmox_rest_server::WorkerTask;
 use proxmox_schema::{ApiType, Schema};
 use proxmox_section_config::SectionConfigData;
-use proxmox_sys::{task_log, task_warn};
 
-use std::{collections::HashSet, sync::Arc};
+use std::collections::HashSet;
 
 use pbs_api_types::{
     AdRealmConfig, ApiToken, Authid, LdapRealmConfig, Realm, RealmType, RemoveVanished,
@@ -39,7 +40,7 @@ pub fn do_realm_sync_job(
         move |worker| {
             job.start(&worker.upid().to_string()).unwrap();
 
-            task_log!(worker, "starting realm sync for {}", realm.as_str());
+            info!("starting realm sync for {}", realm.as_str());
 
             let override_settings = GeneralSyncSettingsOverride {
                 remove_vanished,
@@ -49,12 +50,12 @@ pub fn do_realm_sync_job(
             async move {
                 match realm_type {
                     RealmType::Ldap => {
-                        LdapRealmSyncJob::new(worker, realm, &override_settings, dry_run)?
+                        LdapRealmSyncJob::new(realm, &override_settings, dry_run)?
                             .sync()
                             .await
                     }
                     RealmType::Ad => {
-                        AdRealmSyncJob::new(worker, realm, &override_settings, dry_run)?
+                        AdRealmSyncJob::new(realm, &override_settings, dry_run)?
                             .sync()
                             .await
                     }
@@ -73,7 +74,6 @@ struct AdRealmSyncJob(LdapRealmSyncJob);
 
 impl AdRealmSyncJob {
     fn new(
-        worker: Arc<WorkerTask>,
         realm: Realm,
         override_settings: &GeneralSyncSettingsOverride,
         dry_run: bool,
@@ -98,7 +98,6 @@ impl AdRealmSyncJob {
         let ldap_config = auth::AdAuthenticator::api_type_to_config(&config)?;
 
         Ok(Self(LdapRealmSyncJob {
-            worker,
             realm,
             general_sync_settings: sync_settings,
             ldap_sync_settings: sync_attributes,
@@ -114,7 +113,6 @@ impl AdRealmSyncJob {
 
 /// Implementation for syncing LDAP realms
 struct LdapRealmSyncJob {
-    worker: Arc<WorkerTask>,
     realm: Realm,
     general_sync_settings: GeneralSyncSettings,
     ldap_sync_settings: LdapSyncSettings,
@@ -125,7 +123,6 @@ struct LdapRealmSyncJob {
 impl LdapRealmSyncJob {
     /// Create new LdapRealmSyncJob
     fn new(
-        worker: Arc<WorkerTask>,
         realm: Realm,
         override_settings: &GeneralSyncSettingsOverride,
         dry_run: bool,
@@ -150,7 +147,6 @@ impl LdapRealmSyncJob {
         let ldap_config = auth::LdapAuthenticator::api_type_to_config(&config)?;
 
         Ok(Self {
-            worker,
             realm,
             general_sync_settings: sync_settings,
             ldap_sync_settings: sync_attributes,
@@ -162,10 +158,7 @@ impl LdapRealmSyncJob {
     /// Perform realm synchronization
     async fn sync(&self) -> Result<(), Error> {
         if self.dry_run {
-            task_log!(
-                self.worker,
-                "this is a DRY RUN - changes will not be persisted"
-            );
+            info!("this is a DRY RUN - changes will not be persisted");
         }
 
         let ldap = Connection::new(self.ldap_config.clone());
@@ -247,7 +240,7 @@ impl LdapRealmSyncJob {
                 anyhow::Ok(())
             });
             if let Err(e) = result {
-                task_log!(self.worker, "could not create/update user: {e}");
+                info!("could not create/update user: {e}");
             }
         }
 
@@ -266,18 +259,10 @@ impl LdapRealmSyncJob {
 
         if let Some(existing_user) = existing_user {
             if existing_user != new_or_updated_user {
-                task_log!(
-                    self.worker,
-                    "updating user {}",
-                    new_or_updated_user.userid.as_str()
-                );
+                info!("updating user {}", new_or_updated_user.userid.as_str());
             }
         } else {
-            task_log!(
-                self.worker,
-                "creating user {}",
-                new_or_updated_user.userid.as_str()
-            );
+            info!("creating user {}", new_or_updated_user.userid.as_str());
         }
 
         user_config.set_data(
@@ -299,10 +284,7 @@ impl LdapRealmSyncJob {
             let schema = schema.unwrap_string_schema();
 
             if let Err(e) = schema.check_constraints(value) {
-                task_warn!(
-                    self.worker,
-                    "{userid}: ignoring attribute `{attribute}`: {e}"
-                );
+                warn!("{userid}: ignoring attribute `{attribute}`: {e}");
 
                 None
             } else {
@@ -381,7 +363,7 @@ impl LdapRealmSyncJob {
         to_delete: &[Userid],
     ) -> Result<(), Error> {
         for userid in to_delete {
-            task_log!(self.worker, "deleting user {}", userid.as_str());
+            info!("deleting user {}", userid.as_str());
 
             // Delete the user
             user_config.sections.remove(userid.as_str());
@@ -408,7 +390,7 @@ impl LdapRealmSyncJob {
 
                     if !self.dry_run {
                         if let Err(e) = token_shadow::delete_secret(&tokenid) {
-                            task_warn!(self.worker, "could not delete token for user {userid}: {e}",)
+                            warn!("could not delete token for user {userid}: {e}")
                         }
                     }
 
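Side note, not part of the patch: the cover letter also mentions a task-local warning counter next to the `FileLogger`, so `warn!()` can still mark a task as finished-with-warnings without a `worker` handle. A rough std-only sketch (all names here are illustrative):

```rust
use std::cell::Cell;

// Task-local warning counter; conceptually this sits next to the
// task-local FileLogger and is read when the worker task finishes.
thread_local! {
    static WARN_COUNT: Cell<u64> = Cell::new(0);
}

fn log_warn(msg: &str) {
    eprintln!("WARN: {msg}");
    WARN_COUNT.with(|c| c.set(c.get() + 1));
}

fn warn_count() -> u64 {
    WARN_COUNT.with(|c| c.get())
}

fn main() {
    log_warn("could not delete token for user x: not found");
    log_warn("ignoring attribute `mail`");
    println!("task finished with {} warning(s)", warn_count());
}
```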
diff --git a/src/server/verify_job.rs b/src/server/verify_job.rs
index f3cf2277..3241e304 100644
--- a/src/server/verify_job.rs
+++ b/src/server/verify_job.rs
@@ -1,9 +1,9 @@
 use anyhow::{format_err, Error};
+use tracing::info;
 
 use pbs_api_types::{Authid, Operation, VerificationJobConfig};
 use pbs_datastore::DataStore;
 use proxmox_rest_server::WorkerTask;
-use proxmox_sys::task_log;
 
 use crate::{
     backup::{verify_all_backups, verify_filter},
@@ -34,9 +34,9 @@ pub fn do_verification_job(
         move |worker| {
             job.start(&worker.upid().to_string())?;
 
-            task_log!(worker, "Starting datastore verify job '{}'", job_id);
+            info!("Starting datastore verify job '{}'", job_id);
             if let Some(event_str) = schedule {
-                task_log!(worker, "task triggered by schedule '{}'", event_str);
+                info!("task triggered by schedule '{}'", event_str);
             }
 
             let ns = match verification_job.ns {
@@ -58,9 +58,9 @@ pub fn do_verification_job(
             let job_result = match result {
                 Ok(ref failed_dirs) if failed_dirs.is_empty() => Ok(()),
                 Ok(ref failed_dirs) => {
-                    task_log!(worker, "Failed to verify the following snapshots/groups:");
+                    info!("Failed to verify the following snapshots/groups:");
                     for dir in failed_dirs {
-                        task_log!(worker, "\t{}", dir);
+                        info!("\t{}", dir);
                     }
 
                     Err(format_err!(
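
Side note, not part of the patch: per the cover letter, the single layer decides per event where output goes. Events inside a span named `worker_task` reach the `FileLogger`, everything else goes to syslog, and Error events inside a worker span go to both. A table-sized sketch of that rule (my reading of the cover letter, not the actual layer code):

```rust
// Log levels relevant to the routing decision.
#[derive(Clone, Copy)]
enum Level { Info, Warn, Error }

/// Returns (to_task_log, to_syslog) for one event, assuming the
/// routing described in the cover letter.
fn route(level: Level, in_worker_task_span: bool) -> (bool, bool) {
    match (in_worker_task_span, level) {
        (true, Level::Error) => (true, true), // errors are duplicated to syslog
        (true, _) => (true, false),           // info!/warn! inside a task: task log only
        (false, _) => (false, true),          // no worker_task span: syslog only
    }
}

fn main() {
    assert_eq!(route(Level::Info, true), (true, false));
    assert_eq!(route(Level::Error, true), (true, true));
    assert_eq!(route(Level::Warn, false), (false, true));
    println!("routing table checks out");
}
```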
diff --git a/src/tape/drive/mod.rs b/src/tape/drive/mod.rs
index b912b234..d36e00a8 100644
--- a/src/tape/drive/mod.rs
+++ b/src/tape/drive/mod.rs
@@ -12,6 +12,7 @@ use nix::fcntl::OFlag;
 use nix::sys::stat::Mode;
 use serde::Deserialize;
 use serde_json::Value;
+use tracing::info;
 
 use proxmox_sys::fs::{
     atomic_open_or_create_file, file_read_optional_string, lock_file, replace_file, CreateOptions,
@@ -19,7 +20,7 @@ use proxmox_sys::fs::{
 
 use proxmox_io::ReadExt;
 use proxmox_section_config::SectionConfigData;
-use proxmox_sys::{task_log, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 use proxmox_uuid::Uuid;
 
 use pbs_api_types::{Fingerprint, LtoTapeDrive, VirtualTapeDrive};
@@ -381,11 +382,9 @@ pub fn request_and_load_media(
 ) -> Result<(Box<dyn TapeDriver>, MediaId), Error> {
     let check_label = |handle: &mut dyn TapeDriver, uuid: &proxmox_uuid::Uuid| {
         if let Ok((Some(media_id), _)) = handle.read_label() {
-            task_log!(
-                worker,
+            info!(
                 "found media label {} ({})",
-                media_id.label.label_text,
-                media_id.label.uuid,
+                media_id.label.label_text, media_id.label.uuid,
             );
 
             if media_id.label.uuid == *uuid {
@@ -423,19 +422,16 @@ pub fn request_and_load_media(
                     let update_and_log_request_error =
                         |old: &mut TapeRequestError, new: TapeRequestError| -> Result<(), Error> {
                             if new != *old {
-                                task_log!(worker, "{}", new);
+                                info!("{}", new);
                                 let (device_type, device) = if let Some(changer) = changer {
                                     ("changer", changer.as_str())
                                 } else {
                                     ("drive", drive)
                                 };
 
-                                task_log!(
-                                    worker,
+                                info!(
                                     "Please insert media '{}' into {} '{}'",
-                                    label_text,
-                                    device_type,
-                                    device
+                                    label_text, device_type, device
                                 );
                                 send_load_media_notification(
                                     notification_mode,
@@ -460,18 +456,11 @@ pub fn request_and_load_media(
                                 std::thread::sleep(std::time::Duration::from_millis(100));
                             }
                         } else if drive_config.changer.is_none() {
-                            task_log!(
-                                worker,
-                                "Checking for media '{}' in drive '{}'",
-                                label_text,
-                                drive
-                            );
+                            info!("Checking for media '{}' in drive '{}'", label_text, drive);
                         } else {
-                            task_log!(
-                                worker,
+                            info!(
                                 "trying to load media '{}' into drive '{}'",
-                                label_text,
-                                drive
+                                label_text, drive
                             );
                         }
 
@@ -499,8 +488,7 @@ pub fn request_and_load_media(
 
                         let request_error = match handle.read_label() {
                             Ok((Some(media_id), _)) if media_id.label.uuid == label.uuid => {
-                                task_log!(
-                                    worker,
+                                info!(
                                     "found media label {} ({})",
                                     media_id.label.label_text,
                                     media_id.label.uuid.to_string(),
diff --git a/src/tape/pool_writer/mod.rs b/src/tape/pool_writer/mod.rs
index 1df297b6..b186ad6e 100644
--- a/src/tape/pool_writer/mod.rs
+++ b/src/tape/pool_writer/mod.rs
@@ -11,8 +11,8 @@ use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
 
 use anyhow::{bail, Error};
+use tracing::{info, warn};
 
-use proxmox_sys::{task_log, task_warn};
 use proxmox_uuid::Uuid;
 
 use pbs_datastore::{DataStore, SnapshotReader};
@@ -62,7 +62,6 @@ impl PoolWriter {
     pub fn new(
         mut pool: MediaPool,
         drive_name: &str,
-        worker: &WorkerTask,
         notification_mode: TapeNotificationMode,
         force_media_set: bool,
         ns_magic: bool,
@@ -71,11 +70,11 @@ impl PoolWriter {
 
         let new_media_set_reason = pool.start_write_session(current_time, force_media_set)?;
         if let Some(reason) = new_media_set_reason {
-            task_log!(worker, "starting new media set - reason: {}", reason,);
+            info!("starting new media set - reason: {}", reason);
         }
 
         let media_set_uuid = pool.current_media_set().uuid();
-        task_log!(worker, "media set uuid: {}", media_set_uuid);
+        info!("media set uuid: {}", media_set_uuid);
 
         let mut catalog_set = CatalogSet::new();
 
@@ -130,7 +129,7 @@ impl PoolWriter {
     }
 
     /// Eject media and drop PoolWriterState (close drive)
-    pub fn eject_media(&mut self, worker: &WorkerTask) -> Result<(), Error> {
+    pub fn eject_media(&mut self) -> Result<(), Error> {
         let mut status = match self.status.take() {
             Some(status) => status,
             None => return Ok(()), // no media loaded
@@ -139,13 +138,13 @@ impl PoolWriter {
         let (drive_config, _digest) = pbs_config::drive::config()?;
 
         if let Some((mut changer, _)) = media_changer(&drive_config, &self.drive_name)? {
-            task_log!(worker, "eject media");
+            info!("eject media");
             status.drive.eject_media()?; // rewind and eject early, so that unload_media is faster
             drop(status); // close drive
-            task_log!(worker, "unload media");
+            info!("unload media");
             changer.unload_media(None)?; //eject and unload
         } else {
-            task_log!(worker, "standalone drive - ejecting media");
+            info!("standalone drive - ejecting media");
             status.drive.eject_media()?;
         }
 
@@ -153,14 +152,14 @@ impl PoolWriter {
     }
 
     /// Export current media set and drop PoolWriterState (close drive)
-    pub fn export_media_set(&mut self, worker: &WorkerTask) -> Result<(), Error> {
+    pub fn export_media_set(&mut self) -> Result<(), Error> {
         let mut status = self.status.take();
 
         let (drive_config, _digest) = pbs_config::drive::config()?;
 
         if let Some((mut changer, _)) = media_changer(&drive_config, &self.drive_name)? {
             if let Some(ref mut status) = status {
-                task_log!(worker, "rewind media");
+                info!("rewind media");
                 // rewind first so that the unload command later does not run into a timeout
                 status.drive.rewind()?;
             }
@@ -170,25 +169,19 @@ impl PoolWriter {
                 let media = self.pool.lookup_media(media_uuid)?;
                 let label_text = media.label_text();
                 if let Some(slot) = changer.export_media(label_text)? {
-                    task_log!(
-                        worker,
+                    info!(
                         "exported media '{}' to import/export slot {}",
-                        label_text,
-                        slot
+                        label_text, slot
                     );
                 } else {
-                    task_warn!(
-                        worker,
+                    warn!(
                         "export failed - media '{}' is not online or in different drive",
                         label_text
                     );
                 }
             }
         } else if let Some(mut status) = status {
-            task_log!(
-                worker,
-                "standalone drive - ejecting media instead of export"
-            );
+            info!("standalone drive - ejecting media instead of export");
             status.drive.eject_media()?;
         }
 
@@ -238,11 +231,7 @@ impl PoolWriter {
             return Ok(media_uuid);
         }
 
-        task_log!(
-            worker,
-            "allocated new writable media '{}'",
-            media.label_text()
-        );
+        info!("allocated new writable media '{}'", media.label_text());
 
         if let Some(PoolWriterState { mut drive, .. }) = self.status.take() {
             if let Some(uuid) = &last_media_uuid {
@@ -252,7 +241,7 @@ impl PoolWriter {
                         .set_media_bytes_used(uuid, Some(stats.total_used_native_capacity))?;
                 }
 
-                task_log!(worker, "eject current media");
+                info!("eject current media");
                 drive.eject_media()?;
             }
         }
@@ -270,7 +259,7 @@ impl PoolWriter {
         // test for critical tape alert flags
         if let Ok(alert_flags) = drive.tape_alert_flags() {
             if !alert_flags.is_empty() {
-                task_log!(worker, "TapeAlertFlags: {:?}", alert_flags);
+                info!("TapeAlertFlags: {:?}", alert_flags);
                 if tape_alert_flags_critical(alert_flags) {
                     self.pool.set_media_status_damaged(&media_uuid)?;
                     bail!(
@@ -281,20 +270,15 @@ impl PoolWriter {
             }
         }
 
-        let (catalog, is_new_media) = update_media_set_label(
-            worker,
-            drive.as_mut(),
-            old_media_id.media_set_label,
-            media.id(),
-        )?;
+        let (catalog, is_new_media) =
+            update_media_set_label(drive.as_mut(), old_media_id.media_set_label, media.id())?;
 
         self.catalog_set.lock().unwrap().append_catalog(catalog)?;
 
         let media_set = media.media_set_label().unwrap();
 
         if is_new_media && media_set.seq_nr >= MEDIA_SET_SEQ_NR_WARN_LIMIT {
-            task_warn!(
-                worker,
+            warn!(
                 "large media-set detected ({}), consider using a different allocation policy",
                 media_set.seq_nr
             );
@@ -311,7 +295,7 @@ impl PoolWriter {
 
         if is_new_media {
             // add catalogs from previous media
-            self.append_media_set_catalogs(worker)?;
+            self.append_media_set_catalogs()?;
         }
 
         self.used_tapes.insert(media_uuid.clone());
@@ -331,12 +315,12 @@ impl PoolWriter {
  // Check if tape is loaded, then move to EOM (if not already there)
     //
     // Returns the tape position at EOM.
-    fn prepare_tape_write(status: &mut PoolWriterState, worker: &WorkerTask) -> Result<u64, Error> {
+    fn prepare_tape_write(status: &mut PoolWriterState) -> Result<u64, Error> {
         if !status.at_eom {
-            task_log!(worker, "moving to end of media");
+            info!("moving to end of media");
             status.drive.move_to_eom(true)?;
             status.at_eom = true;
-            task_log!(worker, "arrived at end of media");
+            info!("arrived at end of media");
         }
 
         let current_file_number = status.drive.current_file_number()?;
@@ -357,7 +341,7 @@ impl PoolWriter {
     /// on the media (return value 'Ok(false, _)'). In that case, the
     /// archive is marked incomplete. The caller should mark the media
     /// as full and try again using another media.
-    pub fn append_catalog_archive(&mut self, worker: &WorkerTask) -> Result<bool, Error> {
+    pub fn append_catalog_archive(&mut self) -> Result<bool, Error> {
         let catalog_magic = self.catalog_version();
 
         let status = match self.status {
@@ -365,7 +349,7 @@ impl PoolWriter {
             None => bail!("PoolWriter - no media loaded"),
         };
 
-        Self::prepare_tape_write(status, worker)?;
+        Self::prepare_tape_write(status)?;
 
         let catalog_set = self.catalog_set.lock().unwrap();
 
@@ -408,7 +392,7 @@ impl PoolWriter {
     }
 
     // Append catalogs for all previous media in set (without last)
-    fn append_media_set_catalogs(&mut self, worker: &WorkerTask) -> Result<(), Error> {
+    fn append_media_set_catalogs(&mut self) -> Result<(), Error> {
         let media_set = self.pool.current_media_set();
 
         let mut media_list = media_set.media_list();
@@ -424,7 +408,7 @@ impl PoolWriter {
             None => bail!("PoolWriter - no media loaded"),
         };
 
-        Self::prepare_tape_write(status, worker)?;
+        Self::prepare_tape_write(status)?;
 
         for (seq_nr, uuid) in media_list.iter().enumerate() {
             let uuid = match uuid {
@@ -436,7 +420,7 @@ impl PoolWriter {
 
             let mut file = Self::open_catalog_file(uuid)?;
 
-            task_log!(worker, "write catalog for previous media: {}", uuid);
+            info!("write catalog for previous media: {}", uuid);
 
             if tape_write_catalog(
                 writer.as_mut(),
@@ -467,7 +451,6 @@ impl PoolWriter {
     /// media.
     pub fn append_snapshot_archive(
         &mut self,
-        worker: &WorkerTask,
         snapshot_reader: &SnapshotReader,
     ) -> Result<(bool, usize), Error> {
         let status = match self.status {
@@ -475,7 +458,7 @@ impl PoolWriter {
             None => bail!("PoolWriter - no media loaded"),
         };
 
-        let current_file_number = Self::prepare_tape_write(status, worker)?;
+        let current_file_number = Self::prepare_tape_write(status)?;
 
         let (done, bytes_written) = {
             let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
@@ -521,7 +504,7 @@ impl PoolWriter {
             None => bail!("PoolWriter - no media loaded"),
         };
 
-        let current_file_number = Self::prepare_tape_write(status, worker)?;
+        let current_file_number = Self::prepare_tape_write(status)?;
 
         let writer = status.drive.write_file()?;
 
@@ -533,8 +516,7 @@ impl PoolWriter {
         status.bytes_written_after_sync += bytes_written;
 
         let elapsed = start_time.elapsed()?.as_secs_f64();
-        task_log!(
-            worker,
+        info!(
             "wrote {} chunks ({:.2} MB at {:.2} MB/s)",
             saved_chunks.len(),
             bytes_written as f64 / 1_000_000.0,
@@ -614,7 +596,7 @@ fn write_chunk_archive<'a>(
         }
 
         if writer.bytes_written() > max_size {
-            //task_log!(worker, "Chunk Archive max size reached, closing archive");
+            //info!("Chunk Archive max size reached, closing archive");
             break;
         }
     }
@@ -628,7 +610,6 @@ fn write_chunk_archive<'a>(
 // set label does not match the expected media set, overwrite the
 // media set label.
 fn update_media_set_label(
-    worker: &WorkerTask,
     drive: &mut dyn TapeDriver,
     old_set: Option<MediaSetLabel>,
     media_id: &MediaId,
@@ -657,7 +638,7 @@ fn update_media_set_label(
 
     let new_media = match old_set {
         None => {
-            task_log!(worker, "writing new media set label");
+            info!("writing new media set label");
             drive.write_media_set_label(new_set, key_config.as_ref())?;
             media_catalog = MediaCatalog::overwrite(TAPE_STATUS_DIR, media_id, false)?;
             true
@@ -681,8 +662,7 @@ fn update_media_set_label(
 
                 false
             } else {
-                task_log!(
-                    worker,
+                info!(
                     "writing new media set label (overwrite '{}/{}')",
                     media_set_label.uuid.to_string(),
                     media_set_label.seq_nr,
diff --git a/src/tools/disks/mod.rs b/src/tools/disks/mod.rs
index 94f89e0a..f916416f 100644
--- a/src/tools/disks/mod.rs
+++ b/src/tools/disks/mod.rs
@@ -16,14 +16,13 @@ use ::serde::{Deserialize, Serialize};
 
 use proxmox_lang::error::io_err_other;
 use proxmox_lang::{io_bail, io_format_err};
-use proxmox_rest_server::WorkerTask;
 use proxmox_schema::api;
 use proxmox_sys::linux::procfs::{mountinfo::Device, MountInfo};
-use proxmox_sys::task_log;
 
 use pbs_api_types::{BLOCKDEVICE_DISK_AND_PARTITION_NAME_REGEX, BLOCKDEVICE_NAME_REGEX};
 
 mod zfs;
+use tracing::info;
 pub use zfs::*;
 mod zpool_status;
 pub use zpool_status::*;
@@ -1116,7 +1115,7 @@ pub fn inititialize_gpt_disk(disk: &Disk, uuid: Option<&str>) -> Result<(), Erro
 
 /// Wipes all labels and the first 200 MiB of a disk/partition (or the whole if it is smaller).
 /// If called with a partition, also sets the partition type to 0x83 'Linux filesystem'.
-pub fn wipe_blockdev(disk: &Disk, worker: Arc<WorkerTask>) -> Result<(), Error> {
+pub fn wipe_blockdev(disk: &Disk) -> Result<(), Error> {
     let disk_path = match disk.device_path() {
         Some(path) => path,
         None => bail!("disk {:?} has no node in /dev", disk.syspath()),
@@ -1137,13 +1136,13 @@ pub fn wipe_blockdev(disk: &Disk, worker: Arc<WorkerTask>) -> Result<(), Error>
 
     to_wipe.push(disk_path.to_path_buf());
 
-    task_log!(worker, "Wiping block device {}", disk_path.display());
+    info!("Wiping block device {}", disk_path.display());
 
     let mut wipefs_command = std::process::Command::new("wipefs");
     wipefs_command.arg("--all").args(&to_wipe);
 
     let wipefs_output = proxmox_sys::command::run_command(wipefs_command, None)?;
-    task_log!(worker, "wipefs output: {}", wipefs_output);
+    info!("wipefs output: {}", wipefs_output);
 
     let size = disk.size().map(|size| size / 1024 / 1024)?;
     let count = size.min(200);
@@ -1163,21 +1162,17 @@ pub fn wipe_blockdev(disk: &Disk, worker: Arc<WorkerTask>) -> Result<(), Error>
     dd_command.args(args);
 
     let dd_output = proxmox_sys::command::run_command(dd_command, None)?;
-    task_log!(worker, "dd output: {}", dd_output);
+    info!("dd output: {}", dd_output);
 
     if is_partition {
         // set the partition type to 0x83 'Linux filesystem'
-        change_parttype(disk, "8300", worker)?;
+        change_parttype(disk, "8300")?;
     }
 
     Ok(())
 }
 
-pub fn change_parttype(
-    part_disk: &Disk,
-    part_type: &str,
-    worker: Arc<WorkerTask>,
-) -> Result<(), Error> {
+pub fn change_parttype(part_disk: &Disk, part_type: &str) -> Result<(), Error> {
     let part_path = match part_disk.device_path() {
         Some(path) => path,
         None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
@@ -1199,7 +1194,7 @@ pub fn change_parttype(
         };
         sgdisk_command.arg(part_disk_parent_path);
         let sgdisk_output = proxmox_sys::command::run_command(sgdisk_command, None)?;
-        task_log!(worker, "sgdisk output: {}", sgdisk_output);
+        info!("sgdisk output: {}", sgdisk_output);
     }
     Ok(())
 }
diff --git a/tests/worker-task-abort.rs b/tests/worker-task-abort.rs
index eda86f01..80243dbb 100644
--- a/tests/worker-task-abort.rs
+++ b/tests/worker-task-abort.rs
@@ -7,24 +7,25 @@ extern crate tokio;
 
 use proxmox_lang::try_block;
 use proxmox_sys::fs::CreateOptions;
-use proxmox_sys::{task_log, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 
 use pbs_api_types::{Authid, UPID};
 
 use proxmox_rest_server::{CommandSocket, WorkerTask};
+use tracing::info;
 
 fn garbage_collection(worker: &WorkerTask) -> Result<(), Error> {
-    task_log!(worker, "start garbage collection");
+    info!("start garbage collection");
 
     for i in 0..50 {
         worker.check_abort()?;
 
-        task_log!(worker, "progress {}", i);
+        info!("progress {}", i);
 
         std::thread::sleep(std::time::Duration::from_millis(10));
     }
 
-    task_log!(worker, "end garbage collection");
+    info!("end garbage collection");
 
     Ok(())
 }
-- 
2.43.0



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] [PATCH proxmox-backup v5 4/4] api: switch from task_log! macro to tracing
  2024-06-13 13:56 [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction Gabriel Goller
                   ` (2 preceding siblings ...)
  2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox-backup v5 3/4] switch from task_log! macro to tracing Gabriel Goller
@ 2024-06-13 13:56 ` Gabriel Goller
  2024-06-24 11:13 ` [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction Lukas Wagner
  2024-06-26 13:10 ` Gabriel Goller
  5 siblings, 0 replies; 14+ messages in thread
From: Gabriel Goller @ 2024-06-13 13:56 UTC (permalink / raw)
  To: pbs-devel

Import `proxmox-log` and substitute all `task_log!`
(and `task_warn!`, `task_error!`) invocations with tracing calls (`info!`,
`warn!`, etc.). Remove worker references where they are no longer
necessary.
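
The mechanism behind dropping the `worker` argument can be sketched with a
small, std-only stand-in (the real series uses the `tracing` crate with a
custom layer; the names `info`, `run_worker_task`, and `TASK_LOG` below are
illustrative only, not the actual proxmox-log API): each worker task installs
its own log sink in thread-local storage, so a bare logging call needs no
worker handle and falls back to syslog when no task context is set.

```rust
use std::cell::RefCell;

thread_local! {
    // Per-thread task log; `None` means "not inside a worker task".
    static TASK_LOG: RefCell<Option<Vec<String>>> = RefCell::new(None);
}

// Stand-in for `tracing::info!`: no `worker` parameter needed.
fn info(msg: &str) {
    TASK_LOG.with(|log| match log.borrow_mut().as_mut() {
        Some(buf) => buf.push(msg.to_owned()), // inside a task: task log
        None => eprintln!("syslog: {msg}"),    // otherwise: fall back to syslog
    });
}

// Stand-in for the worker-task wrapper that instantiates the TLS logger.
fn run_worker_task<F: FnOnce()>(task: F) -> Vec<String> {
    TASK_LOG.with(|log| *log.borrow_mut() = Some(Vec::new()));
    task();
    TASK_LOG.with(|log| log.borrow_mut().take().unwrap())
}

fn main() {
    let lines = run_worker_task(|| {
        info("start garbage collection");
        info("end garbage collection");
    });
    assert_eq!(lines, ["start garbage collection", "end garbage collection"]);
    info("outside a task"); // no TLS logger set: goes to the syslog fallback
}
```

In the real series the dispatch decision is made by a tracing layer keyed on
the `worker_task` span rather than a hand-rolled TLS lookup, but the effect on
call sites is the same: `task_log!(worker, ...)` becomes a plain `info!(...)`.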

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
---
 src/api2/admin/datastore.rs      |  26 ++--
 src/api2/config/acme.rs          |  20 +--
 src/api2/config/datastore.rs     |  16 +-
 src/api2/config/prune.rs         |  14 +-
 src/api2/node/apt.rs             |  13 +-
 src/api2/node/certificates.rs    |  64 ++++----
 src/api2/node/disks/directory.rs |  13 +-
 src/api2/node/disks/mod.rs       |  12 +-
 src/api2/node/disks/zfs.rs       |  30 ++--
 src/api2/node/mod.rs             |  11 +-
 src/api2/pull.rs                 |  33 ++--
 src/api2/tape/backup.rs          |  70 ++++-----
 src/api2/tape/drive.rs           | 146 +++++++-----------
 src/api2/tape/restore.rs         | 254 ++++++++++---------------------
 14 files changed, 263 insertions(+), 459 deletions(-)

diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 82221c50..01fa2a1f 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -13,6 +13,7 @@ use hyper::{header, Body, Response, StatusCode};
 use serde::Deserialize;
 use serde_json::{json, Value};
 use tokio_stream::wrappers::ReceiverStream;
+use tracing::{info, warn};
 
 use proxmox_async::blocking::WrappedReaderStream;
 use proxmox_async::{io::AsyncChannelWriter, stream::AsyncReaderStream};
@@ -26,7 +27,6 @@ use proxmox_sortable_macro::sortable;
 use proxmox_sys::fs::{
     file_read_firstline, file_read_optional_string, replace_file, CreateOptions,
 };
-use proxmox_sys::{task_log, task_warn};
 use proxmox_time::CalendarEvent;
 
 use pxar::accessor::aio::Accessor;
@@ -909,9 +909,9 @@ pub fn verify(
                 )?
             };
             if !failed_dirs.is_empty() {
-                task_log!(worker, "Failed to verify the following snapshots/groups:");
+                info!("Failed to verify the following snapshots/groups:");
                 for dir in failed_dirs {
-                    task_log!(worker, "\t{}", dir);
+                    info!("\t{}", dir);
                 }
                 bail!("verification failed - please check the log for details");
             }
@@ -1031,9 +1031,9 @@ pub fn prune(
         return Ok(json!(prune_result));
     }
 
-    let prune_group = move |worker: Arc<WorkerTask>| {
+    let prune_group = move |_worker: Arc<WorkerTask>| {
         if keep_all {
-            task_log!(worker, "No prune selection - keeping all files.");
+            info!("No prune selection - keeping all files.");
         } else {
             let mut opts = Vec::new();
             if !ns.is_root() {
@@ -1041,9 +1041,8 @@ pub fn prune(
             }
             crate::server::cli_keep_options(&mut opts, &keep_options);
 
-            task_log!(worker, "retention options: {}", opts.join(" "));
-            task_log!(
-                worker,
+            info!("retention options: {}", opts.join(" "));
+            info!(
                 "Starting prune on {} group \"{}\"",
                 print_store_and_ns(&store, &ns),
                 group.group(),
@@ -1060,7 +1059,7 @@ pub fn prune(
 
             let msg = format!("{}/{}/{timestamp} {mark}", group.ty, group.id);
 
-            task_log!(worker, "{msg}");
+            info!("{msg}");
 
             prune_result.push(PruneResult {
                 backup_type: group.ty,
@@ -1073,8 +1072,7 @@ pub fn prune(
 
             if !keep {
                 if let Err(err) = backup_dir.destroy(false) {
-                    task_warn!(
-                        worker,
+                    warn!(
                         "failed to remove dir {:?}: {}",
                         backup_dir.relative_path(),
                         err,
@@ -1098,7 +1096,7 @@ pub fn prune(
         )?;
         Ok(json!(upid))
     } else {
-        let worker = WorkerTask::new("prune", Some(worker_id), auth_id.to_string(), true)?;
+        let (worker, _) = WorkerTask::new("prune", Some(worker_id), auth_id.to_string(), true)?;
         let result = prune_group(worker.clone());
         worker.log_result(&Ok(()));
         Ok(json!(result))
@@ -1161,9 +1159,7 @@ pub fn prune_datastore(
         Some(worker_id),
         auth_id.to_string(),
         to_stdout,
-        move |worker| {
-            crate::server::prune_datastore(worker, auth_id, prune_options, datastore, dry_run)
-        },
+        move |_worker| crate::server::prune_datastore(auth_id, prune_options, datastore, dry_run),
     )?;
 
     Ok(upid_str)
diff --git a/src/api2/config/acme.rs b/src/api2/config/acme.rs
index 0b6acc50..86e74831 100644
--- a/src/api2/config/acme.rs
+++ b/src/api2/config/acme.rs
@@ -9,12 +9,12 @@ use hex::FromHex;
 use lazy_static::lazy_static;
 use serde::{Deserialize, Serialize};
 use serde_json::{json, Value};
+use tracing::{info, warn};
 
 use proxmox_router::{
     http_bail, list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap,
 };
 use proxmox_schema::{api, param_bail};
-use proxmox_sys::{task_log, task_warn};
 
 use proxmox_acme::types::AccountData as AcmeAccountData;
 use proxmox_acme::Account;
@@ -240,10 +240,10 @@ fn register_account(
         Some(name.to_string()),
         auth_id.to_string(),
         true,
-        move |worker| async move {
+        move |_worker| async move {
             let mut client = AcmeClient::new(directory);
 
-            task_log!(worker, "Registering ACME account '{}'...", &name);
+            info!("Registering ACME account '{}'...", &name);
 
             let account = do_register_account(
                 &mut client,
@@ -255,11 +255,7 @@ fn register_account(
             )
             .await?;
 
-            task_log!(
-                worker,
-                "Registration successful, account URL: {}",
-                account.location
-            );
+            info!("Registration successful, account URL: {}", account.location);
 
             Ok(())
         },
@@ -354,7 +350,7 @@ pub fn deactivate_account(
         Some(name.to_string()),
         auth_id.to_string(),
         true,
-        move |worker| async move {
+        move |_worker| async move {
             match AcmeClient::load(&name)
                 .await?
                 .update_account(&json!({"status": "deactivated"}))
@@ -363,11 +359,9 @@ pub fn deactivate_account(
                 Ok(_account) => (),
                 Err(err) if !force => return Err(err),
                 Err(err) => {
-                    task_warn!(
-                        worker,
+                    warn!(
                         "error deactivating account {}, proceeding anyway - {}",
-                        name,
-                        err,
+                        name, err,
                     );
                 }
             }
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index 6b742acb..3f7b06e0 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -4,11 +4,11 @@ use ::serde::{Deserialize, Serialize};
 use anyhow::Error;
 use hex::FromHex;
 use serde_json::Value;
+use tracing::warn;
 
 use proxmox_router::{http_bail, Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::{api, param_bail, ApiType};
 use proxmox_section_config::SectionConfigData;
-use proxmox_sys::{task_warn, WorkerTaskContext};
 use proxmox_uuid::Uuid;
 
 use pbs_api_types::{
@@ -70,7 +70,6 @@ pub(crate) fn do_create_datastore(
     _lock: BackupLockGuard,
     mut config: SectionConfigData,
     datastore: DataStoreConfig,
-    worker: Option<&dyn WorkerTaskContext>,
 ) -> Result<(), Error> {
     let path: PathBuf = datastore.path.clone().into();
 
@@ -84,7 +83,6 @@ pub(crate) fn do_create_datastore(
         path,
         backup_user.uid,
         backup_user.gid,
-        worker,
         tuning.sync_level.unwrap_or_default(),
     )?;
 
@@ -155,11 +153,11 @@ pub fn create_datastore(
         Some(config.name.to_string()),
         auth_id.to_string(),
         to_stdout,
-        move |worker| {
-            do_create_datastore(lock, section_config, config, Some(&worker))?;
+        move |_worker| {
+            do_create_datastore(lock, section_config, config)?;
 
             if let Some(prune_job_config) = prune_job_config {
-                do_create_prune_job(prune_job_config, Some(&worker))
+                do_create_prune_job(prune_job_config)
             } else {
                 Ok(())
             }
@@ -528,8 +526,8 @@ pub async fn delete_datastore(
         Some(name.clone()),
         auth_id.to_string(),
         to_stdout,
-        move |worker| {
-            pbs_datastore::DataStore::destroy(&name, destroy_data, &worker)?;
+        move |_worker| {
+            pbs_datastore::DataStore::destroy(&name, destroy_data)?;
 
             // ignore errors
             let _ = jobstate::remove_state_file("prune", &name);
@@ -538,7 +536,7 @@ pub async fn delete_datastore(
             if let Err(err) =
                 proxmox_async::runtime::block_on(crate::server::notify_datastore_removed())
             {
-                task_warn!(worker, "failed to notify after datastore removal: {err}");
+                warn!("failed to notify after datastore removal: {err}");
             }
 
             Ok(())
diff --git a/src/api2/config/prune.rs b/src/api2/config/prune.rs
index 4f7ce39c..4f3d94af 100644
--- a/src/api2/config/prune.rs
+++ b/src/api2/config/prune.rs
@@ -1,7 +1,5 @@
 use anyhow::Error;
 use hex::FromHex;
-use proxmox_sys::task_log;
-use proxmox_sys::WorkerTaskContext;
 use serde::{Deserialize, Serialize};
 use serde_json::Value;
 
@@ -15,6 +13,7 @@ use pbs_api_types::{
 use pbs_config::prune;
 
 use pbs_config::CachedUserInfo;
+use tracing::info;
 
 #[api(
     input: {
@@ -58,10 +57,7 @@ pub fn list_prune_jobs(
     Ok(list)
 }
 
-pub fn do_create_prune_job(
-    config: PruneJobConfig,
-    worker: Option<&dyn WorkerTaskContext>,
-) -> Result<(), Error> {
+pub fn do_create_prune_job(config: PruneJobConfig) -> Result<(), Error> {
     let _lock = prune::lock_config()?;
 
     let (mut section_config, _digest) = prune::config()?;
@@ -76,9 +72,7 @@ pub fn do_create_prune_job(
 
     crate::server::jobstate::create_state_file("prunejob", &config.id)?;
 
-    if let Some(worker) = worker {
-        task_log!(worker, "Prune job created: {}", config.id);
-    }
+    info!("Prune job created: {}", config.id);
 
     Ok(())
 }
@@ -108,7 +102,7 @@ pub fn create_prune_job(
 
     user_info.check_privs(&auth_id, &config.acl_path(), PRIV_DATASTORE_MODIFY, true)?;
 
-    do_create_prune_job(config, None)
+    do_create_prune_job(config)
 }
 
 #[api(
diff --git a/src/api2/node/apt.rs b/src/api2/node/apt.rs
index 9c0d9ecc..38af0452 100644
--- a/src/api2/node/apt.rs
+++ b/src/api2/node/apt.rs
@@ -1,6 +1,7 @@
 use anyhow::{bail, format_err, Error};
 use serde_json::{json, Value};
 use std::os::unix::prelude::OsStrExt;
+use tracing::{info, warn};
 
 use proxmox_router::{
     list_subdirs_api_method, Permission, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap,
@@ -83,9 +84,9 @@ fn read_and_update_proxy_config() -> Result<Option<ProxyConfig>, Error> {
     Ok(proxy_config)
 }
 
-fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
+fn do_apt_update(quiet: bool) -> Result<(), Error> {
     if !quiet {
-        worker.log_message("starting apt-get update")
+        info!("starting apt-get update");
     }
 
     read_and_update_proxy_config()?;
@@ -99,7 +100,7 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
         .map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
 
     if !quiet {
-        worker.log_message(String::from_utf8(output.stdout)?);
+        info!("{}", String::from_utf8(output.stdout)?);
     }
 
     // TODO: improve run_command to allow outputting both, stderr and stdout
@@ -114,7 +115,7 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
                     }
                 })
                 .unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
-            worker.log_warning(msg);
+            warn!(msg);
         } else {
             bail!("terminated by signal");
         }
@@ -160,8 +161,8 @@ pub fn apt_update_database(
     let auth_id = rpcenv.get_auth_id().unwrap();
     let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
 
-    let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |worker| {
-        do_apt_update(&worker, quiet)?;
+    let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |_worker| {
+        do_apt_update(quiet)?;
 
         let mut cache = apt::update_cache()?;
 
diff --git a/src/api2/node/certificates.rs b/src/api2/node/certificates.rs
index 6dfcd0c0..e4fbc8ac 100644
--- a/src/api2/node/certificates.rs
+++ b/src/api2/node/certificates.rs
@@ -5,16 +5,17 @@ use anyhow::{bail, format_err, Error};
 use openssl::pkey::PKey;
 use openssl::x509::X509;
 use serde::{Deserialize, Serialize};
+use tracing::info;
 
 use proxmox_router::list_subdirs_api_method;
 use proxmox_router::SubdirMap;
 use proxmox_router::{Permission, Router, RpcEnvironment};
 use proxmox_schema::api;
-use proxmox_sys::{task_log, task_warn};
 
 use pbs_api_types::{NODE_SCHEMA, PRIV_SYS_MODIFY};
 use pbs_buildcfg::configdir;
 use pbs_tools::cert;
+use tracing::warn;
 
 use crate::acme::AcmeClient;
 use crate::api2::types::AcmeDomain;
@@ -301,10 +302,7 @@ async fn order_certificate(
     };
 
     if domains.is_empty() {
-        task_log!(
-            worker,
-            "No domains configured to be ordered from an ACME server."
-        );
+        info!("No domains configured to be ordered from an ACME server.");
         return Ok(None);
     }
 
@@ -312,11 +310,11 @@ async fn order_certificate(
 
     let mut acme = node_config.acme_client().await?;
 
-    task_log!(worker, "Placing ACME order");
+    info!("Placing ACME order");
     let order = acme
         .new_order(domains.iter().map(|d| d.domain.to_ascii_lowercase()))
         .await?;
-    task_log!(worker, "Order URL: {}", order.location);
+    info!("Order URL: {}", order.location);
 
     let identifiers: Vec<String> = order
         .data
@@ -328,7 +326,7 @@ async fn order_certificate(
         .collect();
 
     for auth_url in &order.data.authorizations {
-        task_log!(worker, "Getting authorization details from '{}'", auth_url);
+        info!("Getting authorization details from '{}'", auth_url);
         let mut auth = acme.get_authorization(auth_url).await?;
 
         let domain = match &mut auth.identifier {
@@ -336,11 +334,11 @@ async fn order_certificate(
         };
 
         if auth.status == Status::Valid {
-            task_log!(worker, "{} is already validated!", domain);
+            info!("{} is already validated!", domain);
             continue;
         }
 
-        task_log!(worker, "The validation for {} is pending", domain);
+        info!("The validation for {} is pending", domain);
         let domain_config: &AcmeDomain = get_domain_config(&domain)?;
         let plugin_id = domain_config.plugin.as_deref().unwrap_or("standalone");
         let mut plugin_cfg =
@@ -348,31 +346,28 @@ async fn order_certificate(
                 format_err!("plugin '{}' for domain '{}' not found!", plugin_id, domain)
             })?;
 
-        task_log!(worker, "Setting up validation plugin");
+        info!("Setting up validation plugin");
         let validation_url = plugin_cfg
             .setup(&mut acme, &auth, domain_config, Arc::clone(&worker))
             .await?;
 
-        let result = request_validation(&worker, &mut acme, auth_url, validation_url).await;
+        let result = request_validation(&mut acme, auth_url, validation_url).await;
 
         if let Err(err) = plugin_cfg
             .teardown(&mut acme, &auth, domain_config, Arc::clone(&worker))
             .await
         {
-            task_warn!(
-                worker,
+            warn!(
                 "Failed to teardown plugin '{}' for domain '{}' - {}",
-                plugin_id,
-                domain,
-                err
+                plugin_id, domain, err
             );
         }
 
         result?;
     }
 
-    task_log!(worker, "All domains validated");
-    task_log!(worker, "Creating CSR");
+    info!("All domains validated");
+    info!("Creating CSR");
 
     let csr = proxmox_acme::util::Csr::generate(&identifiers, &Default::default())?;
     let mut finalize_error_cnt = 0u8;
@@ -385,7 +380,7 @@ async fn order_certificate(
 
         match order.status {
             Status::Pending => {
-                task_log!(worker, "still pending, trying to finalize anyway");
+                info!("still pending, trying to finalize anyway");
                 let finalize = order
                     .finalize
                     .as_deref()
@@ -400,7 +395,7 @@ async fn order_certificate(
                 tokio::time::sleep(Duration::from_secs(5)).await;
             }
             Status::Ready => {
-                task_log!(worker, "order is ready, finalizing");
+                info!("order is ready, finalizing");
                 let finalize = order
                     .finalize
                     .as_deref()
@@ -409,18 +404,18 @@ async fn order_certificate(
                 tokio::time::sleep(Duration::from_secs(5)).await;
             }
             Status::Processing => {
-                task_log!(worker, "still processing, trying again in 30 seconds");
+                info!("still processing, trying again in 30 seconds");
                 tokio::time::sleep(Duration::from_secs(30)).await;
             }
             Status::Valid => {
-                task_log!(worker, "valid");
+                info!("valid");
                 break;
             }
             other => bail!("order status: {:?}", other),
         }
     }
 
-    task_log!(worker, "Downloading certificate");
+    info!("Downloading certificate");
     let certificate = acme
         .get_certificate(
             order
@@ -437,15 +432,14 @@ async fn order_certificate(
 }
 
 async fn request_validation(
-    worker: &WorkerTask,
     acme: &mut AcmeClient,
     auth_url: &str,
     validation_url: &str,
 ) -> Result<(), Error> {
-    task_log!(worker, "Triggering validation");
+    info!("Triggering validation");
     acme.request_challenge_validation(validation_url).await?;
 
-    task_log!(worker, "Sleeping for 5 seconds");
+    info!("Sleeping for 5 seconds");
     tokio::time::sleep(Duration::from_secs(5)).await;
 
     loop {
@@ -454,10 +448,7 @@ async fn request_validation(
         let auth = acme.get_authorization(auth_url).await?;
         match auth.status {
             Status::Pending => {
-                task_log!(
-                    worker,
-                    "Status is still 'pending', trying again in 10 seconds"
-                );
+                info!("Status is still 'pending', trying again in 10 seconds");
                 tokio::time::sleep(Duration::from_secs(10)).await;
             }
             Status::Valid => return Ok(()),
@@ -582,15 +573,12 @@ pub fn revoke_acme_cert(rpcenv: &mut dyn RpcEnvironment) -> Result<String, Error
         None,
         auth_id,
         true,
-        move |worker| async move {
-            task_log!(worker, "Loading ACME account");
+        move |_worker| async move {
+            info!("Loading ACME account");
             let mut acme = node_config.acme_client().await?;
-            task_log!(worker, "Revoking old certificate");
+            info!("Revoking old certificate");
             acme.revoke_certificate(cert_pem.as_bytes(), None).await?;
-            task_log!(
-                worker,
-                "Deleting certificate and regenerating a self-signed one"
-            );
+            info!("Deleting certificate and regenerating a self-signed one");
             delete_custom_certificate().await?;
             Ok(())
         },
diff --git a/src/api2/node/disks/directory.rs b/src/api2/node/disks/directory.rs
index 9f1112a9..1db573e9 100644
--- a/src/api2/node/disks/directory.rs
+++ b/src/api2/node/disks/directory.rs
@@ -2,11 +2,11 @@ use ::serde::{Deserialize, Serialize};
 use anyhow::{bail, Error};
 use serde_json::json;
 use std::os::linux::fs::MetadataExt;
+use tracing::info;
 
 use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::api;
 use proxmox_section_config::SectionConfigData;
-use proxmox_sys::task_log;
 
 use pbs_api_types::{
     DataStoreConfig, BLOCKDEVICE_NAME_SCHEMA, DATASTORE_SCHEMA, NODE_SCHEMA, PRIV_SYS_AUDIT,
@@ -179,8 +179,8 @@ pub fn create_datastore_disk(
         Some(name.clone()),
         auth_id,
         to_stdout,
-        move |worker| {
-            task_log!(worker, "create datastore '{}' on disk {}", name, disk);
+        move |_worker| {
+            info!("create datastore '{}' on disk {}", name, disk);
 
             let add_datastore = add_datastore.unwrap_or(false);
             let filesystem = filesystem.unwrap_or(FileSystemType::Ext4);
@@ -213,12 +213,7 @@ pub fn create_datastore_disk(
                     bail!("datastore '{}' already exists.", datastore.name);
                 }
 
-                crate::api2::config::datastore::do_create_datastore(
-                    lock,
-                    config,
-                    datastore,
-                    Some(&worker),
-                )?;
+                crate::api2::config::datastore::do_create_datastore(lock, config, datastore)?;
             }
 
             Ok(())
diff --git a/src/api2/node/disks/mod.rs b/src/api2/node/disks/mod.rs
index 711dae7b..4ec511aa 100644
--- a/src/api2/node/disks/mod.rs
+++ b/src/api2/node/disks/mod.rs
@@ -6,7 +6,7 @@ use proxmox_router::{
 };
 use proxmox_schema::api;
 use proxmox_sortable_macro::sortable;
-use proxmox_sys::task_log;
+use tracing::info;
 
 use pbs_api_types::{
     BLOCKDEVICE_DISK_AND_PARTITION_NAME_SCHEMA, BLOCKDEVICE_NAME_SCHEMA, NODE_SCHEMA,
@@ -164,8 +164,8 @@ pub fn initialize_disk(
         Some(disk.clone()),
         auth_id,
         to_stdout,
-        move |worker| {
-            task_log!(worker, "initialize disk {}", disk);
+        move |_worker| {
+            info!("initialize disk {}", disk);
 
             let disk_manager = DiskManage::new();
             let disk_info = disk_manager.disk_by_name(&disk)?;
@@ -209,13 +209,13 @@ pub fn wipe_disk(disk: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Value,
         Some(disk.clone()),
         auth_id,
         to_stdout,
-        move |worker| {
-            task_log!(worker, "wipe disk {}", disk);
+        move |_worker| {
+            info!("wipe disk {}", disk);
 
             let disk_manager = DiskManage::new();
             let disk_info = disk_manager.partition_by_name(&disk)?;
 
-            wipe_blockdev(&disk_info, worker)?;
+            wipe_blockdev(&disk_info)?;
 
             Ok(())
         },
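[Editorial sketch, not part of the patch: per the RFC v2 notes in the cover letter, the single layer routes events by span name: everything inside a span named `worker_task` goes to the `FileLogger`, everything else to syslog, and errors go to both. A toy routing rule capturing that decision table (`route` and `Target` are invented for illustration):]

```rust
#[derive(Debug, PartialEq)]
enum Target {
    TaskLog,
    Syslog,
    Both,
}

// Illustrative routing rule from the cover letter: the enclosing span
// name decides the sink, and Error events always go to both sinks.
fn route(span: Option<&str>, is_error: bool) -> Target {
    if is_error {
        Target::Both
    } else if span == Some("worker_task") {
        Target::TaskLog
    } else {
        Target::Syslog
    }
}

fn main() {
    assert_eq!(route(Some("worker_task"), false), Target::TaskLog);
    assert_eq!(route(None, false), Target::Syslog);
    assert_eq!(route(Some("worker_task"), true), Target::Both);
    assert_eq!(route(None, true), Target::Both);
    println!("ok");
}
```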
diff --git a/src/api2/node/disks/zfs.rs b/src/api2/node/disks/zfs.rs
index 673dc1bf..2447f4aa 100644
--- a/src/api2/node/disks/zfs.rs
+++ b/src/api2/node/disks/zfs.rs
@@ -1,9 +1,9 @@
 use anyhow::{bail, Error};
 use serde_json::{json, Value};
+use tracing::{error, info};
 
 use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::api;
-use proxmox_sys::{task_error, task_log};
 
 use pbs_api_types::{
     DataStoreConfig, ZfsCompressionType, ZfsRaidLevel, ZpoolListItem, DATASTORE_SCHEMA,
@@ -228,13 +228,10 @@ pub fn create_zpool(
         Some(name.clone()),
         auth_id,
         to_stdout,
-        move |worker| {
-            task_log!(
-                worker,
+        move |_worker| {
+            info!(
                 "create {:?} zpool '{}' on devices '{}'",
-                raidlevel,
-                name,
-                devices_text
+                raidlevel, name, devices_text
             );
 
             let mut command = std::process::Command::new("zpool");
@@ -275,12 +272,12 @@ pub fn create_zpool(
                 }
             }
 
-            task_log!(worker, "# {:?}", command);
+            info!("# {:?}", command);
 
             match proxmox_sys::command::run_command(command, None) {
-                Ok(output) => task_log!(worker, "{output}"),
+                Ok(output) => info!("{output}"),
                 Err(err) => {
-                    task_error!(worker, "{err}");
+                    error!("{err}");
                     bail!("Error during 'zpool create', see task log for more details");
                 }
             };
@@ -299,11 +296,11 @@ pub fn create_zpool(
                 command.arg(&format!("compression={}", compression));
             }
             command.args(["relatime=on", &name]);
-            task_log!(worker, "# {:?}", command);
+            info!("# {:?}", command);
             match proxmox_sys::command::run_command(command, None) {
-                Ok(output) => task_log!(worker, "{output}"),
+                Ok(output) => info!("{output}"),
                 Err(err) => {
-                    task_error!(worker, "{err}");
+                    error!("{err}");
                     bail!("Error during 'zfs set', see task log for more details");
                 }
             };
@@ -319,12 +316,7 @@ pub fn create_zpool(
                     bail!("datastore '{}' already exists.", datastore.name);
                 }
 
-                crate::api2::config::datastore::do_create_datastore(
-                    lock,
-                    config,
-                    datastore,
-                    Some(&worker),
-                )?;
+                crate::api2::config::datastore::do_create_datastore(lock, config, datastore)?;
             }
 
             Ok(())
diff --git a/src/api2/node/mod.rs b/src/api2/node/mod.rs
index 931143fd..7055a683 100644
--- a/src/api2/node/mod.rs
+++ b/src/api2/node/mod.rs
@@ -25,6 +25,7 @@ use proxmox_sortable_macro::sortable;
 use proxmox_sys::fd::fd_change_cloexec;
 
 use pbs_api_types::{NODE_SCHEMA, PRIV_SYS_CONSOLE};
+use tracing::{info, warn};
 
 use crate::auth::{private_auth_keyring, public_auth_keyring};
 use crate::tools;
@@ -181,20 +182,18 @@ async fn termproxy(cmd: Option<String>, rpcenv: &mut dyn RpcEnvironment) -> Resu
             let stdout = child.stdout.take().expect("no child stdout handle");
             let stderr = child.stderr.take().expect("no child stderr handle");
 
-            let worker_stdout = worker.clone();
             let stdout_fut = async move {
                 let mut reader = BufReader::new(stdout).lines();
                 while let Some(line) = reader.next_line().await? {
-                    worker_stdout.log_message(line);
+                    info!(line);
                 }
                 Ok::<(), Error>(())
             };
 
-            let worker_stderr = worker.clone();
             let stderr_fut = async move {
                 let mut reader = BufReader::new(stderr).lines();
                 while let Some(line) = reader.next_line().await? {
-                    worker_stderr.log_warning(line);
+                    warn!(line);
                 }
                 Ok::<(), Error>(())
             };
@@ -226,9 +225,9 @@ async fn termproxy(cmd: Option<String>, rpcenv: &mut dyn RpcEnvironment) -> Resu
                 }
 
                 if let Err(err) = child.kill().await {
-                    worker.log_warning(format!("error killing termproxy: {}", err));
+                    warn!("error killing termproxy: {}", err);
                 } else if let Err(err) = child.wait().await {
-                    worker.log_warning(format!("error awaiting termproxy: {}", err));
+                    warn!("error awaiting termproxy: {}", err);
                 }
             }
 
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index 5d02510c..751d078d 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -1,10 +1,10 @@
 //! Sync datastore from remote server
 use anyhow::{bail, format_err, Error};
 use futures::{future::FutureExt, select};
+use tracing::info;
 
 use proxmox_router::{Permission, Router, RpcEnvironment};
 use proxmox_schema::api;
-use proxmox_sys::task_log;
 
 use pbs_api_types::{
     Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, DATASTORE_SCHEMA,
@@ -128,12 +128,12 @@ pub fn do_sync_job(
             let worker_future = async move {
                 let pull_params = PullParameters::try_from(&sync_job)?;
 
-                task_log!(worker, "Starting datastore sync job '{}'", job_id);
+                info!("Starting datastore sync job '{}'", job_id);
                 if let Some(event_str) = schedule {
-                    task_log!(worker, "task triggered by schedule '{}'", event_str);
+                    info!("task triggered by schedule '{}'", event_str);
                 }
-                task_log!(
-                    worker,
+
+                info!(
                     "sync datastore '{}' from '{}{}'",
                     sync_job.store,
                     sync_job
@@ -143,33 +143,29 @@ pub fn do_sync_job(
                     sync_job.remote_store,
                 );
 
-                let pull_stats = pull_store(&worker, pull_params).await?;
+                let pull_stats = pull_store(pull_params).await?;
 
                 if pull_stats.bytes != 0 {
                     let amount = HumanByte::from(pull_stats.bytes);
                     let rate = HumanByte::new_binary(
                         pull_stats.bytes as f64 / pull_stats.elapsed.as_secs_f64(),
                     );
-                    task_log!(
-                        worker,
+                    info!(
                         "Summary: sync job pulled {amount} in {} chunks (average rate: {rate}/s)",
                         pull_stats.chunk_count,
                     );
                 } else {
-                    task_log!(worker, "Summary: sync job found no new data to pull");
+                    info!("Summary: sync job found no new data to pull");
                 }
 
                 if let Some(removed) = pull_stats.removed {
-                    task_log!(
-                        worker,
+                    info!(
                         "Summary: removed vanished: snapshots: {}, groups: {}, namespaces: {}",
-                        removed.snapshots,
-                        removed.groups,
-                        removed.namespaces,
+                        removed.snapshots, removed.groups, removed.namespaces,
                     );
                 }
 
-                task_log!(worker, "sync job '{}' end", &job_id);
+                info!("sync job '{}' end", &job_id);
 
                 Ok(())
             };
@@ -315,21 +311,20 @@ async fn pull(
         auth_id.to_string(),
         true,
         move |worker| async move {
-            task_log!(
-                worker,
+            info!(
                 "pull datastore '{}' from '{}/{}'",
                 store,
                 remote.as_deref().unwrap_or("-"),
                 remote_store,
             );
 
-            let pull_future = pull_store(&worker, pull_params);
+            let pull_future = pull_store(pull_params);
             (select! {
                 success = pull_future.fuse() => success,
                 abort = worker.abort_future().map(|_| Err(format_err!("pull aborted"))) => abort,
             })?;
 
-            task_log!(worker, "pull datastore '{}' end", store);
+            info!("pull datastore '{}' end", store);
 
             Ok(())
         },
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 896e809b..54203f36 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -2,11 +2,12 @@ use std::sync::{Arc, Mutex};
 
 use anyhow::{bail, format_err, Error};
 use serde_json::Value;
+use tracing::{info, warn};
 
 use proxmox_lang::try_block;
 use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::api;
-use proxmox_sys::{task_log, task_warn, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 
 use pbs_api_types::{
     print_ns_and_snapshot, print_store_and_ns, Authid, MediaPoolConfig, Operation,
@@ -176,7 +177,7 @@ pub fn do_tape_backup_job(
             let job_result = try_block!({
                 if schedule.is_some() {
                     // for scheduled tape backup jobs, we wait indefinitely for the lock
-                    task_log!(worker, "waiting for drive lock...");
+                    info!("waiting for drive lock...");
                     loop {
                         worker.check_abort()?;
                         match lock_tape_device(&drive_config, &setup.drive) {
@@ -191,9 +192,9 @@ pub fn do_tape_backup_job(
                 }
                 set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
 
-                task_log!(worker, "Starting tape backup job '{}'", job_id);
+                info!("Starting tape backup job '{}'", job_id);
                 if let Some(event_str) = schedule {
-                    task_log!(worker, "task triggered by schedule '{}'", event_str);
+                    info!("task triggered by schedule '{}'", event_str);
                 }
 
                 backup_worker(
@@ -369,7 +370,7 @@ fn backup_worker(
 ) -> Result<(), Error> {
     let start = std::time::Instant::now();
 
-    task_log!(worker, "update media online status");
+    info!("update media online status");
     let changer_name = update_media_online_status(&setup.drive)?;
 
     let root_namespace = setup.ns.clone().unwrap_or_default();
@@ -381,7 +382,6 @@ fn backup_worker(
     let mut pool_writer = PoolWriter::new(
         pool,
         &setup.drive,
-        worker,
         notification_mode,
         force_media_set,
         ns_magic,
@@ -405,8 +405,7 @@ fn backup_worker(
         None => group_list,
     };
 
-    task_log!(
-        worker,
+    info!(
         "found {} groups (out of {} total)",
         group_list.len(),
         group_count_full
@@ -417,10 +416,7 @@ fn backup_worker(
     let latest_only = setup.latest_only.unwrap_or(false);
 
     if latest_only {
-        task_log!(
-            worker,
-            "latest-only: true (only considering latest snapshots)"
-        );
+        info!("latest-only: true (only considering latest snapshots)");
     }
 
     let datastore_name = datastore.name();
@@ -443,8 +439,7 @@ fn backup_worker(
             .collect();
 
         if snapshot_list.is_empty() {
-            task_log!(
-                worker,
+            info!(
                 "{}, group {} was empty",
                 print_store_and_ns(datastore_name, group.backup_ns()),
                 group.group()
@@ -464,7 +459,7 @@ fn backup_worker(
                     info.backup_dir.backup_ns(),
                     info.backup_dir.as_ref(),
                 ) {
-                    task_log!(worker, "skip snapshot {}", rel_path);
+                    info!("skip snapshot {}", rel_path);
                     continue;
                 }
 
@@ -477,7 +472,7 @@ fn backup_worker(
                     SnapshotBackupResult::Ignored => {}
                 }
                 progress.done_snapshots = 1;
-                task_log!(worker, "percentage done: {}", progress);
+                info!("percentage done: {}", progress);
             }
         } else {
             progress.group_snapshots = snapshot_list.len() as u64;
@@ -490,7 +485,7 @@ fn backup_worker(
                     info.backup_dir.backup_ns(),
                     info.backup_dir.as_ref(),
                 ) {
-                    task_log!(worker, "skip snapshot {}", rel_path);
+                    info!("skip snapshot {}", rel_path);
                     continue;
                 }
 
@@ -503,7 +498,7 @@ fn backup_worker(
                     SnapshotBackupResult::Ignored => {}
                 }
                 progress.done_snapshots = snapshot_number as u64 + 1;
-                task_log!(worker, "percentage done: {}", progress);
+                info!("percentage done: {}", progress);
             }
         }
     }
@@ -511,18 +506,15 @@ fn backup_worker(
     pool_writer.commit()?;
 
     if need_catalog {
-        task_log!(worker, "append media catalog");
+        info!("append media catalog");
 
         let uuid = pool_writer.load_writable_media(worker)?;
-        let done = pool_writer.append_catalog_archive(worker)?;
+        let done = pool_writer.append_catalog_archive()?;
         if !done {
-            task_log!(
-                worker,
-                "catalog does not fit on tape, writing to next volume"
-            );
+            info!("catalog does not fit on tape, writing to next volume");
             pool_writer.set_media_status_full(&uuid)?;
             pool_writer.load_writable_media(worker)?;
-            let done = pool_writer.append_catalog_archive(worker)?;
+            let done = pool_writer.append_catalog_archive()?;
             if !done {
                 bail!("write_catalog_archive failed on second media");
             }
@@ -530,9 +522,9 @@ fn backup_worker(
     }
 
     if setup.export_media_set.unwrap_or(false) {
-        pool_writer.export_media_set(worker)?;
+        pool_writer.export_media_set()?;
     } else if setup.eject_media.unwrap_or(false) {
-        pool_writer.eject_media(worker)?;
+        pool_writer.eject_media()?;
     }
 
     if errors {
@@ -542,7 +534,7 @@ fn backup_worker(
     summary.used_tapes = match pool_writer.get_used_media_labels() {
         Ok(tapes) => Some(tapes),
         Err(err) => {
-            task_warn!(worker, "could not collect list of used tapes: {err}");
+            warn!("could not collect list of used tapes: {err}");
             None
         }
     };
@@ -576,7 +568,7 @@ fn backup_snapshot(
     snapshot: BackupDir,
 ) -> Result<SnapshotBackupResult, Error> {
     let snapshot_path = snapshot.relative_path();
-    task_log!(worker, "backup snapshot {:?}", snapshot_path);
+    info!("backup snapshot {:?}", snapshot_path);
 
     let snapshot_reader = match snapshot.locked_reader() {
         Ok(reader) => reader,
@@ -584,15 +576,10 @@ fn backup_snapshot(
             if !snapshot.full_path().exists() {
                 // we got an error and the dir does not exist,
                 // it probably just vanished, so continue
-                task_log!(worker, "snapshot {:?} vanished, skipping", snapshot_path);
+                info!("snapshot {:?} vanished, skipping", snapshot_path);
                 return Ok(SnapshotBackupResult::Ignored);
             }
-            task_warn!(
-                worker,
-                "failed opening snapshot {:?}: {}",
-                snapshot_path,
-                err
-            );
+            warn!("failed opening snapshot {:?}: {}", snapshot_path, err);
             return Ok(SnapshotBackupResult::Error);
         }
     };
@@ -638,7 +625,7 @@ fn backup_snapshot(
 
     let snapshot_reader = snapshot_reader.lock().unwrap();
 
-    let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
+    let (done, _bytes) = pool_writer.append_snapshot_archive(&snapshot_reader)?;
 
     if !done {
         // does not fit on tape, so we try on next volume
@@ -647,19 +634,14 @@ fn backup_snapshot(
         worker.check_abort()?;
 
         pool_writer.load_writable_media(worker)?;
-        let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
+        let (done, _bytes) = pool_writer.append_snapshot_archive(&snapshot_reader)?;
 
         if !done {
             bail!("write_snapshot_archive failed on second media");
         }
     }
 
-    task_log!(
-        worker,
-        "end backup {}:{:?}",
-        datastore.name(),
-        snapshot_path
-    );
+    info!("end backup {}:{:?}", datastore.name(), snapshot_path);
 
     Ok(SnapshotBackupResult::Success)
 }
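[Editorial sketch, not part of the patch: the cover letter also mentions a task-local warning counter alongside the logger, so that `warn!` events (like the "could not collect list of used tapes" case above) can still flip the final task state to WARNINGS without a `worker` handle. A minimal illustration of that bookkeeping (`warn`, `task_status`, and the status strings are invented here):]

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for the per-task warning counter: every warn-level event
// bumps it, and the task end state reflects the count.
static WARN_COUNT: AtomicUsize = AtomicUsize::new(0);

fn warn(msg: &str) {
    WARN_COUNT.fetch_add(1, Ordering::Relaxed);
    eprintln!("WARN: {msg}");
}

fn task_status() -> String {
    match WARN_COUNT.load(Ordering::Relaxed) {
        0 => "OK".to_string(),
        n => format!("WARNINGS: {n}"),
    }
}

fn main() {
    assert_eq!(task_status(), "OK");
    warn("could not collect list of used tapes: example error");
    assert!(task_status().starts_with("WARNINGS"));
    println!("ok");
}
```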
diff --git a/src/api2/tape/drive.rs b/src/api2/tape/drive.rs
index c6fc9f9c..9c761305 100644
--- a/src/api2/tape/drive.rs
+++ b/src/api2/tape/drive.rs
@@ -5,6 +5,7 @@ use std::sync::Arc;
 use anyhow::{bail, format_err, Error};
 use pbs_tape::sg_tape::SgTape;
 use serde_json::Value;
+use tracing::{info, warn};
 
 use proxmox_router::{
     list_subdirs_api_method, Permission, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap,
@@ -12,7 +13,6 @@ use proxmox_router::{
 use proxmox_schema::api;
 use proxmox_section_config::SectionConfigData;
 use proxmox_sortable_macro::sortable;
-use proxmox_sys::{task_log, task_warn};
 use proxmox_uuid::Uuid;
 
 use pbs_api_types::{
@@ -131,13 +131,8 @@ pub fn load_media(
         drive.clone(),
         "load-media",
         Some(job_id),
-        move |worker, config| {
-            task_log!(
-                worker,
-                "loading media '{}' into drive '{}'",
-                label_text,
-                drive
-            );
+        move |_worker, config| {
+            info!("loading media '{}' into drive '{}'", label_text, drive);
             let (mut changer, _) = required_media_changer(&config, &drive)?;
             changer.load_media(&label_text)?;
             Ok(())
@@ -250,8 +245,8 @@ pub fn unload(
         drive.clone(),
         "unload-media",
         Some(drive.clone()),
-        move |worker, config| {
-            task_log!(worker, "unloading media from drive '{}'", drive);
+        move |_worker, config| {
+            info!("unloading media from drive '{}'", drive);
 
             let (mut changer, _) = required_media_changer(&config, &drive)?;
             changer.unload_media(target_slot)?;
@@ -299,9 +294,9 @@ pub fn format_media(
         drive.clone(),
         "format-media",
         Some(drive.clone()),
-        move |worker, config| {
+        move |_worker, config| {
             if let Some(ref label) = label_text {
-                task_log!(worker, "try to load media '{}'", label);
+                info!("try to load media '{}'", label);
                 if let Some((mut changer, _)) = media_changer(&config, &drive)? {
                     changer.load_media(label)?;
                 }
@@ -315,11 +310,8 @@ pub fn format_media(
                 let mut handle = LtoTapeHandle::new(file)?;
                 if let Ok(status) = handle.get_drive_and_media_status() {
                     if status.density >= TapeDensity::LTO9 {
-                        task_log!(worker, "Slow formatting LTO9+ media.");
-                        task_log!(
-                            worker,
-                            "This can take a very long time due to media optimization."
-                        );
+                        info!("Slow formatting LTO9+ media.");
+                        info!("This can take a very long time due to media optimization.");
                     }
                 }
             }
@@ -330,15 +322,15 @@ pub fn format_media(
                         bail!("expected label '{}', found unrelated data", label);
                     }
                     /* assume drive contains no or unrelated data */
-                    task_log!(worker, "unable to read media label: {}", err);
-                    task_log!(worker, "format anyways");
+                    info!("unable to read media label: {}", err);
+                    info!("format anyways");
                     handle.format_media(fast.unwrap_or(true))?;
                 }
                 Ok((None, _)) => {
                     if let Some(label) = label_text {
                         bail!("expected label '{}', found empty tape", label);
                     }
-                    task_log!(worker, "found empty media - format anyways");
+                    info!("found empty media - format anyways");
                     handle.format_media(fast.unwrap_or(true))?;
                 }
                 Ok((Some(media_id), _key_config)) => {
@@ -352,11 +344,9 @@ pub fn format_media(
                         }
                     }
 
-                    task_log!(
-                        worker,
+                    info!(
                         "found media '{}' with uuid '{}'",
-                        media_id.label.label_text,
-                        media_id.label.uuid,
+                        media_id.label.label_text, media_id.label.uuid,
                     );
 
                     let mut inventory = Inventory::new(TAPE_STATUS_DIR);
@@ -504,7 +494,7 @@ pub fn label_media(
         drive.clone(),
         "label-media",
         Some(drive.clone()),
-        move |worker, config| {
+        move |_worker, config| {
             let mut drive = open_drive(&config, &drive)?;
 
             drive.rewind()?;
@@ -526,7 +516,7 @@ pub fn label_media(
                 pool: pool.clone(),
             };
 
-            write_media_label(worker, &mut drive, label, pool)
+            write_media_label(&mut drive, label, pool)
         },
     )?;
 
@@ -534,7 +524,6 @@ pub fn label_media(
 }
 
 fn write_media_label(
-    worker: Arc<WorkerTask>,
     drive: &mut Box<dyn TapeDriver>,
     label: MediaLabel,
     pool: Option<String>,
@@ -549,18 +538,9 @@ fn write_media_label(
     }
     drive.label_tape(&label)?;
     if let Some(ref pool) = pool {
-        task_log!(
-            worker,
-            "Label media '{}' for pool '{}'",
-            label.label_text,
-            pool
-        );
+        info!("Label media '{}' for pool '{}'", label.label_text, pool);
     } else {
-        task_log!(
-            worker,
-            "Label media '{}' (no pool assignment)",
-            label.label_text
-        );
+        info!("Label media '{}' (no pool assignment)", label.label_text);
     }
 
     let media_id = MediaId {
@@ -749,10 +729,10 @@ pub fn clean_drive(drive: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
         drive.clone(),
         "clean-drive",
         Some(drive.clone()),
-        move |worker, config| {
+        move |_worker, config| {
             let (mut changer, _changer_name) = required_media_changer(&config, &drive)?;
 
-            task_log!(worker, "Starting drive clean");
+            info!("Starting drive clean");
 
             changer.clean_drive()?;
 
@@ -763,7 +743,7 @@ pub fn clean_drive(drive: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
                 // test for critical tape alert flags
                 if let Ok(alert_flags) = handle.tape_alert_flags() {
                     if !alert_flags.is_empty() {
-                        task_log!(worker, "TapeAlertFlags: {:?}", alert_flags);
+                        info!("TapeAlertFlags: {:?}", alert_flags);
                         if tape_alert_flags_critical(alert_flags) {
                             bail!("found critical tape alert flags: {:?}", alert_flags);
                         }
@@ -772,13 +752,13 @@ pub fn clean_drive(drive: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
 
                 // test wearout (max. 50 mounts)
                 if let Ok(volume_stats) = handle.volume_statistics() {
-                    task_log!(worker, "Volume mounts: {}", volume_stats.volume_mounts);
+                    info!("Volume mounts: {}", volume_stats.volume_mounts);
                     let wearout = volume_stats.volume_mounts * 2; // (*100.0/50.0);
-                    task_log!(worker, "Cleaning tape wearout: {}%", wearout);
+                    info!("Cleaning tape wearout: {}%", wearout);
                 }
             }
 
-            task_log!(worker, "Drive cleaned successfully");
+            info!("Drive cleaned successfully");
 
             Ok(())
         },
@@ -910,12 +890,12 @@ pub fn update_inventory(
         drive.clone(),
         "inventory-update",
         Some(drive.clone()),
-        move |worker, config| {
+        move |_worker, config| {
             let (mut changer, changer_name) = required_media_changer(&config, &drive)?;
 
             let label_text_list = changer.online_media_label_texts()?;
             if label_text_list.is_empty() {
-                task_log!(worker, "changer device does not list any media labels");
+                info!("changer device does not list any media labels");
             }
 
             let mut inventory = Inventory::load(TAPE_STATUS_DIR)?;
@@ -924,7 +904,7 @@ pub fn update_inventory(
 
             for label_text in label_text_list.iter() {
                 if label_text.starts_with("CLN") {
-                    task_log!(worker, "skip cleaning unit '{}'", label_text);
+                    info!("skip cleaning unit '{}'", label_text);
                     continue;
                 }
 
@@ -936,12 +916,12 @@ pub fn update_inventory(
                             if !catalog
                                 || MediaCatalog::exists(TAPE_STATUS_DIR, &media_id.label.uuid)
                             {
-                                task_log!(worker, "media '{}' already inventoried", label_text);
+                                info!("media '{}' already inventoried", label_text);
                                 continue;
                             }
                         }
                         Err(err) => {
-                            task_warn!(worker, "error getting media by unique label: {err}");
+                            warn!("error getting media by unique label: {err}");
                             // we can't be sure which uuid it is
                             continue;
                         }
@@ -950,38 +930,29 @@ pub fn update_inventory(
                 }
 
                 if let Err(err) = changer.load_media(&label_text) {
-                    task_warn!(worker, "unable to load media '{}' - {}", label_text, err);
+                    warn!("unable to load media '{}' - {}", label_text, err);
                     continue;
                 }
 
                 let mut drive = open_drive(&config, &drive)?;
                 match drive.read_label() {
                     Err(err) => {
-                        task_warn!(
-                            worker,
-                            "unable to read label form media '{}' - {}",
-                            label_text,
-                            err
-                        );
+                        warn!("unable to read label from media '{}' - {}", label_text, err);
                     }
                     Ok((None, _)) => {
-                        task_log!(worker, "media '{}' is empty", label_text);
+                        info!("media '{}' is empty", label_text);
                     }
                     Ok((Some(media_id), _key_config)) => {
                         if label_text != media_id.label.label_text {
-                            task_warn!(
-                                worker,
+                            warn!(
                                 "label text mismatch ({} != {})",
-                                label_text,
-                                media_id.label.label_text
+                                label_text, media_id.label.label_text
                             );
                             continue;
                         }
-                        task_log!(
-                            worker,
+                        info!(
                             "inventorize media '{}' with uuid '{}'",
-                            label_text,
-                            media_id.label.uuid
+                            label_text, media_id.label.uuid
                         );
 
                         let _pool_lock = if let Some(pool) = media_id.pool() {
@@ -1002,15 +973,11 @@ pub fn update_inventory(
                             if catalog {
                                 let media_set = inventory.compute_media_set_members(&set.uuid)?;
                                 if let Err(err) = fast_catalog_restore(
-                                    &worker,
                                     &mut drive,
                                     &media_set,
                                     &media_id.label.uuid,
                                 ) {
-                                    task_warn!(
-                                        worker,
-                                        "could not restore catalog for {label_text}: {err}"
-                                    );
+                                    warn!("could not restore catalog for {label_text}: {err}");
                                 }
                             }
                         } else {
@@ -1066,14 +1033,13 @@ pub fn barcode_label_media(
         drive.clone(),
         "barcode-label-media",
         Some(drive.clone()),
-        move |worker, config| barcode_label_media_worker(worker, drive, &config, pool),
+        move |_worker, config| barcode_label_media_worker(drive, &config, pool),
     )?;
 
     Ok(upid_str.into())
 }
 
 fn barcode_label_media_worker(
-    worker: Arc<WorkerTask>,
     drive: String,
     drive_config: &SectionConfigData,
     pool: Option<String>,
@@ -1106,24 +1072,23 @@ fn barcode_label_media_worker(
         inventory.reload()?;
         match inventory.find_media_by_label_text(&label_text) {
             Ok(Some(_)) => {
-                task_log!(
-                    worker,
+                info!(
                     "media '{}' already inventoried (already labeled)",
                     label_text
                 );
                 continue;
             }
             Err(err) => {
-                task_warn!(worker, "error getting media by unique label: {err}",);
+                warn!("error getting media by unique label: {err}",);
                 continue;
             }
             Ok(None) => {} // ok to label
         }
 
-        task_log!(worker, "checking/loading media '{}'", label_text);
+        info!("checking/loading media '{}'", label_text);
 
         if let Err(err) = changer.load_media(&label_text) {
-            task_warn!(worker, "unable to load media '{}' - {}", label_text, err);
+            warn!("unable to load media '{}' - {}", label_text, err);
             continue;
         }
 
@@ -1132,18 +1097,13 @@ fn barcode_label_media_worker(
 
         match drive.read_next_file() {
             Ok(_reader) => {
-                task_log!(
-                    worker,
-                    "media '{}' is not empty (format it first)",
-                    label_text
-                );
+                info!("media '{}' is not empty (format it first)", label_text);
                 continue;
             }
             Err(BlockReadError::EndOfFile) => { /* EOF mark at BOT, assume tape is empty */ }
             Err(BlockReadError::EndOfStream) => { /* tape is empty */ }
             Err(_err) => {
-                task_warn!(
-                    worker,
+                warn!(
                     "media '{}' read error (maybe not empty - format it first)",
                     label_text
                 );
@@ -1159,7 +1119,7 @@ fn barcode_label_media_worker(
             pool: pool.clone(),
         };
 
-        write_media_label(worker.clone(), &mut drive, label, pool.clone())?
+        write_media_label(&mut drive, label, pool.clone())?
     }
 
     Ok(())
@@ -1318,14 +1278,12 @@ pub fn catalog_media(
 
             let media_id = match drive.read_label()? {
                 (Some(media_id), key_config) => {
-                    task_log!(
-                        worker,
+                    info!(
                         "found media label: {}",
                         serde_json::to_string_pretty(&serde_json::to_value(&media_id)?)?
                     );
                     if key_config.is_some() {
-                        task_log!(
-                            worker,
+                        info!(
                             "encryption key config: {}",
                             serde_json::to_string_pretty(&serde_json::to_value(&key_config)?)?
                         );
@@ -1339,7 +1297,7 @@ pub fn catalog_media(
 
             let (_media_set_lock, media_set_uuid) = match media_id.media_set_label {
                 None => {
-                    task_log!(worker, "media is empty");
+                    info!("media is empty");
                     let _pool_lock = if let Some(pool) = media_id.pool() {
                         lock_media_pool(TAPE_STATUS_DIR, &pool)?
                     } else {
@@ -1352,7 +1310,7 @@ pub fn catalog_media(
                 Some(ref set) => {
                     if set.unassigned() {
                         // media is empty
-                        task_log!(worker, "media is empty");
+                        info!("media is empty");
                         let _lock = lock_unassigned_media_pool(TAPE_STATUS_DIR)?;
                         MediaCatalog::destroy(TAPE_STATUS_DIR, &media_id.label.uuid)?;
                         inventory.store(media_id.clone(), false)?;
@@ -1377,14 +1335,14 @@ pub fn catalog_media(
             if !scan {
                 let media_set = inventory.compute_media_set_members(media_set_uuid)?;
 
-                if fast_catalog_restore(&worker, &mut drive, &media_set, &media_id.label.uuid)? {
+                if fast_catalog_restore(&mut drive, &media_set, &media_id.label.uuid)? {
                     return Ok(());
                 }
 
-                task_log!(worker, "no catalog found");
+                info!("no catalog found");
             }
 
-            task_log!(worker, "scanning entire media to reconstruct catalog");
+            info!("scanning entire media to reconstruct catalog");
 
             drive.rewind()?;
             drive.read_label()?; // skip over labels - we already read them above
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 38290964..52d76742 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -6,6 +6,7 @@ use std::sync::Arc;
 
 use anyhow::{bail, format_err, Error};
 use serde_json::Value;
+use tracing::{info, warn};
 
 use proxmox_human_byte::HumanByte;
 use proxmox_io::ReadExt;
@@ -13,7 +14,7 @@ use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::{api, ApiType};
 use proxmox_section_config::SectionConfigData;
 use proxmox_sys::fs::{replace_file, CreateOptions};
-use proxmox_sys::{task_log, task_warn, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 use proxmox_uuid::Uuid;
 
 use pbs_api_types::{
@@ -403,12 +404,12 @@ pub fn restore(
 
             let restore_owner = owner.as_ref().unwrap_or(&auth_id);
 
-            task_log!(worker, "Mediaset '{media_set}'");
-            task_log!(worker, "Pool: {pool}");
+            info!("Mediaset '{media_set}'");
+            info!("Pool: {pool}");
 
             let res = if snapshots.is_some() || namespaces {
                 restore_list_worker(
-                    worker.clone(),
+                    worker,
                     snapshots.unwrap_or_default(),
                     inventory,
                     media_set_uuid,
@@ -422,7 +423,7 @@ pub fn restore(
                 )
             } else {
                 restore_full_worker(
-                    worker.clone(),
+                    worker,
                     inventory,
                     media_set_uuid,
                     drive_config,
@@ -434,10 +435,10 @@ pub fn restore(
                 )
             };
             if res.is_ok() {
-                task_log!(worker, "Restore mediaset '{media_set}' done");
+                info!("Restore mediaset '{media_set}' done");
             }
             if let Err(err) = set_tape_device_state(&drive, "") {
-                task_log!(worker, "could not unset drive state for {drive}: {err}");
+                info!("could not unset drive state for {drive}: {err}");
             }
 
             res
@@ -488,7 +489,7 @@ fn restore_full_worker(
     }
 
     if let Some(fingerprint) = encryption_key_fingerprint {
-        task_log!(worker, "Encryption key fingerprint: {fingerprint}");
+        info!("Encryption key fingerprint: {fingerprint}");
     }
 
     let used_datastores = store_map.used_datastores();
@@ -497,13 +498,9 @@ fn restore_full_worker(
         .map(|(t, _)| String::from(t.name()))
         .collect::<Vec<String>>()
         .join(", ");
-    task_log!(worker, "Datastore(s): {datastore_list}",);
-    task_log!(worker, "Drive: {drive_name}");
-    log_required_tapes(
-        &worker,
-        &inventory,
-        media_id_list.iter().map(|id| &id.label.uuid),
-    );
+    info!("Datastore(s): {datastore_list}",);
+    info!("Drive: {drive_name}");
+    log_required_tapes(&inventory, media_id_list.iter().map(|id| &id.label.uuid));
 
     let mut datastore_locks = Vec::new();
     for (target, _) in used_datastores.values() {
@@ -533,7 +530,6 @@ fn restore_full_worker(
 
 #[allow(clippy::too_many_arguments)]
 fn check_snapshot_restorable(
-    worker: &WorkerTask,
     store_map: &DataStoreMap,
     store: &str,
     snapshot: &str,
@@ -574,7 +570,7 @@ fn check_snapshot_restorable(
             auth_id,
             Some(restore_owner),
         ) {
-            task_warn!(worker, "cannot restore {store}:{snapshot} to {ns}: '{err}'");
+            warn!("cannot restore {store}:{snapshot} to {ns}: '{err}'");
             continue;
         }
 
@@ -582,8 +578,7 @@ fn check_snapshot_restorable(
         if let Ok(owner) = datastore.get_owner(&ns, dir.as_ref()) {
             if restore_owner != &owner {
                 // only the owner is allowed to create additional snapshots
-                task_warn!(
-                    worker,
+                warn!(
                     "restore  of '{snapshot}' to {ns} failed, owner check failed ({restore_owner} \
                     != {owner})",
                 );
@@ -594,10 +589,7 @@ fn check_snapshot_restorable(
         have_some_permissions = true;
 
         if datastore.snapshot_path(&ns, dir).exists() {
-            task_warn!(
-                worker,
-                "found snapshot {snapshot} on target datastore/namespace, skipping...",
-            );
+            warn!("found snapshot {snapshot} on target datastore/namespace, skipping...",);
             continue;
         }
         can_restore_some = true;
@@ -610,11 +602,7 @@ fn check_snapshot_restorable(
     Ok(can_restore_some)
 }
 
-fn log_required_tapes<'a>(
-    worker: &WorkerTask,
-    inventory: &Inventory,
-    list: impl Iterator<Item = &'a Uuid>,
-) {
+fn log_required_tapes<'a>(inventory: &Inventory, list: impl Iterator<Item = &'a Uuid>) {
     let mut tape_list = list
         .map(|uuid| {
             inventory
@@ -626,7 +614,7 @@ fn log_required_tapes<'a>(
         })
         .collect::<Vec<&str>>();
     tape_list.sort_unstable();
-    task_log!(worker, "Required media list: {}", tape_list.join(";"));
+    info!("Required media list: {}", tape_list.join(";"));
 }
 
 #[allow(clippy::too_many_arguments)]
@@ -658,14 +646,13 @@ fn restore_list_worker(
                 let (ns, dir) = match parse_ns_and_snapshot(snapshot) {
                     Ok((ns, dir)) if store_map.has_full_mapping(store, &ns) => (ns, dir),
                     Err(err) => {
-                        task_warn!(worker, "couldn't parse snapshot {snapshot} - {err}");
+                        warn!("couldn't parse snapshot {snapshot} - {err}");
                         continue;
                     }
                     _ => continue,
                 };
                 let snapshot = print_ns_and_snapshot(&ns, &dir);
                 match check_snapshot_restorable(
-                    &worker,
                     &store_map,
                     store,
                     &snapshot,
@@ -679,7 +666,7 @@ fn restore_list_worker(
                     Ok(true) => restorable.push((store.to_string(), snapshot.to_string(), ns, dir)),
                     Ok(false) => {}
                     Err(err) => {
-                        task_warn!(worker, "{err}");
+                        warn!("{err}");
                         skipped.push(format!("{store}:{snapshot}"));
                     }
                 }
@@ -697,7 +684,6 @@ fn restore_list_worker(
                     match parse_ns_and_snapshot(snapshot) {
                         Ok((ns, dir)) => {
                             match check_snapshot_restorable(
-                                &worker,
                                 &store_map,
                                 store,
                                 snapshot,
@@ -713,14 +699,14 @@ fn restore_list_worker(
                                 }
                                 Ok(false) => None,
                                 Err(err) => {
-                                    task_warn!(worker, "{err}");
+                                    warn!("{err}");
                                     skipped.push(format!("{store}:{snapshot}"));
                                     None
                                 }
                             }
                         }
                         Err(err) => {
-                            task_warn!(worker, "could not restore {store_snapshot}: {err}");
+                            warn!("could not restore {store_snapshot}: {err}");
                             skipped.push(store_snapshot);
                             None
                         }
@@ -738,10 +724,7 @@ fn restore_list_worker(
                     let media_id = inventory.lookup_media(media_uuid).unwrap();
                     (media_id, file_num)
                 } else {
-                    task_warn!(
-                        worker,
-                        "did not find snapshot '{store}:{snapshot}' in media set",
-                    );
+                    warn!("did not find snapshot '{store}:{snapshot}' in media set",);
                     skipped.push(format!("{store}:{snapshot}"));
                     continue;
                 };
@@ -754,26 +737,25 @@ fn restore_list_worker(
                 .or_default();
             file_list.push(file_num);
 
-            task_log!(
-                worker,
+            info!(
                 "found snapshot {snapshot} on {}: file {file_num}",
                 media_id.label.label_text,
             );
         }
 
         if snapshot_file_hash.is_empty() {
-            task_log!(worker, "nothing to restore, skipping remaining phases...");
+            info!("nothing to restore, skipping remaining phases...");
             if !skipped.is_empty() {
-                task_log!(worker, "skipped the following snapshots:");
+                info!("skipped the following snapshots:");
                 for snap in skipped {
-                    task_log!(worker, "  {snap}");
+                    info!("  {snap}");
                 }
             }
             return Ok(());
         }
 
-        task_log!(worker, "Phase 1: temporarily restore snapshots to temp dir");
-        log_required_tapes(&worker, &inventory, snapshot_file_hash.keys());
+        info!("Phase 1: temporarily restore snapshots to temp dir");
+        log_required_tapes(&inventory, snapshot_file_hash.keys());
         let mut datastore_chunk_map: HashMap<String, HashSet<[u8; 32]>> = HashMap::new();
         let mut tmp_paths = Vec::new();
         for (media_uuid, file_list) in snapshot_file_hash.iter_mut() {
@@ -824,10 +806,10 @@ fn restore_list_worker(
         drop(catalog);
 
         if !media_file_chunk_map.is_empty() {
-            task_log!(worker, "Phase 2: restore chunks to datastores");
-            log_required_tapes(&worker, &inventory, media_file_chunk_map.keys());
+            info!("Phase 2: restore chunks to datastores");
+            log_required_tapes(&inventory, media_file_chunk_map.keys());
         } else {
-            task_log!(worker, "All chunks are already present, skip phase 2...");
+            info!("All chunks are already present, skip phase 2...");
         }
 
         for (media_uuid, file_chunk_map) in media_file_chunk_map.iter_mut() {
@@ -842,10 +824,7 @@ fn restore_list_worker(
             restore_file_chunk_map(worker.clone(), &mut drive, &store_map, file_chunk_map)?;
         }
 
-        task_log!(
-            worker,
-            "Phase 3: copy snapshots from temp dir to datastores"
-        );
+        info!("Phase 3: copy snapshots from temp dir to datastores");
         let mut errors = false;
         for (source_datastore, snapshot, source_ns, backup_dir) in snapshots.into_iter() {
             if let Err(err) = proxmox_lang::try_block!({
@@ -902,20 +881,14 @@ fn restore_list_worker(
 
                         Ok(())
                     }) {
-                        task_warn!(
-                            worker,
-                            "could not restore {source_datastore}:{snapshot}: '{err}'"
-                        );
+                        warn!("could not restore {source_datastore}:{snapshot}: '{err}'");
                         skipped.push(format!("{source_datastore}:{snapshot}"));
                     }
                 }
-                task_log!(worker, "Restore snapshot '{}' done", snapshot);
+                info!("Restore snapshot '{}' done", snapshot);
                 Ok::<_, Error>(())
             }) {
-                task_warn!(
-                    worker,
-                    "could not copy {source_datastore}:{snapshot}: {err}"
-                );
+                warn!("could not copy {source_datastore}:{snapshot}: {err}");
                 errors = true;
             }
         }
@@ -925,7 +898,7 @@ fn restore_list_worker(
                 std::fs::remove_dir_all(&tmp_path)
                     .map_err(|err| format_err!("remove_dir_all failed - {err}"))
             }) {
-                task_warn!(worker, "could not clean up temp dir {tmp_path:?}: {err}");
+                warn!("could not clean up temp dir {tmp_path:?}: {err}");
                 errors = true;
             };
         }
@@ -934,19 +907,16 @@ fn restore_list_worker(
             bail!("errors during copy occurred");
         }
         if !skipped.is_empty() {
-            task_log!(worker, "(partially) skipped the following snapshots:");
+            info!("(partially) skipped the following snapshots:");
             for snap in skipped {
-                task_log!(worker, "  {snap}");
+                info!("  {snap}");
             }
         }
         Ok(())
     });
 
     if res.is_err() {
-        task_warn!(
-            worker,
-            "Error during restore, partially restored snapshots will NOT be cleaned up"
-        );
+        warn!("Error during restore, partially restored snapshots will NOT be cleaned up");
     }
 
     for (datastore, _) in store_map.used_datastores().values() {
@@ -954,7 +924,7 @@ fn restore_list_worker(
         match std::fs::remove_dir_all(tmp_path) {
             Ok(()) => {}
             Err(err) if err.kind() == std::io::ErrorKind::NotFound => {}
-            Err(err) => task_warn!(worker, "error cleaning up: {}", err),
+            Err(err) => warn!("error cleaning up: {}", err),
         }
     }
 
@@ -1037,13 +1007,10 @@ fn restore_snapshots_to_tmpdir(
     for file_num in file_list {
         let current_file_number = drive.current_file_number()?;
         if current_file_number != *file_num {
-            task_log!(
-                worker,
-                "was at file {current_file_number}, moving to {file_num}"
-            );
+            info!("was at file {current_file_number}, moving to {file_num}");
             drive.move_to_file(*file_num)?;
             let current_file_number = drive.current_file_number()?;
-            task_log!(worker, "now at file {}", current_file_number);
+            info!("now at file {}", current_file_number);
         }
         let mut reader = drive.read_next_file()?;
 
@@ -1065,10 +1032,7 @@ fn restore_snapshots_to_tmpdir(
                 let source_datastore = archive_header.store;
                 let snapshot = archive_header.snapshot;
 
-                task_log!(
-                    worker,
-                    "File {file_num}: snapshot archive {source_datastore}:{snapshot}",
-                );
+                info!("File {file_num}: snapshot archive {source_datastore}:{snapshot}",);
 
                 let mut decoder =
                     pxar::decoder::sync::Decoder::from_std(pxar::PxarVariant::Unified(reader))?;
@@ -1076,10 +1040,7 @@ fn restore_snapshots_to_tmpdir(
                 let target_datastore = match store_map.target_store(&source_datastore) {
                     Some(datastore) => datastore,
                     None => {
-                        task_warn!(
-                            worker,
-                            "could not find target datastore for {source_datastore}:{snapshot}",
-                        );
+                        warn!("could not find target datastore for {source_datastore}:{snapshot}",);
                         continue;
                     }
                 };
@@ -1131,10 +1092,10 @@ fn restore_file_chunk_map(
     for (nr, chunk_map) in file_chunk_map.iter_mut() {
         let current_file_number = drive.current_file_number()?;
         if current_file_number != *nr {
-            task_log!(worker, "was at file {current_file_number}, moving to {nr}");
+            info!("was at file {current_file_number}, moving to {nr}");
             drive.move_to_file(*nr)?;
             let current_file_number = drive.current_file_number()?;
-            task_log!(worker, "now at file {}", current_file_number);
+            info!("now at file {}", current_file_number);
         }
         let mut reader = drive.read_next_file()?;
         let header: MediaContentHeader = unsafe { reader.read_le_value()? };
@@ -1151,10 +1112,7 @@ fn restore_file_chunk_map(
 
                 let source_datastore = archive_header.store;
 
-                task_log!(
-                    worker,
-                    "File {nr}: chunk archive for datastore '{source_datastore}'",
-                );
+                info!("File {nr}: chunk archive for datastore '{source_datastore}'",);
 
                 let datastore = store_map.target_store(&source_datastore).ok_or_else(|| {
                     format_err!("unexpected chunk archive for store: {source_datastore}")
@@ -1166,7 +1124,7 @@ fn restore_file_chunk_map(
                     datastore.clone(),
                     chunk_map,
                 )?;
-                task_log!(worker, "restored {count} chunks");
+                info!("restored {count} chunks");
             }
             _ => bail!("unexpected content magic {:?}", header.content_magic),
         }
@@ -1226,8 +1184,7 @@ fn restore_partial_chunk_archive<'a>(
 
     let elapsed = start_time.elapsed()?.as_secs_f64();
     let bytes = bytes.load(std::sync::atomic::Ordering::SeqCst) as f64;
-    task_log!(
-        worker,
+    info!(
         "restored {} ({:.2}/s)",
         HumanByte::new_decimal(bytes),
         HumanByte::new_decimal(bytes / elapsed),
@@ -1311,15 +1268,11 @@ pub fn restore_media(
         let current_file_number = drive.current_file_number()?;
         let reader = match drive.read_next_file() {
             Err(BlockReadError::EndOfFile) => {
-                task_log!(
-                    worker,
-                    "skip unexpected filemark at pos {}",
-                    current_file_number
-                );
+                info!("skip unexpected filemark at pos {}", current_file_number);
                 continue;
             }
             Err(BlockReadError::EndOfStream) => {
-                task_log!(worker, "detected EOT after {} files", current_file_number);
+                info!("detected EOT after {} files", current_file_number);
                 break;
             }
             Err(BlockReadError::Error(err)) => {
@@ -1383,12 +1336,9 @@ fn restore_archive<'a>(
             let datastore_name = archive_header.store;
             let snapshot = archive_header.snapshot;
 
-            task_log!(
-                worker,
+            info!(
                 "File {}: snapshot archive {}:{}",
-                current_file_number,
-                datastore_name,
-                snapshot
+                current_file_number, datastore_name, snapshot
             );
 
             let (backup_ns, backup_dir) = parse_ns_and_snapshot(&snapshot)?;
@@ -1423,16 +1373,16 @@ fn restore_archive<'a>(
                     path.push(rel_path);
 
                     if is_new {
-                        task_log!(worker, "restore snapshot {}", backup_dir);
+                        info!("restore snapshot {}", backup_dir);
 
-                        match restore_snapshot_archive(worker.clone(), reader, &path) {
+                        match restore_snapshot_archive(worker, reader, &path) {
                             Err(err) => {
                                 std::fs::remove_dir_all(&path)?;
                                 bail!("restore snapshot {} failed - {}", backup_dir, err);
                             }
                             Ok(false) => {
                                 std::fs::remove_dir_all(&path)?;
-                                task_log!(worker, "skip incomplete snapshot {}", backup_dir);
+                                info!("skip incomplete snapshot {}", backup_dir);
                             }
                             Ok(true) => {
                                 catalog.register_snapshot(
@@ -1448,7 +1398,7 @@ fn restore_archive<'a>(
                         return Ok(());
                     }
                 } else {
-                    task_log!(worker, "skipping...");
+                    info!("skipping...");
                 }
             }
 
@@ -1475,11 +1425,9 @@ fn restore_archive<'a>(
 
             let source_datastore = archive_header.store;
 
-            task_log!(
-                worker,
+            info!(
                 "File {}: chunk archive for datastore '{}'",
-                current_file_number,
-                source_datastore
+                current_file_number, source_datastore
             );
             let datastore = target
                 .as_ref()
@@ -1497,15 +1445,9 @@ fn restore_archive<'a>(
                     .or_default();
 
                 let chunks = if let Some(datastore) = datastore {
-                    restore_chunk_archive(
-                        worker.clone(),
-                        reader,
-                        datastore,
-                        checked_chunks,
-                        verbose,
-                    )?
+                    restore_chunk_archive(worker, reader, datastore, checked_chunks, verbose)?
                 } else {
-                    scan_chunk_archive(worker.clone(), reader, verbose)?
+                    scan_chunk_archive(worker, reader, verbose)?
                 };
 
                 if let Some(chunks) = chunks {
@@ -1515,12 +1457,12 @@ fn restore_archive<'a>(
                         &source_datastore,
                         &chunks[..],
                     )?;
-                    task_log!(worker, "register {} chunks", chunks.len());
+                    info!("register {} chunks", chunks.len());
                     catalog.commit_if_large()?;
                 }
                 return Ok(());
             } else if target.is_some() {
-                task_log!(worker, "skipping...");
+                info!("skipping...");
             }
 
             reader.skip_data()?; // read all data
@@ -1531,11 +1473,9 @@ fn restore_archive<'a>(
             let archive_header: CatalogArchiveHeader = serde_json::from_slice(&header_data)
                 .map_err(|err| format_err!("unable to parse catalog archive header - {}", err))?;
 
-            task_log!(
-                worker,
+            info!(
                 "File {}: skip catalog '{}'",
-                current_file_number,
-                archive_header.uuid
+                current_file_number, archive_header.uuid
             );
 
             reader.skip_data()?; // read all data
@@ -1570,7 +1510,7 @@ fn scan_chunk_archive<'a>(
 
                 // check if this is an aborted stream without end marker
                 if let Ok(false) = reader.has_end_marker() {
-                    task_log!(worker, "missing stream end marker");
+                    info!("missing stream end marker");
                     return Ok(None);
                 }
 
@@ -1582,7 +1522,7 @@ fn scan_chunk_archive<'a>(
         worker.check_abort()?;
 
         if verbose {
-            task_log!(worker, "Found chunk: {}", hex::encode(digest));
+            info!("Found chunk: {}", hex::encode(digest));
         }
 
         chunks.push(digest);
@@ -1606,8 +1546,6 @@ fn restore_chunk_archive<'a>(
     let bytes = Arc::new(std::sync::atomic::AtomicU64::new(0));
     let bytes2 = bytes.clone();
 
-    let worker2 = worker.clone();
-
     let writer_pool = ParallelHandler::new(
         "tape restore chunk writer",
         4,
@@ -1615,7 +1553,7 @@ fn restore_chunk_archive<'a>(
             let chunk_exists = datastore.cond_touch_chunk(&digest, false)?;
             if !chunk_exists {
                 if verbose {
-                    task_log!(worker2, "Insert chunk: {}", hex::encode(digest));
+                    info!("Insert chunk: {}", hex::encode(digest));
                 }
                 bytes2.fetch_add(chunk.raw_size(), std::sync::atomic::Ordering::SeqCst);
                 // println!("verify and write {}", hex::encode(&digest));
@@ -1626,7 +1564,7 @@ fn restore_chunk_archive<'a>(
 
                 datastore.insert_chunk(&chunk, &digest)?;
             } else if verbose {
-                task_log!(worker2, "Found existing chunk: {}", hex::encode(digest));
+                info!("Found existing chunk: {}", hex::encode(digest));
             }
             Ok(())
         },
@@ -1648,7 +1586,7 @@ fn restore_chunk_archive<'a>(
 
                 // check if this is an aborted stream without end marker
                 if let Ok(false) = reader.has_end_marker() {
-                    task_log!(worker, "missing stream end marker");
+                    info!("missing stream end marker");
                     return Ok(None);
                 }
 
@@ -1672,8 +1610,7 @@ fn restore_chunk_archive<'a>(
 
     let elapsed = start_time.elapsed()?.as_secs_f64();
     let bytes = bytes.load(std::sync::atomic::Ordering::SeqCst) as f64;
-    task_log!(
-        worker,
+    info!(
         "restored {} ({:.2}/s)",
         HumanByte::new_decimal(bytes),
         HumanByte::new_decimal(bytes / elapsed),
@@ -1818,7 +1755,6 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
 
 /// Try to restore media catalogs (form catalog_archives)
 pub fn fast_catalog_restore(
-    worker: &WorkerTask,
     drive: &mut Box<dyn TapeDriver>,
     media_set: &MediaSet,
     uuid: &Uuid, // current media Uuid
@@ -1839,14 +1775,11 @@ pub fn fast_catalog_restore(
             // limit reader scope
             let mut reader = match drive.read_next_file() {
                 Err(BlockReadError::EndOfFile) => {
-                    task_log!(
-                        worker,
-                        "skip unexpected filemark at pos {current_file_number}"
-                    );
+                    info!("skip unexpected filemark at pos {current_file_number}");
                     continue;
                 }
                 Err(BlockReadError::EndOfStream) => {
-                    task_log!(worker, "detected EOT after {current_file_number} files");
+                    info!("detected EOT after {current_file_number} files");
                     break;
                 }
                 Err(BlockReadError::Error(err)) => {
@@ -1863,7 +1796,7 @@ pub fn fast_catalog_restore(
             if header.content_magic == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0
                 || header.content_magic == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_1
             {
-                task_log!(worker, "found catalog at pos {}", current_file_number);
+                info!("found catalog at pos {}", current_file_number);
 
                 let header_data = reader.read_exact_allocated(header.size as usize)?;
 
@@ -1873,11 +1806,7 @@ pub fn fast_catalog_restore(
                     })?;
 
                 if &archive_header.media_set_uuid != media_set.uuid() {
-                    task_log!(
-                        worker,
-                        "skipping unrelated catalog at pos {}",
-                        current_file_number
-                    );
+                    info!("skipping unrelated catalog at pos {}", current_file_number);
                     reader.skip_data()?; // read all data
                     continue;
                 }
@@ -1890,8 +1819,7 @@ pub fn fast_catalog_restore(
                 });
 
                 if !wanted {
-                    task_log!(
-                        worker,
+                    info!(
                         "skip catalog because media '{}' not inventarized",
                         catalog_uuid
                     );
@@ -1904,11 +1832,7 @@ pub fn fast_catalog_restore(
                 } else {
                     // only restore if catalog does not exist
                     if MediaCatalog::exists(TAPE_STATUS_DIR, catalog_uuid) {
-                        task_log!(
-                            worker,
-                            "catalog for media '{}' already exists",
-                            catalog_uuid
-                        );
+                        info!("catalog for media '{}' already exists", catalog_uuid);
                         reader.skip_data()?; // read all data
                         continue;
                     }
@@ -1924,19 +1848,11 @@ pub fn fast_catalog_restore(
                 match MediaCatalog::parse_catalog_header(&mut file)? {
                     (true, Some(media_uuid), Some(media_set_uuid)) => {
                         if &media_uuid != catalog_uuid {
-                            task_log!(
-                                worker,
-                                "catalog uuid mismatch at pos {}",
-                                current_file_number
-                            );
+                            info!("catalog uuid mismatch at pos {}", current_file_number);
                             continue;
                         }
                         if media_set_uuid != archive_header.media_set_uuid {
-                            task_log!(
-                                worker,
-                                "catalog media_set mismatch at pos {}",
-                                current_file_number
-                            );
+                            info!("catalog media_set mismatch at pos {}", current_file_number);
                             continue;
                         }
 
@@ -1947,18 +1863,14 @@ pub fn fast_catalog_restore(
                         )?;
 
                         if catalog_uuid == uuid {
-                            task_log!(worker, "successfully restored catalog");
+                            info!("successfully restored catalog");
                             found_catalog = true
                         } else {
-                            task_log!(
-                                worker,
-                                "successfully restored related catalog {}",
-                                media_uuid
-                            );
+                            info!("successfully restored related catalog {}", media_uuid);
                         }
                     }
                     _ => {
-                        task_warn!(worker, "got incomplete catalog header - skip file");
+                        warn!("got incomplete catalog header - skip file");
                         continue;
                     }
                 }
@@ -1972,7 +1884,7 @@ pub fn fast_catalog_restore(
         }
         moved_to_eom = true;
 
-        task_log!(worker, "searching for catalog at EOT (moving to EOT)");
+        info!("searching for catalog at EOT (moving to EOT)");
         drive.move_to_last_file()?;
 
         let new_file_number = drive.current_file_number()?;
-- 
2.43.0



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [PATCH proxmox v5 2/4] enable tracing logger, remove task_log macros
  2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox v5 2/4] enable tracing logger, remove task_log macros Gabriel Goller
@ 2024-06-24 11:09   ` Lukas Wagner
  2024-06-25  8:49     ` Gabriel Goller
  0 siblings, 1 reply; 14+ messages in thread
From: Lukas Wagner @ 2024-06-24 11:09 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Gabriel Goller

2 nits inline

On  2024-06-13 15:56, Gabriel Goller wrote:
> Enable the tracing-system by setting the LOGGER task local variable
> to an instance of a FileLogger and initializing the WARN_COUNTER.
> Removed the task_log! macros and some occurrences.
> 
> Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
> ---
>  proxmox-rest-server/src/api_config.rs  |   3 +-
>  proxmox-rest-server/src/file_logger.rs | 147 -------------------------
>  proxmox-rest-server/src/lib.rs         |   3 -
>  proxmox-rest-server/src/rest.rs        |   4 +-
>  proxmox-rest-server/src/worker_task.rs | 104 ++++++++---------
>  proxmox-sys/src/worker_task_context.rs |  47 --------
>  6 files changed, 56 insertions(+), 252 deletions(-)
>  delete mode 100644 proxmox-rest-server/src/file_logger.rs
> 
> diff --git a/proxmox-rest-server/src/api_config.rs b/proxmox-rest-server/src/api_config.rs
> index eaece05a..5b7a2a4f 100644
> --- a/proxmox-rest-server/src/api_config.rs
> +++ b/proxmox-rest-server/src/api_config.rs
> @@ -12,11 +12,12 @@ use hyper::http::request::Parts;
>  use hyper::{Body, Response};
>  use tower_service::Service;
>  
> +use proxmox_log::{FileLogOptions, FileLogger};
>  use proxmox_router::{Router, RpcEnvironmentType, UserInformation};
>  use proxmox_sys::fs::{create_path, CreateOptions};
>  
>  use crate::rest::Handler;
> -use crate::{CommandSocket, FileLogOptions, FileLogger, RestEnvironment};
> +use crate::{CommandSocket, RestEnvironment};
>  
>  /// REST server configuration
>  pub struct ApiConfig {
> diff --git a/proxmox-rest-server/src/file_logger.rs b/proxmox-rest-server/src/file_logger.rs
> deleted file mode 100644
> index 2bb1fac6..00000000
> --- a/proxmox-rest-server/src/file_logger.rs
> +++ /dev/null
> @@ -1,147 +0,0 @@
> -use std::io::Write;
> -
> -use anyhow::Error;
> -use nix::fcntl::OFlag;
> -
> -use proxmox_sys::fs::{atomic_open_or_create_file, CreateOptions};
> -
> -/// Options to control the behavior of a [FileLogger] instance
> -#[derive(Default)]
> -pub struct FileLogOptions {
> -    /// Open underlying log file in append mode, useful when multiple concurrent processes
> -    /// want to log to the same file (e.g., HTTP access log). Note that it is only atomic
> -    /// for writes smaller than the PIPE_BUF (4k on Linux).
> -    /// Inside the same process you may need to still use an mutex, for shared access.
> -    pub append: bool,
> -    /// Open underlying log file as readable
> -    pub read: bool,
> -    /// If set, ensure that the file is newly created or error out if already existing.
> -    pub exclusive: bool,
> -    /// Duplicate logged messages to STDOUT, like tee
> -    pub to_stdout: bool,
> -    /// Prefix messages logged to the file with the current local time as RFC 3339
> -    pub prefix_time: bool,
> -    /// File owner/group and mode
> -    pub file_opts: CreateOptions,
> -}
> -
> -/// Log messages with optional automatically added timestamps into files
> -///
> -/// #### Example:
> -/// ```
> -/// # use anyhow::{bail, format_err, Error};
> -/// use proxmox_rest_server::{flog, FileLogger, FileLogOptions};
> -///
> -/// # std::fs::remove_file("test.log");
> -/// let options = FileLogOptions {
> -///     to_stdout: true,
> -///     exclusive: true,
> -///     ..Default::default()
> -/// };
> -/// let mut log = FileLogger::new("test.log", options).unwrap();
> -/// flog!(log, "A simple log: {}", "Hello!");
> -/// # std::fs::remove_file("test.log");
> -/// ```
> -pub struct FileLogger {
> -    file: std::fs::File,
> -    file_name: std::path::PathBuf,
> -    options: FileLogOptions,
> -}
> -
> -/// Log messages to [FileLogger] - ``println`` like macro
> -#[macro_export]
> -macro_rules! flog {
> -    ($log:expr, $($arg:tt)*) => ({
> -        $log.log(format!($($arg)*));
> -    })
> -}
> -
> -impl FileLogger {
> -    pub fn new<P: AsRef<std::path::Path>>(
> -        file_name: P,
> -        options: FileLogOptions,
> -    ) -> Result<Self, Error> {
> -        let file = Self::open(&file_name, &options)?;
> -
> -        let file_name: std::path::PathBuf = file_name.as_ref().to_path_buf();
> -
> -        Ok(Self {
> -            file,
> -            file_name,
> -            options,
> -        })
> -    }
> -
> -    pub fn reopen(&mut self) -> Result<&Self, Error> {
> -        let file = Self::open(&self.file_name, &self.options)?;
> -        self.file = file;
> -        Ok(self)
> -    }
> -
> -    fn open<P: AsRef<std::path::Path>>(
> -        file_name: P,
> -        options: &FileLogOptions,
> -    ) -> Result<std::fs::File, Error> {
> -        let mut flags = OFlag::O_CLOEXEC;
> -
> -        if options.read {
> -            flags |= OFlag::O_RDWR;
> -        } else {
> -            flags |= OFlag::O_WRONLY;
> -        }
> -
> -        if options.append {
> -            flags |= OFlag::O_APPEND;
> -        }
> -        if options.exclusive {
> -            flags |= OFlag::O_EXCL;
> -        }
> -
> -        let file =
> -            atomic_open_or_create_file(&file_name, flags, &[], options.file_opts.clone(), false)?;
> -
> -        Ok(file)
> -    }
> -
> -    pub fn log<S: AsRef<str>>(&mut self, msg: S) {
> -        let msg = msg.as_ref();
> -
> -        if self.options.to_stdout {
> -            let mut stdout = std::io::stdout();
> -            stdout.write_all(msg.as_bytes()).unwrap();
> -            stdout.write_all(b"\n").unwrap();
> -        }
> -
> -        let line = if self.options.prefix_time {
> -            let now = proxmox_time::epoch_i64();
> -            let rfc3339 = match proxmox_time::epoch_to_rfc3339(now) {
> -                Ok(rfc3339) => rfc3339,
> -                Err(_) => "1970-01-01T00:00:00Z".into(), // for safety, should really not happen!
> -            };
> -            format!("{}: {}\n", rfc3339, msg)
> -        } else {
> -            format!("{}\n", msg)
> -        };
> -        if let Err(err) = self.file.write_all(line.as_bytes()) {
> -            // avoid panicking, log methods should not do that
> -            // FIXME: or, return result???
> -            log::error!("error writing to log file - {}", err);
> -        }
> -    }
> -}
> -
> -impl std::io::Write for FileLogger {
> -    fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
> -        if self.options.to_stdout {
> -            let _ = std::io::stdout().write(buf);
> -        }
> -        self.file.write(buf)
> -    }
> -
> -    fn flush(&mut self) -> Result<(), std::io::Error> {
> -        if self.options.to_stdout {
> -            let _ = std::io::stdout().flush();
> -        }
> -        self.file.flush()
> -    }
> -}
> diff --git a/proxmox-rest-server/src/lib.rs b/proxmox-rest-server/src/lib.rs
> index ce9e4f15..e8866366 100644
> --- a/proxmox-rest-server/src/lib.rs
> +++ b/proxmox-rest-server/src/lib.rs
> @@ -41,9 +41,6 @@ pub use state::*;
>  mod command_socket;
>  pub use command_socket::*;
>  
> -mod file_logger;
> -pub use file_logger::{FileLogOptions, FileLogger};
> -
>  mod api_config;
>  pub use api_config::{ApiConfig, AuthError, AuthHandler, IndexHandler, UnixAcceptor};
>  
> diff --git a/proxmox-rest-server/src/rest.rs b/proxmox-rest-server/src/rest.rs
> index 39ae9a18..3a3c287c 100644
> --- a/proxmox-rest-server/src/rest.rs
> +++ b/proxmox-rest-server/src/rest.rs
> @@ -31,10 +31,10 @@ use proxmox_schema::{ObjectSchemaType, ParameterSchema};
>  
>  use proxmox_async::stream::AsyncReaderStream;
>  use proxmox_compression::{DeflateEncoder, Level};
> +use proxmox_log::FileLogger;
>  
>  use crate::{
> -    formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, FileLogger,
> -    RestEnvironment,
> +    formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, RestEnvironment,
>  };
>  
>  extern "C" {
> diff --git a/proxmox-rest-server/src/worker_task.rs b/proxmox-rest-server/src/worker_task.rs
> index 5bab4cdc..58537f54 100644
> --- a/proxmox-rest-server/src/worker_task.rs
> +++ b/proxmox-rest-server/src/worker_task.rs
> @@ -1,3 +1,4 @@
> +use std::cell::{Cell, RefCell};
>  use std::collections::{HashMap, VecDeque};
>  use std::fs::File;
>  use std::io::{BufRead, BufReader, Read, Write};
> @@ -12,21 +13,23 @@ use futures::*;
>  use lazy_static::lazy_static;
>  use nix::fcntl::OFlag;
>  use once_cell::sync::OnceCell;
> +use proxmox_log::{LOGGER, WARN_COUNTER};

Nit: import order, let's move this import down to the other
proxmox_* imports below

>  use serde::{Deserialize, Serialize};
>  use serde_json::{json, Value};
>  use tokio::signal::unix::SignalKind;
>  use tokio::sync::oneshot;
> +use tracing::{info, warn};
>  
>  use proxmox_lang::try_block;
> +use proxmox_log::{FileLogOptions, FileLogger};
>  use proxmox_schema::upid::UPID;
>  use proxmox_sys::fs::{atomic_open_or_create_file, create_path, replace_file, CreateOptions};
>  use proxmox_sys::linux::procfs;
> -use proxmox_sys::task_warn;
>  
>  use proxmox_sys::logrotate::{LogRotate, LogRotateFiles};
>  use proxmox_sys::WorkerTaskContext;
>  
> -use crate::{CommandSocket, FileLogOptions, FileLogger};
> +use crate::CommandSocket;
>  
>  struct TaskListLockGuard(File);
>  
> @@ -294,7 +297,7 @@ pub fn rotate_task_log_archive(
>  
>  /// removes all task logs that are older than the oldest task entry in the
>  /// task archive
> -pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Result<(), Error> {
> +pub fn cleanup_old_tasks(compressed: bool) -> Result<(), Error> {
>      let setup = worker_task_setup()?;
>  
>      let _lock = setup.lock_task_list_files(true)?;
> @@ -332,7 +335,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
>                  Ok(files) => files,
>                  Err(err) if err.kind() == std::io::ErrorKind::NotFound => continue,
>                  Err(err) => {
> -                    task_warn!(worker, "could not check task logs in '{:02X}': {}", i, err);
> +                    warn!(
> +                        tasklog = true,
> +                        "could not check task logs in '{:02X}': {}", i, err
> +                    );
>                      continue;
>                  }
>              };
> @@ -340,11 +346,9 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
>                  let file = match file {
>                      Ok(file) => file,
>                      Err(err) => {
> -                        task_warn!(
> -                            worker,
> -                            "could not check some task log in '{:02X}': {}",
> -                            i,
> -                            err
> +                        warn!(
> +                            tasklog = true,
> +                            "could not check some task log in '{:02X}': {}", i, err

Nit: You could inline these vars if you edit these lines anyway.
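
For illustration, the inlined form could look like this (a sketch with hypothetical values for `i` and `err`; inline captured identifiers in format strings are available since Rust 1.58):

```rust
// Sketch of the suggested inlining: the identifiers are captured
// directly inside the braces, so the trailing positional arguments
// ("…", i, err) go away.
fn main() {
    let i: u8 = 0x0A;
    let err = "permission denied";
    let msg = format!("could not check some task log in '{i:02X}': {err}");
    println!("{msg}");
}
```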

>                          );
>                          continue;
>                      }
> @@ -354,7 +358,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
>                  let modified = match get_modified(file) {
>                      Ok(modified) => modified,
>                      Err(err) => {
> -                        task_warn!(worker, "error getting mtime for '{:?}': {}", path, err);
> +                        warn!(
> +                            tasklog = true,
> +                            "error getting mtime for '{:?}': {}", path, err
> +                        );
>                          continue;
>                      }
>                  };
> @@ -364,7 +371,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
>                          Ok(()) => {}
>                          Err(err) if err.kind() == std::io::ErrorKind::NotFound => {}
>                          Err(err) => {
> -                            task_warn!(worker, "could not remove file '{:?}': {}", path, err)
> +                            warn!(
> +                                tasklog = true,
> +                                "could not remove file '{:?}': {}", path, err
> +                            )
>                          }
>                      }
>                  }
> @@ -822,9 +832,7 @@ impl std::fmt::Display for WorkerTask {
>  }
>  
>  struct WorkerTaskData {
> -    logger: FileLogger,
>      progress: f64, // 0..1
> -    warn_count: u64,
>      pub abort_listeners: Vec<oneshot::Sender<()>>,
>  }
>  
> @@ -834,7 +842,7 @@ impl WorkerTask {
>          worker_id: Option<String>,
>          auth_id: String,
>          to_stdout: bool,
> -    ) -> Result<Arc<Self>, Error> {
> +    ) -> Result<(Arc<Self>, FileLogger), Error> {
>          let setup = worker_task_setup()?;
>  
>          let upid = UPID::new(worker_type, worker_id, auth_id)?;
> @@ -857,9 +865,7 @@ impl WorkerTask {
>              upid: upid.clone(),
>              abort_requested: AtomicBool::new(false),
>              data: Mutex::new(WorkerTaskData {
> -                logger,
>                  progress: 0.0,
> -                warn_count: 0,
>                  abort_listeners: vec![],
>              }),
>          });
> @@ -873,7 +879,7 @@ impl WorkerTask {
>  
>          setup.update_active_workers(Some(&upid))?;
>  
> -        Ok(worker)
> +        Ok((worker, logger))
>      }
>  
>      /// Spawn a new tokio task/future.
> @@ -888,13 +894,20 @@ impl WorkerTask {
>          F: Send + 'static + FnOnce(Arc<WorkerTask>) -> T,
>          T: Send + 'static + Future<Output = Result<(), Error>>,
>      {
> -        let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
> +        let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
>          let upid_str = worker.upid.to_string();
>          let f = f(worker.clone());
> -        tokio::spawn(async move {
> -            let result = f.await;
> -            worker.log_result(&result);
> -        });
> +
> +        let logger = RefCell::new(logger);
> +        let counter = Cell::new(0);
> +        tokio::spawn(LOGGER.scope(logger, async move {
> +            WARN_COUNTER
> +                .scope(counter, async move {
> +                    let result = f.await;
> +                    worker.log_result(&result);
> +                })
> +                .await;
> +        }));
>  
>          Ok(upid_str)
>      }
> @@ -910,22 +923,27 @@ impl WorkerTask {
>      where
>          F: Send + UnwindSafe + 'static + FnOnce(Arc<WorkerTask>) -> Result<(), Error>,
>      {
> -        let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
> +        let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
>          let upid_str = worker.upid.to_string();
>  
>          let _child = std::thread::Builder::new()
>              .name(upid_str.clone())
>              .spawn(move || {
> -                let worker1 = worker.clone();
> -                let result = match std::panic::catch_unwind(move || f(worker1)) {
> -                    Ok(r) => r,
> -                    Err(panic) => match panic.downcast::<&str>() {
> -                        Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
> -                        Err(_) => Err(format_err!("worker panicked: unknown type.")),
> -                    },
> -                };
> +                LOGGER.sync_scope(RefCell::new(logger), || {
> +                    WARN_COUNTER.sync_scope(Cell::new(0), || {
> +                        let worker1 = worker.clone();
> +
> +                        let result = match std::panic::catch_unwind(move || f(worker1)) {
> +                            Ok(r) => r,
> +                            Err(panic) => match panic.downcast::<&str>() {
> +                                Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
> +                                Err(_) => Err(format_err!("worker panicked: unknown type.")),
> +                            },
> +                        };
>  
> -                worker.log_result(&result);
> +                        worker.log_result(&result);
> +                    });
> +                });
>              });
>  
>          Ok(upid_str)
> @@ -933,7 +951,7 @@ impl WorkerTask {
>  
>      /// create state from self and a result
>      pub fn create_state(&self, result: &Result<(), Error>) -> TaskState {
> -        let warn_count = self.data.lock().unwrap().warn_count;
> +        let warn_count = WARN_COUNTER.try_with(Cell::get).unwrap_or(0);
>  
>          let endtime = proxmox_time::epoch_i64();
>  
> @@ -964,15 +982,7 @@ impl WorkerTask {
>  
>      /// Log a message.
>      pub fn log_message<S: AsRef<str>>(&self, msg: S) {
> -        let mut data = self.data.lock().unwrap();
> -        data.logger.log(msg);
> -    }
> -
> -    /// Log a message as warning.
> -    pub fn log_warning<S: AsRef<str>>(&self, msg: S) {
> -        let mut data = self.data.lock().unwrap();
> -        data.logger.log(format!("WARN: {}", msg.as_ref()));
> -        data.warn_count += 1;
> +        info!(tasklog = true, "{}", msg.as_ref());
>      }
>  
>      /// Set progress indicator
> @@ -1035,16 +1045,6 @@ impl WorkerTaskContext for WorkerTask {
>      fn fail_on_shutdown(&self) -> Result<(), Error> {
>          crate::fail_on_shutdown()
>      }
> -
> -    fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
> -        match level {
> -            log::Level::Error => self.log_warning(message.to_string()),
> -            log::Level::Warn => self.log_warning(message.to_string()),
> -            log::Level::Info => self.log_message(message.to_string()),
> -            log::Level::Debug => self.log_message(format!("DEBUG: {}", message)),
> -            log::Level::Trace => self.log_message(format!("TRACE: {}", message)),
> -        }
> -    }
>  }
>  
>  /// Wait for a locally spanned worker task
> diff --git a/proxmox-sys/src/worker_task_context.rs b/proxmox-sys/src/worker_task_context.rs
> index 2c86857c..743ae53f 100644
> --- a/proxmox-sys/src/worker_task_context.rs
> +++ b/proxmox-sys/src/worker_task_context.rs
> @@ -26,9 +26,6 @@ pub trait WorkerTaskContext: Send + Sync {
>          }
>          Ok(())
>      }
> -
> -    /// Create a log message for this task.
> -    fn log(&self, level: log::Level, message: &std::fmt::Arguments);
>  }
>  
>  /// Convenience implementation:
> @@ -48,48 +45,4 @@ impl<T: WorkerTaskContext + ?Sized> WorkerTaskContext for std::sync::Arc<T> {
>      fn fail_on_shutdown(&self) -> Result<(), Error> {
>          <T as WorkerTaskContext>::fail_on_shutdown(self)
>      }
> -
> -    fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
> -        <T as WorkerTaskContext>::log(self, level, message)
> -    }
> -}
> -
> -/// Log an error to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_error {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Error, &format_args!($($fmt)+))
> -    }};
> -}
> -
> -/// Log a warning to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_warn {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Warn, &format_args!($($fmt)+))
> -    }};
> -}
> -
> -/// Log a message to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_log {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Info, &format_args!($($fmt)+))
> -    }};
> -}
> -
> -/// Log a debug message to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_debug {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Debug, &format_args!($($fmt)+))
> -    }};
> -}
> -
> -/// Log a trace message to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_trace {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Trace, &format_args!($($fmt)+))
> -    }};
>  }

-- 
- Lukas






^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [PATCH proxmox v5 1/4] proxmox-log: add tracing infrastructure
  2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox v5 1/4] proxmox-log: add tracing infrastructure Gabriel Goller
@ 2024-06-24 11:10   ` Lukas Wagner
  2024-06-25  8:44     ` Gabriel Goller
  0 siblings, 1 reply; 14+ messages in thread
From: Lukas Wagner @ 2024-06-24 11:10 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Gabriel Goller

Some comments inline

On  2024-06-13 15:56, Gabriel Goller wrote:
> Add the `proxmox_log` crate which includes the new logging infra.
> Export the `init_logger` function, which creates the `tracing` logger
> that includes the default subscriber and a single custom layer.
> 
> This layer checks on every event/log whether a FileLogger is set in the
> task-local storage; if it is, it logs to the specified file, otherwise
> to syslog.
> 
> Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
> ---
>  Cargo.toml                              |   6 +
>  proxmox-log/Cargo.toml                  |  23 ++++
>  proxmox-log/debian/changelog            |   5 +
>  proxmox-log/debian/control              |  52 +++++++++
>  proxmox-log/debian/copyright            |  18 +++
>  proxmox-log/debian/debcargo.toml        |   7 ++
>  proxmox-log/src/file_logger.rs          | 147 ++++++++++++++++++++++++
>  proxmox-log/src/lib.rs                  |  37 ++++++
>  proxmox-log/src/syslog_tasklog_layer.rs | 106 +++++++++++++++++
>  proxmox-rest-server/Cargo.toml          |   2 +
>  10 files changed, 403 insertions(+)
>  create mode 100644 proxmox-log/Cargo.toml
>  create mode 100644 proxmox-log/debian/changelog
>  create mode 100644 proxmox-log/debian/control
>  create mode 100644 proxmox-log/debian/copyright
>  create mode 100644 proxmox-log/debian/debcargo.toml
>  create mode 100644 proxmox-log/src/file_logger.rs
>  create mode 100644 proxmox-log/src/lib.rs
>  create mode 100644 proxmox-log/src/syslog_tasklog_layer.rs
> 
> diff --git a/Cargo.toml b/Cargo.toml
> index b3c97808..6e7bdb8d 100644
> --- a/Cargo.toml
> +++ b/Cargo.toml
> @@ -17,6 +17,7 @@ members = [
>      "proxmox-io",
>      "proxmox-lang",
>      "proxmox-ldap",
> +    "proxmox-log",
>      "proxmox-login",
>      "proxmox-metrics",
>      "proxmox-network-api",
> @@ -97,11 +98,15 @@ serde_cbor = "0.11.1"
>  serde_json = "1.0"
>  serde_plain = "1.0"
>  syn = { version = "2", features = [ "full", "visit-mut" ] }
> +syslog = "6"
>  tar = "0.4"
>  tokio = "1.6"
>  tokio-openssl = "0.6.1"
>  tokio-stream = "0.1.0"
>  tower-service = "0.3.0"
> +tracing = "0.1"
> +tracing-log = { version = "0.1.3", default-features = false }
> +tracing-subscriber = "0.3.16"
>  url = "2.2"
>  walkdir = "2"
>  webauthn-rs = "0.3"
> @@ -117,6 +122,7 @@ proxmox-http-error = { version = "0.1.0", path = "proxmox-http-error" }
>  proxmox-human-byte = { version = "0.1.0", path = "proxmox-human-byte" }
>  proxmox-io = { version = "1.0.0", path = "proxmox-io" }
>  proxmox-lang = { version = "1.1", path = "proxmox-lang" }
> +proxmox-log= { version = "0.1.0", path = "proxmox-log" }
>  proxmox-login = { version = "0.1.0", path = "proxmox-login" }
>  proxmox-product-config = { version = "0.1.0", path = "proxmox-product-config" }
>  proxmox-config-digest = { version = "0.1.0", path = "proxmox-config-digest" }
> diff --git a/proxmox-log/Cargo.toml b/proxmox-log/Cargo.toml
> new file mode 100644
> index 00000000..14ca6c7b
> --- /dev/null
> +++ b/proxmox-log/Cargo.toml
> @@ -0,0 +1,23 @@
> +[package]
> +name = "proxmox-log"
> +version = "0.1.0"
> +authors.workspace = true
> +edition.workspace = true
> +license.workspace = true
> +repository.workspace = true
> +description = "Logging infrastructure for proxmox"
> +
> +exclude.workspace = true
> +
> +[dependencies]
> +anyhow.workspace = true
> +syslog.workspace = true

Just wondering, have you seen/checked out tracing-journald?
Would that be interesting for us?

https://docs.rs/tracing-journald/latest/tracing_journald/

> +nix.workspace = true
> +log.workspace = true
> +tracing.workspace = true
> +tracing-subscriber.workspace = true
> +tracing-log = { workspace = true, features = ["std"] }
> +tokio = { workspace = true, features = ["rt-multi-thread"] }
> +proxmox-time.workspace = true
> +proxmox-sys.workspace = true
> +
> diff --git a/proxmox-log/debian/changelog b/proxmox-log/debian/changelog
> new file mode 100644
> index 00000000..3661bae9
> --- /dev/null
> +++ b/proxmox-log/debian/changelog
> @@ -0,0 +1,5 @@
> +rust-proxmox-log (0.1.0-3) UNRELEASED; urgency=medium
> +
> +  * Initial release
> +
> + -- Gabriel Goller <ggoller@luna.proxmox.com>  Wed, 11 Oct 2023 15:13:58 +0200
> diff --git a/proxmox-log/debian/control b/proxmox-log/debian/control
> new file mode 100644
> index 00000000..14b2376d
> --- /dev/null
> +++ b/proxmox-log/debian/control
> @@ -0,0 +1,52 @@
> +Source: rust-proxmox-log
> +Section: rust
> +Priority: optional
> +Build-Depends: debhelper (>= 12),
> + dh-cargo (>= 25),
> + cargo:native <!nocheck>,
> + rustc:native <!nocheck>,
> + libstd-rust-dev <!nocheck>,
> + librust-anyhow-1+default-dev <!nocheck>,
> + librust-log-0.4+default-dev (>= 0.4.17-~~) <!nocheck>,
> + librust-nix-0.26+default-dev (>= 0.26.1-~~) <!nocheck>,
> + librust-proxmox-sys-0.5+default-dev (>= 0.5.1-~~) <!nocheck>,
> + librust-proxmox-time-1+default-dev (>= 1.1.6-~~) <!nocheck>,
> + librust-syslog-6+default-dev <!nocheck>,
> + librust-tokio-1+default-dev (>= 1.6-~~) <!nocheck>,
> + librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~) <!nocheck>,
> + librust-tracing-0.1+default-dev <!nocheck>,
> + librust-tracing-log-0.1-dev (>= 0.1.3-~~) <!nocheck>,
> + librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~) <!nocheck>
> +Maintainer: Proxmox Support Team <support@proxmox.com>
> +Standards-Version: 4.6.2
> +Vcs-Git: git://git.proxmox.com/git/proxmox.git
> +Vcs-Browser: https://git.proxmox.com/?p=proxmox.git
> +X-Cargo-Crate: proxmox-log
> +Rules-Requires-Root: no
> +
> +Package: librust-proxmox-log-dev
> +Architecture: any
> +Multi-Arch: same
> +Depends:
> + ${misc:Depends},
> + librust-anyhow-1+default-dev,
> + librust-log-0.4+default-dev (>= 0.4.17-~~),
> + librust-nix-0.26+default-dev (>= 0.26.1-~~),
> + librust-proxmox-sys-0.5+default-dev (>= 0.5.1-~~),
> + librust-proxmox-time-1+default-dev (>= 1.1.6-~~),
> + librust-syslog-6+default-dev,
> + librust-tokio-1+default-dev (>= 1.6-~~),
> + librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~),
> + librust-tracing-0.1+default-dev,
> + librust-tracing-log-0.1-dev (>= 0.1.3-~~),
> + librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~)
> +Provides:
> + librust-proxmox-log+default-dev (= ${binary:Version}),
> + librust-proxmox-log-0-dev (= ${binary:Version}),
> + librust-proxmox-log-0+default-dev (= ${binary:Version}),
> + librust-proxmox-log-0.1-dev (= ${binary:Version}),
> + librust-proxmox-log-0.1+default-dev (= ${binary:Version}),
> + librust-proxmox-log-0.1.0-dev (= ${binary:Version}),
> + librust-proxmox-log-0.1.0+default-dev (= ${binary:Version})
> +Description: Logging infrastructure for proxmox - Rust source code
> + Source code for Debianized Rust crate "proxmox-log"
> diff --git a/proxmox-log/debian/copyright b/proxmox-log/debian/copyright
> new file mode 100644
> index 00000000..b227c290
> --- /dev/null
> +++ b/proxmox-log/debian/copyright
> @@ -0,0 +1,18 @@
> +Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
> +
> +Files:
> + *
> +Copyright: 2019 - 2024 Proxmox Server Solutions GmbH <support@proxmox.com>
> +License: AGPL-3.0-or-later
> + This program is free software: you can redistribute it and/or modify it under
> + the terms of the GNU Affero General Public License as published by the Free
> + Software Foundation, either version 3 of the License, or (at your option) any
> + later version.
> + .
> + This program is distributed in the hope that it will be useful, but WITHOUT
> + ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
> + FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
> + details.
> + .
> + You should have received a copy of the GNU Affero General Public License along
> + with this program. If not, see <https://www.gnu.org/licenses/>.
> diff --git a/proxmox-log/debian/debcargo.toml b/proxmox-log/debian/debcargo.toml
> new file mode 100644
> index 00000000..b7864cdb
> --- /dev/null
> +++ b/proxmox-log/debian/debcargo.toml
> @@ -0,0 +1,7 @@
> +overlay = "."
> +crate_src_path = ".."
> +maintainer = "Proxmox Support Team <support@proxmox.com>"
> +
> +[source]
> +vcs_git = "git://git.proxmox.com/git/proxmox.git"
> +vcs_browser = "https://git.proxmox.com/?p=proxmox.git"
> diff --git a/proxmox-log/src/file_logger.rs b/proxmox-log/src/file_logger.rs
> new file mode 100644
> index 00000000..c7e1d644
> --- /dev/null
> +++ b/proxmox-log/src/file_logger.rs
> @@ -0,0 +1,147 @@
> +use std::io::Write;
> +
> +use anyhow::Error;
> +use nix::fcntl::OFlag;
> +
> +use proxmox_sys::fs::{atomic_open_or_create_file, CreateOptions};
> +
> +/// Options to control the behavior of a [FileLogger] instance
> +#[derive(Default)]
> +pub struct FileLogOptions {
> +    /// Open underlying log file in append mode, useful when multiple concurrent processes
> +    /// want to log to the same file (e.g., HTTP access log). Note that it is only atomic
> +    /// for writes smaller than the PIPE_BUF (4k on Linux).
> +    /// Inside the same process you may need to still use an mutex, for shared access.
> +    pub append: bool,
> +    /// Open underlying log file as readable
> +    pub read: bool,
> +    /// If set, ensure that the file is newly created or error out if already existing.
> +    pub exclusive: bool,
> +    /// Duplicate logged messages to STDOUT, like tee
> +    pub to_stdout: bool,
> +    /// Prefix messages logged to the file with the current local time as RFC 3339
> +    pub prefix_time: bool,
> +    /// File owner/group and mode
> +    pub file_opts: CreateOptions,
> +}
> +
> +/// Log messages with optional automatically added timestamps into files
> +///
> +/// #### Example:
> +/// ```
> +/// # use anyhow::{bail, format_err, Error};
> +/// use proxmox_log::{flog, FileLogger, FileLogOptions};
> +///
> +/// # std::fs::remove_file("test.log");
> +/// let options = FileLogOptions {
> +///     to_stdout: true,
> +///     exclusive: true,
> +///     ..Default::default()
> +/// };
> +/// let mut log = FileLogger::new("test.log", options).unwrap();
> +/// flog!(log, "A simple log: {}", "Hello!");
> +/// # std::fs::remove_file("test.log");
> +/// ```
> +pub struct FileLogger {
> +    file: std::fs::File,
> +    file_name: std::path::PathBuf,
> +    options: FileLogOptions,
> +}
> +
> +/// Log messages to [FileLogger] - ``println`` like macro
> +#[macro_export]
> +macro_rules! flog {
> +    ($log:expr, $($arg:tt)*) => ({
> +        $log.log(format!($($arg)*));
> +    })
> +}
> +
> +impl FileLogger {
> +    pub fn new<P: AsRef<std::path::Path>>(

Maybe add a doc comment for this pub fn?

> +        file_name: P,
> +        options: FileLogOptions,
> +    ) -> Result<Self, Error> {
> +        let file = Self::open(&file_name, &options)?;
> +
> +        let file_name: std::path::PathBuf = file_name.as_ref().to_path_buf();
> +
> +        Ok(Self {
> +            file,
> +            file_name,
> +            options,
> +        })
> +    }
> +
> +    pub fn reopen(&mut self) -> Result<&Self, Error> {

Maybe add a doc comment for this pub fn?

> +        let file = Self::open(&self.file_name, &self.options)?;
> +        self.file = file;
> +        Ok(self)
> +    }
> +
> +    fn open<P: AsRef<std::path::Path>>(
> +        file_name: P,
> +        options: &FileLogOptions,
> +    ) -> Result<std::fs::File, Error> {
> +        let mut flags = OFlag::O_CLOEXEC;
> +
> +        if options.read {
> +            flags |= OFlag::O_RDWR;
> +        } else {
> +            flags |= OFlag::O_WRONLY;
> +        }
> +
> +        if options.append {
> +            flags |= OFlag::O_APPEND;
> +        }
> +        if options.exclusive {
> +            flags |= OFlag::O_EXCL;
> +        }
> +
> +        let file =
> +            atomic_open_or_create_file(&file_name, flags, &[], options.file_opts.clone(), false)?;
> +
> +        Ok(file)
> +    }
> +
> +    pub fn log<S: AsRef<str>>(&mut self, msg: S) {

Maybe add a doc comment for this pub fn?

> +        let msg = msg.as_ref();
> +
> +        if self.options.to_stdout {
> +            let mut stdout = std::io::stdout();
> +            stdout.write_all(msg.as_bytes()).unwrap();
> +            stdout.write_all(b"\n").unwrap();

Instead of .unwrap'ing, you could just ignore the result:
  let _ = stdout.write_all(...);

If you're 100% sure that the write cannot fail, I'd like to see a comment
explaining why that is. Sometimes I also like to put that reasoning in the
message of .expect(...) and use that instead of .unwrap().
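
A minimal stdlib-only sketch of both options (the `write_line` helper is
made up for illustration, it's not from the patch):

```rust
use std::io::Write;

// Option 1: a logger should not panic, so just report whether the
// write succeeded instead of unwrapping.
fn write_line(out: &mut dyn Write, msg: &str) -> bool {
    out.write_all(msg.as_bytes()).is_ok()
}

fn main() {
    let mut buf: Vec<u8> = Vec::new();
    assert!(write_line(&mut buf, "log line\n"));
    assert_eq!(buf, b"log line\n".to_vec());

    // Option 2: if the write really must succeed, document why in the
    // message instead of using a bare .unwrap().
    std::io::stdout()
        .write_all(b"log line\n")
        .expect("stdout stays open for the lifetime of the process");
}
```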

> +        }
> +
> +        let line = if self.options.prefix_time {
> +            let now = proxmox_time::epoch_i64();
> +            let rfc3339 = match proxmox_time::epoch_to_rfc3339(now) {
> +                Ok(rfc3339) => rfc3339,
> +                Err(_) => "1970-01-01T00:00:00Z".into(), // for safety, should really not happen!
> +            };
> +            format!("{}: {}\n", rfc3339, msg)
> +        } else {
> +            format!("{}\n", msg)

You could inline these vars in the format string
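
For reference, with captured identifiers (Rust 1.58+) the two forms are
equivalent; a tiny sketch:

```rust
fn main() {
    let rfc3339 = "1970-01-01T00:00:00Z";
    let msg = "starting garbage collection";

    let positional = format!("{}: {}\n", rfc3339, msg);
    let inlined = format!("{rfc3339}: {msg}\n"); // same output, less noise

    assert_eq!(positional, inlined);
}
```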

> +        };
> +        if let Err(err) = self.file.write_all(line.as_bytes()) {
> +            // avoid panicking, log methods should not do that
> +            // FIXME: or, return result???
> +            log::error!("error writing to log file - {}", err);

Since you use `FileLogger` in `SyslogAndTaskLogger` as a tracing logging layer and
also setup the `tracing-log` bridge to convert log records to tracing events,
wouldn't this lead to recursion?
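
To make the concern concrete, here is a stdlib-only model of the cycle (all
names are illustrative, the real path would go through `LogTracer` and the
layer's `on_event`): the handler writes to a file, the write's error path
emits another log record, and the bridge would feed that record back into
the same handler. A thread-local reentrancy flag is one hypothetical way to
break such a loop, if it exists:

```rust
use std::cell::Cell;

thread_local! {
    // Hypothetical reentrancy flag; not part of the patch.
    static IN_EVENT: Cell<bool> = Cell::new(false);
}

// Simplified stand-in for the layer's event handler. `depth` models the
// error path re-emitting a log record while the handler is running.
fn on_event(msg: &str, depth: usize, sink: &mut Vec<String>) {
    IN_EVENT.with(|flag| {
        if flag.get() {
            return; // drop the nested event instead of recursing forever
        }
        flag.set(true);
        sink.push(msg.to_string());
        if depth > 0 {
            // models FileLogger::log's error path calling log::error!
            on_event("error writing to log file", depth - 1, sink);
        }
        flag.set(false);
    });
}

fn main() {
    let mut sink = Vec::new();
    on_event("first event", 1, &mut sink);
    // the nested error-path event was suppressed by the guard
    assert_eq!(sink, vec!["first event".to_string()]);
}
```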

> +        }
> +    }
> +}
> +
> +impl std::io::Write for FileLogger {
> +    fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
> +        if self.options.to_stdout {
> +            let _ = std::io::stdout().write(buf);
> +        }
> +        self.file.write(buf)
> +    }
> +
> +    fn flush(&mut self) -> Result<(), std::io::Error> {
> +        if self.options.to_stdout {
> +            let _ = std::io::stdout().flush();
> +        }
> +        self.file.flush()
> +    }
> +}
> diff --git a/proxmox-log/src/lib.rs b/proxmox-log/src/lib.rs
> new file mode 100644
> index 00000000..daaaede5
> --- /dev/null
> +++ b/proxmox-log/src/lib.rs
> @@ -0,0 +1,37 @@
> +use std::{
> +    cell::{Cell, RefCell},
> +    env,
> +};
> +use syslog_tasklog_layer::SyslogAndTasklogLayer;
> +use tracing_log::{AsLog, LogTracer};
> +use tracing_subscriber::filter::LevelFilter;
> +use tracing_subscriber::prelude::*;
> +
> +mod file_logger;
> +pub use file_logger::{FileLogOptions, FileLogger};
> +
> +mod syslog_tasklog_layer;
> +
> +tokio::task_local! {
> +    pub static LOGGER: RefCell<FileLogger>;
> +    pub static WARN_COUNTER: Cell<u64> ;
> +}
> +
> +pub fn init_logger(
> +    env_var_name: &str,
> +    default_log_level: LevelFilter,
> +    application_name: &str,
> +) -> Result<(), anyhow::Error> {
> +    let mut log_level = default_log_level;
> +    if let Ok(v) = env::var(env_var_name) {
> +        if let Ok(l) = v.parse::<LevelFilter>() {
> +            log_level = l;
> +        }
> +    }
> +    let registry = tracing_subscriber::registry()
> +        .with(SyslogAndTasklogLayer::new(application_name.to_string()).with_filter(log_level));
> +
> +    tracing::subscriber::set_global_default(registry)?;
> +    LogTracer::init_with_filter(log_level.as_log())?;
> +    Ok(())
> +}
> diff --git a/proxmox-log/src/syslog_tasklog_layer.rs b/proxmox-log/src/syslog_tasklog_layer.rs
> new file mode 100644
> index 00000000..89437caf
> --- /dev/null
> +++ b/proxmox-log/src/syslog_tasklog_layer.rs
> @@ -0,0 +1,106 @@
> +use std::fmt::Write as _;
> +use std::sync::Arc;
> +use std::sync::Mutex;
> +
> +use syslog::{Formatter3164, LoggerBackend};
> +use tracing::field::Field;
> +use tracing::field::Visit;
> +use tracing::Event;
> +use tracing::Level;
> +use tracing::Subscriber;
> +use tracing_subscriber::layer::Context;
> +use tracing_subscriber::Layer;
> +
> +use crate::FileLogger;
> +use crate::LOGGER;
> +use crate::WARN_COUNTER;
> +
> +pub struct SyslogAndTasklogLayer {
> +    syslog_logger: Arc<Mutex<syslog::Logger<LoggerBackend, Formatter3164>>>,
> +}
> +
> +impl SyslogAndTasklogLayer {
> +    pub fn new(application_name: String) -> Self {
> +        let formatter = Formatter3164 {
> +            facility: syslog::Facility::LOG_DAEMON,
> +            process: application_name,
> +            ..Formatter3164::default()
> +        };
> +
> +        // we panic here if we can't initialize the syslogger
> +        let logger = syslog::unix(formatter)

Maybe explain why it is okay to panic in this case? (is it?)

> +            .map_err(|e| {
> +                anyhow::Error::new(std::io::Error::new(
> +                    std::io::ErrorKind::Other,
> +                    e.description(),
> +                ))
> +            })
> +            .unwrap();
> +
> +        let logger = Arc::new(Mutex::new(logger));
> +
> +        Self {
> +            syslog_logger: logger,
> +        }
> +    }
> +}
> +
> +impl<S: Subscriber> Layer<S> for SyslogAndTasklogLayer {
> +    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
> +        let mut buf = String::new();
> +        event.record(&mut EventVisitor::new(&mut buf));
> +        let level = event.metadata().level();
> +
> +        let result = LOGGER.try_with(|logger| {
> +            log_to_file(&mut logger.borrow_mut(), level, &buf);
> +        });
> +        if result.is_err() || *level == Level::ERROR {
> +            log_to_syslog(&mut self.syslog_logger.lock().unwrap(), level, &buf);

You could also use .expect with an explanatory message here.
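
For example (with a stand-in `Vec` instead of the real syslog logger):

```rust
use std::sync::Mutex;

fn main() {
    // Hypothetical stand-in for the syslog logger behind a Mutex.
    let logger = Mutex::new(Vec::<String>::new());

    // .expect documents the only way lock() can fail here: another
    // thread panicked while holding the lock (a poisoned mutex).
    let mut guard = logger
        .lock()
        .expect("syslog logger mutex poisoned by a panicking thread");
    guard.push("message".to_string());
    assert_eq!(guard.len(), 1);
}
```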

> +        }
> +    }
> +}
> +
> +fn log_to_syslog(
> +    logger: &mut syslog::Logger<LoggerBackend, Formatter3164>,
> +    level: &Level,
> +    buf: &String,
> +) {
> +    let _ = match *level {
> +        Level::ERROR => logger.err(buf),
> +        Level::WARN => logger.warning(buf),
> +        Level::INFO => logger.info(buf),
> +        Level::DEBUG => logger.debug(buf),
> +        Level::TRACE => logger.debug(buf),
> +    };
> +}
> +fn log_to_file(logger: &mut FileLogger, level: &Level, buf: &String) {
> +    match *level {
> +        Level::ERROR | Level::WARN => {
> +            WARN_COUNTER.with(|counter| {
> +                counter.set(counter.get() + 1);
> +            });
> +            logger.log(buf);
> +        }
> +        Level::INFO => logger.log(buf),
> +        Level::DEBUG => logger.log(format!("DEBUG: {}", buf)),
> +        Level::TRACE => logger.log(format!("TRACE: {}", buf)),

You could inline these vars :)

> +    };
> +}
> +
> +struct EventVisitor<'a> {
> +    buf: &'a mut String,
> +}
> +
> +impl<'a> EventVisitor<'a> {
> +    fn new(buf: &'a mut String) -> Self {
> +        Self { buf }
> +    }
> +}
> +
> +impl Visit for EventVisitor<'_> {
> +    fn record_debug(&mut self, field: &Field, value: &dyn std::fmt::Debug) {
> +        if field.name() == "message" {
> +            let _ = write!(self.buf, "{value:?}");
> +        }
> +    }
> +}
> diff --git a/proxmox-rest-server/Cargo.toml b/proxmox-rest-server/Cargo.toml
> index 94330ff1..2bfea4a0 100644
> --- a/proxmox-rest-server/Cargo.toml
> +++ b/proxmox-rest-server/Cargo.toml
> @@ -34,6 +34,7 @@ tokio-openssl.workspace = true
>  tokio-stream.workspace = true
>  tower-service.workspace = true
>  url.workspace = true
> +tracing.workspace = true

please keep the alphabetical order :)

>  
>  proxmox-async.workspace = true
>  proxmox-compression.workspace = true
> @@ -44,6 +45,7 @@ proxmox-router.workspace = true
>  proxmox-schema = { workspace = true, features = [ "api-macro", "upid-api-impl" ] }
>  proxmox-sys = { workspace = true, features = [ "logrotate", "timer" ] }
>  proxmox-time.workspace = true
> +proxmox-log.workspace = true

Same here

>  
>  [features]
>  default = []

-- 
- Lukas


_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [PATCH proxmox-backup v5 3/4] switch from task_log! macro to tracing
  2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox-backup v5 3/4] switch from task_log! macro to tracing Gabriel Goller
@ 2024-06-24 11:10   ` Lukas Wagner
  0 siblings, 0 replies; 14+ messages in thread
From: Lukas Wagner @ 2024-06-24 11:10 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Gabriel Goller

Hi!

Since you are touching all log calls anyway, you *could* also
inline all variables where possible,
e.g. info!("asdf {}", foo) -> info!("asdf {foo}").

But that's only a suggestion, no hard feelings if you don't -
I don't want this to stand in the way delaying the merge
of these patches even longer ;)

Oh, and something else: usually we like to structure imports
into 4 distinct blocks, separated by newlines:

use std::... // imports from std

use tracing::... // 3rd party crates

use proxmox_log::... // imports from our own deps

use pbs-api-types::... // imports from the same repo

Your changes mix that up in some places. No big issue, but
it would be nice if you could put that in order if the need
for a new version of the series arises.

Both comments also apply to the other patch for proxmox-backup.


On  2024-06-13 15:56, Gabriel Goller wrote:
> Import `proxmox-log` and substitute all `task_log!`
> (and task_warn!, task_error!) invocations with tracing calls (info!,
> warn!, etc..). Remove worker references where it isn't necessary
> anymore.
> 
> Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
> ---
>  Cargo.toml                       |   7 +
>  debian/control                   |   2 +
>  pbs-datastore/Cargo.toml         |   1 +
>  pbs-datastore/src/chunk_store.rs |  30 +---
>  pbs-datastore/src/datastore.rs   |  70 ++++------
>  src/backup/verify.rs             | 103 ++++----------
>  src/bin/proxmox-backup-api.rs    |  10 +-
>  src/bin/proxmox-backup-proxy.rs  |  42 ++----
>  src/server/gc_job.rs             |   6 +-
>  src/server/prune_job.rs          |  26 ++--
>  src/server/pull.rs               | 230 ++++++++++---------------------
>  src/server/realm_sync_job.rs     |  44 ++----
>  src/server/verify_job.rs         |  10 +-
>  src/tape/drive/mod.rs            |  34 ++---
>  src/tape/pool_writer/mod.rs      |  88 +++++-------
>  src/tools/disks/mod.rs           |  21 ++-
>  tests/worker-task-abort.rs       |   9 +-
>  17 files changed, 252 insertions(+), 481 deletions(-)
> 
> diff --git a/Cargo.toml b/Cargo.toml
> index 5eddd493..21f07e67 100644
> --- a/Cargo.toml
> +++ b/Cargo.toml
> @@ -63,6 +63,7 @@ proxmox-http = { version = "0.9.0", features = [ "client", "http-helpers", "webs
>  proxmox-human-byte = "0.1"
>  proxmox-io = "1.0.1" # tools and client use "tokio" feature
>  proxmox-lang = "1.1"
> +proxmox-log = "0.1"
>  proxmox-ldap = "0.2.1"
>  proxmox-metrics = "0.3.1"
>  proxmox-notify = "0.4"
> @@ -145,6 +146,8 @@ tokio = "1.6"
>  tokio-openssl = "0.6.1"
>  tokio-stream = "0.1.0"
>  tokio-util = { version = "0.7", features = [ "io" ] }
> +tracing = "0.1"
> +tracing-subscriber = "0.3.16"
>  tower-service = "0.3.0"
>  udev = "0.4"
>  url = "2.1"
> @@ -192,6 +195,8 @@ tokio = { workspace = true, features = [ "fs", "io-util", "io-std", "macros", "n
>  tokio-openssl.workspace = true
>  tokio-stream.workspace = true
>  tokio-util = { workspace = true, features = [ "codec" ] }
> +tracing.workspace = true
> +tracing-subscriber.workspace = true
>  tower-service.workspace = true
>  udev.workspace = true
>  url.workspace = true
> @@ -210,6 +215,7 @@ proxmox-http = { workspace = true, features = [ "client-trait", "proxmox-async",
>  proxmox-human-byte.workspace = true
>  proxmox-io.workspace = true
>  proxmox-lang.workspace = true
> +proxmox-log.workspace = true
>  proxmox-ldap.workspace = true
>  proxmox-metrics.workspace = true
>  proxmox-notify = { workspace = true, features = [ "pbs-context" ] }
> @@ -257,6 +263,7 @@ proxmox-rrd.workspace = true
>  #proxmox-human-byte = { path = "../proxmox/proxmox-human-byte" }
>  #proxmox-io = { path = "../proxmox/proxmox-io" }
>  #proxmox-lang = { path = "../proxmox/proxmox-lang" }
> +#proxmox-log = { path = "../proxmox/proxmox-log" }
>  #proxmox-ldap = { path = "../proxmox/proxmox-ldap" }
>  #proxmox-metrics = { path = "../proxmox/proxmox-metrics" }
>  #proxmox-notify = { path = "../proxmox/proxmox-notify" }
> diff --git a/debian/control b/debian/control
> index cbf9fbd0..9a958888 100644
> --- a/debian/control
> +++ b/debian/control
> @@ -134,6 +134,8 @@ Build-Depends: bash-completion,
>                 librust-tokio-util-0.7+default-dev,
>                 librust-tokio-util-0.7+io-dev,
>                 librust-tower-service-0.3+default-dev,
> +               librust-tracing-0.1+default-dev,
> +               librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~),
>                 librust-udev-0.4+default-dev,
>                 librust-url-2+default-dev (>= 2.1-~~),
>                 librust-walkdir-2+default-dev,
> diff --git a/pbs-datastore/Cargo.toml b/pbs-datastore/Cargo.toml
> index b793dc5b..fcb11ac4 100644
> --- a/pbs-datastore/Cargo.toml
> +++ b/pbs-datastore/Cargo.toml
> @@ -20,6 +20,7 @@ openssl.workspace = true
>  serde.workspace = true
>  serde_json.workspace = true
>  tokio = { workspace = true, features = [] }
> +tracing.workspace = true
>  walkdir.workspace = true
>  zstd.workspace = true
>  
> diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
> index 9f6289c9..8402d2fe 100644
> --- a/pbs-datastore/src/chunk_store.rs
> +++ b/pbs-datastore/src/chunk_store.rs
> @@ -3,6 +3,7 @@ use std::path::{Path, PathBuf};
>  use std::sync::{Arc, Mutex};
>  
>  use anyhow::{bail, format_err, Error};
> +use tracing::info;
>  
>  use pbs_api_types::{DatastoreFSyncLevel, GarbageCollectionStatus};
>  use proxmox_io::ReadExt;
> @@ -10,7 +11,6 @@ use proxmox_sys::fs::{create_dir, create_path, file_type_from_file_stat, CreateO
>  use proxmox_sys::process_locker::{
>      ProcessLockExclusiveGuard, ProcessLockSharedGuard, ProcessLocker,
>  };
> -use proxmox_sys::task_log;
>  use proxmox_sys::WorkerTaskContext;
>  
>  use crate::file_formats::{
> @@ -92,7 +92,6 @@ impl ChunkStore {
>          path: P,
>          uid: nix::unistd::Uid,
>          gid: nix::unistd::Gid,
> -        worker: Option<&dyn WorkerTaskContext>,
>          sync_level: DatastoreFSyncLevel,
>      ) -> Result<Self, Error>
>      where
> @@ -143,9 +142,7 @@ impl ChunkStore {
>              }
>              let percentage = (i * 100) / (64 * 1024);
>              if percentage != last_percentage {
> -                if let Some(worker) = worker {
> -                    task_log!(worker, "Chunkstore create: {}%", percentage)
> -                }
> +                info!("Chunkstore create: {}%", percentage);
>                  last_percentage = percentage;
>              }
>          }
> @@ -374,7 +371,7 @@ impl ChunkStore {
>          for (entry, percentage, bad) in self.get_chunk_iterator()? {
>              if last_percentage != percentage {
>                  last_percentage = percentage;
> -                task_log!(worker, "processed {}% ({} chunks)", percentage, chunk_count,);
> +                info!("processed {}% ({} chunks)", percentage, chunk_count,);
>              }
>  
>              worker.check_abort()?;
> @@ -578,15 +575,8 @@ fn test_chunk_store1() {
>      let user = nix::unistd::User::from_uid(nix::unistd::Uid::current())
>          .unwrap()
>          .unwrap();
> -    let chunk_store = ChunkStore::create(
> -        "test",
> -        &path,
> -        user.uid,
> -        user.gid,
> -        None,
> -        DatastoreFSyncLevel::None,
> -    )
> -    .unwrap();
> +    let chunk_store =
> +        ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None).unwrap();
>  
>      let (chunk, digest) = crate::data_blob::DataChunkBuilder::new(&[0u8, 1u8])
>          .build()
> @@ -598,14 +588,8 @@ fn test_chunk_store1() {
>      let (exists, _) = chunk_store.insert_chunk(&chunk, &digest).unwrap();
>      assert!(exists);
>  
> -    let chunk_store = ChunkStore::create(
> -        "test",
> -        &path,
> -        user.uid,
> -        user.gid,
> -        None,
> -        DatastoreFSyncLevel::None,
> -    );
> +    let chunk_store =
> +        ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None);
>      assert!(chunk_store.is_err());
>  
>      if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
> index f95da761..8cb98f59 100644
> --- a/pbs-datastore/src/datastore.rs
> +++ b/pbs-datastore/src/datastore.rs
> @@ -7,6 +7,7 @@ use std::sync::{Arc, Mutex};
>  use anyhow::{bail, format_err, Error};
>  use lazy_static::lazy_static;
>  use nix::unistd::{unlinkat, UnlinkatFlags};
> +use tracing::{info, warn};
>  
>  use proxmox_human_byte::HumanByte;
>  use proxmox_schema::ApiType;
> @@ -16,7 +17,6 @@ use proxmox_sys::fs::{file_read_optional_string, replace_file, CreateOptions};
>  use proxmox_sys::fs::{lock_dir_noblock, DirLockGuard};
>  use proxmox_sys::process_locker::ProcessLockSharedGuard;
>  use proxmox_sys::WorkerTaskContext;
> -use proxmox_sys::{task_log, task_warn};
>  
>  use pbs_api_types::{
>      Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig, DatastoreFSyncLevel,
> @@ -965,8 +965,7 @@ impl DataStore {
>              let digest = index.index_digest(pos).unwrap();
>              if !self.inner.chunk_store.cond_touch_chunk(digest, false)? {
>                  let hex = hex::encode(digest);
> -                task_warn!(
> -                    worker,
> +                warn!(
>                      "warning: unable to access non-existent chunk {hex}, required by {file_name:?}"
>                  );
>  
> @@ -1032,8 +1031,7 @@ impl DataStore {
>  
>              let percentage = (i + 1) * 100 / image_count;
>              if percentage > last_percentage {
> -                task_log!(
> -                    worker,
> +                info!(
>                      "marked {}% ({} of {} index files)",
>                      percentage,
>                      i + 1,
> @@ -1044,8 +1042,7 @@ impl DataStore {
>          }
>  
>          if strange_paths_count > 0 {
> -            task_log!(
> -                worker,
> +            info!(
>                  "found (and marked) {} index files outside of expected directory scheme",
>                  strange_paths_count,
>              );
> @@ -1085,11 +1082,11 @@ impl DataStore {
>                  ..Default::default()
>              };
>  
> -            task_log!(worker, "Start GC phase1 (mark used chunks)");
> +            info!("Start GC phase1 (mark used chunks)");
>  
>              self.mark_used_chunks(&mut gc_status, worker)?;
>  
> -            task_log!(worker, "Start GC phase2 (sweep unused chunks)");
> +            info!("Start GC phase2 (sweep unused chunks)");
>              self.inner.chunk_store.sweep_unused_chunks(
>                  oldest_writer,
>                  phase1_start_time,
> @@ -1097,30 +1094,27 @@ impl DataStore {
>                  worker,
>              )?;
>  
> -            task_log!(
> -                worker,
> +            info!(
>                  "Removed garbage: {}",
>                  HumanByte::from(gc_status.removed_bytes),
>              );
> -            task_log!(worker, "Removed chunks: {}", gc_status.removed_chunks);
> +            info!("Removed chunks: {}", gc_status.removed_chunks);
>              if gc_status.pending_bytes > 0 {
> -                task_log!(
> -                    worker,
> +                info!(
>                      "Pending removals: {} (in {} chunks)",
>                      HumanByte::from(gc_status.pending_bytes),
>                      gc_status.pending_chunks,
>                  );
>              }
>              if gc_status.removed_bad > 0 {
> -                task_log!(worker, "Removed bad chunks: {}", gc_status.removed_bad);
> +                info!("Removed bad chunks: {}", gc_status.removed_bad);
>              }
>  
>              if gc_status.still_bad > 0 {
> -                task_log!(worker, "Leftover bad chunks: {}", gc_status.still_bad);
> +                info!("Leftover bad chunks: {}", gc_status.still_bad);
>              }
>  
> -            task_log!(
> -                worker,
> +            info!(
>                  "Original data usage: {}",
>                  HumanByte::from(gc_status.index_data_bytes),
>              );
> @@ -1128,15 +1122,14 @@ impl DataStore {
>              if gc_status.index_data_bytes > 0 {
>                  let comp_per =
>                      (gc_status.disk_bytes as f64 * 100.) / gc_status.index_data_bytes as f64;
> -                task_log!(
> -                    worker,
> +                info!(
>                      "On-Disk usage: {} ({:.2}%)",
>                      HumanByte::from(gc_status.disk_bytes),
>                      comp_per,
>                  );
>              }
>  
> -            task_log!(worker, "On-Disk chunks: {}", gc_status.disk_chunks);
> +            info!("On-Disk chunks: {}", gc_status.disk_chunks);
>  
>              let deduplication_factor = if gc_status.disk_bytes > 0 {
>                  (gc_status.index_data_bytes as f64) / (gc_status.disk_bytes as f64)
> @@ -1144,11 +1137,11 @@ impl DataStore {
>                  1.0
>              };
>  
> -            task_log!(worker, "Deduplication factor: {:.2}", deduplication_factor);
> +            info!("Deduplication factor: {:.2}", deduplication_factor);
>  
>              if gc_status.disk_chunks > 0 {
>                  let avg_chunk = gc_status.disk_bytes / (gc_status.disk_chunks as u64);
> -                task_log!(worker, "Average chunk size: {}", HumanByte::from(avg_chunk));
> +                info!("Average chunk size: {}", HumanByte::from(avg_chunk));
>              }
>  
>              if let Ok(serialized) = serde_json::to_string(&gc_status) {
> @@ -1380,11 +1373,7 @@ impl DataStore {
>      /// Destroy a datastore. This requires that there are no active operations on the datastore.
>      ///
>      /// This is a synchronous operation and should be run in a worker-thread.
> -    pub fn destroy(
> -        name: &str,
> -        destroy_data: bool,
> -        worker: &dyn WorkerTaskContext,
> -    ) -> Result<(), Error> {
> +    pub fn destroy(name: &str, destroy_data: bool) -> Result<(), Error> {
>          let config_lock = pbs_config::datastore::lock_config()?;
>  
>          let (mut config, _digest) = pbs_config::datastore::config()?;
> @@ -1412,13 +1401,13 @@ impl DataStore {
>              let remove = |subdir, ok: &mut bool| {
>                  if let Err(err) = std::fs::remove_dir_all(base.join(subdir)) {
>                      if err.kind() != io::ErrorKind::NotFound {
> -                        task_warn!(worker, "failed to remove {subdir:?} subdirectory: {err}");
> +                        warn!("failed to remove {subdir:?} subdirectory: {err}");
>                          *ok = false;
>                      }
>                  }
>              };
>  
> -            task_log!(worker, "Deleting datastore data...");
> +            info!("Deleting datastore data...");
>              remove("ns", &mut ok); // ns first
>              remove("ct", &mut ok);
>              remove("vm", &mut ok);
> @@ -1427,7 +1416,7 @@ impl DataStore {
>              if ok {
>                  if let Err(err) = std::fs::remove_file(base.join(".gc-status")) {
>                      if err.kind() != io::ErrorKind::NotFound {
> -                        task_warn!(worker, "failed to remove .gc-status file: {err}");
> +                        warn!("failed to remove .gc-status file: {err}");
>                          ok = false;
>                      }
>                  }
> @@ -1441,7 +1430,7 @@ impl DataStore {
>  
>          // now the config
>          if ok {
> -            task_log!(worker, "Removing datastore from config...");
> +            info!("Removing datastore from config...");
>              let _lock = pbs_config::datastore::lock_config()?;
>              let _ = config.sections.remove(name);
>              pbs_config::datastore::save_config(&config)?;
> @@ -1452,35 +1441,32 @@ impl DataStore {
>              if ok {
>                  if let Err(err) = std::fs::remove_file(base.join(".lock")) {
>                      if err.kind() != io::ErrorKind::NotFound {
> -                        task_warn!(worker, "failed to remove .lock file: {err}");
> +                        warn!("failed to remove .lock file: {err}");
>                          ok = false;
>                      }
>                  }
>              }
>  
>              if ok {
> -                task_log!(worker, "Finished deleting data.");
> +                info!("Finished deleting data.");
>  
>                  match std::fs::remove_dir(base) {
> -                    Ok(()) => task_log!(worker, "Removed empty datastore directory."),
> +                    Ok(()) => info!("Removed empty datastore directory."),
>                      Err(err) if err.kind() == io::ErrorKind::NotFound => {
>                          // weird, but ok
>                      }
>                      Err(err) if err.is_errno(nix::errno::Errno::EBUSY) => {
> -                        task_warn!(
> -                            worker,
> -                            "Cannot delete datastore directory (is it a mount point?)."
> -                        )
> +                        warn!("Cannot delete datastore directory (is it a mount point?).")
>                      }
>                      Err(err) if err.is_errno(nix::errno::Errno::ENOTEMPTY) => {
> -                        task_warn!(worker, "Datastore directory not empty, not deleting.")
> +                        warn!("Datastore directory not empty, not deleting.")
>                      }
>                      Err(err) => {
> -                        task_warn!(worker, "Failed to remove datastore directory: {err}");
> +                        warn!("Failed to remove datastore directory: {err}");
>                      }
>                  }
>              } else {
> -                task_log!(worker, "There were errors deleting data.");
> +                info!("There were errors deleting data.");
>              }
>          }
>  
> diff --git a/src/backup/verify.rs b/src/backup/verify.rs
> index c972e532..dfad8824 100644
> --- a/src/backup/verify.rs
> +++ b/src/backup/verify.rs
> @@ -5,8 +5,9 @@ use std::sync::{Arc, Mutex};
>  use std::time::Instant;
>  
>  use anyhow::{bail, format_err, Error};
> +use tracing::info;
>  
> -use proxmox_sys::{task_log, WorkerTaskContext};
> +use proxmox_sys::WorkerTaskContext;
>  
>  use pbs_api_types::{
>      print_ns_and_snapshot, print_store_and_ns, Authid, BackupNamespace, BackupType, CryptMode,
> @@ -69,11 +70,7 @@ fn verify_blob(backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
>      }
>  }
>  
> -fn rename_corrupted_chunk(
> -    datastore: Arc<DataStore>,
> -    digest: &[u8; 32],
> -    worker: &dyn WorkerTaskContext,
> -) {
> +fn rename_corrupted_chunk(datastore: Arc<DataStore>, digest: &[u8; 32]) {
>      let (path, digest_str) = datastore.chunk_path(digest);
>  
>      let mut counter = 0;
> @@ -89,17 +86,12 @@ fn rename_corrupted_chunk(
>  
>      match std::fs::rename(&path, &new_path) {
>          Ok(_) => {
> -            task_log!(worker, "corrupted chunk renamed to {:?}", &new_path);
> +            info!("corrupted chunk renamed to {:?}", &new_path);
>          }
>          Err(err) => {
>              match err.kind() {
>                  std::io::ErrorKind::NotFound => { /* ignored */ }
> -                _ => task_log!(
> -                    worker,
> -                    "could not rename corrupted chunk {:?} - {}",
> -                    &path,
> -                    err
> -                ),
> +                _ => info!("could not rename corrupted chunk {:?} - {}", &path, err),
>              }
>          }
>      };
> @@ -117,7 +109,6 @@ fn verify_index_chunks(
>      let mut read_bytes = 0;
>      let mut decoded_bytes = 0;
>  
> -    let worker2 = Arc::clone(&verify_worker.worker);
>      let datastore2 = Arc::clone(&verify_worker.datastore);
>      let corrupt_chunks2 = Arc::clone(&verify_worker.corrupt_chunks);
>      let verified_chunks2 = Arc::clone(&verify_worker.verified_chunks);
> @@ -130,7 +121,7 @@ fn verify_index_chunks(
>              let chunk_crypt_mode = match chunk.crypt_mode() {
>                  Err(err) => {
>                      corrupt_chunks2.lock().unwrap().insert(digest);
> -                    task_log!(worker2, "can't verify chunk, unknown CryptMode - {}", err);
> +                    info!("can't verify chunk, unknown CryptMode - {}", err);
>                      errors2.fetch_add(1, Ordering::SeqCst);
>                      return Ok(());
>                  }
> @@ -138,20 +129,18 @@ fn verify_index_chunks(
>              };
>  
>              if chunk_crypt_mode != crypt_mode {
> -                task_log!(
> -                    worker2,
> +                info!(
>                      "chunk CryptMode {:?} does not match index CryptMode {:?}",
> -                    chunk_crypt_mode,
> -                    crypt_mode
> +                    chunk_crypt_mode, crypt_mode
>                  );
>                  errors2.fetch_add(1, Ordering::SeqCst);
>              }
>  
>              if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
>                  corrupt_chunks2.lock().unwrap().insert(digest);
> -                task_log!(worker2, "{}", err);
> +                info!("{}", err);
>                  errors2.fetch_add(1, Ordering::SeqCst);
> -                rename_corrupted_chunk(datastore2.clone(), &digest, &worker2);
> +                rename_corrupted_chunk(datastore2.clone(), &digest);
>              } else {
>                  verified_chunks2.lock().unwrap().insert(digest);
>              }
> @@ -175,11 +164,7 @@ fn verify_index_chunks(
>              .contains(digest)
>          {
>              let digest_str = hex::encode(digest);
> -            task_log!(
> -                verify_worker.worker,
> -                "chunk {} was marked as corrupt",
> -                digest_str
> -            );
> +            info!("chunk {} was marked as corrupt", digest_str);
>              errors.fetch_add(1, Ordering::SeqCst);
>              true
>          } else {
> @@ -218,17 +203,9 @@ fn verify_index_chunks(
>                      .lock()
>                      .unwrap()
>                      .insert(info.digest);
> -                task_log!(
> -                    verify_worker.worker,
> -                    "can't verify chunk, load failed - {}",
> -                    err
> -                );
> +                info!("can't verify chunk, load failed - {}", err);
>                  errors.fetch_add(1, Ordering::SeqCst);
> -                rename_corrupted_chunk(
> -                    verify_worker.datastore.clone(),
> -                    &info.digest,
> -                    &verify_worker.worker,
> -                );
> +                rename_corrupted_chunk(verify_worker.datastore.clone(), &info.digest);
>              }
>              Ok(chunk) => {
>                  let size = info.size();
> @@ -251,15 +228,9 @@ fn verify_index_chunks(
>  
>      let error_count = errors.load(Ordering::SeqCst);
>  
> -    task_log!(
> -        verify_worker.worker,
> +    info!(
>          "  verified {:.2}/{:.2} MiB in {:.2} seconds, speed {:.2}/{:.2} MiB/s ({} errors)",
> -        read_bytes_mib,
> -        decoded_bytes_mib,
> -        elapsed,
> -        read_speed,
> -        decode_speed,
> -        error_count,
> +        read_bytes_mib, decoded_bytes_mib, elapsed, read_speed, decode_speed, error_count,
>      );
>  
>      if errors.load(Ordering::SeqCst) > 0 {
> @@ -329,8 +300,7 @@ pub fn verify_backup_dir(
>      filter: Option<&dyn Fn(&BackupManifest) -> bool>,
>  ) -> Result<bool, Error> {
>      if !backup_dir.full_path().exists() {
> -        task_log!(
> -            verify_worker.worker,
> +        info!(
>              "SKIPPED: verify {}:{} - snapshot does not exist (anymore).",
>              verify_worker.datastore.name(),
>              backup_dir.dir(),
> @@ -348,8 +318,7 @@ pub fn verify_backup_dir(
>              verify_backup_dir_with_lock(verify_worker, backup_dir, upid, filter, snap_lock)
>          }
>          Err(err) => {
> -            task_log!(
> -                verify_worker.worker,
> +            info!(
>                  "SKIPPED: verify {}:{} - could not acquire snapshot lock: {}",
>                  verify_worker.datastore.name(),
>                  backup_dir.dir(),
> @@ -371,8 +340,7 @@ pub fn verify_backup_dir_with_lock(
>      let manifest = match backup_dir.load_manifest() {
>          Ok((manifest, _)) => manifest,
>          Err(err) => {
> -            task_log!(
> -                verify_worker.worker,
> +            info!(
>                  "verify {}:{} - manifest load error: {}",
>                  verify_worker.datastore.name(),
>                  backup_dir.dir(),
> @@ -384,8 +352,7 @@ pub fn verify_backup_dir_with_lock(
>  
>      if let Some(filter) = filter {
>          if !filter(&manifest) {
> -            task_log!(
> -                verify_worker.worker,
> +            info!(
>                  "SKIPPED: verify {}:{} (recently verified)",
>                  verify_worker.datastore.name(),
>                  backup_dir.dir(),
> @@ -394,8 +361,7 @@ pub fn verify_backup_dir_with_lock(
>          }
>      }
>  
> -    task_log!(
> -        verify_worker.worker,
> +    info!(
>          "verify {}:{}",
>          verify_worker.datastore.name(),
>          backup_dir.dir()
> @@ -406,7 +372,7 @@ pub fn verify_backup_dir_with_lock(
>      let mut verify_result = VerifyState::Ok;
>      for info in manifest.files() {
>          let result = proxmox_lang::try_block!({
> -            task_log!(verify_worker.worker, "  check {}", info.filename);
> +            info!("  check {}", info.filename);
>              match archive_type(&info.filename)? {
>                  ArchiveType::FixedIndex => verify_fixed_index(verify_worker, backup_dir, info),
>                  ArchiveType::DynamicIndex => verify_dynamic_index(verify_worker, backup_dir, info),
> @@ -418,8 +384,7 @@ pub fn verify_backup_dir_with_lock(
>          verify_worker.worker.fail_on_shutdown()?;
>  
>          if let Err(err) = result {
> -            task_log!(
> -                verify_worker.worker,
> +            info!(
>                  "verify {}:{}/{} failed: {}",
>                  verify_worker.datastore.name(),
>                  backup_dir.dir(),
> @@ -463,8 +428,7 @@ pub fn verify_backup_group(
>      let mut list = match group.list_backups() {
>          Ok(list) => list,
>          Err(err) => {
> -            task_log!(
> -                verify_worker.worker,
> +            info!(
>                  "verify {}, group {} - unable to list backups: {}",
>                  print_store_and_ns(verify_worker.datastore.name(), group.backup_ns()),
>                  group.group(),
> @@ -475,8 +439,7 @@ pub fn verify_backup_group(
>      };
>  
>      let snapshot_count = list.len();
> -    task_log!(
> -        verify_worker.worker,
> +    info!(
>          "verify group {}:{} ({} snapshots)",
>          verify_worker.datastore.name(),
>          group.group(),
> @@ -494,9 +457,8 @@ pub fn verify_backup_group(
>              ));
>          }
>          progress.done_snapshots = pos as u64 + 1;
> -        task_log!(verify_worker.worker, "percentage done: {}", progress);
> +        info!("percentage done: {}", progress);
>      }
> -
>      Ok(errors)
>  }
>  
> @@ -516,16 +478,11 @@ pub fn verify_all_backups(
>      filter: Option<&dyn Fn(&BackupManifest) -> bool>,
>  ) -> Result<Vec<String>, Error> {
>      let mut errors = Vec::new();
> -    let worker = Arc::clone(&verify_worker.worker);
>  
> -    task_log!(
> -        worker,
> -        "verify datastore {}",
> -        verify_worker.datastore.name()
> -    );
> +    info!("verify datastore {}", verify_worker.datastore.name());
>  
>      let owner_filtered = if let Some(owner) = &owner {
> -        task_log!(worker, "limiting to backups owned by {}", owner);
> +        info!("limiting to backups owned by {}", owner);
>          true
>      } else {
>          false
> @@ -553,7 +510,7 @@ pub fn verify_all_backups(
>                  }
>                  Err(err) => {
>                      // we don't filter by owner, but we want to log the error
> -                    task_log!(worker, "error on iterating groups in ns '{ns}' - {err}");
> +                    info!("error on iterating groups in ns '{ns}' - {err}");
>                      errors.push(err.to_string());
>                      None
>                  }
> @@ -563,7 +520,7 @@ pub fn verify_all_backups(
>              })
>              .collect::<Vec<BackupGroup>>(),
>          Err(err) => {
> -            task_log!(worker, "unable to list backups: {}", err,);
> +            info!("unable to list backups: {}", err,);
>              return Ok(errors);
>          }
>      };
> @@ -571,7 +528,7 @@ pub fn verify_all_backups(
>      list.sort_unstable_by(|a, b| a.group().cmp(b.group()));
>  
>      let group_count = list.len();
> -    task_log!(worker, "found {} groups", group_count);
> +    info!("found {} groups", group_count);
>  
>      let mut progress = StoreProgress::new(group_count as u64);
>  
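The pattern throughout verify.rs — dropping the `worker`/`worker2` handles and calling plain `info!`/`warn!` — works because, per the cover letter, the `FileLogger` now lives in task-local storage and the tracing layer picks it up implicitly. A std-only sketch of that idea (names like `FileLogger`, `LOGGER`, and `log_event` are illustrative stand-ins, not the actual proxmox-log API):

```rust
// Illustrative sketch of a task-local logger, assuming the layer checks
// thread-local state on every event. Not the real proxmox-log types.
use std::cell::RefCell;

struct FileLogger {
    lines: Vec<String>, // the real logger writes to the task log file
}

thread_local! {
    static LOGGER: RefCell<Option<FileLogger>> = RefCell::new(None);
}

// What a tracing layer could do per event: if the current thread/task has a
// logger installed, write there; otherwise fall through (e.g. to syslog).
fn log_event(msg: &str) -> bool {
    LOGGER.with(|l| {
        if let Some(logger) = l.borrow_mut().as_mut() {
            logger.lines.push(msg.to_string());
            true
        } else {
            false
        }
    })
}

fn main() {
    // Outside a worker task: no logger installed, event would go to syslog.
    assert!(!log_event("verify datastore store1"));

    // A worker task installs its logger before running its body.
    LOGGER.with(|l| *l.borrow_mut() = Some(FileLogger { lines: Vec::new() }));
    assert!(log_event("verify datastore store1"));
    assert_eq!(
        LOGGER.with(|l| l.borrow().as_ref().unwrap().lines.len()),
        1
    );
}
```

This is why call sites no longer need a `worker` parameter: the routing decision moves from the macro arguments into the subscriber.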
> diff --git a/src/bin/proxmox-backup-api.rs b/src/bin/proxmox-backup-api.rs
> index d5ce9f3b..1163cbfe 100644
> --- a/src/bin/proxmox-backup-api.rs
> +++ b/src/bin/proxmox-backup-api.rs
> @@ -7,6 +7,7 @@ use http::Response;
>  use hyper::{Body, StatusCode};
>  
>  use proxmox_lang::try_block;
> +use proxmox_log::init_logger;
>  use proxmox_router::RpcEnvironmentType;
>  use proxmox_sys::fs::CreateOptions;
>  
> @@ -15,6 +16,7 @@ use proxmox_rest_server::{daemon, ApiConfig, RestServer};
>  use proxmox_backup::auth_helpers::*;
>  use proxmox_backup::config;
>  use proxmox_backup::server::auth::check_pbs_auth;
> +use tracing_subscriber::filter::LevelFilter;
>  
>  fn main() {
>      pbs_tools::setup_libc_malloc_opts();
> @@ -40,13 +42,7 @@ fn get_index() -> Pin<Box<dyn Future<Output = Response<Body>> + Send>> {
>  }
>  
>  async fn run() -> Result<(), Error> {
> -    if let Err(err) = syslog::init(
> -        syslog::Facility::LOG_DAEMON,
> -        log::LevelFilter::Info,
> -        Some("proxmox-backup-api"),
> -    ) {
> -        bail!("unable to inititialize syslog - {}", err);
> -    }
> +    init_logger("PBS_LOG", LevelFilter::INFO, "proxmox-backup-api")?;
>  
>      config::create_configdir()?;
>  
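With `init_logger("PBS_LOG", LevelFilter::INFO, ...)` replacing the open-coded `syslog::init`, the verbosity presumably comes from the `PBS_LOG` environment variable instead of the old `PROXMOX_DEBUG` check. A minimal sketch of that env-to-level mapping, assuming `PBS_LOG` takes a single level name (the real `init_logger` additionally wires up the syslog/tasklog layer):

```rust
// Sketch of env-driven level selection; `parse_level` is a hypothetical
// helper, not the proxmox-log API.
use std::env;

#[derive(Debug, PartialEq)]
enum Level { Error, Warn, Info, Debug, Trace }

fn parse_level(value: Option<&str>, default: Level) -> Level {
    match value {
        Some("error") => Level::Error,
        Some("warn") => Level::Warn,
        Some("info") => Level::Info,
        Some("debug") => Level::Debug,
        Some("trace") => Level::Trace,
        _ => default, // unset or unrecognized: keep the compiled-in default
    }
}

fn main() {
    assert_eq!(parse_level(None, Level::Info), Level::Info);
    assert_eq!(parse_level(Some("verbose"), Level::Info), Level::Info);
    assert_eq!(parse_level(Some("debug"), Level::Info), Level::Debug);

    // In the daemon this would be fed from the environment:
    let level = parse_level(env::var("PBS_LOG").ok().as_deref(), Level::Info);
    let _ = level;
}
```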
> diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
> index 15444685..d04e5ad0 100644
> --- a/src/bin/proxmox-backup-proxy.rs
> +++ b/src/bin/proxmox-backup-proxy.rs
> @@ -7,6 +7,9 @@ use http::request::Parts;
>  use http::Response;
>  use hyper::header;
>  use hyper::{Body, StatusCode};
> +use proxmox_log::init_logger;
> +use tracing::{info, warn};
> +use tracing_subscriber::filter::LevelFilter;
>  use url::form_urlencoded;
>  
>  use openssl::ssl::SslAcceptor;
> @@ -18,7 +21,6 @@ use proxmox_router::{RpcEnvironment, RpcEnvironmentType};
>  use proxmox_sys::fs::{CreateOptions, FileSystemInformation};
>  use proxmox_sys::linux::procfs::{Loadavg, ProcFsMemInfo, ProcFsNetDev, ProcFsStat};
>  use proxmox_sys::logrotate::LogRotate;
> -use proxmox_sys::{task_log, task_warn};
>  
>  use pbs_datastore::DataStore;
>  
> @@ -181,21 +183,7 @@ async fn get_index_future(env: RestEnvironment, parts: Parts) -> Response<Body>
>  }
>  
>  async fn run() -> Result<(), Error> {
> -    // Note: To debug early connection error use
> -    // PROXMOX_DEBUG=1 ./target/release/proxmox-backup-proxy
> -    let debug = std::env::var("PROXMOX_DEBUG").is_ok();
> -
> -    if let Err(err) = syslog::init(
> -        syslog::Facility::LOG_DAEMON,
> -        if debug {
> -            log::LevelFilter::Debug
> -        } else {
> -            log::LevelFilter::Info
> -        },
> -        Some("proxmox-backup-proxy"),
> -    ) {
> -        bail!("unable to inititialize syslog - {err}");
> -    }
> +    init_logger("PBS_LOG", LevelFilter::INFO, "proxmox-backup-proxy")?;
>  
>      proxmox_backup::auth_helpers::setup_auth_context(false);
>      proxmox_backup::server::notifications::init()?;
> @@ -301,7 +289,7 @@ async fn run() -> Result<(), Error> {
>      })?;
>  
>      let connections = proxmox_rest_server::connection::AcceptBuilder::new()
> -        .debug(debug)
> +        .debug(tracing::enabled!(tracing::Level::DEBUG))
>          .rate_limiter_lookup(Arc::new(lookup_rate_limiter))
>          .tcp_keepalive_time(PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
>  
> @@ -750,7 +738,7 @@ async fn schedule_task_log_rotate() {
>          false,
>          move |worker| {
>              job.start(&worker.upid().to_string())?;
> -            task_log!(worker, "starting task log rotation");
> +            info!("starting task log rotation");
>  
>              let result = try_block!({
>                  let max_size = 512 * 1024 - 1; // an entry has ~ 100b, so > 5000 entries/file
> @@ -775,9 +763,9 @@ async fn schedule_task_log_rotate() {
>                  )?;
>  
>                  if has_rotated {
> -                    task_log!(worker, "task log archive was rotated");
> +                    info!("task log archive was rotated");
>                  } else {
> -                    task_log!(worker, "task log archive was not rotated");
> +                    info!("task log archive was not rotated");
>                  }
>  
>                  let max_size = 32 * 1024 * 1024 - 1;
> @@ -793,9 +781,9 @@ async fn schedule_task_log_rotate() {
>                  if logrotate.rotate(max_size)? {
>                      println!("rotated access log, telling daemons to re-open log file");
>                      proxmox_async::runtime::block_on(command_reopen_access_logfiles())?;
> -                    task_log!(worker, "API access log was rotated");
> +                    info!("API access log was rotated");
>                  } else {
> -                    task_log!(worker, "API access log was not rotated");
> +                    info!("API access log was not rotated");
>                  }
>  
>                  let mut logrotate = LogRotate::new(
> @@ -808,15 +796,15 @@ async fn schedule_task_log_rotate() {
>                  if logrotate.rotate(max_size)? {
>                      println!("rotated auth log, telling daemons to re-open log file");
>                      proxmox_async::runtime::block_on(command_reopen_auth_logfiles())?;
> -                    task_log!(worker, "API authentication log was rotated");
> +                    info!("API authentication log was rotated");
>                  } else {
> -                    task_log!(worker, "API authentication log was not rotated");
> +                    info!("API authentication log was not rotated");
>                  }
>  
>                  if has_rotated {
> -                    task_log!(worker, "cleaning up old task logs");
> -                    if let Err(err) = cleanup_old_tasks(&worker, true) {
> -                        task_warn!(worker, "could not completely cleanup old tasks: {err}");
> +                    info!("cleaning up old task logs");
> +                    if let Err(err) = cleanup_old_tasks(true) {
> +                        warn!("could not completely cleanup old tasks: {err}");
>                      }
>                  }
>  
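Note the `AcceptBuilder::debug(tracing::enabled!(tracing::Level::DEBUG))` change above: instead of threading a `debug: bool` derived from `PROXMOX_DEBUG` through the setup, the code asks the installed subscriber whether DEBUG events would currently be recorded. A std-only stand-in for that max-level check (the constants and `enabled` here are illustrative, not the `tracing` internals):

```rust
// Sketch of the idea behind `tracing::enabled!`: compare the queried level
// against a global max-level filter set up by the logger initialization.
use std::sync::atomic::{AtomicUsize, Ordering};

const WARN: usize = 2;
const INFO: usize = 3;
const DEBUG: usize = 4;

static MAX_LEVEL: AtomicUsize = AtomicUsize::new(INFO);

fn enabled(level: usize) -> bool {
    level <= MAX_LEVEL.load(Ordering::Relaxed)
}

fn main() {
    // With the default INFO filter, debug is disabled...
    assert!(!enabled(DEBUG));
    assert!(enabled(WARN));

    // ...and raising the filter (e.g. via PBS_LOG=debug) enables it.
    MAX_LEVEL.store(DEBUG, Ordering::Relaxed);
    assert!(enabled(DEBUG));
}
```

One consequence of this design: the debug flag is evaluated once at startup from the logger configuration, so there is a single source of truth for verbosity.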
> diff --git a/src/server/gc_job.rs b/src/server/gc_job.rs
> index ff5bdccf..64835028 100644
> --- a/src/server/gc_job.rs
> +++ b/src/server/gc_job.rs
> @@ -1,7 +1,7 @@
>  use anyhow::Error;
>  use std::sync::Arc;
>  
> -use proxmox_sys::task_log;
> +use tracing::info;
>  
>  use pbs_api_types::Authid;
>  use pbs_datastore::DataStore;
> @@ -28,9 +28,9 @@ pub fn do_garbage_collection_job(
>          move |worker| {
>              job.start(&worker.upid().to_string())?;
>  
> -            task_log!(worker, "starting garbage collection on store {store}");
> +            info!("starting garbage collection on store {store}");
>              if let Some(event_str) = schedule {
> -                task_log!(worker, "task triggered by schedule '{event_str}'");
> +                info!("task triggered by schedule '{event_str}'");
>              }
>  
>              let result = datastore.garbage_collection(&*worker, worker.upid());
> diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
> index 2de34973..83bcd7c3 100644
> --- a/src/server/prune_job.rs
> +++ b/src/server/prune_job.rs
> @@ -1,8 +1,7 @@
>  use std::sync::Arc;
>  
>  use anyhow::Error;
> -
> -use proxmox_sys::{task_log, task_warn};
> +use tracing::{info, warn};
>  
>  use pbs_api_types::{
>      print_store_and_ns, Authid, KeepOptions, Operation, PruneJobOptions, MAX_NAMESPACE_DEPTH,
> @@ -16,7 +15,6 @@ use crate::backup::ListAccessibleBackupGroups;
>  use crate::server::jobstate::Job;
>  
>  pub fn prune_datastore(
> -    worker: Arc<WorkerTask>,
>      auth_id: Authid,
>      prune_options: PruneJobOptions,
>      datastore: Arc<DataStore>,
> @@ -31,19 +29,19 @@ pub fn prune_datastore(
>      };
>      let ns = prune_options.ns.clone().unwrap_or_default();
>      let store_ns = print_store_and_ns(store, &ns);
> -    task_log!(worker, "Starting datastore prune on {store_ns}, {depth}");
> +    info!("Starting datastore prune on {store_ns}, {depth}");
>  
>      if dry_run {
> -        task_log!(worker, "(dry test run)");
> +        info!("(dry test run)");
>      }
>  
>      let keep_all = !prune_options.keeps_something();
>  
>      if keep_all {
> -        task_log!(worker, "No prune selection - keeping all files.");
> +        info!("No prune selection - keeping all files.");
>      } else {
>          let rendered_options = cli_prune_options_string(&prune_options);
> -        task_log!(worker, "retention options: {rendered_options}");
> +        info!("retention options: {rendered_options}");
>      }
>  
>      for group in ListAccessibleBackupGroups::new_with_privs(
> @@ -61,8 +59,7 @@ pub fn prune_datastore(
>          let mut prune_info = compute_prune_info(list, &prune_options.keep)?;
>          prune_info.reverse(); // delete older snapshots first
>  
> -        task_log!(
> -            worker,
> +        info!(
>              "Pruning group {ns}:\"{}/{}\"",
>              group.backup_type(),
>              group.backup_id()
> @@ -70,8 +67,7 @@ pub fn prune_datastore(
>  
>          for (info, mark) in prune_info {
>              let keep = keep_all || mark.keep();
> -            task_log!(
> -                worker,
> +            info!(
>                  "{}{} {}/{}/{}",
>                  if dry_run { "would " } else { "" },
>                  mark,
> @@ -82,7 +78,7 @@ pub fn prune_datastore(
>              if !keep && !dry_run {
>                  if let Err(err) = datastore.remove_backup_dir(ns, info.backup_dir.as_ref(), false) {
>                      let path = info.backup_dir.relative_path();
> -                    task_warn!(worker, "failed to remove dir {path:?}: {err}");
> +                    warn!("failed to remove dir {path:?}: {err}");
>                  }
>              }
>          }
> @@ -150,13 +146,13 @@ pub fn do_prune_job(
>          move |worker| {
>              job.start(&worker.upid().to_string())?;
>  
> -            task_log!(worker, "prune job '{}'", job.jobname());
> +            info!("prune job '{}'", job.jobname());
>  
>              if let Some(event_str) = schedule {
> -                task_log!(worker, "task triggered by schedule '{event_str}'");
> +                info!("task triggered by schedule '{event_str}'");
>              }
>  
> -            let result = prune_datastore(worker.clone(), auth_id, prune_options, datastore, false);
> +            let result = prune_datastore(auth_id, prune_options, datastore, false);
>  
>              let status = worker.create_state(&result);
>  
> diff --git a/src/server/pull.rs b/src/server/pull.rs
> index 14744e9c..f5d2b837 100644
> --- a/src/server/pull.rs
> +++ b/src/server/pull.rs
> @@ -10,10 +10,9 @@ use std::time::{Duration, SystemTime};
>  use anyhow::{bail, format_err, Error};
>  use http::StatusCode;
>  use proxmox_human_byte::HumanByte;
> -use proxmox_rest_server::WorkerTask;
>  use proxmox_router::HttpError;
> -use proxmox_sys::{task_log, task_warn};
>  use serde_json::json;
> +use tracing::{info, warn};
>  
>  use pbs_api_types::{
>      print_store_and_ns, Authid, BackupDir, BackupGroup, BackupNamespace, CryptMode, GroupFilter,
> @@ -122,7 +121,6 @@ trait PullSource: Send + Sync {
>      async fn list_namespaces(
>          &self,
>          max_depth: &mut Option<usize>,
> -        worker: &WorkerTask,
>      ) -> Result<Vec<BackupNamespace>, Error>;
>  
>      /// Lists groups within a specific namespace from the source.
> @@ -137,7 +135,6 @@ trait PullSource: Send + Sync {
>          &self,
>          namespace: &BackupNamespace,
>          group: &BackupGroup,
> -        worker: &WorkerTask,
>      ) -> Result<Vec<BackupDir>, Error>;
>      fn get_ns(&self) -> BackupNamespace;
>      fn get_store(&self) -> &str;
> @@ -155,7 +152,6 @@ impl PullSource for RemoteSource {
>      async fn list_namespaces(
>          &self,
>          max_depth: &mut Option<usize>,
> -        worker: &WorkerTask,
>      ) -> Result<Vec<BackupNamespace>, Error> {
>          if self.ns.is_root() && max_depth.map_or(false, |depth| depth == 0) {
>              return Ok(vec![self.ns.clone()]);
> @@ -178,8 +174,8 @@ impl PullSource for RemoteSource {
>                  Some(HttpError { code, message }) => match code {
>                      &StatusCode::NOT_FOUND => {
>                          if self.ns.is_root() && max_depth.is_none() {
> -                            task_warn!(worker, "Could not query remote for namespaces (404) -> temporarily switching to backwards-compat mode");
> -                            task_warn!(worker, "Either make backwards-compat mode explicit (max-depth == 0) or upgrade remote system.");
> +                            warn!("Could not query remote for namespaces (404) -> temporarily switching to backwards-compat mode");
> +                            warn!("Either make backwards-compat mode explicit (max-depth == 0) or upgrade remote system.");
>                              max_depth.replace(0);
>                          } else {
>                              bail!("Remote namespace set/recursive sync requested, but remote does not support namespaces.")
> @@ -238,7 +234,6 @@ impl PullSource for RemoteSource {
>          &self,
>          namespace: &BackupNamespace,
>          group: &BackupGroup,
> -        worker: &WorkerTask,
>      ) -> Result<Vec<BackupDir>, Error> {
>          let path = format!("api2/json/admin/datastore/{}/snapshots", self.repo.store());
>  
> @@ -261,11 +256,7 @@ impl PullSource for RemoteSource {
>                  let snapshot = item.backup;
>                  // in-progress backups can't be synced
>                  if item.size.is_none() {
> -                    task_log!(
> -                        worker,
> -                        "skipping snapshot {} - in-progress backup",
> -                        snapshot
> -                    );
> +                    info!("skipping snapshot {} - in-progress backup", snapshot);
>                      return None;
>                  }
>  
> @@ -301,7 +292,6 @@ impl PullSource for LocalSource {
>      async fn list_namespaces(
>          &self,
>          max_depth: &mut Option<usize>,
> -        _worker: &WorkerTask,
>      ) -> Result<Vec<BackupNamespace>, Error> {
>          ListNamespacesRecursive::new_max_depth(
>              self.store.clone(),
> @@ -333,7 +323,6 @@ impl PullSource for LocalSource {
>          &self,
>          namespace: &BackupNamespace,
>          group: &BackupGroup,
> -        _worker: &WorkerTask,
>      ) -> Result<Vec<BackupDir>, Error> {
>          Ok(self
>              .store
> @@ -381,19 +370,10 @@ trait PullReader: Send + Sync {
>      /// Asynchronously loads a file from the source into a local file.
>      /// `filename` is the name of the file to load from the source.
>      /// `into` is the path of the local file to load the source file into.
> -    async fn load_file_into(
> -        &self,
> -        filename: &str,
> -        into: &Path,
> -        worker: &WorkerTask,
> -    ) -> Result<Option<DataBlob>, Error>;
> +    async fn load_file_into(&self, filename: &str, into: &Path) -> Result<Option<DataBlob>, Error>;
>  
>      /// Tries to download the client log from the source and save it into a local file.
> -    async fn try_download_client_log(
> -        &self,
> -        to_path: &Path,
> -        worker: &WorkerTask,
> -    ) -> Result<(), Error>;
> +    async fn try_download_client_log(&self, to_path: &Path) -> Result<(), Error>;
>  
>      fn skip_chunk_sync(&self, target_store_name: &str) -> bool;
>  }
> @@ -409,12 +389,7 @@ impl PullReader for RemoteReader {
>          ))
>      }
>  
> -    async fn load_file_into(
> -        &self,
> -        filename: &str,
> -        into: &Path,
> -        worker: &WorkerTask,
> -    ) -> Result<Option<DataBlob>, Error> {
> +    async fn load_file_into(&self, filename: &str, into: &Path) -> Result<Option<DataBlob>, Error> {
>          let mut tmp_file = std::fs::OpenOptions::new()
>              .write(true)
>              .create(true)
> @@ -426,8 +401,7 @@ impl PullReader for RemoteReader {
>              match err.downcast_ref::<HttpError>() {
>                  Some(HttpError { code, message }) => match *code {
>                      StatusCode::NOT_FOUND => {
> -                        task_log!(
> -                            worker,
> +                        info!(
>                              "skipping snapshot {} - vanished since start of sync",
>                              &self.dir,
>                          );
> @@ -446,11 +420,7 @@ impl PullReader for RemoteReader {
>          Ok(DataBlob::load_from_reader(&mut tmp_file).ok())
>      }
>  
> -    async fn try_download_client_log(
> -        &self,
> -        to_path: &Path,
> -        worker: &WorkerTask,
> -    ) -> Result<(), Error> {
> +    async fn try_download_client_log(&self, to_path: &Path) -> Result<(), Error> {
>          let mut tmp_path = to_path.to_owned();
>          tmp_path.set_extension("tmp");
>  
> @@ -469,7 +439,7 @@ impl PullReader for RemoteReader {
>              if let Err(err) = std::fs::rename(&tmp_path, to_path) {
>                  bail!("Atomic rename file {:?} failed - {}", to_path, err);
>              }
> -            task_log!(worker, "got backup log file {:?}", CLIENT_LOG_BLOB_NAME);
> +            info!("got backup log file {:?}", CLIENT_LOG_BLOB_NAME);
>          }
>  
>          Ok(())
> @@ -490,12 +460,7 @@ impl PullReader for LocalReader {
>          ))
>      }
>  
> -    async fn load_file_into(
> -        &self,
> -        filename: &str,
> -        into: &Path,
> -        _worker: &WorkerTask,
> -    ) -> Result<Option<DataBlob>, Error> {
> +    async fn load_file_into(&self, filename: &str, into: &Path) -> Result<Option<DataBlob>, Error> {
>          let mut tmp_file = std::fs::OpenOptions::new()
>              .write(true)
>              .create(true)
> @@ -509,11 +474,7 @@ impl PullReader for LocalReader {
>          Ok(DataBlob::load_from_reader(&mut tmp_file).ok())
>      }
>  
> -    async fn try_download_client_log(
> -        &self,
> -        _to_path: &Path,
> -        _worker: &WorkerTask,
> -    ) -> Result<(), Error> {
> +    async fn try_download_client_log(&self, _to_path: &Path) -> Result<(), Error> {
>          Ok(())
>      }
>  
> @@ -603,7 +564,6 @@ impl PullParameters {
>  }
>  
>  async fn pull_index_chunks<I: IndexFile>(
> -    worker: &WorkerTask,
>      chunk_reader: Arc<dyn AsyncReadChunk>,
>      target: Arc<DataStore>,
>      index: I,
> @@ -658,10 +618,10 @@ async fn pull_index_chunks<I: IndexFile>(
>                      target.cond_touch_chunk(&info.digest, false)
>                  })?;
>                  if chunk_exists {
> -                    //task_log!(worker, "chunk {} exists {}", pos, hex::encode(digest));
> +                    //info!("chunk {} exists {}", pos, hex::encode(digest));
>                      return Ok::<_, Error>(());
>                  }
> -                //task_log!(worker, "sync {} chunk {}", pos, hex::encode(digest));
> +                //info!("sync {} chunk {}", pos, hex::encode(digest));
>                  let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
>                  let raw_size = chunk.raw_size() as usize;
>  
> @@ -689,8 +649,7 @@ async fn pull_index_chunks<I: IndexFile>(
>      let bytes = bytes.load(Ordering::SeqCst);
>      let chunk_count = chunk_count.load(Ordering::SeqCst);
>  
> -    task_log!(
> -        worker,
> +    info!(
>          "downloaded {} ({}/s)",
>          HumanByte::from(bytes),
>          HumanByte::new_binary(bytes as f64 / elapsed.as_secs_f64()),
> @@ -730,7 +689,6 @@ fn verify_archive(info: &FileInfo, csum: &[u8; 32], size: u64) -> Result<(), Err
>  /// - if archive is an index, pull referenced chunks
>  /// - Rename tmp file into real path
>  async fn pull_single_archive<'a>(
> -    worker: &'a WorkerTask,
>      reader: Arc<dyn PullReader + 'a>,
>      snapshot: &'a pbs_datastore::BackupDir,
>      archive_info: &'a FileInfo,
> @@ -745,11 +703,9 @@ async fn pull_single_archive<'a>(
>  
>      let mut pull_stats = PullStats::default();
>  
> -    task_log!(worker, "sync archive {}", archive_name);
> +    info!("sync archive {}", archive_name);
>  
> -    reader
> -        .load_file_into(archive_name, &tmp_path, worker)
> -        .await?;
> +    reader.load_file_into(archive_name, &tmp_path).await?;
>  
>      let mut tmpfile = std::fs::OpenOptions::new().read(true).open(&tmp_path)?;
>  
> @@ -762,10 +718,9 @@ async fn pull_single_archive<'a>(
>              verify_archive(archive_info, &csum, size)?;
>  
>              if reader.skip_chunk_sync(snapshot.datastore().name()) {
> -                task_log!(worker, "skipping chunk sync for same datastore");
> +                info!("skipping chunk sync for same datastore");
>              } else {
>                  let stats = pull_index_chunks(
> -                    worker,
>                      reader.chunk_reader(archive_info.crypt_mode),
>                      snapshot.datastore().clone(),
>                      index,
> @@ -783,10 +738,9 @@ async fn pull_single_archive<'a>(
>              verify_archive(archive_info, &csum, size)?;
>  
>              if reader.skip_chunk_sync(snapshot.datastore().name()) {
> -                task_log!(worker, "skipping chunk sync for same datastore");
> +                info!("skipping chunk sync for same datastore");
>              } else {
>                  let stats = pull_index_chunks(
> -                    worker,
>                      reader.chunk_reader(archive_info.crypt_mode),
>                      snapshot.datastore().clone(),
>                      index,
> @@ -818,7 +772,6 @@ async fn pull_single_archive<'a>(
>  /// -- if not, pull it from the remote
>  /// - Download log if not already existing
>  async fn pull_snapshot<'a>(
> -    worker: &'a WorkerTask,
>      reader: Arc<dyn PullReader + 'a>,
>      snapshot: &'a pbs_datastore::BackupDir,
>      downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
> @@ -834,7 +787,7 @@ async fn pull_snapshot<'a>(
>      tmp_manifest_name.set_extension("tmp");
>      let tmp_manifest_blob;
>      if let Some(data) = reader
> -        .load_file_into(MANIFEST_BLOB_NAME, &tmp_manifest_name, worker)
> +        .load_file_into(MANIFEST_BLOB_NAME, &tmp_manifest_name)
>          .await?
>      {
>          tmp_manifest_blob = data;
> @@ -857,11 +810,9 @@ async fn pull_snapshot<'a>(
>  
>          if manifest_blob.raw_data() == tmp_manifest_blob.raw_data() {
>              if !client_log_name.exists() {
> -                reader
> -                    .try_download_client_log(&client_log_name, worker)
> -                    .await?;
> +                reader.try_download_client_log(&client_log_name).await?;
>              };
> -            task_log!(worker, "no data changes");
> +            info!("no data changes");
>              let _ = std::fs::remove_file(&tmp_manifest_name);
>              return Ok(pull_stats); // nothing changed
>          }
> @@ -881,7 +832,7 @@ async fn pull_snapshot<'a>(
>                      match manifest.verify_file(&item.filename, &csum, size) {
>                          Ok(_) => continue,
>                          Err(err) => {
> -                            task_log!(worker, "detected changed file {:?} - {}", path, err);
> +                            info!("detected changed file {:?} - {}", path, err);
>                          }
>                      }
>                  }
> @@ -891,7 +842,7 @@ async fn pull_snapshot<'a>(
>                      match manifest.verify_file(&item.filename, &csum, size) {
>                          Ok(_) => continue,
>                          Err(err) => {
> -                            task_log!(worker, "detected changed file {:?} - {}", path, err);
> +                            info!("detected changed file {:?} - {}", path, err);
>                          }
>                      }
>                  }
> @@ -901,21 +852,15 @@ async fn pull_snapshot<'a>(
>                      match manifest.verify_file(&item.filename, &csum, size) {
>                          Ok(_) => continue,
>                          Err(err) => {
> -                            task_log!(worker, "detected changed file {:?} - {}", path, err);
> +                            info!("detected changed file {:?} - {}", path, err);
>                          }
>                      }
>                  }
>              }
>          }
>  
> -        let stats = pull_single_archive(
> -            worker,
> -            reader.clone(),
> -            snapshot,
> -            item,
> -            downloaded_chunks.clone(),
> -        )
> -        .await?;
> +        let stats =
> +            pull_single_archive(reader.clone(), snapshot, item, downloaded_chunks.clone()).await?;
>          pull_stats.add(stats);
>      }
>  
> @@ -924,9 +869,7 @@ async fn pull_snapshot<'a>(
>      }
>  
>      if !client_log_name.exists() {
> -        reader
> -            .try_download_client_log(&client_log_name, worker)
> -            .await?;
> +        reader.try_download_client_log(&client_log_name).await?;
>      };
>      snapshot
>          .cleanup_unreferenced_files(&manifest)
> @@ -940,7 +883,6 @@ async fn pull_snapshot<'a>(
>  /// The `reader` is configured to read from the source backup directory, while the
>  /// `snapshot` is pointing to the local datastore and target namespace.
>  async fn pull_snapshot_from<'a>(
> -    worker: &'a WorkerTask,
>      reader: Arc<dyn PullReader + 'a>,
>      snapshot: &'a pbs_datastore::BackupDir,
>      downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
> @@ -950,27 +892,27 @@ async fn pull_snapshot_from<'a>(
>          .create_locked_backup_dir(snapshot.backup_ns(), snapshot.as_ref())?;
>  
>      let pull_stats = if is_new {
> -        task_log!(worker, "sync snapshot {}", snapshot.dir());
> +        info!("sync snapshot {}", snapshot.dir());
>  
> -        match pull_snapshot(worker, reader, snapshot, downloaded_chunks).await {
> +        match pull_snapshot(reader, snapshot, downloaded_chunks).await {
>              Err(err) => {
>                  if let Err(cleanup_err) = snapshot.datastore().remove_backup_dir(
>                      snapshot.backup_ns(),
>                      snapshot.as_ref(),
>                      true,
>                  ) {
> -                    task_log!(worker, "cleanup error - {}", cleanup_err);
> +                    info!("cleanup error - {}", cleanup_err);
>                  }
>                  return Err(err);
>              }
>              Ok(pull_stats) => {
> -                task_log!(worker, "sync snapshot {} done", snapshot.dir());
> +                info!("sync snapshot {} done", snapshot.dir());
>                  pull_stats
>              }
>          }
>      } else {
> -        task_log!(worker, "re-sync snapshot {}", snapshot.dir());
> -        pull_snapshot(worker, reader, snapshot, downloaded_chunks).await?
> +        info!("re-sync snapshot {}", snapshot.dir());
> +        pull_snapshot(reader, snapshot, downloaded_chunks).await?
>      };
>  
>      Ok(pull_stats)
> @@ -1073,7 +1015,6 @@ impl std::fmt::Display for SkipInfo {
>  /// - remote snapshot access is checked by remote (twice: query and opening the backup reader)
>  /// - local group owner is already checked by pull_store
>  async fn pull_group(
> -    worker: &WorkerTask,
>      params: &PullParameters,
>      source_namespace: &BackupNamespace,
>      group: &BackupGroup,
> @@ -1084,7 +1025,7 @@ async fn pull_group(
>  
>      let mut raw_list: Vec<BackupDir> = params
>          .source
> -        .list_backup_dirs(source_namespace, group, worker)
> +        .list_backup_dirs(source_namespace, group)
>          .await?;
>      raw_list.sort_unstable_by(|a, b| a.time.cmp(&b.time));
>  
> @@ -1113,7 +1054,7 @@ async fn pull_group(
>                  already_synced_skip_info.update(dir.time);
>                  return false;
>              } else if already_synced_skip_info.count > 0 {
> -                task_log!(worker, "{}", already_synced_skip_info);
> +                info!("{}", already_synced_skip_info);
>                  already_synced_skip_info.reset();
>                  return true;
>              }
> @@ -1122,7 +1063,7 @@ async fn pull_group(
>                  transfer_last_skip_info.update(dir.time);
>                  return false;
>              } else if transfer_last_skip_info.count > 0 {
> -                task_log!(worker, "{}", transfer_last_skip_info);
> +                info!("{}", transfer_last_skip_info);
>                  transfer_last_skip_info.reset();
>              }
>              true
> @@ -1147,11 +1088,10 @@ async fn pull_group(
>              .source
>              .reader(source_namespace, &from_snapshot)
>              .await?;
> -        let result =
> -            pull_snapshot_from(worker, reader, &to_snapshot, downloaded_chunks.clone()).await;
> +        let result = pull_snapshot_from(reader, &to_snapshot, downloaded_chunks.clone()).await;
>  
>          progress.done_snapshots = pos as u64 + 1;
> -        task_log!(worker, "percentage done: {}", progress);
> +        info!("percentage done: {}", progress);
>  
>          let stats = result?; // stop on error
>          pull_stats.add(stats);
> @@ -1169,14 +1109,13 @@ async fn pull_group(
>                  continue;
>              }
>              if snapshot.is_protected() {
> -                task_log!(
> -                    worker,
> +                info!(
>                      "don't delete vanished snapshot {} (protected)",
>                      snapshot.dir()
>                  );
>                  continue;
>              }
> -            task_log!(worker, "delete vanished snapshot {}", snapshot.dir());
> +            info!("delete vanished snapshot {}", snapshot.dir());
>              params
>                  .target
>                  .store
> @@ -1235,7 +1174,6 @@ fn check_and_remove_ns(params: &PullParameters, local_ns: &BackupNamespace) -> R
>  }
>  
>  fn check_and_remove_vanished_ns(
> -    worker: &WorkerTask,
>      params: &PullParameters,
>      synced_ns: HashSet<BackupNamespace>,
>  ) -> Result<(bool, RemovedVanishedStats), Error> {
> @@ -1276,16 +1214,15 @@ fn check_and_remove_vanished_ns(
>          }
>          match check_and_remove_ns(params, &local_ns) {
>              Ok(true) => {
> -                task_log!(worker, "Removed namespace {local_ns}");
> +                info!("Removed namespace {local_ns}");
>                  removed_stats.namespaces += 1;
>              }
> -            Ok(false) => task_log!(
> -                worker,
> +            Ok(false) => info!(
>                  "Did not remove namespace {} - protected snapshots remain",
>                  local_ns
>              ),
>              Err(err) => {
> -                task_log!(worker, "Failed to remove namespace {} - {}", local_ns, err);
> +                info!("Failed to remove namespace {} - {}", local_ns, err);
>                  errors = true;
>              }
>          }
> @@ -1311,10 +1248,7 @@ fn check_and_remove_vanished_ns(
>  /// - remote namespaces are filtered by remote
>  /// - creation and removal of sub-NS checked here
>  /// - access to sub-NS checked here
> -pub(crate) async fn pull_store(
> -    worker: &WorkerTask,
> -    mut params: PullParameters,
> -) -> Result<PullStats, Error> {
> +pub(crate) async fn pull_store(mut params: PullParameters) -> Result<PullStats, Error> {
>      // explicit create shared lock to prevent GC on newly created chunks
>      let _shared_store_lock = params.target.store.try_shared_chunk_store_lock()?;
>      let mut errors = false;
> @@ -1323,10 +1257,7 @@ pub(crate) async fn pull_store(
>      let mut namespaces = if params.source.get_ns().is_root() && old_max_depth == Some(0) {
>          vec![params.source.get_ns()] // backwards compat - don't query remote namespaces!
>      } else {
> -        params
> -            .source
> -            .list_namespaces(&mut params.max_depth, worker)
> -            .await?
> +        params.source.list_namespaces(&mut params.max_depth).await?
>      };
>  
>      let ns_layers_to_be_pulled = namespaces
> @@ -1358,33 +1289,28 @@ pub(crate) async fn pull_store(
>          let target_ns = namespace.map_prefix(&params.source.get_ns(), &params.target.ns)?;
>          let target_store_ns_str = print_store_and_ns(params.target.store.name(), &target_ns);
>  
> -        task_log!(worker, "----");
> -        task_log!(
> -            worker,
> +        info!("----");
> +        info!(
>              "Syncing {} into {}",
> -            source_store_ns_str,
> -            target_store_ns_str
> +            source_store_ns_str, target_store_ns_str
>          );
>  
>          synced_ns.insert(target_ns.clone());
>  
>          match check_and_create_ns(&params, &target_ns) {
> -            Ok(true) => task_log!(worker, "Created namespace {}", target_ns),
> +            Ok(true) => info!("Created namespace {}", target_ns),
>              Ok(false) => {}
>              Err(err) => {
> -                task_log!(
> -                    worker,
> +                info!(
>                      "Cannot sync {} into {} - {}",
> -                    source_store_ns_str,
> -                    target_store_ns_str,
> -                    err,
> +                    source_store_ns_str, target_store_ns_str, err,
>                  );
>                  errors = true;
>                  continue;
>              }
>          }
>  
> -        match pull_ns(worker, &namespace, &mut params).await {
> +        match pull_ns(&namespace, &mut params).await {
>              Ok((ns_progress, ns_pull_stats, ns_errors)) => {
>                  errors |= ns_errors;
>  
> @@ -1393,29 +1319,24 @@ pub(crate) async fn pull_store(
>                  if params.max_depth != Some(0) {
>                      groups += ns_progress.done_groups;
>                      snapshots += ns_progress.done_snapshots;
> -                    task_log!(
> -                        worker,
> +                    info!(
>                          "Finished syncing namespace {}, current progress: {} groups, {} snapshots",
> -                        namespace,
> -                        groups,
> -                        snapshots,
> +                        namespace, groups, snapshots,
>                      );
>                  }
>              }
>              Err(err) => {
>                  errors = true;
> -                task_log!(
> -                    worker,
> +                info!(
>                      "Encountered errors while syncing namespace {} - {}",
> -                    &namespace,
> -                    err,
> +                    &namespace, err,
>                  );
>              }
>          };
>      }
>  
>      if params.remove_vanished {
> -        let (has_errors, stats) = check_and_remove_vanished_ns(worker, &params, synced_ns)?;
> +        let (has_errors, stats) = check_and_remove_vanished_ns(&params, synced_ns)?;
>          errors |= has_errors;
>          pull_stats.add(PullStats::from(stats));
>      }
> @@ -1440,7 +1361,6 @@ pub(crate) async fn pull_store(
>  /// - remote namespaces are filtered by remote
>  /// - owner check for vanished groups done here
>  pub(crate) async fn pull_ns(
> -    worker: &WorkerTask,
>      namespace: &BackupNamespace,
>      params: &mut PullParameters,
>  ) -> Result<(StoreProgress, PullStats, bool), Error> {
> @@ -1460,8 +1380,7 @@ pub(crate) async fn pull_ns(
>          .into_iter()
>          .filter(|group| group.apply_filters(&params.group_filter))
>          .collect();
> -    task_log!(
> -        worker,
> +    info!(
>          "found {} groups to sync (out of {} total)",
>          list.len(),
>          unfiltered_count
> @@ -1492,15 +1411,10 @@ pub(crate) async fn pull_ns(
>              {
>                  Ok(result) => result,
>                  Err(err) => {
> -                    task_log!(
> -                        worker,
> -                        "sync group {} failed - group lock failed: {}",
> -                        &group,
> -                        err
> -                    );
> +                    info!("sync group {} failed - group lock failed: {}", &group, err);
>                      errors = true;
>                      // do not stop here, instead continue
> -                    task_log!(worker, "create_locked_backup_group failed");
> +                    info!("create_locked_backup_group failed");
>                      continue;
>                  }
>              };
> @@ -1508,19 +1422,16 @@ pub(crate) async fn pull_ns(
>          // permission check
>          if params.owner != owner {
>              // only the owner is allowed to create additional snapshots
> -            task_log!(
> -                worker,
> +            info!(
>                  "sync group {} failed - owner check failed ({} != {})",
> -                &group,
> -                params.owner,
> -                owner
> +                &group, params.owner, owner
>              );
>              errors = true; // do not stop here, instead continue
>          } else {
> -            match pull_group(worker, params, namespace, &group, &mut progress).await {
> +            match pull_group(params, namespace, &group, &mut progress).await {
>                  Ok(stats) => pull_stats.add(stats),
>                  Err(err) => {
> -                    task_log!(worker, "sync group {} failed - {}", &group, err,);
> +                    info!("sync group {} failed - {}", &group, err,);
>                      errors = true; // do not stop here, instead continue
>                  }
>              }
> @@ -1542,7 +1453,7 @@ pub(crate) async fn pull_ns(
>                  if !local_group.apply_filters(&params.group_filter) {
>                      continue;
>                  }
> -                task_log!(worker, "delete vanished group '{local_group}'",);
> +                info!("delete vanished group '{local_group}'",);
>                  let delete_stats_result = params
>                      .target
>                      .store
> @@ -1551,10 +1462,7 @@ pub(crate) async fn pull_ns(
>                  match delete_stats_result {
>                      Ok(stats) => {
>                          if !stats.all_removed() {
> -                            task_log!(
> -                                worker,
> -                                "kept some protected snapshots of group '{local_group}'",
> -                            );
> +                            info!("kept some protected snapshots of group '{local_group}'",);
>                              pull_stats.add(PullStats::from(RemovedVanishedStats {
>                                  snapshots: stats.removed_snapshots(),
>                                  groups: 0,
> @@ -1569,7 +1477,7 @@ pub(crate) async fn pull_ns(
>                          }
>                      }
>                      Err(err) => {
> -                        task_log!(worker, "{}", err);
> +                        info!("{}", err);
>                          errors = true;
>                      }
>                  }
> @@ -1577,7 +1485,7 @@ pub(crate) async fn pull_ns(
>              Ok(())
>          });
>          if let Err(err) = result {
> -            task_log!(worker, "error during cleanup: {}", err);
> +            info!("error during cleanup: {}", err);
>              errors = true;
>          };
>      }
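The mechanical change throughout this file — dropping the `worker` parameter and turning `task_log!(worker, …)` into `info!(…)` — works because, per the cover letter, the worker's `FileLogger` now lives in task-local storage and the single tracing layer routes each event there when present, falling back to syslog otherwise. A minimal std-only sketch of that routing idea (names like `TASK_LOG` and `log_info` are illustrative stand-ins, not the actual proxmox-log API):

```rust
use std::cell::RefCell;

// Stand-in for the task-local FileLogger that proxmox-log installs per
// worker task/thread (hypothetical name, real storage is task-local).
thread_local! {
    static TASK_LOG: RefCell<Option<Vec<String>>> = RefCell::new(None);
}

// What the layer conceptually does per event: write to the task log if one
// is registered for the current task, otherwise fall through to syslog.
fn log_info(msg: &str) -> &'static str {
    TASK_LOG.with(|slot| {
        if let Some(buf) = slot.borrow_mut().as_mut() {
            buf.push(msg.to_string());
            "tasklog"
        } else {
            "syslog"
        }
    })
}

fn main() {
    // Outside any worker task: events end up in syslog.
    assert_eq!(log_info("daemon startup"), "syslog");

    // A worker task installs its logger first, then plain `info!`-style
    // calls need no `worker` handle to reach the task log.
    TASK_LOG.with(|slot| *slot.borrow_mut() = Some(Vec::new()));
    assert_eq!(log_info("sync snapshot done"), "tasklog");

    let entries = TASK_LOG.with(|slot| slot.borrow().as_ref().unwrap().len());
    assert_eq!(entries, 1);
    println!("ok");
}
```

This is only to show why the call sites below no longer need `worker`; the real implementation uses a tracing layer and a span named `worker_task`, as described in the cover letter.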
> diff --git a/src/server/realm_sync_job.rs b/src/server/realm_sync_job.rs
> index 31412be8..5f3832ab 100644
> --- a/src/server/realm_sync_job.rs
> +++ b/src/server/realm_sync_job.rs
> @@ -1,13 +1,14 @@
>  use anyhow::{bail, format_err, Context, Error};
> +use tracing::{info, warn};
> +
>  use pbs_config::{acl::AclTree, token_shadow, BackupLockGuard};
>  use proxmox_lang::try_block;
>  use proxmox_ldap::{Config, Connection, SearchParameters, SearchResult};
>  use proxmox_rest_server::WorkerTask;
>  use proxmox_schema::{ApiType, Schema};
>  use proxmox_section_config::SectionConfigData;
> -use proxmox_sys::{task_log, task_warn};
>  
> -use std::{collections::HashSet, sync::Arc};
> +use std::collections::HashSet;
>  
>  use pbs_api_types::{
>      AdRealmConfig, ApiToken, Authid, LdapRealmConfig, Realm, RealmType, RemoveVanished,
> @@ -39,7 +40,7 @@ pub fn do_realm_sync_job(
>          move |worker| {
>              job.start(&worker.upid().to_string()).unwrap();
>  
> -            task_log!(worker, "starting realm sync for {}", realm.as_str());
> +            info!("starting realm sync for {}", realm.as_str());
>  
>              let override_settings = GeneralSyncSettingsOverride {
>                  remove_vanished,
> @@ -49,12 +50,12 @@ pub fn do_realm_sync_job(
>              async move {
>                  match realm_type {
>                      RealmType::Ldap => {
> -                        LdapRealmSyncJob::new(worker, realm, &override_settings, dry_run)?
> +                        LdapRealmSyncJob::new(realm, &override_settings, dry_run)?
>                              .sync()
>                              .await
>                      }
>                      RealmType::Ad => {
> -                        AdRealmSyncJob::new(worker, realm, &override_settings, dry_run)?
> +                        AdRealmSyncJob::new(realm, &override_settings, dry_run)?
>                              .sync()
>                              .await
>                      }
> @@ -73,7 +74,6 @@ struct AdRealmSyncJob(LdapRealmSyncJob);
>  
>  impl AdRealmSyncJob {
>      fn new(
> -        worker: Arc<WorkerTask>,
>          realm: Realm,
>          override_settings: &GeneralSyncSettingsOverride,
>          dry_run: bool,
> @@ -98,7 +98,6 @@ impl AdRealmSyncJob {
>          let ldap_config = auth::AdAuthenticator::api_type_to_config(&config)?;
>  
>          Ok(Self(LdapRealmSyncJob {
> -            worker,
>              realm,
>              general_sync_settings: sync_settings,
>              ldap_sync_settings: sync_attributes,
> @@ -114,7 +113,6 @@ impl AdRealmSyncJob {
>  
>  /// Implementation for syncing LDAP realms
>  struct LdapRealmSyncJob {
> -    worker: Arc<WorkerTask>,
>      realm: Realm,
>      general_sync_settings: GeneralSyncSettings,
>      ldap_sync_settings: LdapSyncSettings,
> @@ -125,7 +123,6 @@ struct LdapRealmSyncJob {
>  impl LdapRealmSyncJob {
>      /// Create new LdapRealmSyncJob
>      fn new(
> -        worker: Arc<WorkerTask>,
>          realm: Realm,
>          override_settings: &GeneralSyncSettingsOverride,
>          dry_run: bool,
> @@ -150,7 +147,6 @@ impl LdapRealmSyncJob {
>          let ldap_config = auth::LdapAuthenticator::api_type_to_config(&config)?;
>  
>          Ok(Self {
> -            worker,
>              realm,
>              general_sync_settings: sync_settings,
>              ldap_sync_settings: sync_attributes,
> @@ -162,10 +158,7 @@ impl LdapRealmSyncJob {
>      /// Perform realm synchronization
>      async fn sync(&self) -> Result<(), Error> {
>          if self.dry_run {
> -            task_log!(
> -                self.worker,
> -                "this is a DRY RUN - changes will not be persisted"
> -            );
> +            info!("this is a DRY RUN - changes will not be persisted");
>          }
>  
>          let ldap = Connection::new(self.ldap_config.clone());
> @@ -247,7 +240,7 @@ impl LdapRealmSyncJob {
>                  anyhow::Ok(())
>              });
>              if let Err(e) = result {
> -                task_log!(self.worker, "could not create/update user: {e}");
> +                info!("could not create/update user: {e}");
>              }
>          }
>  
> @@ -266,18 +259,10 @@ impl LdapRealmSyncJob {
>  
>          if let Some(existing_user) = existing_user {
>              if existing_user != new_or_updated_user {
> -                task_log!(
> -                    self.worker,
> -                    "updating user {}",
> -                    new_or_updated_user.userid.as_str()
> -                );
> +                info!("updating user {}", new_or_updated_user.userid.as_str());
>              }
>          } else {
> -            task_log!(
> -                self.worker,
> -                "creating user {}",
> -                new_or_updated_user.userid.as_str()
> -            );
> +            info!("creating user {}", new_or_updated_user.userid.as_str());
>          }
>  
>          user_config.set_data(
> @@ -299,10 +284,7 @@ impl LdapRealmSyncJob {
>              let schema = schema.unwrap_string_schema();
>  
>              if let Err(e) = schema.check_constraints(value) {
> -                task_warn!(
> -                    self.worker,
> -                    "{userid}: ignoring attribute `{attribute}`: {e}"
> -                );
> +                warn!("{userid}: ignoring attribute `{attribute}`: {e}");
>  
>                  None
>              } else {
> @@ -381,7 +363,7 @@ impl LdapRealmSyncJob {
>          to_delete: &[Userid],
>      ) -> Result<(), Error> {
>          for userid in to_delete {
> -            task_log!(self.worker, "deleting user {}", userid.as_str());
> +            info!("deleting user {}", userid.as_str());
>  
>              // Delete the user
>              user_config.sections.remove(userid.as_str());
> @@ -408,7 +390,7 @@ impl LdapRealmSyncJob {
>  
>                      if !self.dry_run {
>                          if let Err(e) = token_shadow::delete_secret(&tokenid) {
> -                            task_warn!(self.worker, "could not delete token for user {userid}: {e}",)
> +                            warn!("could not delete token for user {userid}: {e}",)
>                          }
>                      }
>  
> diff --git a/src/server/verify_job.rs b/src/server/verify_job.rs
> index f3cf2277..3241e304 100644
> --- a/src/server/verify_job.rs
> +++ b/src/server/verify_job.rs
> @@ -1,9 +1,9 @@
>  use anyhow::{format_err, Error};
> +use tracing::info;
>  
>  use pbs_api_types::{Authid, Operation, VerificationJobConfig};
>  use pbs_datastore::DataStore;
>  use proxmox_rest_server::WorkerTask;
> -use proxmox_sys::task_log;
>  
>  use crate::{
>      backup::{verify_all_backups, verify_filter},
> @@ -34,9 +34,9 @@ pub fn do_verification_job(
>          move |worker| {
>              job.start(&worker.upid().to_string())?;
>  
> -            task_log!(worker, "Starting datastore verify job '{}'", job_id);
> +            info!("Starting datastore verify job '{}'", job_id);
>              if let Some(event_str) = schedule {
> -                task_log!(worker, "task triggered by schedule '{}'", event_str);
> +                info!("task triggered by schedule '{}'", event_str);
>              }
>  
>              let ns = match verification_job.ns {
> @@ -58,9 +58,9 @@ pub fn do_verification_job(
>              let job_result = match result {
>                  Ok(ref failed_dirs) if failed_dirs.is_empty() => Ok(()),
>                  Ok(ref failed_dirs) => {
> -                    task_log!(worker, "Failed to verify the following snapshots/groups:");
> +                    info!("Failed to verify the following snapshots/groups:");
>                      for dir in failed_dirs {
> -                        task_log!(worker, "\t{}", dir);
> +                        info!("\t{}", dir);
>                      }
>  
>                      Err(format_err!(
> diff --git a/src/tape/drive/mod.rs b/src/tape/drive/mod.rs
> index b912b234..d36e00a8 100644
> --- a/src/tape/drive/mod.rs
> +++ b/src/tape/drive/mod.rs
> @@ -12,6 +12,7 @@ use nix::fcntl::OFlag;
>  use nix::sys::stat::Mode;
>  use serde::Deserialize;
>  use serde_json::Value;
> +use tracing::info;
>  
>  use proxmox_sys::fs::{
>      atomic_open_or_create_file, file_read_optional_string, lock_file, replace_file, CreateOptions,
> @@ -19,7 +20,7 @@ use proxmox_sys::fs::{
>  
>  use proxmox_io::ReadExt;
>  use proxmox_section_config::SectionConfigData;
> -use proxmox_sys::{task_log, WorkerTaskContext};
> +use proxmox_sys::WorkerTaskContext;
>  use proxmox_uuid::Uuid;
>  
>  use pbs_api_types::{Fingerprint, LtoTapeDrive, VirtualTapeDrive};
> @@ -381,11 +382,9 @@ pub fn request_and_load_media(
>  ) -> Result<(Box<dyn TapeDriver>, MediaId), Error> {
>      let check_label = |handle: &mut dyn TapeDriver, uuid: &proxmox_uuid::Uuid| {
>          if let Ok((Some(media_id), _)) = handle.read_label() {
> -            task_log!(
> -                worker,
> +            info!(
>                  "found media label {} ({})",
> -                media_id.label.label_text,
> -                media_id.label.uuid,
> +                media_id.label.label_text, media_id.label.uuid,
>              );
>  
>              if media_id.label.uuid == *uuid {
> @@ -423,19 +422,16 @@ pub fn request_and_load_media(
>                      let update_and_log_request_error =
>                          |old: &mut TapeRequestError, new: TapeRequestError| -> Result<(), Error> {
>                              if new != *old {
> -                                task_log!(worker, "{}", new);
> +                                info!("{}", new);
>                                  let (device_type, device) = if let Some(changer) = changer {
>                                      ("changer", changer.as_str())
>                                  } else {
>                                      ("drive", drive)
>                                  };
>  
> -                                task_log!(
> -                                    worker,
> +                                info!(
>                                      "Please insert media '{}' into {} '{}'",
> -                                    label_text,
> -                                    device_type,
> -                                    device
> +                                    label_text, device_type, device
>                                  );
>                                  send_load_media_notification(
>                                      notification_mode,
> @@ -460,18 +456,11 @@ pub fn request_and_load_media(
>                                  std::thread::sleep(std::time::Duration::from_millis(100));
>                              }
>                          } else if drive_config.changer.is_none() {
> -                            task_log!(
> -                                worker,
> -                                "Checking for media '{}' in drive '{}'",
> -                                label_text,
> -                                drive
> -                            );
> +                            info!("Checking for media '{}' in drive '{}'", label_text, drive);
>                          } else {
> -                            task_log!(
> -                                worker,
> +                            info!(
>                                  "trying to load media '{}' into drive '{}'",
> -                                label_text,
> -                                drive
> +                                label_text, drive
>                              );
>                          }
>  
> @@ -499,8 +488,7 @@ pub fn request_and_load_media(
>  
>                          let request_error = match handle.read_label() {
>                              Ok((Some(media_id), _)) if media_id.label.uuid == label.uuid => {
> -                                task_log!(
> -                                    worker,
> +                                info!(
>                                      "found media label {} ({})",
>                                      media_id.label.label_text,
>                                      media_id.label.uuid.to_string(),
> diff --git a/src/tape/pool_writer/mod.rs b/src/tape/pool_writer/mod.rs
> index 1df297b6..b186ad6e 100644
> --- a/src/tape/pool_writer/mod.rs
> +++ b/src/tape/pool_writer/mod.rs
> @@ -11,8 +11,8 @@ use std::sync::{Arc, Mutex};
>  use std::time::SystemTime;
>  
>  use anyhow::{bail, Error};
> +use tracing::{info, warn};
>  
> -use proxmox_sys::{task_log, task_warn};
>  use proxmox_uuid::Uuid;
>  
>  use pbs_datastore::{DataStore, SnapshotReader};
> @@ -62,7 +62,6 @@ impl PoolWriter {
>      pub fn new(
>          mut pool: MediaPool,
>          drive_name: &str,
> -        worker: &WorkerTask,
>          notification_mode: TapeNotificationMode,
>          force_media_set: bool,
>          ns_magic: bool,
> @@ -71,11 +70,11 @@ impl PoolWriter {
>  
>          let new_media_set_reason = pool.start_write_session(current_time, force_media_set)?;
>          if let Some(reason) = new_media_set_reason {
> -            task_log!(worker, "starting new media set - reason: {}", reason,);
> +            info!("starting new media set - reason: {}", reason,);
>          }
>  
>          let media_set_uuid = pool.current_media_set().uuid();
> -        task_log!(worker, "media set uuid: {}", media_set_uuid);
> +        info!("media set uuid: {}", media_set_uuid);
>  
>          let mut catalog_set = CatalogSet::new();
>  
> @@ -130,7 +129,7 @@ impl PoolWriter {
>      }
>  
>      /// Eject media and drop PoolWriterState (close drive)
> -    pub fn eject_media(&mut self, worker: &WorkerTask) -> Result<(), Error> {
> +    pub fn eject_media(&mut self) -> Result<(), Error> {
>          let mut status = match self.status.take() {
>              Some(status) => status,
>              None => return Ok(()), // no media loaded
> @@ -139,13 +138,13 @@ impl PoolWriter {
>          let (drive_config, _digest) = pbs_config::drive::config()?;
>  
>          if let Some((mut changer, _)) = media_changer(&drive_config, &self.drive_name)? {
> -            task_log!(worker, "eject media");
> +            info!("eject media");
>              status.drive.eject_media()?; // rewind and eject early, so that unload_media is faster
>              drop(status); // close drive
> -            task_log!(worker, "unload media");
> +            info!("unload media");
>              changer.unload_media(None)?; //eject and unload
>          } else {
> -            task_log!(worker, "standalone drive - ejecting media");
> +            info!("standalone drive - ejecting media");
>              status.drive.eject_media()?;
>          }
>  
> @@ -153,14 +152,14 @@ impl PoolWriter {
>      }
>  
>      /// Export current media set and drop PoolWriterState (close drive)
> -    pub fn export_media_set(&mut self, worker: &WorkerTask) -> Result<(), Error> {
> +    pub fn export_media_set(&mut self) -> Result<(), Error> {
>          let mut status = self.status.take();
>  
>          let (drive_config, _digest) = pbs_config::drive::config()?;
>  
>          if let Some((mut changer, _)) = media_changer(&drive_config, &self.drive_name)? {
>              if let Some(ref mut status) = status {
> -                task_log!(worker, "rewind media");
> +                info!("rewind media");
>                  // rewind first so that the unload command later does not run into a timeout
>                  status.drive.rewind()?;
>              }
> @@ -170,25 +169,19 @@ impl PoolWriter {
>                  let media = self.pool.lookup_media(media_uuid)?;
>                  let label_text = media.label_text();
>                  if let Some(slot) = changer.export_media(label_text)? {
> -                    task_log!(
> -                        worker,
> +                    info!(
>                          "exported media '{}' to import/export slot {}",
> -                        label_text,
> -                        slot
> +                        label_text, slot
>                      );
>                  } else {
> -                    task_warn!(
> -                        worker,
> +                    warn!(
>                          "export failed - media '{}' is not online or in different drive",
>                          label_text
>                      );
>                  }
>              }
>          } else if let Some(mut status) = status {
> -            task_log!(
> -                worker,
> -                "standalone drive - ejecting media instead of export"
> -            );
> +            info!("standalone drive - ejecting media instead of export");
>              status.drive.eject_media()?;
>          }
>  
> @@ -238,11 +231,7 @@ impl PoolWriter {
>              return Ok(media_uuid);
>          }
>  
> -        task_log!(
> -            worker,
> -            "allocated new writable media '{}'",
> -            media.label_text()
> -        );
> +        info!("allocated new writable media '{}'", media.label_text());
>  
>          if let Some(PoolWriterState { mut drive, .. }) = self.status.take() {
>              if let Some(uuid) = &last_media_uuid {
> @@ -252,7 +241,7 @@ impl PoolWriter {
>                          .set_media_bytes_used(uuid, Some(stats.total_used_native_capacity))?;
>                  }
>  
> -                task_log!(worker, "eject current media");
> +                info!("eject current media");
>                  drive.eject_media()?;
>              }
>          }
> @@ -270,7 +259,7 @@ impl PoolWriter {
>          // test for critical tape alert flags
>          if let Ok(alert_flags) = drive.tape_alert_flags() {
>              if !alert_flags.is_empty() {
> -                task_log!(worker, "TapeAlertFlags: {:?}", alert_flags);
> +                info!("TapeAlertFlags: {:?}", alert_flags);
>                  if tape_alert_flags_critical(alert_flags) {
>                      self.pool.set_media_status_damaged(&media_uuid)?;
>                      bail!(
> @@ -281,20 +270,15 @@ impl PoolWriter {
>              }
>          }
>  
> -        let (catalog, is_new_media) = update_media_set_label(
> -            worker,
> -            drive.as_mut(),
> -            old_media_id.media_set_label,
> -            media.id(),
> -        )?;
> +        let (catalog, is_new_media) =
> +            update_media_set_label(drive.as_mut(), old_media_id.media_set_label, media.id())?;
>  
>          self.catalog_set.lock().unwrap().append_catalog(catalog)?;
>  
>          let media_set = media.media_set_label().unwrap();
>  
>          if is_new_media && media_set.seq_nr >= MEDIA_SET_SEQ_NR_WARN_LIMIT {
> -            task_warn!(
> -                worker,
> +            warn!(
>                  "large media-set detected ({}), consider using a different allocation policy",
>                  media_set.seq_nr
>              );
> @@ -311,7 +295,7 @@ impl PoolWriter {
>  
>          if is_new_media {
>              // add catalogs from previous media
> -            self.append_media_set_catalogs(worker)?;
> +            self.append_media_set_catalogs()?;
>          }
>  
>          self.used_tapes.insert(media_uuid.clone());
> @@ -331,12 +315,12 @@ impl PoolWriter {
>      // Check it tape is loaded, then move to EOM (if not already there)
>      //
>      // Returns the tape position at EOM.
> -    fn prepare_tape_write(status: &mut PoolWriterState, worker: &WorkerTask) -> Result<u64, Error> {
> +    fn prepare_tape_write(status: &mut PoolWriterState) -> Result<u64, Error> {
>          if !status.at_eom {
> -            task_log!(worker, "moving to end of media");
> +            info!("moving to end of media");
>              status.drive.move_to_eom(true)?;
>              status.at_eom = true;
> -            task_log!(worker, "arrived at end of media");
> +            info!("arrived at end of media");
>          }
>  
>          let current_file_number = status.drive.current_file_number()?;
> @@ -357,7 +341,7 @@ impl PoolWriter {
>      /// on the media (return value 'Ok(false, _)'). In that case, the
>      /// archive is marked incomplete. The caller should mark the media
>      /// as full and try again using another media.
> -    pub fn append_catalog_archive(&mut self, worker: &WorkerTask) -> Result<bool, Error> {
> +    pub fn append_catalog_archive(&mut self) -> Result<bool, Error> {
>          let catalog_magic = self.catalog_version();
>  
>          let status = match self.status {
> @@ -365,7 +349,7 @@ impl PoolWriter {
>              None => bail!("PoolWriter - no media loaded"),
>          };
>  
> -        Self::prepare_tape_write(status, worker)?;
> +        Self::prepare_tape_write(status)?;
>  
>          let catalog_set = self.catalog_set.lock().unwrap();
>  
> @@ -408,7 +392,7 @@ impl PoolWriter {
>      }
>  
>      // Append catalogs for all previous media in set (without last)
> -    fn append_media_set_catalogs(&mut self, worker: &WorkerTask) -> Result<(), Error> {
> +    fn append_media_set_catalogs(&mut self) -> Result<(), Error> {
>          let media_set = self.pool.current_media_set();
>  
>          let mut media_list = media_set.media_list();
> @@ -424,7 +408,7 @@ impl PoolWriter {
>              None => bail!("PoolWriter - no media loaded"),
>          };
>  
> -        Self::prepare_tape_write(status, worker)?;
> +        Self::prepare_tape_write(status)?;
>  
>          for (seq_nr, uuid) in media_list.iter().enumerate() {
>              let uuid = match uuid {
> @@ -436,7 +420,7 @@ impl PoolWriter {
>  
>              let mut file = Self::open_catalog_file(uuid)?;
>  
> -            task_log!(worker, "write catalog for previous media: {}", uuid);
> +            info!("write catalog for previous media: {}", uuid);
>  
>              if tape_write_catalog(
>                  writer.as_mut(),
> @@ -467,7 +451,6 @@ impl PoolWriter {
>      /// media.
>      pub fn append_snapshot_archive(
>          &mut self,
> -        worker: &WorkerTask,
>          snapshot_reader: &SnapshotReader,
>      ) -> Result<(bool, usize), Error> {
>          let status = match self.status {
> @@ -475,7 +458,7 @@ impl PoolWriter {
>              None => bail!("PoolWriter - no media loaded"),
>          };
>  
> -        let current_file_number = Self::prepare_tape_write(status, worker)?;
> +        let current_file_number = Self::prepare_tape_write(status)?;
>  
>          let (done, bytes_written) = {
>              let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
> @@ -521,7 +504,7 @@ impl PoolWriter {
>              None => bail!("PoolWriter - no media loaded"),
>          };
>  
> -        let current_file_number = Self::prepare_tape_write(status, worker)?;
> +        let current_file_number = Self::prepare_tape_write(status)?;
>  
>          let writer = status.drive.write_file()?;
>  
> @@ -533,8 +516,7 @@ impl PoolWriter {
>          status.bytes_written_after_sync += bytes_written;
>  
>          let elapsed = start_time.elapsed()?.as_secs_f64();
> -        task_log!(
> -            worker,
> +        info!(
>              "wrote {} chunks ({:.2} MB at {:.2} MB/s)",
>              saved_chunks.len(),
>              bytes_written as f64 / 1_000_000.0,
> @@ -614,7 +596,7 @@ fn write_chunk_archive<'a>(
>          }
>  
>          if writer.bytes_written() > max_size {
> -            //task_log!(worker, "Chunk Archive max size reached, closing archive");
> +            //info!("Chunk Archive max size reached, closing archive");
>              break;
>          }
>      }
> @@ -628,7 +610,6 @@ fn write_chunk_archive<'a>(
>  // set label does not match the expected media set, overwrite the
>  // media set label.
>  fn update_media_set_label(
> -    worker: &WorkerTask,
>      drive: &mut dyn TapeDriver,
>      old_set: Option<MediaSetLabel>,
>      media_id: &MediaId,
> @@ -657,7 +638,7 @@ fn update_media_set_label(
>  
>      let new_media = match old_set {
>          None => {
> -            task_log!(worker, "writing new media set label");
> +            info!("writing new media set label");
>              drive.write_media_set_label(new_set, key_config.as_ref())?;
>              media_catalog = MediaCatalog::overwrite(TAPE_STATUS_DIR, media_id, false)?;
>              true
> @@ -681,8 +662,7 @@ fn update_media_set_label(
>  
>                  false
>              } else {
> -                task_log!(
> -                    worker,
> +                info!(
>                      "writing new media set label (overwrite '{}/{}')",
>                      media_set_label.uuid.to_string(),
>                      media_set_label.seq_nr,
> diff --git a/src/tools/disks/mod.rs b/src/tools/disks/mod.rs
> index 94f89e0a..f916416f 100644
> --- a/src/tools/disks/mod.rs
> +++ b/src/tools/disks/mod.rs
> @@ -16,14 +16,13 @@ use ::serde::{Deserialize, Serialize};
>  
>  use proxmox_lang::error::io_err_other;
>  use proxmox_lang::{io_bail, io_format_err};
> -use proxmox_rest_server::WorkerTask;
>  use proxmox_schema::api;
>  use proxmox_sys::linux::procfs::{mountinfo::Device, MountInfo};
> -use proxmox_sys::task_log;
>  
>  use pbs_api_types::{BLOCKDEVICE_DISK_AND_PARTITION_NAME_REGEX, BLOCKDEVICE_NAME_REGEX};
>  
>  mod zfs;
> +use tracing::info;
>  pub use zfs::*;
>  mod zpool_status;
>  pub use zpool_status::*;
> @@ -1116,7 +1115,7 @@ pub fn inititialize_gpt_disk(disk: &Disk, uuid: Option<&str>) -> Result<(), Erro
>  
>  /// Wipes all labels and the first 200 MiB of a disk/partition (or the whole if it is smaller).
>  /// If called with a partition, also sets the partition type to 0x83 'Linux filesystem'.
> -pub fn wipe_blockdev(disk: &Disk, worker: Arc<WorkerTask>) -> Result<(), Error> {
> +pub fn wipe_blockdev(disk: &Disk) -> Result<(), Error> {
>      let disk_path = match disk.device_path() {
>          Some(path) => path,
>          None => bail!("disk {:?} has no node in /dev", disk.syspath()),
> @@ -1137,13 +1136,13 @@ pub fn wipe_blockdev(disk: &Disk, worker: Arc<WorkerTask>) -> Result<(), Error>
>  
>      to_wipe.push(disk_path.to_path_buf());
>  
> -    task_log!(worker, "Wiping block device {}", disk_path.display());
> +    info!("Wiping block device {}", disk_path.display());
>  
>      let mut wipefs_command = std::process::Command::new("wipefs");
>      wipefs_command.arg("--all").args(&to_wipe);
>  
>      let wipefs_output = proxmox_sys::command::run_command(wipefs_command, None)?;
> -    task_log!(worker, "wipefs output: {}", wipefs_output);
> +    info!("wipefs output: {}", wipefs_output);
>  
>      let size = disk.size().map(|size| size / 1024 / 1024)?;
>      let count = size.min(200);
> @@ -1163,21 +1162,17 @@ pub fn wipe_blockdev(disk: &Disk, worker: Arc<WorkerTask>) -> Result<(), Error>
>      dd_command.args(args);
>  
>      let dd_output = proxmox_sys::command::run_command(dd_command, None)?;
> -    task_log!(worker, "dd output: {}", dd_output);
> +    info!("dd output: {}", dd_output);
>  
>      if is_partition {
>          // set the partition type to 0x83 'Linux filesystem'
> -        change_parttype(disk, "8300", worker)?;
> +        change_parttype(disk, "8300")?;
>      }
>  
>      Ok(())
>  }
>  
> -pub fn change_parttype(
> -    part_disk: &Disk,
> -    part_type: &str,
> -    worker: Arc<WorkerTask>,
> -) -> Result<(), Error> {
> +pub fn change_parttype(part_disk: &Disk, part_type: &str) -> Result<(), Error> {
>      let part_path = match part_disk.device_path() {
>          Some(path) => path,
>          None => bail!("disk {:?} has no node in /dev", part_disk.syspath()),
> @@ -1199,7 +1194,7 @@ pub fn change_parttype(
>          };
>          sgdisk_command.arg(part_disk_parent_path);
>          let sgdisk_output = proxmox_sys::command::run_command(sgdisk_command, None)?;
> -        task_log!(worker, "sgdisk output: {}", sgdisk_output);
> +        info!("sgdisk output: {}", sgdisk_output);
>      }
>      Ok(())
>  }
> diff --git a/tests/worker-task-abort.rs b/tests/worker-task-abort.rs
> index eda86f01..80243dbb 100644
> --- a/tests/worker-task-abort.rs
> +++ b/tests/worker-task-abort.rs
> @@ -7,24 +7,25 @@ extern crate tokio;
>  
>  use proxmox_lang::try_block;
>  use proxmox_sys::fs::CreateOptions;
> -use proxmox_sys::{task_log, WorkerTaskContext};
> +use proxmox_sys::WorkerTaskContext;
>  
>  use pbs_api_types::{Authid, UPID};
>  
>  use proxmox_rest_server::{CommandSocket, WorkerTask};
> +use tracing::info;
>  
>  fn garbage_collection(worker: &WorkerTask) -> Result<(), Error> {
> -    task_log!(worker, "start garbage collection");
> +    info!("start garbage collection");
>  
>      for i in 0..50 {
>          worker.check_abort()?;
>  
> -        task_log!(worker, "progress {}", i);
> +        info!("progress {}", i);
>  
>          std::thread::sleep(std::time::Duration::from_millis(10));
>      }
>  
> -    task_log!(worker, "end garbage collection");
> +    info!("end garbage collection");
>  
>      Ok(())
>  }



_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel





* Re: [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction
  2024-06-13 13:56 [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction Gabriel Goller
                   ` (3 preceding siblings ...)
  2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox-backup v5 4/4] api: " Gabriel Goller
@ 2024-06-24 11:13 ` Lukas Wagner
  2024-06-26 13:10 ` Gabriel Goller
  5 siblings, 0 replies; 14+ messages in thread
From: Lukas Wagner @ 2024-06-24 11:13 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Gabriel Goller



On  2024-06-13 15:56, Gabriel Goller wrote:
> Removed the task_log! (and friends task_warn!, task_debug!, etc.) macro
> and introduced the `tracing` crate. We now initiate the tracing crate
> using a single layer, which is logging to the syslog and the tasklog.
> It uses the `syslog` crate and the original `FileLogger`.
> 
> To write to the task logs from the worker threads and tasks, we now
> have a task_local logger (and warning counter), which
> will get instantiated when a task/thread is created. This means that
> when we call `info!` or any other `tracing` log macros, it will get 
> the file_logger from TLS and write to the file.
> 
> 
> 
> Summary over all repositories:
>   46 files changed, 828 insertions(+), 1046 deletions(-)
> 

Great work!

Applied this on latest master and did a quick sanity check to ensure
that (task) logging still works. 

I left some comments with a few minor things to improve. With those fixed,
consider this:

Tested-by: Lukas Wagner <l.wagner@proxmox.com>
Reviewed-by: Lukas Wagner <l.wagner@proxmox.com>

-- 
- Lukas



* Re: [pbs-devel] [PATCH proxmox v5 1/4] proxmox-log: add tracing infrastructure
  2024-06-24 11:10   ` Lukas Wagner
@ 2024-06-25  8:44     ` Gabriel Goller
  2024-06-25  9:04       ` Lukas Wagner
  0 siblings, 1 reply; 14+ messages in thread
From: Gabriel Goller @ 2024-06-25  8:44 UTC (permalink / raw)
  To: Lukas Wagner; +Cc: Proxmox Backup Server development discussion

Thanks for the review!

On 24.06.2024 13:10, Lukas Wagner wrote:
>Just wondering, have you seen/checked out tracing-journald?
>Would that be interesting for us?
>
>https://docs.rs/tracing-journald/latest/tracing_journald/

I packaged it for debian some time ago. It basically does the same as
the current `syslog` lib and custom subscriber, so we should just be
able to drop it in. Don't know if that's too much for this series though :). 
Maybe as a follow-up? What do you think?
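
To make the follow-up concrete: the swap would roughly mean replacing the hand-rolled syslog backend with tracing-journald's layer. A minimal sketch, assuming the tracing-journald `layer()` constructor and a registry-based subscriber (the function name is illustrative, not code from this series):

```rust
use tracing_subscriber::prelude::*;

// Sketch only: the task-log layer from this series would be stacked
// on top with another `.with(...)` call.
fn init_journald_logger() -> Result<(), std::io::Error> {
    // connects to the journald socket; errors out if journald is not
    // running, mirroring the "refuse to start" behavior of the syslog init
    let journald = tracing_journald::layer()?;
    tracing_subscriber::registry().with(journald).init();
    Ok(())
}
```
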

>> +        };
>> +        if let Err(err) = self.file.write_all(line.as_bytes()) {
>> +            // avoid panicking, log methods should not do that
>> +            // FIXME: or, return result???
>> +            log::error!("error writing to log file - {}", err);
>Since you use `FileLogger` in `SyslogAndTaskLogger` as a tracing logging layer and
>also setup the `tracing-log` bridge to convert log records to tracing events,
>wouldn't this lead to recursion?

Good catch!
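
For readers of the archive: the cycle is the FileLogger's error path -> `log::error!` -> the `tracing-log` bridge -> back into the layer that owns the FileLogger. One way to break it (a sketch with illustrative names, not necessarily the fix that landed in the series) is a thread-local re-entrancy guard around the write path:

```rust
use std::cell::Cell;
use std::io::Write;

thread_local! {
    // true while we are inside the layer's write path
    static IN_LOG_CALL: Cell<bool> = Cell::new(false);
}

// A log event produced *while* handling a log event is dropped
// instead of re-entering the layer.
fn write_log_line(sink: &mut impl Write, line: &str) {
    IN_LOG_CALL.with(|flag| {
        if flag.get() {
            return; // re-entrant call - drop it to avoid infinite recursion
        }
        flag.set(true);
        if let Err(err) = sink.write_all(line.as_bytes()) {
            // report directly to stderr, bypassing the log pipeline;
            // routing this through log::error! is what caused the cycle
            eprintln!("error writing to log file - {err}");
        }
        // note: a panic in write_all would leave the flag set; real
        // code would reset it via a drop guard instead
        flag.set(false);
    });
}
```

Whatever shape the actual fix takes, the point is the same: the fallback error report must not travel through the `tracing` pipeline it is reporting on.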

>> diff --git a/proxmox-log/src/syslog_tasklog_layer.rs b/proxmox-log/src/syslog_tasklog_layer.rs
>> new file mode 100644
>> index 00000000..89437caf
>> --- /dev/null
>> +++ b/proxmox-log/src/syslog_tasklog_layer.rs
>> @@ -0,0 +1,106 @@
>> +use std::fmt::Write as _;
>> +use std::sync::Arc;
>> +use std::sync::Mutex;
>> +
>> +use syslog::{Formatter3164, LoggerBackend};
>> +use tracing::field::Field;
>> +use tracing::field::Visit;
>> +use tracing::Event;
>> +use tracing::Level;
>> +use tracing::Subscriber;
>> +use tracing_subscriber::layer::Context;
>> +use tracing_subscriber::Layer;
>> +
>> +use crate::FileLogger;
>> +use crate::LOGGER;
>> +use crate::WARN_COUNTER;
>> +
>> +pub struct SyslogAndTasklogLayer {
>> +    syslog_logger: Arc<Mutex<syslog::Logger<LoggerBackend, Formatter3164>>>,
>> +}
>> +
>> +impl SyslogAndTasklogLayer {
>> +    pub fn new(application_name: String) -> Self {
>> +        let formatter = Formatter3164 {
>> +            facility: syslog::Facility::LOG_DAEMON,
>> +            process: application_name,
>> +            ..Formatter3164::default()
>> +        };
>> +
>> +        // we panic here if we can't initialize the syslogger
>> +        let logger = syslog::unix(formatter)
>
>Maybe explain why it is okay to panic in this case? (is it?)

I think it's reasonable to not start if we can't write to the syslog.
Either way this was done a long time before me, so I didn't change it :)

Also incorporated all the other changes!




* Re: [pbs-devel] [PATCH proxmox v5 2/4] enable tracing logger, remove task_log macros
  2024-06-24 11:09   ` Lukas Wagner
@ 2024-06-25  8:49     ` Gabriel Goller
  0 siblings, 0 replies; 14+ messages in thread
From: Gabriel Goller @ 2024-06-25  8:49 UTC (permalink / raw)
  To: Lukas Wagner; +Cc: Proxmox Backup Server development discussion

Thanks for the review!

On 24.06.2024 13:09, Lukas Wagner wrote:
>2 nits inline
>
>On  2024-06-13 15:56, Gabriel Goller wrote:
>> diff --git a/proxmox-rest-server/src/worker_task.rs b/proxmox-rest-server/src/worker_task.rs
>> index 5bab4cdc..58537f54 100644
>> --- a/proxmox-rest-server/src/worker_task.rs
>> +++ b/proxmox-rest-server/src/worker_task.rs
>> @@ -1,3 +1,4 @@
>> +use std::cell::{Cell, RefCell};
>>  use std::collections::{HashMap, VecDeque};
>>  use std::fs::File;
>>  use std::io::{BufRead, BufReader, Read, Write};
>> @@ -12,21 +13,23 @@ use futures::*;
>>  use lazy_static::lazy_static;
>>  use nix::fcntl::OFlag;
>>  use once_cell::sync::OnceCell;
>> +use proxmox_log::{LOGGER, WARN_COUNTER};
>
>Nit: Order of imports, let's put this import to the other
>proxmox_* imports below

Fixed!

>> @@ -340,11 +346,9 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
>>                  let file = match file {
>>                      Ok(file) => file,
>>                      Err(err) => {
>> -                        task_warn!(
>> -                            worker,
>> -                            "could not check some task log in '{:02X}': {}",
>> -                            i,
>> -                            err
>> +                        warn!(
>> +                            tasklog = true,
>> +                            "could not check some task log in '{:02X}': {}", i, err
>
>Nit: You could inline these vars if you edit these lines anyway.

Fixed! Also removed the 'tasklog = true' parameter, which we removed in
a previous version.




* Re: [pbs-devel] [PATCH proxmox v5 1/4] proxmox-log: add tracing infrastructure
  2024-06-25  8:44     ` Gabriel Goller
@ 2024-06-25  9:04       ` Lukas Wagner
  2024-06-26  7:55         ` Fabian Grünbichler
  0 siblings, 1 reply; 14+ messages in thread
From: Lukas Wagner @ 2024-06-25  9:04 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Gabriel Goller



On  2024-06-25 10:44, Gabriel Goller wrote:
> Thanks for the review!
> 
> On 24.06.2024 13:10, Lukas Wagner wrote:
>> Just wondering, have you seen/checked out tracing-journald?
>> Would that be interesting for us?
>>
>> https://docs.rs/tracing-journald/latest/tracing_journald/
> 
> I packaged it for debian some time ago. It basically does the same as
> the current `syslog` lib and custom subscriber, so we should just be
> able to drop it in. Don't know if that's too much for this series though :). Maybe as a follow-up? What do you think?

tracing-journald seems easy enough to use/set up, so IMHO you could include this in
series.
> 
>>> +        };
>>> +        if let Err(err) = self.file.write_all(line.as_bytes()) {
>>> +            // avoid panicking, log methods should not do that
>>> +            // FIXME: or, return result???
>>> +            log::error!("error writing to log file - {}", err);
>> Since you use `FileLogger` in `SyslogAndTaskLogger` as a tracing logging layer and
>> also setup the `tracing-log` bridge to convert log records to tracing events,
>> wouldn't this lead to recursion?
> 
> Good catch!
> 
>>> diff --git a/proxmox-log/src/syslog_tasklog_layer.rs b/proxmox-log/src/syslog_tasklog_layer.rs
>>> new file mode 100644
>>> index 00000000..89437caf
>>> --- /dev/null
>>> +++ b/proxmox-log/src/syslog_tasklog_layer.rs
>>> @@ -0,0 +1,106 @@
>>> +use std::fmt::Write as _;
>>> +use std::sync::Arc;
>>> +use std::sync::Mutex;
>>> +
>>> +use syslog::{Formatter3164, LoggerBackend};
>>> +use tracing::field::Field;
>>> +use tracing::field::Visit;
>>> +use tracing::Event;
>>> +use tracing::Level;
>>> +use tracing::Subscriber;
>>> +use tracing_subscriber::layer::Context;
>>> +use tracing_subscriber::Layer;
>>> +
>>> +use crate::FileLogger;
>>> +use crate::LOGGER;
>>> +use crate::WARN_COUNTER;
>>> +
>>> +pub struct SyslogAndTasklogLayer {
>>> +    syslog_logger: Arc<Mutex<syslog::Logger<LoggerBackend, Formatter3164>>>,
>>> +}
>>> +
>>> +impl SyslogAndTasklogLayer {
>>> +    pub fn new(application_name: String) -> Self {
>>> +        let formatter = Formatter3164 {
>>> +            facility: syslog::Facility::LOG_DAEMON,
>>> +            process: application_name,
>>> +            ..Formatter3164::default()
>>> +        };
>>> +
>>> +        // we panic here if we can't initialize the syslogger
>>> +        let logger = syslog::unix(formatter)
>>
>> Maybe explain why it is okay to panic in this case? (is it?)
> 
> I think it's reasonable to not start if we can't write to the syslog.
> Either way this was done a long time before me, so I didn't change it :)

Fair, I did not really check what was copied and what was authored by you ;)
But yeah, I think it is reasonable to not start if you fail to set up logging.


-- 
- Lukas



* Re: [pbs-devel] [PATCH proxmox v5 1/4] proxmox-log: add tracing infrastructure
  2024-06-25  9:04       ` Lukas Wagner
@ 2024-06-26  7:55         ` Fabian Grünbichler
  0 siblings, 0 replies; 14+ messages in thread
From: Fabian Grünbichler @ 2024-06-26  7:55 UTC (permalink / raw)
  To: Gabriel Goller, Proxmox Backup Server development discussion

On June 25, 2024 11:04 am, Lukas Wagner wrote:
> 
> 
> On  2024-06-25 10:44, Gabriel Goller wrote:
>> Thanks for the review!
>> 
>> On 24.06.2024 13:10, Lukas Wagner wrote:
>>> Just wondering, have you seen/checked out tracing-journald?
>>> Would that be interesting for us?
>>>
>>> https://docs.rs/tracing-journald/latest/tracing_journald/
>> 
>> I packaged it for debian some time ago. It basically does the same as
>> the current `syslog` lib and custom subscriber, so we should just be
>> able to drop it in. Don't know if that's too much for this series though :). Maybe as a follow-up? What do you think?
> 
> tracing-journald seems easy enough to use/set up, so IMHO you could include this in
> series.

FWIW, tracing-journald is now packaged on the devel repo (either for
a v6 or a follow-up using it).



* Re: [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction
  2024-06-13 13:56 [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction Gabriel Goller
                   ` (4 preceding siblings ...)
  2024-06-24 11:13 ` [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction Lukas Wagner
@ 2024-06-26 13:10 ` Gabriel Goller
  5 siblings, 0 replies; 14+ messages in thread
From: Gabriel Goller @ 2024-06-26 13:10 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion

Thanks for all the reviews!
Submitted a v6.



end of thread, other threads:[~2024-06-26 13:11 UTC | newest]

Thread overview: 14+ messages
2024-06-13 13:56 [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction Gabriel Goller
2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox v5 1/4] proxmox-log: add tracing infrastructure Gabriel Goller
2024-06-24 11:10   ` Lukas Wagner
2024-06-25  8:44     ` Gabriel Goller
2024-06-25  9:04       ` Lukas Wagner
2024-06-26  7:55         ` Fabian Grünbichler
2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox v5 2/4] enable tracing logger, remove task_log macros Gabriel Goller
2024-06-24 11:09   ` Lukas Wagner
2024-06-25  8:49     ` Gabriel Goller
2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox-backup v5 3/4] switch from task_log! macro to tracing Gabriel Goller
2024-06-24 11:10   ` Lukas Wagner
2024-06-13 13:56 ` [pbs-devel] [PATCH proxmox-backup v5 4/4] api: " Gabriel Goller
2024-06-24 11:13 ` [pbs-devel] [PATCH proxmox{, -backup} v5 0/4] proxmox-log introduction Lukas Wagner
2024-06-26 13:10 ` Gabriel Goller
