public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing
@ 2023-10-11 14:01 Gabriel Goller
  2023-10-11 14:01 ` [pbs-devel] [RFC proxmox-backup 1/2] log: removed task_log! macro and moved to tracing Gabriel Goller
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Gabriel Goller @ 2023-10-11 14:01 UTC (permalink / raw)
  To: pbs-devel

Removed the task_log! macro (and friends: task_warn!, task_debug!, etc.)
and introduced the `tracing` crate. We now initialize tracing with two
layers: a syslog layer (using the `syslog` crate) and a file layer. The
file layer reuses the original `FileLogger`.

To write to the task logs from worker threads and tasks, the file layer
now holds a task-local logger (and warning counter), which is
instantiated when a task/thread is created. This means that when we call
`info!` or any other `tracing` log macro with the "tasklog" attribute
set, the layer fetches the FileLogger from TLS and writes to the file.

Gabriel Goller (1):
  log: removed task_log! macro and moved to tracing

 Cargo.toml                       |  61 ++++---
 pbs-api-types/src/metrics.rs     |   1 -
 pbs-api-types/src/node.rs        |   3 +-
 pbs-datastore/Cargo.toml         |   1 +
 pbs-datastore/src/chunk_store.rs |  33 ++--
 pbs-datastore/src/datastore.rs   | 101 ++++++-----
 src/api2/admin/datastore.rs      |  29 +--
 src/api2/admin/metrics.rs        |   2 +-
 src/api2/config/acme.rs          |  23 ++-
 src/api2/config/datastore.rs     |  15 +-
 src/api2/node/apt.rs             |  13 +-
 src/api2/node/certificates.rs    |  69 +++----
 src/api2/node/disks/directory.rs |  16 +-
 src/api2/node/disks/mod.rs       |   6 +-
 src/api2/node/disks/zfs.rs       |  32 ++--
 src/api2/node/mod.rs             |  11 +-
 src/api2/pull.rs                 |  27 ++-
 src/api2/tape/backup.rs          |  82 ++++-----
 src/api2/tape/drive.rs           | 158 +++++++++--------
 src/api2/tape/restore.rs         | 296 ++++++++++++++++---------------
 src/backup/verify.rs             | 108 +++++------
 src/bin/proxmox-backup-api.rs    |  10 +-
 src/bin/proxmox-backup-proxy.rs  |  45 ++---
 src/server/gc_job.rs             |   9 +-
 src/server/prune_job.rs          |  31 ++--
 src/server/pull.rs               | 192 +++++++++-----------
 src/server/realm_sync_job.rs     |  37 ++--
 src/server/verify_job.rs         |  13 +-
 src/tape/drive/mod.rs            |  37 ++--
 src/tape/pool_writer/mod.rs      |  90 +++++-----
 tests/worker-task-abort.rs       |   9 +-
 31 files changed, 764 insertions(+), 796 deletions(-)

Gabriel Goller (1):
  proxmox-log: added tracing infra

 Cargo.toml                                    |   6 +
 proxmox-log/Cargo.toml                        |  23 ++++
 proxmox-log/debian/changelog                  |   5 +
 proxmox-log/debian/control                    |  53 +++++++++
 proxmox-log/debian/copyright                  |  18 +++
 proxmox-log/debian/debcargo.toml              |   7 ++
 proxmox-log/src/file_layer.rs                 |  89 +++++++++++++++
 .../src/file_logger.rs                        |   2 +-
 proxmox-log/src/lib.rs                        |  36 ++++++
 proxmox-log/src/syslog_layer.rs               |  78 +++++++++++++
 proxmox-rest-server/Cargo.toml                |   2 +
 proxmox-rest-server/src/api_config.rs         |   3 +-
 proxmox-rest-server/src/lib.rs                |   3 -
 proxmox-rest-server/src/rest.rs               |   4 +-
 proxmox-rest-server/src/worker_task.rs        | 108 ++++++++++--------
 proxmox-router/Cargo.toml                     |   2 +
 proxmox-sys/src/worker_task_context.rs        |  47 --------
 17 files changed, 382 insertions(+), 104 deletions(-)
 create mode 100644 proxmox-log/Cargo.toml
 create mode 100644 proxmox-log/debian/changelog
 create mode 100644 proxmox-log/debian/control
 create mode 100644 proxmox-log/debian/copyright
 create mode 100644 proxmox-log/debian/debcargo.toml
 create mode 100644 proxmox-log/src/file_layer.rs
 rename {proxmox-rest-server => proxmox-log}/src/file_logger.rs (98%)
 create mode 100644 proxmox-log/src/lib.rs
 create mode 100644 proxmox-log/src/syslog_layer.rs

-- 
2.39.2


* [pbs-devel] [RFC proxmox-backup 1/2] log: removed task_log! macro and moved to tracing
  2023-10-11 14:01 [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing Gabriel Goller
@ 2023-10-11 14:01 ` Gabriel Goller
  2023-10-11 14:01 ` [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra Gabriel Goller
  2023-10-18 13:12 ` [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing Dominik Csapak
  2 siblings, 0 replies; 14+ messages in thread
From: Gabriel Goller @ 2023-10-11 14:01 UTC (permalink / raw)
  To: pbs-devel

Imported `proxmox-log` and substituted all `task_log!` (and
`task_warn!`) invocations with tracing calls (`info!`, `warn!`, etc.).
Removed the `worker` parameter where it is no longer necessary.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
---
 Cargo.toml                       |  61 ++++---
 pbs-api-types/src/metrics.rs     |   1 -
 pbs-api-types/src/node.rs        |   3 +-
 pbs-datastore/Cargo.toml         |   1 +
 pbs-datastore/src/chunk_store.rs |  33 ++--
 pbs-datastore/src/datastore.rs   | 101 ++++++-----
 src/api2/admin/datastore.rs      |  29 +--
 src/api2/admin/metrics.rs        |   2 +-
 src/api2/config/acme.rs          |  23 ++-
 src/api2/config/datastore.rs     |  15 +-
 src/api2/node/apt.rs             |  13 +-
 src/api2/node/certificates.rs    |  69 +++----
 src/api2/node/disks/directory.rs |  16 +-
 src/api2/node/disks/mod.rs       |   6 +-
 src/api2/node/disks/zfs.rs       |  32 ++--
 src/api2/node/mod.rs             |  11 +-
 src/api2/pull.rs                 |  27 ++-
 src/api2/tape/backup.rs          |  82 ++++-----
 src/api2/tape/drive.rs           | 158 +++++++++--------
 src/api2/tape/restore.rs         | 296 ++++++++++++++++---------------
 src/backup/verify.rs             | 108 +++++------
 src/bin/proxmox-backup-api.rs    |  10 +-
 src/bin/proxmox-backup-proxy.rs  |  45 ++---
 src/server/gc_job.rs             |   9 +-
 src/server/prune_job.rs          |  31 ++--
 src/server/pull.rs               | 192 +++++++++-----------
 src/server/realm_sync_job.rs     |  37 ++--
 src/server/verify_job.rs         |  13 +-
 src/tape/drive/mod.rs            |  37 ++--
 src/tape/pool_writer/mod.rs      |  90 +++++-----
 tests/worker-task-abort.rs       |   9 +-
 31 files changed, 764 insertions(+), 796 deletions(-)

diff --git a/Cargo.toml b/Cargo.toml
index cfbf2ba1..fea99f01 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -64,6 +64,7 @@ proxmox-http = { version = "0.9.0", features = [ "client", "http-helpers", "webs
 proxmox-human-byte = "0.1"
 proxmox-io = "1.0.1" # tools and client use "tokio" feature
 proxmox-lang = "1.1"
+proxmox-log = "0.1"
 proxmox-ldap = "0.2.1"
 proxmox-metrics = "0.3"
 proxmox-openid = "0.10.0"
@@ -144,6 +145,8 @@ tokio = "1.6"
 tokio-openssl = "0.6.1"
 tokio-stream = "0.1.0"
 tokio-util = { version = "0.7", features = [ "io" ] }
+tracing = "0.1"
+tracing-subscriber = "0.3.16"
 tower-service = "0.3.0"
 udev = "0.4"
 url = "2.1"
@@ -189,6 +192,8 @@ tokio = { workspace = true, features = [ "fs", "io-util", "io-std", "macros", "n
 tokio-openssl.workspace = true
 tokio-stream.workspace = true
 tokio-util = { workspace = true, features = [ "codec" ] }
+tracing.workspace = true
+tracing-subscriber.workspace = true
 tower-service.workspace = true
 udev.workspace = true
 url.workspace = true
@@ -207,6 +212,7 @@ proxmox-http = { workspace = true, features = [ "client-trait", "proxmox-async",
 proxmox-human-byte.workspace = true
 proxmox-io.workspace = true
 proxmox-lang.workspace = true
+proxmox-log.workspace = true
 proxmox-ldap.workspace = true
 proxmox-metrics.workspace = true
 proxmox-openid.workspace = true
@@ -243,35 +249,36 @@ proxmox-rrd.workspace = true
 # NOTE: You must run `cargo update` after changing this for it to take effect!
 [patch.crates-io]
 
-#proxmox-apt = { path = "../proxmox/proxmox-apt" }
-#proxmox-async = { path = "../proxmox/proxmox-async" }
-#proxmox-auth-api = { path = "../proxmox/proxmox-auth-api" }
-#proxmox-borrow = { path = "../proxmox/proxmox-borrow" }
-#proxmox-compression = { path = "../proxmox/proxmox-compression" }
-#proxmox-fuse = { path = "../proxmox-fuse" }
-#proxmox-http = { path = "../proxmox/proxmox-http" }
-#proxmox-human-byte = { path = "../proxmox/proxmox-human-byte" }
-#proxmox-io = { path = "../proxmox/proxmox-io" }
-#proxmox-lang = { path = "../proxmox/proxmox-lang" }
-#proxmox-ldap = { path = "../proxmox/proxmox-ldap" }
-#proxmox-metrics = { path = "../proxmox/proxmox-metrics" }
-#proxmox-openid = { path = "../proxmox/proxmox-openid" }
-#proxmox-rest-server = { path = "../proxmox/proxmox-rest-server" }
-#proxmox-router = { path = "../proxmox/proxmox-router" }
-#proxmox-schema = { path = "../proxmox/proxmox-schema" }
-#proxmox-section-config = { path = "../proxmox/proxmox-section-config" }
-#proxmox-serde = { path = "../proxmox/proxmox-serde" }
-#proxmox-shared-memory = { path = "../proxmox/proxmox-shared-memory" }
-#proxmox-sortable-macro = { path = "../proxmox/proxmox-sortable-macro" }
-#proxmox-subscription = { path = "../proxmox/proxmox-subscription" }
-#proxmox-sys = { path = "../proxmox/proxmox-sys" }
-#proxmox-tfa = { path = "../proxmox/proxmox-tfa" }
-#proxmox-time = { path = "../proxmox/proxmox-time" }
-#proxmox-uuid = { path = "../proxmox/proxmox-uuid" }
+proxmox-apt = { path = "../proxmox/proxmox-apt" }
+proxmox-async = { path = "../proxmox/proxmox-async" }
+proxmox-auth-api = { path = "../proxmox/proxmox-auth-api" }
+proxmox-borrow = { path = "../proxmox/proxmox-borrow" }
+proxmox-compression = { path = "../proxmox/proxmox-compression" }
+proxmox-fuse = { path = "../proxmox-fuse" }
+proxmox-http = { path = "../proxmox/proxmox-http" }
+proxmox-human-byte = { path = "../proxmox/proxmox-human-byte" }
+proxmox-io = { path = "../proxmox/proxmox-io" }
+proxmox-lang = { path = "../proxmox/proxmox-lang" }
+proxmox-log = { path = "../proxmox/proxmox-log" }
+proxmox-ldap = { path = "../proxmox/proxmox-ldap" }
+proxmox-metrics = { path = "../proxmox/proxmox-metrics" }
+proxmox-openid = { path = "../proxmox/proxmox-openid" }
+proxmox-rest-server = { path = "../proxmox/proxmox-rest-server" }
+proxmox-router = { path = "../proxmox/proxmox-router" }
+proxmox-schema = { path = "../proxmox/proxmox-schema" }
+proxmox-section-config = { path = "../proxmox/proxmox-section-config" }
+proxmox-serde = { path = "../proxmox/proxmox-serde" }
+proxmox-shared-memory = { path = "../proxmox/proxmox-shared-memory" }
+proxmox-sortable-macro = { path = "../proxmox/proxmox-sortable-macro" }
+proxmox-subscription = { path = "../proxmox/proxmox-subscription" }
+proxmox-sys = { path = "../proxmox/proxmox-sys" }
+proxmox-tfa = { path = "../proxmox/proxmox-tfa" }
+proxmox-time = { path = "../proxmox/proxmox-time" }
+proxmox-uuid = { path = "../proxmox/proxmox-uuid" }
 
-#proxmox-acme-rs = { path = "../proxmox-acme-rs" }
+proxmox-acme-rs = { path = "../proxmox-acme-rs" }
 #pathpatterns = {path = "../pathpatterns" }
-#pxar = { path = "../pxar" }
+pxar = { path = "../pxar" }
 
 [features]
 default = []
diff --git a/pbs-api-types/src/metrics.rs b/pbs-api-types/src/metrics.rs
index be5ff27a..99df88a7 100644
--- a/pbs-api-types/src/metrics.rs
+++ b/pbs-api-types/src/metrics.rs
@@ -147,7 +147,6 @@ pub struct InfluxDbHttp {
     pub comment: Option<String>,
 }
 
-
 #[api]
 #[derive(Copy, Clone, Deserialize, Serialize, PartialEq, Eq, PartialOrd, Ord)]
 /// Type of the metric server
diff --git a/pbs-api-types/src/node.rs b/pbs-api-types/src/node.rs
index 704215bb..9033d586 100644
--- a/pbs-api-types/src/node.rs
+++ b/pbs-api-types/src/node.rs
@@ -1,9 +1,8 @@
-use serde::{Deserialize, Serialize};
 use proxmox_schema::*;
+use serde::{Deserialize, Serialize};
 
 use crate::StorageStatus;
 
-
 #[api]
 #[derive(Serialize, Deserialize, Default)]
 #[serde(rename_all = "kebab-case")]
diff --git a/pbs-datastore/Cargo.toml b/pbs-datastore/Cargo.toml
index b793dc5b..fcb11ac4 100644
--- a/pbs-datastore/Cargo.toml
+++ b/pbs-datastore/Cargo.toml
@@ -20,6 +20,7 @@ openssl.workspace = true
 serde.workspace = true
 serde_json.workspace = true
 tokio = { workspace = true, features = [] }
+tracing.workspace = true
 walkdir.workspace = true
 zstd.workspace = true
 
diff --git a/pbs-datastore/src/chunk_store.rs b/pbs-datastore/src/chunk_store.rs
index fb282749..7cb3ee33 100644
--- a/pbs-datastore/src/chunk_store.rs
+++ b/pbs-datastore/src/chunk_store.rs
@@ -3,6 +3,7 @@ use std::path::{Path, PathBuf};
 use std::sync::{Arc, Mutex};
 
 use anyhow::{bail, format_err, Error};
+use tracing::info;
 
 use pbs_api_types::{DatastoreFSyncLevel, GarbageCollectionStatus};
 use proxmox_io::ReadExt;
@@ -10,7 +11,6 @@ use proxmox_sys::fs::{create_dir, create_path, file_type_from_file_stat, CreateO
 use proxmox_sys::process_locker::{
     ProcessLockExclusiveGuard, ProcessLockSharedGuard, ProcessLocker,
 };
-use proxmox_sys::task_log;
 use proxmox_sys::WorkerTaskContext;
 
 use crate::file_formats::{
@@ -92,7 +92,6 @@ impl ChunkStore {
         path: P,
         uid: nix::unistd::Uid,
         gid: nix::unistd::Gid,
-        worker: Option<&dyn WorkerTaskContext>,
         sync_level: DatastoreFSyncLevel,
     ) -> Result<Self, Error>
     where
@@ -143,9 +142,7 @@ impl ChunkStore {
             }
             let percentage = (i * 100) / (64 * 1024);
             if percentage != last_percentage {
-                if let Some(worker) = worker {
-                    task_log!(worker, "Chunkstore create: {}%", percentage)
-                }
+                info!(tasklog = true, "Chunkstore create: {}%", percentage);
                 last_percentage = percentage;
             }
         }
@@ -374,7 +371,10 @@ impl ChunkStore {
         for (entry, percentage, bad) in self.get_chunk_iterator()? {
             if last_percentage != percentage {
                 last_percentage = percentage;
-                task_log!(worker, "processed {}% ({} chunks)", percentage, chunk_count,);
+                info!(
+                    tasklog = true,
+                    "processed {}% ({} chunks)", percentage, chunk_count,
+                );
             }
 
             worker.check_abort()?;
@@ -578,15 +578,8 @@ fn test_chunk_store1() {
     let user = nix::unistd::User::from_uid(nix::unistd::Uid::current())
         .unwrap()
         .unwrap();
-    let chunk_store = ChunkStore::create(
-        "test",
-        &path,
-        user.uid,
-        user.gid,
-        None,
-        DatastoreFSyncLevel::None,
-    )
-    .unwrap();
+    let chunk_store =
+        ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None).unwrap();
 
     let (chunk, digest) = crate::data_blob::DataChunkBuilder::new(&[0u8, 1u8])
         .build()
@@ -598,14 +591,8 @@ fn test_chunk_store1() {
     let (exists, _) = chunk_store.insert_chunk(&chunk, &digest).unwrap();
     assert!(exists);
 
-    let chunk_store = ChunkStore::create(
-        "test",
-        &path,
-        user.uid,
-        user.gid,
-        None,
-        DatastoreFSyncLevel::None,
-    );
+    let chunk_store =
+        ChunkStore::create("test", &path, user.uid, user.gid, DatastoreFSyncLevel::None);
     assert!(chunk_store.is_err());
 
     if let Err(_e) = std::fs::remove_dir_all(".testdir") { /* ignore */ }
diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
index 41c9f4f2..23cadd58 100644
--- a/pbs-datastore/src/datastore.rs
+++ b/pbs-datastore/src/datastore.rs
@@ -7,6 +7,7 @@ use std::sync::{Arc, Mutex};
 use anyhow::{bail, format_err, Error};
 use lazy_static::lazy_static;
 use nix::unistd::{unlinkat, UnlinkatFlags};
+use tracing::{info, warn};
 
 use proxmox_human_byte::HumanByte;
 use proxmox_schema::ApiType;
@@ -16,7 +17,6 @@ use proxmox_sys::fs::{file_read_optional_string, replace_file, CreateOptions};
 use proxmox_sys::fs::{lock_dir_noblock, DirLockGuard};
 use proxmox_sys::process_locker::ProcessLockSharedGuard;
 use proxmox_sys::WorkerTaskContext;
-use proxmox_sys::{task_log, task_warn};
 
 use pbs_api_types::{
     Authid, BackupNamespace, BackupType, ChunkOrder, DataStoreConfig, DatastoreFSyncLevel,
@@ -925,8 +925,8 @@ impl DataStore {
             let digest = index.index_digest(pos).unwrap();
             if !self.inner.chunk_store.cond_touch_chunk(digest, false)? {
                 let hex = hex::encode(digest);
-                task_warn!(
-                    worker,
+                warn!(
+                    tasklog = true,
                     "warning: unable to access non-existent chunk {hex}, required by {file_name:?}"
                 );
 
@@ -992,8 +992,8 @@ impl DataStore {
 
             let percentage = (i + 1) * 100 / image_count;
             if percentage > last_percentage {
-                task_log!(
-                    worker,
+                info!(
+                    tasklog = true,
                     "marked {}% ({} of {} index files)",
                     percentage,
                     i + 1,
@@ -1004,8 +1004,8 @@ impl DataStore {
         }
 
         if strange_paths_count > 0 {
-            task_log!(
-                worker,
+            info!(
+                tasklog = true,
                 "found (and marked) {} index files outside of expected directory scheme",
                 strange_paths_count,
             );
@@ -1045,11 +1045,11 @@ impl DataStore {
                 ..Default::default()
             };
 
-            task_log!(worker, "Start GC phase1 (mark used chunks)");
+            info!(tasklog = true, "Start GC phase1 (mark used chunks)");
 
             self.mark_used_chunks(&mut gc_status, worker)?;
 
-            task_log!(worker, "Start GC phase2 (sweep unused chunks)");
+            info!(tasklog = true, "Start GC phase2 (sweep unused chunks)");
             self.inner.chunk_store.sweep_unused_chunks(
                 oldest_writer,
                 phase1_start_time,
@@ -1057,30 +1057,39 @@ impl DataStore {
                 worker,
             )?;
 
-            task_log!(
-                worker,
+            info!(
+                tasklog = true,
                 "Removed garbage: {}",
                 HumanByte::from(gc_status.removed_bytes),
             );
-            task_log!(worker, "Removed chunks: {}", gc_status.removed_chunks);
+            info!(
+                tasklog = true,
+                "Removed chunks: {}", gc_status.removed_chunks
+            );
             if gc_status.pending_bytes > 0 {
-                task_log!(
-                    worker,
+                info!(
+                    tasklog = true,
                     "Pending removals: {} (in {} chunks)",
                     HumanByte::from(gc_status.pending_bytes),
                     gc_status.pending_chunks,
                 );
             }
             if gc_status.removed_bad > 0 {
-                task_log!(worker, "Removed bad chunks: {}", gc_status.removed_bad);
+                info!(
+                    tasklog = true,
+                    "Removed bad chunks: {}", gc_status.removed_bad
+                );
             }
 
             if gc_status.still_bad > 0 {
-                task_log!(worker, "Leftover bad chunks: {}", gc_status.still_bad);
+                info!(
+                    tasklog = true,
+                    "Leftover bad chunks: {}", gc_status.still_bad
+                );
             }
 
-            task_log!(
-                worker,
+            info!(
+                tasklog = true,
                 "Original data usage: {}",
                 HumanByte::from(gc_status.index_data_bytes),
             );
@@ -1088,15 +1097,15 @@ impl DataStore {
             if gc_status.index_data_bytes > 0 {
                 let comp_per =
                     (gc_status.disk_bytes as f64 * 100.) / gc_status.index_data_bytes as f64;
-                task_log!(
-                    worker,
+                info!(
+                    tasklog = true,
                     "On-Disk usage: {} ({:.2}%)",
                     HumanByte::from(gc_status.disk_bytes),
                     comp_per,
                 );
             }
 
-            task_log!(worker, "On-Disk chunks: {}", gc_status.disk_chunks);
+            info!(tasklog = true, "On-Disk chunks: {}", gc_status.disk_chunks);
 
             let deduplication_factor = if gc_status.disk_bytes > 0 {
                 (gc_status.index_data_bytes as f64) / (gc_status.disk_bytes as f64)
@@ -1104,11 +1113,18 @@ impl DataStore {
                 1.0
             };
 
-            task_log!(worker, "Deduplication factor: {:.2}", deduplication_factor);
+            info!(
+                tasklog = true,
+                "Deduplication factor: {:.2}", deduplication_factor
+            );
 
             if gc_status.disk_chunks > 0 {
                 let avg_chunk = gc_status.disk_bytes / (gc_status.disk_chunks as u64);
-                task_log!(worker, "Average chunk size: {}", HumanByte::from(avg_chunk));
+                info!(
+                    tasklog = true,
+                    "Average chunk size: {}",
+                    HumanByte::from(avg_chunk)
+                );
             }
 
             if let Ok(serialized) = serde_json::to_string(&gc_status) {
@@ -1340,11 +1356,7 @@ impl DataStore {
     /// Destroy a datastore. This requires that there are no active operations on the datastore.
     ///
     /// This is a synchronous operation and should be run in a worker-thread.
-    pub fn destroy(
-        name: &str,
-        destroy_data: bool,
-        worker: &dyn WorkerTaskContext,
-    ) -> Result<(), Error> {
+    pub fn destroy(name: &str, destroy_data: bool) -> Result<(), Error> {
         let config_lock = pbs_config::datastore::lock_config()?;
 
         let (mut config, _digest) = pbs_config::datastore::config()?;
@@ -1368,13 +1380,16 @@ impl DataStore {
             let remove = |subdir, ok: &mut bool| {
                 if let Err(err) = std::fs::remove_dir_all(base.join(subdir)) {
                     if err.kind() != io::ErrorKind::NotFound {
-                        task_warn!(worker, "failed to remove {subdir:?} subdirectory: {err}");
+                        warn!(
+                            tasklog = true,
+                            "failed to remove {subdir:?} subdirectory: {err}"
+                        );
                         *ok = false;
                     }
                 }
             };
 
-            task_log!(worker, "Deleting datastore data...");
+            info!(tasklog = true, "Deleting datastore data...");
             remove("ns", &mut ok); // ns first
             remove("ct", &mut ok);
             remove("vm", &mut ok);
@@ -1383,7 +1398,7 @@ impl DataStore {
             if ok {
                 if let Err(err) = std::fs::remove_file(base.join(".gc-status")) {
                     if err.kind() != io::ErrorKind::NotFound {
-                        task_warn!(worker, "failed to remove .gc-status file: {err}");
+                        warn!(tasklog = true, "failed to remove .gc-status file: {err}");
                         ok = false;
                     }
                 }
@@ -1397,7 +1412,7 @@ impl DataStore {
 
         // now the config
         if ok {
-            task_log!(worker, "Removing datastore from config...");
+            info!(tasklog = true, "Removing datastore from config...");
             let _lock = pbs_config::datastore::lock_config()?;
             let _ = config.sections.remove(name);
             pbs_config::datastore::save_config(&config)?;
@@ -1408,35 +1423,41 @@ impl DataStore {
             if ok {
                 if let Err(err) = std::fs::remove_file(base.join(".lock")) {
                     if err.kind() != io::ErrorKind::NotFound {
-                        task_warn!(worker, "failed to remove .lock file: {err}");
+                        warn!(tasklog = true, "failed to remove .lock file: {err}");
                         ok = false;
                     }
                 }
             }
 
             if ok {
-                task_log!(worker, "Finished deleting data.");
+                info!(tasklog = true, "Finished deleting data.");
 
                 match std::fs::remove_dir(base) {
-                    Ok(()) => task_log!(worker, "Removed empty datastore directory."),
+                    Ok(()) => info!(tasklog = true, "Removed empty datastore directory."),
                     Err(err) if err.kind() == io::ErrorKind::NotFound => {
                         // weird, but ok
                     }
                     Err(err) if err.is_errno(nix::errno::Errno::EBUSY) => {
-                        task_warn!(
-                            worker,
+                        warn!(
+                            tasklog = true,
                             "Cannot delete datastore directory (is it a mount point?)."
                         )
                     }
                     Err(err) if err.is_errno(nix::errno::Errno::ENOTEMPTY) => {
-                        task_warn!(worker, "Datastore directory not empty, not deleting.")
+                        warn!(
+                            tasklog = true,
+                            "Datastore directory not empty, not deleting."
+                        )
                     }
                     Err(err) => {
-                        task_warn!(worker, "Failed to remove datastore directory: {err}");
+                        warn!(
+                            tasklog = true,
+                            "Failed to remove datastore directory: {err}"
+                        );
                     }
                 }
             } else {
-                task_log!(worker, "There were errors deleting data.");
+                info!(tasklog = true, "There were errors deleting data.");
             }
         }
 
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index a95031e7..bb211378 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -13,6 +13,7 @@ use hyper::{header, Body, Response, StatusCode};
 use serde::Deserialize;
 use serde_json::{json, Value};
 use tokio_stream::wrappers::ReceiverStream;
+use tracing::{info, warn};
 
 use proxmox_async::blocking::WrappedReaderStream;
 use proxmox_async::{io::AsyncChannelWriter, stream::AsyncReaderStream};
@@ -26,7 +27,6 @@ use proxmox_sortable_macro::sortable;
 use proxmox_sys::fs::{
     file_read_firstline, file_read_optional_string, replace_file, CreateOptions,
 };
-use proxmox_sys::{task_log, task_warn};
 
 use pxar::accessor::aio::Accessor;
 use pxar::EntryKind;
@@ -907,9 +907,12 @@ pub fn verify(
                 )?
             };
             if !failed_dirs.is_empty() {
-                task_log!(worker, "Failed to verify the following snapshots/groups:");
+                info!(
+                    tasklog = true,
+                    "Failed to verify the following snapshots/groups:"
+                );
                 for dir in failed_dirs {
-                    task_log!(worker, "\t{}", dir);
+                    info!(tasklog = true, "\t{}", dir);
                 }
                 bail!("verification failed - please check the log for details");
             }
@@ -1009,10 +1012,10 @@ pub fn prune(
     }
 
     // We use a WorkerTask just to have a task log, but run synchrounously
-    let worker = WorkerTask::new("prune", Some(worker_id), auth_id.to_string(), true)?;
+    let (worker, _) = WorkerTask::new("prune", Some(worker_id), auth_id.to_string(), true)?;
 
     if keep_all {
-        task_log!(worker, "No prune selection - keeping all files.");
+        info!(tasklog = true, "No prune selection - keeping all files.");
     } else {
         let mut opts = Vec::new();
         if !ns.is_root() {
@@ -1020,9 +1023,9 @@ pub fn prune(
         }
         crate::server::cli_keep_options(&mut opts, &keep_options);
 
-        task_log!(worker, "retention options: {}", opts.join(" "));
-        task_log!(
-            worker,
+        info!(tasklog = true, "retention options: {}", opts.join(" "));
+        info!(
+            tasklog = true,
             "Starting prune on {} group \"{}\"",
             print_store_and_ns(&store, &ns),
             group.group(),
@@ -1038,7 +1041,7 @@ pub fn prune(
 
         let msg = format!("{}/{}/{} {}", group.ty, group.id, timestamp, mark,);
 
-        task_log!(worker, "{}", msg);
+        info!(tasklog = true, "{}", msg);
 
         prune_result.push(json!({
             "backup-type": group.ty,
@@ -1050,8 +1053,8 @@ pub fn prune(
 
         if !(dry_run || keep) {
             if let Err(err) = info.backup_dir.destroy(false) {
-                task_warn!(
-                    worker,
+                warn!(
+                    tasklog = true,
                     "failed to remove dir {:?}: {}",
                     info.backup_dir.relative_path(),
                     err,
@@ -1121,9 +1124,7 @@ pub fn prune_datastore(
         Some(worker_id),
         auth_id.to_string(),
         to_stdout,
-        move |worker| {
-            crate::server::prune_datastore(worker, auth_id, prune_options, datastore, dry_run)
-        },
+        move |_worker| crate::server::prune_datastore(auth_id, prune_options, datastore, dry_run),
     )?;
 
     Ok(upid_str)
diff --git a/src/api2/admin/metrics.rs b/src/api2/admin/metrics.rs
index d72bd98b..c299485f 100644
--- a/src/api2/admin/metrics.rs
+++ b/src/api2/admin/metrics.rs
@@ -4,7 +4,7 @@ use serde_json::Value;
 use proxmox_router::{Permission, Router, RpcEnvironment};
 use proxmox_schema::api;
 
-use pbs_api_types::{PRIV_SYS_AUDIT, MetricServerInfo};
+use pbs_api_types::{MetricServerInfo, PRIV_SYS_AUDIT};
 use pbs_config::metrics;
 
 #[api(
diff --git a/src/api2/config/acme.rs b/src/api2/config/acme.rs
index 1954318b..6f9b3547 100644
--- a/src/api2/config/acme.rs
+++ b/src/api2/config/acme.rs
@@ -9,12 +9,12 @@ use hex::FromHex;
 use lazy_static::lazy_static;
 use serde::{Deserialize, Serialize};
 use serde_json::{json, Value};
+use tracing::{info, warn};
 
 use proxmox_router::{
     http_bail, list_subdirs_api_method, Permission, Router, RpcEnvironment, SubdirMap,
 };
 use proxmox_schema::{api, param_bail};
-use proxmox_sys::{task_log, task_warn};
 
 use proxmox_acme_rs::account::AccountData as AcmeAccountData;
 use proxmox_acme_rs::Account;
@@ -219,18 +219,17 @@ fn register_account(
         Some(name.to_string()),
         auth_id.to_string(),
         true,
-        move |worker| async move {
+        move |_worker| async move {
             let mut client = AcmeClient::new(directory);
 
-            task_log!(worker, "Registering ACME account '{}'...", &name);
+            info!(tasklog = true, "Registering ACME account '{}'...", &name);
 
             let account =
                 do_register_account(&mut client, &name, tos_url.is_some(), contact, None).await?;
 
-            task_log!(
-                worker,
-                "Registration successful, account URL: {}",
-                account.location
+            info!(
+                tasklog = true,
+                "Registration successful, account URL: {}", account.location
             );
 
             Ok(())
@@ -325,7 +324,7 @@ pub fn deactivate_account(
         Some(name.to_string()),
         auth_id.to_string(),
         true,
-        move |worker| async move {
+        move |_worker| async move {
             match AcmeClient::load(&name)
                 .await?
                 .update_account(&json!({"status": "deactivated"}))
@@ -334,11 +333,9 @@ pub fn deactivate_account(
                 Ok(_account) => (),
                 Err(err) if !force => return Err(err),
                 Err(err) => {
-                    task_warn!(
-                        worker,
-                        "error deactivating account {}, proceedeing anyway - {}",
-                        name,
-                        err,
+                    warn!(
+                        tasklog = true,
+                        "error deactivating account {}, proceeding anyway - {}", name, err,
+                    );
                 }
             }
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index 5e013c39..299e1bea 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -4,11 +4,11 @@ use ::serde::{Deserialize, Serialize};
 use anyhow::Error;
 use hex::FromHex;
 use serde_json::Value;
+use tracing::warn;
 
 use proxmox_router::{http_bail, Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::{api, param_bail, ApiType};
 use proxmox_section_config::SectionConfigData;
-use proxmox_sys::{task_warn, WorkerTaskContext};
 
 use pbs_api_types::{
     Authid, DataStoreConfig, DataStoreConfigUpdater, DatastoreNotify, DatastoreTuning,
@@ -69,7 +69,6 @@ pub(crate) fn do_create_datastore(
     _lock: BackupLockGuard,
     mut config: SectionConfigData,
     datastore: DataStoreConfig,
-    worker: Option<&dyn WorkerTaskContext>,
 ) -> Result<(), Error> {
     let path: PathBuf = datastore.path.clone().into();
 
@@ -83,7 +82,6 @@ pub(crate) fn do_create_datastore(
         path,
         backup_user.uid,
         backup_user.gid,
-        worker,
         tuning.sync_level.unwrap_or_default(),
     )?;
 
@@ -132,7 +130,7 @@ pub fn create_datastore(
         Some(config.name.to_string()),
         auth_id.to_string(),
         to_stdout,
-        move |worker| do_create_datastore(lock, section_config, config, Some(&worker)),
+        move |_worker| do_create_datastore(lock, section_config, config),
     )
 }
 
@@ -460,8 +458,8 @@ pub async fn delete_datastore(
         Some(name.clone()),
         auth_id.to_string(),
         to_stdout,
-        move |worker| {
-            pbs_datastore::DataStore::destroy(&name, destroy_data, &worker)?;
+        move |_worker| {
+            pbs_datastore::DataStore::destroy(&name, destroy_data)?;
 
             // ignore errors
             let _ = jobstate::remove_state_file("prune", &name);
@@ -470,7 +468,10 @@ pub async fn delete_datastore(
             if let Err(err) =
                 proxmox_async::runtime::block_on(crate::server::notify_datastore_removed())
             {
-                task_warn!(worker, "failed to notify after datastore removal: {err}");
+                warn!(
+                    tasklog = true,
+                    "failed to notify after datastore removal: {err}"
+                );
             }
 
             Ok(())
diff --git a/src/api2/node/apt.rs b/src/api2/node/apt.rs
index 8e4f150d..a9853b31 100644
--- a/src/api2/node/apt.rs
+++ b/src/api2/node/apt.rs
@@ -2,6 +2,7 @@ use anyhow::{bail, format_err, Error};
 use serde_json::{json, Value};
 use std::collections::HashMap;
 use std::os::unix::prelude::OsStrExt;
+use tracing::{info, warn};
 
 use proxmox_router::{
     list_subdirs_api_method, Permission, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap,
@@ -85,9 +86,9 @@ fn read_and_update_proxy_config() -> Result<Option<ProxyConfig>, Error> {
     Ok(proxy_config)
 }
 
-fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
+fn do_apt_update(quiet: bool) -> Result<(), Error> {
     if !quiet {
-        worker.log_message("starting apt-get update")
+        info!(tasklog = true, "starting apt-get update")
     }
 
     read_and_update_proxy_config()?;
@@ -101,7 +102,7 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
         .map_err(|err| format_err!("failed to execute {:?} - {}", command, err))?;
 
     if !quiet {
-        worker.log_message(String::from_utf8(output.stdout)?);
+        info!(tasklog = true, "{}", String::from_utf8(output.stdout)?);
     }
 
     // TODO: improve run_command to allow outputting both, stderr and stdout
@@ -116,7 +117,7 @@ fn do_apt_update(worker: &WorkerTask, quiet: bool) -> Result<(), Error> {
                     }
                 })
                 .unwrap_or_else(|_| String::from("non utf8 error message (suppressed)"));
-            worker.log_warning(msg);
+            warn!(tasklog = true, "{msg}");
         } else {
             bail!("terminated by signal");
         }
@@ -162,8 +163,8 @@ pub fn apt_update_database(
     let auth_id = rpcenv.get_auth_id().unwrap();
     let to_stdout = rpcenv.env_type() == RpcEnvironmentType::CLI;
 
-    let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |worker| {
-        do_apt_update(&worker, quiet)?;
+    let upid_str = WorkerTask::new_thread("aptupdate", None, auth_id, to_stdout, move |_worker| {
+        do_apt_update(quiet)?;
 
         let mut cache = apt::update_cache()?;
 
diff --git a/src/api2/node/certificates.rs b/src/api2/node/certificates.rs
index d514001a..24a40e7d 100644
--- a/src/api2/node/certificates.rs
+++ b/src/api2/node/certificates.rs
@@ -5,16 +5,16 @@ use anyhow::{bail, format_err, Error};
 use openssl::pkey::PKey;
 use openssl::x509::X509;
 use serde::{Deserialize, Serialize};
+use tracing::{info, warn};
 
 use proxmox_router::list_subdirs_api_method;
 use proxmox_router::SubdirMap;
 use proxmox_router::{Permission, Router, RpcEnvironment};
 use proxmox_schema::api;
-use proxmox_sys::{task_log, task_warn};
 
 use pbs_api_types::{NODE_SCHEMA, PRIV_SYS_MODIFY};
 use pbs_buildcfg::configdir;
 use pbs_tools::cert;
 
 use crate::acme::AcmeClient;
 use crate::api2::types::AcmeDomain;
@@ -305,8 +306,8 @@ async fn order_certificate(
     };
 
     if domains.is_empty() {
-        task_log!(
-            worker,
+        info!(
+            tasklog = true,
             "No domains configured to be ordered from an ACME server."
         );
         return Ok(None);
@@ -316,11 +317,11 @@ async fn order_certificate(
 
     let mut acme = node_config.acme_client().await?;
 
-    task_log!(worker, "Placing ACME order");
+    info!(tasklog = true, "Placing ACME order");
     let order = acme
         .new_order(domains.iter().map(|d| d.domain.to_ascii_lowercase()))
         .await?;
-    task_log!(worker, "Order URL: {}", order.location);
+    info!(tasklog = true, "Order URL: {}", order.location);
 
     let identifiers: Vec<String> = order
         .data
@@ -332,7 +333,10 @@ async fn order_certificate(
         .collect();
 
     for auth_url in &order.data.authorizations {
-        task_log!(worker, "Getting authorization details from '{}'", auth_url);
+        info!(
+            tasklog = true,
+            "Getting authorization details from '{}'", auth_url
+        );
         let mut auth = acme.get_authorization(auth_url).await?;
 
         let domain = match &mut auth.identifier {
@@ -340,11 +344,11 @@ async fn order_certificate(
         };
 
         if auth.status == Status::Valid {
-            task_log!(worker, "{} is already validated!", domain);
+            info!(tasklog = true, "{} is already validated!", domain);
             continue;
         }
 
-        task_log!(worker, "The validation for {} is pending", domain);
+        info!(tasklog = true, "The validation for {} is pending", domain);
         let domain_config: &AcmeDomain = get_domain_config(&domain)?;
         let plugin_id = domain_config.plugin.as_deref().unwrap_or("standalone");
         let mut plugin_cfg =
@@ -352,31 +356,28 @@ async fn order_certificate(
                 format_err!("plugin '{}' for domain '{}' not found!", plugin_id, domain)
             })?;
 
-        task_log!(worker, "Setting up validation plugin");
+        info!(tasklog = true, "Setting up validation plugin");
         let validation_url = plugin_cfg
             .setup(&mut acme, &auth, domain_config, Arc::clone(&worker))
             .await?;
 
-        let result = request_validation(&worker, &mut acme, auth_url, validation_url).await;
+        let result = request_validation(&mut acme, auth_url, validation_url).await;
 
         if let Err(err) = plugin_cfg
             .teardown(&mut acme, &auth, domain_config, Arc::clone(&worker))
             .await
         {
-            task_warn!(
-                worker,
-                "Failed to teardown plugin '{}' for domain '{}' - {}",
-                plugin_id,
-                domain,
-                err
+            warn!(
+                tasklog = true,
+                "Failed to teardown plugin '{}' for domain '{}' - {}", plugin_id, domain, err
             );
         }
 
         result?;
     }
 
-    task_log!(worker, "All domains validated");
-    task_log!(worker, "Creating CSR");
+    info!(tasklog = true, "All domains validated");
+    info!(tasklog = true, "Creating CSR");
 
     let csr = proxmox_acme_rs::util::Csr::generate(&identifiers, &Default::default())?;
     let mut finalize_error_cnt = 0u8;
@@ -389,7 +390,7 @@ async fn order_certificate(
 
         match order.status {
             Status::Pending => {
-                task_log!(worker, "still pending, trying to finalize anyway");
+                info!(tasklog = true, "still pending, trying to finalize anyway");
                 let finalize = order
                     .finalize
                     .as_deref()
@@ -404,7 +405,7 @@ async fn order_certificate(
                 tokio::time::sleep(Duration::from_secs(5)).await;
             }
             Status::Ready => {
-                task_log!(worker, "order is ready, finalizing");
+                info!(tasklog = true, "order is ready, finalizing");
                 let finalize = order
                     .finalize
                     .as_deref()
@@ -413,18 +414,21 @@ async fn order_certificate(
                 tokio::time::sleep(Duration::from_secs(5)).await;
             }
             Status::Processing => {
-                task_log!(worker, "still processing, trying again in 30 seconds");
+                info!(
+                    tasklog = true,
+                    "still processing, trying again in 30 seconds"
+                );
                 tokio::time::sleep(Duration::from_secs(30)).await;
             }
             Status::Valid => {
-                task_log!(worker, "valid");
+                info!(tasklog = true, "valid");
                 break;
             }
             other => bail!("order status: {:?}", other),
         }
     }
 
-    task_log!(worker, "Downloading certificate");
+    info!(tasklog = true, "Downloading certificate");
     let certificate = acme
         .get_certificate(
             order
@@ -441,15 +445,14 @@ async fn order_certificate(
 }
 
 async fn request_validation(
-    worker: &WorkerTask,
     acme: &mut AcmeClient,
     auth_url: &str,
     validation_url: &str,
 ) -> Result<(), Error> {
-    task_log!(worker, "Triggering validation");
+    info!(tasklog = true, "Triggering validation");
     acme.request_challenge_validation(validation_url).await?;
 
-    task_log!(worker, "Sleeping for 5 seconds");
+    info!(tasklog = true, "Sleeping for 5 seconds");
     tokio::time::sleep(Duration::from_secs(5)).await;
 
     loop {
@@ -458,8 +461,8 @@ async fn request_validation(
         let auth = acme.get_authorization(auth_url).await?;
         match auth.status {
             Status::Pending => {
-                task_log!(
-                    worker,
+                info!(
+                    tasklog = true,
                     "Status is still 'pending', trying again in 10 seconds"
                 );
                 tokio::time::sleep(Duration::from_secs(10)).await;
@@ -586,13 +589,13 @@ pub fn revoke_acme_cert(rpcenv: &mut dyn RpcEnvironment) -> Result<String, Error
         None,
         auth_id,
         true,
-        move |worker| async move {
-            task_log!(worker, "Loading ACME account");
+        move |_worker| async move {
+            info!(tasklog = true, "Loading ACME account");
             let mut acme = node_config.acme_client().await?;
-            task_log!(worker, "Revoking old certificate");
+            info!(tasklog = true, "Revoking old certificate");
             acme.revoke_certificate(cert_pem.as_bytes(), None).await?;
-            task_log!(
-                worker,
+            info!(
+                tasklog = true,
                 "Deleting certificate and regenerating a self-signed one"
             );
             delete_custom_certificate().await?;
diff --git a/src/api2/node/disks/directory.rs b/src/api2/node/disks/directory.rs
index d5a16be7..20de1d9c 100644
--- a/src/api2/node/disks/directory.rs
+++ b/src/api2/node/disks/directory.rs
@@ -1,11 +1,11 @@
 use ::serde::{Deserialize, Serialize};
 use anyhow::{bail, Error};
 use serde_json::json;
+use tracing::info;
 
 use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::api;
 use proxmox_section_config::SectionConfigData;
-use proxmox_sys::task_log;
 
 use pbs_api_types::{
     DataStoreConfig, BLOCKDEVICE_NAME_SCHEMA, DATASTORE_SCHEMA, NODE_SCHEMA, PRIV_SYS_AUDIT,
@@ -170,8 +170,11 @@ pub fn create_datastore_disk(
         Some(name.clone()),
         auth_id,
         to_stdout,
-        move |worker| {
-            task_log!(worker, "create datastore '{}' on disk {}", name, disk);
+        move |_worker| {
+            info!(
+                tasklog = true,
+                "create datastore '{}' on disk {}", name, disk
+            );
 
             let add_datastore = add_datastore.unwrap_or(false);
             let filesystem = filesystem.unwrap_or(FileSystemType::Ext4);
@@ -204,12 +207,7 @@ pub fn create_datastore_disk(
                     bail!("datastore '{}' already exists.", datastore.name);
                 }
 
-                crate::api2::config::datastore::do_create_datastore(
-                    lock,
-                    config,
-                    datastore,
-                    Some(&worker),
-                )?;
+                crate::api2::config::datastore::do_create_datastore(lock, config, datastore)?;
             }
 
             Ok(())
diff --git a/src/api2/node/disks/mod.rs b/src/api2/node/disks/mod.rs
index 5ee959cd..3cb13ff0 100644
--- a/src/api2/node/disks/mod.rs
+++ b/src/api2/node/disks/mod.rs
@@ -6,7 +6,7 @@ use proxmox_router::{
 };
 use proxmox_schema::api;
 use proxmox_sortable_macro::sortable;
-use proxmox_sys::task_log;
+use tracing::info;
 
 use pbs_api_types::{
     BLOCKDEVICE_NAME_SCHEMA, NODE_SCHEMA, PRIV_SYS_AUDIT, PRIV_SYS_MODIFY, UPID_SCHEMA,
@@ -163,8 +163,8 @@ pub fn initialize_disk(
         Some(disk.clone()),
         auth_id,
         to_stdout,
-        move |worker| {
-            task_log!(worker, "initialize disk {}", disk);
+        move |_worker| {
+            info!(tasklog = true, "initialize disk {}", disk);
 
             let disk_manager = DiskManage::new();
             let disk_info = disk_manager.disk_by_name(&disk)?;
diff --git a/src/api2/node/disks/zfs.rs b/src/api2/node/disks/zfs.rs
index c8d221cc..b143d178 100644
--- a/src/api2/node/disks/zfs.rs
+++ b/src/api2/node/disks/zfs.rs
@@ -1,9 +1,9 @@
 use anyhow::{bail, Error};
 use serde_json::{json, Value};
+use tracing::{error, info};
 
 use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::api;
-use proxmox_sys::{task_error, task_log};
 
 use pbs_api_types::{
     DataStoreConfig, ZfsCompressionType, ZfsRaidLevel, ZpoolListItem, DATASTORE_SCHEMA,
@@ -228,13 +228,10 @@ pub fn create_zpool(
         Some(name.clone()),
         auth_id,
         to_stdout,
-        move |worker| {
-            task_log!(
-                worker,
-                "create {:?} zpool '{}' on devices '{}'",
-                raidlevel,
-                name,
-                devices_text
+        move |_worker| {
+            info!(
+                tasklog = true,
+                "create {:?} zpool '{}' on devices '{}'", raidlevel, name, devices_text
             );
 
             let mut command = std::process::Command::new("zpool");
@@ -275,12 +272,12 @@ pub fn create_zpool(
                 }
             }
 
-            task_log!(worker, "# {:?}", command);
+            info!(tasklog = true, "# {:?}", command);
 
             match proxmox_sys::command::run_command(command, None) {
-                Ok(output) => task_log!(worker, "{output}"),
+                Ok(output) => info!(tasklog = true, "{output}"),
                 Err(err) => {
-                    task_error!(worker, "{err}");
+                    error!(tasklog = true, "{err}");
                     bail!("Error during 'zpool create', see task log for more details");
                 }
             };
@@ -299,11 +296,11 @@ pub fn create_zpool(
                 command.arg(&format!("compression={}", compression));
             }
             command.args(["relatime=on", &name]);
-            task_log!(worker, "# {:?}", command);
+            info!(tasklog = true, "# {:?}", command);
             match proxmox_sys::command::run_command(command, None) {
-                Ok(output) => task_log!(worker, "{output}"),
+                Ok(output) => info!(tasklog = true, "{output}"),
                 Err(err) => {
-                    task_error!(worker, "{err}");
+                    error!(tasklog = true, "{err}");
                     bail!("Error during 'zfs set', see task log for more details");
                 }
             };
@@ -319,12 +316,7 @@ pub fn create_zpool(
                     bail!("datastore '{}' already exists.", datastore.name);
                 }
 
-                crate::api2::config::datastore::do_create_datastore(
-                    lock,
-                    config,
-                    datastore,
-                    Some(&worker),
-                )?;
+                crate::api2::config::datastore::do_create_datastore(lock, config, datastore)?;
             }
 
             Ok(())
diff --git a/src/api2/node/mod.rs b/src/api2/node/mod.rs
index 931143fd..32373e08 100644
--- a/src/api2/node/mod.rs
+++ b/src/api2/node/mod.rs
@@ -25,6 +25,7 @@ use proxmox_sortable_macro::sortable;
 use proxmox_sys::fd::fd_change_cloexec;
 
 use pbs_api_types::{NODE_SCHEMA, PRIV_SYS_CONSOLE};
+use tracing::{info, warn};
 
 use crate::auth::{private_auth_keyring, public_auth_keyring};
 use crate::tools;
@@ -181,20 +182,18 @@ async fn termproxy(cmd: Option<String>, rpcenv: &mut dyn RpcEnvironment) -> Resu
             let stdout = child.stdout.take().expect("no child stdout handle");
             let stderr = child.stderr.take().expect("no child stderr handle");
 
-            let worker_stdout = worker.clone();
             let stdout_fut = async move {
                 let mut reader = BufReader::new(stdout).lines();
                 while let Some(line) = reader.next_line().await? {
-                    worker_stdout.log_message(line);
+                    info!(tasklog = true, "{line}");
                 }
                 Ok::<(), Error>(())
             };
 
-            let worker_stderr = worker.clone();
             let stderr_fut = async move {
                 let mut reader = BufReader::new(stderr).lines();
                 while let Some(line) = reader.next_line().await? {
-                    worker_stderr.log_warning(line);
+                    warn!(tasklog = true, "{line}");
                 }
                 Ok::<(), Error>(())
             };
@@ -226,9 +225,9 @@ async fn termproxy(cmd: Option<String>, rpcenv: &mut dyn RpcEnvironment) -> Resu
                 }
 
                 if let Err(err) = child.kill().await {
-                    worker.log_warning(format!("error killing termproxy: {}", err));
+                    warn!(tasklog = true, "error killing termproxy: {}", err);
                 } else if let Err(err) = child.wait().await {
-                    worker.log_warning(format!("error awaiting termproxy: {}", err));
+                    warn!(tasklog = true, "error awaiting termproxy: {}", err);
                 }
             }
 
diff --git a/src/api2/pull.rs b/src/api2/pull.rs
index daeba7cf..2c5d02d2 100644
--- a/src/api2/pull.rs
+++ b/src/api2/pull.rs
@@ -1,10 +1,10 @@
 //! Sync datastore from remote server
 use anyhow::{format_err, Error};
 use futures::{future::FutureExt, select};
+use tracing::info;
 
 use proxmox_router::{Permission, Router, RpcEnvironment};
 use proxmox_schema::api;
-use proxmox_sys::task_log;
 
 use pbs_api_types::{
     Authid, BackupNamespace, GroupFilter, RateLimitConfig, SyncJobConfig, DATASTORE_SCHEMA,
@@ -116,21 +116,21 @@ pub fn do_sync_job(
                 let pull_params = PullParameters::try_from(&sync_job)?;
                 let client = pull_params.client().await?;
 
-                task_log!(worker, "Starting datastore sync job '{}'", job_id);
+                info!(tasklog = true, "Starting datastore sync job '{}'", job_id);
                 if let Some(event_str) = schedule {
-                    task_log!(worker, "task triggered by schedule '{}'", event_str);
+                    info!(tasklog = true, "task triggered by schedule '{}'", event_str);
                 }
-                task_log!(
-                    worker,
+                info!(
+                    tasklog = true,
                     "sync datastore '{}' from '{}/{}'",
                     sync_job.store,
                     sync_job.remote,
                     sync_job.remote_store,
                 );
 
-                pull_store(&worker, &client, pull_params).await?;
+                pull_store(&client, pull_params).await?;
 
-                task_log!(worker, "sync job '{}' end", &job_id);
+                info!(tasklog = true, "sync job '{}' end", &job_id);
 
                 Ok(())
             };
@@ -276,21 +276,18 @@ async fn pull(
         auth_id.to_string(),
         true,
         move |worker| async move {
-            task_log!(
-                worker,
-                "pull datastore '{}' from '{}/{}'",
-                store,
-                remote,
-                remote_store,
+            info!(
+                tasklog = true,
+                "pull datastore '{}' from '{}/{}'", store, remote, remote_store,
             );
 
-            let pull_future = pull_store(&worker, &client, pull_params);
+            let pull_future = pull_store(&client, pull_params);
             (select! {
                 success = pull_future.fuse() => success,
                 abort = worker.abort_future().map(|_| Err(format_err!("pull aborted"))) => abort,
             })?;
 
-            task_log!(worker, "pull datastore '{}' end", store);
+            info!(tasklog = true, "pull datastore '{}' end", store);
 
             Ok(())
         },
diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index 2f9385a7..5f50a0b0 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -2,11 +2,12 @@ use std::sync::{Arc, Mutex};
 
 use anyhow::{bail, format_err, Error};
 use serde_json::Value;
+use tracing::{info, warn};
 
 use proxmox_lang::try_block;
 use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::api;
-use proxmox_sys::{task_log, task_warn, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 
 use pbs_api_types::{
     print_ns_and_snapshot, print_store_and_ns, Authid, GroupFilter, MediaPoolConfig, Operation,
@@ -181,7 +182,7 @@ pub fn do_tape_backup_job(
             let job_result = try_block!({
                 if schedule.is_some() {
                     // for scheduled tape backup jobs, we wait indefinitely for the lock
-                    task_log!(worker, "waiting for drive lock...");
+                    info!(tasklog = true, "waiting for drive lock...");
                     loop {
                         worker.check_abort()?;
                         match lock_tape_device(&drive_config, &setup.drive) {
@@ -196,9 +197,9 @@ pub fn do_tape_backup_job(
                 }
                 set_tape_device_state(&setup.drive, &worker.upid().to_string())?;
 
-                task_log!(worker, "Starting tape backup job '{}'", job_id);
+                info!(tasklog = true, "Starting tape backup job '{}'", job_id);
                 if let Some(event_str) = schedule {
-                    task_log!(worker, "task triggered by schedule '{}'", event_str);
+                    info!(tasklog = true, "task triggered by schedule '{}'", event_str);
                 }
 
                 backup_worker(
@@ -392,7 +393,7 @@ fn backup_worker(
 ) -> Result<(), Error> {
     let start = std::time::Instant::now();
 
-    task_log!(worker, "update media online status");
+    info!(tasklog = true, "update media online status");
     let changer_name = update_media_online_status(&setup.drive)?;
 
     let root_namespace = setup.ns.clone().unwrap_or_default();
@@ -400,8 +401,7 @@ fn backup_worker(
 
     let pool = MediaPool::with_config(TAPE_STATUS_DIR, pool_config, changer_name, false)?;
 
-    let mut pool_writer =
-        PoolWriter::new(pool, &setup.drive, worker, email, force_media_set, ns_magic)?;
+    let mut pool_writer = PoolWriter::new(pool, &setup.drive, email, force_media_set, ns_magic)?;
 
     let mut group_list = Vec::new();
     let namespaces = datastore.recursive_iter_backup_ns_ok(root_namespace, setup.max_depth)?;
@@ -422,16 +422,14 @@ fn backup_worker(
             .filter(|group| filter_fn(group, group_filters))
             .collect();
         let group_count = list.len();
-        task_log!(
-            worker,
-            "found {} groups (out of {} total)",
-            group_count,
-            group_count_full
+        info!(
+            tasklog = true,
+            "found {} groups (out of {} total)", group_count, group_count_full
         );
         (list, group_count)
     } else {
         let group_count = group_list.len();
-        task_log!(worker, "found {} groups", group_count);
+        info!(tasklog = true, "found {} groups", group_count);
         (group_list, group_count)
     };
 
@@ -440,8 +438,8 @@ fn backup_worker(
     let latest_only = setup.latest_only.unwrap_or(false);
 
     if latest_only {
-        task_log!(
-            worker,
+        info!(
+            tasklog = true,
             "latest-only: true (only considering latest snapshots)"
         );
     }
@@ -466,8 +464,8 @@ fn backup_worker(
             .collect();
 
         if snapshot_list.is_empty() {
-            task_log!(
-                worker,
+            info!(
+                tasklog = true,
                 "{}, group {} was empty",
                 print_store_and_ns(datastore_name, group.backup_ns()),
                 group.group()
@@ -487,7 +485,7 @@ fn backup_worker(
                     info.backup_dir.backup_ns(),
                     info.backup_dir.as_ref(),
                 ) {
-                    task_log!(worker, "skip snapshot {}", rel_path);
+                    info!(tasklog = true, "skip snapshot {}", rel_path);
                     continue;
                 }
 
@@ -500,7 +498,7 @@ fn backup_worker(
                     SnapshotBackupResult::Ignored => {}
                 }
                 progress.done_snapshots = 1;
-                task_log!(worker, "percentage done: {}", progress);
+                info!(tasklog = true, "percentage done: {}", progress);
             }
         } else {
             progress.group_snapshots = snapshot_list.len() as u64;
@@ -513,7 +511,7 @@ fn backup_worker(
                     info.backup_dir.backup_ns(),
                     info.backup_dir.as_ref(),
                 ) {
-                    task_log!(worker, "skip snapshot {}", rel_path);
+                    info!(tasklog = true, "skip snapshot {}", rel_path);
                     continue;
                 }
 
@@ -526,7 +524,7 @@ fn backup_worker(
                     SnapshotBackupResult::Ignored => {}
                 }
                 progress.done_snapshots = snapshot_number as u64 + 1;
-                task_log!(worker, "percentage done: {}", progress);
+                info!(tasklog = true, "percentage done: {}", progress);
             }
         }
     }
@@ -534,18 +532,18 @@ fn backup_worker(
     pool_writer.commit()?;
 
     if need_catalog {
-        task_log!(worker, "append media catalog");
+        info!(tasklog = true, "append media catalog");
 
         let uuid = pool_writer.load_writable_media(worker)?;
-        let done = pool_writer.append_catalog_archive(worker)?;
+        let done = pool_writer.append_catalog_archive()?;
         if !done {
-            task_log!(
-                worker,
+            info!(
+                tasklog = true,
                 "catalog does not fit on tape, writing to next volume"
             );
             pool_writer.set_media_status_full(&uuid)?;
             pool_writer.load_writable_media(worker)?;
-            let done = pool_writer.append_catalog_archive(worker)?;
+            let done = pool_writer.append_catalog_archive()?;
             if !done {
                 bail!("write_catalog_archive failed on second media");
             }
@@ -553,9 +551,9 @@ fn backup_worker(
     }
 
     if setup.export_media_set.unwrap_or(false) {
-        pool_writer.export_media_set(worker)?;
+        pool_writer.export_media_set()?;
     } else if setup.eject_media.unwrap_or(false) {
-        pool_writer.eject_media(worker)?;
+        pool_writer.eject_media()?;
     }
 
     if errors {
@@ -565,7 +563,10 @@ fn backup_worker(
     summary.used_tapes = match pool_writer.get_used_media_labels() {
         Ok(tapes) => Some(tapes),
         Err(err) => {
-            task_warn!(worker, "could not collect list of used tapes: {err}");
+            warn!(
+                tasklog = true,
+                "could not collect list of used tapes: {err}"
+            );
             None
         }
     };
@@ -599,7 +600,7 @@ fn backup_snapshot(
     snapshot: BackupDir,
 ) -> Result<SnapshotBackupResult, Error> {
     let snapshot_path = snapshot.relative_path();
-    task_log!(worker, "backup snapshot {:?}", snapshot_path);
+    info!(tasklog = true, "backup snapshot {:?}", snapshot_path);
 
     let snapshot_reader = match snapshot.locked_reader() {
         Ok(reader) => reader,
@@ -607,14 +608,15 @@ fn backup_snapshot(
             if !snapshot.full_path().exists() {
                 // we got an error and the dir does not exist,
                 // it probably just vanished, so continue
-                task_log!(worker, "snapshot {:?} vanished, skipping", snapshot_path);
+                info!(
+                    tasklog = true,
+                    "snapshot {:?} vanished, skipping", snapshot_path
+                );
                 return Ok(SnapshotBackupResult::Ignored);
             }
-            task_warn!(
-                worker,
-                "failed opening snapshot {:?}: {}",
-                snapshot_path,
-                err
+            warn!(
+                tasklog = true,
+                "failed opening snapshot {:?}: {}", snapshot_path, err
             );
             return Ok(SnapshotBackupResult::Error);
         }
@@ -661,7 +663,7 @@ fn backup_snapshot(
 
     let snapshot_reader = snapshot_reader.lock().unwrap();
 
-    let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
+    let (done, _bytes) = pool_writer.append_snapshot_archive(&snapshot_reader)?;
 
     if !done {
         // does not fit on tape, so we try on next volume
@@ -670,15 +672,15 @@ fn backup_snapshot(
         worker.check_abort()?;
 
         pool_writer.load_writable_media(worker)?;
-        let (done, _bytes) = pool_writer.append_snapshot_archive(worker, &snapshot_reader)?;
+        let (done, _bytes) = pool_writer.append_snapshot_archive(&snapshot_reader)?;
 
         if !done {
             bail!("write_snapshot_archive failed on second media");
         }
     }
 
-    task_log!(
-        worker,
+    info!(
+        tasklog = true,
         "end backup {}:{:?}",
         datastore.name(),
         snapshot_path
diff --git a/src/api2/tape/drive.rs b/src/api2/tape/drive.rs
index 5306e605..f3550328 100644
--- a/src/api2/tape/drive.rs
+++ b/src/api2/tape/drive.rs
@@ -4,6 +4,7 @@ use std::sync::Arc;
 
 use anyhow::{bail, format_err, Error};
 use serde_json::Value;
+use tracing::{info, warn};
 
 use proxmox_router::{
     list_subdirs_api_method, Permission, Router, RpcEnvironment, RpcEnvironmentType, SubdirMap,
@@ -11,7 +12,6 @@ use proxmox_router::{
 use proxmox_schema::api;
 use proxmox_section_config::SectionConfigData;
 use proxmox_sortable_macro::sortable;
-use proxmox_sys::{task_log, task_warn};
 use proxmox_uuid::Uuid;
 
 use pbs_api_types::{
@@ -131,12 +131,10 @@ pub fn load_media(
         drive.clone(),
         "load-media",
         Some(job_id),
-        move |worker, config| {
-            task_log!(
-                worker,
-                "loading media '{}' into drive '{}'",
-                label_text,
-                drive
+        move |_worker, config| {
+            info!(
+                tasklog = true,
+                "loading media '{}' into drive '{}'", label_text, drive
             );
             let (mut changer, _) = required_media_changer(&config, &drive)?;
             changer.load_media(&label_text)?;
@@ -250,8 +248,8 @@ pub fn unload(
         drive.clone(),
         "unload-media",
         Some(drive.clone()),
-        move |worker, config| {
-            task_log!(worker, "unloading media from drive '{}'", drive);
+        move |_worker, config| {
+            info!(tasklog = true, "unloading media from drive '{}'", drive);
 
             let (mut changer, _) = required_media_changer(&config, &drive)?;
             changer.unload_media(target_slot)?;
@@ -299,9 +297,9 @@ pub fn format_media(
         drive.clone(),
         "format-media",
         Some(drive.clone()),
-        move |worker, config| {
+        move |_worker, config| {
             if let Some(ref label) = label_text {
-                task_log!(worker, "try to load media '{}'", label);
+                info!(tasklog = true, "try to load media '{}'", label);
                 if let Some((mut changer, _)) = media_changer(&config, &drive)? {
                     changer.load_media(label)?;
                 }
@@ -315,15 +313,15 @@ pub fn format_media(
                         bail!("expected label '{}', found unrelated data", label);
                     }
                     /* assume drive contains no or unrelated data */
-                    task_log!(worker, "unable to read media label: {}", err);
-                    task_log!(worker, "format anyways");
+                    info!(tasklog = true, "unable to read media label: {}", err);
+                    info!(tasklog = true, "format anyways");
                     handle.format_media(fast.unwrap_or(true))?;
                 }
                 Ok((None, _)) => {
                     if let Some(label) = label_text {
                         bail!("expected label '{}', found empty tape", label);
                     }
-                    task_log!(worker, "found empty media - format anyways");
+                    info!(tasklog = true, "found empty media - format anyways");
                     handle.format_media(fast.unwrap_or(true))?;
                 }
                 Ok((Some(media_id), _key_config)) => {
@@ -337,8 +335,8 @@ pub fn format_media(
                         }
                     }
 
-                    task_log!(
-                        worker,
+                    info!(
+                        tasklog = true,
                         "found media '{}' with uuid '{}'",
                         media_id.label.label_text,
                         media_id.label.uuid,
@@ -489,7 +487,7 @@ pub fn label_media(
         drive.clone(),
         "label-media",
         Some(drive.clone()),
-        move |worker, config| {
+        move |_worker, config| {
             let mut drive = open_drive(&config, &drive)?;
 
             drive.rewind()?;
@@ -511,7 +509,7 @@ pub fn label_media(
                 pool: pool.clone(),
             };
 
-            write_media_label(worker, &mut drive, label, pool)
+            write_media_label(&mut drive, label, pool)
         },
     )?;
 
@@ -519,24 +517,20 @@ pub fn label_media(
 }
 
 fn write_media_label(
-    worker: Arc<WorkerTask>,
     drive: &mut Box<dyn TapeDriver>,
     label: MediaLabel,
     pool: Option<String>,
 ) -> Result<(), Error> {
     drive.label_tape(&label)?;
     if let Some(ref pool) = pool {
-        task_log!(
-            worker,
-            "Label media '{}' for pool '{}'",
-            label.label_text,
-            pool
+        info!(
+            tasklog = true,
+            "Label media '{}' for pool '{}'", label.label_text, pool
         );
     } else {
-        task_log!(
-            worker,
-            "Label media '{}' (no pool assignment)",
-            label.label_text
+        info!(
+            tasklog = true,
+            "Label media '{}' (no pool assignment)", label.label_text
         );
     }
 
@@ -729,10 +723,10 @@ pub fn clean_drive(drive: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
         drive.clone(),
         "clean-drive",
         Some(drive.clone()),
-        move |worker, config| {
+        move |_worker, config| {
             let (mut changer, _changer_name) = required_media_changer(&config, &drive)?;
 
-            task_log!(worker, "Starting drive clean");
+            info!(tasklog = true, "Starting drive clean");
 
             changer.clean_drive()?;
 
@@ -743,7 +737,7 @@ pub fn clean_drive(drive: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
                 // test for critical tape alert flags
                 if let Ok(alert_flags) = handle.tape_alert_flags() {
                     if !alert_flags.is_empty() {
-                        task_log!(worker, "TapeAlertFlags: {:?}", alert_flags);
+                        info!(tasklog = true, "TapeAlertFlags: {:?}", alert_flags);
                         if tape_alert_flags_critical(alert_flags) {
                             bail!("found critical tape alert flags: {:?}", alert_flags);
                         }
@@ -752,13 +746,16 @@ pub fn clean_drive(drive: String, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
 
                 // test wearout (max. 50 mounts)
                 if let Ok(volume_stats) = handle.volume_statistics() {
-                    task_log!(worker, "Volume mounts: {}", volume_stats.volume_mounts);
+                    info!(
+                        tasklog = true,
+                        "Volume mounts: {}", volume_stats.volume_mounts
+                    );
                     let wearout = volume_stats.volume_mounts * 2; // (*100.0/50.0);
-                    task_log!(worker, "Cleaning tape wearout: {}%", wearout);
+                    info!(tasklog = true, "Cleaning tape wearout: {}%", wearout);
                 }
             }
 
-            task_log!(worker, "Drive cleaned successfully");
+            info!(tasklog = true, "Drive cleaned successfully");
 
             Ok(())
         },
@@ -880,12 +877,15 @@ pub fn update_inventory(
         drive.clone(),
         "inventory-update",
         Some(drive.clone()),
-        move |worker, config| {
+        move |_worker, config| {
             let (mut changer, changer_name) = required_media_changer(&config, &drive)?;
 
             let label_text_list = changer.online_media_label_texts()?;
             if label_text_list.is_empty() {
-                task_log!(worker, "changer device does not list any media labels");
+                info!(
+                    tasklog = true,
+                    "changer device does not list any media labels"
+                );
             }
 
             let mut inventory = Inventory::load(TAPE_STATUS_DIR)?;
@@ -894,7 +894,7 @@ pub fn update_inventory(
 
             for label_text in label_text_list.iter() {
                 if label_text.starts_with("CLN") {
-                    task_log!(worker, "skip cleaning unit '{}'", label_text);
+                    info!(tasklog = true, "skip cleaning unit '{}'", label_text);
                     continue;
                 }
 
@@ -903,42 +903,43 @@ pub fn update_inventory(
                 if !read_all_labels {
                     if let Some(media_id) = inventory.find_media_by_label_text(&label_text) {
                         if !catalog || MediaCatalog::exists(TAPE_STATUS_DIR, &media_id.label.uuid) {
-                            task_log!(worker, "media '{}' already inventoried", label_text);
+                            info!(tasklog = true, "media '{}' already inventoried", label_text);
                             continue;
                         }
                     }
                 }
 
                 if let Err(err) = changer.load_media(&label_text) {
-                    task_warn!(worker, "unable to load media '{}' - {}", label_text, err);
+                    warn!(
+                        tasklog = true,
+                        "unable to load media '{}' - {}", label_text, err
+                    );
                     continue;
                 }
 
                 let mut drive = open_drive(&config, &drive)?;
                 match drive.read_label() {
                     Err(err) => {
-                        task_warn!(
-                            worker,
-                            "unable to read label form media '{}' - {}",
-                            label_text,
-                            err
+                        warn!(
+                            tasklog = true,
+                            "unable to read label from media '{}' - {}", label_text, err
                         );
                     }
                     Ok((None, _)) => {
-                        task_log!(worker, "media '{}' is empty", label_text);
+                        info!(tasklog = true, "media '{}' is empty", label_text);
                     }
                     Ok((Some(media_id), _key_config)) => {
                         if label_text != media_id.label.label_text {
-                            task_warn!(
-                                worker,
+                            warn!(
+                                tasklog = true,
                                 "label text mismatch ({} != {})",
                                 label_text,
                                 media_id.label.label_text
                             );
                             continue;
                         }
-                        task_log!(
-                            worker,
+                        info!(
+                            tasklog = true,
                             "inventorize media '{}' with uuid '{}'",
                             label_text,
                             media_id.label.uuid
@@ -962,13 +963,12 @@ pub fn update_inventory(
                             if catalog {
                                 let media_set = inventory.compute_media_set_members(&set.uuid)?;
                                 if let Err(err) = fast_catalog_restore(
-                                    &worker,
                                     &mut drive,
                                     &media_set,
                                     &media_id.label.uuid,
                                 ) {
-                                    task_warn!(
-                                        worker,
+                                    warn!(
+                                        tasklog = true,
                                         "could not restore catalog for {label_text}: {err}"
                                     );
                                 }
@@ -1026,14 +1026,13 @@ pub fn barcode_label_media(
         drive.clone(),
         "barcode-label-media",
         Some(drive.clone()),
-        move |worker, config| barcode_label_media_worker(worker, drive, &config, pool),
+        move |_worker, config| barcode_label_media_worker(drive, &config, pool),
     )?;
 
     Ok(upid_str.into())
 }
 
 fn barcode_label_media_worker(
-    worker: Arc<WorkerTask>,
     drive: String,
     drive_config: &SectionConfigData,
     pool: Option<String>,
@@ -1065,18 +1064,20 @@ fn barcode_label_media_worker(
 
         inventory.reload()?;
         if inventory.find_media_by_label_text(&label_text).is_some() {
-            task_log!(
-                worker,
-                "media '{}' already inventoried (already labeled)",
-                label_text
+            info!(
+                tasklog = true,
+                "media '{}' already inventoried (already labeled)", label_text
             );
             continue;
         }
 
-        task_log!(worker, "checking/loading media '{}'", label_text);
+        info!(tasklog = true, "checking/loading media '{}'", label_text);
 
         if let Err(err) = changer.load_media(&label_text) {
-            task_warn!(worker, "unable to load media '{}' - {}", label_text, err);
+            warn!(
+                tasklog = true,
+                "unable to load media '{}' - {}", label_text, err
+            );
             continue;
         }
 
@@ -1085,20 +1086,18 @@ fn barcode_label_media_worker(
 
         match drive.read_next_file() {
             Ok(_reader) => {
-                task_log!(
-                    worker,
-                    "media '{}' is not empty (format it first)",
-                    label_text
+                info!(
+                    tasklog = true,
+                    "media '{}' is not empty (format it first)", label_text
                 );
                 continue;
             }
             Err(BlockReadError::EndOfFile) => { /* EOF mark at BOT, assume tape is empty */ }
             Err(BlockReadError::EndOfStream) => { /* tape is empty */ }
             Err(_err) => {
-                task_warn!(
-                    worker,
-                    "media '{}' read error (maybe not empty - format it first)",
-                    label_text
+                warn!(
+                    tasklog = true,
+                    "media '{}' read error (maybe not empty - format it first)", label_text
                 );
                 continue;
             }
@@ -1112,7 +1111,7 @@ fn barcode_label_media_worker(
             pool: pool.clone(),
         };
 
-        write_media_label(worker.clone(), &mut drive, label, pool.clone())?
+        write_media_label(&mut drive, label, pool.clone())?
     }
 
     Ok(())
@@ -1271,14 +1270,14 @@ pub fn catalog_media(
 
             let media_id = match drive.read_label()? {
                 (Some(media_id), key_config) => {
-                    task_log!(
-                        worker,
+                    info!(
+                        tasklog = true,
                         "found media label: {}",
                         serde_json::to_string_pretty(&serde_json::to_value(&media_id)?)?
                     );
                     if key_config.is_some() {
-                        task_log!(
-                            worker,
+                        info!(
+                            tasklog = true,
                             "encryption key config: {}",
                             serde_json::to_string_pretty(&serde_json::to_value(&key_config)?)?
                         );
@@ -1292,7 +1291,7 @@ pub fn catalog_media(
 
             let (_media_set_lock, media_set_uuid) = match media_id.media_set_label {
                 None => {
-                    task_log!(worker, "media is empty");
+                    info!(tasklog = true, "media is empty");
                     let _pool_lock = if let Some(pool) = media_id.pool() {
                         lock_media_pool(TAPE_STATUS_DIR, &pool)?
                     } else {
@@ -1305,7 +1304,7 @@ pub fn catalog_media(
                 Some(ref set) => {
                     if set.unassigned() {
                         // media is empty
-                        task_log!(worker, "media is empty");
+                        info!(tasklog = true, "media is empty");
                         let _lock = lock_unassigned_media_pool(TAPE_STATUS_DIR)?;
                         MediaCatalog::destroy(TAPE_STATUS_DIR, &media_id.label.uuid)?;
                         inventory.store(media_id.clone(), false)?;
@@ -1336,14 +1335,17 @@ pub fn catalog_media(
             if !scan {
                 let media_set = inventory.compute_media_set_members(media_set_uuid)?;
 
-                if fast_catalog_restore(&worker, &mut drive, &media_set, &media_id.label.uuid)? {
+                if fast_catalog_restore(&mut drive, &media_set, &media_id.label.uuid)? {
                     return Ok(());
                 }
 
-                task_log!(worker, "no catalog found");
+                info!(tasklog = true, "no catalog found");
             }
 
-            task_log!(worker, "scanning entire media to reconstruct catalog");
+            info!(
+                tasklog = true,
+                "scanning entire media to reconstruct catalog"
+            );
 
             drive.rewind()?;
             drive.read_label()?; // skip over labels - we already read them above
diff --git a/src/api2/tape/restore.rs b/src/api2/tape/restore.rs
index 7b6c8978..0a138e7b 100644
--- a/src/api2/tape/restore.rs
+++ b/src/api2/tape/restore.rs
@@ -6,6 +6,7 @@ use std::sync::Arc;
 
 use anyhow::{bail, format_err, Error};
 use serde_json::Value;
+use tracing::{info, warn};
 
 use proxmox_human_byte::HumanByte;
 use proxmox_io::ReadExt;
@@ -13,7 +14,7 @@ use proxmox_router::{Permission, Router, RpcEnvironment, RpcEnvironmentType};
 use proxmox_schema::{api, ApiType};
 use proxmox_section_config::SectionConfigData;
 use proxmox_sys::fs::{replace_file, CreateOptions};
-use proxmox_sys::{task_log, task_warn, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 use proxmox_uuid::Uuid;
 
 use pbs_api_types::{
@@ -399,8 +400,8 @@ pub fn restore(
                 .and_then(lookup_user_email)
                 .or_else(|| lookup_user_email(&auth_id.clone().into()));
 
-            task_log!(worker, "Mediaset '{media_set}'");
-            task_log!(worker, "Pool: {pool}");
+            info!(tasklog = true, "Mediaset '{media_set}'");
+            info!(tasklog = true, "Pool: {pool}");
 
             let res = if snapshots.is_some() || namespaces {
                 restore_list_worker(
@@ -430,10 +431,13 @@ pub fn restore(
                 )
             };
             if res.is_ok() {
-                task_log!(worker, "Restore mediaset '{media_set}' done");
+                info!(tasklog = true, "Restore mediaset '{media_set}' done");
             }
             if let Err(err) = set_tape_device_state(&drive, "") {
-                task_log!(worker, "could not unset drive state for {drive}: {err}");
+                info!(
+                    tasklog = true,
+                    "could not unset drive state for {drive}: {err}"
+                );
             }
 
             res
@@ -484,7 +488,7 @@ fn restore_full_worker(
     }
 
     if let Some(fingerprint) = encryption_key_fingerprint {
-        task_log!(worker, "Encryption key fingerprint: {fingerprint}");
+        info!(tasklog = true, "Encryption key fingerprint: {fingerprint}");
     }
 
     let used_datastores = store_map.used_datastores();
@@ -493,13 +497,9 @@ fn restore_full_worker(
         .map(|(t, _)| String::from(t.name()))
         .collect::<Vec<String>>()
         .join(", ");
-    task_log!(worker, "Datastore(s): {datastore_list}",);
-    task_log!(worker, "Drive: {drive_name}");
-    log_required_tapes(
-        &worker,
-        &inventory,
-        media_id_list.iter().map(|id| &id.label.uuid),
-    );
+    info!(tasklog = true, "Datastore(s): {datastore_list}",);
+    info!(tasklog = true, "Drive: {drive_name}");
+    log_required_tapes(&inventory, media_id_list.iter().map(|id| &id.label.uuid));
 
     let mut datastore_locks = Vec::new();
     for (target, _) in used_datastores.values() {
@@ -529,7 +529,6 @@ fn restore_full_worker(
 
 #[allow(clippy::too_many_arguments)]
 fn check_snapshot_restorable(
-    worker: &WorkerTask,
     store_map: &DataStoreMap,
     store: &str,
     snapshot: &str,
@@ -570,7 +569,10 @@ fn check_snapshot_restorable(
             auth_id,
             Some(restore_owner),
         ) {
-            task_warn!(worker, "cannot restore {store}:{snapshot} to {ns}: '{err}'");
+            warn!(
+                tasklog = true,
+                "cannot restore {store}:{snapshot} to {ns}: '{err}'"
+            );
             continue;
         }
 
@@ -578,8 +580,8 @@ fn check_snapshot_restorable(
         if let Ok(owner) = datastore.get_owner(&ns, dir.as_ref()) {
             if restore_owner != &owner {
                 // only the owner is allowed to create additional snapshots
-                task_warn!(
-                    worker,
+                warn!(
+                    tasklog = true,
                     "restore  of '{snapshot}' to {ns} failed, owner check failed ({restore_owner} \
                     != {owner})",
                 );
@@ -590,8 +592,8 @@ fn check_snapshot_restorable(
         have_some_permissions = true;
 
         if datastore.snapshot_path(&ns, dir).exists() {
-            task_warn!(
-                worker,
+            warn!(
+                tasklog = true,
                 "found snapshot {snapshot} on target datastore/namespace, skipping...",
             );
             continue;
@@ -606,11 +608,7 @@ fn check_snapshot_restorable(
     Ok(can_restore_some)
 }
 
-fn log_required_tapes<'a>(
-    worker: &WorkerTask,
-    inventory: &Inventory,
-    list: impl Iterator<Item = &'a Uuid>,
-) {
+fn log_required_tapes<'a>(inventory: &Inventory, list: impl Iterator<Item = &'a Uuid>) {
     let mut tape_list = list
         .map(|uuid| {
             inventory
@@ -622,7 +620,11 @@ fn log_required_tapes<'a>(
         })
         .collect::<Vec<&str>>();
     tape_list.sort_unstable();
-    task_log!(worker, "Required media list: {}", tape_list.join(";"));
+    info!(
+        tasklog = true,
+        "Required media list: {}",
+        tape_list.join(";")
+    );
 }
 
 #[allow(clippy::too_many_arguments)]
@@ -654,14 +656,13 @@ fn restore_list_worker(
                 let (ns, dir) = match parse_ns_and_snapshot(snapshot) {
                     Ok((ns, dir)) if store_map.has_full_mapping(store, &ns) => (ns, dir),
                     Err(err) => {
-                        task_warn!(worker, "couldn't parse snapshot {snapshot} - {err}");
+                        warn!(tasklog = true, "couldn't parse snapshot {snapshot} - {err}");
                         continue;
                     }
                     _ => continue,
                 };
                 let snapshot = print_ns_and_snapshot(&ns, &dir);
                 match check_snapshot_restorable(
-                    &worker,
                     &store_map,
                     store,
                     &snapshot,
@@ -675,7 +676,7 @@ fn restore_list_worker(
                     Ok(true) => restorable.push((store.to_string(), snapshot.to_string(), ns, dir)),
                     Ok(false) => {}
                     Err(err) => {
-                        task_warn!(worker, "{err}");
+                        warn!(tasklog = true, "{err}");
                         skipped.push(format!("{store}:{snapshot}"));
                     }
                 }
@@ -693,7 +694,6 @@ fn restore_list_worker(
                     match parse_ns_and_snapshot(snapshot) {
                         Ok((ns, dir)) => {
                             match check_snapshot_restorable(
-                                &worker,
                                 &store_map,
                                 store,
                                 snapshot,
@@ -709,14 +709,14 @@ fn restore_list_worker(
                                 }
                                 Ok(false) => None,
                                 Err(err) => {
-                                    task_warn!(worker, "{err}");
+                                    warn!(tasklog = true, "{err}");
                                     skipped.push(format!("{store}:{snapshot}"));
                                     None
                                 }
                             }
                         }
                         Err(err) => {
-                            task_warn!(worker, "could not restore {store_snapshot}: {err}");
+                            warn!(tasklog = true, "could not restore {store_snapshot}: {err}");
                             skipped.push(store_snapshot);
                             None
                         }
@@ -734,8 +734,8 @@ fn restore_list_worker(
                     let media_id = inventory.lookup_media(media_uuid).unwrap();
                     (media_id, file_num)
                 } else {
-                    task_warn!(
-                        worker,
+                    warn!(
+                        tasklog = true,
                         "did not find snapshot '{store}:{snapshot}' in media set",
                     );
                     skipped.push(format!("{store}:{snapshot}"));
@@ -750,26 +750,31 @@ fn restore_list_worker(
                 .or_insert_with(Vec::new);
             file_list.push(file_num);
 
-            task_log!(
-                worker,
-                "found snapshot {snapshot} on {}: file {file_num}",
-                media_id.label.label_text,
+            info!(
+                tasklog = true,
+                "found snapshot {snapshot} on {}: file {file_num}", media_id.label.label_text,
             );
         }
 
         if snapshot_file_hash.is_empty() {
-            task_log!(worker, "nothing to restore, skipping remaining phases...");
+            info!(
+                tasklog = true,
+                "nothing to restore, skipping remaining phases..."
+            );
             if !skipped.is_empty() {
-                task_log!(worker, "skipped the following snapshots:");
+                info!(tasklog = true, "skipped the following snapshots:");
                 for snap in skipped {
-                    task_log!(worker, "  {snap}");
+                    info!(tasklog = true, "  {snap}");
                 }
             }
             return Ok(());
         }
 
-        task_log!(worker, "Phase 1: temporarily restore snapshots to temp dir");
-        log_required_tapes(&worker, &inventory, snapshot_file_hash.keys());
+        info!(
+            tasklog = true,
+            "Phase 1: temporarily restore snapshots to temp dir"
+        );
+        log_required_tapes(&inventory, snapshot_file_hash.keys());
         let mut datastore_chunk_map: HashMap<String, HashSet<[u8; 32]>> = HashMap::new();
         let mut tmp_paths = Vec::new();
         for (media_uuid, file_list) in snapshot_file_hash.iter_mut() {
@@ -822,10 +827,13 @@ fn restore_list_worker(
         drop(catalog);
 
         if !media_file_chunk_map.is_empty() {
-            task_log!(worker, "Phase 2: restore chunks to datastores");
-            log_required_tapes(&worker, &inventory, media_file_chunk_map.keys());
+            info!(tasklog = true, "Phase 2: restore chunks to datastores");
+            log_required_tapes(&inventory, media_file_chunk_map.keys());
         } else {
-            task_log!(worker, "All chunks are already present, skip phase 2...");
+            info!(
+                tasklog = true,
+                "All chunks are already present, skip phase 2..."
+            );
         }
 
         for (media_uuid, file_chunk_map) in media_file_chunk_map.iter_mut() {
@@ -840,8 +848,8 @@ fn restore_list_worker(
             restore_file_chunk_map(worker.clone(), &mut drive, &store_map, file_chunk_map)?;
         }
 
-        task_log!(
-            worker,
+        info!(
+            tasklog = true,
             "Phase 3: copy snapshots from temp dir to datastores"
         );
         let mut errors = false;
@@ -900,18 +908,18 @@ fn restore_list_worker(
 
                         Ok(())
                     }) {
-                        task_warn!(
-                            worker,
+                        warn!(
+                            tasklog = true,
                             "could not restore {source_datastore}:{snapshot}: '{err}'"
                         );
                         skipped.push(format!("{source_datastore}:{snapshot}"));
                     }
                 }
-                task_log!(worker, "Restore snapshot '{}' done", snapshot);
+                info!(tasklog = true, "Restore snapshot '{}' done", snapshot);
                 Ok::<_, Error>(())
             }) {
-                task_warn!(
-                    worker,
+                warn!(
+                    tasklog = true,
                     "could not copy {source_datastore}:{snapshot}: {err}"
                 );
                 errors = true;
@@ -923,7 +931,10 @@ fn restore_list_worker(
                 std::fs::remove_dir_all(&tmp_path)
                     .map_err(|err| format_err!("remove_dir_all failed - {err}"))
             }) {
-                task_warn!(worker, "could not clean up temp dir {tmp_path:?}: {err}");
+                warn!(
+                    tasklog = true,
+                    "could not clean up temp dir {tmp_path:?}: {err}"
+                );
                 errors = true;
             };
         }
@@ -932,17 +943,20 @@ fn restore_list_worker(
             bail!("errors during copy occurred");
         }
         if !skipped.is_empty() {
-            task_log!(worker, "(partially) skipped the following snapshots:");
+            info!(
+                tasklog = true,
+                "(partially) skipped the following snapshots:"
+            );
             for snap in skipped {
-                task_log!(worker, "  {snap}");
+                info!(tasklog = true, "  {snap}");
             }
         }
         Ok(())
     });
 
     if res.is_err() {
-        task_warn!(
-            worker,
+        warn!(
+            tasklog = true,
             "Error during restore, partially restored snapshots will NOT be cleaned up"
         );
     }
@@ -952,7 +966,7 @@ fn restore_list_worker(
         match std::fs::remove_dir_all(tmp_path) {
             Ok(()) => {}
             Err(err) if err.kind() == std::io::ErrorKind::NotFound => {}
-            Err(err) => task_warn!(worker, "error cleaning up: {}", err),
+            Err(err) => warn!(tasklog = true, "error cleaning up: {}", err),
         }
     }
 
@@ -1030,7 +1044,7 @@ fn restore_snapshots_to_tmpdir(
                 );
             }
             let encrypt_fingerprint = set.encryption_key_fingerprint.clone().map(|fp| {
-                task_log!(worker, "Encryption key fingerprint: {}", fp);
+                info!(tasklog = true, "Encryption key fingerprint: {}", fp);
                 (fp, set.uuid.clone())
             });
 
@@ -1041,13 +1055,13 @@ fn restore_snapshots_to_tmpdir(
     for file_num in file_list {
         let current_file_number = drive.current_file_number()?;
         if current_file_number != *file_num {
-            task_log!(
-                worker,
+            info!(
+                tasklog = true,
                 "was at file {current_file_number}, moving to {file_num}"
             );
             drive.move_to_file(*file_num)?;
             let current_file_number = drive.current_file_number()?;
-            task_log!(worker, "now at file {}", current_file_number);
+            info!(tasklog = true, "now at file {}", current_file_number);
         }
         let mut reader = drive.read_next_file()?;
 
@@ -1069,8 +1083,8 @@ fn restore_snapshots_to_tmpdir(
                 let source_datastore = archive_header.store;
                 let snapshot = archive_header.snapshot;
 
-                task_log!(
-                    worker,
+                info!(
+                    tasklog = true,
                     "File {file_num}: snapshot archive {source_datastore}:{snapshot}",
                 );
 
@@ -1079,8 +1093,8 @@ fn restore_snapshots_to_tmpdir(
                 let target_datastore = match store_map.target_store(&source_datastore) {
                     Some(datastore) => datastore,
                     None => {
-                        task_warn!(
-                            worker,
+                        warn!(
+                            tasklog = true,
                             "could not find target datastore for {source_datastore}:{snapshot}",
                         );
                         continue;
@@ -1136,10 +1150,13 @@ fn restore_file_chunk_map(
     for (nr, chunk_map) in file_chunk_map.iter_mut() {
         let current_file_number = drive.current_file_number()?;
         if current_file_number != *nr {
-            task_log!(worker, "was at file {current_file_number}, moving to {nr}");
+            info!(
+                tasklog = true,
+                "was at file {current_file_number}, moving to {nr}"
+            );
             drive.move_to_file(*nr)?;
             let current_file_number = drive.current_file_number()?;
-            task_log!(worker, "now at file {}", current_file_number);
+            info!(tasklog = true, "now at file {}", current_file_number);
         }
         let mut reader = drive.read_next_file()?;
         let header: MediaContentHeader = unsafe { reader.read_le_value()? };
@@ -1156,8 +1173,8 @@ fn restore_file_chunk_map(
 
                 let source_datastore = archive_header.store;
 
-                task_log!(
-                    worker,
+                info!(
+                    tasklog = true,
                     "File {nr}: chunk archive for datastore '{source_datastore}'",
                 );
 
@@ -1171,7 +1188,7 @@ fn restore_file_chunk_map(
                     datastore.clone(),
                     chunk_map,
                 )?;
-                task_log!(worker, "restored {count} chunks");
+                info!(tasklog = true, "restored {count} chunks");
             }
             _ => bail!("unexpected content magic {:?}", header.content_magic),
         }
@@ -1231,8 +1248,8 @@ fn restore_partial_chunk_archive<'a>(
 
     let elapsed = start_time.elapsed()?.as_secs_f64();
     let bytes = bytes.load(std::sync::atomic::Ordering::SeqCst) as f64;
-    task_log!(
-        worker,
+    info!(
+        tasklog = true,
         "restored {} ({:.2}/s)",
         HumanByte::new_decimal(bytes),
         HumanByte::new_decimal(bytes / elapsed),
@@ -1317,15 +1334,17 @@ pub fn restore_media(
         let current_file_number = drive.current_file_number()?;
         let reader = match drive.read_next_file() {
             Err(BlockReadError::EndOfFile) => {
-                task_log!(
-                    worker,
-                    "skip unexpected filemark at pos {}",
-                    current_file_number
+                info!(
+                    tasklog = true,
+                    "skip unexpected filemark at pos {}", current_file_number
                 );
                 continue;
             }
             Err(BlockReadError::EndOfStream) => {
-                task_log!(worker, "detected EOT after {} files", current_file_number);
+                info!(
+                    tasklog = true,
+                    "detected EOT after {} files", current_file_number
+                );
                 break;
             }
             Err(BlockReadError::Error(err)) => {
@@ -1389,12 +1408,9 @@ fn restore_archive<'a>(
             let datastore_name = archive_header.store;
             let snapshot = archive_header.snapshot;
 
-            task_log!(
-                worker,
-                "File {}: snapshot archive {}:{}",
-                current_file_number,
-                datastore_name,
-                snapshot
+            info!(
+                tasklog = true,
+                "File {}: snapshot archive {}:{}", current_file_number, datastore_name, snapshot
             );
 
             let (backup_ns, backup_dir) = parse_ns_and_snapshot(&snapshot)?;
@@ -1429,7 +1445,7 @@ fn restore_archive<'a>(
                     path.push(rel_path);
 
                     if is_new {
-                        task_log!(worker, "restore snapshot {}", backup_dir);
+                        info!(tasklog = true, "restore snapshot {}", backup_dir);
 
                         match restore_snapshot_archive(worker.clone(), reader, &path) {
                             Err(err) => {
@@ -1438,7 +1454,7 @@ fn restore_archive<'a>(
                             }
                             Ok(false) => {
                                 std::fs::remove_dir_all(&path)?;
-                                task_log!(worker, "skip incomplete snapshot {}", backup_dir);
+                                info!(tasklog = true, "skip incomplete snapshot {}", backup_dir);
                             }
                             Ok(true) => {
                                 catalog.register_snapshot(
@@ -1454,7 +1470,7 @@ fn restore_archive<'a>(
                         return Ok(());
                     }
                 } else {
-                    task_log!(worker, "skipping...");
+                    info!(tasklog = true, "skipping...");
                 }
             }
 
@@ -1481,11 +1497,9 @@ fn restore_archive<'a>(
 
             let source_datastore = archive_header.store;
 
-            task_log!(
-                worker,
-                "File {}: chunk archive for datastore '{}'",
-                current_file_number,
-                source_datastore
+            info!(
+                tasklog = true,
+                "File {}: chunk archive for datastore '{}'", current_file_number, source_datastore
             );
             let datastore = target
                 .as_ref()
@@ -1521,12 +1535,12 @@ fn restore_archive<'a>(
                         &source_datastore,
                         &chunks[..],
                     )?;
-                    task_log!(worker, "register {} chunks", chunks.len());
+                    info!(tasklog = true, "register {} chunks", chunks.len());
                     catalog.commit_if_large()?;
                 }
                 return Ok(());
             } else if target.is_some() {
-                task_log!(worker, "skipping...");
+                info!(tasklog = true, "skipping...");
             }
 
             reader.skip_data()?; // read all data
@@ -1537,11 +1551,9 @@ fn restore_archive<'a>(
             let archive_header: CatalogArchiveHeader = serde_json::from_slice(&header_data)
                 .map_err(|err| format_err!("unable to parse catalog archive header - {}", err))?;
 
-            task_log!(
-                worker,
-                "File {}: skip catalog '{}'",
-                current_file_number,
-                archive_header.uuid
+            info!(
+                tasklog = true,
+                "File {}: skip catalog '{}'", current_file_number, archive_header.uuid
             );
 
             reader.skip_data()?; // read all data
@@ -1576,7 +1588,7 @@ fn scan_chunk_archive<'a>(
 
                 // check if this is an aborted stream without end marker
                 if let Ok(false) = reader.has_end_marker() {
-                    task_log!(worker, "missing stream end marker");
+                    info!(tasklog = true, "missing stream end marker");
                     return Ok(None);
                 }
 
@@ -1588,7 +1600,7 @@ fn scan_chunk_archive<'a>(
         worker.check_abort()?;
 
         if verbose {
-            task_log!(worker, "Found chunk: {}", hex::encode(digest));
+            info!(tasklog = true, "Found chunk: {}", hex::encode(digest));
         }
 
         chunks.push(digest);
@@ -1612,8 +1624,6 @@ fn restore_chunk_archive<'a>(
     let bytes = Arc::new(std::sync::atomic::AtomicU64::new(0));
     let bytes2 = bytes.clone();
 
-    let worker2 = worker.clone();
-
     let writer_pool = ParallelHandler::new(
         "tape restore chunk writer",
         4,
@@ -1621,7 +1631,7 @@ fn restore_chunk_archive<'a>(
             let chunk_exists = datastore.cond_touch_chunk(&digest, false)?;
             if !chunk_exists {
                 if verbose {
-                    task_log!(worker2, "Insert chunk: {}", hex::encode(digest));
+                    info!(tasklog = true, "Insert chunk: {}", hex::encode(digest));
                 }
                 bytes2.fetch_add(chunk.raw_size(), std::sync::atomic::Ordering::SeqCst);
                 // println!("verify and write {}", hex::encode(&digest));
@@ -1632,7 +1642,11 @@ fn restore_chunk_archive<'a>(
 
                 datastore.insert_chunk(&chunk, &digest)?;
             } else if verbose {
-                task_log!(worker2, "Found existing chunk: {}", hex::encode(digest));
+                info!(
+                    tasklog = true,
+                    "Found existing chunk: {}",
+                    hex::encode(digest)
+                );
             }
             Ok(())
         },
@@ -1654,7 +1668,7 @@ fn restore_chunk_archive<'a>(
 
                 // check if this is an aborted stream without end marker
                 if let Ok(false) = reader.has_end_marker() {
-                    task_log!(worker, "missing stream end marker");
+                    info!(tasklog = true, "missing stream end marker");
                     return Ok(None);
                 }
 
@@ -1678,8 +1692,8 @@ fn restore_chunk_archive<'a>(
 
     let elapsed = start_time.elapsed()?.as_secs_f64();
     let bytes = bytes.load(std::sync::atomic::Ordering::SeqCst) as f64;
-    task_log!(
-        worker,
+    info!(
+        tasklog = true,
         "restored {} ({:.2}/s)",
         HumanByte::new_decimal(bytes),
         HumanByte::new_decimal(bytes / elapsed),
@@ -1830,7 +1844,6 @@ fn try_restore_snapshot_archive<R: pxar::decoder::SeqRead>(
 
 /// Try to restore media catalogs (from catalog_archives)
 pub fn fast_catalog_restore(
-    worker: &WorkerTask,
     drive: &mut Box<dyn TapeDriver>,
     media_set: &MediaSet,
     uuid: &Uuid, // current media Uuid
@@ -1851,14 +1864,17 @@ pub fn fast_catalog_restore(
             // limit reader scope
             let mut reader = match drive.read_next_file() {
                 Err(BlockReadError::EndOfFile) => {
-                    task_log!(
-                        worker,
+                    info!(
+                        tasklog = true,
                         "skip unexpected filemark at pos {current_file_number}"
                     );
                     continue;
                 }
                 Err(BlockReadError::EndOfStream) => {
-                    task_log!(worker, "detected EOT after {current_file_number} files");
+                    info!(
+                        tasklog = true,
+                        "detected EOT after {current_file_number} files"
+                    );
                     break;
                 }
                 Err(BlockReadError::Error(err)) => {
@@ -1875,7 +1891,10 @@ pub fn fast_catalog_restore(
             if header.content_magic == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_0
                 || header.content_magic == PROXMOX_BACKUP_CATALOG_ARCHIVE_MAGIC_1_1
             {
-                task_log!(worker, "found catalog at pos {}", current_file_number);
+                info!(
+                    tasklog = true,
+                    "found catalog at pos {}", current_file_number
+                );
 
                 let header_data = reader.read_exact_allocated(header.size as usize)?;
 
@@ -1885,10 +1904,9 @@ pub fn fast_catalog_restore(
                     })?;
 
                 if &archive_header.media_set_uuid != media_set.uuid() {
-                    task_log!(
-                        worker,
-                        "skipping unrelated catalog at pos {}",
-                        current_file_number
+                    info!(
+                        tasklog = true,
+                        "skipping unrelated catalog at pos {}", current_file_number
                     );
                     reader.skip_data()?; // read all data
                     continue;
@@ -1902,10 +1920,9 @@ pub fn fast_catalog_restore(
                 });
 
                 if !wanted {
-                    task_log!(
-                        worker,
-                        "skip catalog because media '{}' not inventarized",
-                        catalog_uuid
+                    info!(
+                        tasklog = true,
+                        "skip catalog because media '{}' not inventoried", catalog_uuid
                     );
                     reader.skip_data()?; // read all data
                     continue;
@@ -1916,10 +1933,9 @@ pub fn fast_catalog_restore(
                 } else {
                     // only restore if catalog does not exist
                     if MediaCatalog::exists(TAPE_STATUS_DIR, catalog_uuid) {
-                        task_log!(
-                            worker,
-                            "catalog for media '{}' already exists",
-                            catalog_uuid
+                        info!(
+                            tasklog = true,
+                            "catalog for media '{}' already exists", catalog_uuid
                         );
                         reader.skip_data()?; // read all data
                         continue;
@@ -1936,18 +1952,16 @@ pub fn fast_catalog_restore(
                 match MediaCatalog::parse_catalog_header(&mut file)? {
                     (true, Some(media_uuid), Some(media_set_uuid)) => {
                         if &media_uuid != catalog_uuid {
-                            task_log!(
-                                worker,
-                                "catalog uuid mismatch at pos {}",
-                                current_file_number
+                            info!(
+                                tasklog = true,
+                                "catalog uuid mismatch at pos {}", current_file_number
                             );
                             continue;
                         }
                         if media_set_uuid != archive_header.media_set_uuid {
-                            task_log!(
-                                worker,
-                                "catalog media_set mismatch at pos {}",
-                                current_file_number
+                            info!(
+                                tasklog = true,
+                                "catalog media_set mismatch at pos {}", current_file_number
                             );
                             continue;
                         }
@@ -1959,18 +1973,17 @@ pub fn fast_catalog_restore(
                         )?;
 
                         if catalog_uuid == uuid {
-                            task_log!(worker, "successfully restored catalog");
+                            info!(tasklog = true, "successfully restored catalog");
                             found_catalog = true
                         } else {
-                            task_log!(
-                                worker,
-                                "successfully restored related catalog {}",
-                                media_uuid
+                            info!(
+                                tasklog = true,
+                                "successfully restored related catalog {}", media_uuid
                             );
                         }
                     }
                     _ => {
-                        task_warn!(worker, "got incomplete catalog header - skip file");
+                        warn!(tasklog = true, "got incomplete catalog header - skip file");
                         continue;
                     }
                 }
@@ -1984,7 +1997,10 @@ pub fn fast_catalog_restore(
         }
         moved_to_eom = true;
 
-        task_log!(worker, "searching for catalog at EOT (moving to EOT)");
+        info!(
+            tasklog = true,
+            "searching for catalog at EOT (moving to EOT)"
+        );
         drive.move_to_last_file()?;
 
         let new_file_number = drive.current_file_number()?;
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index c972e532..77d5ee8b 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -5,8 +5,9 @@ use std::sync::{Arc, Mutex};
 use std::time::Instant;
 
 use anyhow::{bail, format_err, Error};
+use tracing::info;
 
-use proxmox_sys::{task_log, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 
 use pbs_api_types::{
     print_ns_and_snapshot, print_store_and_ns, Authid, BackupNamespace, BackupType, CryptMode,
@@ -69,11 +70,7 @@ fn verify_blob(backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
     }
 }
 
-fn rename_corrupted_chunk(
-    datastore: Arc<DataStore>,
-    digest: &[u8; 32],
-    worker: &dyn WorkerTaskContext,
-) {
+fn rename_corrupted_chunk(datastore: Arc<DataStore>, digest: &[u8; 32]) {
     let (path, digest_str) = datastore.chunk_path(digest);
 
     let mut counter = 0;
@@ -89,16 +86,14 @@ fn rename_corrupted_chunk(
 
     match std::fs::rename(&path, &new_path) {
         Ok(_) => {
-            task_log!(worker, "corrupted chunk renamed to {:?}", &new_path);
+            info!(tasklog = true, "corrupted chunk renamed to {:?}", &new_path);
         }
         Err(err) => {
             match err.kind() {
                 std::io::ErrorKind::NotFound => { /* ignored */ }
-                _ => task_log!(
-                    worker,
-                    "could not rename corrupted chunk {:?} - {}",
-                    &path,
-                    err
+                _ => info!(
+                    tasklog = true,
+                    "could not rename corrupted chunk {:?} - {}", &path, err
                 ),
             }
         }
@@ -117,7 +112,6 @@ fn verify_index_chunks(
     let mut read_bytes = 0;
     let mut decoded_bytes = 0;
 
-    let worker2 = Arc::clone(&verify_worker.worker);
     let datastore2 = Arc::clone(&verify_worker.datastore);
     let corrupt_chunks2 = Arc::clone(&verify_worker.corrupt_chunks);
     let verified_chunks2 = Arc::clone(&verify_worker.verified_chunks);
@@ -130,7 +124,10 @@ fn verify_index_chunks(
             let chunk_crypt_mode = match chunk.crypt_mode() {
                 Err(err) => {
                     corrupt_chunks2.lock().unwrap().insert(digest);
-                    task_log!(worker2, "can't verify chunk, unknown CryptMode - {}", err);
+                    info!(
+                        tasklog = true,
+                        "can't verify chunk, unknown CryptMode - {}", err
+                    );
                     errors2.fetch_add(1, Ordering::SeqCst);
                     return Ok(());
                 }
@@ -138,8 +135,8 @@ fn verify_index_chunks(
             };
 
             if chunk_crypt_mode != crypt_mode {
-                task_log!(
-                    worker2,
+                info!(
+                    tasklog = true,
                     "chunk CryptMode {:?} does not match index CryptMode {:?}",
                     chunk_crypt_mode,
                     crypt_mode
@@ -149,9 +146,9 @@ fn verify_index_chunks(
 
             if let Err(err) = chunk.verify_unencrypted(size as usize, &digest) {
                 corrupt_chunks2.lock().unwrap().insert(digest);
-                task_log!(worker2, "{}", err);
+                info!(tasklog = true, "{}", err);
                 errors2.fetch_add(1, Ordering::SeqCst);
-                rename_corrupted_chunk(datastore2.clone(), &digest, &worker2);
+                rename_corrupted_chunk(datastore2.clone(), &digest);
             } else {
                 verified_chunks2.lock().unwrap().insert(digest);
             }
@@ -175,11 +172,7 @@ fn verify_index_chunks(
             .contains(digest)
         {
             let digest_str = hex::encode(digest);
-            task_log!(
-                verify_worker.worker,
-                "chunk {} was marked as corrupt",
-                digest_str
-            );
+            info!(tasklog = true, "chunk {} was marked as corrupt", digest_str);
             errors.fetch_add(1, Ordering::SeqCst);
             true
         } else {
@@ -218,17 +211,9 @@ fn verify_index_chunks(
                     .lock()
                     .unwrap()
                     .insert(info.digest);
-                task_log!(
-                    verify_worker.worker,
-                    "can't verify chunk, load failed - {}",
-                    err
-                );
+                info!(tasklog = true, "can't verify chunk, load failed - {}", err);
                 errors.fetch_add(1, Ordering::SeqCst);
-                rename_corrupted_chunk(
-                    verify_worker.datastore.clone(),
-                    &info.digest,
-                    &verify_worker.worker,
-                );
+                rename_corrupted_chunk(verify_worker.datastore.clone(), &info.digest);
             }
             Ok(chunk) => {
                 let size = info.size();
@@ -251,8 +236,8 @@ fn verify_index_chunks(
 
     let error_count = errors.load(Ordering::SeqCst);
 
-    task_log!(
-        verify_worker.worker,
+    info!(
+        tasklog = true,
         "  verified {:.2}/{:.2} MiB in {:.2} seconds, speed {:.2}/{:.2} MiB/s ({} errors)",
         read_bytes_mib,
         decoded_bytes_mib,
@@ -329,8 +314,8 @@ pub fn verify_backup_dir(
     filter: Option<&dyn Fn(&BackupManifest) -> bool>,
 ) -> Result<bool, Error> {
     if !backup_dir.full_path().exists() {
-        task_log!(
-            verify_worker.worker,
+        info!(
+            tasklog = true,
             "SKIPPED: verify {}:{} - snapshot does not exist (anymore).",
             verify_worker.datastore.name(),
             backup_dir.dir(),
@@ -348,8 +333,8 @@ pub fn verify_backup_dir(
             verify_backup_dir_with_lock(verify_worker, backup_dir, upid, filter, snap_lock)
         }
         Err(err) => {
-            task_log!(
-                verify_worker.worker,
+            info!(
+                tasklog = true,
                 "SKIPPED: verify {}:{} - could not acquire snapshot lock: {}",
                 verify_worker.datastore.name(),
                 backup_dir.dir(),
@@ -371,8 +356,8 @@ pub fn verify_backup_dir_with_lock(
     let manifest = match backup_dir.load_manifest() {
         Ok((manifest, _)) => manifest,
         Err(err) => {
-            task_log!(
-                verify_worker.worker,
+            info!(
+                tasklog = true,
                 "verify {}:{} - manifest load error: {}",
                 verify_worker.datastore.name(),
                 backup_dir.dir(),
@@ -384,8 +369,8 @@ pub fn verify_backup_dir_with_lock(
 
     if let Some(filter) = filter {
         if !filter(&manifest) {
-            task_log!(
-                verify_worker.worker,
+            info!(
+                tasklog = true,
                 "SKIPPED: verify {}:{} (recently verified)",
                 verify_worker.datastore.name(),
                 backup_dir.dir(),
@@ -394,8 +379,8 @@ pub fn verify_backup_dir_with_lock(
         }
     }
 
-    task_log!(
-        verify_worker.worker,
+    info!(
+        tasklog = true,
         "verify {}:{}",
         verify_worker.datastore.name(),
         backup_dir.dir()
@@ -406,7 +391,7 @@ pub fn verify_backup_dir_with_lock(
     let mut verify_result = VerifyState::Ok;
     for info in manifest.files() {
         let result = proxmox_lang::try_block!({
-            task_log!(verify_worker.worker, "  check {}", info.filename);
+            info!(tasklog = true, "  check {}", info.filename);
             match archive_type(&info.filename)? {
                 ArchiveType::FixedIndex => verify_fixed_index(verify_worker, backup_dir, info),
                 ArchiveType::DynamicIndex => verify_dynamic_index(verify_worker, backup_dir, info),
@@ -418,8 +403,8 @@ pub fn verify_backup_dir_with_lock(
         verify_worker.worker.fail_on_shutdown()?;
 
         if let Err(err) = result {
-            task_log!(
-                verify_worker.worker,
+            info!(
+                tasklog = true,
                 "verify {}:{}/{} failed: {}",
                 verify_worker.datastore.name(),
                 backup_dir.dir(),
@@ -463,8 +448,8 @@ pub fn verify_backup_group(
     let mut list = match group.list_backups() {
         Ok(list) => list,
         Err(err) => {
-            task_log!(
-                verify_worker.worker,
+            info!(
+                tasklog = true,
                 "verify {}, group {} - unable to list backups: {}",
                 print_store_and_ns(verify_worker.datastore.name(), group.backup_ns()),
                 group.group(),
@@ -475,8 +460,8 @@ pub fn verify_backup_group(
     };
 
     let snapshot_count = list.len();
-    task_log!(
-        verify_worker.worker,
+    info!(
+        tasklog = true,
         "verify group {}:{} ({} snapshots)",
         verify_worker.datastore.name(),
         group.group(),
@@ -494,9 +479,8 @@ pub fn verify_backup_group(
             ));
         }
         progress.done_snapshots = pos as u64 + 1;
-        task_log!(verify_worker.worker, "percentage done: {}", progress);
+        info!(tasklog = true, "percentage done: {}", progress);
     }
-
     Ok(errors)
 }
 
@@ -516,16 +500,15 @@ pub fn verify_all_backups(
     filter: Option<&dyn Fn(&BackupManifest) -> bool>,
 ) -> Result<Vec<String>, Error> {
     let mut errors = Vec::new();
-    let worker = Arc::clone(&verify_worker.worker);
 
-    task_log!(
-        worker,
+    info!(
+        tasklog = true,
         "verify datastore {}",
         verify_worker.datastore.name()
     );
 
     let owner_filtered = if let Some(owner) = &owner {
-        task_log!(worker, "limiting to backups owned by {}", owner);
+        info!(tasklog = true, "limiting to backups owned by {}", owner);
         true
     } else {
         false
@@ -553,7 +536,10 @@ pub fn verify_all_backups(
                 }
                 Err(err) => {
                     // we don't filter by owner, but we want to log the error
-                    task_log!(worker, "error on iterating groups in ns '{ns}' - {err}");
+                    info!(
+                        tasklog = true,
+                        "error on iterating groups in ns '{ns}' - {err}"
+                    );
                     errors.push(err.to_string());
                     None
                 }
@@ -563,7 +549,7 @@ pub fn verify_all_backups(
             })
             .collect::<Vec<BackupGroup>>(),
         Err(err) => {
-            task_log!(worker, "unable to list backups: {}", err,);
+            info!(tasklog = true, "unable to list backups: {}", err);
             return Ok(errors);
         }
     };
@@ -571,7 +557,7 @@ pub fn verify_all_backups(
     list.sort_unstable_by(|a, b| a.group().cmp(b.group()));
 
     let group_count = list.len();
-    task_log!(worker, "found {} groups", group_count);
+    info!(tasklog = true, "found {} groups", group_count);
 
     let mut progress = StoreProgress::new(group_count as u64);
 
diff --git a/src/bin/proxmox-backup-api.rs b/src/bin/proxmox-backup-api.rs
index c6c24449..993d1d98 100644
--- a/src/bin/proxmox-backup-api.rs
+++ b/src/bin/proxmox-backup-api.rs
@@ -7,6 +7,7 @@ use http::Response;
 use hyper::{Body, StatusCode};
 
 use proxmox_lang::try_block;
+use proxmox_log::init_logger;
 use proxmox_router::RpcEnvironmentType;
 use proxmox_sys::fs::CreateOptions;
 
@@ -15,6 +16,7 @@ use proxmox_rest_server::{daemon, ApiConfig, RestServer};
 use proxmox_backup::auth_helpers::*;
 use proxmox_backup::config;
 use proxmox_backup::server::auth::check_pbs_auth;
+use tracing_subscriber::filter::LevelFilter;
 
 fn main() {
     pbs_tools::setup_libc_malloc_opts();
@@ -40,13 +42,7 @@ fn get_index() -> Pin<Box<dyn Future<Output = Response<Body>> + Send>> {
 }
 
 async fn run() -> Result<(), Error> {
-    if let Err(err) = syslog::init(
-        syslog::Facility::LOG_DAEMON,
-        log::LevelFilter::Info,
-        Some("proxmox-backup-api"),
-    ) {
-        bail!("unable to inititialize syslog - {}", err);
-    }
+    init_logger("PBS_LOG", LevelFilter::INFO, "proxmox-backup-api")?;
 
     config::create_configdir()?;
 
diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
index f38a02bd..4fed24fb 100644
--- a/src/bin/proxmox-backup-proxy.rs
+++ b/src/bin/proxmox-backup-proxy.rs
@@ -7,6 +7,9 @@ use http::request::Parts;
 use http::Response;
 use hyper::header;
 use hyper::{Body, StatusCode};
+use proxmox_log::init_logger;
+use tracing::{info, warn};
+use tracing_subscriber::filter::LevelFilter;
 use url::form_urlencoded;
 
 use openssl::ssl::SslAcceptor;
@@ -18,7 +21,6 @@ use proxmox_router::{RpcEnvironment, RpcEnvironmentType};
 use proxmox_sys::fs::{CreateOptions, FileSystemInformation};
 use proxmox_sys::linux::procfs::{Loadavg, ProcFsMemInfo, ProcFsNetDev, ProcFsStat};
 use proxmox_sys::logrotate::LogRotate;
-use proxmox_sys::{task_log, task_warn};
 
 use pbs_datastore::DataStore;
 
@@ -181,21 +183,7 @@ async fn get_index_future(env: RestEnvironment, parts: Parts) -> Response<Body>
 }
 
 async fn run() -> Result<(), Error> {
-    // Note: To debug early connection error use
-    // PROXMOX_DEBUG=1 ./target/release/proxmox-backup-proxy
-    let debug = std::env::var("PROXMOX_DEBUG").is_ok();
-
-    if let Err(err) = syslog::init(
-        syslog::Facility::LOG_DAEMON,
-        if debug {
-            log::LevelFilter::Debug
-        } else {
-            log::LevelFilter::Info
-        },
-        Some("proxmox-backup-proxy"),
-    ) {
-        bail!("unable to inititialize syslog - {err}");
-    }
+    init_logger("PBS_LOG", LevelFilter::INFO, "proxmox-backup-proxy")?;
 
     proxmox_backup::auth_helpers::setup_auth_context(false);
 
@@ -289,7 +277,7 @@ async fn run() -> Result<(), Error> {
     })?;
 
     let connections = proxmox_rest_server::connection::AcceptBuilder::with_acceptor(acceptor)
-        .debug(debug)
+        .debug(tracing::enabled!(tracing::Level::DEBUG))
         .rate_limiter_lookup(Arc::new(lookup_rate_limiter))
         .tcp_keepalive_time(PROXMOX_BACKUP_TCP_KEEPALIVE_TIME);
     let server = daemon::create_daemon(
@@ -714,7 +702,7 @@ async fn schedule_task_log_rotate() {
         false,
         move |worker| {
             job.start(&worker.upid().to_string())?;
-            task_log!(worker, "starting task log rotation");
+            info!(tasklog = true, "starting task log rotation");
 
             let result = try_block!({
                 let max_size = 512 * 1024 - 1; // an entry has ~ 100b, so > 5000 entries/file
@@ -739,9 +727,9 @@ async fn schedule_task_log_rotate() {
                 )?;
 
                 if has_rotated {
-                    task_log!(worker, "task log archive was rotated");
+                    info!(tasklog = true, "task log archive was rotated");
                 } else {
-                    task_log!(worker, "task log archive was not rotated");
+                    info!(tasklog = true, "task log archive was not rotated");
                 }
 
                 let max_size = 32 * 1024 * 1024 - 1;
@@ -757,9 +745,9 @@ async fn schedule_task_log_rotate() {
                 if logrotate.rotate(max_size)? {
                     println!("rotated access log, telling daemons to re-open log file");
                     proxmox_async::runtime::block_on(command_reopen_access_logfiles())?;
-                    task_log!(worker, "API access log was rotated");
+                    info!(tasklog = true, "API access log was rotated");
                 } else {
-                    task_log!(worker, "API access log was not rotated");
+                    info!(tasklog = true, "API access log was not rotated");
                 }
 
                 let mut logrotate = LogRotate::new(
@@ -772,15 +760,18 @@ async fn schedule_task_log_rotate() {
                 if logrotate.rotate(max_size)? {
                     println!("rotated auth log, telling daemons to re-open log file");
                     proxmox_async::runtime::block_on(command_reopen_auth_logfiles())?;
-                    task_log!(worker, "API authentication log was rotated");
+                    info!(tasklog = true, "API authentication log was rotated");
                 } else {
-                    task_log!(worker, "API authentication log was not rotated");
+                    info!(tasklog = true, "API authentication log was not rotated");
                 }
 
                 if has_rotated {
-                    task_log!(worker, "cleaning up old task logs");
-                    if let Err(err) = cleanup_old_tasks(&worker, true) {
-                        task_warn!(worker, "could not completely cleanup old tasks: {err}");
+                    info!(tasklog = true, "cleaning up old task logs");
+                    if let Err(err) = cleanup_old_tasks(true) {
+                        warn!(
+                            tasklog = true,
+                            "could not completely cleanup old tasks: {err}"
+                        );
                     }
                 }
 
diff --git a/src/server/gc_job.rs b/src/server/gc_job.rs
index 41375d72..4a79a92d 100644
--- a/src/server/gc_job.rs
+++ b/src/server/gc_job.rs
@@ -1,7 +1,7 @@
 use anyhow::Error;
 use std::sync::Arc;
 
-use proxmox_sys::task_log;
+use tracing::info;
 
 use pbs_api_types::Authid;
 use pbs_datastore::DataStore;
@@ -30,9 +30,12 @@ pub fn do_garbage_collection_job(
         move |worker| {
             job.start(&worker.upid().to_string())?;
 
-            task_log!(worker, "starting garbage collection on store {store}");
+            info!(
+                tasklog = true,
+                "starting garbage collection on store {store}"
+            );
             if let Some(event_str) = schedule {
-                task_log!(worker, "task triggered by schedule '{event_str}'");
+                info!(tasklog = true, "task triggered by schedule '{event_str}'");
             }
 
             let result = datastore.garbage_collection(&*worker, worker.upid());
diff --git a/src/server/prune_job.rs b/src/server/prune_job.rs
index 2de34973..ae85886e 100644
--- a/src/server/prune_job.rs
+++ b/src/server/prune_job.rs
@@ -1,8 +1,7 @@
 use std::sync::Arc;
 
 use anyhow::Error;
-
-use proxmox_sys::{task_log, task_warn};
+use tracing::{info, warn};
 
 use pbs_api_types::{
     print_store_and_ns, Authid, KeepOptions, Operation, PruneJobOptions, MAX_NAMESPACE_DEPTH,
@@ -16,7 +15,6 @@ use crate::backup::ListAccessibleBackupGroups;
 use crate::server::jobstate::Job;
 
 pub fn prune_datastore(
-    worker: Arc<WorkerTask>,
     auth_id: Authid,
     prune_options: PruneJobOptions,
     datastore: Arc<DataStore>,
@@ -31,19 +29,22 @@ pub fn prune_datastore(
     };
     let ns = prune_options.ns.clone().unwrap_or_default();
     let store_ns = print_store_and_ns(store, &ns);
-    task_log!(worker, "Starting datastore prune on {store_ns}, {depth}");
+    info!(
+        tasklog = true,
+        "Starting datastore prune on {store_ns}, {depth}"
+    );
 
     if dry_run {
-        task_log!(worker, "(dry test run)");
+        info!(tasklog = true, "(dry test run)");
     }
 
     let keep_all = !prune_options.keeps_something();
 
     if keep_all {
-        task_log!(worker, "No prune selection - keeping all files.");
+        info!(tasklog = true, "No prune selection - keeping all files.");
     } else {
         let rendered_options = cli_prune_options_string(&prune_options);
-        task_log!(worker, "retention options: {rendered_options}");
+        info!(tasklog = true, "retention options: {rendered_options}");
     }
 
     for group in ListAccessibleBackupGroups::new_with_privs(
@@ -61,8 +62,8 @@ pub fn prune_datastore(
         let mut prune_info = compute_prune_info(list, &prune_options.keep)?;
         prune_info.reverse(); // delete older snapshots first
 
-        task_log!(
-            worker,
+        info!(
+            tasklog = true,
             "Pruning group {ns}:\"{}/{}\"",
             group.backup_type(),
             group.backup_id()
@@ -70,8 +71,8 @@ pub fn prune_datastore(
 
         for (info, mark) in prune_info {
             let keep = keep_all || mark.keep();
-            task_log!(
-                worker,
+            info!(
+                tasklog = true,
                 "{}{} {}/{}/{}",
                 if dry_run { "would " } else { "" },
                 mark,
@@ -82,7 +83,7 @@ pub fn prune_datastore(
             if !keep && !dry_run {
                 if let Err(err) = datastore.remove_backup_dir(ns, info.backup_dir.as_ref(), false) {
                     let path = info.backup_dir.relative_path();
-                    task_warn!(worker, "failed to remove dir {path:?}: {err}");
+                    warn!(tasklog = true, "failed to remove dir {path:?}: {err}");
                 }
             }
         }
@@ -150,13 +151,13 @@ pub fn do_prune_job(
         move |worker| {
             job.start(&worker.upid().to_string())?;
 
-            task_log!(worker, "prune job '{}'", job.jobname());
+            info!(tasklog = true, "prune job '{}'", job.jobname());
 
             if let Some(event_str) = schedule {
-                task_log!(worker, "task triggered by schedule '{event_str}'");
+                info!(tasklog = true, "task triggered by schedule '{event_str}'");
             }
 
-            let result = prune_datastore(worker.clone(), auth_id, prune_options, datastore, false);
+            let result = prune_datastore(auth_id, prune_options, datastore, false);
 
             let status = worker.create_state(&result);
 
diff --git a/src/server/pull.rs b/src/server/pull.rs
index a973a10e..cc9229b1 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -10,9 +10,9 @@ use anyhow::{bail, format_err, Error};
 use http::StatusCode;
 use pbs_config::CachedUserInfo;
 use serde_json::json;
+use tracing::info;
 
 use proxmox_router::HttpError;
-use proxmox_sys::task_log;
 
 use pbs_api_types::{
     print_store_and_ns, Authid, BackupNamespace, GroupFilter, GroupListItem, NamespaceListItem,
@@ -32,7 +32,6 @@ use pbs_datastore::manifest::{
 };
 use pbs_datastore::{check_backup_owner, DataStore, StoreProgress};
 use pbs_tools::sha::sha256;
-use proxmox_rest_server::WorkerTask;
 
 use crate::backup::{check_ns_modification_privs, check_ns_privs};
 use crate::tools::parallel_handler::ParallelHandler;
@@ -123,7 +122,6 @@ impl PullParameters {
 }
 
 async fn pull_index_chunks<I: IndexFile>(
-    worker: &WorkerTask,
     chunk_reader: RemoteChunkReader,
     target: Arc<DataStore>,
     index: I,
@@ -176,10 +174,10 @@ async fn pull_index_chunks<I: IndexFile>(
                     target.cond_touch_chunk(&info.digest, false)
                 })?;
                 if chunk_exists {
-                    //task_log!(worker, "chunk {} exists {}", pos, hex::encode(digest));
+                    //info!(tasklog = true, "chunk {} exists {}", pos, hex::encode(digest));
                     return Ok::<_, Error>(());
                 }
-                //task_log!(worker, "sync {} chunk {}", pos, hex::encode(digest));
+                //info!(tasklog = true, "sync {} chunk {}", pos, hex::encode(digest));
                 let chunk = chunk_reader.read_raw_chunk(&info.digest).await?;
                 let raw_size = chunk.raw_size() as usize;
 
@@ -205,8 +203,8 @@ async fn pull_index_chunks<I: IndexFile>(
 
     let bytes = bytes.load(Ordering::SeqCst);
 
-    task_log!(
-        worker,
+    info!(
+        tasklog = true,
         "downloaded {} bytes ({:.2} MiB/s)",
         bytes,
         (bytes as f64) / (1024.0 * 1024.0 * elapsed)
@@ -261,7 +259,6 @@ fn verify_archive(info: &FileInfo, csum: &[u8; 32], size: u64) -> Result<(), Err
 /// - if archive is an index, pull referenced chunks
 /// - Rename tmp file into real path
 async fn pull_single_archive(
-    worker: &WorkerTask,
     reader: &BackupReader,
     chunk_reader: &mut RemoteChunkReader,
     snapshot: &pbs_datastore::BackupDir,
@@ -275,7 +272,7 @@ async fn pull_single_archive(
     let mut tmp_path = path.clone();
     tmp_path.set_extension("tmp");
 
-    task_log!(worker, "sync archive {}", archive_name);
+    info!(tasklog = true, "sync archive {}", archive_name);
 
     let mut tmpfile = std::fs::OpenOptions::new()
         .write(true)
@@ -294,7 +291,6 @@ async fn pull_single_archive(
             verify_archive(archive_info, &csum, size)?;
 
             pull_index_chunks(
-                worker,
                 chunk_reader.clone(),
                 snapshot.datastore().clone(),
                 index,
@@ -310,7 +306,6 @@ async fn pull_single_archive(
             verify_archive(archive_info, &csum, size)?;
 
             pull_index_chunks(
-                worker,
                 chunk_reader.clone(),
                 snapshot.datastore().clone(),
                 index,
@@ -333,7 +328,6 @@ async fn pull_single_archive(
 // Note: The client.log.blob is uploaded after the backup, so it is
 // not mentioned in the manifest.
 async fn try_client_log_download(
-    worker: &WorkerTask,
     reader: Arc<BackupReader>,
     path: &std::path::Path,
 ) -> Result<(), Error> {
@@ -351,7 +345,10 @@ async fn try_client_log_download(
         if let Err(err) = std::fs::rename(&tmp_path, path) {
             bail!("Atomic rename file {:?} failed - {}", path, err);
         }
-        task_log!(worker, "got backup log file {:?}", CLIENT_LOG_BLOB_NAME);
+        info!(
+            tasklog = true,
+            "got backup log file {:?}", CLIENT_LOG_BLOB_NAME
+        );
     }
 
     Ok(())
@@ -367,7 +364,6 @@ async fn try_client_log_download(
 /// -- if not, pull it from the remote
 /// - Download log if not already existing
 async fn pull_snapshot(
-    worker: &WorkerTask,
     reader: Arc<BackupReader>,
     snapshot: &pbs_datastore::BackupDir,
     downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
@@ -388,8 +384,8 @@ async fn pull_snapshot(
             match err.downcast_ref::<HttpError>() {
                 Some(HttpError { code, message }) => match *code {
                     StatusCode::NOT_FOUND => {
-                        task_log!(
-                            worker,
+                        info!(
+                            tasklog = true,
                             "skipping snapshot {} - vanished since start of sync",
                             snapshot.dir(),
                         );
@@ -422,9 +418,9 @@ async fn pull_snapshot(
 
         if manifest_blob.raw_data() == tmp_manifest_blob.raw_data() {
             if !client_log_name.exists() {
-                try_client_log_download(worker, reader, &client_log_name).await?;
+                try_client_log_download(reader, &client_log_name).await?;
             }
-            task_log!(worker, "no data changes");
+            info!(tasklog = true, "no data changes");
             let _ = std::fs::remove_file(&tmp_manifest_name);
             return Ok(()); // nothing changed
         }
@@ -444,7 +440,7 @@ async fn pull_snapshot(
                     match manifest.verify_file(&item.filename, &csum, size) {
                         Ok(_) => continue,
                         Err(err) => {
-                            task_log!(worker, "detected changed file {:?} - {}", path, err);
+                            info!(tasklog = true, "detected changed file {:?} - {}", path, err);
                         }
                     }
                 }
@@ -454,7 +450,7 @@ async fn pull_snapshot(
                     match manifest.verify_file(&item.filename, &csum, size) {
                         Ok(_) => continue,
                         Err(err) => {
-                            task_log!(worker, "detected changed file {:?} - {}", path, err);
+                            info!(tasklog = true, "detected changed file {:?} - {}", path, err);
                         }
                     }
                 }
@@ -464,7 +460,7 @@ async fn pull_snapshot(
                     match manifest.verify_file(&item.filename, &csum, size) {
                         Ok(_) => continue,
                         Err(err) => {
-                            task_log!(worker, "detected changed file {:?} - {}", path, err);
+                            info!(tasklog = true, "detected changed file {:?} - {}", path, err);
                         }
                     }
                 }
@@ -479,7 +475,6 @@ async fn pull_snapshot(
         );
 
         pull_single_archive(
-            worker,
             &reader,
             &mut chunk_reader,
             snapshot,
@@ -494,7 +489,7 @@ async fn pull_snapshot(
     }
 
     if !client_log_name.exists() {
-        try_client_log_download(worker, reader, &client_log_name).await?;
+        try_client_log_download(reader, &client_log_name).await?;
     }
 
     snapshot
@@ -509,7 +504,6 @@ async fn pull_snapshot(
 /// The `reader` is configured to read from the remote / source namespace, while the `snapshot` is
 /// pointing to the local datastore and target namespace.
 async fn pull_snapshot_from(
-    worker: &WorkerTask,
     reader: Arc<BackupReader>,
     snapshot: &pbs_datastore::BackupDir,
     downloaded_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
@@ -519,22 +513,22 @@ async fn pull_snapshot_from(
         .create_locked_backup_dir(snapshot.backup_ns(), snapshot.as_ref())?;
 
     if is_new {
-        task_log!(worker, "sync snapshot {}", snapshot.dir());
+        info!(tasklog = true, "sync snapshot {}", snapshot.dir());
 
-        if let Err(err) = pull_snapshot(worker, reader, snapshot, downloaded_chunks).await {
+        if let Err(err) = pull_snapshot(reader, snapshot, downloaded_chunks).await {
             if let Err(cleanup_err) = snapshot.datastore().remove_backup_dir(
                 snapshot.backup_ns(),
                 snapshot.as_ref(),
                 true,
             ) {
-                task_log!(worker, "cleanup error - {}", cleanup_err);
+                info!(tasklog = true, "cleanup error - {}", cleanup_err);
             }
             return Err(err);
         }
-        task_log!(worker, "sync snapshot {} done", snapshot.dir());
+        info!(tasklog = true, "sync snapshot {} done", snapshot.dir());
     } else {
-        task_log!(worker, "re-sync snapshot {}", snapshot.dir());
-        pull_snapshot(worker, reader, snapshot, downloaded_chunks).await?;
+        info!(tasklog = true, "re-sync snapshot {}", snapshot.dir());
+        pull_snapshot(reader, snapshot, downloaded_chunks).await?;
     }
 
     Ok(())
@@ -638,14 +632,13 @@ impl std::fmt::Display for SkipInfo {
 /// - remote snapshot access is checked by remote (twice: query and opening the backup reader)
 /// - local group owner is already checked by pull_store
 async fn pull_group(
-    worker: &WorkerTask,
     client: &HttpClient,
     params: &PullParameters,
     group: &pbs_api_types::BackupGroup,
     remote_ns: BackupNamespace,
     progress: &mut StoreProgress,
 ) -> Result<(), Error> {
-    task_log!(worker, "sync group {}", group);
+    info!(tasklog = true, "sync group {}", group);
 
     let path = format!(
         "api2/json/admin/datastore/{}/snapshots",
@@ -697,10 +690,9 @@ async fn pull_group(
 
         // in-progress backups can't be synced
         if item.size.is_none() {
-            task_log!(
-                worker,
-                "skipping snapshot {} - in-progress backup",
-                snapshot
+            info!(
+                tasklog = true,
+                "skipping snapshot {} - in-progress backup", snapshot
             );
             continue;
         }
@@ -711,7 +703,7 @@ async fn pull_group(
             already_synced_skip_info.update(snapshot.time);
             continue;
         } else if already_synced_skip_info.count > 0 {
-            task_log!(worker, "{}", already_synced_skip_info);
+            info!(tasklog = true, "{}", already_synced_skip_info);
             already_synced_skip_info.reset();
         }
 
@@ -719,7 +711,7 @@ async fn pull_group(
             transfer_last_skip_info.update(snapshot.time);
             continue;
         } else if transfer_last_skip_info.count > 0 {
-            task_log!(worker, "{}", transfer_last_skip_info);
+            info!(tasklog = true, "{}", transfer_last_skip_info);
             transfer_last_skip_info.reset();
         }
 
@@ -749,10 +741,10 @@ async fn pull_group(
 
         let snapshot = params.store.backup_dir(target_ns.clone(), snapshot)?;
 
-        let result = pull_snapshot_from(worker, reader, &snapshot, downloaded_chunks.clone()).await;
+        let result = pull_snapshot_from(reader, &snapshot, downloaded_chunks.clone()).await;
 
         progress.done_snapshots = pos as u64 + 1;
-        task_log!(worker, "percentage done: {}", progress);
+        info!(tasklog = true, "percentage done: {}", progress);
 
         result?; // stop on error
     }
@@ -766,14 +758,18 @@ async fn pull_group(
                 continue;
             }
             if snapshot.is_protected() {
-                task_log!(
-                    worker,
+                info!(
+                    tasklog = true,
                     "don't delete vanished snapshot {} (protected)",
                     snapshot.dir()
                 );
                 continue;
             }
-            task_log!(worker, "delete vanished snapshot {}", snapshot.dir());
+            info!(
+                tasklog = true,
+                "delete vanished snapshot {}",
+                snapshot.dir()
+            );
             params
                 .store
                 .remove_backup_dir(&target_ns, snapshot.as_ref(), false)?;
@@ -785,7 +781,6 @@ async fn pull_group(
 
 // will modify params if switching to backwards mode for lack of NS support on remote end
 async fn query_namespaces(
-    worker: &WorkerTask,
     client: &HttpClient,
     params: &mut PullParameters,
 ) -> Result<Vec<BackupNamespace>, Error> {
@@ -808,8 +803,8 @@ async fn query_namespaces(
             Some(HttpError { code, message }) => match *code {
                 StatusCode::NOT_FOUND => {
                     if params.remote_ns.is_root() && params.max_depth.is_none() {
-                        task_log!(worker, "Could not query remote for namespaces (404) -> temporarily switching to backwards-compat mode");
-                        task_log!(worker, "Either make backwards-compat mode explicit (max-depth == 0) or upgrade remote system.");
+                        info!(tasklog = true, "Could not query remote for namespaces (404) -> temporarily switching to backwards-compat mode");
+                        info!(tasklog = true, "Either make backwards-compat mode explicit (max-depth == 0) or upgrade remote system.");
                         params.max_depth = Some(0);
                     } else {
                         bail!("Remote namespace set/recursive sync requested, but remote does not support namespaces.")
@@ -875,7 +870,6 @@ fn check_and_remove_ns(params: &PullParameters, local_ns: &BackupNamespace) -> R
 }
 
 fn check_and_remove_vanished_ns(
-    worker: &WorkerTask,
     params: &PullParameters,
     synced_ns: HashSet<BackupNamespace>,
 ) -> Result<bool, Error> {
@@ -913,14 +907,16 @@ fn check_and_remove_vanished_ns(
             continue;
         }
         match check_and_remove_ns(params, &local_ns) {
-            Ok(true) => task_log!(worker, "Removed namespace {}", local_ns),
-            Ok(false) => task_log!(
-                worker,
-                "Did not remove namespace {} - protected snapshots remain",
-                local_ns
+            Ok(true) => info!(tasklog = true, "Removed namespace {}", local_ns),
+            Ok(false) => info!(
+                tasklog = true,
+                "Did not remove namespace {} - protected snapshots remain", local_ns
             ),
             Err(err) => {
-                task_log!(worker, "Failed to remove namespace {} - {}", local_ns, err);
+                info!(
+                    tasklog = true,
+                    "Failed to remove namespace {} - {}", local_ns, err
+                );
                 errors = true;
             }
         }
@@ -947,7 +943,6 @@ fn check_and_remove_vanished_ns(
 /// - creation and removal of sub-NS checked here
 /// - access to sub-NS checked here
 pub(crate) async fn pull_store(
-    worker: &WorkerTask,
     client: &HttpClient,
     mut params: PullParameters,
 ) -> Result<(), Error> {
@@ -959,7 +954,7 @@ pub(crate) async fn pull_store(
     let namespaces = if params.remote_ns.is_root() && params.max_depth == Some(0) {
         vec![params.remote_ns.clone()] // backwards compat - don't query remote namespaces!
     } else {
-        query_namespaces(worker, client, &mut params).await?
+        query_namespaces(client, &mut params).await?
     };
     errors |= old_max_depth != params.max_depth; // fail job if we switched to backwards-compat mode
 
@@ -972,41 +967,36 @@ pub(crate) async fn pull_store(
         let target_ns = namespace.map_prefix(&params.remote_ns, &params.ns)?;
         let target_store_ns_str = print_store_and_ns(params.store.name(), &target_ns);
 
-        task_log!(worker, "----");
-        task_log!(
-            worker,
-            "Syncing {} into {}",
-            source_store_ns_str,
-            target_store_ns_str
+        info!(tasklog = true, "----");
+        info!(
+            tasklog = true,
+            "Syncing {} into {}", source_store_ns_str, target_store_ns_str
         );
 
         synced_ns.insert(target_ns.clone());
 
         match check_and_create_ns(&params, &target_ns) {
-            Ok(true) => task_log!(worker, "Created namespace {}", target_ns),
+            Ok(true) => info!(tasklog = true, "Created namespace {}", target_ns),
             Ok(false) => {}
             Err(err) => {
-                task_log!(
-                    worker,
-                    "Cannot sync {} into {} - {}",
-                    source_store_ns_str,
-                    target_store_ns_str,
-                    err,
+                info!(
+                    tasklog = true,
+                    "Cannot sync {} into {} - {}", source_store_ns_str, target_store_ns_str, err,
                 );
                 errors = true;
                 continue;
             }
         }
 
-        match pull_ns(worker, client, &params, namespace.clone(), target_ns).await {
+        match pull_ns(client, &params, namespace.clone(), target_ns).await {
             Ok((ns_progress, ns_errors)) => {
                 errors |= ns_errors;
 
                 if params.max_depth != Some(0) {
                     groups += ns_progress.done_groups;
                     snapshots += ns_progress.done_snapshots;
-                    task_log!(
-                        worker,
+                    info!(
+                        tasklog = true,
                         "Finished syncing namespace {}, current progress: {} groups, {} snapshots",
                         namespace,
                         groups,
@@ -1016,18 +1006,16 @@ pub(crate) async fn pull_store(
             }
             Err(err) => {
                 errors = true;
-                task_log!(
-                    worker,
-                    "Encountered errors while syncing namespace {} - {}",
-                    namespace,
-                    err,
+                info!(
+                    tasklog = true,
+                    "Encountered errors while syncing namespace {} - {}", namespace, err,
                 );
             }
         };
     }
 
     if params.remove_vanished {
-        errors |= check_and_remove_vanished_ns(worker, &params, synced_ns)?;
+        errors |= check_and_remove_vanished_ns(&params, synced_ns)?;
     }
 
     if errors {
@@ -1050,7 +1038,6 @@ pub(crate) async fn pull_store(
 /// - remote namespaces are filtered by remote
 /// - owner check for vanished groups done here
 pub(crate) async fn pull_ns(
-    worker: &WorkerTask,
     client: &HttpClient,
     params: &PullParameters,
     source_ns: BackupNamespace,
@@ -1096,15 +1083,15 @@ pub(crate) async fn pull_ns(
             .into_iter()
             .filter(|group| apply_filters(group, group_filter))
             .collect();
-        task_log!(
-            worker,
+        info!(
+            tasklog = true,
             "found {} groups to sync (out of {} total)",
             list.len(),
             unfiltered_count
         );
         list
     } else {
-        task_log!(worker, "found {} groups to sync", total_count);
+        info!(tasklog = true, "found {} groups to sync", total_count);
         list
     };
 
@@ -1129,11 +1116,9 @@ pub(crate) async fn pull_ns(
             {
                 Ok(result) => result,
                 Err(err) => {
-                    task_log!(
-                        worker,
-                        "sync group {} failed - group lock failed: {}",
-                        &group,
-                        err
+                    info!(
+                        tasklog = true,
+                        "sync group {} failed - group lock failed: {}", &group, err
                     );
                     errors = true; // do not stop here, instead continue
                     continue;
@@ -1143,25 +1128,15 @@ pub(crate) async fn pull_ns(
         // permission check
         if params.owner != owner {
             // only the owner is allowed to create additional snapshots
-            task_log!(
-                worker,
-                "sync group {} failed - owner check failed ({} != {})",
-                &group,
-                params.owner,
-                owner
+            info!(
+                tasklog = true,
+                "sync group {} failed - owner check failed ({} != {})", &group, params.owner, owner
             );
             errors = true; // do not stop here, instead continue
-        } else if let Err(err) = pull_group(
-            worker,
-            client,
-            params,
-            &group,
-            source_ns.clone(),
-            &mut progress,
-        )
-        .await
+        } else if let Err(err) =
+            pull_group(client, params, &group, source_ns.clone(), &mut progress).await
         {
-            task_log!(worker, "sync group {} failed - {}", &group, err,);
+            info!(tasklog = true, "sync group {} failed - {}", &group, err,);
             errors = true; // do not stop here, instead continue
         }
     }
@@ -1183,18 +1158,17 @@ pub(crate) async fn pull_ns(
                         continue;
                     }
                 }
-                task_log!(worker, "delete vanished group '{local_group}'",);
+                info!(tasklog = true, "delete vanished group '{local_group}'",);
                 match params.store.remove_backup_group(&target_ns, local_group) {
                     Ok(true) => {}
                     Ok(false) => {
-                        task_log!(
-                            worker,
-                            "kept some protected snapshots of group '{}'",
-                            local_group
+                        info!(
+                            tasklog = true,
+                            "kept some protected snapshots of group '{}'", local_group
                         );
                     }
                     Err(err) => {
-                        task_log!(worker, "{}", err);
+                        info!(tasklog = true, "{}", err);
                         errors = true;
                     }
                 }
@@ -1202,7 +1176,7 @@ pub(crate) async fn pull_ns(
             Ok(())
         });
         if let Err(err) = result {
-            task_log!(worker, "error during cleanup: {}", err);
+            info!(tasklog = true, "error during cleanup: {}", err);
             errors = true;
         };
     }
diff --git a/src/server/realm_sync_job.rs b/src/server/realm_sync_job.rs
index 1f92e843..952d09ad 100644
--- a/src/server/realm_sync_job.rs
+++ b/src/server/realm_sync_job.rs
@@ -1,13 +1,14 @@
 use anyhow::{bail, format_err, Context, Error};
+use tracing::{info, warn};
+
 use pbs_config::{acl::AclTree, token_shadow, BackupLockGuard};
 use proxmox_lang::try_block;
 use proxmox_ldap::{Config, Connection, SearchParameters, SearchResult};
 use proxmox_rest_server::WorkerTask;
 use proxmox_schema::{ApiType, Schema};
 use proxmox_section_config::SectionConfigData;
-use proxmox_sys::{task_log, task_warn};
 
-use std::{collections::HashSet, sync::Arc};
+use std::collections::HashSet;
 
 use pbs_api_types::{
     ApiToken, Authid, LdapRealmConfig, Realm, RemoveVanished, SyncAttributes as LdapSyncAttributes,
@@ -38,7 +39,7 @@ pub fn do_realm_sync_job(
         move |worker| {
             job.start(&worker.upid().to_string()).unwrap();
 
-            task_log!(worker, "starting realm sync for {}", realm.as_str());
+            info!(tasklog = true, "starting realm sync for {}", realm.as_str());
 
             let override_settings = GeneralSyncSettingsOverride {
                 remove_vanished,
@@ -46,7 +47,7 @@ pub fn do_realm_sync_job(
             };
 
             async move {
-                let sync_job = LdapRealmSyncJob::new(worker, realm, &override_settings, dry_run)?;
+                let sync_job = LdapRealmSyncJob::new(realm, &override_settings, dry_run)?;
                 sync_job.sync().await
             }
         },
@@ -57,7 +58,6 @@ pub fn do_realm_sync_job(
 
 /// Implemenation for syncing LDAP realms
 struct LdapRealmSyncJob {
-    worker: Arc<WorkerTask>,
     realm: Realm,
     general_sync_settings: GeneralSyncSettings,
     ldap_sync_settings: LdapSyncSettings,
@@ -68,7 +68,6 @@ struct LdapRealmSyncJob {
 impl LdapRealmSyncJob {
     /// Create new LdapRealmSyncJob
     fn new(
-        worker: Arc<WorkerTask>,
         realm: Realm,
         override_settings: &GeneralSyncSettingsOverride,
         dry_run: bool,
@@ -88,7 +87,6 @@ impl LdapRealmSyncJob {
         let ldap_config = auth::LdapAuthenticator::api_type_to_config(&config)?;
 
         Ok(Self {
-            worker,
             realm,
             general_sync_settings: sync_settings,
             ldap_sync_settings: sync_attributes,
@@ -100,8 +98,8 @@ impl LdapRealmSyncJob {
     /// Perform realm synchronization
     async fn sync(&self) -> Result<(), Error> {
         if self.dry_run {
-            task_log!(
-                self.worker,
+            info!(
+                tasklog = true,
                 "this is a DRY RUN - changes will not be persisted"
             );
         }
@@ -185,7 +183,7 @@ impl LdapRealmSyncJob {
                 anyhow::Ok(())
             });
             if let Err(e) = result {
-                task_log!(self.worker, "could not create/update user: {e}");
+                info!(tasklog = true, "could not create/update user: {e}");
             }
         }
 
@@ -204,15 +202,15 @@ impl LdapRealmSyncJob {
 
         if let Some(existing_user) = existing_user {
             if existing_user != new_or_updated_user {
-                task_log!(
-                    self.worker,
+                info!(
+                    tasklog = true,
                     "updating user {}",
                     new_or_updated_user.userid.as_str()
                 );
             }
         } else {
-            task_log!(
-                self.worker,
+            info!(
+                tasklog = true,
                 "creating user {}",
                 new_or_updated_user.userid.as_str()
             );
@@ -237,8 +235,8 @@ impl LdapRealmSyncJob {
             let schema = schema.unwrap_string_schema();
 
             if let Err(e) = schema.check_constraints(value) {
-                task_warn!(
-                    self.worker,
+                warn!(
+                    tasklog = true,
                     "{userid}: ignoring attribute `{attribute}`: {e}"
                 );
 
@@ -319,7 +317,7 @@ impl LdapRealmSyncJob {
         to_delete: &[Userid],
     ) -> Result<(), Error> {
         for userid in to_delete {
-            task_log!(self.worker, "deleting user {}", userid.as_str());
+            info!(tasklog = true, "deleting user {}", userid.as_str());
 
             // Delete the user
             user_config.sections.remove(userid.as_str());
@@ -346,7 +344,10 @@ impl LdapRealmSyncJob {
 
                     if !self.dry_run {
                         if let Err(e) = token_shadow::delete_secret(&tokenid) {
-                            task_warn!(self.worker, "could not delete token for user {userid}: {e}",)
+                            warn!(
+                                tasklog = true,
+                                "could not delete token for user {userid}: {e}",
+                            )
                         }
                     }
 
diff --git a/src/server/verify_job.rs b/src/server/verify_job.rs
index 8bf2a0c9..55c0a574 100644
--- a/src/server/verify_job.rs
+++ b/src/server/verify_job.rs
@@ -1,9 +1,9 @@
 use anyhow::{format_err, Error};
+use tracing::info;
 
 use pbs_api_types::{Authid, Operation, VerificationJobConfig};
 use pbs_datastore::DataStore;
 use proxmox_rest_server::WorkerTask;
-use proxmox_sys::task_log;
 
 use crate::{
     backup::{verify_all_backups, verify_filter},
@@ -36,9 +36,9 @@ pub fn do_verification_job(
         move |worker| {
             job.start(&worker.upid().to_string())?;
 
-            task_log!(worker, "Starting datastore verify job '{}'", job_id);
+            info!(tasklog = true, "Starting datastore verify job '{}'", job_id);
             if let Some(event_str) = schedule {
-                task_log!(worker, "task triggered by schedule '{}'", event_str);
+                info!(tasklog = true, "task triggered by schedule '{}'", event_str);
             }
 
             let ns = match verification_job.ns {
@@ -60,9 +60,12 @@ pub fn do_verification_job(
             let job_result = match result {
                 Ok(ref failed_dirs) if failed_dirs.is_empty() => Ok(()),
                 Ok(ref failed_dirs) => {
-                    task_log!(worker, "Failed to verify the following snapshots/groups:");
+                    info!(
+                        tasklog = true,
+                        "Failed to verify the following snapshots/groups:"
+                    );
                     for dir in failed_dirs {
-                        task_log!(worker, "\t{}", dir);
+                        info!(tasklog = true, "\t{}", dir);
                     }
 
                     Err(format_err!(
diff --git a/src/tape/drive/mod.rs b/src/tape/drive/mod.rs
index 86f01201..a85c353f 100644
--- a/src/tape/drive/mod.rs
+++ b/src/tape/drive/mod.rs
@@ -12,6 +12,7 @@ use nix::fcntl::OFlag;
 use nix::sys::stat::Mode;
 use serde::Deserialize;
 use serde_json::Value;
+use tracing::info;
 
 use proxmox_sys::fs::{
     atomic_open_or_create_file, file_read_optional_string, lock_file, replace_file, CreateOptions,
@@ -19,7 +20,7 @@ use proxmox_sys::fs::{
 
 use proxmox_io::ReadExt;
 use proxmox_section_config::SectionConfigData;
-use proxmox_sys::{task_log, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 use proxmox_uuid::Uuid;
 
 use pbs_api_types::{Fingerprint, LtoTapeDrive, VirtualTapeDrive};
@@ -343,11 +344,9 @@ pub fn request_and_load_media(
 ) -> Result<(Box<dyn TapeDriver>, MediaId), Error> {
     let check_label = |handle: &mut dyn TapeDriver, uuid: &proxmox_uuid::Uuid| {
         if let Ok((Some(media_id), _)) = handle.read_label() {
-            task_log!(
-                worker,
-                "found media label {} ({})",
-                media_id.label.label_text,
-                media_id.label.uuid,
+            info!(
+                tasklog = true,
+                "found media label {} ({})", media_id.label.label_text, media_id.label.uuid,
             );
 
             if media_id.label.uuid == *uuid {
@@ -385,15 +384,15 @@ pub fn request_and_load_media(
                     let update_and_log_request_error =
                         |old: &mut TapeRequestError, new: TapeRequestError| -> Result<(), Error> {
                             if new != *old {
-                                task_log!(worker, "{}", new);
+                                info!(tasklog = true, "{}", new);
                                 let (device_type, device) = if let Some(changer) = changer {
                                     ("changer", changer.as_str())
                                 } else {
                                     ("drive", drive)
                                 };
 
-                                task_log!(
-                                    worker,
+                                info!(
+                                    tasklog = true,
                                     "Please insert media '{}' into {} '{}'",
                                     label_text,
                                     device_type,
@@ -423,18 +422,14 @@ pub fn request_and_load_media(
                                 std::thread::sleep(std::time::Duration::from_millis(100));
                             }
                         } else if drive_config.changer.is_none() {
-                            task_log!(
-                                worker,
-                                "Checking for media '{}' in drive '{}'",
-                                label_text,
-                                drive
+                            info!(
+                                tasklog = true,
+                                "Checking for media '{}' in drive '{}'", label_text, drive
                             );
                         } else {
-                            task_log!(
-                                worker,
-                                "trying to load media '{}' into drive '{}'",
-                                label_text,
-                                drive
+                            info!(
+                                tasklog = true,
+                                "trying to load media '{}' into drive '{}'", label_text, drive
                             );
                         }
 
@@ -462,8 +457,8 @@ pub fn request_and_load_media(
 
                         let request_error = match handle.read_label() {
                             Ok((Some(media_id), _)) if media_id.label.uuid == label.uuid => {
-                                task_log!(
-                                    worker,
+                                info!(
+                                    tasklog = true,
                                     "found media label {} ({})",
                                     media_id.label.label_text,
                                     media_id.label.uuid.to_string(),
diff --git a/src/tape/pool_writer/mod.rs b/src/tape/pool_writer/mod.rs
index d1add203..b05b8921 100644
--- a/src/tape/pool_writer/mod.rs
+++ b/src/tape/pool_writer/mod.rs
@@ -11,8 +11,8 @@ use std::sync::{Arc, Mutex};
 use std::time::SystemTime;
 
 use anyhow::{bail, Error};
+use tracing::{info, warn};
 
-use proxmox_sys::{task_log, task_warn};
 use proxmox_uuid::Uuid;
 
 use pbs_datastore::{DataStore, SnapshotReader};
@@ -57,7 +57,6 @@ impl PoolWriter {
     pub fn new(
         mut pool: MediaPool,
         drive_name: &str,
-        worker: &WorkerTask,
         notify_email: Option<String>,
         force_media_set: bool,
         ns_magic: bool,
@@ -66,11 +65,14 @@ impl PoolWriter {
 
         let new_media_set_reason = pool.start_write_session(current_time, force_media_set)?;
         if let Some(reason) = new_media_set_reason {
-            task_log!(worker, "starting new media set - reason: {}", reason,);
+            info!(
+                tasklog = true,
+                "starting new media set - reason: {}", reason,
+            );
         }
 
         let media_set_uuid = pool.current_media_set().uuid();
-        task_log!(worker, "media set uuid: {}", media_set_uuid);
+        info!(tasklog = true, "media set uuid: {}", media_set_uuid);
 
         let mut catalog_set = CatalogSet::new();
 
@@ -125,7 +127,7 @@ impl PoolWriter {
     }
 
     /// Eject media and drop PoolWriterState (close drive)
-    pub fn eject_media(&mut self, worker: &WorkerTask) -> Result<(), Error> {
+    pub fn eject_media(&mut self) -> Result<(), Error> {
         let mut status = match self.status.take() {
             Some(status) => status,
             None => return Ok(()), // no media loaded
@@ -134,13 +136,13 @@ impl PoolWriter {
         let (drive_config, _digest) = pbs_config::drive::config()?;
 
         if let Some((mut changer, _)) = media_changer(&drive_config, &self.drive_name)? {
-            task_log!(worker, "eject media");
+            info!(tasklog = true, "eject media");
             status.drive.eject_media()?; // rewind and eject early, so that unload_media is faster
             drop(status); // close drive
-            task_log!(worker, "unload media");
+            info!(tasklog = true, "unload media");
             changer.unload_media(None)?; //eject and unload
         } else {
-            task_log!(worker, "standalone drive - ejecting media");
+            info!(tasklog = true, "standalone drive - ejecting media");
             status.drive.eject_media()?;
         }
 
@@ -148,14 +150,14 @@ impl PoolWriter {
     }
 
     /// Export current media set and drop PoolWriterState (close drive)
-    pub fn export_media_set(&mut self, worker: &WorkerTask) -> Result<(), Error> {
+    pub fn export_media_set(&mut self) -> Result<(), Error> {
         let mut status = self.status.take();
 
         let (drive_config, _digest) = pbs_config::drive::config()?;
 
         if let Some((mut changer, _)) = media_changer(&drive_config, &self.drive_name)? {
             if let Some(ref mut status) = status {
-                task_log!(worker, "rewind media");
+                info!(tasklog = true, "rewind media");
                 // rewind first so that the unload command later does not run into a timeout
                 status.drive.rewind()?;
             }
@@ -165,23 +167,21 @@ impl PoolWriter {
                 let media = self.pool.lookup_media(media_uuid)?;
                 let label_text = media.label_text();
                 if let Some(slot) = changer.export_media(label_text)? {
-                    task_log!(
-                        worker,
-                        "exported media '{}' to import/export slot {}",
-                        label_text,
-                        slot
+                    info!(
+                        tasklog = true,
+                        "exported media '{}' to import/export slot {}", label_text, slot
                     );
                 } else {
-                    task_warn!(
-                        worker,
+                    warn!(
+                        tasklog = true,
                         "export failed - media '{}' is not online or in different drive",
                         label_text
                     );
                 }
             }
         } else if let Some(mut status) = status {
-            task_log!(
-                worker,
+            info!(
+                tasklog = true,
                 "standalone drive - ejecting media instead of export"
             );
             status.drive.eject_media()?;
@@ -224,15 +224,15 @@ impl PoolWriter {
             return Ok(media_uuid);
         }
 
-        task_log!(
-            worker,
+        info!(
+            tasklog = true,
             "allocated new writable media '{}'",
             media.label_text()
         );
 
         if let Some(PoolWriterState { mut drive, .. }) = self.status.take() {
             if last_media_uuid.is_some() {
-                task_log!(worker, "eject current media");
+                info!(tasklog = true, "eject current media");
                 drive.eject_media()?;
             }
         }
@@ -250,7 +250,7 @@ impl PoolWriter {
         // test for critical tape alert flags
         if let Ok(alert_flags) = drive.tape_alert_flags() {
             if !alert_flags.is_empty() {
-                task_log!(worker, "TapeAlertFlags: {:?}", alert_flags);
+                info!(tasklog = true, "TapeAlertFlags: {:?}", alert_flags);
                 if tape_alert_flags_critical(alert_flags) {
                     self.pool.set_media_status_damaged(&media_uuid)?;
                     bail!(
@@ -261,12 +261,8 @@ impl PoolWriter {
             }
         }
 
-        let (catalog, is_new_media) = update_media_set_label(
-            worker,
-            drive.as_mut(),
-            old_media_id.media_set_label,
-            media.id(),
-        )?;
+        let (catalog, is_new_media) =
+            update_media_set_label(drive.as_mut(), old_media_id.media_set_label, media.id())?;
 
         self.catalog_set.lock().unwrap().append_catalog(catalog)?;
 
@@ -288,7 +284,7 @@ impl PoolWriter {
 
         if is_new_media {
             // add catalogs from previous media
-            self.append_media_set_catalogs(worker)?;
+            self.append_media_set_catalogs()?;
         }
 
         self.used_tapes.insert(media_uuid.clone());
@@ -308,12 +304,12 @@ impl PoolWriter {
     // Check it tape is loaded, then move to EOM (if not already there)
     //
     // Returns the tape position at EOM.
-    fn prepare_tape_write(status: &mut PoolWriterState, worker: &WorkerTask) -> Result<u64, Error> {
+    fn prepare_tape_write(status: &mut PoolWriterState) -> Result<u64, Error> {
         if !status.at_eom {
-            task_log!(worker, "moving to end of media");
+            info!(tasklog = true, "moving to end of media");
             status.drive.move_to_eom(true)?;
             status.at_eom = true;
-            task_log!(worker, "arrived at end of media");
+            info!(tasklog = true, "arrived at end of media");
         }
 
         let current_file_number = status.drive.current_file_number()?;
@@ -334,7 +330,7 @@ impl PoolWriter {
     /// on the media (return value 'Ok(false, _)'). In that case, the
     /// archive is marked incomplete. The caller should mark the media
     /// as full and try again using another media.
-    pub fn append_catalog_archive(&mut self, worker: &WorkerTask) -> Result<bool, Error> {
+    pub fn append_catalog_archive(&mut self) -> Result<bool, Error> {
         let catalog_magic = self.catalog_version();
 
         let status = match self.status {
@@ -342,7 +338,7 @@ impl PoolWriter {
             None => bail!("PoolWriter - no media loaded"),
         };
 
-        Self::prepare_tape_write(status, worker)?;
+        Self::prepare_tape_write(status)?;
 
         let catalog_set = self.catalog_set.lock().unwrap();
 
@@ -385,7 +381,7 @@ impl PoolWriter {
     }
 
     // Append catalogs for all previous media in set (without last)
-    fn append_media_set_catalogs(&mut self, worker: &WorkerTask) -> Result<(), Error> {
+    fn append_media_set_catalogs(&mut self) -> Result<(), Error> {
         let media_set = self.pool.current_media_set();
 
         let mut media_list = media_set.media_list();
@@ -401,7 +397,7 @@ impl PoolWriter {
             None => bail!("PoolWriter - no media loaded"),
         };
 
-        Self::prepare_tape_write(status, worker)?;
+        Self::prepare_tape_write(status)?;
 
         for (seq_nr, uuid) in media_list.iter().enumerate() {
             let uuid = match uuid {
@@ -413,7 +409,7 @@ impl PoolWriter {
 
             let mut file = Self::open_catalog_file(uuid)?;
 
-            task_log!(worker, "write catalog for previous media: {}", uuid);
+            info!(tasklog = true, "write catalog for previous media: {}", uuid);
 
             if tape_write_catalog(
                 writer.as_mut(),
@@ -444,7 +440,6 @@ impl PoolWriter {
     /// media.
     pub fn append_snapshot_archive(
         &mut self,
-        worker: &WorkerTask,
         snapshot_reader: &SnapshotReader,
     ) -> Result<(bool, usize), Error> {
         let status = match self.status {
@@ -452,7 +447,7 @@ impl PoolWriter {
             None => bail!("PoolWriter - no media loaded"),
         };
 
-        let current_file_number = Self::prepare_tape_write(status, worker)?;
+        let current_file_number = Self::prepare_tape_write(status)?;
 
         let (done, bytes_written) = {
             let mut writer: Box<dyn TapeWrite> = status.drive.write_file()?;
@@ -498,7 +493,7 @@ impl PoolWriter {
             None => bail!("PoolWriter - no media loaded"),
         };
 
-        let current_file_number = Self::prepare_tape_write(status, worker)?;
+        let current_file_number = Self::prepare_tape_write(status)?;
 
         let writer = status.drive.write_file()?;
 
@@ -510,8 +505,8 @@ impl PoolWriter {
         status.bytes_written += bytes_written;
 
         let elapsed = start_time.elapsed()?.as_secs_f64();
-        task_log!(
-            worker,
+        info!(
+            tasklog = true,
             "wrote {} chunks ({:.2} MB at {:.2} MB/s)",
             saved_chunks.len(),
             bytes_written as f64 / 1_000_000.0,
@@ -591,7 +586,7 @@ fn write_chunk_archive<'a>(
         }
 
         if writer.bytes_written() > max_size {
-            //task_log!(worker, "Chunk Archive max size reached, closing archive");
+            //info!(tasklog = true, "Chunk Archive max size reached, closing archive");
             break;
         }
     }
@@ -605,7 +600,6 @@ fn write_chunk_archive<'a>(
 // set label does not match the expected media set, overwrite the
 // media set label.
 fn update_media_set_label(
-    worker: &WorkerTask,
     drive: &mut dyn TapeDriver,
     old_set: Option<MediaSetLabel>,
     media_id: &MediaId,
@@ -634,7 +628,7 @@ fn update_media_set_label(
 
     let new_media = match old_set {
         None => {
-            task_log!(worker, "writing new media set label");
+            info!(tasklog = true, "writing new media set label");
             drive.write_media_set_label(new_set, key_config.as_ref())?;
             media_catalog = MediaCatalog::overwrite(TAPE_STATUS_DIR, media_id, false)?;
             true
@@ -658,8 +652,8 @@ fn update_media_set_label(
 
                 false
             } else {
-                task_log!(
-                    worker,
+                info!(
+                    tasklog = true,
                     "writing new media set label (overwrite '{}/{}')",
                     media_set_label.uuid.to_string(),
                     media_set_label.seq_nr,
diff --git a/tests/worker-task-abort.rs b/tests/worker-task-abort.rs
index eda86f01..b022b38f 100644
--- a/tests/worker-task-abort.rs
+++ b/tests/worker-task-abort.rs
@@ -7,24 +7,25 @@ extern crate tokio;
 
 use proxmox_lang::try_block;
 use proxmox_sys::fs::CreateOptions;
-use proxmox_sys::{task_log, WorkerTaskContext};
+use proxmox_sys::WorkerTaskContext;
 
 use pbs_api_types::{Authid, UPID};
 
 use proxmox_rest_server::{CommandSocket, WorkerTask};
+use tracing::info;
 
 fn garbage_collection(worker: &WorkerTask) -> Result<(), Error> {
-    task_log!(worker, "start garbage collection");
+    info!(tasklog = true, "start garbage collection");
 
     for i in 0..50 {
         worker.check_abort()?;
 
-        task_log!(worker, "progress {}", i);
+        info!(tasklog = true, "progress {}", i);
 
         std::thread::sleep(std::time::Duration::from_millis(10));
     }
 
-    task_log!(worker, "end garbage collection");
+    info!(tasklog = true, "end garbage collection");
 
     Ok(())
 }
-- 
2.39.2
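The call-site pattern in this patch — `task_log!(worker, …)` becoming `info!(tasklog = true, …)` — works because the file layer looks the `FileLogger` up in task-local storage instead of threading a `worker` reference through every function. The following is a simplified, std-only sketch of that lookup (illustrative only: it uses `thread_local!` and an in-memory line buffer in place of the real `tokio::task_local!` storage and `FileLogger`):

```rust
use std::cell::RefCell;

// Stand-in for the task-local FileLogger: an in-memory line buffer.
// (The real code uses `tokio::task_local!` and writes to the task log file.)
thread_local! {
    static LOGGER: RefCell<Option<Vec<String>>> = RefCell::new(None);
}

/// Install a fresh logger, as `WorkerTask::spawn`/`new_thread` would.
fn install_logger() {
    LOGGER.with(|l| *l.borrow_mut() = Some(Vec::new()));
}

/// Roughly what the file layer's `on_event` does: fetch the logger from
/// task-local storage and append the message; report if none is set.
fn log_tasklog(msg: &str) -> bool {
    LOGGER.with(|l| match l.borrow_mut().as_mut() {
        Some(lines) => {
            lines.push(msg.to_string());
            true
        }
        None => false, // corresponds to the "FileLogger not found" error path
    })
}

fn collected() -> Vec<String> {
    LOGGER.with(|l| l.borrow().clone().unwrap_or_default())
}

fn main() {
    assert!(!log_tasklog("dropped")); // no logger installed yet
    install_logger();
    log_tasklog("start garbage collection");
    println!("{:?}", collected());
}
```

The upside of this design is visible throughout the diff above: functions like `eject_media` or `prepare_tape_write` no longer need a `worker` parameter at all.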

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra
  2023-10-11 14:01 [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing Gabriel Goller
  2023-10-11 14:01 ` [pbs-devel] [RFC proxmox-backup 1/2] log: removed task_log! macro and moved to tracing Gabriel Goller
@ 2023-10-11 14:01 ` Gabriel Goller
  2023-10-13 12:36   ` Gabriel Goller
  2023-10-18 13:26   ` Dominik Csapak
  2023-10-18 13:12 ` [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing Dominik Csapak
  2 siblings, 2 replies; 14+ messages in thread
From: Gabriel Goller @ 2023-10-11 14:01 UTC (permalink / raw)
  To: pbs-devel

Added the `proxmox-log` crate, which contains the new logging
infrastructure. It exports the `init_logger` function, which builds the
`tracing` subscriber from the default registry and two custom layers.
The first is the syslog layer, built on the `syslog` crate. The second
is the file layer, which reuses the original `FileLogger` and writes to
a file (the task log). The file layer stores the `FileLogger` in a
`tokio::task_local` variable, which gets initialized on `spawn` or
`new_thread` in `WorkerTask`.

Signed-off-by: Gabriel Goller <g.goller@proxmox.com>
---
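As a rough illustration of the filtering described above (a std-only model, not the actual proxmox-log code): the registry hands every event to both layers, the file layer keeps only events carrying a `tasklog` field, and the syslog layer keeps events at or below the configured level. All names in the sketch are made up for illustration:

```rust
// Illustrative only: models the two with_filter() calls in init_logger().
// Level ordering follows tracing (Error < Warn < Info < Debug).

#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
enum Level {
    Error,
    Warn,
    Info,
    Debug,
}

struct Event<'a> {
    level: Level,
    has_tasklog_field: bool, // set by `info!(tasklog = true, ...)`
    msg: &'a str,
}

// The registry hands each event to both layers; each layer's filter
// decides independently whether it handles the event.
fn dispatch(ev: &Event, syslog_max: Level, file: &mut Vec<String>, syslog: &mut Vec<String>) {
    // file layer: filter_fn(|m| m.fields().field("tasklog").is_some())
    if ev.has_tasklog_field {
        file.push(ev.msg.to_string());
    }
    // syslog layer: .with_filter(log_level)
    if ev.level <= syslog_max {
        syslog.push(ev.msg.to_string());
    }
}

fn main() {
    let (mut file, mut syslog) = (Vec::new(), Vec::new());
    let ev = Event { level: Level::Info, has_tasklog_field: true, msg: "task line" };
    dispatch(&ev, Level::Info, &mut file, &mut syslog);
    println!("file={file:?} syslog={syslog:?}");
}
```

A consequence of this split is that a plain `info!("…")` without the `tasklog` field still reaches syslog but never the task log file.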
 Cargo.toml                                    |   6 +
 proxmox-log/Cargo.toml                        |  23 ++++
 proxmox-log/debian/changelog                  |   5 +
 proxmox-log/debian/control                    |  53 +++++++++
 proxmox-log/debian/copyright                  |  18 +++
 proxmox-log/debian/debcargo.toml              |   7 ++
 proxmox-log/src/file_layer.rs                 |  89 +++++++++++++++
 .../src/file_logger.rs                        |   2 +-
 proxmox-log/src/lib.rs                        |  36 ++++++
 proxmox-log/src/syslog_layer.rs               |  78 +++++++++++++
 proxmox-rest-server/Cargo.toml                |   2 +
 proxmox-rest-server/src/api_config.rs         |   3 +-
 proxmox-rest-server/src/lib.rs                |   3 -
 proxmox-rest-server/src/rest.rs               |   4 +-
 proxmox-rest-server/src/worker_task.rs        | 108 ++++++++++--------
 proxmox-router/Cargo.toml                     |   2 +
 proxmox-sys/src/worker_task_context.rs        |  47 --------
 17 files changed, 382 insertions(+), 104 deletions(-)
 create mode 100644 proxmox-log/Cargo.toml
 create mode 100644 proxmox-log/debian/changelog
 create mode 100644 proxmox-log/debian/control
 create mode 100644 proxmox-log/debian/copyright
 create mode 100644 proxmox-log/debian/debcargo.toml
 create mode 100644 proxmox-log/src/file_layer.rs
 rename {proxmox-rest-server => proxmox-log}/src/file_logger.rs (98%)
 create mode 100644 proxmox-log/src/lib.rs
 create mode 100644 proxmox-log/src/syslog_layer.rs

diff --git a/Cargo.toml b/Cargo.toml
index 265f5c9..246d6e4 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -13,6 +13,7 @@ members = [
     "proxmox-io",
     "proxmox-lang",
     "proxmox-ldap",
+    "proxmox-log",
     "proxmox-login",
     "proxmox-metrics",
     "proxmox-notify",
@@ -81,10 +82,14 @@ tokio = "1.6"
 tokio-openssl = "0.6.1"
 tokio-stream = "0.1.0"
 tower-service = "0.3.0"
+tracing = "0.1"
+tracing-log = { version = "0.1.3", default-features = false }
+tracing-subscriber = "0.3.16"
 url = "2.2"
 walkdir = "2"
 webauthn-rs = "0.3"
 zstd = { version = "0.12", features = [ "bindgen" ] }
+syslog = "6"
 
 # workspace dependencies
 proxmox-api-macro = { version = "1.0.6", path = "proxmox-api-macro" }
@@ -95,6 +100,7 @@ proxmox-http-error = { version = "0.1.0", path = "proxmox-http-error" }
 proxmox-human-byte = { version = "0.1.0", path = "proxmox-human-byte" }
 proxmox-io = { version = "1.0.0", path = "proxmox-io" }
 proxmox-lang = { version = "1.1", path = "proxmox-lang" }
+proxmox-log= { version = "0.1.0", path = "proxmox-log" }
 proxmox-login = { version = "0.1.0", path = "proxmox-login" }
 proxmox-rest-server = { version = "0.4.0", path = "proxmox-rest-server" }
 proxmox-router = { version = "2.1.1", path = "proxmox-router" }
diff --git a/proxmox-log/Cargo.toml b/proxmox-log/Cargo.toml
new file mode 100644
index 0000000..e05b0be
--- /dev/null
+++ b/proxmox-log/Cargo.toml
@@ -0,0 +1,23 @@
+[package]
+name = "proxmox-log"
+version = "0.1.0"
+authors.workspace = true
+edition.workspace = true
+license.workspace = true
+repository.workspace = true
+description = "Logging infrastructure for proxmox"
+
+exclude.workspace = true
+
+[dependencies]
+anyhow.workspace = true
+syslog.workspace = true
+nix.workspace = true
+log.workspace = true
+tracing.workspace = true
+tracing-subscriber.workspace = true
+tracing-log.workspace = true
+tokio = { workspace = true, features = ["rt-multi-thread"] }
+proxmox-time.workspace = true
+proxmox-sys.workspace = true
+
diff --git a/proxmox-log/debian/changelog b/proxmox-log/debian/changelog
new file mode 100644
index 0000000..aaf8073
--- /dev/null
+++ b/proxmox-log/debian/changelog
@@ -0,0 +1,5 @@
+rust-proxmox-log (0.1.0-3) UNRELEASED; urgency=medium
+
+  * Initial release 
+
+ -- Gabriel Goller <ggoller@luna.proxmox.com>  Wed, 11 Oct 2023 15:13:58 +0200
diff --git a/proxmox-log/debian/control b/proxmox-log/debian/control
new file mode 100644
index 0000000..4f22ecb
--- /dev/null
+++ b/proxmox-log/debian/control
@@ -0,0 +1,53 @@
+Source: rust-proxmox-log
+Section: rust
+Priority: optional
+Build-Depends: debhelper (>= 12),
+ dh-cargo (>= 25),
+ cargo:native <!nocheck>,
+ rustc:native <!nocheck>,
+ libstd-rust-dev <!nocheck>,
+ librust-anyhow-1+default-dev <!nocheck>,
+ librust-log-0.4+default-dev (>= 0.4.17-~~) <!nocheck>,
+ librust-nix-0.26+default-dev (>= 0.26.1-~~) <!nocheck>,
+ librust-proxmox-sys-0.5+default-dev <!nocheck>,
+ librust-proxmox-time-1+default-dev (>= 1.1.4-~~) <!nocheck>,
+ librust-syslog-6+default-dev <!nocheck>,
+ librust-tokio-1+default-dev (>= 1.6-~~) <!nocheck>,
+ librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~) <!nocheck>,
+ librust-tracing-0.1+default-dev <!nocheck>,
+ librust-tracing-log-0.1-dev (>= 0.1.3-~~) <!nocheck>,
+ librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~) <!nocheck>
+Maintainer: Proxmox Support Team <support@proxmox.com>
+Standards-Version: 4.6.1
+Vcs-Git: git://git.proxmox.com/git/proxmox.git
+Vcs-Browser: https://git.proxmox.com/?p=proxmox.git
+X-Cargo-Crate: proxmox-log
+Rules-Requires-Root: no
+
+Package: librust-proxmox-log-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-anyhow-1+default-dev,
+ librust-log-0.4+default-dev (>= 0.4.17-~~),
+ librust-nix-0.26+default-dev (>= 0.26.1-~~),
+ librust-proxmox-sys-0.5+default-dev,
+ librust-proxmox-time-1+default-dev (>= 1.1.4-~~),
+ librust-syslog-6+default-dev,
+ librust-tokio-1+default-dev (>= 1.6-~~),
+ librust-tokio-1+rt-multi-thread-dev (>= 1.6-~~),
+ librust-tracing-0.1+default-dev,
+ librust-tracing-log-0.1-dev (>= 0.1.3-~~),
+ librust-tracing-subscriber-0.3+default-dev (>= 0.3.16-~~)
+Provides:
+ librust-proxmox-log+default-dev (= ${binary:Version}),
+ librust-proxmox-log-0-dev (= ${binary:Version}),
+ librust-proxmox-log-0+default-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1+default-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1.0-dev (= ${binary:Version}),
+ librust-proxmox-log-0.1.0+default-dev (= ${binary:Version})
+Description: Logging infrastructure for proxmox - Rust source code
+ This package contains the source for the Rust proxmox-log crate, packaged by
+ debcargo for use with cargo and dh-cargo.
diff --git a/proxmox-log/debian/copyright b/proxmox-log/debian/copyright
new file mode 100644
index 0000000..0d9eab3
--- /dev/null
+++ b/proxmox-log/debian/copyright
@@ -0,0 +1,18 @@
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
+
+Files:
+ *
+Copyright: 2019 - 2023 Proxmox Server Solutions GmbH <support@proxmox.com>
+License: AGPL-3.0-or-later
+ This program is free software: you can redistribute it and/or modify it under
+ the terms of the GNU Affero General Public License as published by the Free
+ Software Foundation, either version 3 of the License, or (at your option) any
+ later version.
+ .
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+ FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
+ details.
+ .
+ You should have received a copy of the GNU Affero General Public License along
+ with this program. If not, see <https://www.gnu.org/licenses/>.
diff --git a/proxmox-log/debian/debcargo.toml b/proxmox-log/debian/debcargo.toml
new file mode 100644
index 0000000..b7864cd
--- /dev/null
+++ b/proxmox-log/debian/debcargo.toml
@@ -0,0 +1,7 @@
+overlay = "."
+crate_src_path = ".."
+maintainer = "Proxmox Support Team <support@proxmox.com>"
+
+[source]
+vcs_git = "git://git.proxmox.com/git/proxmox.git"
+vcs_browser = "https://git.proxmox.com/?p=proxmox.git"
diff --git a/proxmox-log/src/file_layer.rs b/proxmox-log/src/file_layer.rs
new file mode 100644
index 0000000..1a167f4
--- /dev/null
+++ b/proxmox-log/src/file_layer.rs
@@ -0,0 +1,89 @@
+use std::cell::RefCell;
+use tracing::error;
+use tracing::field::Field;
+use tracing::field::Visit;
+use tracing::Event;
+use tracing::Level;
+use tracing::Subscriber;
+use tracing_subscriber::layer::Context;
+use tracing_subscriber::Layer;
+
+use crate::FileLogger;
+
+tokio::task_local! {
+    pub static LOGGER: RefCell<Option<FileLogger>>;
+    pub static WARN_COUNTER: RefCell<Option<u64>>;
+}
+
+pub struct FilelogLayer {}
+
+impl FilelogLayer {
+    pub fn new() -> Self {
+        Self {}
+    }
+}
+
+impl<S: Subscriber> Layer<S> for FilelogLayer {
+    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
+        let mut buf = String::new();
+
+        event.record(&mut EventVisitor::new(&mut buf));
+
+        let logger_exists = LOGGER.try_with(|logger| {
+            let mut l = logger.borrow_mut();
+            let log = l.as_mut();
+            if let Some(l) = log {
+                log_to_file(l, event.metadata().level(), buf);
+            } else {
+                error!("FileLogger not found");
+            }
+        });
+        if let Err(e) = logger_exists {
+            error!(
+                "Error getting the logger from the task-local storage: {}",
+                e
+            )
+        }
+    }
+}
+
+fn log_to_file(logger: &mut FileLogger, level: &Level, buf: String) {
+    match level {
+        &Level::ERROR | &Level::WARN => {
+            let counter_exists = WARN_COUNTER.try_with(|counter| {
+                counter.replace_with(|c| {
+                    if let Some(v) = c {
+                        Some(v.to_owned() + 1)
+                    } else {
+                        None
+                    }
+                });
+                logger.log(buf.clone());
+            });
+            if let Err(e) = counter_exists {
+                error!("Error getting the warning_counter for the logger: {}", e)
+            }
+        }
+        &Level::INFO => logger.log(buf.clone()),
+        &Level::DEBUG => logger.log(format!("DEBUG: {}", buf)),
+        &Level::TRACE => logger.log(format!("TRACE: {}", buf)),
+    };
+}
+
+struct EventVisitor<'a> {
+    buf: &'a mut String,
+}
+
+impl<'a> EventVisitor<'a> {
+    fn new(buf: &'a mut String) -> Self {
+        Self { buf }
+    }
+}
+
+impl Visit for EventVisitor<'_> {
+    fn record_debug(&mut self, field: &Field, value: &dyn std::fmt::Debug) {
+        if field.name() == "message" {
+            self.buf.push_str(&format!("{:?}", value));
+        }
+    }
+}
diff --git a/proxmox-rest-server/src/file_logger.rs b/proxmox-log/src/file_logger.rs
similarity index 98%
rename from proxmox-rest-server/src/file_logger.rs
rename to proxmox-log/src/file_logger.rs
index 2bb1fac..c7e1d64 100644
--- a/proxmox-rest-server/src/file_logger.rs
+++ b/proxmox-log/src/file_logger.rs
@@ -30,7 +30,7 @@ pub struct FileLogOptions {
 /// #### Example:
 /// ```
 /// # use anyhow::{bail, format_err, Error};
-/// use proxmox_rest_server::{flog, FileLogger, FileLogOptions};
+/// use proxmox_log::{flog, FileLogger, FileLogOptions};
 ///
 /// # std::fs::remove_file("test.log");
 /// let options = FileLogOptions {
diff --git a/proxmox-log/src/lib.rs b/proxmox-log/src/lib.rs
new file mode 100644
index 0000000..0c101b7
--- /dev/null
+++ b/proxmox-log/src/lib.rs
@@ -0,0 +1,36 @@
+//! # Proxmox logging infrastructure
+use crate::syslog_layer::SyslogLayer;
+use file_layer::FilelogLayer;
+use std::env;
+use tracing_log::LogTracer;
+use tracing_subscriber::filter::{filter_fn, LevelFilter};
+use tracing_subscriber::prelude::*;
+
+pub mod file_layer;
+mod file_logger;
+pub use file_logger::{FileLogOptions, FileLogger};
+
+mod syslog_layer;
+
+pub fn init_logger(
+    env_var_name: &str,
+    default_log_level: LevelFilter,
+    application_name: &str,
+) -> Result<(), anyhow::Error> {
+    let mut log_level = default_log_level;
+    if let Ok(v) = env::var(env_var_name) {
+        if let Ok(l) = v.parse::<LevelFilter>() {
+            log_level = l;
+        }
+    }
+    let registry = tracing_subscriber::registry()
+        .with(FilelogLayer::new().with_filter(filter_fn(|metadata| {
+            metadata.fields().field("tasklog").is_some()
+        })))
+        .with(SyslogLayer::new(application_name.to_string()).with_filter(log_level));
+
+    tracing::subscriber::set_global_default(registry)?;
+
+    LogTracer::init()?;
+    Ok(())
+}
diff --git a/proxmox-log/src/syslog_layer.rs b/proxmox-log/src/syslog_layer.rs
new file mode 100644
index 0000000..50ea145
--- /dev/null
+++ b/proxmox-log/src/syslog_layer.rs
@@ -0,0 +1,78 @@
+use std::fmt;
+use std::sync::Arc;
+use std::sync::Mutex;
+use syslog::Formatter3164;
+use syslog::Logger;
+use syslog::LoggerBackend;
+use tracing::field::Field;
+use tracing::field::Visit;
+use tracing::Event;
+use tracing::Level;
+use tracing::Subscriber;
+use tracing_subscriber::layer::Context;
+use tracing_subscriber::Layer;
+
+pub struct SyslogLayer {
+    logger: Arc<Mutex<Logger<LoggerBackend, Formatter3164>>>,
+}
+
+impl SyslogLayer {
+    pub fn new(application_name: String) -> Self {
+        let mut formatter = Formatter3164::default();
+        formatter.facility = syslog::Facility::LOG_DAEMON;
+        formatter.process = application_name;
+
+        // unwrap because if the (syslog) logging is not working, we shut down
+        let backend = syslog::unix(formatter.clone())
+            .map(|logger: Logger<LoggerBackend, Formatter3164>| logger.backend)
+            .map_err(|e| {
+                anyhow::Error::new(std::io::Error::new(
+                    std::io::ErrorKind::Other,
+                    e.description(),
+                ))
+            })
+            .unwrap();
+
+        let logger = Arc::new(Mutex::new(Logger { formatter, backend }));
+
+        Self { logger }
+    }
+}
+
+impl<S: Subscriber> Layer<S> for SyslogLayer {
+    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
+        let mut buf = String::new();
+
+        event.record(&mut EventVisitor::new(&mut buf));
+
+        let mut logger = self.logger.lock().unwrap();
+        let success = match event.metadata().level() {
+            &Level::ERROR => logger.err(buf),
+            &Level::WARN => logger.warning(buf),
+            &Level::INFO => logger.info(buf),
+            &Level::DEBUG => logger.debug(buf),
+            &Level::TRACE => logger.debug(buf),
+        };
+        if let Err(err) = success {
+            log::error!("{}", err.description());
+        }
+    }
+}
+
+struct EventVisitor<'a> {
+    buf: &'a mut String,
+}
+
+impl<'a> EventVisitor<'a> {
+    fn new(buf: &'a mut String) -> Self {
+        Self { buf }
+    }
+}
+
+impl Visit for EventVisitor<'_> {
+    fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug) {
+        if field.name() == "message" {
+            self.buf.push_str(&format!("{:?}", value));
+        }
+    }
+}
diff --git a/proxmox-rest-server/Cargo.toml b/proxmox-rest-server/Cargo.toml
index 8daf1d3..a444941 100644
--- a/proxmox-rest-server/Cargo.toml
+++ b/proxmox-rest-server/Cargo.toml
@@ -34,6 +34,7 @@ tokio-openssl.workspace = true
 tokio-stream.workspace = true
 tower-service.workspace = true
 url.workspace = true
+tracing.workspace = true
 
 proxmox-async.workspace = true
 proxmox-compression.workspace = true
@@ -44,6 +45,7 @@ proxmox-router.workspace = true
 proxmox-schema = { workspace = true, features = [ "api-macro", "upid-api-impl" ] }
 proxmox-sys = { workspace = true, features = [ "logrotate", "timer" ] }
 proxmox-time.workspace = true
+proxmox-log.workspace = true
 
 [features]
 default = []
diff --git a/proxmox-rest-server/src/api_config.rs b/proxmox-rest-server/src/api_config.rs
index ad9a811..cf14c8f 100644
--- a/proxmox-rest-server/src/api_config.rs
+++ b/proxmox-rest-server/src/api_config.rs
@@ -9,11 +9,12 @@ use http::{HeaderMap, Method};
 use hyper::http::request::Parts;
 use hyper::{Body, Response};
 
+use proxmox_log::{FileLogOptions, FileLogger};
 use proxmox_router::{Router, RpcEnvironmentType, UserInformation};
 use proxmox_sys::fs::{create_path, CreateOptions};
 
 use crate::rest::Handler;
-use crate::{CommandSocket, FileLogOptions, FileLogger, RestEnvironment};
+use crate::{CommandSocket, RestEnvironment};
 
 /// REST server configuration
 pub struct ApiConfig {
diff --git a/proxmox-rest-server/src/lib.rs b/proxmox-rest-server/src/lib.rs
index bc5be01..9c16065 100644
--- a/proxmox-rest-server/src/lib.rs
+++ b/proxmox-rest-server/src/lib.rs
@@ -41,9 +41,6 @@ pub use state::*;
 mod command_socket;
 pub use command_socket::*;
 
-mod file_logger;
-pub use file_logger::{FileLogOptions, FileLogger};
-
 mod api_config;
 pub use api_config::{ApiConfig, AuthError, AuthHandler, IndexHandler};
 
diff --git a/proxmox-rest-server/src/rest.rs b/proxmox-rest-server/src/rest.rs
index 2ccd4d5..9ad01a5 100644
--- a/proxmox-rest-server/src/rest.rs
+++ b/proxmox-rest-server/src/rest.rs
@@ -31,10 +31,10 @@ use proxmox_schema::{ObjectSchemaType, ParameterSchema};
 
 use proxmox_async::stream::AsyncReaderStream;
 use proxmox_compression::{DeflateEncoder, Level};
+use proxmox_log::FileLogger;
 
 use crate::{
-    formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, FileLogger,
-    RestEnvironment,
+    formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, RestEnvironment,
 };
 
 extern "C" {
diff --git a/proxmox-rest-server/src/worker_task.rs b/proxmox-rest-server/src/worker_task.rs
index 54b6bc2..f7b8396 100644
--- a/proxmox-rest-server/src/worker_task.rs
+++ b/proxmox-rest-server/src/worker_task.rs
@@ -1,3 +1,4 @@
+use std::cell::RefCell;
 use std::collections::{HashMap, VecDeque};
 use std::fs::File;
 use std::io::{BufRead, BufReader, Read, Write};
@@ -12,21 +13,23 @@ use futures::*;
 use lazy_static::lazy_static;
 use nix::fcntl::OFlag;
 use once_cell::sync::OnceCell;
+use proxmox_log::file_layer::{LOGGER, WARN_COUNTER};
 use serde::{Deserialize, Serialize};
 use serde_json::{json, Value};
 use tokio::signal::unix::SignalKind;
 use tokio::sync::oneshot;
+use tracing::{info, warn};
 
 use proxmox_lang::try_block;
+use proxmox_log::{FileLogOptions, FileLogger};
 use proxmox_schema::upid::UPID;
 use proxmox_sys::fs::{atomic_open_or_create_file, create_path, replace_file, CreateOptions};
 use proxmox_sys::linux::procfs;
-use proxmox_sys::task_warn;
 
 use proxmox_sys::logrotate::{LogRotate, LogRotateFiles};
 use proxmox_sys::WorkerTaskContext;
 
-use crate::{CommandSocket, FileLogOptions, FileLogger};
+use crate::CommandSocket;
 
 struct TaskListLockGuard(File);
 
@@ -274,7 +277,7 @@ pub fn rotate_task_log_archive(
 
 /// removes all task logs that are older than the oldest task entry in the
 /// task archive
-pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Result<(), Error> {
+pub fn cleanup_old_tasks(compressed: bool) -> Result<(), Error> {
     let setup = worker_task_setup()?;
 
     let _lock = setup.lock_task_list_files(true)?;
@@ -312,7 +315,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
                 Ok(files) => files,
                 Err(err) if err.kind() == std::io::ErrorKind::NotFound => continue,
                 Err(err) => {
-                    task_warn!(worker, "could not check task logs in '{:02X}': {}", i, err);
+                    warn!(
+                        tasklog = true,
+                        "could not check task logs in '{:02X}': {}", i, err
+                    );
                     continue;
                 }
             };
@@ -320,11 +326,9 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
                 let file = match file {
                     Ok(file) => file,
                     Err(err) => {
-                        task_warn!(
-                            worker,
-                            "could not check some task log in '{:02X}': {}",
-                            i,
-                            err
+                        warn!(
+                            tasklog = true,
+                            "could not check some task log in '{:02X}': {}", i, err
                         );
                         continue;
                     }
@@ -334,7 +338,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
                 let modified = match get_modified(file) {
                     Ok(modified) => modified,
                     Err(err) => {
-                        task_warn!(worker, "error getting mtime for '{:?}': {}", path, err);
+                        warn!(
+                            tasklog = true,
+                            "error getting mtime for '{:?}': {}", path, err
+                        );
                         continue;
                     }
                 };
@@ -344,7 +351,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
                         Ok(()) => {}
                         Err(err) if err.kind() == std::io::ErrorKind::NotFound => {}
                         Err(err) => {
-                            task_warn!(worker, "could not remove file '{:?}': {}", path, err)
+                            warn!(
+                                tasklog = true,
+                                "could not remove file '{:?}': {}", path, err
+                            )
                         }
                     }
                 }
@@ -796,9 +806,7 @@ impl std::fmt::Display for WorkerTask {
 }
 
 struct WorkerTaskData {
-    logger: FileLogger,
     progress: f64, // 0..1
-    warn_count: u64,
     pub abort_listeners: Vec<oneshot::Sender<()>>,
 }
 
@@ -808,7 +816,7 @@ impl WorkerTask {
         worker_id: Option<String>,
         auth_id: String,
         to_stdout: bool,
-    ) -> Result<Arc<Self>, Error> {
+    ) -> Result<(Arc<Self>, FileLogger), Error> {
         let setup = worker_task_setup()?;
 
         let upid = UPID::new(worker_type, worker_id, auth_id)?;
@@ -842,9 +850,7 @@ impl WorkerTask {
             upid: upid.clone(),
             abort_requested: AtomicBool::new(false),
             data: Mutex::new(WorkerTaskData {
-                logger,
                 progress: 0.0,
-                warn_count: 0,
                 abort_listeners: vec![],
             }),
         });
@@ -858,7 +864,7 @@ impl WorkerTask {
 
         setup.update_active_workers(Some(&upid))?;
 
-        Ok(worker)
+        Ok((worker, logger))
     }
 
     /// Spawn a new tokio task/future.
@@ -873,12 +879,21 @@ impl WorkerTask {
         F: Send + 'static + FnOnce(Arc<WorkerTask>) -> T,
         T: Send + 'static + Future<Output = Result<(), Error>>,
     {
-        let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
+        let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
         let upid_str = worker.upid.to_string();
         let f = f(worker.clone());
+
+        let logger = RefCell::new(Some(logger));
+        let counter = RefCell::new(Some(0));
         tokio::spawn(async move {
-            let result = f.await;
-            worker.log_result(&result);
+            LOGGER.scope(logger, async move {
+                WARN_COUNTER.scope(counter, async move {
+                    let result = f.await;
+                    worker.log_result(&result);
+                })
+                .await;
+            })
+            .await;
         });
 
         Ok(upid_str)
@@ -895,22 +910,27 @@ impl WorkerTask {
     where
         F: Send + UnwindSafe + 'static + FnOnce(Arc<WorkerTask>) -> Result<(), Error>,
     {
-        let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
+        let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
         let upid_str = worker.upid.to_string();
 
         let _child = std::thread::Builder::new()
             .name(upid_str.clone())
             .spawn(move || {
-                let worker1 = worker.clone();
-                let result = match std::panic::catch_unwind(move || f(worker1)) {
-                    Ok(r) => r,
-                    Err(panic) => match panic.downcast::<&str>() {
-                        Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
-                        Err(_) => Err(format_err!("worker panicked: unknown type.")),
-                    },
-                };
+                LOGGER.sync_scope(RefCell::new(Some(logger)), || {
+                    WARN_COUNTER.sync_scope(RefCell::new(Some(0)), || {
+                        let worker1 = worker.clone();
+
+                        let result = match std::panic::catch_unwind(move || f(worker1)) {
+                            Ok(r) => r,
+                            Err(panic) => match panic.downcast::<&str>() {
+                                Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
+                                Err(_) => Err(format_err!("worker panicked: unknown type.")),
+                            },
+                        };
 
-                worker.log_result(&result);
+                        worker.log_result(&result);
+                    });
+                });
             });
 
         Ok(upid_str)
@@ -918,7 +938,13 @@ impl WorkerTask {
 
     /// create state from self and a result
     pub fn create_state(&self, result: &Result<(), Error>) -> TaskState {
-        let warn_count = self.data.lock().unwrap().warn_count;
+        let mut warn_count: u64 = 0;
+
+        let _ = WARN_COUNTER.try_with(|counter| {
+            if let Some(v) = counter.borrow_mut().as_mut() {
+                warn_count = v.to_owned();
+            }
+        });
 
         let endtime = proxmox_time::epoch_i64();
 
@@ -949,15 +975,7 @@ impl WorkerTask {
 
     /// Log a message.
     pub fn log_message<S: AsRef<str>>(&self, msg: S) {
-        let mut data = self.data.lock().unwrap();
-        data.logger.log(msg);
-    }
-
-    /// Log a message as warning.
-    pub fn log_warning<S: AsRef<str>>(&self, msg: S) {
-        let mut data = self.data.lock().unwrap();
-        data.logger.log(format!("WARN: {}", msg.as_ref()));
-        data.warn_count += 1;
+        info!(tasklog = true, "{}", msg.as_ref());
     }
 
     /// Set progress indicator
@@ -1020,16 +1038,6 @@ impl WorkerTaskContext for WorkerTask {
     fn fail_on_shutdown(&self) -> Result<(), Error> {
         crate::fail_on_shutdown()
     }
-
-    fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
-        match level {
-            log::Level::Error => self.log_warning(message.to_string()),
-            log::Level::Warn => self.log_warning(message.to_string()),
-            log::Level::Info => self.log_message(message.to_string()),
-            log::Level::Debug => self.log_message(format!("DEBUG: {}", message)),
-            log::Level::Trace => self.log_message(format!("TRACE: {}", message)),
-        }
-    }
 }
 
 /// Wait for a locally spanned worker task
diff --git a/proxmox-router/Cargo.toml b/proxmox-router/Cargo.toml
index c53ccd0..2460999 100644
--- a/proxmox-router/Cargo.toml
+++ b/proxmox-router/Cargo.toml
@@ -19,6 +19,8 @@ percent-encoding.workspace = true
 serde_json.workspace = true
 serde.workspace = true
 unicode-width ="0.1.8"
+syslog = "6"
+log = "0.4.17"
 
 # cli:
 tokio = { workspace = true, features = [], optional = true }
diff --git a/proxmox-sys/src/worker_task_context.rs b/proxmox-sys/src/worker_task_context.rs
index 2c86857..743ae53 100644
--- a/proxmox-sys/src/worker_task_context.rs
+++ b/proxmox-sys/src/worker_task_context.rs
@@ -26,9 +26,6 @@ pub trait WorkerTaskContext: Send + Sync {
         }
         Ok(())
     }
-
-    /// Create a log message for this task.
-    fn log(&self, level: log::Level, message: &std::fmt::Arguments);
 }
 
 /// Convenience implementation:
@@ -48,48 +45,4 @@ impl<T: WorkerTaskContext + ?Sized> WorkerTaskContext for std::sync::Arc<T> {
     fn fail_on_shutdown(&self) -> Result<(), Error> {
         <T as WorkerTaskContext>::fail_on_shutdown(self)
     }
-
-    fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
-        <T as WorkerTaskContext>::log(self, level, message)
-    }
-}
-
-/// Log an error to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_error {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Error, &format_args!($($fmt)+))
-    }};
-}
-
-/// Log a warning to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_warn {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Warn, &format_args!($($fmt)+))
-    }};
-}
-
-/// Log a message to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_log {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Info, &format_args!($($fmt)+))
-    }};
-}
-
-/// Log a debug message to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_debug {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Debug, &format_args!($($fmt)+))
-    }};
-}
-
-/// Log a trace message to a [WorkerTaskContext]
-#[macro_export]
-macro_rules! task_trace {
-    ($task:expr, $($fmt:tt)+) => {{
-        $crate::WorkerTaskContext::log(&*$task, log::Level::Trace, &format_args!($($fmt)+))
-    }};
 }
-- 
2.39.2





^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra
  2023-10-11 14:01 ` [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra Gabriel Goller
@ 2023-10-13 12:36   ` Gabriel Goller
  2023-10-18 13:26   ` Dominik Csapak
  1 sibling, 0 replies; 14+ messages in thread
From: Gabriel Goller @ 2023-10-13 12:36 UTC (permalink / raw)
  To: pbs-devel


Some things that are going to be fixed in v2:

On 10/11/23 16:01, Gabriel Goller wrote:
> [..]
> +
> +tokio::task_local! {
> +    pub static LOGGER: RefCell<Option<FileLogger>>;
> +    pub static WARN_COUNTER: RefCell<Option<u64>>;
> +}
Move the `task_local!` stuff to the lib.rs file; that is
probably cleaner...
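
As an aside, the per-task storage pattern quoted above can be sketched with a plain std `thread_local!` (helper names here are hypothetical; the real code uses tokio's `task_local!`, which additionally provides `scope`/`sync_scope` to tie the state to a task or thread):

```rust
use std::cell::RefCell;

thread_local! {
    // std analogue of the task-local WARN_COUNTER quoted above
    static WARN_COUNTER: RefCell<Option<u64>> = RefCell::new(None);
}

// called when a worker "task" starts: initialize the counter
fn enter_task() {
    WARN_COUNTER.with(|c| *c.borrow_mut() = Some(0));
}

// called on every warn/error log line
fn bump_warn() {
    WARN_COUNTER.with(|c| {
        if let Some(v) = c.borrow_mut().as_mut() {
            *v += 1;
        }
    });
}

fn warn_count() -> Option<u64> {
    WARN_COUNTER.with(|c| *c.borrow())
}

fn main() {
    enter_task();
    bump_warn();
    bump_warn();
    assert_eq!(warn_count(), Some(2));
}
```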
> [..]
> +
> +pub fn init_logger(
> +    env_var_name: &str,
> +    default_log_level: LevelFilter,
> +    application_name: &str,
> +) -> Result<(), anyhow::Error> {
> +    let mut log_level = default_log_level;
> +    if let Ok(v) = env::var(env_var_name) {
> +        if let Ok(l) = v.parse::<LevelFilter>() {
> +            log_level = l;
> +        }
> +    }
> +    let registry = tracing_subscriber::registry()
> +        .with(FilelogLayer::new().with_filter(filter_fn(|metadata| {
> +            metadata.fields().field("tasklog").is_some()
> +        })))
> +        .with(SyslogLayer::new(application_name.to_string()).with_filter(log_level));
> +
We also need the `.with_filter(log_level)` on the `FilelogLayer`, so
that the PBS_LOG variable also sets the log level on the task logs.
> +    tracing::subscriber::set_global_default(registry)?;
> +
> +    LogTracer::init()?;

Use `init_with_filter` here, so that we also set the log level on the
(underlying) `log` implementation. That way a `log::info!` call below
the configured level, for example, doesn't even forward the message to
`tracing`.

> +    Ok(())
> +}
> [..]
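
The env-var override in the quoted `init_logger` boils down to the following fallback pattern (std-only sketch; the `LevelFilter` enum here is an illustrative stand-in for tracing's type, and `PBS_LOG` is the variable mentioned in this thread):

```rust
use std::str::FromStr;

// illustrative stand-in for tracing_subscriber's LevelFilter
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum LevelFilter {
    Error,
    Warn,
    Info,
    Debug,
    Trace,
}

impl FromStr for LevelFilter {
    type Err = ();
    fn from_str(s: &str) -> Result<Self, ()> {
        match s.to_ascii_lowercase().as_str() {
            "error" => Ok(Self::Error),
            "warn" => Ok(Self::Warn),
            "info" => Ok(Self::Info),
            "debug" => Ok(Self::Debug),
            "trace" => Ok(Self::Trace),
            _ => Err(()),
        }
    }
}

// the env var wins only if it is set and parses; otherwise keep the default
fn effective_level(env_value: Option<&str>, default: LevelFilter) -> LevelFilter {
    env_value.and_then(|v| v.parse().ok()).unwrap_or(default)
}

fn main() {
    let raw = std::env::var("PBS_LOG").ok();
    let level = effective_level(raw.as_deref(), LevelFilter::Info);
    assert_eq!(effective_level(Some("debug"), LevelFilter::Info), LevelFilter::Debug);
    assert_eq!(effective_level(Some("bogus"), LevelFilter::Info), LevelFilter::Info);
    println!("effective level: {:?}", level);
}
```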



* Re: [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing
  2023-10-11 14:01 [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing Gabriel Goller
  2023-10-11 14:01 ` [pbs-devel] [RFC proxmox-backup 1/2] log: removed task_log! macro and moved to tracing Gabriel Goller
  2023-10-11 14:01 ` [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra Gabriel Goller
@ 2023-10-18 13:12 ` Dominik Csapak
  2023-10-23  9:09   ` Gabriel Goller
  2 siblings, 1 reply; 14+ messages in thread
From: Dominik Csapak @ 2023-10-18 13:12 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Gabriel Goller

hi, a few high-level comments on this series:

you implemented a 'tasklog = true' filter, which is nice, but there is no such filter for the syslog.
in general we'd want things to land either in the task log (preferred) or in the syslog,
not in both (e.g. for some tasks this pollutes the syslog unnecessarily)

so i'd like to also see a 'syslog = false' (or 'syslog = true') and maybe some
functionality to enable/disable that for a whole block (if that's even possible?)

rest of the comment in the relevant patch





* Re: [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra
  2023-10-11 14:01 ` [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra Gabriel Goller
  2023-10-13 12:36   ` Gabriel Goller
@ 2023-10-18 13:26   ` Dominik Csapak
  2023-10-23  8:56     ` Gabriel Goller
  1 sibling, 1 reply; 14+ messages in thread
From: Dominik Csapak @ 2023-10-18 13:26 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Gabriel Goller

i just glanced over the code, but here are a few comments inline

On 10/11/23 16:01, Gabriel Goller wrote:
[..snip..]
> diff --git a/proxmox-log/src/file_layer.rs b/proxmox-log/src/file_layer.rs
> new file mode 100644
> index 0000000..1a167f4
> --- /dev/null
> +++ b/proxmox-log/src/file_layer.rs
> @@ -0,0 +1,89 @@
> +use std::cell::RefCell;
> +use tracing::error;
> +use tracing::field::Field;
> +use tracing::field::Visit;
> +use tracing::Event;
> +use tracing::Level;
> +use tracing::Subscriber;
> +use tracing_subscriber::layer::Context;
> +use tracing_subscriber::Layer;
> +
> +use crate::FileLogger;
> +
> +tokio::task_local! {
> +    pub static LOGGER: RefCell<Option<FileLogger>>;
> +    pub static WARN_COUNTER: RefCell<Option<u64>>;

why are these wrapped in options? with 'try_with' you get an error
anyway when they don't exist, and i did not see a place
where you'd set them to None?

> +}
> +
> +pub struct FilelogLayer {}
> +
> +impl FilelogLayer {
> +    pub fn new() -> Self {
> +        Self {}
> +    }
> +}
> +
> +impl<S: Subscriber> Layer<S> for FilelogLayer {
> +    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
> +        let mut buf = String::new();
> +
> +        event.record(&mut EventVisitor::new(&mut buf));
> +
> +        let logger_exists = LOGGER.try_with(|logger| {
> +            let mut l = logger.borrow_mut();
> +            let log = l.as_mut();
> +            if let Some(l) = log {
> +                log_to_file(l, event.metadata().level(), buf);
> +            } else {
> +                error!("FileLogger not found");
> +            }
> +        });
> +        if let Err(e) = logger_exists {
> +            error!(
> +                "Error getting the logger from the task-local storage: {}",
> +                e
> +            )
> +        }

this would be much shorter if it weren't for the Option<>

> +    }
> +}
> +
> +fn log_to_file(logger: &mut FileLogger, level: &Level, buf: String) {
> +    match level {
> +        &Level::ERROR | &Level::WARN => {
> +            let counter_exists = WARN_COUNTER.try_with(|counter| {
> +                counter.replace_with(|c| {
> +                    if let Some(v) = c {
> +                        Some(v.to_owned() + 1)
> +                    } else {
> +                        None
> +                    }
> +                });

two things here:
* do we really want to increase the warn counter for error too?
   (i can't remember what we do now)
   normally we abort on an error anyway, so that does not make sense;
   or, if we can now log an error without aborting, we'd probably want
   to have a separate error count?

* clippy says this is a manually implemented map:

counter.replace_with(|c| c.as_mut().map(|v| v.to_owned() + 1));
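
For reference, the suggested `map` form behaves the same as the original manual branches; a minimal std-only check (using the same `RefCell<Option<u64>>` shape as the warning counter):

```rust
use std::cell::RefCell;

fn main() {
    // same shape as the task-local warning counter: RefCell<Option<u64>>
    let counter: RefCell<Option<u64>> = RefCell::new(Some(0));

    // clippy's suggested form: map over the Option instead of a manual match
    counter.replace_with(|c| c.map(|v| v + 1));
    assert_eq!(*counter.borrow(), Some(1));

    // an absent counter stays None, matching the original else-branch
    let absent: RefCell<Option<u64>> = RefCell::new(None);
    absent.replace_with(|c| c.map(|v| v + 1));
    assert_eq!(*absent.borrow(), None);
}
```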



> +                logger.log(buf.clone());
> +            });
> +            if let Err(e) = counter_exists {
> +                error!("Error getting the warning_counter for the logger: {}", e)
> +            }
> +        }
> +        &Level::INFO => logger.log(buf.clone()),

this clone is redundant (thanks clippy ;) )

> +        &Level::DEBUG => logger.log(format!("DEBUG: {}", buf)),
> +        &Level::TRACE => logger.log(format!("TRACE: {}", buf)),
> +    };
> +}
> +
> +struct EventVisitor<'a> {
> +    buf: &'a mut String,
> +}
> +
> +impl<'a> EventVisitor<'a> {
> +    fn new(buf: &'a mut String) -> Self {
> +        Self { buf }
> +    }
> +}
> +
> +impl Visit for EventVisitor<'_> {
> +    fn record_debug(&mut self, field: &Field, value: &dyn std::fmt::Debug) {
> +        if field.name() == "message" {
> +            self.buf.push_str(&format!("{:?}", value));
> +        }
> +    }
> +}
> diff --git a/proxmox-rest-server/src/file_logger.rs b/proxmox-log/src/file_logger.rs
> similarity index 98%
> rename from proxmox-rest-server/src/file_logger.rs
> rename to proxmox-log/src/file_logger.rs
> index 2bb1fac..c7e1d64 100644
> --- a/proxmox-rest-server/src/file_logger.rs
> +++ b/proxmox-log/src/file_logger.rs
> @@ -30,7 +30,7 @@ pub struct FileLogOptions {
>   /// #### Example:
>   /// ```
>   /// # use anyhow::{bail, format_err, Error};
> -/// use proxmox_rest_server::{flog, FileLogger, FileLogOptions};
> +/// use proxmox_log::{flog, FileLogger, FileLogOptions};
>   ///
>   /// # std::fs::remove_file("test.log");
>   /// let options = FileLogOptions {
> diff --git a/proxmox-log/src/lib.rs b/proxmox-log/src/lib.rs
> new file mode 100644
> index 0000000..0c101b7
> --- /dev/null
> +++ b/proxmox-log/src/lib.rs
> @@ -0,0 +1,36 @@
> +//! # Proxmox logging infrastructure
> +use crate::syslog_layer::SyslogLayer;
> +use file_layer::FilelogLayer;
> +use std::env;
> +use tracing_log::LogTracer;
> +use tracing_subscriber::filter::{filter_fn, LevelFilter};
> +use tracing_subscriber::prelude::*;
> +
> +pub mod file_layer;
> +mod file_logger;
> +pub use file_logger::{FileLogOptions, FileLogger};
> +
> +mod syslog_layer;
> +
> +pub fn init_logger(
> +    env_var_name: &str,
> +    default_log_level: LevelFilter,
> +    application_name: &str,
> +) -> Result<(), anyhow::Error> {
> +    let mut log_level = default_log_level;
> +    if let Ok(v) = env::var(env_var_name) {
> +        if let Ok(l) = v.parse::<LevelFilter>() {
> +            log_level = l;
> +        }
> +    }
> +    let registry = tracing_subscriber::registry()
> +        .with(FilelogLayer::new().with_filter(filter_fn(|metadata| {
> +            metadata.fields().field("tasklog").is_some()

you give 'tasklog = true' but only check for is_some here; wouldn't
it be better to do something like
metadata.fields().field("tasklog") == Some("true")
?
that way a 'tasklog = false' wouldn't (confusingly) log to the task log
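The difference between the two predicates is easy to see on a plain `Option<bool>` (a sketch only; tracing's field API hands out field metadata, so real code would need a visitor to get at the recorded value):

```rust
// the patch's predicate: any `tasklog = ...` field matches
fn filter_is_some(tasklog: Option<bool>) -> bool {
    tasklog.is_some()
}

// the suggested predicate: only `tasklog = true` matches
fn filter_is_true(tasklog: Option<bool>) -> bool {
    tasklog == Some(true)
}

fn main() {
    // `tasklog = false` still lands in the task log with is_some()
    assert!(filter_is_some(Some(false)));
    assert!(!filter_is_true(Some(false)));
    // events without the field are filtered out either way
    assert!(!filter_is_some(None));
    assert!(!filter_is_true(None));
}
```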

> +        })))
> +        .with(SyslogLayer::new(application_name.to_string()).with_filter(log_level));
> +
> +    tracing::subscriber::set_global_default(registry)?;
> +
> +    LogTracer::init()?;
> +    Ok(())
> +}
> diff --git a/proxmox-log/src/syslog_layer.rs b/proxmox-log/src/syslog_layer.rs
> new file mode 100644
> index 0000000..50ea145
> --- /dev/null
> +++ b/proxmox-log/src/syslog_layer.rs
> @@ -0,0 +1,78 @@
> +use std::fmt;
> +use std::sync::Arc;
> +use std::sync::Mutex;
> +use syslog::Formatter3164;
> +use syslog::Logger;
> +use syslog::LoggerBackend;
> +use tracing::field::Field;
> +use tracing::field::Visit;
> +use tracing::Event;
> +use tracing::Level;
> +use tracing::Subscriber;
> +use tracing_subscriber::layer::Context;
> +use tracing_subscriber::Layer;
> +
> +pub struct SyslogLayer {
> +    logger: Arc<Mutex<Logger<LoggerBackend, Formatter3164>>>,
> +}
> +
> +impl SyslogLayer {
> +    pub fn new(application_name: String) -> Self {
> +        let mut formatter = Formatter3164::default();
> +        formatter.facility = syslog::Facility::LOG_DAEMON;
> +        formatter.process = application_name;
> +
> +        // unwrap because if the (syslog) logging is not working, we shut down
> +        let backend = syslog::unix(formatter.clone())
> +            .map(|logger: Logger<LoggerBackend, Formatter3164>| logger.backend)
> +            .map_err(|e| {
> +                anyhow::Error::new(std::io::Error::new(
> +                    std::io::ErrorKind::Other,
> +                    e.description(),
> +                ))
> +            })
> +            .unwrap();
> +
> +        let logger = Arc::new(Mutex::new(Logger { formatter, backend }));
> +
> +        Self { logger }
> +    }
> +}
> +
> +impl<S: Subscriber> Layer<S> for SyslogLayer {
> +    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
> +        let mut buf = String::new();
> +
> +        event.record(&mut EventVisitor::new(&mut buf));
> +
> +        let mut logger = self.logger.lock().unwrap();
> +        let success = match event.metadata().level() {
> +            &Level::ERROR => logger.err(buf),
> +            &Level::WARN => logger.warning(buf),
> +            &Level::INFO => logger.info(buf),
> +            &Level::DEBUG => logger.debug(buf),
> +            &Level::TRACE => logger.debug(buf),
> +        };
> +        if let Err(err) = success {
> +            log::error!("{}", err.description());
> +        }
> +    }
> +}
> +
> +struct EventVisitor<'a> {
> +    buf: &'a mut String,
> +}
> +
> +impl<'a> EventVisitor<'a> {
> +    fn new(buf: &'a mut String) -> Self {
> +        Self { buf }
> +    }
> +}
> +
> +impl Visit for EventVisitor<'_> {
> +    fn record_debug(&mut self, field: &Field, value: &dyn fmt::Debug) {
> +        if field.name() == "message" {
> +            self.buf.push_str(&format!("{:?}", value));
> +        }
> +    }
> +}
> diff --git a/proxmox-rest-server/Cargo.toml b/proxmox-rest-server/Cargo.toml
> index 8daf1d3..a444941 100644
> --- a/proxmox-rest-server/Cargo.toml
> +++ b/proxmox-rest-server/Cargo.toml
> @@ -34,6 +34,7 @@ tokio-openssl.workspace = true
>   tokio-stream.workspace = true
>   tower-service.workspace = true
>   url.workspace = true
> +tracing.workspace = true
>   
>   proxmox-async.workspace = true
>   proxmox-compression.workspace = true
> @@ -44,6 +45,7 @@ proxmox-router.workspace = true
>   proxmox-schema = { workspace = true, features = [ "api-macro", "upid-api-impl" ] }
>   proxmox-sys = { workspace = true, features = [ "logrotate", "timer" ] }
>   proxmox-time.workspace = true
> +proxmox-log.workspace = true
>   
>   [features]
>   default = []
> diff --git a/proxmox-rest-server/src/api_config.rs b/proxmox-rest-server/src/api_config.rs
> index ad9a811..cf14c8f 100644
> --- a/proxmox-rest-server/src/api_config.rs
> +++ b/proxmox-rest-server/src/api_config.rs
> @@ -9,11 +9,12 @@ use http::{HeaderMap, Method};
>   use hyper::http::request::Parts;
>   use hyper::{Body, Response};
>   
> +use proxmox_log::{FileLogOptions, FileLogger};
>   use proxmox_router::{Router, RpcEnvironmentType, UserInformation};
>   use proxmox_sys::fs::{create_path, CreateOptions};
>   
>   use crate::rest::Handler;
> -use crate::{CommandSocket, FileLogOptions, FileLogger, RestEnvironment};
> +use crate::{CommandSocket, RestEnvironment};
>   
>   /// REST server configuration
>   pub struct ApiConfig {
> diff --git a/proxmox-rest-server/src/lib.rs b/proxmox-rest-server/src/lib.rs
> index bc5be01..9c16065 100644
> --- a/proxmox-rest-server/src/lib.rs
> +++ b/proxmox-rest-server/src/lib.rs
> @@ -41,9 +41,6 @@ pub use state::*;
>   mod command_socket;
>   pub use command_socket::*;
>   
> -mod file_logger;
> -pub use file_logger::{FileLogOptions, FileLogger};
> -
>   mod api_config;
>   pub use api_config::{ApiConfig, AuthError, AuthHandler, IndexHandler};
>   
> diff --git a/proxmox-rest-server/src/rest.rs b/proxmox-rest-server/src/rest.rs
> index 2ccd4d5..9ad01a5 100644
> --- a/proxmox-rest-server/src/rest.rs
> +++ b/proxmox-rest-server/src/rest.rs
> @@ -31,10 +31,10 @@ use proxmox_schema::{ObjectSchemaType, ParameterSchema};
>   
>   use proxmox_async::stream::AsyncReaderStream;
>   use proxmox_compression::{DeflateEncoder, Level};
> +use proxmox_log::FileLogger;
>   
>   use crate::{
> -    formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, FileLogger,
> -    RestEnvironment,
> +    formatter::*, normalize_path, ApiConfig, AuthError, CompressionMethod, RestEnvironment,
>   };
>   
>   extern "C" {
> diff --git a/proxmox-rest-server/src/worker_task.rs b/proxmox-rest-server/src/worker_task.rs
> index 54b6bc2..f7b8396 100644
> --- a/proxmox-rest-server/src/worker_task.rs
> +++ b/proxmox-rest-server/src/worker_task.rs
> @@ -1,3 +1,4 @@
> +use std::cell::RefCell;
>   use std::collections::{HashMap, VecDeque};
>   use std::fs::File;
>   use std::io::{BufRead, BufReader, Read, Write};
> @@ -12,21 +13,23 @@ use futures::*;
>   use lazy_static::lazy_static;
>   use nix::fcntl::OFlag;
>   use once_cell::sync::OnceCell;
> +use proxmox_log::file_layer::{LOGGER, WARN_COUNTER};
>   use serde::{Deserialize, Serialize};
>   use serde_json::{json, Value};
>   use tokio::signal::unix::SignalKind;
>   use tokio::sync::oneshot;
> +use tracing::{info, warn};
>   
>   use proxmox_lang::try_block;
> +use proxmox_log::{FileLogOptions, FileLogger};
>   use proxmox_schema::upid::UPID;
>   use proxmox_sys::fs::{atomic_open_or_create_file, create_path, replace_file, CreateOptions};
>   use proxmox_sys::linux::procfs;
> -use proxmox_sys::task_warn;
>   
>   use proxmox_sys::logrotate::{LogRotate, LogRotateFiles};
>   use proxmox_sys::WorkerTaskContext;
>   
> -use crate::{CommandSocket, FileLogOptions, FileLogger};
> +use crate::CommandSocket;
>   
>   struct TaskListLockGuard(File);
>   
> @@ -274,7 +277,7 @@ pub fn rotate_task_log_archive(
>   
>   /// removes all task logs that are older than the oldest task entry in the
>   /// task archive
> -pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Result<(), Error> {
> +pub fn cleanup_old_tasks(compressed: bool) -> Result<(), Error> {
>       let setup = worker_task_setup()?;
>   
>       let _lock = setup.lock_task_list_files(true)?;
> @@ -312,7 +315,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
>                   Ok(files) => files,
>                   Err(err) if err.kind() == std::io::ErrorKind::NotFound => continue,
>                   Err(err) => {
> -                    task_warn!(worker, "could not check task logs in '{:02X}': {}", i, err);
> +                    warn!(
> +                        tasklog = true,
> +                        "could not check task logs in '{:02X}': {}", i, err
> +                    );
>                       continue;
>                   }
>               };
> @@ -320,11 +326,9 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
>                   let file = match file {
>                       Ok(file) => file,
>                       Err(err) => {
> -                        task_warn!(
> -                            worker,
> -                            "could not check some task log in '{:02X}': {}",
> -                            i,
> -                            err
> +                        warn!(
> +                            tasklog = true,
> +                            "could not check some task log in '{:02X}': {}", i, err
>                           );
>                           continue;
>                       }
> @@ -334,7 +338,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
>                   let modified = match get_modified(file) {
>                       Ok(modified) => modified,
>                       Err(err) => {
> -                        task_warn!(worker, "error getting mtime for '{:?}': {}", path, err);
> +                        warn!(
> +                            tasklog = true,
> +                            "error getting mtime for '{:?}': {}", path, err
> +                        );
>                           continue;
>                       }
>                   };
> @@ -344,7 +351,10 @@ pub fn cleanup_old_tasks(worker: &dyn WorkerTaskContext, compressed: bool) -> Re
>                           Ok(()) => {}
>                           Err(err) if err.kind() == std::io::ErrorKind::NotFound => {}
>                           Err(err) => {
> -                            task_warn!(worker, "could not remove file '{:?}': {}", path, err)
> +                            warn!(
> +                                tasklog = true,
> +                                "could not remove file '{:?}': {}", path, err
> +                            )
>                           }
>                       }
>                   }
> @@ -796,9 +806,7 @@ impl std::fmt::Display for WorkerTask {
>   }
>   
>   struct WorkerTaskData {
> -    logger: FileLogger,
>       progress: f64, // 0..1
> -    warn_count: u64,
>       pub abort_listeners: Vec<oneshot::Sender<()>>,
>   }
>   
> @@ -808,7 +816,7 @@ impl WorkerTask {
>           worker_id: Option<String>,
>           auth_id: String,
>           to_stdout: bool,
> -    ) -> Result<Arc<Self>, Error> {
> +    ) -> Result<(Arc<Self>, FileLogger), Error> {
>           let setup = worker_task_setup()?;
>   
>           let upid = UPID::new(worker_type, worker_id, auth_id)?;
> @@ -842,9 +850,7 @@ impl WorkerTask {
>               upid: upid.clone(),
>               abort_requested: AtomicBool::new(false),
>               data: Mutex::new(WorkerTaskData {
> -                logger,
>                   progress: 0.0,
> -                warn_count: 0,
>                   abort_listeners: vec![],
>               }),
>           });
> @@ -858,7 +864,7 @@ impl WorkerTask {
>   
>           setup.update_active_workers(Some(&upid))?;
>   
> -        Ok(worker)
> +        Ok((worker, logger))
>       }
>   
>       /// Spawn a new tokio task/future.
> @@ -873,12 +879,21 @@ impl WorkerTask {
>           F: Send + 'static + FnOnce(Arc<WorkerTask>) -> T,
>           T: Send + 'static + Future<Output = Result<(), Error>>,
>       {
> -        let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
> +        let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
>           let upid_str = worker.upid.to_string();
>           let f = f(worker.clone());
> +
> +        let logger = RefCell::new(Some(logger));
> +        let counter = RefCell::new(Some(0));
>           tokio::spawn(async move {
> -            let result = f.await;
> -            worker.log_result(&result);
> +            LOGGER.scope(logger, async move {
> +                WARN_COUNTER.scope(counter, async move {
> +                    let result = f.await;
> +                    worker.log_result(&result);
> +                })
> +                .await;
> +            })
> +            .await;
>           });
>   
>           Ok(upid_str)
> @@ -895,22 +910,27 @@ impl WorkerTask {
>       where
>           F: Send + UnwindSafe + 'static + FnOnce(Arc<WorkerTask>) -> Result<(), Error>,
>       {
> -        let worker = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
> +        let (worker, logger) = WorkerTask::new(worker_type, worker_id, auth_id, to_stdout)?;
>           let upid_str = worker.upid.to_string();
>   
>           let _child = std::thread::Builder::new()
>               .name(upid_str.clone())
>               .spawn(move || {
> -                let worker1 = worker.clone();
> -                let result = match std::panic::catch_unwind(move || f(worker1)) {
> -                    Ok(r) => r,
> -                    Err(panic) => match panic.downcast::<&str>() {
> -                        Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
> -                        Err(_) => Err(format_err!("worker panicked: unknown type.")),
> -                    },
> -                };
> +                LOGGER.sync_scope(RefCell::new(Some(logger)), || {
> +                    WARN_COUNTER.sync_scope(RefCell::new(Some(0)), || {
> +                        let worker1 = worker.clone();
> +
> +                        let result = match std::panic::catch_unwind(move || f(worker1)) {
> +                            Ok(r) => r,
> +                            Err(panic) => match panic.downcast::<&str>() {
> +                                Ok(panic_msg) => Err(format_err!("worker panicked: {}", panic_msg)),
> +                                Err(_) => Err(format_err!("worker panicked: unknown type.")),
> +                            },
> +                        };
>   
> -                worker.log_result(&result);
> +                        worker.log_result(&result);
> +                    });
> +                });
>               });
>   
>           Ok(upid_str)
> @@ -918,7 +938,13 @@ impl WorkerTask {
>   
>       /// create state from self and a result
>       pub fn create_state(&self, result: &Result<(), Error>) -> TaskState {
> -        let warn_count = self.data.lock().unwrap().warn_count;
> +        let mut warn_count: u64 = 0;
> +
> +        let _ = WARN_COUNTER.try_with(|counter| {
> +            if let Some(v) = counter.borrow_mut().as_mut() {
> +                warn_count = v.to_owned();
> +            }
> +        });
>   
>           let endtime = proxmox_time::epoch_i64();
>   
> @@ -949,15 +975,7 @@ impl WorkerTask {
>   
>       /// Log a message.
>       pub fn log_message<S: AsRef<str>>(&self, msg: S) {
> -        let mut data = self.data.lock().unwrap();
> -        data.logger.log(msg);
> -    }
> -
> -    /// Log a message as warning.
> -    pub fn log_warning<S: AsRef<str>>(&self, msg: S) {
> -        let mut data = self.data.lock().unwrap();
> -        data.logger.log(format!("WARN: {}", msg.as_ref()));
> -        data.warn_count += 1;
> +        info!(tasklog = true, "{}", msg.as_ref());
>       }
>   
>       /// Set progress indicator
> @@ -1020,16 +1038,6 @@ impl WorkerTaskContext for WorkerTask {
>       fn fail_on_shutdown(&self) -> Result<(), Error> {
>           crate::fail_on_shutdown()
>       }
> -
> -    fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
> -        match level {
> -            log::Level::Error => self.log_warning(message.to_string()),
> -            log::Level::Warn => self.log_warning(message.to_string()),
> -            log::Level::Info => self.log_message(message.to_string()),
> -            log::Level::Debug => self.log_message(format!("DEBUG: {}", message)),
> -            log::Level::Trace => self.log_message(format!("TRACE: {}", message)),
> -        }
> -    }
>   }
>   
>   /// Wait for a locally spanned worker task
> diff --git a/proxmox-router/Cargo.toml b/proxmox-router/Cargo.toml
> index c53ccd0..2460999 100644
> --- a/proxmox-router/Cargo.toml
> +++ b/proxmox-router/Cargo.toml
> @@ -19,6 +19,8 @@ percent-encoding.workspace = true
>   serde_json.workspace = true
>   serde.workspace = true
>   unicode-width ="0.1.8"
> +syslog = "6"
> +log = "0.4.17"
>   
>   # cli:
>   tokio = { workspace = true, features = [], optional = true }
> diff --git a/proxmox-sys/src/worker_task_context.rs b/proxmox-sys/src/worker_task_context.rs
> index 2c86857..743ae53 100644
> --- a/proxmox-sys/src/worker_task_context.rs
> +++ b/proxmox-sys/src/worker_task_context.rs
> @@ -26,9 +26,6 @@ pub trait WorkerTaskContext: Send + Sync {
>           }
>           Ok(())
>       }
> -
> -    /// Create a log message for this task.
> -    fn log(&self, level: log::Level, message: &std::fmt::Arguments);
>   }
>   
>   /// Convenience implementation:
> @@ -48,48 +45,4 @@ impl<T: WorkerTaskContext + ?Sized> WorkerTaskContext for std::sync::Arc<T> {
>       fn fail_on_shutdown(&self) -> Result<(), Error> {
>           <T as WorkerTaskContext>::fail_on_shutdown(self)
>       }
> -
> -    fn log(&self, level: log::Level, message: &std::fmt::Arguments) {
> -        <T as WorkerTaskContext>::log(self, level, message)
> -    }
> -}
> -
> -/// Log an error to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_error {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Error, &format_args!($($fmt)+))
> -    }};
> -}
> -
> -/// Log a warning to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_warn {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Warn, &format_args!($($fmt)+))
> -    }};
> -}
> -
> -/// Log a message to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_log {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Info, &format_args!($($fmt)+))
> -    }};
> -}
> -
> -/// Log a debug message to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_debug {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Debug, &format_args!($($fmt)+))
> -    }};
> -}
> -
> -/// Log a trace message to a [WorkerTaskContext]
> -#[macro_export]
> -macro_rules! task_trace {
> -    ($task:expr, $($fmt:tt)+) => {{
> -        $crate::WorkerTaskContext::log(&*$task, log::Level::Trace, &format_args!($($fmt)+))
> -    }};
>   }





^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra
  2023-10-18 13:26   ` Dominik Csapak
@ 2023-10-23  8:56     ` Gabriel Goller
  2023-10-23  9:11       ` Gabriel Goller
  0 siblings, 1 reply; 14+ messages in thread
From: Gabriel Goller @ 2023-10-23  8:56 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox Backup Server development discussion

Thanks for the review (comments inline).

On 10/18/23 15:26, Dominik Csapak wrote:
> i just glanced over the code but a few comments inline
>
> On 10/11/23 16:01, Gabriel Goller wrote:
> [..snip..]
>> diff --git a/proxmox-log/src/file_layer.rs 
>> b/proxmox-log/src/file_layer.rs
>> new file mode 100644
>> index 0000000..1a167f4
>> --- /dev/null
>> +++ b/proxmox-log/src/file_layer.rs
>> @@ -0,0 +1,89 @@
>> +use std::cell::RefCell;
>> +use tracing::error;
>> +use tracing::field::Field;
>> +use tracing::field::Visit;
>> +use tracing::Event;
>> +use tracing::Level;
>> +use tracing::Subscriber;
>> +use tracing_subscriber::layer::Context;
>> +use tracing_subscriber::Layer;
>> +
>> +use crate::FileLogger;
>> +
>> +tokio::task_local! {
>> +    pub static LOGGER: RefCell<Option<FileLogger>>;
>> +    pub static WARN_COUNTER: RefCell<Option<u64>>;
>
> why are these wrapped in options? with 'try_with' you get an error
> anyway when they don't exists and i did not see a place
> where you'd set them to None?
Right, this was a relic from when I used `thread_local`; now with
`task_local` it's not needed anymore. Thanks!
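As a dependency-free illustration (with std's `thread_local!` standing in for tokio's `task_local!`, which isn't available in a plain snippet), the unwrapped type could look like the following sketch; `try_with` already reports the "not in scope" case as an `Err`, so the inner `Option` adds nothing:

```rust
use std::cell::RefCell;
use std::thread::AccessError;

thread_local! {
    // `RefCell<u64>` instead of `RefCell<Option<u64>>`: absence is
    // already signalled by `try_with` returning Err(AccessError)
    static WARN_COUNTER: RefCell<u64> = RefCell::new(0);
}

fn bump_warn_counter() -> Result<u64, AccessError> {
    WARN_COUNTER.try_with(|c| {
        *c.borrow_mut() += 1;
        *c.borrow()
    })
}
```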
>
>> +}
>> +
>> +pub struct FilelogLayer {}
>> +
>> +impl FilelogLayer {
>> +    pub fn new() -> Self {
>> +        Self {}
>> +    }
>> +}
>> +
>> +impl<S: Subscriber> Layer<S> for FilelogLayer {
>> +    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
>> +        let mut buf = String::new();
>> +
>> +        event.record(&mut EventVisitor::new(&mut buf));
>> +
>> +        let logger_exists = LOGGER.try_with(|logger| {
>> +            let mut l = logger.borrow_mut();
>> +            let log = l.as_mut();
>> +            if let Some(l) = log {
>> +                log_to_file(l, event.metadata().level(), buf);
>> +            } else {
>> +                error!("FileLogger not found");
>> +            }
>> +        });
>> +        if let Err(e) = logger_exists {
>> +            error!(
>> +                "Error getting the logger from the task-local 
>> storage: {}",
>> +                e
>> +            )
>> +        }
>
> this would be much shorter if it wasn't for the Option<>
>
>> +    }
>> +}
>> +
>> +fn log_to_file(logger: &mut FileLogger, level: &Level, buf: String) {
>> +    match level {
>> +        &Level::ERROR | &Level::WARN => {
>> +            let counter_exists = WARN_COUNTER.try_with(|counter| {
>> +                counter.replace_with(|c| {
>> +                    if let Some(v) = c {
>> +                        Some(v.to_owned() + 1)
>> +                    } else {
>> +                        None
>> +                    }
>> +                });
>
> two things here:
> * do we really want to increase the warn counter for error too?
>   (i can't remember what we do now)
>   normally we abort on an error anyway, so that does not make sense
>   or if we now can log an error without aborting, we'd probably want
>   to have a separate error count?
We always incremented the `warn_counter` on error and warning. The thing is
we don't really use it when there is an error (check `create_state()` in 
worker_task.rs).
I don't have any strong feelings on this, we could also leave it as it 
is, as one could
interpret it as 'counter of events of level warn and higher'.
>
> * clippy says this is a manually implemented map:
>
> counter.replace_with(|c| c.as_mut().map(|v| v.to_owned() + 1));
>
Not necessary anymore, as I removed the `Option<>` wrap.
>
>
>> +                logger.log(buf.clone());
>> +            });
>> +            if let Err(e) = counter_exists {
>> +                error!("Error getting the warning_counter for the 
>> logger: {}", e)
>> +            }
>> +        }
>> +        &Level::INFO => logger.log(buf.clone()),
>
> this clone is redundant (thanks clippy ;) )
Fixed this and some other clippy lints :)
>
>> +        &Level::DEBUG => logger.log(format!("DEBUG: {}", buf)),
>> +        &Level::TRACE => logger.log(format!("TRACE: {}", buf)),
>> +    };
>> +}
>> +
>> +struct EventVisitor<'a> {
>> +    buf: &'a mut String,
>> +}
>> +
>> +impl<'a> EventVisitor<'a> {
>> +    fn new(buf: &'a mut String) -> Self {
>> +        Self { buf }
>> +    }
>> +}
>> +
>> +impl Visit for EventVisitor<'_> {
>> +    fn record_debug(&mut self, field: &Field, value: &dyn 
>> std::fmt::Debug) {
>> +        if field.name() == "message" {
>> +            self.buf.push_str(&format!("{:?}", value));
>> +        }
>> +    }
>> +}
>> diff --git a/proxmox-rest-server/src/file_logger.rs 
>> b/proxmox-log/src/file_logger.rs
>> similarity index 98%
>> rename from proxmox-rest-server/src/file_logger.rs
>> rename to proxmox-log/src/file_logger.rs
>> index 2bb1fac..c7e1d64 100644
>> --- a/proxmox-rest-server/src/file_logger.rs
>> +++ b/proxmox-log/src/file_logger.rs
>> @@ -30,7 +30,7 @@ pub struct FileLogOptions {
>>   /// #### Example:
>>   /// ```
>>   /// # use anyhow::{bail, format_err, Error};
>> -/// use proxmox_rest_server::{flog, FileLogger, FileLogOptions};
>> +/// use proxmox_log::{flog, FileLogger, FileLogOptions};
>>   ///
>>   /// # std::fs::remove_file("test.log");
>>   /// let options = FileLogOptions {
>> diff --git a/proxmox-log/src/lib.rs b/proxmox-log/src/lib.rs
>> new file mode 100644
>> index 0000000..0c101b7
>> --- /dev/null
>> +++ b/proxmox-log/src/lib.rs
>> @@ -0,0 +1,36 @@
>> +//! # Proxmox logging infrastructure
>> +use crate::syslog_layer::SyslogLayer;
>> +use file_layer::FilelogLayer;
>> +use std::env;
>> +use tracing_log::LogTracer;
>> +use tracing_subscriber::filter::{filter_fn, LevelFilter};
>> +use tracing_subscriber::prelude::*;
>> +
>> +pub mod file_layer;
>> +mod file_logger;
>> +pub use file_logger::{FileLogOptions, FileLogger};
>> +
>> +mod syslog_layer;
>> +
>> +pub fn init_logger(
>> +    env_var_name: &str,
>> +    default_log_level: LevelFilter,
>> +    application_name: &str,
>> +) -> Result<(), anyhow::Error> {
>> +    let mut log_level = default_log_level;
>> +    if let Ok(v) = env::var(env_var_name) {
>> +        if let Ok(l) = v.parse::<LevelFilter>() {
>> +            log_level = l;
>> +        }
>> +    }
>> +    let registry = tracing_subscriber::registry()
>> + .with(FilelogLayer::new().with_filter(filter_fn(|metadata| {
>> +            metadata.fields().field("tasklog").is_some()
>
> you give 'tasklog = true' but only check here for is_some, wouldn't
> it be better to do
> metadata.fields().field("tasklog") == Some("true")
> ?
> that way a
> 'tasklog = false' wouldn't (confusingly) log to the task log
>
Sadly, we can't do that, because tracing only records the field names
before enabling/disabling a layer (because of performance reasons,
computing the value **could** be expensive).
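To illustrate the point, here is a toy model (hypothetical types, not the tracing API) of what a layer filter actually gets to see, namely only the field names:

```rust
// Toy model: at filter time only the field *names* of an event are
// known; the values (e.g. `tasklog = false`) are recorded later. That
// is why the filter can only do the equivalent of
// `metadata.fields().field("tasklog").is_some()`.
struct EventMeta {
    field_names: &'static [&'static str],
}

fn tasklog_filter(meta: &EventMeta) -> bool {
    meta.field_names.contains(&"tasklog")
}
```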
> [..]
>





* Re: [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing
  2023-10-18 13:12 ` [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing Dominik Csapak
@ 2023-10-23  9:09   ` Gabriel Goller
  2023-10-23  9:29     ` Dominik Csapak
  0 siblings, 1 reply; 14+ messages in thread
From: Gabriel Goller @ 2023-10-23  9:09 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox Backup Server development discussion

On 10/18/23 15:12, Dominik Csapak wrote:

> hi, a high level comments on this series:
>
> you implemented a 'tasklog' = true filter, which is nice, but not 
> filter for the syslog
> in general we'd want to land things either in the task log (preferred) 
> or in the syslog,
> not in both (e.g. for some tasks this pollutes the syslog unnecessarily)
>
Ok, this should be easy: we just check in the `syslog_layer` whether the
`task_log` attribute is None in the Metadata. That way we only log to
task_log OR syslog.
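As a sketch of that exclusive routing (hypothetical types, not the actual tracing API), the syslog layer's filter would just be the negation of the file layer's check:

```rust
// Toy model: the syslog layer skips any event carrying a `tasklog`
// field, so each event lands in exactly one of the two sinks.
struct EventMeta {
    field_names: &'static [&'static str],
}

fn syslog_enabled(meta: &EventMeta) -> bool {
    // mirror image of the file layer's `field("tasklog").is_some()` check
    !meta.field_names.contains(&"tasklog")
}
```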
> so i'd like to see also a syslog = false (or syslog = true)  and maybe 
> some
> functionality to enable/disable that for whole block (if that's even 
> possible?)
As mentioned in the other patch, currently we can't inspect metadata 
values when enabling/
disabling a layer :/ .
What do you mean exactly by 'disabling in a whole block'?

We could enable/disable the filelog layer in specific Rust modules (not
so useful for us) or, e.g., create a span on thread/task creation and then
let all the logs in that span go to the task_log. The problem with that is
that we would have to create a specific attribute to log to syslog while
we are in a worker_task context (e.g. `info!(syslog = true, "to syslog")`),
and it's not so clear anymore where we write (i.e. I'd have to look at the
context to know if a `log::info!()` call writes to task_log or syslog).

IMO having an explicit `tasklog` attribute on every event is good (the
name is obviously debatable; we could choose something shorter, or create
a macro `task_info!` to make it easier to use).
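A hypothetical `task_info!` could be sketched like this; the `log_line` helper is made up here purely so the expansion is observable without pulling in tracing, and the real macro would expand to `tracing::info!(tasklog = true, ...)` instead:

```rust
// Hypothetical sketch: `log_line` is a stand-in for the tracing backend.
fn log_line(target: &str, msg: &str) -> String {
    format!("[{}] {}", target, msg)
}

// The macro hides the `tasklog` attribute from callers.
macro_rules! task_info {
    ($($arg:tt)+) => {
        log_line("tasklog", &format!($($arg)+))
    };
}
```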





* Re: [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra
  2023-10-23  8:56     ` Gabriel Goller
@ 2023-10-23  9:11       ` Gabriel Goller
  0 siblings, 0 replies; 14+ messages in thread
From: Gabriel Goller @ 2023-10-23  9:11 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox Backup Server development discussion

On 10/23/23 10:56, Gabriel Goller wrote:

> [..]
> Sadly, we can't do that, because tracing only records the field names
> before enabling/disabling a layer (because of performance reasons,
> computing the value **could** be expensive).
>
Oops, correction:
Tracing records the values of the fields **after** enabling/disabling
layers. That's why we don't have access to the values when creating a
layer filter.





* Re: [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing
  2023-10-23  9:09   ` Gabriel Goller
@ 2023-10-23  9:29     ` Dominik Csapak
  2023-10-23 11:43       ` Gabriel Goller
  2023-10-23 14:33       ` Thomas Lamprecht
  0 siblings, 2 replies; 14+ messages in thread
From: Dominik Csapak @ 2023-10-23  9:29 UTC (permalink / raw)
  To: Gabriel Goller, Proxmox Backup Server development discussion
  Cc: Thomas Lamprecht, Fabian Grünbichler

On 10/23/23 11:09, Gabriel Goller wrote:
> On 10/18/23 15:12, Dominik Csapak wrote:
> 
>> hi, a high level comments on this series:
>>
>> you implemented a 'tasklog' = true filter, which is nice, but not filter for the syslog
>> in general we'd want to land things either in the task log (preferred) or in the syslog,
>> not in both (e.g. for some tasks this pollutes the syslog unnecessarily)
>>
> Ok, this should be easy, we just check in the `syslog_layer` if the `task_log` attribute
> is None in the Metadata. Like this we only log to task_log OR syslog.
>> so i'd like to see also a syslog = false (or syslog = true)  and maybe some
>> functionality to enable/disable that for whole block (if that's even possible?)
> As mentioned in the other patch, currently we can't inspect metadata values when enabling/
> disabling a layer :/ .
> What do you mean exactly by 'disabling in a whole block'?


a (theoretical) example:

fn some_function_in_a_worker() {
    ...
    log_to_tasklog("...");
    ...
    if (some_condition) {
       // low level operations+logs that don't belong in the task log
       only_log_to_syslog("...");
       ...
       only_log_to_syslog("...");
       ...
    }
}

having either syslog or tasklog is also not right everywhere, since sometimes
we explicitly want to have it in both the task log and syslog (probably?)


> 
> We could enable/disable the filelog layer in specific rust modules (not so useful for us)
> or e.g, create a span on thread/task creation and then let all the logs in that span go to
> the task_log? The problem with that is that we would have to create an specific attribute
> to log to syslog while we are in a worker_task context (e.g. `info!(syslog = true, "to syslog")`)
> and it's not so clear anymore where we write (i.e. I'd have to look at the context to know
> if a `log::info!()` call writes to task_log or syslog).
> 
> IMO having a explicit `tasklog` attribute on every event is good (The name is obviously debatable,
> we could choose something shorter, or create a macro `task_info!` to make it easier to use).

having the 'tasklog' every time makes it harder to use imho, but if it's
hidden behind some macro it can be ok.

in general what i'd like to have are three variants of logging:

* task log only
* syslog only
* task log and syslog

while the first and second will get the most use inside and outside a worker context respectively
so that should be most convenient to use (iow. short)

the second and third should also be possible in a worker context, but there it's not so important
that it's short

in my (naive) imagination i would have liked something like this:

info!("foo"); // logs to task log inside worker, syslog outside
info!(syslog = true, "foo"); // logs only to syslog, even in worker context
info!(syslog = true, tasklog = true, "foo"); // log in both task and syslog (if possible)

(the names are just placeholders, not suggestions. also i'm not against using
macros for either like you wrote: task_log!, syslog!, task_and_syslog! (probably not that))

does that make sense? (@Thomas, @Fabian?)






* Re: [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing
  2023-10-23  9:29     ` Dominik Csapak
@ 2023-10-23 11:43       ` Gabriel Goller
  2023-10-23 14:33       ` Thomas Lamprecht
  1 sibling, 0 replies; 14+ messages in thread
From: Gabriel Goller @ 2023-10-23 11:43 UTC (permalink / raw)
  To: Dominik Csapak, Proxmox Backup Server development discussion
  Cc: Thomas Lamprecht, Fabian Grünbichler

On 10/23/23 11:29, Dominik Csapak wrote:

> [..]
> having the 'tasklog' everytime make it harder to use imho, but if it's 
> hidden behind some macro
> can be ok.
>
> in general what i'd like to have are three variants of logging:
>
> * task log only
> * syslog only
> * task log and syslog
>
> while the first and second will get the most use inside and outside a 
> worker context respectively
> so that should be most convenient to use (iow. short)
>
> the second and third should also be possible in a worker context, but 
> there it's not so important
> that it's short
>
> in my (naive) imagination i would have liked something like this:
>
> info!("foo"); // logs to task log inside worker, syslog outside
> info!(syslog = true, "foo"); // logs only to syslog, even in worker 
> context
> info!(syslog = true, tasklog = true, "foo"); // log in both task and 
> syslog (if possible)
>
> (the names are just placeholders, not suggestions. also i'm not 
> against using
> macros for either like you wrote: task_log!, syslog!, task_and_syslog! 
> (probably not that))
>
> does that make sense? (@Thomas, @Fabian?)
>
I like this, it should be possible to implement as well. Although, if we
are using macros (to get a cleaner/shorter version), we could just keep it
simple and use the full version without inferring the write location from
the context.
So:
These **always** go to syslog:
  - info!("..")
  - error!("..")
  - log::info!("..")
  - ...

Implement macros task_log!(), task_warn!(), etc. that **always** go
to task_log, e.g.:
#[macro_export]
macro_rules! task_log {
    ($($fmt:tt)+) => {{
        tracing::info!(tasklog = true, $($fmt)+)
    }};
}

But I'm open to other opinions and the input from @Thomas or @Fabian.





* Re: [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing
  2023-10-23  9:29     ` Dominik Csapak
  2023-10-23 11:43       ` Gabriel Goller
@ 2023-10-23 14:33       ` Thomas Lamprecht
  2023-10-24 13:44         ` Gabriel Goller
  1 sibling, 1 reply; 14+ messages in thread
From: Thomas Lamprecht @ 2023-10-23 14:33 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak,
	Gabriel Goller

Am 23/10/2023 um 11:29 schrieb Dominik Csapak:
> info!("foo"); // logs to task log inside worker, syslog outside
> info!(syslog = true, "foo"); // logs only to syslog, even in worker context
> info!(syslog = true, tasklog = true, "foo"); // log in both task and syslog (if possible)

I find this a bit confusing, especially if one stumbles upon
multiple variants in the same code (path): e.g., when reading
the first and the second variant, one might think that the first
doesn't have syslog true, or not realize that the second has tasklog
false implied.

So if anything, I'd rather use negation, i.e., encode that one wants
to avoid logging something to some channel: info!(syslog = false, "..."),
but IMO that's not really ideal either – while the base idea can be OK,
I'd avoid such flags; they make it longer and not really less confusing.

We generally want to log to the task log inside workers; logging to
the syslog is rather the outlier (we already overuse the syslog with
that downloaded-chunk logging), reserved for warnings and errors, as
especially the latter can happen on system-wide problems (like disk
failure), so they are useful to see there.
This is what we're (mostly) doing in Perl: stdout is connected to the
task log, errors show up in the syslog too, but to target the syslog
one needs to do so explicitly; while not perfect, that system works
IMO relatively well.

So maybe just add the auto-log stuff from Dominik's variant for plain
info!("") et al., but also log any error to both task and syslog.

To reduce syslog noise we can also limit to only relaying warnings and
higher level to syslog in the "no worker task context case". I.e.:

- if in worker task:
  - log info and higher to worker task log
  - mirror error level to syslog too

- if not in worker task:
  - log warnings and higher to syslog

It's slightly magic, but fits our reality quite well.
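The rules above can be sketched as one small decision function (the names are illustrative, not the actual tracing types, assuming a level ordering from Trace up to Error):

```rust
// Levels ordered from lowest to highest severity.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
enum Level { Trace, Debug, Info, Warn, Error }

/// Returns (write_to_task_log, write_to_syslog).
fn route(in_worker: bool, level: Level) -> (bool, bool) {
    if in_worker {
        // info and higher go to the task log; errors are mirrored to syslog
        (level >= Level::Info, level >= Level::Error)
    } else {
        // outside a worker, only warnings and higher reach the syslog
        (false, level >= Level::Warn)
    }
}
```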

For forcing other logs, or a debug level, to a target too, one then
could provide environment variables, or an API command-socket command
for a dynamic change of max-level to log – especially the latter is
a bit orthogonal to this and should be done later.

semi-related: I didn't check the series in detail, so sorry if already
answered there, but how's the story for logging specific spans?

As logging levels are normally rather coarse, and complex tasks can
have a lot going on, it can be helpful to, e.g., enable debug logging
only for some specific "topics", especially when debugging an issue
that a user faces but one cannot (easily) reproduce oneself. But I
guess that is unrelated to your current work here, or did you already
think about that?

(the place I'd want something like that the most is pmxcfs, as there
you can currently only decide if you want very sparse information or
every detail times ten for about every file operation happening, very
frustrating)




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing
  2023-10-23 14:33       ` Thomas Lamprecht
@ 2023-10-24 13:44         ` Gabriel Goller
  2023-10-25 13:55           ` Gabriel Goller
  0 siblings, 1 reply; 14+ messages in thread
From: Gabriel Goller @ 2023-10-24 13:44 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox Backup Server development discussion,
	Dominik Csapak

On 10/23/23 16:33, Thomas Lamprecht wrote:

> [..]
> - if in worker task:
>    - log info and higher to worker task log
>    - mirror error level to syslog too
>
> - if not in worker task:
>    - log warnings and higher to syslog
>
> It's slightly magic, but fits our reality quite well.
>
> For forcing other logs, or a debug level, to a target too, one then
> could provide environment variables, or an API command-socket command
> for a dynamic change of max-level to log – especially the latter is
> a bit orthogonal to this and should be done later.
Yes, sounds good! I should be able to implement this.
> semi-related: I didn't check the series in detail, so sorry if already
> answered there, but how's the story for logging specific spans?
>
> As logging levels are normally rather coarse, and complex tasks can
> have a lot going on, it can be helpful to, e.g., enable debug logging
> only for some specific "topics", especially when debugging an issue
> that a user faces but one cannot (easily) reproduce oneself. But I
> guess that is unrelated to your current work here, or did you already
> think about that?
>
> (the place I'd want something like that the most is pmxcfs, as there
> you can currently only decide if you want very sparse information or
> every detail times ten for about every file operation happening, very
> frustrating)
Hmm, this is a tricky one... With the current RFC this is only possible by
reloading a specific layer (syslog or task_log) with another log level (and
thus we'd need to pass a reload-handle around, which is not optimal).

But if this is a frequent need, we could add another layer (conveniently
called debug_layer or something) that is only active inside a span with a
specific name and relays all levels to syslog. This layer would only be used
while debugging and shouldn't impact runtime performance much (although I
will verify this).

So you would basically wrap your critical code in a span:

fn super_long_function_with_critical_stuff() {
     let span_guard = tracing::span!(tracing::Level::INFO, "debug").entered();
     // other code that emits logs
     // `span_guard` is dropped at the end of the scope, which ends the span;
     // we could also end it early with `span_guard.exit()`
}

We then have a special layer that checks whether an event occurred
inside a span named 'debug' and, if so, writes it to syslog without
checking the maximum log level.
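Conceptually, that layer only needs to inspect the stack of currently-entered span names. A minimal plain-Rust sketch of the check follows; `bypass_level_filter` is a hypothetical name for this example, and a real implementation would instead walk `ctx.event_scope(event)` inside a `tracing_subscriber::Layer` and compare each `span.name()`:

```rust
/// Return true if the event should be relayed to syslog regardless of
/// the configured maximum level, i.e., if any enclosing span in the
/// current scope is named "debug".
fn bypass_level_filter(span_stack: &[&str]) -> bool {
    // The stack is ordered from root to leaf; any match is enough,
    // so code nested below the "debug" span is covered too.
    span_stack.iter().any(|name| *name == "debug")
}
```

This keeps the hot path cheap: events outside any "debug" span fail the check after a short scan and are filtered as usual.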





* Re: [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing
  2023-10-24 13:44         ` Gabriel Goller
@ 2023-10-25 13:55           ` Gabriel Goller
  0 siblings, 0 replies; 14+ messages in thread
From: Gabriel Goller @ 2023-10-25 13:55 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox Backup Server development discussion,
	Dominik Csapak

Submitted a new patch!





end of thread, other threads:[~2023-10-25 13:55 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-11 14:01 [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing Gabriel Goller
2023-10-11 14:01 ` [pbs-devel] [RFC proxmox-backup 1/2] log: removed task_log! macro and moved to tracing Gabriel Goller
2023-10-11 14:01 ` [pbs-devel] [RFC proxmox 2/2] proxmox-log: added tracing infra Gabriel Goller
2023-10-13 12:36   ` Gabriel Goller
2023-10-18 13:26   ` Dominik Csapak
2023-10-23  8:56     ` Gabriel Goller
2023-10-23  9:11       ` Gabriel Goller
2023-10-18 13:12 ` [pbs-devel] [RFC proxmox-backup 0/2] Tasklog rewrite with tracing Dominik Csapak
2023-10-23  9:09   ` Gabriel Goller
2023-10-23  9:29     ` Dominik Csapak
2023-10-23 11:43       ` Gabriel Goller
2023-10-23 14:33       ` Thomas Lamprecht
2023-10-24 13:44         ` Gabriel Goller
2023-10-25 13:55           ` Gabriel Goller
