public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup 1/2] tools/logrotate: fix compression logic
@ 2020-10-28  9:58 Dominik Csapak
  2020-10-28  9:58 ` [pbs-devel] [RFC PATCH proxmox-backup 2/2] server/worker_task: simplify task log writing Dominik Csapak
  2020-10-28 17:51 ` [pbs-devel] applied: [PATCH proxmox-backup 1/2] tools/logrotate: fix compression logic Thomas Lamprecht
  0 siblings, 2 replies; 4+ messages in thread
From: Dominik Csapak @ 2020-10-28  9:58 UTC (permalink / raw)
  To: pbs-devel

we never actually compressed any files, since we only looked at
the extension:
* if it was 'zst' (which was always true for newly rotated files), we
  would not compress it
* even if it was not 'zst', we compressed it in place, never adding '.zst'
  (possibly compressing it multiple times with zstd)

now we name newly rotated files simply '.X' and give the compress fn a
'target' path that the compressed result is renamed to (which means we
now have to unlink the source path afterwards)

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
 src/tools/logrotate.rs | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/src/tools/logrotate.rs b/src/tools/logrotate.rs
index 0a8203a9..13c73a76 100644
--- a/src/tools/logrotate.rs
+++ b/src/tools/logrotate.rs
@@ -46,9 +46,9 @@ impl LogRotate {
         }
     }
 
-    fn compress(file: &PathBuf, options: &CreateOptions) -> Result<(), Error> {
-        let mut source = File::open(file)?;
-        let (fd, tmp_path) = make_tmp_file(file, options.clone())?;
+    fn compress(source_path: &PathBuf, target_path: &PathBuf, options: &CreateOptions) -> Result<(), Error> {
+        let mut source = File::open(source_path)?;
+        let (fd, tmp_path) = make_tmp_file(target_path, options.clone())?;
         let target = unsafe { File::from_raw_fd(fd) };
         let mut encoder = match zstd::stream::write::Encoder::new(target, 0) {
             Ok(encoder) => encoder,
@@ -60,18 +60,23 @@ impl LogRotate {
 
         if let Err(err) = std::io::copy(&mut source, &mut encoder) {
             let _ = unistd::unlink(&tmp_path);
-            bail!("zstd encoding failed for file {:?} - {}", file, err);
+            bail!("zstd encoding failed for file {:?} - {}", target_path, err);
         }
 
         if let Err(err) = encoder.finish() {
             let _ = unistd::unlink(&tmp_path);
-            bail!("zstd finish failed for file {:?} - {}", file, err);
+            bail!("zstd finish failed for file {:?} - {}", target_path, err);
         }
 
-        if let Err(err) = rename(&tmp_path, file) {
+        if let Err(err) = rename(&tmp_path, target_path) {
             let _ = unistd::unlink(&tmp_path);
-            bail!("rename failed for file {:?} - {}", file, err);
+            bail!("rename failed for file {:?} - {}", target_path, err);
         }
+
+        if let Err(err) = unistd::unlink(source_path) {
+            bail!("unlink failed for file {:?} - {}", source_path, err);
+        }
+
         Ok(())
     }
 
@@ -80,9 +85,8 @@ impl LogRotate {
     ///
     /// e.g. rotates
     /// foo.2.zst => foo.3.zst
-    /// foo.1.zst => foo.2.zst
-    /// foo       => foo.1.zst
-    ///           => foo
+    /// foo.1     => foo.2.zst
+    /// foo       => foo.1
     pub fn do_rotate(&mut self, options: CreateOptions, max_files: Option<usize>) -> Result<(), Error> {
         let mut filenames: Vec<PathBuf> = self.file_names().collect();
         if filenames.is_empty() {
@@ -90,12 +94,7 @@ impl LogRotate {
         }
 
         let mut next_filename = self.base_path.clone().canonicalize()?.into_os_string();
-
-        if self.compress {
-            next_filename.push(format!(".{}.zst", filenames.len()));
-        } else {
-            next_filename.push(format!(".{}", filenames.len()));
-        }
+        next_filename.push(format!(".{}", filenames.len()));
 
         filenames.push(PathBuf::from(next_filename));
         let count = filenames.len();
@@ -105,9 +104,11 @@ impl LogRotate {
         }
 
         if self.compress {
-            for i in 2..count-1 {
+            for i in 2..count {
                 if filenames[i].extension().unwrap_or(std::ffi::OsStr::new("")) != "zst" {
-                    Self::compress(&filenames[i], &options)?;
+                    let mut target = filenames[i].clone().into_os_string();
+                    target.push(".zst");
+                    Self::compress(&filenames[i], &target.into(), &options)?;
                 }
             }
         }
-- 
2.20.1

* [pbs-devel] [RFC PATCH proxmox-backup 2/2] server/worker_task: simplify task log writing
  2020-10-28  9:58 [pbs-devel] [PATCH proxmox-backup 1/2] tools/logrotate: fix compression logic Dominik Csapak
@ 2020-10-28  9:58 ` Dominik Csapak
  2020-10-28 17:57   ` Thomas Lamprecht
  2020-10-28 17:51 ` [pbs-devel] applied: [PATCH proxmox-backup 1/2] tools/logrotate: fix compression logic Thomas Lamprecht
  1 sibling, 1 reply; 4+ messages in thread
From: Dominik Csapak @ 2020-10-28  9:58 UTC (permalink / raw)
  To: pbs-devel

instead of pre-rotating 1000 tasks
(which resulted in two writes each time an active worker finished),
simply append finished tasks to the archive (which will be rotated)

the page cache should be good enough that we can read the task logs fast

since existing installations might have an 'index' file, we
still have to read tasks from there
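[editor's note: a minimal sketch of the append-only scheme; `append_finished_tasks` is an illustrative name, and the pre-rendered lines stand in for what `render_task_line` produces in worker_task.rs]

```rust
use std::fs::OpenOptions;
use std::io::{self, Write};
use std::path::Path;

/// Sketch: append already-rendered task lines to the archive file,
/// creating it if needed. No index file is rewritten when a task
/// finishes; log rotation later takes care of the archive's size.
fn append_finished_tasks(archive: &Path, lines: &[String]) -> io::Result<()> {
    let mut writer = OpenOptions::new()
        .append(true)
        .create(true)
        .open(archive)?;
    for line in lines {
        writer.write_all(line.as_bytes())?;
    }
    Ok(())
}
```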

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
maybe there is a better way to get rid of the index file handling?
we cannot really simply append the index file to the archive in
a postinst since the old daemon may still write into it..

do we actually care? the users may lose some task information, but
no actual harm should come from lost task logs, since we use
the jobstate for the state handling and scheduling

we could also simply remove the index code from the iterator
and leave it in 'update_active_workers', so that after the next
finished active task, we move the entries from the index file to
the archive; that way we would only lose them temporarily...?

not sure so sending as RFC

 src/server/worker_task.rs | 39 ++++++++++++---------------------------
 1 file changed, 12 insertions(+), 27 deletions(-)

diff --git a/src/server/worker_task.rs b/src/server/worker_task.rs
index 8ef0fde7..2e873904 100644
--- a/src/server/worker_task.rs
+++ b/src/server/worker_task.rs
@@ -35,8 +35,6 @@ pub const PROXMOX_BACKUP_ACTIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_
 pub const PROXMOX_BACKUP_INDEX_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/index");
 pub const PROXMOX_BACKUP_ARCHIVE_TASK_FN: &str = concat!(PROXMOX_BACKUP_TASK_DIR_M!(), "/archive");
 
-const MAX_INDEX_TASKS: usize = 1000;
-
 lazy_static! {
     static ref WORKER_TASK_LIST: Mutex<HashMap<usize, Arc<WorkerTask>>> = Mutex::new(HashMap::new());
 
@@ -363,7 +361,10 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
 
     let lock = lock_task_list_files(true)?;
 
+    // TODO remove with 2.0?
     let mut finish_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_INDEX_TASK_FN)?;
+    let had_index_file = !finish_list.is_empty();
+
     let mut active_list: Vec<TaskListInfo> = read_task_file_from_path(PROXMOX_BACKUP_ACTIVE_TASK_FN)?
         .into_iter()
         .filter_map(|info| {
@@ -412,33 +413,10 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
         }
     });
 
-
-    let start = if finish_list.len() > MAX_INDEX_TASKS {
-        finish_list.len() - MAX_INDEX_TASKS
-    } else {
-        0
-    };
-
-    let end = (start+MAX_INDEX_TASKS).min(finish_list.len());
-
-    let index_raw = if end > start {
-        render_task_list(&finish_list[start..end])
-    } else {
-        "".to_string()
-    };
-
-    replace_file(
-        PROXMOX_BACKUP_INDEX_TASK_FN,
-        index_raw.as_bytes(),
-        CreateOptions::new()
-            .owner(backup_user.uid)
-            .group(backup_user.gid),
-    )?;
-
-    if !finish_list.is_empty() && start > 0 {
+    if !finish_list.is_empty() {
         match std::fs::OpenOptions::new().append(true).create(true).open(PROXMOX_BACKUP_ARCHIVE_TASK_FN) {
             Ok(mut writer) => {
-                for info in &finish_list[0..start] {
+                for info in &finish_list {
                     writer.write_all(render_task_line(&info).as_bytes())?;
                 }
             },
@@ -448,6 +426,12 @@ fn update_active_workers(new_upid: Option<&UPID>) -> Result<(), Error> {
         nix::unistd::chown(PROXMOX_BACKUP_ARCHIVE_TASK_FN, Some(backup_user.uid), Some(backup_user.gid))?;
     }
 
+    // TODO Remove with 2.0?
+    // for compatibility, if we had an INDEX file, we do not need it anymore
+    if had_index_file {
+        let _ = nix::unistd::unlink(PROXMOX_BACKUP_INDEX_TASK_FN);
+    }
+
     drop(lock);
 
     Ok(())
@@ -518,6 +502,7 @@ enum TaskFile {
     End,
 }
 
+// TODO remove Index with 2.0?
 pub struct TaskListInfoIterator {
     list: VecDeque<TaskListInfo>,
     file: TaskFile,
-- 
2.20.1

* [pbs-devel] applied: [PATCH proxmox-backup 1/2] tools/logrotate: fix compression logic
  2020-10-28  9:58 [pbs-devel] [PATCH proxmox-backup 1/2] tools/logrotate: fix compression logic Dominik Csapak
  2020-10-28  9:58 ` [pbs-devel] [RFC PATCH proxmox-backup 2/2] server/worker_task: simplify task log writing Dominik Csapak
@ 2020-10-28 17:51 ` Thomas Lamprecht
  1 sibling, 0 replies; 4+ messages in thread
From: Thomas Lamprecht @ 2020-10-28 17:51 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

On 28.10.20 10:58, Dominik Csapak wrote:
> we never actually compressed any files, since we only looked at
> the extension:
> * if it was 'zst' (which was always true for newly rotated files), we
>   would not compress it
> * even if it was not 'zst', we compressed it in place, never adding '.zst'
>   (possibly compressing it multiple times with zstd)
> 
> now we name newly rotated files simply '.X' and give the compress fn a
> 'target' path that the compressed result is renamed to (which means we
> now have to unlink the source path afterwards)
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
>  src/tools/logrotate.rs | 37 +++++++++++++++++++------------------
>  1 file changed, 19 insertions(+), 18 deletions(-)
> 
>

applied, thanks!

* Re: [pbs-devel] [RFC PATCH proxmox-backup 2/2] server/worker_task: simplify task log writing
  2020-10-28  9:58 ` [pbs-devel] [RFC PATCH proxmox-backup 2/2] server/worker_task: simplify task log writing Dominik Csapak
@ 2020-10-28 17:57   ` Thomas Lamprecht
  0 siblings, 0 replies; 4+ messages in thread
From: Thomas Lamprecht @ 2020-10-28 17:57 UTC (permalink / raw)
  To: Proxmox Backup Server development discussion, Dominik Csapak

On 28.10.20 10:58, Dominik Csapak wrote:
> instead of pre-rotating 1000 tasks
> (which resulted in two writes each time an active worker finished),
> simply append finished tasks to the archive (which will be rotated)
> 
> the page cache should be good enough that we can read the task logs fast
> 
> since existing installations might have an 'index' file, we
> still have to read tasks from there
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> maybe there is a better way to get rid of the index file handling?
> we cannot really simply append the index file to the archive in
> a postinst since the old daemon may still write into it..
> 
> do we actually care? the users may lose some task information, but
> no actual harm should come from lost task logs, since we use
> the jobstate for the state handling and scheduling
> 
> we could also simply remove the index code from the iterator
> and leave it in 'update_active_workers', so that after the next
> finished active task, we move the entries from the index file to
> the archive; that way we would only lose them temporarily...?
> 

We may not care too much; while not ideal, we're still in beta and this
is not a hard error.

The idea from the above paragraph sounds somewhat OK.

In any case, I'd do a not-completely-correct deprecation here, i.e., remove
the backward-compat stuff after 1.0, so those who update often, and relatively
soon after the first stable release, do not run into it, and the others just
get some tasks missing from the time when it was still beta.

Keeping some simple compat until after 1.0.1 or so would be OK for me, but
doing it until 2.0 is IMO too cumbersome compared to the drawbacks.
