public inbox for pbs-devel@lists.proxmox.com
* [pbs-devel] [PATCH proxmox-backup 1/2] tape/file_formats/blocked_reader: restore EOD behaviour
@ 2021-04-09 14:18 Dominik Csapak
  2021-04-09 14:18 ` [pbs-devel] [PATCH proxmox-backup 2/2] api2/tape/backup: commit pool even after an error Dominik Csapak
  0 siblings, 1 reply; 2+ messages in thread
From: Dominik Csapak @ 2021-04-09 14:18 UTC (permalink / raw)
  To: pbs-devel

before commit
0db571249 ("tape: introduce BlockRead")
we did not return an error on EOD, but that commit changed this.

The rest of the code assumes it can read there without encountering
an error, so the change resulted in 'no space left on device' errors
on all tasks/API calls that read to the end of the tape, e.g. a
restore, reading the label of an empty tape, etc.

This patch restores the previous behaviour.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
not sure if this is the intended behaviour, but it fixes many of the
'no space left on device' errors we currently encounter

if the intention was to catch the ENOSPC error explicitly on the
caller side, we would have to invent our own error type here, as the
current code produces an io::Error with ErrorKind::Other (which makes
matching a bit awkward)
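
For reference, a rough self-contained sketch (not the actual PBS code;
read_block_at_eod, caller and BlockReadError are made-up names) of why
matching the ENOSPC io::Error on the caller side is awkward, and what a
dedicated error type could roughly look like:

use std::io;

// stand-in for a read that hits the end of the data on tape and keeps
// the ENOSPC behaviour this patch removes
fn read_block_at_eod() -> io::Result<bool> {
    Err(io::Error::from_raw_os_error(nix::errno::Errno::ENOSPC as i32))
}

fn caller() -> io::Result<()> {
    match read_block_at_eod() {
        Ok(_got_block) => { /* keep reading */ }
        // err.kind() is only ErrorKind::Other here, so every caller has
        // to fall back to the raw OS error number to detect end-of-data
        Err(err) if err.raw_os_error() == Some(nix::errno::Errno::ENOSPC as i32) => {
            /* treat as end of data */
        }
        Err(err) => return Err(err),
    }
    Ok(())
}

// the alternative mentioned above, roughly: a dedicated error type that
// callers can match on directly (BlockReadError is a made-up name)
#[derive(Debug)]
enum BlockReadError {
    EndOfStream,
    Io(io::Error),
}

fn main() {
    assert!(caller().is_ok());
}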

 src/tape/file_formats/blocked_reader.rs | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/src/tape/file_formats/blocked_reader.rs b/src/tape/file_formats/blocked_reader.rs
index 3df84a1b..e7dfa90a 100644
--- a/src/tape/file_formats/blocked_reader.rs
+++ b/src/tape/file_formats/blocked_reader.rs
@@ -111,12 +111,9 @@ impl <R: BlockRead> BlockedReader<R> {
                 }
                 Ok(true)
             }
-            Ok(BlockReadStatus::EndOfFile) => {
+            Ok(BlockReadStatus::EndOfFile) | Ok(BlockReadStatus::EndOfStream) => {
                 Ok(false)
             }
-            Ok(BlockReadStatus::EndOfStream) => {
-                return Err(std::io::Error::from_raw_os_error(nix::errno::Errno::ENOSPC as i32));
-            }
             Err(err) => {
                 Err(err)
             }
-- 
2.20.1






* [pbs-devel] [PATCH proxmox-backup 2/2] api2/tape/backup: commit pool even after an error
  2021-04-09 14:18 [pbs-devel] [PATCH proxmox-backup 1/2] tape/file_formats/blocked_reader: restore EOD behaviour Dominik Csapak
@ 2021-04-09 14:18 ` Dominik Csapak
  0 siblings, 0 replies; 2+ messages in thread
From: Dominik Csapak @ 2021-04-09 14:18 UTC (permalink / raw)
  To: pbs-devel

when we encounter an error (including a manually aborted task), try
to commit the pool, so that catalogs get written out.

This prevents ending up with an inconsistent inventory and catalog
after an aborted backup, since the updated media set for the media
gets written to the inventory at the beginning.

Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
---
also not sure here if I missed something, but this solves the
problem we had when a backup was aborted before we could write the
catalog to disk
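
For reference, a rough self-contained sketch of the commit-even-on-error
pattern the diff below introduces; PoolWriter and backup_worker_sketch
are dummy stand-ins (not the real types), and a plain closure is used
here instead of proxmox's try_block! macro:

use anyhow::{bail, Error};

// dummy stand-in for the real pool writer
struct PoolWriter;

impl PoolWriter {
    fn commit(&mut self) -> Result<(), Error> {
        // in the real code this flushes catalog/inventory state to disk
        Ok(())
    }
}

fn backup_worker_sketch(abort: bool) -> Result<(), Error> {
    let mut pool_writer = PoolWriter;

    // run the backup loop, but keep its result instead of returning
    // early with `?`
    let result: Result<(), Error> = (|| {
        if abort {
            bail!("backup aborted");
        }
        // ... back up groups/snapshots here ...
        Ok(())
    })();

    // commit first, so catalogs get written out even if the loop failed ...
    pool_writer.commit()?;

    // ... then propagate the loop's error, if any
    result?;

    Ok(())
}

fn main() {
    assert!(backup_worker_sketch(false).is_ok());
    assert!(backup_worker_sketch(true).is_err());
}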

 src/api2/tape/backup.rs | 103 +++++++++++++++++++++-------------------
 1 file changed, 54 insertions(+), 49 deletions(-)

diff --git a/src/api2/tape/backup.rs b/src/api2/tape/backup.rs
index ec35038a..300decc8 100644
--- a/src/api2/tape/backup.rs
+++ b/src/api2/tape/backup.rs
@@ -437,66 +437,71 @@ fn backup_worker(
 
     let mut need_catalog = false; // avoid writing catalog for empty jobs
 
-    for (group_number, group) in group_list.into_iter().enumerate() {
-        progress.done_groups = group_number as u64;
-        progress.done_snapshots = 0;
-        progress.group_snapshots = 0;
-
-        let mut snapshot_list = group.list_backups(&datastore.base_path())?;
-
-        BackupInfo::sort_list(&mut snapshot_list, true); // oldest first
-
-        if latest_only {
-            progress.group_snapshots = 1;
-            if let Some(info) = snapshot_list.pop() {
-                if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
-                    task_log!(worker, "skip snapshot {}", info.backup_dir);
-                    continue;
-                }
+    let result: Result<(), Error> = try_block!({
+        for (group_number, group) in group_list.into_iter().enumerate() {
+            progress.done_groups = group_number as u64;
+            progress.done_snapshots = 0;
+            progress.group_snapshots = 0;
+
+            let mut snapshot_list = group.list_backups(&datastore.base_path())?;
+
+            BackupInfo::sort_list(&mut snapshot_list, true); // oldest first
+
+            if latest_only {
+                progress.group_snapshots = 1;
+                if let Some(info) = snapshot_list.pop() {
+                    if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
+                        task_log!(worker, "skip snapshot {}", info.backup_dir);
+                        continue;
+                    }
 
-                need_catalog = true;
+                    need_catalog = true;
 
-                let snapshot_name = info.backup_dir.to_string();
-                if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
-                    errors = true;
-                } else {
-                    summary.snapshot_list.push(snapshot_name);
-                }
-                progress.done_snapshots = 1;
-                task_log!(
-                    worker,
-                    "percentage done: {}",
-                    progress
-                );
-            }
-        } else {
-            progress.group_snapshots = snapshot_list.len() as u64;
-            for (snapshot_number, info) in snapshot_list.into_iter().enumerate() {
-                if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
-                    task_log!(worker, "skip snapshot {}", info.backup_dir);
-                    continue;
+                    let snapshot_name = info.backup_dir.to_string();
+                    if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
+                        errors = true;
+                    } else {
+                        summary.snapshot_list.push(snapshot_name);
+                    }
+                    progress.done_snapshots = 1;
+                    task_log!(
+                        worker,
+                        "percentage done: {}",
+                        progress
+                    );
                 }
+            } else {
+                progress.group_snapshots = snapshot_list.len() as u64;
+                for (snapshot_number, info) in snapshot_list.into_iter().enumerate() {
+                    if pool_writer.contains_snapshot(datastore_name, &info.backup_dir.to_string()) {
+                        task_log!(worker, "skip snapshot {}", info.backup_dir);
+                        continue;
+                    }
 
-                need_catalog = true;
+                    need_catalog = true;
 
-                let snapshot_name = info.backup_dir.to_string();
-                if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
-                    errors = true;
-                } else {
-                    summary.snapshot_list.push(snapshot_name);
+                    let snapshot_name = info.backup_dir.to_string();
+                    if !backup_snapshot(worker, &mut pool_writer, datastore.clone(), info.backup_dir)? {
+                        errors = true;
+                    } else {
+                        summary.snapshot_list.push(snapshot_name);
+                    }
+                    progress.done_snapshots = snapshot_number as u64 + 1;
+                    task_log!(
+                        worker,
+                        "percentage done: {}",
+                        progress
+                    );
                 }
-                progress.done_snapshots = snapshot_number as u64 + 1;
-                task_log!(
-                    worker,
-                    "percentage done: {}",
-                    progress
-                );
             }
         }
-    }
+        Ok(())
+    });
 
     pool_writer.commit()?;
 
+    let _ = result?;
+
     if need_catalog {
         task_log!(worker, "append media catalog");
 
-- 
2.20.1






