* [pbs-devel] [PATCH proxmox-backup 01/15] report: type-alias function call tuple
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
to make clippy (clippy::type_complexity) happy.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
could also be just #[allow(..)]ed
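for reference, a self-contained toy of both variants (invented names, not
from this repo). clippy's type_complexity lint is what fires on signatures
like this; the alias silences it and documents the tuple at the same time:

    // clippy (roughly): "very complex type used. Consider factoring parts
    // into `type` definitions" (clippy::type_complexity)

    // variant 1: name the tuple
    type ReportEntry = (&'static str, fn() -> String);

    fn reports() -> Vec<ReportEntry> {
        vec![("uptime", || "up 3 days".to_string())]
    }

    // variant 2 (the #[allow(..)] route): keep the tuple, silence the lint
    #[allow(clippy::type_complexity)]
    fn reports_allowed() -> Vec<(&'static str, fn() -> String)> {
        reports() // same type, the alias is purely cosmetic
    }

    fn main() {
        for (description, call) in reports_allowed() {
            println!("{}: {}", description, call());
        }
    }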
src/server/report.rs | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/server/report.rs b/src/server/report.rs
index 1b77f442..e89abbc5 100644
--- a/src/server/report.rs
+++ b/src/server/report.rs
@@ -29,8 +29,10 @@ fn commands() -> Vec<(&'static str, Vec<&'static str>)> {
]
}
- // (<description>, <function to call>)
-fn function_calls() -> Vec<(&'static str, fn() -> String)> {
+// (description, function())
+type FunctionMapping = (&'static str, fn() -> String);
+
+fn function_calls() -> Vec<FunctionMapping> {
vec![
("Datastores", || {
let config = match datastore::config() {
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 02/15] broadcast_future: refactor broadcast/future binding
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
into its own private struct.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
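the change boiled down to a compilable toy (Binding/Broadcast are invented
stand-ins for BroadCastFutureBinding/BroadcastFuture, minus the listener
machinery) - a tuple behind Arc<Mutex<..>> forces opaque .0/.1 accesses
everywhere, while a small private struct names the fields and keeps the long
boxed-future type behind an alias, like the SourceFuture alias below:

    use std::future::Future;
    use std::pin::Pin;
    use std::sync::{Arc, Mutex};

    type SourceFuture<T> = Pin<Box<dyn Future<Output = T> + Send>>;

    // before: inner: Arc<Mutex<(Vec<T>, Option<SourceFuture<T>>)>>
    //         -> every access is an opaque data.0 / data.1
    struct Binding<T> {
        results: Vec<T>,
        future: Option<SourceFuture<T>>,
    }

    struct Broadcast<T> {
        inner: Arc<Mutex<Binding<T>>>,
    }

    impl<T: Send + 'static> Broadcast<T> {
        fn new(source: SourceFuture<T>) -> Self {
            let inner = Binding { results: Vec::new(), future: Some(source) };
            Self { inner: Arc::new(Mutex::new(inner)) }
        }

        // data.future.take() now says what it does; the tuple version was data.1.take()
        fn take_source(&self) -> Option<SourceFuture<T>> {
            self.inner.lock().unwrap().future.take()
        }

        fn push_result(&self, value: T) {
            self.inner.lock().unwrap().results.push(value); // was data.0
        }
    }

    fn main() {
        let b = Broadcast::new(Box::pin(async { 42u32 }));
        assert!(b.take_source().is_some()); // first caller drives the source
        assert!(b.take_source().is_none()); // later callers just get results
        b.push_result(42);
        assert_eq!(b.inner.lock().unwrap().results.len(), 1);
    }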
src/tools/broadcast_future.rs | 42 +++++++++++++++--------------------
1 file changed, 18 insertions(+), 24 deletions(-)
diff --git a/src/tools/broadcast_future.rs b/src/tools/broadcast_future.rs
index 94aedf18..88b7aaab 100644
--- a/src/tools/broadcast_future.rs
+++ b/src/tools/broadcast_future.rs
@@ -62,14 +62,16 @@ impl <T: Clone> BroadcastData<T> {
}
}
+type SourceFuture<T> = Pin<Box<dyn Future<Output = Result<T, Error>> + Send>>;
+
+struct BroadCastFutureBinding<T> {
+ broadcast: BroadcastData<T>,
+ future: Option<SourceFuture<T>>,
+}
+
/// Broadcast future results to registered listeners
pub struct BroadcastFuture<T> {
- inner: Arc<
- Mutex<(
- BroadcastData<T>,
- Option<Pin<Box<dyn Future<Output = Result<T, Error>> + Send>>>,
- )>,
- >,
+ inner: Arc<Mutex<BroadCastFutureBinding<T>>>,
}
impl<T: Clone + Send + 'static> BroadcastFuture<T> {
@@ -77,7 +79,11 @@ impl<T: Clone + Send + 'static> BroadcastFuture<T> {
///
/// The result of the future is sent to all registered listeners.
pub fn new(source: Box<dyn Future<Output = Result<T, Error>> + Send>) -> Self {
- Self { inner: Arc::new(Mutex::new((BroadcastData::new(), Some(Pin::from(source))))) }
+ let inner = BroadCastFutureBinding {
+ broadcast: BroadcastData::new(),
+ future: Some(Pin::from(source)),
+ };
+ Self { inner: Arc::new(Mutex::new(inner)) }
}
/// Creates a new instance with a oneshot channel as trigger
@@ -92,29 +98,17 @@ impl<T: Clone + Send + 'static> BroadcastFuture<T> {
}
fn notify_listeners(
- inner: Arc<
- Mutex<(
- BroadcastData<T>,
- Option<Pin<Box<dyn Future<Output = Result<T, Error>> + Send>>>,
- )>,
- >,
+ inner: Arc<Mutex<BroadCastFutureBinding<T>>>,
result: Result<T, String>,
) {
let mut data = inner.lock().unwrap();
- data.0.notify_listeners(result);
+ data.broadcast.notify_listeners(result);
}
- fn spawn(
- inner: Arc<
- Mutex<(
- BroadcastData<T>,
- Option<Pin<Box<dyn Future<Output = Result<T, Error>> + Send>>>,
- )>,
- >,
- ) -> impl Future<Output = Result<T, Error>> {
+ fn spawn(inner: Arc<Mutex<BroadCastFutureBinding<T>>>) -> impl Future<Output = Result<T, Error>> {
let mut data = inner.lock().unwrap();
- if let Some(source) = data.1.take() {
+ if let Some(source) = data.future.take() {
let inner1 = inner.clone();
@@ -127,7 +121,7 @@ impl<T: Clone + Send + 'static> BroadcastFuture<T> {
tokio::spawn(task);
}
- data.0.listen()
+ data.broadcast.listen()
}
/// Register a listener
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 03/15] client: refactor catalog upload spawning
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
by pulling the returned tuple out into a separate struct. this also fixes
the misleading catalog_result_tx binding in create_backup, which actually
held the receiving end.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
especially unhappy with the name here..
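for illustration, the same move with invented std-only names (no tokio here,
the real code hands back a oneshot receiver): a named struct instead of an
anonymous tuple lets call sites pick fields by name, which is exactly what
makes _tx/_rx mix-ups like the one fixed below harder:

    use std::sync::mpsc;
    use std::thread;

    // before: fn spawn_worker() -> (Sender<String>, Receiver<usize>)
    struct WorkerHandle {
        input: mpsc::Sender<String>,
        result: mpsc::Receiver<usize>,
    }

    fn spawn_worker() -> WorkerHandle {
        let (input, input_rx) = mpsc::channel();
        let (result_tx, result) = mpsc::channel();
        thread::spawn(move || {
            // consume everything sent on `input`, report a summary
            let total: usize = input_rx.iter().map(|s: String| s.len()).sum();
            let _ = result_tx.send(total);
        });
        WorkerHandle { input, result }
    }

    fn main() {
        let handle = spawn_worker();
        handle.input.send("catalog".into()).unwrap();
        drop(handle.input); // close the channel so the worker finishes
        println!("bytes seen: {}", handle.result.recv().unwrap());
    }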
src/bin/proxmox-backup-client.rs | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/src/bin/proxmox-backup-client.rs b/src/bin/proxmox-backup-client.rs
index 041a670c..dce8f0b8 100644
--- a/src/bin/proxmox-backup-client.rs
+++ b/src/bin/proxmox-backup-client.rs
@@ -586,21 +586,21 @@ async fn start_garbage_collection(param: Value) -> Result<Value, Error> {
Ok(Value::Null)
}
+struct CatalogUploadResult {
+ catalog_writer: Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>,
+ result: tokio::sync::oneshot::Receiver<Result<BackupStats, Error>>,
+}
+
fn spawn_catalog_upload(
client: Arc<BackupWriter>,
encrypt: bool,
-) -> Result<
- (
- Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>,
- tokio::sync::oneshot::Receiver<Result<BackupStats, Error>>
- ), Error>
-{
+) -> Result<CatalogUploadResult, Error> {
let (catalog_tx, catalog_rx) = std::sync::mpsc::sync_channel(10); // allow to buffer 10 writes
let catalog_stream = crate::tools::StdChannelStream(catalog_rx);
let catalog_chunk_size = 512*1024;
let catalog_chunk_stream = ChunkStream::new(catalog_stream, Some(catalog_chunk_size));
- let catalog = Arc::new(Mutex::new(CatalogWriter::new(crate::tools::StdChannelWriter::new(catalog_tx))?));
+ let catalog_writer = Arc::new(Mutex::new(CatalogWriter::new(crate::tools::StdChannelWriter::new(catalog_tx))?));
let (catalog_result_tx, catalog_result_rx) = tokio::sync::oneshot::channel();
@@ -617,7 +617,7 @@ fn spawn_catalog_upload(
let _ = catalog_result_tx.send(catalog_upload_result);
});
- Ok((catalog, catalog_result_rx))
+ Ok(CatalogUploadResult { catalog_writer, result: catalog_result_rx })
}
fn keyfile_parameters(param: &Value) -> Result<(Option<Vec<u8>>, CryptMode), Error> {
@@ -990,7 +990,7 @@ async fn create_backup(
let mut manifest = BackupManifest::new(snapshot);
let mut catalog = None;
- let mut catalog_result_tx = None;
+ let mut catalog_result_rx = None;
for (backup_type, filename, target, size) in upload_list {
match backup_type {
@@ -1011,9 +1011,9 @@ async fn create_backup(
BackupSpecificationType::PXAR => {
// start catalog upload on first use
if catalog.is_none() {
- let (cat, res) = spawn_catalog_upload(client.clone(), crypt_mode == CryptMode::Encrypt)?;
- catalog = Some(cat);
- catalog_result_tx = Some(res);
+ let catalog_upload_res = spawn_catalog_upload(client.clone(), crypt_mode == CryptMode::Encrypt)?;
+ catalog = Some(catalog_upload_res.catalog_writer);
+ catalog_result_rx = Some(catalog_upload_res.result);
}
let catalog = catalog.as_ref().unwrap();
@@ -1065,7 +1065,7 @@ async fn create_backup(
drop(catalog); // close upload stream
- if let Some(catalog_result_rx) = catalog_result_tx {
+ if let Some(catalog_result_rx) = catalog_result_rx {
let stats = catalog_result_rx.await??;
manifest.add_file(CATALOG_NAME.to_owned(), stats.size, stats.csum, crypt_mode)?;
}
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 04/15] allow complex Futures in tower_service impl
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/server/h2service.rs | 5 +++--
src/server/rest.rs | 3 ++-
src/tools/http.rs | 5 ++---
3 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/src/server/h2service.rs b/src/server/h2service.rs
index 989618ec..332b3b1a 100644
--- a/src/server/h2service.rs
+++ b/src/server/h2service.rs
@@ -1,6 +1,7 @@
use anyhow::{Error};
use std::collections::HashMap;
+use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
@@ -85,8 +86,8 @@ impl <E: RpcEnvironment + Clone> H2Service<E> {
impl <E: RpcEnvironment + Clone> tower_service::Service<Request<Body>> for H2Service<E> {
type Response = Response<Body>;
type Error = Error;
- type Future =
- std::pin::Pin<Box<dyn Future<Output = Result<Response<Body>, Self::Error>> + Send>>;
+ #[allow(clippy::type_complexity)]
+ type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
fn poll_ready(&mut self, _cx: &mut Context) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
diff --git a/src/server/rest.rs b/src/server/rest.rs
index 0586979a..fc59be9a 100644
--- a/src/server/rest.rs
+++ b/src/server/rest.rs
@@ -198,7 +198,8 @@ fn get_user_agent(headers: &HeaderMap) -> Option<String> {
impl tower_service::Service<Request<Body>> for ApiService {
type Response = Response<Body>;
type Error = Error;
- type Future = Pin<Box<dyn Future<Output = Result<Response<Body>, Self::Error>> + Send>>;
+ #[allow(clippy::type_complexity)]
+ type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;
fn poll_ready(&mut self, _cx: &mut Context) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
diff --git a/src/tools/http.rs b/src/tools/http.rs
index 47d6e1f6..0fbc85fb 100644
--- a/src/tools/http.rs
+++ b/src/tools/http.rs
@@ -108,9 +108,8 @@ type MaybeTlsStream = EitherStream<
impl hyper::service::Service<Uri> for HttpsConnector {
type Response = MaybeTlsStream;
type Error = Error;
- type Future = std::pin::Pin<Box<
- dyn Future<Output = Result<Self::Response, Self::Error>> + Send + 'static
- >>;
+ #[allow(clippy::type_complexity)]
+ type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send + 'static>>;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
// This connector is always ready, but others might not be.
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 05/15] async index reader: typedef ReadFuture
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
there is also still a FIXME here..
src/backup/async_index_reader.rs | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/backup/async_index_reader.rs b/src/backup/async_index_reader.rs
index 60c68d4f..20a37e7e 100644
--- a/src/backup/async_index_reader.rs
+++ b/src/backup/async_index_reader.rs
@@ -15,6 +15,8 @@ use super::IndexFile;
use super::read_chunk::AsyncReadChunk;
use super::index::ChunkReadInfo;
+type ReadFuture<S> = dyn Future<Output = Result<(S, Vec<u8>), Error>> + Send + 'static;
+
// FIXME: This enum may not be required?
// - Put the `WaitForData` case directly into a `read_future: Option<>`
// - make the read loop as follows:
@@ -28,7 +30,7 @@ use super::index::ChunkReadInfo;
#[allow(clippy::enum_variant_names)]
enum AsyncIndexReaderState<S> {
NoData,
- WaitForData(Pin<Box<dyn Future<Output = Result<(S, Vec<u8>), Error>> + Send + 'static>>),
+ WaitForData(Pin<Box<ReadFuture<S>>>),
HaveData,
}
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 06/15] systemd/time: extract Time/DateSpec structs
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
could be pulled up into CalendarEvent if desired..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
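the idea reduced to a minimal parser (assumes nom 6/7-style combinators, as
parse_time.rs already uses; TimeSpec here is a cut-down, invented version):
instead of IResult over an anonymous 3-tuple, the parser returns a struct
with one construction site that names every component:

    use nom::bytes::complete::tag;
    use nom::character::complete::digit1;
    use nom::combinator::map_res;
    use nom::sequence::{preceded, tuple};
    use nom::IResult;

    struct TimeSpec {
        hour: u32,
        minute: u32,
    }

    fn num(i: &str) -> IResult<&str, u32> {
        map_res(digit1, str::parse)(i)
    }

    // before: IResult<&str, (u32, u32)> - callers destructure positionally
    fn parse_time(i: &str) -> IResult<&str, TimeSpec> {
        let (i, (hour, minute)) = tuple((num, preceded(tag(":"), num)))(i)?;
        Ok((i, TimeSpec { hour, minute }))
    }

    fn main() {
        let (_, t) = parse_time("13:42").unwrap();
        assert_eq!((t.hour, t.minute), (13, 42));
    }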
src/tools/systemd/parse_time.rs | 41 ++++++++++++++++++++++-----------
1 file changed, 27 insertions(+), 14 deletions(-)
diff --git a/src/tools/systemd/parse_time.rs b/src/tools/systemd/parse_time.rs
index 88f5c92d..c0f58b04 100644
--- a/src/tools/systemd/parse_time.rs
+++ b/src/tools/systemd/parse_time.rs
@@ -86,6 +86,19 @@ lazy_static! {
map
};
}
+
+struct TimeSpec {
+ hour: Vec<DateTimeValue>,
+ minute: Vec<DateTimeValue>,
+ second: Vec<DateTimeValue>,
+}
+
+struct DateSpec {
+ year: Vec<DateTimeValue>,
+ month: Vec<DateTimeValue>,
+ day: Vec<DateTimeValue>,
+}
+
fn parse_time_comp(max: usize) -> impl Fn(&str) -> IResult<&str, u32> {
move |i: &str| {
let (i, v) = map_res(recognize(digit1), str::parse)(i)?;
@@ -176,7 +189,7 @@ fn parse_date_time_comp_list(start: u32, max: usize) -> impl Fn(&str) -> IResult
}
}
-fn parse_time_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeValue>, Vec<DateTimeValue>)> {
+fn parse_time_spec(i: &str) -> IResult<&str, TimeSpec> {
let (i, (hour, minute, opt_second)) = tuple((
parse_date_time_comp_list(0, 24),
@@ -185,13 +198,13 @@ fn parse_time_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeVa
))(i)?;
if let Some(second) = opt_second {
- Ok((i, (hour, minute, second)))
+ Ok((i, TimeSpec { hour, minute, second }))
} else {
- Ok((i, (hour, minute, vec![DateTimeValue::Single(0)])))
+ Ok((i, TimeSpec { hour, minute, second: vec![DateTimeValue::Single(0)] }))
}
}
-fn parse_date_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeValue>, Vec<DateTimeValue>)> {
+fn parse_date_spec(i: &str) -> IResult<&str, DateSpec> {
// TODO: implement ~ for days (man systemd.time)
if let Ok((i, (year, month, day))) = tuple((
@@ -199,12 +212,12 @@ fn parse_date_spec(i: &str) -> IResult<&str, (Vec<DateTimeValue>, Vec<DateTimeVa
preceded(tag("-"), parse_date_time_comp_list(1, 13)),
preceded(tag("-"), parse_date_time_comp_list(1, 32)),
))(i) {
- Ok((i, (year, month, day)))
+ Ok((i, DateSpec { year, month, day }))
} else if let Ok((i, (month, day))) = tuple((
parse_date_time_comp_list(1, 13),
preceded(tag("-"), parse_date_time_comp_list(1, 32)),
))(i) {
- Ok((i, (Vec::new(), month, day)))
+ Ok((i, DateSpec { year: Vec::new(), month, day }))
} else {
Err(parse_error(i, "invalid date spec"))
}
@@ -317,18 +330,18 @@ fn parse_calendar_event_incomplete(mut i: &str) -> IResult<&str, CalendarEvent>
for range in range_list { event.days.insert(range); }
}
- if let (n, Some((year, month, day))) = opt(parse_date_spec)(i)? {
- event.year = year;
- event.month = month;
- event.day = day;
+ if let (n, Some(date)) = opt(parse_date_spec)(i)? {
+ event.year = date.year;
+ event.month = date.month;
+ event.day = date.day;
has_datespec = true;
i = space0(n)?.0;
}
- if let (n, Some((hour, minute, second))) = opt(parse_time_spec)(i)? {
- event.hour = hour;
- event.minute = minute;
- event.second = second;
+ if let (n, Some(time)) = opt(parse_time_spec)(i)? {
+ event.hour = time.hour;
+ event.minute = time.minute;
+ event.second = time.second;
has_timespec = true;
i = n;
} else {
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 07/15] client: factor out UploadOptions
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
to reduce function signature complexity.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
breaks/requires corresponding patch in proxmox-backup-qemu
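the signature change as a standalone sketch (UploadOpts and upload are
invented stand-ins): a Default-able options struct replaces a row of
positional bools, and struct-update syntax keeps call sites short. folding
fixed_size into the struct is also what lets the real upload_stream derive
the former "fixed"/"dynamic" prefix argument internally:

    #[derive(Default, Clone)]
    struct UploadOpts {
        compress: bool,
        encrypt: bool,
        fixed_size: Option<u64>, // Some(..) selects fixed chunking
    }

    // before: upload("x", true, false, None) - which bool was which?
    fn upload(name: &str, opts: UploadOpts) -> String {
        let mode = if opts.fixed_size.is_some() { "fixed" } else { "dynamic" };
        format!("{}: {} compress={} encrypt={}", name, mode, opts.compress, opts.encrypt)
    }

    fn main() {
        // spell out the interesting fields, default the rest
        let opts = UploadOpts { compress: true, ..UploadOpts::default() };
        println!("{}", upload("catalog.pcat1", opts));
    }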
src/bin/proxmox-backup-client.rs | 79 ++++++++++++++++++++++----------
src/client/backup_writer.rs | 44 ++++++++++--------
2 files changed, 80 insertions(+), 43 deletions(-)
diff --git a/src/bin/proxmox-backup-client.rs b/src/bin/proxmox-backup-client.rs
index dce8f0b8..af3b8464 100644
--- a/src/bin/proxmox-backup-client.rs
+++ b/src/bin/proxmox-backup-client.rs
@@ -280,7 +280,6 @@ pub async fn api_datastore_latest_snapshot(
async fn backup_directory<P: AsRef<Path>>(
client: &BackupWriter,
- previous_manifest: Option<Arc<BackupManifest>>,
dir_path: P,
archive_name: &str,
chunk_size: Option<usize>,
@@ -290,8 +289,7 @@ async fn backup_directory<P: AsRef<Path>>(
catalog: Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>,
exclude_pattern: Vec<MatchEntry>,
entries_max: usize,
- compress: bool,
- encrypt: bool,
+ upload_options: UploadOptions,
) -> Result<BackupStats, Error> {
let pxar_stream = PxarBackupStream::open(
@@ -317,8 +315,12 @@ async fn backup_directory<P: AsRef<Path>>(
}
});
+ if upload_options.fixed_size.is_some() {
+ bail!("cannot backup directory with fixed chunk size!");
+ }
+
let stats = client
- .upload_stream(previous_manifest, archive_name, stream, "dynamic", None, compress, encrypt)
+ .upload_stream(archive_name, stream, upload_options)
.await?;
Ok(stats)
@@ -326,14 +328,10 @@ async fn backup_directory<P: AsRef<Path>>(
async fn backup_image<P: AsRef<Path>>(
client: &BackupWriter,
- previous_manifest: Option<Arc<BackupManifest>>,
image_path: P,
archive_name: &str,
- image_size: u64,
chunk_size: Option<usize>,
- compress: bool,
- encrypt: bool,
- _verbose: bool,
+ upload_options: UploadOptions,
) -> Result<BackupStats, Error> {
let path = image_path.as_ref().to_owned();
@@ -345,8 +343,12 @@ async fn backup_image<P: AsRef<Path>>(
let stream = FixedChunkStream::new(stream, chunk_size.unwrap_or(4*1024*1024));
+ if upload_options.fixed_size.is_none() {
+ bail!("cannot backup image with dynamic chunk size!");
+ }
+
let stats = client
- .upload_stream(previous_manifest, archive_name, stream, "fixed", Some(image_size), compress, encrypt)
+ .upload_stream(archive_name, stream, upload_options)
.await?;
Ok(stats)
@@ -604,9 +606,15 @@ fn spawn_catalog_upload(
let (catalog_result_tx, catalog_result_rx) = tokio::sync::oneshot::channel();
+ let upload_options = UploadOptions {
+ encrypt,
+ compress: true,
+ ..UploadOptions::default()
+ };
+
tokio::spawn(async move {
let catalog_upload_result = client
- .upload_stream(None, CATALOG_NAME, catalog_chunk_stream, "dynamic", None, true, encrypt)
+ .upload_stream(CATALOG_NAME, catalog_chunk_stream, upload_options)
.await;
if let Err(ref err) = catalog_upload_result {
@@ -995,16 +1003,28 @@ async fn create_backup(
for (backup_type, filename, target, size) in upload_list {
match backup_type {
BackupSpecificationType::CONFIG => {
+ let upload_options = UploadOptions {
+ compress: true,
+ encrypt: crypt_mode == CryptMode::Encrypt,
+ ..UploadOptions::default()
+ };
+
println!("Upload config file '{}' to '{}' as {}", filename, repo, target);
let stats = client
- .upload_blob_from_file(&filename, &target, true, crypt_mode == CryptMode::Encrypt)
+ .upload_blob_from_file(&filename, &target, upload_options)
.await?;
manifest.add_file(target, stats.size, stats.csum, crypt_mode)?;
}
BackupSpecificationType::LOGFILE => { // fixme: remove - not needed anymore ?
+ let upload_options = UploadOptions {
+ compress: true,
+ encrypt: crypt_mode == CryptMode::Encrypt,
+ ..UploadOptions::default()
+ };
+
println!("Upload log file '{}' to '{}' as {}", filename, repo, target);
let stats = client
- .upload_blob_from_file(&filename, &target, true, crypt_mode == CryptMode::Encrypt)
+ .upload_blob_from_file(&filename, &target, upload_options)
.await?;
manifest.add_file(target, stats.size, stats.csum, crypt_mode)?;
}
@@ -1019,9 +1039,15 @@ async fn create_backup(
println!("Upload directory '{}' to '{}' as {}", filename, repo, target);
catalog.lock().unwrap().start_directory(std::ffi::CString::new(target.as_str())?.as_c_str())?;
+ let upload_options = UploadOptions {
+ previous_manifest: previous_manifest.clone(),
+ compress: true,
+ encrypt: crypt_mode == CryptMode::Encrypt,
+ ..UploadOptions::default()
+ };
+
let stats = backup_directory(
&client,
- previous_manifest.clone(),
&filename,
&target,
chunk_size_opt,
@@ -1031,24 +1057,27 @@ async fn create_backup(
catalog.clone(),
pattern_list.clone(),
entries_max as usize,
- true,
- crypt_mode == CryptMode::Encrypt,
+ upload_options,
).await?;
manifest.add_file(target, stats.size, stats.csum, crypt_mode)?;
catalog.lock().unwrap().end_directory()?;
}
BackupSpecificationType::IMAGE => {
println!("Upload image '{}' to '{:?}' as {}", filename, repo, target);
+
+ let upload_options = UploadOptions {
+ previous_manifest: previous_manifest.clone(),
+ fixed_size: Some(size),
+ compress: true,
+ encrypt: crypt_mode == CryptMode::Encrypt,
+ };
+
let stats = backup_image(
&client,
- previous_manifest.clone(),
- &filename,
+ &filename,
&target,
- size,
chunk_size_opt,
- true,
- crypt_mode == CryptMode::Encrypt,
- verbose,
+ upload_options,
).await?;
manifest.add_file(target, stats.size, stats.csum, crypt_mode)?;
}
@@ -1074,8 +1103,9 @@ async fn create_backup(
if let Some(rsa_encrypted_key) = rsa_encrypted_key {
let target = ENCRYPTED_KEY_BLOB_NAME;
println!("Upload RSA encoded key to '{:?}' as {}", repo, target);
+ let options = UploadOptions { compress: false, encrypt: false, ..UploadOptions::default() };
let stats = client
- .upload_blob_from_data(rsa_encrypted_key, target, false, false)
+ .upload_blob_from_data(rsa_encrypted_key, target, options)
.await?;
manifest.add_file(target.to_string(), stats.size, stats.csum, crypt_mode)?;
@@ -1087,8 +1117,9 @@ async fn create_backup(
if verbose { println!("Upload index.json to '{}'", repo) };
+ let options = UploadOptions { compress: true, encrypt: false, ..UploadOptions::default() };
client
- .upload_blob_from_data(manifest.into_bytes(), MANIFEST_BLOB_NAME, true, false)
+ .upload_blob_from_data(manifest.into_bytes(), MANIFEST_BLOB_NAME, options)
.await?;
client.finish().await?;
diff --git a/src/client/backup_writer.rs b/src/client/backup_writer.rs
index e7a6d6bd..38953a45 100644
--- a/src/client/backup_writer.rs
+++ b/src/client/backup_writer.rs
@@ -39,6 +39,15 @@ pub struct BackupStats {
pub csum: [u8; 32],
}
+/// Options for uploading blobs/streams to the server
+#[derive(Default, Clone)]
+pub struct UploadOptions {
+ pub previous_manifest: Option<Arc<BackupManifest>>,
+ pub compress: bool,
+ pub encrypt: bool,
+ pub fixed_size: Option<u64>,
+}
+
type UploadQueueSender = mpsc::Sender<(MergedChunkInfo, Option<h2::client::ResponseFuture>)>;
type UploadResultReceiver = oneshot::Receiver<Result<(), Error>>;
@@ -168,13 +177,12 @@ impl BackupWriter {
&self,
data: Vec<u8>,
file_name: &str,
- compress: bool,
- encrypt: bool,
+ options: UploadOptions,
) -> Result<BackupStats, Error> {
- let blob = match (encrypt, &self.crypt_config) {
- (false, _) => DataBlob::encode(&data, None, compress)?,
+ let blob = match (options.encrypt, &self.crypt_config) {
+ (false, _) => DataBlob::encode(&data, None, options.compress)?,
(true, None) => bail!("requested encryption without a crypt config"),
- (true, Some(crypt_config)) => DataBlob::encode(&data, Some(crypt_config), compress)?,
+ (true, Some(crypt_config)) => DataBlob::encode(&data, Some(crypt_config), options.compress)?,
};
let raw_data = blob.into_inner();
@@ -190,8 +198,7 @@ impl BackupWriter {
&self,
src_path: P,
file_name: &str,
- compress: bool,
- encrypt: bool,
+ options: UploadOptions,
) -> Result<BackupStats, Error> {
let src_path = src_path.as_ref();
@@ -206,34 +213,33 @@ impl BackupWriter {
.await
.map_err(|err| format_err!("unable to read file {:?} - {}", src_path, err))?;
- self.upload_blob_from_data(contents, file_name, compress, encrypt).await
+ self.upload_blob_from_data(contents, file_name, options).await
}
pub async fn upload_stream(
&self,
- previous_manifest: Option<Arc<BackupManifest>>,
archive_name: &str,
stream: impl Stream<Item = Result<bytes::BytesMut, Error>>,
- prefix: &str,
- fixed_size: Option<u64>,
- compress: bool,
- encrypt: bool,
+ options: UploadOptions,
) -> Result<BackupStats, Error> {
let known_chunks = Arc::new(Mutex::new(HashSet::new()));
let mut param = json!({ "archive-name": archive_name });
- if let Some(size) = fixed_size {
+ let prefix = if let Some(size) = options.fixed_size {
param["size"] = size.into();
- }
+ "fixed"
+ } else {
+ "dynamic"
+ };
- if encrypt && self.crypt_config.is_none() {
+ if options.encrypt && self.crypt_config.is_none() {
bail!("requested encryption without a crypt config");
}
let index_path = format!("{}_index", prefix);
let close_path = format!("{}_close", prefix);
- if let Some(manifest) = previous_manifest {
+ if let Some(manifest) = options.previous_manifest {
// try, but ignore errors
match archive_type(archive_name) {
Ok(ArchiveType::FixedIndex) => {
@@ -255,8 +261,8 @@ impl BackupWriter {
stream,
&prefix,
known_chunks.clone(),
- if encrypt { self.crypt_config.clone() } else { None },
- compress,
+ if options.encrypt { self.crypt_config.clone() } else { None },
+ options.compress,
self.verbose,
)
.await?;
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 08/15] pxar: typedef on_error as ErrorHandler
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
src/bin/pxar.rs | 4 ++--
src/pxar/extract.rs | 6 ++++--
src/pxar/mod.rs | 2 +-
3 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/src/bin/pxar.rs b/src/bin/pxar.rs
index 4d2308bf..3d92dbe0 100644
--- a/src/bin/pxar.rs
+++ b/src/bin/pxar.rs
@@ -17,7 +17,7 @@ use proxmox::api::cli::*;
use proxmox::api::api;
use proxmox_backup::tools;
-use proxmox_backup::pxar::{fuse, format_single_line_entry, ENCODER_MAX_ENTRIES, Flags};
+use proxmox_backup::pxar::{fuse, format_single_line_entry, ENCODER_MAX_ENTRIES, ErrorHandler, Flags};
fn extract_archive_from_reader<R: std::io::Read>(
reader: &mut R,
@@ -27,7 +27,7 @@ fn extract_archive_from_reader<R: std::io::Read>(
verbose: bool,
match_list: &[MatchEntry],
extract_match_default: bool,
- on_error: Option<Box<dyn FnMut(Error) -> Result<(), Error> + Send>>,
+ on_error: Option<ErrorHandler>,
) -> Result<(), Error> {
proxmox_backup::pxar::extract_archive(
pxar::decoder::Decoder::from_std(reader)?,
diff --git a/src/pxar/extract.rs b/src/pxar/extract.rs
index 12f9054d..e22fc847 100644
--- a/src/pxar/extract.rs
+++ b/src/pxar/extract.rs
@@ -24,6 +24,8 @@ use crate::pxar::dir_stack::PxarDirStack;
use crate::pxar::metadata;
use crate::pxar::Flags;
+pub type ErrorHandler = Box<dyn FnMut(Error) -> Result<(), Error> + Send>;
+
pub fn extract_archive<T, F>(
mut decoder: pxar::decoder::Decoder<T>,
destination: &Path,
@@ -32,7 +34,7 @@ pub fn extract_archive<T, F>(
feature_flags: Flags,
allow_existing_dirs: bool,
mut callback: F,
- on_error: Option<Box<dyn FnMut(Error) -> Result<(), Error> + Send>>,
+ on_error: Option<ErrorHandler>,
) -> Result<(), Error>
where
T: pxar::decoder::SeqRead,
@@ -212,7 +214,7 @@ pub(crate) struct Extractor {
/// Error callback. Includes `current_path` in the reformatted error, should return `Ok` to
/// continue extracting or the passed error as `Err` to bail out.
- on_error: Box<dyn FnMut(Error) -> Result<(), Error> + Send>,
+ on_error: ErrorHandler,
}
impl Extractor {
diff --git a/src/pxar/mod.rs b/src/pxar/mod.rs
index 6e910667..82998cf8 100644
--- a/src/pxar/mod.rs
+++ b/src/pxar/mod.rs
@@ -59,7 +59,7 @@ mod flags;
pub use flags::Flags;
pub use create::create_archive;
-pub use extract::extract_archive;
+pub use extract::{extract_archive, ErrorHandler};
/// The format requires to build sorted directory lookup tables in
/// memory, so we restrict the number of allowed entries to limit
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 09/15] pxar: factor out PxarCreateOptions
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
containing the CLI parameters that are mostly passed through from the
client to our pxar archive creation wrapper in pxar::create.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
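why this helps, in miniature (CreateOpts and the wrapper functions are
invented): once the parameters travel as one value, the intermediate layers
stop caring how many there are, so adding a field no longer touches every
signature in the chain:

    #[derive(Default, Clone)]
    struct CreateOpts {
        verbose: bool,
        entries_max: usize,
    }

    // only the actual worker looks inside the options
    fn create_archive(opts: &CreateOpts) {
        if opts.verbose {
            println!("entry limit: {}", opts.entries_max);
        }
    }

    // before: each wrapper repeated every parameter (verbose, entries_max, ...)
    fn backup_stream(opts: CreateOpts) { create_archive(&opts) }
    fn backup_directory(opts: CreateOpts) { backup_stream(opts) }

    fn main() {
        backup_directory(CreateOpts { verbose: true, entries_max: 1024 * 1024 });
    }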
src/bin/proxmox-backup-client.rs | 27 ++++++++++++---------------
src/bin/pxar.rs | 22 ++++++++++++++--------
src/client/pxar_backup_stream.rs | 28 ++++++----------------------
src/pxar/create.rs | 28 ++++++++++++++++++++++------
src/pxar/mod.rs | 2 +-
tests/catar.rs | 10 ++++++----
6 files changed, 61 insertions(+), 56 deletions(-)
diff --git a/src/bin/proxmox-backup-client.rs b/src/bin/proxmox-backup-client.rs
index af3b8464..c3d6a0ed 100644
--- a/src/bin/proxmox-backup-client.rs
+++ b/src/bin/proxmox-backup-client.rs
@@ -283,23 +283,15 @@ async fn backup_directory<P: AsRef<Path>>(
dir_path: P,
archive_name: &str,
chunk_size: Option<usize>,
- device_set: Option<HashSet<u64>>,
- verbose: bool,
- skip_lost_and_found: bool,
catalog: Arc<Mutex<CatalogWriter<crate::tools::StdChannelWriter>>>,
- exclude_pattern: Vec<MatchEntry>,
- entries_max: usize,
+ pxar_create_options: proxmox_backup::pxar::PxarCreateOptions,
upload_options: UploadOptions,
) -> Result<BackupStats, Error> {
let pxar_stream = PxarBackupStream::open(
dir_path.as_ref(),
- device_set,
- verbose,
- skip_lost_and_found,
catalog,
- exclude_pattern,
- entries_max,
+ pxar_create_options,
)?;
let mut chunk_stream = ChunkStream::new(pxar_stream, chunk_size);
@@ -1039,6 +1031,15 @@ async fn create_backup(
println!("Upload directory '{}' to '{}' as {}", filename, repo, target);
catalog.lock().unwrap().start_directory(std::ffi::CString::new(target.as_str())?.as_c_str())?;
+
+ let pxar_options = proxmox_backup::pxar::PxarCreateOptions {
+ device_set: devices.clone(),
+ patterns: pattern_list.clone(),
+ entries_max: entries_max as usize,
+ skip_lost_and_found,
+ verbose,
+ };
+
let upload_options = UploadOptions {
previous_manifest: previous_manifest.clone(),
compress: true,
@@ -1051,12 +1052,8 @@ async fn create_backup(
&filename,
&target,
chunk_size_opt,
- devices.clone(),
- verbose,
- skip_lost_and_found,
catalog.clone(),
- pattern_list.clone(),
- entries_max as usize,
+ pxar_options,
upload_options,
).await?;
manifest.add_file(target, stats.size, stats.csum, crypt_mode)?;
diff --git a/src/bin/pxar.rs b/src/bin/pxar.rs
index 3d92dbe0..85606c1d 100644
--- a/src/bin/pxar.rs
+++ b/src/bin/pxar.rs
@@ -311,16 +311,16 @@ fn create_archive(
exclude: Option<Vec<String>>,
entries_max: isize,
) -> Result<(), Error> {
- let pattern_list = {
+ let patterns = {
let input = exclude.unwrap_or_else(Vec::new);
- let mut pattern_list = Vec::with_capacity(input.len());
+ let mut patterns = Vec::with_capacity(input.len());
for entry in input {
- pattern_list.push(
+ patterns.push(
MatchEntry::parse_pattern(entry, PatternFlag::PATH_NAME, MatchType::Exclude)
.map_err(|err| format_err!("error in exclude pattern: {}", err))?,
);
}
- pattern_list
+ patterns
};
let device_set = if all_file_systems {
@@ -329,6 +329,15 @@ fn create_archive(
Some(HashSet::new())
};
+ let options = proxmox_backup::pxar::PxarCreateOptions {
+ entries_max: entries_max as usize,
+ device_set,
+ patterns,
+ verbose,
+ skip_lost_and_found: false,
+ };
+
+
let source = PathBuf::from(source);
let dir = nix::dir::Dir::open(
@@ -368,18 +377,15 @@ fn create_archive(
proxmox_backup::pxar::create_archive(
dir,
writer,
- pattern_list,
feature_flags,
- device_set,
- false,
|path| {
if verbose {
println!("{:?}", path);
}
Ok(())
},
- entries_max as usize,
None,
+ options,
)?;
Ok(())
diff --git a/src/client/pxar_backup_stream.rs b/src/client/pxar_backup_stream.rs
index aa3355fe..5fb28fd5 100644
--- a/src/client/pxar_backup_stream.rs
+++ b/src/client/pxar_backup_stream.rs
@@ -1,4 +1,3 @@
-use std::collections::HashSet;
use std::io::Write;
//use std::os::unix::io::FromRawFd;
use std::path::Path;
@@ -13,8 +12,6 @@ use nix::dir::Dir;
use nix::fcntl::OFlag;
use nix::sys::stat::Mode;
-use pathpatterns::MatchEntry;
-
use crate::backup::CatalogWriter;
/// Stream implementation to encode and upload .pxar archives.
@@ -38,12 +35,8 @@ impl Drop for PxarBackupStream {
impl PxarBackupStream {
pub fn new<W: Write + Send + 'static>(
dir: Dir,
- device_set: Option<HashSet<u64>>,
- verbose: bool,
- skip_lost_and_found: bool,
catalog: Arc<Mutex<CatalogWriter<W>>>,
- patterns: Vec<MatchEntry>,
- entries_max: usize,
+ options: crate::pxar::PxarCreateOptions,
) -> Result<Self, Error> {
let (tx, rx) = std::sync::mpsc::sync_channel(10);
@@ -61,22 +54,21 @@ impl PxarBackupStream {
crate::tools::StdChannelWriter::new(tx),
);
+ let verbose = options.verbose;
+
let writer = pxar::encoder::sync::StandardWriter::new(writer);
if let Err(err) = crate::pxar::create_archive(
dir,
writer,
- patterns,
crate::pxar::Flags::DEFAULT,
- device_set,
- skip_lost_and_found,
|path| {
if verbose {
println!("{:?}", path);
}
Ok(())
},
- entries_max,
Some(&mut *catalog_guard),
+ options,
) {
let mut error = error.lock().unwrap();
*error = Some(err.to_string());
@@ -93,23 +85,15 @@ impl PxarBackupStream {
pub fn open<W: Write + Send + 'static>(
dirname: &Path,
- device_set: Option<HashSet<u64>>,
- verbose: bool,
- skip_lost_and_found: bool,
catalog: Arc<Mutex<CatalogWriter<W>>>,
- patterns: Vec<MatchEntry>,
- entries_max: usize,
+ options: crate::pxar::PxarCreateOptions,
) -> Result<Self, Error> {
let dir = nix::dir::Dir::open(dirname, OFlag::O_DIRECTORY, Mode::empty())?;
Self::new(
dir,
- device_set,
- verbose,
- skip_lost_and_found,
catalog,
- patterns,
- entries_max,
+ options,
)
}
}
diff --git a/src/pxar/create.rs b/src/pxar/create.rs
index a21511c3..229e3f59 100644
--- a/src/pxar/create.rs
+++ b/src/pxar/create.rs
@@ -27,6 +27,22 @@ use crate::pxar::Flags;
use crate::pxar::tools::assert_single_path_component;
use crate::tools::{acl, fs, xattr, Fd};
+/// Pxar options for creating a pxar archive/stream
+#[derive(Default, Clone)]
+pub struct PxarCreateOptions {
+ /// Device/mountpoint st_dev numbers that should be included. None for no limitation.
+ pub device_set: Option<HashSet<u64>>,
+ /// Exclusion patterns
+ pub patterns: Vec<MatchEntry>,
+ /// Maximum number of entries to hold in memory
+ pub entries_max: usize,
+ /// Skip lost+found directory
+ pub skip_lost_and_found: bool,
+ /// Verbose output
+ pub verbose: bool,
+}
+
+
fn detect_fs_type(fd: RawFd) -> Result<i64, Error> {
let mut fs_stat = std::mem::MaybeUninit::uninit();
let res = unsafe { libc::fstatfs(fd, fs_stat.as_mut_ptr()) };
@@ -136,13 +152,10 @@ type Encoder<'a, 'b> = pxar::encoder::Encoder<'a, &'b mut dyn pxar::encoder::Seq
pub fn create_archive<T, F>(
source_dir: Dir,
mut writer: T,
- mut patterns: Vec<MatchEntry>,
feature_flags: Flags,
- mut device_set: Option<HashSet<u64>>,
- skip_lost_and_found: bool,
mut callback: F,
- entry_limit: usize,
catalog: Option<&mut dyn BackupCatalogWriter>,
+ options: PxarCreateOptions,
) -> Result<(), Error>
where
T: pxar::encoder::SeqWrite,
@@ -164,6 +177,7 @@ where
)
.map_err(|err| format_err!("failed to get metadata for source directory: {}", err))?;
+ let mut device_set = options.device_set.clone();
if let Some(ref mut set) = device_set {
set.insert(stat.st_dev);
}
@@ -171,7 +185,9 @@ where
let writer = &mut writer as &mut dyn pxar::encoder::SeqWrite;
let mut encoder = Encoder::new(writer, &metadata)?;
- if skip_lost_and_found {
+ let mut patterns = options.patterns.clone();
+
+ if options.skip_lost_and_found {
patterns.push(MatchEntry::parse_pattern(
"lost+found",
PatternFlag::PATH_NAME,
@@ -188,7 +204,7 @@ where
catalog,
path: PathBuf::new(),
entry_counter: 0,
- entry_limit,
+ entry_limit: options.entries_max,
current_st_dev: stat.st_dev,
device_set,
hardlinks: HashMap::new(),
diff --git a/src/pxar/mod.rs b/src/pxar/mod.rs
index 82998cf8..c10bb6fb 100644
--- a/src/pxar/mod.rs
+++ b/src/pxar/mod.rs
@@ -58,7 +58,7 @@ pub(crate) mod tools;
mod flags;
pub use flags::Flags;
-pub use create::create_archive;
+pub use create::{create_archive, PxarCreateOptions};
pub use extract::{extract_archive, ErrorHandler};
/// The format requires to build sorted directory lookup tables in
diff --git a/tests/catar.rs b/tests/catar.rs
index a6acf9af..2d9dea71 100644
--- a/tests/catar.rs
+++ b/tests/catar.rs
@@ -25,16 +25,18 @@ fn run_test(dir_name: &str) -> Result<(), Error> {
dir_name, nix::fcntl::OFlag::O_NOFOLLOW,
nix::sys::stat::Mode::empty())?;
+ let options = PxarCreateOptions {
+ entries_max: ENCODER_MAX_ENTRIES,
+ ..PxarCreateOptions::default()
+ };
+
create_archive(
dir,
writer,
- Vec::new(),
Flags::DEFAULT,
- None,
- false,
|_| Ok(()),
- ENCODER_MAX_ENTRIES,
None,
+ options,
)?;
Command::new("cmp")
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 10/15] pxar: extract PxarExtractOptions
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
same as PxarCreateOptions, but for extraction/restore rather than
create.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
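small invented sketch of the one wrinkle here: unlike PxarCreateOptions this
struct borrows (match_list carries a lifetime), so callers keep ownership of
the match entries and only lend them for the duration of the call:

    struct ExtractOpts<'a> {
        match_list: &'a [&'a str], // stand-in for &'a [MatchEntry]
        allow_existing_dirs: bool,
    }

    fn extract(archive: &str, opts: ExtractOpts) {
        let matched = opts.match_list.iter().any(|p| archive.contains(p));
        println!("{}: matched={} overwrite-dirs={}",
                 archive, matched, opts.allow_existing_dirs);
    }

    fn main() {
        let patterns = vec!["etc/", "usr/share/"];
        let opts = ExtractOpts { match_list: &patterns, allow_existing_dirs: true };
        extract("etc/hosts", opts);
        // `patterns` is still owned and usable here
        assert_eq!(patterns.len(), 2);
    }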
src/bin/proxmox-backup-client.rs | 12 ++++++++----
src/bin/pxar.rs | 31 ++++++++++++++-----------------
src/pxar/extract.rs | 20 ++++++++++++--------
src/pxar/mod.rs | 2 +-
4 files changed, 35 insertions(+), 30 deletions(-)
diff --git a/src/bin/proxmox-backup-client.rs b/src/bin/proxmox-backup-client.rs
index c3d6a0ed..d31e47ae 100644
--- a/src/bin/proxmox-backup-client.rs
+++ b/src/bin/proxmox-backup-client.rs
@@ -1354,20 +1354,24 @@ async fn restore(param: Value) -> Result<Value, Error> {
let mut reader = BufferedDynamicReader::new(index, chunk_reader);
+ let options = proxmox_backup::pxar::PxarExtractOptions {
+ match_list: &[],
+ extract_match_default: true,
+ allow_existing_dirs,
+ on_error: None,
+ };
+
if let Some(target) = target {
proxmox_backup::pxar::extract_archive(
pxar::decoder::Decoder::from_std(reader)?,
Path::new(target),
- &[],
- true,
proxmox_backup::pxar::Flags::DEFAULT,
- allow_existing_dirs,
|path| {
if verbose {
println!("{:?}", path);
}
},
- None,
+ options,
)
.map_err(|err| format_err!("error extracting archive - {}", err))?;
} else {
diff --git a/src/bin/pxar.rs b/src/bin/pxar.rs
index 85606c1d..b2fe6d52 100644
--- a/src/bin/pxar.rs
+++ b/src/bin/pxar.rs
@@ -17,31 +17,26 @@ use proxmox::api::cli::*;
use proxmox::api::api;
use proxmox_backup::tools;
-use proxmox_backup::pxar::{fuse, format_single_line_entry, ENCODER_MAX_ENTRIES, ErrorHandler, Flags};
+use proxmox_backup::pxar::{fuse, format_single_line_entry, ENCODER_MAX_ENTRIES, Flags, PxarExtractOptions};
fn extract_archive_from_reader<R: std::io::Read>(
reader: &mut R,
target: &str,
feature_flags: Flags,
- allow_existing_dirs: bool,
verbose: bool,
- match_list: &[MatchEntry],
- extract_match_default: bool,
- on_error: Option<ErrorHandler>,
+ options: PxarExtractOptions,
) -> Result<(), Error> {
+
proxmox_backup::pxar::extract_archive(
pxar::decoder::Decoder::from_std(reader)?,
Path::new(target),
- &match_list,
- extract_match_default,
feature_flags,
- allow_existing_dirs,
|path| {
if verbose {
println!("{:?}", path);
}
},
- on_error,
+ options,
)
}
@@ -190,6 +185,13 @@ fn extract_archive(
}) as Box<dyn FnMut(Error) -> Result<(), Error> + Send>)
};
+ let options = PxarExtractOptions {
+ match_list: &match_list,
+ allow_existing_dirs,
+ extract_match_default,
+ on_error,
+ };
+
if archive == "-" {
let stdin = std::io::stdin();
let mut reader = stdin.lock();
@@ -197,11 +199,8 @@ fn extract_archive(
&mut reader,
&target,
feature_flags,
- allow_existing_dirs,
verbose,
- &match_list,
- extract_match_default,
- on_error,
+ options,
)?;
} else {
if verbose {
@@ -213,11 +212,8 @@ fn extract_archive(
&mut reader,
&target,
feature_flags,
- allow_existing_dirs,
verbose,
- &match_list,
- extract_match_default,
- on_error,
+ options,
)?;
}
@@ -297,6 +293,7 @@ fn extract_archive(
},
)]
/// Create a new .pxar archive.
+#[allow(clippy::too_many_arguments)]
fn create_archive(
archive: String,
source: String,
diff --git a/src/pxar/extract.rs b/src/pxar/extract.rs
index e22fc847..9cf3f928 100644
--- a/src/pxar/extract.rs
+++ b/src/pxar/extract.rs
@@ -24,17 +24,21 @@ use crate::pxar::dir_stack::PxarDirStack;
use crate::pxar::metadata;
use crate::pxar::Flags;
+pub struct PxarExtractOptions<'a> {
+ pub match_list: &'a[MatchEntry],
+ pub extract_match_default: bool,
+ pub allow_existing_dirs: bool,
+ pub on_error: Option<ErrorHandler>,
+}
+
pub type ErrorHandler = Box<dyn FnMut(Error) -> Result<(), Error> + Send>;
pub fn extract_archive<T, F>(
mut decoder: pxar::decoder::Decoder<T>,
destination: &Path,
- match_list: &[MatchEntry],
- extract_match_default: bool,
feature_flags: Flags,
- allow_existing_dirs: bool,
mut callback: F,
- on_error: Option<ErrorHandler>,
+ options: PxarExtractOptions,
) -> Result<(), Error>
where
T: pxar::decoder::SeqRead,
@@ -69,17 +73,17 @@ where
let mut extractor = Extractor::new(
dir,
root.metadata().clone(),
- allow_existing_dirs,
+ options.allow_existing_dirs,
feature_flags,
);
- if let Some(on_error) = on_error {
+ if let Some(on_error) = options.on_error {
extractor.on_error(on_error);
}
let mut match_stack = Vec::new();
let mut err_path_stack = vec![OsString::from("/")];
- let mut current_match = extract_match_default;
+ let mut current_match = options.extract_match_default;
while let Some(entry) = decoder.next() {
use pxar::EntryKind;
@@ -99,7 +103,7 @@ where
extractor.set_path(entry.path().as_os_str().to_owned());
- let match_result = match_list.matches(
+ let match_result = options.match_list.matches(
entry.path().as_os_str().as_bytes(),
Some(metadata.file_type() as u32),
);
diff --git a/src/pxar/mod.rs b/src/pxar/mod.rs
index c10bb6fb..5d03591b 100644
--- a/src/pxar/mod.rs
+++ b/src/pxar/mod.rs
@@ -59,7 +59,7 @@ mod flags;
pub use flags::Flags;
pub use create::{create_archive, PxarCreateOptions};
-pub use extract::{extract_archive, ErrorHandler};
+pub use extract::{extract_archive, ErrorHandler, PxarExtractOptions};
/// The format requires to build sorted directory lookup tables in
/// memory, so we restrict the number of allowed entries to limit
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 11/15] authid: make Tokenname(Ref) derive Eq
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
it's needed alongside the derived Hash (HashSet/HashMap keys require
Eq + Hash), and we always compare Authids or their Userid components,
never just the Tokenname part anyway..
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
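a compilable miniature of why the hand-rolled PartialEq could go (field
types simplified, a String stands in for Userid): the derived impl recurses
into Option<Tokenname> and the inner String exactly like the manual
as_str() comparison did, and Eq + Hash together are what hash-based
containers need:

    #[derive(Clone, Debug, Eq, PartialEq, Hash)]
    struct Tokenname(String);

    #[derive(Clone, Debug, Eq, PartialEq, Hash)]
    struct Authid {
        user: String,
        tokenname: Option<Tokenname>,
    }

    fn main() {
        let a = Authid {
            user: "root@pam".into(),
            tokenname: Some(Tokenname("t1".into())),
        };
        let b = a.clone();
        // derived == walks user, then Option<Tokenname>, then the inner
        // String - the same logic the manual impl spelled out with as_str()
        assert_eq!(a, b);
        let mut set = std::collections::HashSet::new();
        set.insert(a);
        assert!(set.contains(&b));
    }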
src/api2/types/userid.rs | 50 +++++-----------------------------------
1 file changed, 6 insertions(+), 44 deletions(-)
diff --git a/src/api2/types/userid.rs b/src/api2/types/userid.rs
index 20ce9370..7c73e69f 100644
--- a/src/api2/types/userid.rs
+++ b/src/api2/types/userid.rs
@@ -16,10 +16,10 @@
//! * [`Authid`]: an owned Authentication ID (a `Userid` with an optional `Tokenname`).
//! Note that `Userid` and `Authid` do not have a separate borrowed type.
//!
-//! Note that `Username`s and `Tokenname`s are not unique, therefore they do not implement `Eq` and cannot be
+//! Note that `Username`s are not unique, therefore they do not implement `Eq` and cannot be
//! compared directly. If a direct comparison is really required, they can be compared as strings
-//! via the `as_str()` method. [`Realm`]s, [`Userid`]s and [`Authid`]s on the other
-//! hand can be compared with each other, as in those cases the comparison has meaning.
+//! via the `as_str()` method. [`Realm`]s, [`Userid`]s and [`Authid`]s on the other hand can be
+//! compared with each other, as in those cases the comparison has meaning.
use std::borrow::Borrow;
use std::convert::TryFrom;
@@ -299,16 +299,8 @@ impl PartialEq<Realm> for &RealmRef {
)]
/// The token ID part of an API token authentication id.
///
-/// This alone does NOT uniquely identify the API token and therefore does not implement `Eq`. In
-/// order to compare token IDs directly, they need to be explicitly compared as strings by calling
-/// `.as_str()`.
-///
-/// ```compile_fail
-/// fn test(a: Tokenname, b: Tokenname) -> bool {
-/// a == b // illegal and does not compile
-/// }
-/// ```
-#[derive(Clone, Debug, Hash, Deserialize, Serialize)]
+/// This alone does NOT uniquely identify the API token - use a full `Authid` for such use cases.
+#[derive(Clone, Debug, Eq, Hash, PartialEq, Deserialize, Serialize)]
pub struct Tokenname(String);
/// A reference to a token name part of an authentication id. This alone does NOT uniquely identify
@@ -336,24 +328,6 @@ pub struct TokennameRef(str);
/// let b: &UsernameRef = unsafe { std::mem::zeroed() };
/// let _ = <&UsernameRef as PartialEq>::eq(&a, &b);
/// ```
-///
-/// ```compile_fail
-/// let a: Tokenname = unsafe { std::mem::zeroed() };
-/// let b: Tokenname = unsafe { std::mem::zeroed() };
-/// let _ = <Tokenname as PartialEq>::eq(&a, &b);
-/// ```
-///
-/// ```compile_fail
-/// let a: &TokennameRef = unsafe { std::mem::zeroed() };
-/// let b: &TokennameRef = unsafe { std::mem::zeroed() };
-/// let _ = <&TokennameRef as PartialEq>::eq(a, b);
-/// ```
-///
-/// ```compile_fail
-/// let a: &TokennameRef = unsafe { std::mem::zeroed() };
-/// let b: &TokennameRef = unsafe { std::mem::zeroed() };
-/// let _ = <&TokennameRef as PartialEq>::eq(&a, &b);
-/// ```
struct _AssertNoEqImpl;
impl TokennameRef {
@@ -548,7 +522,7 @@ impl PartialEq<String> for Userid {
}
/// A complete authentication id consisting of a user id and an optional token name.
-#[derive(Clone, Debug, Hash)]
+#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct Authid {
user: Userid,
tokenname: Option<Tokenname>
@@ -590,18 +564,6 @@ lazy_static! {
pub static ref ROOT_AUTHID: Authid = Authid::from(Userid::new("root@pam".to_string(), 4));
}
-impl Eq for Authid {}
-
-impl PartialEq for Authid {
- fn eq(&self, rhs: &Self) -> bool {
- self.user == rhs.user && match (&self.tokenname, &rhs.tokenname) {
- (Some(ours), Some(theirs)) => ours.as_str() == theirs.as_str(),
- (None, None) => true,
- _ => false,
- }
- }
-}
-
impl From<Userid> for Authid {
fn from(parts: Userid) -> Self {
Self::new(parts, None)
--
2.20.1
* [pbs-devel] [PATCH proxmox-backup 12/15] derive/impl and use Default for some structs
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC
To: pbs-devel
and revamp HttpClientOptions with two constructors for the common use
cases
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
breaks proxmox-backup-qemu, corresponding patch comes later in this
series
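the constructor-plus-Default shape in isolation (invented field subset):
Default stays a manual impl because the safe default, verify_cert = true,
is not what derive would produce, and the named constructors fill in the
two common presets via struct-update syntax:

    struct ClientOpts {
        password: Option<String>,
        interactive: bool,
        ticket_cache: bool,
        verify_cert: bool,
    }

    impl Default for ClientOpts {
        fn default() -> Self {
            // manual impl: certificate verification must default to on
            Self { password: None, interactive: false, ticket_cache: false, verify_cert: true }
        }
    }

    impl ClientOpts {
        fn new_interactive(password: Option<String>) -> Self {
            Self { password, interactive: true, ticket_cache: true, ..Self::default() }
        }

        fn new_non_interactive(password: String) -> Self {
            Self { password: Some(password), ..Self::default() }
        }
    }

    fn main() {
        let o = ClientOpts::new_non_interactive("secret".into());
        assert!(o.verify_cert && !o.interactive);
    }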
examples/download-speed.rs | 2 +-
examples/upload-speed.rs | 2 +-
src/api2/config/remote.rs | 4 +---
src/backup/prune.rs | 1 +
src/bin/proxmox-backup-client.rs | 18 ++++-----------
src/client.rs | 11 +++++----
src/client/http_client.rs | 38 +++++++++++++++++++++++++-------
src/client/pull.rs | 4 +---
src/config/acl.rs | 8 ++++---
src/config/network.rs | 2 +-
10 files changed, 50 insertions(+), 40 deletions(-)
diff --git a/examples/download-speed.rs b/examples/download-speed.rs
index 3ccf4ce7..a4afb7ba 100644
--- a/examples/download-speed.rs
+++ b/examples/download-speed.rs
@@ -28,7 +28,7 @@ async fn run() -> Result<(), Error> {
let auth_id = Authid::root_auth_id();
- let options = HttpClientOptions::new()
+ let options = HttpClientOptions::default()
.interactive(true)
.ticket_cache(true);
diff --git a/examples/upload-speed.rs b/examples/upload-speed.rs
index 641ed952..05e44aaf 100644
--- a/examples/upload-speed.rs
+++ b/examples/upload-speed.rs
@@ -10,7 +10,7 @@ async fn upload_speed() -> Result<f64, Error> {
let auth_id = Authid::root_auth_id();
- let options = HttpClientOptions::new()
+ let options = HttpClientOptions::default()
.interactive(true)
.ticket_cache(true);
diff --git a/src/api2/config/remote.rs b/src/api2/config/remote.rs
index fe7dc451..28221358 100644
--- a/src/api2/config/remote.rs
+++ b/src/api2/config/remote.rs
@@ -310,9 +310,7 @@ pub fn delete_remote(name: String, digest: Option<String>) -> Result<(), Error>
/// Helper to get client for remote.cfg entry
pub async fn remote_client(remote: remote::Remote) -> Result<HttpClient, Error> {
- let options = HttpClientOptions::new()
- .password(Some(remote.password.clone()))
- .fingerprint(remote.fingerprint.clone());
+ let options = HttpClientOptions::new_non_interactive(remote.password.clone(), remote.fingerprint.clone());
let client = HttpClient::new(
&remote.host,
diff --git a/src/backup/prune.rs b/src/backup/prune.rs
index baec57d6..dd038055 100644
--- a/src/backup/prune.rs
+++ b/src/backup/prune.rs
@@ -67,6 +67,7 @@ fn remove_incomplete_snapshots(
}
}
+#[derive(Default)]
pub struct PruneOptions {
pub keep_last: Option<u64>,
pub keep_hourly: Option<u64>,
diff --git a/src/bin/proxmox-backup-client.rs b/src/bin/proxmox-backup-client.rs
index d31e47ae..fe305f63 100644
--- a/src/bin/proxmox-backup-client.rs
+++ b/src/bin/proxmox-backup-client.rs
@@ -211,13 +211,7 @@ fn connect_do(server: &str, port: u16, auth_id: &Authid) -> Result<HttpClient, E
Err(NotPresent) => None,
};
- let options = HttpClientOptions::new()
- .prefix(Some("proxmox-backup".to_string()))
- .password(password)
- .interactive(true)
- .fingerprint(fingerprint)
- .fingerprint_cache(true)
- .ticket_cache(true);
+ let options = HttpClientOptions::new_interactive(password, fingerprint);
HttpClient::new(server, port, auth_id, options)
}
@@ -1565,13 +1559,9 @@ async fn try_get(repo: &BackupRepository, url: &str) -> Value {
let fingerprint = std::env::var(ENV_VAR_PBS_FINGERPRINT).ok();
let password = std::env::var(ENV_VAR_PBS_PASSWORD).ok();
- let options = HttpClientOptions::new()
- .prefix(Some("proxmox-backup".to_string()))
- .password(password)
- .interactive(false)
- .fingerprint(fingerprint)
- .fingerprint_cache(true)
- .ticket_cache(true);
+ // ticket cache, but no questions asked
+ let options = HttpClientOptions::new_interactive(password, fingerprint)
+ .interactive(false);
let client = match HttpClient::new(repo.host(), repo.port(), repo.auth_id(), options) {
Ok(v) => v,
diff --git a/src/client.rs b/src/client.rs
index 8c4542b6..d50c26c2 100644
--- a/src/client.rs
+++ b/src/client.rs
@@ -49,17 +49,16 @@ pub fn connect_to_localhost() -> Result<HttpClient, Error> {
let uid = nix::unistd::Uid::current();
- let mut options = HttpClientOptions::new()
- .prefix(Some("proxmox-backup".to_string()))
- .verify_cert(false); // not required for connection to localhost
-
let client = if uid.is_root() {
let ticket = Ticket::new("PBS", Userid::root_userid())?
.sign(private_auth_key(), None)?;
- options = options.password(Some(ticket));
+ let fingerprint = crate::tools::cert::CertInfo::new()?.fingerprint()?;
+ let options = HttpClientOptions::new_non_interactive(ticket, Some(fingerprint));
+
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
} else {
- options = options.ticket_cache(true).interactive(true);
+ let options = HttpClientOptions::new_interactive(None, None);
+
HttpClient::new("localhost", 8007, Authid::root_auth_id(), options)?
};
diff --git a/src/client/http_client.rs b/src/client/http_client.rs
index f279d9dd..9fd1c013 100644
--- a/src/client/http_client.rs
+++ b/src/client/http_client.rs
@@ -52,15 +52,23 @@ pub struct HttpClientOptions {
impl HttpClientOptions {
- pub fn new() -> Self {
+ pub fn new_interactive(password: Option<String>, fingerprint: Option<String>) -> Self {
Self {
- prefix: None,
- password: None,
- fingerprint: None,
- interactive: false,
- ticket_cache: false,
- fingerprint_cache: false,
- verify_cert: true,
+ password,
+ fingerprint,
+ fingerprint_cache: true,
+ ticket_cache: true,
+ interactive: true,
+ prefix: Some("proxmox-backup".to_string()),
+ ..Self::default()
+ }
+ }
+
+ pub fn new_non_interactive(password: String, fingerprint: Option<String>) -> Self {
+ Self {
+ password: Some(password),
+ fingerprint,
+ ..Self::default()
}
}
@@ -100,6 +108,20 @@ impl HttpClientOptions {
}
}
+impl Default for HttpClientOptions {
+ fn default() -> Self {
+ Self {
+ prefix: None,
+ password: None,
+ fingerprint: None,
+ interactive: false,
+ ticket_cache: false,
+ fingerprint_cache: false,
+ verify_cert: true,
+ }
+ }
+}
+
/// HTTP(S) API client
pub struct HttpClient {
client: Client<HttpsConnector>,
diff --git a/src/client/pull.rs b/src/client/pull.rs
index 15514374..95720973 100644
--- a/src/client/pull.rs
+++ b/src/client/pull.rs
@@ -502,9 +502,7 @@ pub async fn pull_group(
// get updated auth_info (new tickets)
let auth_info = client.login().await?;
- let options = HttpClientOptions::new()
- .password(Some(auth_info.ticket.clone()))
- .fingerprint(fingerprint.clone());
+ let options = HttpClientOptions::new_non_interactive(auth_info.ticket.clone(), fingerprint.clone());
let new_client = HttpClient::new(
src_repo.host(),
diff --git a/src/config/acl.rs b/src/config/acl.rs
index 6ef54e30..e02ac5c7 100644
--- a/src/config/acl.rs
+++ b/src/config/acl.rs
@@ -299,6 +299,7 @@ pub fn check_acl_path(path: &str) -> Result<(), Error> {
}
/// Tree representing a parsed acl.cfg
+#[derive(Default)]
pub struct AclTree {
/// Root node of the tree.
///
@@ -308,6 +309,7 @@ pub struct AclTree {
}
/// Node representing ACLs for a certain ACL path.
+#[derive(Default)]
pub struct AclTreeNode {
/// [User](crate::config::user::User) or
/// [Token](crate::config::user::ApiToken) ACLs for this node.
@@ -412,7 +414,7 @@ impl AclTreeNode {
}
fn insert_group_role(&mut self, group: String, role: String, propagate: bool) {
- let map = self.groups.entry(group).or_insert_with(HashMap::new);
+ let map = self.groups.entry(group).or_default();
if role == ROLE_NAME_NO_ACCESS {
map.clear();
map.insert(role, propagate);
@@ -423,7 +425,7 @@ impl AclTreeNode {
}
fn insert_user_role(&mut self, auth_id: Authid, role: String, propagate: bool) {
- let map = self.users.entry(auth_id).or_insert_with(HashMap::new);
+ let map = self.users.entry(auth_id).or_default();
if role == ROLE_NAME_NO_ACCESS {
map.clear();
map.insert(role, propagate);
@@ -465,7 +467,7 @@ impl AclTree {
node = node
.children
.entry(String::from(*comp))
- .or_insert_with(AclTreeNode::new);
+ .or_default();
}
node
}
diff --git a/src/config/network.rs b/src/config/network.rs
index 4241261a..99ea0d08 100644
--- a/src/config/network.rs
+++ b/src/config/network.rs
@@ -318,7 +318,7 @@ enum NetworkOrderEntry {
Option(String),
}
-#[derive(Debug)]
+#[derive(Debug, Default)]
pub struct NetworkConfig {
pub interfaces: BTreeMap<String, Interface>,
order: Vec<NetworkOrderEntry>,
--
2.20.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 13/15] verify: factor out common parameters
2021-01-25 13:42 [pbs-devel] [PATCH proxmox-backup(-qemu) 00/17] clippy refactorings Fabian Grünbichler
` (11 preceding siblings ...)
2021-01-25 13:42 ` [pbs-devel] [PATCH proxmox-backup 12/15] derive/impl and use Default for some structs Fabian Grünbichler
@ 2021-01-25 13:42 ` Fabian Grünbichler
2021-01-25 13:42 ` [pbs-devel] [PATCH proxmox-backup 14/15] clippy: allow api functions with many arguments Fabian Grünbichler
` (4 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC (permalink / raw)
To: pbs-devel
all the verify methods pass along the following:
- task worker
- datastore
- corrupt and verified chunks
might as well pull that out into a common type, with the added bonus of
now having a single point for construction instead of copying the
default capacities in three different modules.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
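As a self-contained sketch of the parameter-object pattern applied here (all
names are hypothetical stand-ins, not the actual proxmox-backup API):

    use std::collections::HashSet;
    use std::sync::{Arc, Mutex};

    /// Shared verification state, constructed in exactly one place.
    struct VerifyCtx {
        verified: Arc<Mutex<HashSet<[u8; 32]>>>,
        corrupt: Arc<Mutex<HashSet<[u8; 32]>>>,
    }

    impl VerifyCtx {
        fn new() -> Self {
            Self {
                // the capacity heuristics live here now, not in every caller
                verified: Arc::new(Mutex::new(HashSet::with_capacity(16 * 1024))),
                corrupt: Arc::new(Mutex::new(HashSet::with_capacity(64))),
            }
        }
    }

    /// Callers pass one borrow instead of cloning several Arcs per call.
    fn verify_chunk(ctx: &VerifyCtx, digest: [u8; 32]) -> bool {
        if ctx.corrupt.lock().unwrap().contains(&digest) {
            return false;
        }
        ctx.verified.lock().unwrap().insert(digest)
    }

    fn main() {
        let ctx = VerifyCtx::new();
        assert!(verify_chunk(&ctx, [0u8; 32]));
    }

Bundling the shared state also gives borrow-based call sites, which is why the
per-call Arc clones disappear throughout the diff below.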
src/api2/admin/datastore.rs | 17 +---
src/api2/backup/environment.rs | 10 +-
src/backup/verify.rs | 174 ++++++++++++++-------------------
src/server/verify_job.rs | 3 +-
4 files changed, 85 insertions(+), 119 deletions(-)
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index ba8f3417..3d5b6af6 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -3,7 +3,6 @@
use std::collections::HashSet;
use std::ffi::OsStr;
use std::os::unix::ffi::OsStrExt;
-use std::sync::{Arc, Mutex};
use std::path::{Path, PathBuf};
use std::pin::Pin;
@@ -672,17 +671,12 @@ pub fn verify(
auth_id.clone(),
to_stdout,
move |worker| {
- let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
- let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
-
+ let verify_worker = crate::backup::VerifyWorker::new(worker.clone(), datastore);
let failed_dirs = if let Some(backup_dir) = backup_dir {
let mut res = Vec::new();
if !verify_backup_dir(
- datastore,
+ &verify_worker,
&backup_dir,
- verified_chunks,
- corrupt_chunks,
- worker.clone(),
worker.upid().clone(),
None,
)? {
@@ -691,12 +685,9 @@ pub fn verify(
res
} else if let Some(backup_group) = backup_group {
let failed_dirs = verify_backup_group(
- datastore,
+ &verify_worker,
&backup_group,
- verified_chunks,
- corrupt_chunks,
&mut StoreProgress::new(1),
- worker.clone(),
worker.upid(),
None,
)?;
@@ -711,7 +702,7 @@ pub fn verify(
None
};
- verify_all_backups(datastore, worker.clone(), worker.upid(), owner, None)?
+ verify_all_backups(&verify_worker, worker.upid(), owner, None)?
};
if !failed_dirs.is_empty() {
worker.log("Failed to verify the following snapshots/groups:");
diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
index 38061816..c8f52b6e 100644
--- a/src/api2/backup/environment.rs
+++ b/src/api2/backup/environment.rs
@@ -1,6 +1,6 @@
use anyhow::{bail, format_err, Error};
use std::sync::{Arc, Mutex};
-use std::collections::{HashMap, HashSet};
+use std::collections::HashMap;
use nix::dir::Dir;
use ::serde::{Serialize};
@@ -525,15 +525,11 @@ impl BackupEnvironment {
move |worker| {
worker.log("Automatically verifying newly added snapshot");
- let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
- let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
+ let verify_worker = crate::backup::VerifyWorker::new(worker.clone(), datastore);
if !verify_backup_dir_with_lock(
- datastore,
+ &verify_worker,
&backup_dir,
- verified_chunks,
- corrupt_chunks,
- worker.clone(),
worker.upid().clone(),
None,
snap_lock,
diff --git a/src/backup/verify.rs b/src/backup/verify.rs
index 5e4bc7fb..ac4a6c29 100644
--- a/src/backup/verify.rs
+++ b/src/backup/verify.rs
@@ -29,6 +29,29 @@ use crate::{
tools::fs::lock_dir_noblock_shared,
};
+/// A VerifyWorker encapsulates a task worker, datastore and information about which chunks have
+/// already been verified or detected as corrupt.
+pub struct VerifyWorker {
+ worker: Arc<dyn TaskState + Send + Sync>,
+ datastore: Arc<DataStore>,
+ verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
+ corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
+}
+
+impl VerifyWorker {
+ /// Creates a new VerifyWorker for a given task worker and datastore.
+ pub fn new(worker: Arc<dyn TaskState + Send + Sync>, datastore: Arc<DataStore>) -> Self {
+ Self {
+ worker,
+ datastore,
+ // start with 16k chunks == up to 64G data
+ verified_chunks: Arc::new(Mutex::new(HashSet::with_capacity(16*1024))),
+ // start with 64 chunks since we assume there are few corrupt ones
+ corrupt_chunks: Arc::new(Mutex::new(HashSet::with_capacity(64))),
+ }
+ }
+}
+
fn verify_blob(datastore: Arc<DataStore>, backup_dir: &BackupDir, info: &FileInfo) -> Result<(), Error> {
let blob = datastore.load_blob(backup_dir, &info.filename)?;
@@ -82,12 +105,9 @@ fn rename_corrupted_chunk(
}
fn verify_index_chunks(
- datastore: Arc<DataStore>,
+ verify_worker: &VerifyWorker,
index: Box<dyn IndexFile + Send>,
- verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- corrupt_chunks: Arc<Mutex<HashSet<[u8; 32]>>>,
crypt_mode: CryptMode,
- worker: Arc<dyn TaskState + Send + Sync>,
) -> Result<(), Error> {
let errors = Arc::new(AtomicUsize::new(0));
@@ -97,10 +117,10 @@ fn verify_index_chunks(
let mut read_bytes = 0;
let mut decoded_bytes = 0;
- let worker2 = Arc::clone(&worker);
- let datastore2 = Arc::clone(&datastore);
- let corrupt_chunks2 = Arc::clone(&corrupt_chunks);
- let verified_chunks2 = Arc::clone(&verified_chunks);
+ let worker2 = Arc::clone(&verify_worker.worker);
+ let datastore2 = Arc::clone(&verify_worker.datastore);
+ let corrupt_chunks2 = Arc::clone(&verify_worker.corrupt_chunks);
+ let verified_chunks2 = Arc::clone(&verify_worker.verified_chunks);
let errors2 = Arc::clone(&errors);
let decoder_pool = ParallelHandler::new(
@@ -141,29 +161,29 @@ fn verify_index_chunks(
for pos in 0..index.index_count() {
- worker.check_abort()?;
+ verify_worker.worker.check_abort()?;
crate::tools::fail_on_shutdown()?;
let info = index.chunk_info(pos).unwrap();
let size = info.size();
- if verified_chunks.lock().unwrap().contains(&info.digest) {
+ if verify_worker.verified_chunks.lock().unwrap().contains(&info.digest) {
continue; // already verified
}
- if corrupt_chunks.lock().unwrap().contains(&info.digest) {
+ if verify_worker.corrupt_chunks.lock().unwrap().contains(&info.digest) {
let digest_str = proxmox::tools::digest_to_hex(&info.digest);
- task_log!(worker, "chunk {} was marked as corrupt", digest_str);
+ task_log!(verify_worker.worker, "chunk {} was marked as corrupt", digest_str);
errors.fetch_add(1, Ordering::SeqCst);
continue;
}
- match datastore.load_chunk(&info.digest) {
+ match verify_worker.datastore.load_chunk(&info.digest) {
Err(err) => {
- corrupt_chunks.lock().unwrap().insert(info.digest);
- task_log!(worker, "can't verify chunk, load failed - {}", err);
+ verify_worker.corrupt_chunks.lock().unwrap().insert(info.digest);
+ task_log!(verify_worker.worker, "can't verify chunk, load failed - {}", err);
errors.fetch_add(1, Ordering::SeqCst);
- rename_corrupted_chunk(datastore.clone(), &info.digest, &worker);
+ rename_corrupted_chunk(verify_worker.datastore.clone(), &info.digest, &verify_worker.worker);
continue;
}
Ok(chunk) => {
@@ -187,7 +207,7 @@ fn verify_index_chunks(
let error_count = errors.load(Ordering::SeqCst);
task_log!(
- worker,
+ verify_worker.worker,
" verified {:.2}/{:.2} MiB in {:.2} seconds, speed {:.2}/{:.2} MiB/s ({} errors)",
read_bytes_mib,
decoded_bytes_mib,
@@ -205,18 +225,15 @@ fn verify_index_chunks(
}
fn verify_fixed_index(
- datastore: Arc<DataStore>,
+ verify_worker: &VerifyWorker,
backup_dir: &BackupDir,
info: &FileInfo,
- verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- worker: Arc<dyn TaskState + Send + Sync>,
) -> Result<(), Error> {
let mut path = backup_dir.relative_path();
path.push(&info.filename);
- let index = datastore.open_fixed_reader(&path)?;
+ let index = verify_worker.datastore.open_fixed_reader(&path)?;
let (csum, size) = index.compute_csum();
if size != info.size {
@@ -228,28 +245,22 @@ fn verify_fixed_index(
}
verify_index_chunks(
- datastore,
+ verify_worker,
Box::new(index),
- verified_chunks,
- corrupt_chunks,
info.chunk_crypt_mode(),
- worker,
)
}
fn verify_dynamic_index(
- datastore: Arc<DataStore>,
+ verify_worker: &VerifyWorker,
backup_dir: &BackupDir,
info: &FileInfo,
- verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- worker: Arc<dyn TaskState + Send + Sync>,
) -> Result<(), Error> {
let mut path = backup_dir.relative_path();
path.push(&info.filename);
- let index = datastore.open_dynamic_reader(&path)?;
+ let index = verify_worker.datastore.open_dynamic_reader(&path)?;
let (csum, size) = index.compute_csum();
if size != info.size {
@@ -261,12 +272,9 @@ fn verify_dynamic_index(
}
verify_index_chunks(
- datastore,
+ verify_worker,
Box::new(index),
- verified_chunks,
- corrupt_chunks,
info.chunk_crypt_mode(),
- worker,
)
}
@@ -280,34 +288,28 @@ fn verify_dynamic_index(
/// - Ok(false) if there were verification errors
/// - Err(_) if task was aborted
pub fn verify_backup_dir(
- datastore: Arc<DataStore>,
+ verify_worker: &VerifyWorker,
backup_dir: &BackupDir,
- verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- worker: Arc<dyn TaskState + Send + Sync>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<bool, Error> {
let snap_lock = lock_dir_noblock_shared(
- &datastore.snapshot_path(&backup_dir),
+ &verify_worker.datastore.snapshot_path(&backup_dir),
"snapshot",
"locked by another operation");
match snap_lock {
Ok(snap_lock) => verify_backup_dir_with_lock(
- datastore,
+ verify_worker,
backup_dir,
- verified_chunks,
- corrupt_chunks,
- worker,
upid,
filter,
snap_lock
),
Err(err) => {
task_log!(
- worker,
+ verify_worker.worker,
"SKIPPED: verify {}:{} - could not acquire snapshot lock: {}",
- datastore.name(),
+ verify_worker.datastore.name(),
backup_dir,
err,
);
@@ -318,22 +320,19 @@ pub fn verify_backup_dir(
/// See verify_backup_dir
pub fn verify_backup_dir_with_lock(
- datastore: Arc<DataStore>,
+ verify_worker: &VerifyWorker,
backup_dir: &BackupDir,
- verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- worker: Arc<dyn TaskState + Send + Sync>,
upid: UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
_snap_lock: Dir,
) -> Result<bool, Error> {
- let manifest = match datastore.load_manifest(&backup_dir) {
+ let manifest = match verify_worker.datastore.load_manifest(&backup_dir) {
Ok((manifest, _)) => manifest,
Err(err) => {
task_log!(
- worker,
+ verify_worker.worker,
"verify {}:{} - manifest load error: {}",
- datastore.name(),
+ verify_worker.datastore.name(),
backup_dir,
err,
);
@@ -344,54 +343,48 @@ pub fn verify_backup_dir_with_lock(
if let Some(filter) = filter {
if !filter(&manifest) {
task_log!(
- worker,
+ verify_worker.worker,
"SKIPPED: verify {}:{} (recently verified)",
- datastore.name(),
+ verify_worker.datastore.name(),
backup_dir,
);
return Ok(true);
}
}
- task_log!(worker, "verify {}:{}", datastore.name(), backup_dir);
+ task_log!(verify_worker.worker, "verify {}:{}", verify_worker.datastore.name(), backup_dir);
let mut error_count = 0;
let mut verify_result = VerifyState::Ok;
for info in manifest.files() {
let result = proxmox::try_block!({
- task_log!(worker, " check {}", info.filename);
+ task_log!(verify_worker.worker, " check {}", info.filename);
match archive_type(&info.filename)? {
ArchiveType::FixedIndex =>
verify_fixed_index(
- datastore.clone(),
+ verify_worker,
&backup_dir,
info,
- verified_chunks.clone(),
- corrupt_chunks.clone(),
- worker.clone(),
),
ArchiveType::DynamicIndex =>
verify_dynamic_index(
- datastore.clone(),
+ verify_worker,
&backup_dir,
info,
- verified_chunks.clone(),
- corrupt_chunks.clone(),
- worker.clone(),
),
- ArchiveType::Blob => verify_blob(datastore.clone(), &backup_dir, info),
+ ArchiveType::Blob => verify_blob(verify_worker.datastore.clone(), &backup_dir, info),
}
});
- worker.check_abort()?;
+ verify_worker.worker.check_abort()?;
crate::tools::fail_on_shutdown()?;
if let Err(err) = result {
task_log!(
- worker,
+ verify_worker.worker,
"verify {}:{}/{} failed: {}",
- datastore.name(),
+ verify_worker.datastore.name(),
backup_dir,
info.filename,
err,
@@ -407,7 +400,7 @@ pub fn verify_backup_dir_with_lock(
upid,
};
let verify_state = serde_json::to_value(verify_state)?;
- datastore.update_manifest(&backup_dir, |manifest| {
+ verify_worker.datastore.update_manifest(&backup_dir, |manifest| {
manifest.unprotected["verify_state"] = verify_state;
}).map_err(|err| format_err!("unable to update manifest blob - {}", err))?;
@@ -422,24 +415,21 @@ pub fn verify_backup_dir_with_lock(
/// - Ok((count, failed_dirs)) where failed_dirs had verification errors
/// - Err(_) if task was aborted
pub fn verify_backup_group(
- datastore: Arc<DataStore>,
+ verify_worker: &VerifyWorker,
group: &BackupGroup,
- verified_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
- corrupt_chunks: Arc<Mutex<HashSet<[u8;32]>>>,
progress: &mut StoreProgress,
- worker: Arc<dyn TaskState + Send + Sync>,
upid: &UPID,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
- let mut list = match group.list_backups(&datastore.base_path()) {
+ let mut list = match group.list_backups(&verify_worker.datastore.base_path()) {
Ok(list) => list,
Err(err) => {
task_log!(
- worker,
+ verify_worker.worker,
"verify group {}:{} - unable to list backups: {}",
- datastore.name(),
+ verify_worker.datastore.name(),
group,
err,
);
@@ -448,18 +438,15 @@ pub fn verify_backup_group(
};
let snapshot_count = list.len();
- task_log!(worker, "verify group {}:{} ({} snapshots)", datastore.name(), group, snapshot_count);
+ task_log!(verify_worker.worker, "verify group {}:{} ({} snapshots)", verify_worker.datastore.name(), group, snapshot_count);
progress.group_snapshots = snapshot_count as u64;
BackupInfo::sort_list(&mut list, false); // newest first
for (pos, info) in list.into_iter().enumerate() {
if !verify_backup_dir(
- datastore.clone(),
+ verify_worker,
&info.backup_dir,
- verified_chunks.clone(),
- corrupt_chunks.clone(),
- worker.clone(),
upid.clone(),
filter,
)? {
@@ -467,7 +454,7 @@ pub fn verify_backup_group(
}
progress.done_snapshots = pos as u64 + 1;
task_log!(
- worker,
+ verify_worker.worker,
"percentage done: {}",
progress
);
@@ -484,22 +471,22 @@ pub fn verify_backup_group(
/// - Ok(failed_dirs) where failed_dirs had verification errors
/// - Err(_) if task was aborted
pub fn verify_all_backups(
- datastore: Arc<DataStore>,
- worker: Arc<dyn TaskState + Send + Sync>,
+ verify_worker: &VerifyWorker,
upid: &UPID,
owner: Option<Authid>,
filter: Option<&dyn Fn(&BackupManifest) -> bool>,
) -> Result<Vec<String>, Error> {
let mut errors = Vec::new();
+ let worker = Arc::clone(&verify_worker.worker);
- task_log!(worker, "verify datastore {}", datastore.name());
+ task_log!(worker, "verify datastore {}", verify_worker.datastore.name());
if let Some(owner) = &owner {
task_log!(worker, "limiting to backups owned by {}", owner);
}
let filter_by_owner = |group: &BackupGroup| {
- match (datastore.get_owner(group), &owner) {
+ match (verify_worker.datastore.get_owner(group), &owner) {
(Ok(ref group_owner), Some(owner)) => {
group_owner == owner
|| (group_owner.is_token()
@@ -527,7 +514,7 @@ pub fn verify_all_backups(
}
};
- let mut list = match BackupInfo::list_backup_groups(&datastore.base_path()) {
+ let mut list = match BackupInfo::list_backup_groups(&verify_worker.datastore.base_path()) {
Ok(list) => list
.into_iter()
.filter(|group| !(group.backup_type() == "host" && group.backup_id() == "benchmark"))
@@ -545,12 +532,6 @@ pub fn verify_all_backups(
list.sort_unstable();
- // start with 16384 chunks (up to 65GB)
- let verified_chunks = Arc::new(Mutex::new(HashSet::with_capacity(1024*16)));
-
- // start with 64 chunks since we assume there are few corrupt ones
- let corrupt_chunks = Arc::new(Mutex::new(HashSet::with_capacity(64)));
-
let group_count = list.len();
task_log!(worker, "found {} groups", group_count);
@@ -562,12 +543,9 @@ pub fn verify_all_backups(
progress.group_snapshots = 0;
let mut group_errors = verify_backup_group(
- datastore.clone(),
+ verify_worker,
&group,
- verified_chunks.clone(),
- corrupt_chunks.clone(),
&mut progress,
- worker.clone(),
upid,
filter,
)?;
diff --git a/src/server/verify_job.rs b/src/server/verify_job.rs
index ca6eb554..1dd8baa7 100644
--- a/src/server/verify_job.rs
+++ b/src/server/verify_job.rs
@@ -67,7 +67,8 @@ pub fn do_verification_job(
task_log!(worker,"task triggered by schedule '{}'", event_str);
}
- let result = verify_all_backups(datastore, worker.clone(), worker.upid(), None, Some(&filter));
+ let verify_worker = crate::backup::VerifyWorker::new(worker.clone(), datastore);
+ let result = verify_all_backups(&verify_worker, worker.upid(), None, Some(&filter));
let job_result = match result {
Ok(ref failed_dirs) if failed_dirs.is_empty() => Ok(()),
Ok(ref failed_dirs) => {
--
2.20.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 14/15] clippy: allow api functions with many arguments
2021-01-25 13:42 [pbs-devel] [PATCH proxmox-backup(-qemu) 00/17] clippy refactorings Fabian Grünbichler
` (12 preceding siblings ...)
2021-01-25 13:42 ` [pbs-devel] [PATCH proxmox-backup 13/15] verify: factor out common parameters Fabian Grünbichler
@ 2021-01-25 13:42 ` Fabian Grünbichler
2021-01-25 13:43 ` [pbs-devel] [PATCH proxmox-backup 15/15] clippy: more misc fixes Fabian Grünbichler
` (3 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2021-01-25 13:42 UTC (permalink / raw)
To: pbs-devel
some of those can be reduced/cleaned up when we have updater support in
the api macro.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
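To make the lint and the eventual cleanup concrete, a minimal sketch with
made-up names; clippy warns once a signature exceeds its default threshold of
seven parameters, and a parameter struct is the fix the message above alludes
to:

    // opt-out, as done throughout this patch
    #[allow(clippy::too_many_arguments)]
    fn update_thing(a: u8, b: u8, c: u8, d: u8, e: u8, f: u8, g: u8, h: u8) -> u8 {
        a + b + c + d + e + f + g + h
    }

    // alternative: collapse the arguments into one struct
    struct UpdateParams {
        a: u8,
        b: u8,
        // ...
    }

    fn update_thing_v2(params: UpdateParams) -> u8 {
        params.a + params.b
    }

    fn main() {
        assert_eq!(update_thing(1, 1, 1, 1, 1, 1, 1, 1), 8);
        assert_eq!(update_thing_v2(UpdateParams { a: 1, b: 2 }), 3);
    }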
src/api2/access/acl.rs | 1 +
src/api2/access/tfa.rs | 1 +
src/api2/access/user.rs | 1 +
src/api2/admin/datastore.rs | 2 --
src/api2/config/datastore.rs | 1 +
src/api2/config/remote.rs | 1 +
src/api2/config/sync.rs | 1 +
src/api2/config/verify.rs | 1 +
src/api2/node/network.rs | 2 ++
src/api2/node/tasks.rs | 1 +
src/bin/pxar.rs | 1 +
src/client/backup_writer.rs | 2 ++
12 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/src/api2/access/acl.rs b/src/api2/access/acl.rs
index 6f1ba327..be9ad380 100644
--- a/src/api2/access/acl.rs
+++ b/src/api2/access/acl.rs
@@ -165,6 +165,7 @@ pub fn read_acl(
},
)]
/// Update Access Control List (ACLs).
+#[allow(clippy::too_many_arguments)]
pub fn update_acl(
path: String,
role: String,
diff --git a/src/api2/access/tfa.rs b/src/api2/access/tfa.rs
index 5df0baec..ba583323 100644
--- a/src/api2/access/tfa.rs
+++ b/src/api2/access/tfa.rs
@@ -421,6 +421,7 @@ impl TfaUpdateInfo {
},
)]
/// Add a TFA entry to the user.
+#[allow(clippy::too_many_arguments)]
fn add_tfa_entry(
userid: Userid,
description: Option<String>,
diff --git a/src/api2/access/user.rs b/src/api2/access/user.rs
index c6032818..c49b12b1 100644
--- a/src/api2/access/user.rs
+++ b/src/api2/access/user.rs
@@ -353,6 +353,7 @@ pub enum DeletableProperty {
},
)]
/// Update user configuration.
+#[allow(clippy::too_many_arguments)]
pub fn update_user(
userid: Userid,
comment: Option<String>,
diff --git a/src/api2/admin/datastore.rs b/src/api2/admin/datastore.rs
index 3d5b6af6..6f02e460 100644
--- a/src/api2/admin/datastore.rs
+++ b/src/api2/admin/datastore.rs
@@ -1293,8 +1293,6 @@ pub fn catalog(
backup_id: String,
backup_time: i64,
filepath: String,
- _param: Value,
- _info: &ApiMethod,
rpcenv: &mut dyn RpcEnvironment,
) -> Result<Value, Error> {
let datastore = DataStore::lookup_datastore(&store)?;
diff --git a/src/api2/config/datastore.rs b/src/api2/config/datastore.rs
index 00009b74..3a3dc176 100644
--- a/src/api2/config/datastore.rs
+++ b/src/api2/config/datastore.rs
@@ -277,6 +277,7 @@ pub enum DeletableProperty {
},
)]
/// Update datastore config.
+#[allow(clippy::too_many_arguments)]
pub fn update_datastore(
name: String,
comment: Option<String>,
diff --git a/src/api2/config/remote.rs b/src/api2/config/remote.rs
index 28221358..446a2604 100644
--- a/src/api2/config/remote.rs
+++ b/src/api2/config/remote.rs
@@ -203,6 +203,7 @@ pub enum DeletableProperty {
},
)]
/// Update remote configuration.
+#[allow(clippy::too_many_arguments)]
pub fn update_remote(
name: String,
comment: Option<String>,
diff --git a/src/api2/config/sync.rs b/src/api2/config/sync.rs
index e7360051..00a8c0b3 100644
--- a/src/api2/config/sync.rs
+++ b/src/api2/config/sync.rs
@@ -279,6 +279,7 @@ pub enum DeletableProperty {
},
)]
/// Update sync job config.
+#[allow(clippy::too_many_arguments)]
pub fn update_sync_job(
id: String,
store: Option<String>,
diff --git a/src/api2/config/verify.rs b/src/api2/config/verify.rs
index 08a9e717..db5f4d83 100644
--- a/src/api2/config/verify.rs
+++ b/src/api2/config/verify.rs
@@ -215,6 +215,7 @@ pub enum DeletableProperty {
},
)]
/// Update verification job config.
+#[allow(clippy::too_many_arguments)]
pub fn update_verification_job(
id: String,
store: Option<String>,
diff --git a/src/api2/node/network.rs b/src/api2/node/network.rs
index d9c031ae..0dc321b8 100644
--- a/src/api2/node/network.rs
+++ b/src/api2/node/network.rs
@@ -213,6 +213,7 @@ pub fn read_interface(iface: String) -> Result<Value, Error> {
},
)]
/// Create network interface configuration.
+#[allow(clippy::too_many_arguments)]
pub fn create_interface(
iface: String,
autostart: Option<bool>,
@@ -477,6 +478,7 @@ pub enum DeletableProperty {
},
)]
/// Update network interface config.
+#[allow(clippy::too_many_arguments)]
pub fn update_interface(
iface: String,
autostart: Option<bool>,
diff --git a/src/api2/node/tasks.rs b/src/api2/node/tasks.rs
index 99470531..ff6ed726 100644
--- a/src/api2/node/tasks.rs
+++ b/src/api2/node/tasks.rs
@@ -425,6 +425,7 @@ fn stop_task(
},
)]
/// List tasks.
+#[allow(clippy::too_many_arguments)]
pub fn list_tasks(
start: u64,
limit: u64,
diff --git a/src/bin/pxar.rs b/src/bin/pxar.rs
index b2fe6d52..814b3346 100644
--- a/src/bin/pxar.rs
+++ b/src/bin/pxar.rs
@@ -112,6 +112,7 @@ fn extract_archive_from_reader<R: std::io::Read>(
},
)]
/// Extract an archive.
+#[allow(clippy::too_many_arguments)]
fn extract_archive(
archive: String,
pattern: Option<Vec<String>>,
diff --git a/src/client/backup_writer.rs b/src/client/backup_writer.rs
index 38953a45..01ea7704 100644
--- a/src/client/backup_writer.rs
+++ b/src/client/backup_writer.rs
@@ -57,6 +57,8 @@ impl BackupWriter {
Arc::new(Self { h2, abort, crypt_config, verbose })
}
+ // FIXME: extract into (flattened) parameter struct?
+ #[allow(clippy::too_many_arguments)]
pub async fn start(
client: HttpClient,
crypt_config: Option<Arc<CryptConfig>>,
--
2.20.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [pbs-devel] [PATCH proxmox-backup 15/15] clippy: more misc fixes
2021-01-25 13:42 [pbs-devel] [PATCH proxmox-backup(-qemu) 00/17] clippy refactorings Fabian Grünbichler
` (13 preceding siblings ...)
2021-01-25 13:42 ` [pbs-devel] [PATCH proxmox-backup 14/15] clippy: allow api functions with many arguments Fabian Grünbichler
@ 2021-01-25 13:43 ` Fabian Grünbichler
2021-01-25 13:43 ` [pbs-devel] [PATCH proxmox-backup-qemu 1/2] use UploadOptions for uploading Blobs Fabian Grünbichler
` (2 subsequent siblings)
17 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2021-01-25 13:43 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
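The two lints that recur in this patch, reduced to a standalone sketch
(run() is a made-up stand-in):

    fn run() -> Result<(), String> {
        Err("boom".to_string())
    }

    fn main() {
        // clippy::single_match: prefer `if let` over a match whose only
        // other arm is `_ => ()`.
        if let Err(err) = run() {
            eprintln!("error during update: {}", err);
        }

        // clippy::or_fun_call: `unwrap_or` evaluates its argument even on
        // the Some path; `unwrap_or_else` defers the allocation.
        let loopdev: Option<String> = None;
        println!("{}", loopdev.unwrap_or_else(|| "(unmapped)".to_string()));
    }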
src/bin/proxmox-daily-update.rs | 9 +++------
src/bin/proxmox_backup_client/key.rs | 24 +++++-------------------
src/bin/proxmox_backup_client/mount.rs | 2 +-
src/bin/proxmox_backup_manager/user.rs | 4 ++--
src/rrd/mod.rs | 1 +
5 files changed, 12 insertions(+), 28 deletions(-)
diff --git a/src/bin/proxmox-daily-update.rs b/src/bin/proxmox-daily-update.rs
index 99f5152e..83c6b80c 100644
--- a/src/bin/proxmox-daily-update.rs
+++ b/src/bin/proxmox-daily-update.rs
@@ -63,11 +63,8 @@ fn main() {
let mut rpcenv = CliEnvironment::new();
rpcenv.set_auth_id(Some(String::from("root@pam")));
- match proxmox_backup::tools::runtime::main(do_update(&mut rpcenv)) {
- Err(err) => {
- eprintln!("error during update: {}", err);
- std::process::exit(1);
- },
- _ => (),
+ if let Err(err) = proxmox_backup::tools::runtime::main(do_update(&mut rpcenv)) {
+ eprintln!("error during update: {}", err);
+ std::process::exit(1);
}
}
diff --git a/src/bin/proxmox_backup_client/key.rs b/src/bin/proxmox_backup_client/key.rs
index ef3123bb..405cb818 100644
--- a/src/bin/proxmox_backup_client/key.rs
+++ b/src/bin/proxmox_backup_client/key.rs
@@ -315,22 +315,13 @@ fn change_passphrase(
},
)]
/// Print the encryption key's metadata.
-fn show_key(
- path: Option<String>,
- param: Value,
-) -> Result<(), Error> {
+fn show_key(path: Option<String>, param: Value) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
- None => {
- let path = find_default_encryption_key()?
- .ok_or_else(|| {
- format_err!("no encryption file provided and no default file found")
- })?;
- path
- }
+ None => find_default_encryption_key()?
+ .ok_or_else(|| format_err!("no encryption file provided and no default file found"))?,
};
-
let config: KeyConfig = serde_json::from_slice(&file_get_contents(path.clone())?)?;
let output_format = get_output_format(&param);
@@ -442,13 +433,8 @@ fn paper_key(
) -> Result<(), Error> {
let path = match path {
Some(path) => PathBuf::from(path),
- None => {
- let path = find_default_encryption_key()?
- .ok_or_else(|| {
- format_err!("no encryption file provided and no default file found")
- })?;
- path
- }
+ None => find_default_encryption_key()?
+ .ok_or_else(|| format_err!("no encryption file provided and no default file found"))?,
};
let data = file_get_contents(&path)?;
diff --git a/src/bin/proxmox_backup_client/mount.rs b/src/bin/proxmox_backup_client/mount.rs
index 72ed9166..24100752 100644
--- a/src/bin/proxmox_backup_client/mount.rs
+++ b/src/bin/proxmox_backup_client/mount.rs
@@ -345,7 +345,7 @@ fn unmap(
let mut any = false;
for (backing, loopdev) in tools::fuse_loop::find_all_mappings()? {
let name = tools::systemd::unescape_unit(&backing)?;
- println!("{}:\t{}", loopdev.unwrap_or("(unmapped)".to_owned()), name);
+ println!("{}:\t{}", loopdev.unwrap_or_else(|| "(unmapped)".to_string()), name);
any = true;
}
if !any {
diff --git a/src/bin/proxmox_backup_manager/user.rs b/src/bin/proxmox_backup_manager/user.rs
index d68e9e81..6603db1b 100644
--- a/src/bin/proxmox_backup_manager/user.rs
+++ b/src/bin/proxmox_backup_manager/user.rs
@@ -133,7 +133,7 @@ fn list_permissions(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
let output_format = get_output_format(&param);
let info = &api2::access::API_METHOD_LIST_PERMISSIONS;
- let mut data = match info.handler {
+ let data = match info.handler {
ApiHandler::Sync(handler) => (handler)(param, info, rpcenv)?,
_ => unreachable!(),
};
@@ -161,7 +161,7 @@ fn list_permissions(param: Value, rpcenv: &mut dyn RpcEnvironment) -> Result<Val
}
}
} else {
- format_and_print_result(&mut data, &output_format);
+ format_and_print_result(&data, &output_format);
}
Ok(Value::Null)
diff --git a/src/rrd/mod.rs b/src/rrd/mod.rs
index c09efebf..03e4c9de 100644
--- a/src/rrd/mod.rs
+++ b/src/rrd/mod.rs
@@ -1,3 +1,4 @@
+#[allow(clippy::module_inception)]
mod rrd;
pub use rrd::*;
mod cache;
--
2.20.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [pbs-devel] [PATCH proxmox-backup-qemu 1/2] use UploadOptions for uploading Blobs
2021-01-25 13:42 [pbs-devel] [PATCH proxmox-backup(-qemu) 00/17] clippy refactorings Fabian Grünbichler
` (14 preceding siblings ...)
2021-01-25 13:43 ` [pbs-devel] [PATCH proxmox-backup 15/15] clippy: more misc fixes Fabian Grünbichler
@ 2021-01-25 13:43 ` Fabian Grünbichler
2021-01-25 13:43 ` [pbs-devel] [PATCH proxmox-backup-qemu 2/2] use new HttpClientOptions constructors Fabian Grünbichler
2021-01-26 9:44 ` [pbs-devel] applied series: [PATCH proxmox-backup(-qemu) 00/17] clippy refactorings Wolfgang Bumiller
17 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2021-01-25 13:43 UTC (permalink / raw)
To: pbs-devel
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
needs corresponding proxmox-backup commit
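The patch leans on struct-update syntax to keep call sites short; a minimal
sketch, with UploadOptions here a hypothetical stand-in carrying only the two
fields exercised below:

    #[derive(Default)]
    struct UploadOptions {
        compress: bool,
        encrypt: bool,
    }

    fn upload(data: &[u8], opts: UploadOptions) -> usize {
        // placeholder: a real client would act on opts.compress / opts.encrypt
        let _ = (opts.compress, opts.encrypt);
        data.len()
    }

    fn main() {
        // name the options that matter, default the rest
        let opts = UploadOptions { compress: true, ..UploadOptions::default() };
        assert_eq!(upload(b"blob", opts), 4);
    }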
src/commands.rs | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/src/commands.rs b/src/commands.rs
index 2f6eb0f..c63c4f7 100644
--- a/src/commands.rs
+++ b/src/commands.rs
@@ -93,7 +93,13 @@ pub(crate) async fn add_config(
let blob_name = format!("{}.blob", name);
- let stats = client.upload_blob_from_data(data, &blob_name, compress, crypt_mode == CryptMode::Encrypt).await?;
+ let options = UploadOptions {
+ compress,
+ encrypt: crypt_mode == CryptMode::Encrypt,
+ ..UploadOptions::default()
+ };
+
+ let stats = client.upload_blob_from_data(data, &blob_name, options).await?;
let mut guard = manifest.lock().unwrap();
guard.add_file(blob_name, stats.size, stats.csum, crypt_mode)?;
@@ -457,8 +463,13 @@ pub(crate) async fn finish_backup(
*key_fingerprint_guard = key_fingerprint;
}
+ let options = UploadOptions {
+ compress: true,
+ ..UploadOptions::default()
+ };
+
client
- .upload_blob_from_data(manifest.into_bytes(), MANIFEST_BLOB_NAME, true, false)
+ .upload_blob_from_data(manifest.into_bytes(), MANIFEST_BLOB_NAME, options)
.await?;
client.finish().await?;
--
2.20.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [pbs-devel] [PATCH proxmox-backup-qemu 2/2] use new HttpClientOptions constructors
2021-01-25 13:42 [pbs-devel] [PATCH proxmox-backup(-qemu) 00/17] clippy refactorings Fabian Grünbichler
` (15 preceding siblings ...)
2021-01-25 13:43 ` [pbs-devel] [PATCH proxmox-backup-qemu 1/2] use UploadOptions for uploading Blobs Fabian Grünbichler
@ 2021-01-25 13:43 ` Fabian Grünbichler
2021-01-26 9:44 ` [pbs-devel] applied series: [PATCH proxmox-backup(-qemu) 00/17] clippy refactorings Wolfgang Bumiller
17 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2021-01-25 13:43 UTC (permalink / raw)
To: pbs-devel
and make password non-optional from the start.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
---
Notes:
needs corresponding proxmox-backup commit
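The Option-to-Result conversion used at the FFI boundary, as a standalone
sketch (require_password is a hypothetical helper):

    /// Reject a missing value at the boundary instead of storing an
    /// Option and unwrapping later.
    fn require_password(input: Option<String>) -> Result<String, String> {
        input.ok_or_else(|| "password must not be NULL".to_string())
    }

    fn main() {
        assert!(require_password(None).is_err());
        assert_eq!(require_password(Some("s3cret".into())).unwrap(), "s3cret");
    }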
src/backup.rs | 7 ++++---
src/lib.rs | 8 +++++---
src/restore.rs | 7 ++++---
3 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/src/backup.rs b/src/backup.rs
index b5e1263..e2062e7 100644
--- a/src/backup.rs
+++ b/src/backup.rs
@@ -107,9 +107,10 @@ impl BackupTask {
self.check_aborted()?;
let command_future = async {
- let options = HttpClientOptions::new()
- .fingerprint(self.setup.fingerprint.clone())
- .password(self.setup.password.clone());
+ let options = HttpClientOptions::new_non_interactive(
+ self.setup.password.clone(),
+ self.setup.fingerprint.clone(),
+ );
let http = HttpClient::new(&self.setup.host, self.setup.port, &self.setup.auth_id, options)?;
let writer = BackupWriter::start(
diff --git a/src/lib.rs b/src/lib.rs
index b755014..fd7c064 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -134,7 +134,7 @@ pub(crate) struct BackupSetup {
pub backup_type: String,
pub backup_id: String,
pub backup_time: i64,
- pub password: Option<String>,
+ pub password: String,
pub keyfile: Option<std::path::PathBuf>,
pub key_password: Option<String>,
pub fingerprint: Option<String>,
@@ -222,7 +222,8 @@ pub extern "C" fn proxmox_backup_new(
let backup_id = tools::utf8_c_string(backup_id)?
.ok_or_else(|| format_err!("backup_id must not be NULL"))?;
- let password = tools::utf8_c_string(password)?;
+ let password = tools::utf8_c_string(password)?
+ .ok_or_else(|| format_err!("password must not be NULL"))?;
let keyfile = tools::utf8_c_string(keyfile)?.map(std::path::PathBuf::from);
let key_password = tools::utf8_c_string(key_password)?;
let fingerprint = tools::utf8_c_string(fingerprint)?;
@@ -701,7 +702,8 @@ pub extern "C" fn proxmox_restore_new(
let backup_id = snapshot.group().backup_id().to_owned();
let backup_time = snapshot.backup_time();
- let password = tools::utf8_c_string(password)?;
+ let password = tools::utf8_c_string(password)?
+ .ok_or_else(|| format_err!("password must not be null"))?;
let keyfile = tools::utf8_c_string(keyfile)?.map(std::path::PathBuf::from);
let key_password = tools::utf8_c_string(key_password)?;
let fingerprint = tools::utf8_c_string(fingerprint)?;
diff --git a/src/restore.rs b/src/restore.rs
index 1cc99a6..0790d7f 100644
--- a/src/restore.rs
+++ b/src/restore.rs
@@ -75,9 +75,10 @@ impl RestoreTask {
pub async fn connect(&self) -> Result<libc::c_int, Error> {
- let options = HttpClientOptions::new()
- .fingerprint(self.setup.fingerprint.clone())
- .password(self.setup.password.clone());
+ let options = HttpClientOptions::new_non_interactive(
+ self.setup.password.clone(),
+ self.setup.fingerprint.clone(),
+ );
let http = HttpClient::new(&self.setup.host, self.setup.port, &self.setup.auth_id, options)?;
let client = BackupReader::start(
--
2.20.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [pbs-devel] applied series: [PATCH proxmox-backup(-qemu) 00/17] clippy refactorings
2021-01-25 13:42 [pbs-devel] [PATCH proxmox-backup(-qemu) 00/17] clippy refactorings Fabian Grünbichler
` (16 preceding siblings ...)
2021-01-25 13:43 ` [pbs-devel] [PATCH proxmox-backup-qemu 2/2] use new HttpClientOptions constructors Fabian Grünbichler
@ 2021-01-26 9:44 ` Wolfgang Bumiller
17 siblings, 0 replies; 19+ messages in thread
From: Wolfgang Bumiller @ 2021-01-26 9:44 UTC (permalink / raw)
To: Fabian Grünbichler; +Cc: pbs-devel
applied series
On Mon, Jan 25, 2021 at 02:42:45PM +0100, Fabian Grünbichler wrote:
> various parameter/types refactorings to simplify function
> signatures/return types/...
>
> I am grateful for suggestions on better names ;)
>
> I tried to order/group the patches so that they also apply
> individually/as sub-groups.
>
> with all of these, we are now down to some very recently touched files
> with trivial fixes that I didn't want to change since I suspected them
> to be still under development, and one missing docs lint:
>
> warning: unsafe function's docs miss `# Safety` section
> --> src/tools/fs.rs:78:5
> |
> 78 | / pub unsafe fn file_name_utf8_unchecked(&self) -> &str {
> 79 | | std::str::from_utf8_unchecked(self.file_name().to_bytes())
> 80 | | }
> | |_____^
>
> proxmox-backup:
> report: type-alias function call tuple
> broadcast_future: refactor broadcast/future binding
> client: refactor catalog upload spawning
> allow complex Futures in tower_service impl
> async index reader: typedef ReadFuture
> systemd/time: extract Time/DateSpec structs
> client: factor out UploadOptions
> pxar: typedef on_error as ErrorHandler
> pxar: factor out PxarCreateOptions
> pxar: extract PxarExtractOptions
> authid: make Tokenname(Ref) derive Eq
> derive/impl and use Default for some structs
> verify: factor out common parameters
> clippy: allow api functions with many arguments
> clippy: more misc fixes
>
> examples/download-speed.rs | 2 +-
> examples/upload-speed.rs | 2 +-
> src/api2/access/acl.rs | 1 +
> src/api2/access/tfa.rs | 1 +
> src/api2/access/user.rs | 1 +
> src/api2/admin/datastore.rs | 19 +--
> src/api2/backup/environment.rs | 10 +-
> src/api2/config/datastore.rs | 1 +
> src/api2/config/remote.rs | 5 +-
> src/api2/config/sync.rs | 1 +
> src/api2/config/verify.rs | 1 +
> src/api2/node/network.rs | 2 +
> src/api2/node/tasks.rs | 1 +
> src/api2/types/userid.rs | 52 +-------
> src/backup/async_index_reader.rs | 4 +-
> src/backup/prune.rs | 1 +
> src/backup/verify.rs | 174 +++++++++++--------------
> src/bin/proxmox-backup-client.rs | 162 +++++++++++++----------
> src/bin/proxmox-daily-update.rs | 9 +-
> src/bin/proxmox_backup_client/key.rs | 24 +---
> src/bin/proxmox_backup_client/mount.rs | 2 +-
> src/bin/proxmox_backup_manager/user.rs | 4 +-
> src/bin/pxar.rs | 54 ++++----
> src/client.rs | 11 +-
> src/client/backup_writer.rs | 46 ++++---
> src/client/http_client.rs | 38 ++++--
> src/client/pull.rs | 4 +-
> src/client/pxar_backup_stream.rs | 28 +---
> src/config/acl.rs | 8 +-
> src/config/network.rs | 2 +-
> src/pxar/create.rs | 28 +++-
> src/pxar/extract.rs | 24 ++--
> src/pxar/mod.rs | 4 +-
> src/rrd/mod.rs | 1 +
> src/server/h2service.rs | 5 +-
> src/server/report.rs | 6 +-
> src/server/rest.rs | 3 +-
> src/server/verify_job.rs | 3 +-
> src/tools/broadcast_future.rs | 38 ++----
> src/tools/http.rs | 5 +-
> src/tools/systemd/parse_time.rs | 41 ++++--
> tests/catar.rs | 10 +-
> 42 files changed, 414 insertions(+), 424 deletions(-)
>
> proxmox-backup-qemu:
> use UploadOptions for uploading Blobs
> use new HttpClientOptions constructors
>
> src/backup.rs | 7 ++++---
> src/commands.rs | 15 +++++++++++++--
> src/lib.rs | 8 +++++---
> src/restore.rs | 7 ++++---
> 4 files changed, 26 insertions(+), 11 deletions(-)
>
> --
> 2.20.1
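For reference, the one remaining docs lint quoted above is resolved by
documenting the invariant under a `# Safety` heading; a minimal sketch, with
the signature simplified to a free function:

    /// Interpret raw bytes as UTF-8 without validation.
    ///
    /// # Safety
    ///
    /// The caller must guarantee that `bytes` is valid UTF-8; violating
    /// this is undefined behavior.
    pub unsafe fn utf8_unchecked(bytes: &[u8]) -> &str {
        // SAFETY: upheld by the caller per the contract documented above.
        unsafe { std::str::from_utf8_unchecked(bytes) }
    }

    fn main() {
        // safe here: the literal is known-good UTF-8
        let name = unsafe { utf8_unchecked(b"backup.pxar") };
        assert_eq!(name, "backup.pxar");
    }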
^ permalink raw reply [flat|nested] 19+ messages in thread