Date: Fri, 18 Oct 2024 11:08:10 +0200
From: Gabriel Goller
To: Thomas Lamprecht
Message-ID: <20241018090810.tusiekryqwlr2u76@luna.proxmox.com>
References: <20241015131823.338766-1-g.goller@proxmox.com>
 <20241015131823.338766-2-g.goller@proxmox.com>
 <6376441e-f2ac-4cb6-ae63-39f21bb2c4c3@proxmox.com>
 <20241017132010.5q7i7llfkn6jcuqn@luna.proxmox.com>
 <3fc374d0-6f94-4e4e-a4aa-2ee2f5744e3d@proxmox.com>
In-Reply-To: <3fc374d0-6f94-4e4e-a4aa-2ee2f5744e3d@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup 1/3] fix #3786: api: add resync-corrupt option to sync jobs

On 17.10.2024 16:09, Thomas Lamprecht wrote:
>On 17/10/2024 at 15:20, Gabriel Goller wrote:
>> On 17.10.2024 08:15, Thomas Lamprecht wrote:
>>> In general, it would be _really_ nice to get some regression testing for
>>> sync jobs covering the selection with different matching and options.
>>> The lack of that doesn't have to block your/Shannon's patch, but IMO it
>>> should be added sooner rather than later. Sync is becoming more and more
>>> complex, and the selection of snapshots is a very important and fundamental
>>> part of it.
>>
>> Do you think we could mock ourselves out of this and unit-test it
>> somehow? For full regression tests we will need Lukas's CI work.
>
>I hope so, as that is mostly pure application logic, although I currently
>can't give you tips on what the best/simplest way to mock this would be.
>Obvious candidates would probably be to pull out the parts for getting local and
>remote backup snapshot lists and info into a trait, or alternatively to
>refactor more logic out so that it gets the info passed as a parameter, and call
>that in the tests with the test data.
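[Editor's note: to make the trait suggestion above concrete, here is a rough sketch. All names (`SnapshotLister`, `SnapshotInfo`, `select_for_resync`, `MockLister`) are hypothetical stand-ins, not the actual proxmox-backup API.]

```rust
// Sketch of the trait-based mocking approach: the datastore access is
// hidden behind a trait so tests can substitute a canned snapshot list.
// All names here are hypothetical, not the real proxmox-backup code.

#[derive(Clone, Copy, PartialEq, Debug)]
enum VerifyState {
    Ok,
    Failed,
}

#[derive(Clone)]
struct SnapshotInfo {
    name: String,
    verify_state: Option<VerifyState>, // None: never verified
}

// Only this trait touches the (real or fake) datastore.
trait SnapshotLister {
    fn list_snapshots(&self) -> Vec<SnapshotInfo>;
}

// Pure selection logic: with resync_corrupt set, snapshots whose local
// verification failed are selected for re-syncing.
fn select_for_resync(lister: &dyn SnapshotLister, resync_corrupt: bool) -> Vec<String> {
    lister
        .list_snapshots()
        .into_iter()
        .filter(|s| resync_corrupt && s.verify_state == Some(VerifyState::Failed))
        .map(|s| s.name)
        .collect()
}

// Test double returning a fixed list instead of reading a datastore.
struct MockLister(Vec<SnapshotInfo>);

impl SnapshotLister for MockLister {
    fn list_snapshots(&self) -> Vec<SnapshotInfo> {
        self.0.clone()
    }
}

fn main() {
    let mock = MockLister(vec![
        SnapshotInfo { name: "vm/100/2024-10-01".into(), verify_state: Some(VerifyState::Ok) },
        SnapshotInfo { name: "vm/100/2024-10-02".into(), verify_state: Some(VerifyState::Failed) },
        SnapshotInfo { name: "vm/100/2024-10-03".into(), verify_state: None },
    ]);
    // Only the locally corrupt snapshot is picked up for re-sync.
    assert_eq!(select_for_resync(&mock, true), vec!["vm/100/2024-10-02"]);
    // With the option off, nothing extra is selected.
    assert!(select_for_resync(&mock, false).is_empty());
}
```

The alternative Thomas mentions (passing the snapshot info in as a plain parameter) would drop the trait entirely and keep `select_for_resync` as a free function over `Vec<SnapshotInfo>`.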
>
>I have no strong preference here, and there might be even better options for
>this specific case, but it'd be great if the existing "real" code does not
>need to bend over backwards to support the mocking, and if defining or expanding
>the tests isn't too tedious.
>FWIW, one option might even be to have the list of snapshots defined and to
>generate two directory trees during build that mimic the actual datastores
>more closely; that might require less mocking and thus give bigger code coverage.

Interesting, I'll have a look at it when I'm done with my other stuff :)

>>>> +    /// Load the verify state from the manifest.
>>>> +    pub fn verify_state(&self) -> Option<VerifyState> {
>>>> +        self.load_manifest().ok().and_then(|(m, _)| {
>>>
>>> hmm, just ignoring errors here seems a bit odd; that might be a case where one
>>> might want to re-sync. So I'd prefer wrapping this in a Result, even if it's a bit
>>> tedious.
>>
>> Converted to (and returned) a `Result` for now,
>> but I'd be wary about treating a failed `load_manifest` the same as a
>> `VerifyState::Failed`. Especially because currently an unverified
>> snapshot simply returns `Err(null...)`. So we would first need to extend
>> `VerifyState` to have an `Unknown` variant.
>
>Yeah, I agree with you on not treating a failure to load/parse the manifest
>the same as a failed verification state, but the caller can log the error
>and thus at least make the user aware of something unexpected happening.

Will do!

>>> And would it make sense to check the downloaded manifest's verify state? No point
>>> in re-pulling if the other side is also corrupt, or is that possibility already
>>> excluded somewhere else in this code path?
>>
>> This should be covered, because:
>> - if the index is broken, we can't even open it
>> - if an unencrypted chunk is corrupted, the crc is checked when reading
>>   it (pbs-client/src/remote_chunk_reader.rs:55)
>> - if an encrypted chunk is corrupted, the crc is checked as well
>>   (pbs-datastore/src/data_blob.rs:81)
>
>ok, thanks for confirming this.

Thanks for the reply! Posted a v2.

_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel
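[Editor's note: for reference, the `Result`-returning `verify_state` shape discussed up-thread could look roughly like the following. This is a simplified sketch with stand-in types (`Manifest`, `SnapshotDir`, `String` errors), not the actual pbs-datastore code, which loads the manifest from disk.]

```rust
// Sketch only: distinguishes "manifest unreadable" (Err) from
// "snapshot never verified" (Ok(None)), per the discussion above.

#[derive(Clone, Copy, PartialEq, Debug)]
enum VerifyState {
    Ok,
    Failed,
}

struct Manifest {
    verify_state: Option<VerifyState>, // None: never verified
}

struct SnapshotDir {
    // In the real code this would be produced by load_manifest();
    // here a stored Result stands in for the file I/O and parsing.
    manifest: Result<Manifest, String>,
}

impl SnapshotDir {
    /// Load the verify state from the manifest. Errors are propagated
    /// instead of swallowed, so the caller can log them and decide
    /// whether an unreadable manifest should also trigger a re-sync.
    fn verify_state(&self) -> Result<Option<VerifyState>, String> {
        let m = self.manifest.as_ref().map_err(|e| e.clone())?;
        Ok(m.verify_state)
    }
}

fn main() {
    let failed = SnapshotDir {
        manifest: Ok(Manifest { verify_state: Some(VerifyState::Failed) }),
    };
    assert_eq!(failed.verify_state(), Ok(Some(VerifyState::Failed)));

    let broken = SnapshotDir { manifest: Err("unable to parse manifest".into()) };
    // A broken manifest surfaces as an error, not as a fake "failed" state.
    assert!(broken.verify_state().is_err());
}
```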