Subject: Re: [PATCH v5 proxmox-backup 07/16] api: backup: make fixed index file size optional
From: Christian Ebner <c.ebner@proxmox.com>
To: Robert Obkircher, pbs-devel@lists.proxmox.com
Date: Mon, 2 Feb 2026 14:57:53 +0100
Message-ID: <860132a0-dcdf-4e6f-8673-8f8150cf739e@proxmox.com>
In-Reply-To: <37527a80-51b7-4b16-b2a8-a1b1c0a66664@proxmox.com>
References: <20260130164552.281581-1-r.obkircher@proxmox.com>
 <20260130164552.281581-8-r.obkircher@proxmox.com>
 <37527a80-51b7-4b16-b2a8-a1b1c0a66664@proxmox.com>
List-Id: Proxmox Backup Server development discussion

On 2/2/26 2:19 PM, Robert Obkircher wrote:
>
> On 2/2/26 12:38, Christian Ebner wrote:
>> On 1/30/26 5:45 PM, Robert Obkircher wrote:
>>> Pass the optional size to the FixedIndexWriter to make it grow as
>>> necessary.
>>>
>>> Signed-off-by: Robert Obkircher
>>> ---
>>>  src/api2/backup/environment.rs | 21 ++++++++++++---------
>>>  src/api2/backup/mod.rs         |  8 +++-----
>>>  2 files changed, 15 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/src/api2/backup/environment.rs b/src/api2/backup/environment.rs
>>> index bd9c5211..90a84ed2 100644
>>> --- a/src/api2/backup/environment.rs
>>> +++ b/src/api2/backup/environment.rs
>>> @@ -67,7 +67,7 @@ struct DynamicWriterState {
>>>  struct FixedWriterState {
>>>      name: String,
>>>      index: FixedIndexWriter,
>>> -    size: usize,
>>> +    size: Option<usize>,
>>>      chunk_size: u32,
>>>      chunk_count: u64,
>>>      small_chunk_count: usize, // allow 0..1 small chunks (last chunk may be smaller)
>>> @@ -349,7 +349,7 @@ impl BackupEnvironment {
>>>          &self,
>>>          index: FixedIndexWriter,
>>>          name: String,
>>> -        size: usize,
>>> +        size: Option<usize>,
>>>          chunk_size: u32,
>>>          incremental: bool,
>>>      ) -> Result<usize, Error> {
>>> @@ -443,6 +443,7 @@ impl BackupEnvironment {
>>>          }
>>>
>>>          let end = (offset as usize) + (size as usize);
>>> +        data.index.grow_to_size(end)?;
>>>          let idx = data.index.check_chunk_alignment(end, size as usize)?;
>>>
>>>          data.chunk_count += 1;
>>> @@ -624,13 +625,15 @@ impl BackupEnvironment {
>>>                  );
>>>              }
>>>
>>> -            if size != (data.size as u64) {
>>> -                bail!(
>>> -                    "fixed writer '{}' close failed - unexpected file size ({} != {})",
>>> -                    data.name,
>>> -                    data.size,
>>> -                    size
>>> -                );
>>> +            if let Some(known_size) = data.size {
>>> +                if size != known_size as u64 {
>>> +                    bail!(
>>> +                        "fixed writer '{}' close failed - unexpected file size ({} != {})",
>>> +                        data.name,
>>> +                        known_size,
>>> +                        size,
>>> +                    );
>>> +                }
>>>              }
>>>          }
>>>
>>> diff --git a/src/api2/backup/mod.rs b/src/api2/backup/mod.rs
>>> index c625a2dc..c2822c18 100644
>>> --- a/src/api2/backup/mod.rs
>>> +++ b/src/api2/backup/mod.rs
>>> @@ -456,7 +456,7 @@ pub const API_METHOD_CREATE_FIXED_INDEX: ApiMethod = ApiMethod::new(
>>>              ("archive-name", false, &BACKUP_ARCHIVE_NAME_SCHEMA),
>>>              (
>>>                  "size",
>>> -                false,
>>> +                true,
>>>                  &IntegerSchema::new("File size.").minimum(1).schema()
>>>              ),
>>>              (
>>> @@ -480,7 +480,7 @@ fn create_fixed_index(
>>>      let env: &BackupEnvironment = rpcenv.as_ref();
>>>
>>>      let name = required_string_param(&param, "archive-name")?.to_owned();
>>> -    let size = required_integer_param(&param, "size")? as usize;
>>> +    let size = param["size"].as_u64().map(usize::try_from).transpose()?;
>>>      let reuse_csum = param["reuse-csum"].as_str();
>>
>> The fixed index writer does not allow for incremental backups,
>> failing if one is requested.
>>
>> question: should we allow the backup to proceed without failing by
>> simply switching to a non-incremental backup for those index writers
>> where no size is specified, logging a warning that non-incremental
>> mode is used?

> I think reuse-csum without size could reasonably be supported by
> copying the entire file and taking the size from there.
> Now that add_chunk is a single operation, it would even be possible
> to allow overwriting the partial chunk at the end to continue
> growing.

Yes, it would be great if this could be supported as well. It will,
however, only allow skipping re-uploads; chunking of the stream on the
client side has to happen anyway. And it must be taken into account
that the stream might shrink, not just grow.

>>
>> Should be better than failing the whole backup snapshot in case of
>> multiple archives with mixed inputs.

> As far as I can tell, "reuse-csum" is only set in one specific place
> in proxmox-backup-qemu and only if the checksum matches exactly. Is
> this actually relevant for anything else?

No, AFAIK the checksum ensures that the index cannot be reused for an
incremental backup if e.g. there was a backup to the same backup group
from a different client, which could lead to inconsistencies. So the
checksum stored by the client must match the one found on the server
to proceed safely; otherwise, fall back to non-incremental.

>>
>>>
>>>      let archive_name = name.clone();
>>> @@ -528,9 +528,7 @@ fn create_fixed_index(
>>>          reader = Some(index);
>>>      }
>>>
>>> -    let mut writer = env
>>> -        .datastore
>>> -        .create_fixed_writer(&path, Some(size), chunk_size)?;
>>> +    let mut writer = env.datastore.create_fixed_writer(&path, size, chunk_size)?;
>>>
>>>      if let Some(reader) = reader {
>>>          writer.clone_data_from(&reader)?;
>>
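For readers following the thread, the two `Option`-based pieces of the patch can
be sketched in isolation: turning an optional raw parameter into
`Option<usize>` via `transpose`, and skipping the close-time size check when no
size was declared. The function names below are illustrative stand-ins, not the
actual PBS API; the raw `Option<u64>` stands in for what
`param["size"].as_u64()` would yield.

```rust
use std::num::TryFromIntError;

// Parse an optional size parameter. `.map(usize::try_from)` yields
// Option<Result<usize, _>>; `.transpose()` flips it to
// Result<Option<usize>, _>, so a caller can apply `?` as in the patch.
fn parse_optional_size(raw: Option<u64>) -> Result<Option<usize>, TryFromIntError> {
    raw.map(usize::try_from).transpose()
}

// Close-time check mirroring the patched environment.rs hunk: only
// verify the final file size when a size was declared up front.
fn check_close_size(declared: Option<usize>, actual: u64) -> Result<(), String> {
    if let Some(known_size) = declared {
        if actual != known_size as u64 {
            return Err(format!(
                "fixed writer close failed - unexpected file size ({} != {})",
                known_size, actual
            ));
        }
    }
    Ok(())
}

fn main() {
    // No size given: parsing succeeds with None, and any final size passes.
    assert_eq!(parse_optional_size(None), Ok(None));
    assert!(check_close_size(None, 12345).is_ok());

    // Size given: it must match exactly on close.
    assert_eq!(parse_optional_size(Some(4096)), Ok(Some(4096)));
    assert!(check_close_size(Some(4096), 4096).is_ok());
    assert!(check_close_size(Some(4096), 8192).is_err());
}
```

The `transpose` step is what lets a single `?` cover both the "parameter
absent" and "value out of range" cases without a match on the raw JSON value.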