From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <pbs-devel-bounces@lists.proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [IPv6:2a01:7e0:0:424::9])
	by lore.proxmox.com (Postfix) with ESMTPS id B0B191FF15E
	for <inbox@lore.proxmox.com>; Tue, 25 Mar 2025 14:05:43 +0100 (CET)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
	by firstgate.proxmox.com (Proxmox) with ESMTP id 98BDB96F8;
	Tue, 25 Mar 2025 14:05:39 +0100 (CET)
Message-ID: <324cf64c-29b5-4fdc-b882-2258624c895d@proxmox.com>
Date: Tue, 25 Mar 2025 14:05:05 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
To: Thomas Lamprecht <t.lamprecht@proxmox.com>,
 Proxmox Backup Server development discussion <pbs-devel@lists.proxmox.com>
References: <20250321093202.155899-1-c.ebner@proxmox.com>
 <20250321093202.155899-6-c.ebner@proxmox.com>
 <093f7fcc-8c0d-46d0-b8bf-e09c0b0688a2@proxmox.com>
Content-Language: en-US, de-DE
From: Christian Ebner <c.ebner@proxmox.com>
In-Reply-To: <093f7fcc-8c0d-46d0-b8bf-e09c0b0688a2@proxmox.com>
X-SPAM-LEVEL: Spam detection results:  0
 AWL 0.031 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 DMARC_MISSING             0.1 Missing DMARC policy
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pbs-devel] [PATCH v4 proxmox-backup 5/5] fix #5331: garbage
 collection: avoid multiple chunk atime updates
X-BeenThere: pbs-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox Backup Server development discussion
 <pbs-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pbs-devel>, 
 <mailto:pbs-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pbs-devel/>
List-Post: <mailto:pbs-devel@lists.proxmox.com>
List-Help: <mailto:pbs-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel>, 
 <mailto:pbs-devel-request@lists.proxmox.com?subject=subscribe>
Reply-To: Proxmox Backup Server development discussion
 <pbs-devel@lists.proxmox.com>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Errors-To: pbs-devel-bounces@lists.proxmox.com
Sender: "pbs-devel" <pbs-devel-bounces@lists.proxmox.com>

On 3/25/25 12:56, Thomas Lamprecht wrote:
> On 21.03.25 at 10:32, Christian Ebner wrote:
>> To reduce the number of atime updates, keep track of the recently
>> marked chunks in phase 1 of garbage collection to avoid multiple
>> atime updates via expensive utimensat() calls.
>>
>> Recently touched chunks are tracked by storing the chunk digests in
>> an LRU cache of fixed capacity. Inserting a digest makes that chunk
>> the most recently touched one; if the digest was already present in
>> the cache before the insert, the atime update can be skipped.
> 
> Code-wise this looks alright to me, albeit I did not look at it in-depth.
> What I'd be interested in is documenting some more thoughts about how the
> cache size was chosen; even if it was mostly arbitrary, stating so helps
> a lot when rethinking this in the future, as one then doesn't have to
> guess whether there was more reasoning behind it.
> 
> Also, some basic benchmarks would be great, even if from some randomly
> grown setup, as long as it is described: the overall pool data usage,
> deduplication factor, number of backup groups, number of snapshots and
> their rough age (distribution), and basic system characteristics such as
> the CPU and the underlying storage, like the filesystem type and the
> (block) device type backing it. With that one can classify the change
> well enough.
> 
> 
>> Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=5331
>> Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
>> ---
>> changes since version 3:
>> - no changes
>>
>>   pbs-datastore/src/datastore.rs | 26 ++++++++++++++++++++++++--
>>   1 file changed, 24 insertions(+), 2 deletions(-)
>>
>> diff --git a/pbs-datastore/src/datastore.rs b/pbs-datastore/src/datastore.rs
>> index ea7e5e9f3..4445944c0 100644
>> --- a/pbs-datastore/src/datastore.rs
>> +++ b/pbs-datastore/src/datastore.rs
> 
> ...
> 
>> @@ -1128,6 +1136,8 @@ impl DataStore {
>>           let mut unprocessed_index_list = self.list_index_files()?;
>>           let index_count = unprocessed_index_list.len();
>>   
>> +        // Allow up to 32 MiB, as only storing the 32 digest as key
> 
> The above comment is IMO a bit hard to parse and does not really provide
> any reasoning for the chosen size, from what I can tell.
> 
>> +        let mut recently_touched_chunks = LruCache::new(1024 * 1024);
> 
> It's quite a descriptive and good name, but something slightly shorter
> like `chunk_lru_cache` would IMO be fine here too; no hard feelings
> though.

Okay, I will adapt this, address the other suggestions, and use the testlab 
datastore to generate the benchmarks as requested. Thanks!
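
For the record, with 1024 * 1024 entries and only the 32 byte digest 
stored as key, the cached keys are bounded at roughly 32 MiB (2^20 * 32 
bytes), which is what the comment was meant to convey; I will reword it 
accordingly.

To illustrate what the tracking boils down to, here is a rough sketch 
only: it uses a simplified FIFO eviction instead of a real LRU, and the 
names (RecentlyTouched, update_chunk_atime, mark_chunk) are hypothetical 
stand-ins, not the actual pbs-datastore code or the proxmox LruCache API:

    use std::collections::{HashSet, VecDeque};

    // Bounded "recently touched" set with FIFO eviction (sketch only;
    // the actual patch uses an LRU cache from the proxmox workspace).
    struct RecentlyTouched {
        capacity: usize,
        set: HashSet<[u8; 32]>,
        order: VecDeque<[u8; 32]>,
    }

    impl RecentlyTouched {
        fn new(capacity: usize) -> Self {
            Self { capacity, set: HashSet::new(), order: VecDeque::new() }
        }

        // Returns true if the digest was already tracked, i.e. the atime
        // update for that chunk can be skipped.
        fn insert(&mut self, digest: [u8; 32]) -> bool {
            if self.set.contains(&digest) {
                return true;
            }
            if self.set.len() >= self.capacity {
                if let Some(oldest) = self.order.pop_front() {
                    self.set.remove(&oldest);
                }
            }
            self.order.push_back(digest);
            self.set.insert(digest);
            false
        }
    }

    // Phase 1 marking: only touch the chunk file if it was not seen
    // recently.
    fn mark_chunk(
        cache: &mut RecentlyTouched,
        digest: [u8; 32],
    ) -> std::io::Result<()> {
        if !cache.insert(digest) {
            // stands in for the utimensat() based touch of the chunk file
            update_chunk_atime(&digest)?;
        }
        Ok(())
    }

    fn update_chunk_atime(_digest: &[u8; 32]) -> std::io::Result<()> {
        // placeholder for the expensive per-chunk utimensat() call
        Ok(())
    }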


_______________________________________________
pbs-devel mailing list
pbs-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel