Message-ID: <52bb61ea-ad18-65d6-5e76-71a8a3a5047e@proxmox.com>
Date: Wed, 14 Apr 2021 17:42:08 +0200
To: Proxmox Backup Server development discussion
 <pbs-devel@lists.proxmox.com>, Dominik Csapak <d.csapak@proxmox.com>
References: <20210413143536.19004-1-d.csapak@proxmox.com>
From: Thomas Lamprecht <t.lamprecht@proxmox.com>
In-Reply-To: <20210413143536.19004-1-d.csapak@proxmox.com>
Subject: [pbs-devel] applied: [PATCH proxmox-backup] backup/verify: improve
 speed by sorting chunks by inode

On 13.04.21 16:35, Dominik Csapak wrote:
> before reading the chunks from disk in the order of the index file,
> stat them first and sort them by inode number.
> 
> this can have a very positive impact on read speed on spinning disks,
> even with the additional stat'ing of the chunks.
> 
> memory footprint should be tolerable, for 1_000_000 chunks
> we need about ~16MiB of memory (Vec of 64bit position + 64bit inode)
> (assuming 4MiB Chunks, such an index would reference 4TiB of data)
> 
> two small benchmarks (single spinner, ext4) here showed an improvement from
> ~430 seconds to ~330 seconds for a 32GiB fixed index
> and from
> ~160 seconds to ~120 seconds for a 10GiB dynamic index
> 
> Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
> ---
> it would be great if other people could also benchmark this patch on
> different setups a little (in addition to me), to verify or disprove my results
> 
>  src/backup/datastore.rs |  5 +++++
>  src/backup/verify.rs    | 32 +++++++++++++++++++++++++++++---
>  2 files changed, 34 insertions(+), 3 deletions(-)
> 
>
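
For anyone skimming the archive, the idea boils down to something like the
following. This is only a minimal sketch with a made-up helper name, not the
code from the patch; the actual changes live in src/backup/datastore.rs and
src/backup/verify.rs:

    use std::os::unix::fs::MetadataExt; // for ino()
    use std::path::PathBuf;

    // Stat every chunk referenced by an index up front and return
    // (inode, path) pairs sorted by inode number, so a spinning disk can
    // read them in (roughly) on-disk order instead of index order.
    fn sort_chunks_by_inode(chunk_paths: Vec<PathBuf>) -> Vec<(u64, PathBuf)> {
        let mut chunks: Vec<(u64, PathBuf)> = chunk_paths
            .into_iter()
            .map(|path| {
                // if stat fails, sort that chunk first; the actual read
                // will surface the real error later
                let ino = std::fs::metadata(&path).map(|m| m.ino()).unwrap_or(0);
                (ino, path)
            })
            .collect();
        chunks.sort_unstable_by_key(|chunk| chunk.0);
        chunks
    }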

Applied this for now; I had already done so before Fabian's feedback.

Actually, I saw a slight regression here too, though not as bad as the one Fabian
reported, and also on a plain SSD-backed ext4, where I expected that the overhead
of gathering the inodes outweighs the advantages on storage that is already good
at random IO.

I booted an older test server with lots of data and a more complex spinner
setup; let's see what that one reports.

Anyhow, we could, and probably should, make this a switch, either as a
datastore option or by checking the underlying storage - the latter is easy for
single-disk storage (just check the rotational flag in /sys/block/...) but quickly
gets ugly with ZFS/btrfs/... and the special devices they support.
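
For the single-disk case that check could be as small as the sketch below; the
device name is a made-up parameter here, and mapping a datastore path to its
block device is exactly the part that gets ugly:

    // Read the kernel's rotational flag for a block device, e.g.
    // /sys/block/sda/queue/rotational ("1" = spinner, "0" = not).
    fn is_rotational(block_dev: &str) -> Option<bool> {
        let path = format!("/sys/block/{}/queue/rotational", block_dev);
        let contents = std::fs::read_to_string(&path).ok()?;
        Some(contents.trim() == "1")
    }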

If we further add such optimizations for sync (to remote and tape), those would
also fall under that option switch. Admins like to tune, and this would give them
a knob to find out what works best for their setup.
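
To make the shape of that knob concrete, a hypothetical option could look like
this (names and variants are illustrative only, nothing of this exists yet):

    // Hypothetical datastore tuning option; name and variants are made up.
    enum ChunkReadOrder {
        Index, // read in index order (fine for random-IO-friendly storage)
        Inode, // stat first, read sorted by inode (helps spinners)
        Auto,  // decide from the storage, e.g. via the rotational flag
    }

    fn want_inode_sort(order: ChunkReadOrder, rotational: bool) -> bool {
        match order {
            ChunkReadOrder::Index => false,
            ChunkReadOrder::Inode => true,
            ChunkReadOrder::Auto => rotational,
        }
    }

That way the default could stay automatic, while admins who know their storage
better can pin the behavior either way.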