From: Dominik Csapak
To: pbs-devel@lists.proxmox.com
Date: Tue, 30 Apr 2024 13:00:41 +0200
Message-ID: <14667c59-bea8-43a3-998c-a13a7ae31927@proxmox.com>
In-Reply-To: <20240430093939.1318786-2-d.csapak@proxmox.com>
Subject: Re: [pbs-devel] [PATCH proxmox-backup 2/2] tape: use datastores 'read-thread' for tape backup

On 4/30/24 11:39, Dominik Csapak wrote:
> using a single thread for reading is not optimal in some cases, e.g.
> when the underlying storage can handle more reads in parallel than
> with a single thread.
>
> This depends largely on the storage and CPU.
>
> We use the ParallelHandler to handle the actual reads. Make the
> sync_channel buffer size depend on the number of threads, so we have
> space for two chunks per thread.
>
> Did some benchmarks on my (virtual) PBS with a real tape drive (LTO-8
> tape in an LTO-9 drive):
>
> For my NVMe datastore it did not matter much how many threads were
> used, so I guess the bottleneck was either in the HBA/drive or the
> cable rather than the disks/CPU (always got around ~300 MB/s from the
> task log).
>
> For a datastore on a single HDD, the results are much more
> interesting:
>
> 1 thread:  ~55 MB/s
> 2 threads: ~70 MB/s
> 4 threads: ~80 MB/s
> 8 threads: ~95 MB/s
>
> So the fact that multiple IO requests are done in parallel does speed
> up the tape backup in general.

eh, that sentence might be misleading: what I meant was not 'in
general', but rather for the case of spinning disks. Could be amended
before applying, or in a v2.
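For illustration, here is a minimal sketch in plain std Rust of the scheme the quoted commit message describes: several reader threads feed a bounded sync_channel whose capacity is two chunks per thread, and a single consumer (standing in for the tape writer) drains it. The names (`parallel_read`, the doubling as a stand-in for loading a chunk) are invented for this sketch and are not the actual ParallelHandler API from the proxmox codebase.

```rust
use std::sync::{mpsc::sync_channel, Arc, Mutex};
use std::thread;

// Sketch only: "digests" stand in for chunk digests, and doubling a
// value stands in for the actual chunk read from the datastore.
fn parallel_read(digests: Vec<u32>, nthreads: usize) -> Vec<u32> {
    // Bounded channel: room for two in-flight chunks per reader thread,
    // mirroring the buffer sizing described in the commit message.
    let (tx, rx) = sync_channel(nthreads * 2);
    let work = Arc::new(Mutex::new(digests));

    let mut handles = Vec::new();
    for _ in 0..nthreads {
        let tx = tx.clone();
        let work = Arc::clone(&work);
        handles.push(thread::spawn(move || loop {
            // Pull the next digest from the shared work list.
            let d = match work.lock().unwrap().pop() {
                Some(d) => d,
                None => break, // no work left, reader exits
            };
            // Stand-in for the real chunk load; blocks if the buffer
            // is full, which throttles the readers to the consumer.
            tx.send(d * 2).unwrap();
        }));
    }
    // Drop the original sender so the channel closes once all reader
    // threads have finished and dropped their clones.
    drop(tx);

    // The single consumer drains chunks as they arrive (order is
    // nondeterministic, so sort for a stable result here).
    let mut out: Vec<u32> = rx.into_iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    out.sort();
    out
}
```

With a bounded channel like this, the readers can run ahead of the tape writer by at most `nthreads * 2` chunks, which keeps memory use predictable while still letting slow spinning-disk reads overlap.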