From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <d.csapak@proxmox.com>
Received: from firstgate.proxmox.com (firstgate.proxmox.com [212.224.123.68])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by lists.proxmox.com (Postfix) with ESMTPS id 05CC2693DB
 for <pbs-devel@lists.proxmox.com>; Mon, 26 Jul 2021 10:38:02 +0200 (CEST)
Received: from firstgate.proxmox.com (localhost [127.0.0.1])
 by firstgate.proxmox.com (Proxmox) with ESMTP id E964D9EEB
 for <pbs-devel@lists.proxmox.com>; Mon, 26 Jul 2021 10:37:31 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (proxmox-new.maurer-it.com
 [94.136.29.106])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by firstgate.proxmox.com (Proxmox) with ESMTPS id 75EAF9EDD
 for <pbs-devel@lists.proxmox.com>; Mon, 26 Jul 2021 10:37:31 +0200 (CEST)
Received: from proxmox-new.maurer-it.com (localhost.localdomain [127.0.0.1])
 by proxmox-new.maurer-it.com (Proxmox) with ESMTP id 465FB4092C
 for <pbs-devel@lists.proxmox.com>; Mon, 26 Jul 2021 10:37:31 +0200 (CEST)
To: Dietmar Maurer <dietmar@proxmox.com>,
 Proxmox Backup Server development discussion <pbs-devel@lists.proxmox.com>
References: <704281229.1167.1627287980047@webmail.proxmox.com>
From: Dominik Csapak <d.csapak@proxmox.com>
Message-ID: <3f349341-d27f-bcd8-b578-fd2631b8b88a@proxmox.com>
Date: Mon, 26 Jul 2021 10:37:30 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.12.0
MIME-Version: 1.0
In-Reply-To: <704281229.1167.1627287980047@webmail.proxmox.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-SPAM-LEVEL: Spam detection results:  0
 AWL 1.056 Adjusted score from AWL reputation of From: address
 BAYES_00                 -1.9 Bayes spam probability is 0 to 1%
 KAM_DMARC_STATUS 0.01 Test Rule for DKIM or SPF Failure with Strict Alignment
 NICE_REPLY_A           -1.091 Looks like a legit reply (A)
 SPF_HELO_NONE           0.001 SPF: HELO does not publish an SPF Record
 SPF_PASS               -0.001 SPF: sender matches SPF record
Subject: Re: [pbs-devel] [PATCH proxmox-backup v2 0/7] improve catalog
 handling
X-BeenThere: pbs-devel@lists.proxmox.com
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Proxmox Backup Server development discussion
 <pbs-devel.lists.proxmox.com>
List-Unsubscribe: <https://lists.proxmox.com/cgi-bin/mailman/options/pbs-devel>, 
 <mailto:pbs-devel-request@lists.proxmox.com?subject=unsubscribe>
List-Archive: <http://lists.proxmox.com/pipermail/pbs-devel/>
List-Post: <mailto:pbs-devel@lists.proxmox.com>
List-Help: <mailto:pbs-devel-request@lists.proxmox.com?subject=help>
List-Subscribe: <https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel>, 
 <mailto:pbs-devel-request@lists.proxmox.com?subject=subscribe>
X-List-Received-Date: Mon, 26 Jul 2021 08:38:02 -0000



On 7/26/21 10:26 AM, Dietmar Maurer wrote:
> 
>> On 07/22/2021 3:40 PM Dominik Csapak <d.csapak@proxmox.com> wrote:
>>   
>> this series combines my previous catalog related patch-series[0][1][2]
>>
>> changes the catalog interface to be more concise, optimizes catalog
>> commit calls during restore, and implements a fast catalog for the
>> gui which only contains the snapshot lists
>>
>> changes from v1:
>> * only write snapshot list in new 'finish' method of the catalog
>> * add 'finish' also to pool writer
>> * replace pending offset counter with reducing the chunk_archive
>>    interface of the catalog
> 
> Now, during tape backup, users do not see any progress on the GUI. This
> can be particularly confusing on long running tape backups.
> 
> A simpler approach would be to only generate cache files for "finished" tapes (content
> will never change), while using the original catalog for tapes still writable. This should
> be much easier to implement?
> 

yes, it would be simpler, but it does not completely solve the issue of
slow reads on large catalogs: the last tape of the media-set can still
be big enough that the reads take too long.

also, the 'progress' users do not see is only in the 'content' view;
the task log of the running tape backup still shows the normal progress.

what about my suggestion to indicate a running backup in the content
view instead, so the user knows it is still in progress?

also, what if the tape gets damaged later during the backup? the user
would have seen some snapshots as backed up, while in reality the tape
is broken and nothing is properly backed up.

also, the progress in the content view was incomplete anyway, since we
only updated it once every 128 GiB (which can be many snapshots) or at
the end of the tape/backup.

so if we do not want to update the cache only at the end of the
backup/tape, i'd rather suggest regenerating the cache on each
pool_writer commit, so we can profit from it even on
non-finished tapes.
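
to illustrate the commit-time idea, roughly (purely a sketch; the types
and names below are hypothetical stand-ins, not the actual
proxmox-backup API):

```rust
// Hypothetical sketch of regenerating the content-view cache on every
// pool-writer commit instead of only at the end of the tape/backup.
// All names here (SnapshotCache, PoolWriter, ...) are illustrative.

use std::collections::BTreeMap;

/// Illustrative in-memory stand-in for the per-tape snapshot-list cache
/// that backs the GUI content view.
#[derive(Default)]
struct SnapshotCache {
    /// tape label -> list of snapshots known to be on that tape
    snapshots: BTreeMap<String, Vec<String>>,
}

struct PoolWriter {
    current_tape: String,
    /// snapshots written since the last commit
    pending: Vec<String>,
    cache: SnapshotCache,
}

impl PoolWriter {
    fn new(tape: &str) -> Self {
        Self {
            current_tape: tape.to_string(),
            pending: Vec::new(),
            cache: SnapshotCache::default(),
        }
    }

    fn backup_snapshot(&mut self, snapshot: &str) {
        self.pending.push(snapshot.to_string());
    }

    /// On each commit, fold the pending snapshots into the cache, so the
    /// content view reflects progress even on non-finished tapes.
    fn commit(&mut self) {
        self.cache
            .snapshots
            .entry(self.current_tape.clone())
            .or_default()
            .append(&mut self.pending); // drains `pending` into the cache
    }
}
```

this way the cache is at most one commit interval behind, rather than
one whole tape behind.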