Date: Fri, 26 Mar 2021 17:24:03 +0100 (CET)
From: Fabian Grünbichler
To: Proxmox Backup Server development discussion, Sebastian
Subject: Re: [pbs-devel] Bulk initial sync from remote

> Sebastian wrote on 26.03.2021 16:15:
>
> Good afternoon everyone,
>
> is it possible to do an initial bulk sync from a remote? (using external storage media for example)
> E.g. can all files (chunk directory etc.) be blindly copied from one pbs server to the remote pbs server using an external storage medium?

yes. a "blind copy" does risk a certain amount of inconsistency if there are any concurrent actions on the datastore (e.g., if you first copy all the snapshot metadata, then continue with .chunks, and a prune + GC run happens in between and deletes some chunks that you haven't copied yet). you can avoid that by:

- defining the external medium as a datastore, configuring a 'local' remote pointing to the same node, and using the sync/pull mechanism instead of a blind copy (that will iterate over the snapshots and copy the associated chunks together with the snapshot metadata, so you'll never copy orphaned chunks or snapshot metadata without associated chunks). this will incur network/TLS overhead since it works over the API. see the first sketch below.
- doing a two-phase rsync or similar, and ensuring the datastore is quiet for the final (small) sync. see the second sketch below.
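for the first option, something along these lines should do (datastore and remote names, path, credentials and fingerprint are just placeholders, and the exact parameter names are from memory, so double-check them against the man page):

    # register the external disk as a datastore on the source node
    proxmox-backup-manager datastore create external /mnt/external-disk

    # define the source node itself as a remote; the fingerprint can be
    # obtained with 'proxmox-backup-manager cert info'
    proxmox-backup-manager remote create local \
        --host 127.0.0.1 \
        --auth-id sync@pbs \
        --password 'SECRET' \
        --fingerprint 64:d3:ff:...

    # pull everything from the real datastore into the external one
    proxmox-backup-manager pull local store1 external

for the second option, a rough sketch (paths and hostname are again placeholders):

    # phase 1: bulk copy onto the external disk while the datastore is
    # still in use (this can take a long time, inconsistencies don't
    # matter yet)
    rsync -a /path/to/store1/ /mnt/external-disk/store1/

    # ... move the disk to the target site and copy/import its contents ...

    # phase 2: with the source datastore quiet (no backups, prune or GC
    # running), transfer only the small remaining delta over the network
    rsync -a --delete root@source:/path/to/store1/ /path/to/store1/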
after moving your external disk, you need to manually create the datastore.cfg entry (or create a datastore using the GUI with a different path and then edit it to point to your actual path, or copy the contents from your external media into the created directory). a datastore directory with the .chunks subdir and the backup type directories (by default: vm, ct, host) is self-contained as far as stored backups are concerned. scheduled jobs (prune, verify, GC) are stored outside of it, so those need to be recreated if you just have the "raw" datastore. see the last sketch at the end of this mail for what such an entry could look like.

> Use-case: doing an initial full sync from a remote can cost a lot of bandwidth (or time), while incrementals can be small (when there aren't a lot of changes).

common use case, should work with the caveats noted above :)
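PS: for reference, a manually created entry in /etc/proxmox-backup/datastore.cfg could look like the following (name and path are examples; best verify the exact syntax against an entry created via the GUI):

    datastore: store1
        path /path/to/store1
        comment imported from external disk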