From: Christian Ebner
To: Lukas Wagner, Proxmox Backup Server development discussion
Date: Mon, 14 Jul 2025 17:40:30 +0200
Subject: Re: [pbs-devel] [PATCH proxmox{, -backup} v7 00/47] fix #2943: S3 storage backend for datastores

On 7/14/25 16:33, Lukas Wagner wrote:
> On 2025-07-10 19:06, Christian Ebner wrote:
>> Disclaimer: These patches are still in an experimental state and not
>> intended for production use.
>>
>> This patch series aims to add S3 compatible object stores as storage
>> backend for PBS datastores. A PBS local cache store using the regular
>> datastore layout is used for faster operation, bypassing requests to
>> the S3 api when possible. Further, the local cache store allows
>> keeping frequently used chunks around and avoids expensive metadata
>> updates on the object store, e.g. by using local marker files during
>> garbage collection.
>>
>> Backups are created by uploading chunks to the corresponding S3
>> bucket, while keeping the index files in the local cache store; on
>> backup finish, the snapshot metadata is persisted to the S3 storage
>> backend.
>>
>> Snapshot restores read chunks preferably from the local cache store,
>> downloading and inserting them from the S3 object store if not
>> present. Listing and snapshot metadata operations currently rely
>> solely on the local cache store.
>>
>> Currently chunks use a 1:1 mapping to S3 objects. An advanced packing
>> mechanism for chunks to significantly reduce the number of api
>> requests and therefore be more cost effective will be implemented in
>> followup patches.
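(For anyone following along just from this thread: the restore read path
described above boils down to roughly the sketch below. The type and function
names and the object key layout are made up for illustration only and do not
match the actual code in the series.)

use anyhow::{bail, Result};

/// Illustrative stand-ins for the local cache store and the S3 client,
/// not the actual types used in the series.
pub trait LocalCache {
    fn load_chunk(&self, digest: &[u8; 32]) -> Result<Option<Vec<u8>>>;
    fn insert_chunk(&self, digest: &[u8; 32], data: &[u8]) -> Result<()>;
}

pub trait S3Backend {
    /// Fetch the object stored for the given key, if it exists.
    fn get_object(&self, key: &str) -> Result<Option<Vec<u8>>>;
}

/// Read a chunk for a restore: prefer the local cache store, fall back to
/// the S3 object store (1:1 chunk-to-object mapping) and insert the
/// downloaded chunk into the cache for subsequent reads.
pub fn read_chunk(
    cache: &dyn LocalCache,
    s3: &dyn S3Backend,
    digest: &[u8; 32],
) -> Result<Vec<u8>> {
    if let Some(data) = cache.load_chunk(digest)? {
        return Ok(data);
    }
    // the object key layout here is illustrative only
    let key = format!(".chunks/{}", hex::encode(digest));
    match s3.get_object(&key)? {
        Some(data) => {
            cache.insert_chunk(digest, &data)?;
            Ok(data)
        }
        None => bail!("chunk {} missing on S3 backend", hex::encode(digest)),
    }
}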
> Applied these patches on top of the latest proxmox and proxmox-backup master
> branches and tried to thoroughly test this new feature.
>
> Here's what I tested:
> - Backup
> - Restore
> - Prune jobs
> - GC
> - Local sync from/to the S3 datastore with some namespace variations
> - Delete datastore
> - Tried to add the same S3 bucket as a new datastore
>
> I ran into an issue when I attempted to run a verify job, which Chris and I
> already debugged off-list:
>
> - An all-zero, 4MB chunk (hash: bb9f...) will not be uploaded to S3 due to
>   its special usage during the atime check during datastore creation.
>   This can be easily triggered by backing up a VM with some amount of unused
>   disk space to an *unencrypted* S3 datastore. The error surfaces once
>   attempting to do a verification job.
>   If the chunk is uploaded manually (e.g. using some kind of S3 client CLI),
>   the verification job goes through without any problems.

Thanks a lot for testing and your debugging efforts, I was able to fix this
for the upcoming version of the patches!

> Some UI/UX observations:
> - Minor: It would be easier to understand to unify "Unique Identifier" in
>   the S3 client view and "S3 Client ID" when adding the datastore (I prefer
>   the latter, it seems clearer to me)

Okay, adapted this as well for the S3 client view and create window. Also
added the still missing CLI commands for S3 client manipulation.

> - Minor: The "Host" column in the "Add Datastore" -> S3 Client ID picker
>   does not show anything for me.

Ah, the field here got renamed from host to endpoint, as this was a better
fit. Fixed this as well, thanks.

> - It might make sense to make it a bit easier to re-add an existing S3
>   bucket that was already used as a datastore before - right now, it is a
>   bit unintuitive.
>   Right now, to "import" an existing bucket, one has to:
>   - Use the same datastore name (since it is used in the object key)
>   - Enter the same bucket name (makes sense)
>   - Make sure that "reuse existing datastore" is *not* ticked (confusing)
>   - Press "S3 sync" after adding the datastore (could be automatic)
>
> I think we might be able to reuse the 'reuse datastore' flag and change its
> behavior for S3 datastores to do the right thing automatically, which would
> be to recreate a local cache and then do the S3 sync to get the list of
> snapshots from the bucket.

Okay, will have a go at this tomorrow and see if I manage to adapt this as
well. I agree that reusing the "reuse existing datastore" flag and an
automatic s3-refresh might be more intuitive here.

> In the long term it could be nice to actually try to list the contents of a
> bucket and use some heuristics to "find" existing datastores in the bucket
> (could be as easy as trying to find some key that contains ".chunks" as the
> second path component, e.g. somestore/.chunks/...) and showing them in some
> drop-down in the dialog.

Keeping this in mind, but this is out of scope for this series; I would
rather focus on consolidating the current patches for now.
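For when we get around to it, the heuristic you describe could be as small as
the sketch below (the function is made up for illustration; a real
implementation would of course have to walk the paginated ListObjectsV2
responses of the S3 client instead of a ready-made key list):

use std::collections::BTreeSet;

/// Guess which datastores exist in a bucket from a flat key listing by
/// looking for ".chunks" as the second path component, e.g.
/// "somestore/.chunks/0000/<digest>". Sketch only.
fn guess_datastores<'a>(keys: impl IntoIterator<Item = &'a str>) -> BTreeSet<String> {
    let mut stores = BTreeSet::new();
    for key in keys {
        let mut components = key.split('/');
        if let (Some(store), Some(".chunks")) = (components.next(), components.next()) {
            if !store.is_empty() {
                stores.insert(store.to_string());
            }
        }
    }
    stores
}

#[test]
fn finds_datastore_names() {
    let keys = [
        "store-a/.chunks/0000/000011aa",
        "store-a/owner",
        "store-b/.chunks/ffff/ffff0001",
        "unrelated-object",
    ];
    let found: Vec<_> = guess_datastores(keys).into_iter().collect();
    assert_eq!(found, vec!["store-a", "store-b"]);
}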
> Keeping the use case of 'reusing' an S3 bucket in mind, maybe it would make
> sense to mark 'ownership' of a datastore in the bucket, e.g. in some special
> marker object (could contain the host name, host key fingerprint, machine-id,
> etc.), so as to make it harder to accidentally use the same datastore from
> multiple PBS servers.
> There could be an "export" mechanism, effectively giving up the ownership by
> clearing the marker, signalling that it is safe to re-add it to another PBS
> server.
> Just capturing some thoughts here. :)

Hmm, will keep this in mind as well, although I do not see the benefit of
storing the ownership per se. Ownership and permissions on the bucket and
sub-objects are best handled by the provider and their ACLs on tokens. But
adding a marker which flags the store as in use seems like a good idea, and I
will see if it makes sense to add this already. If the user wants to reuse a
datastore from a PBS instance which is no longer available or failed, removing
the marker by some other means (e.g. provider tooling) first should be
acceptable as a fail-safe, I think.
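To make the marker idea a bit more concrete, it could be a small JSON object
stored next to the datastore prefix, roughly along these lines (purely
illustrative, nothing of this exists in the current series):

use serde::{Deserialize, Serialize};

/// Illustrative layout for an in-use marker, e.g. uploaded as
/// "<store>/.in-use". Not part of the current patch series.
#[derive(Serialize, Deserialize)]
struct InUseMarker {
    /// Hostname of the PBS instance currently using the datastore.
    hostname: String,
    /// Some fingerprint identifying the instance, e.g. the host key
    /// fingerprint or machine-id.
    fingerprint: String,
    /// Epoch seconds of when the marker was written.
    created: i64,
}

fn main() -> Result<(), serde_json::Error> {
    let marker = InUseMarker {
        hostname: "pbs.example.com".into(),
        fingerprint: "aa:bb:cc:dd".into(),
        created: 1_752_500_000,
    };
    // The serialized JSON would be uploaded as the marker object on
    // datastore creation; deleting the object again would "export" the
    // datastore so another PBS instance may take it over.
    println!("{}", serde_json::to_string_pretty(&marker)?);
    Ok(())
}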