Message-ID: <1526d2b1-d71d-4bb7-b085-a7e3f059b9c3@proxmox.com>
Date: Thu, 1 Feb 2024 14:11:20 +0100
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
 "DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
References: <20240125144149.216064-1-f.ebner@proxmox.com>
 <20240125144149.216064-10-f.ebner@proxmox.com>
 <e541f8a0-4c11-4904-95cd-1e8d1c73bdc2@proxmox.com>
 <ee733a4ffeddddc5e2071c003bd5b58ea080c682.camel@groupe-cyllene.com>
From: Fiona Ebner <f.ebner@proxmox.com>
In-Reply-To: <ee733a4ffeddddc5e2071c003bd5b58ea080c682.camel@groupe-cyllene.com>
Subject: Re: [pve-devel] [RFC guest-common 09/13] vzdump: schema: add
 fleecing property string

On 01.02.24 at 13:39, DERUMIER, Alexandre wrote:
>>> LVM and non-sparse ZFS need enough space for a copy of the full disk
>>> up-front, so they are not suitable as fleecing storages in many cases.
> 
> can't we force sparse for these fleecing volumes, even if the storage
> doesn't have sparse enabled? (I can understand that it could make sense
> for a user to have non-sparse volumes in production for performance or
> allocation reservation, but for a fleecing image, it should be
> exceptional to rewrite a full image)
> 

For ZFS, we could always allocate fleecing images sparsely, but that
would require a change to the storage API, as you can't tell
vdisk_alloc() to do that right now. There could also be a new helper
altogether, allocate_fleecing_image(), so that the storage plugin itself
could decide what the best settings are.
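
Roughly something like the following - just a sketch, the helper name,
its signature and the special-casing are all made up:

    # hypothetical helper in PVE/Storage.pm
    sub allocate_fleecing_image {
        my ($cfg, $storeid, $vmid, $fmt, $size_kb) = @_;

        my $scfg = storage_config($cfg, $storeid);
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});

        # force sparse allocation for fleecing, regardless of the storage
        # configuration (a real implementation would rather dispatch to a
        # plugin method than special-case storage types here)
        $scfg = { %$scfg, sparse => 1 } if $scfg->{type} eq 'zfspool';

        return $plugin->alloc_image($storeid, $scfg, $vmid, $fmt, undef, $size_kb);
    }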

>>> Should the setting be VM-specific rather than backup-job-specific?
>>> These issues mostly defeat the purpose of the default here.
> 
> can't we forbid it via storage plugin features? { fleecing => 1 }?
> 

There is no feature list for storage plugins right now, just
volume_has_feature(), and that doesn't help if you don't already have a
volume.

There is storage_can_replicate(), and we could either switch to a common
helper for storage features and deprecate the old one, or simply add a
storage_supports_fleecing() helper.
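
The latter could mirror how storage_can_replicate() is wired up -
hypothetical sketch, nothing like this exists yet:

    # in PVE/Storage.pm
    sub storage_supports_fleecing {
        my ($cfg, $storeid) = @_;

        my $scfg = storage_config($cfg, $storeid);
        my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
        return $plugin->storage_supports_fleecing($scfg, $storeid);
    }

    # in PVE/Storage/Plugin.pm - conservative default, plugins that are
    # suitable for fleecing would override this
    sub storage_supports_fleecing {
        my ($class, $scfg, $storeid) = @_;
        return 0;
    }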

But the question remains whether the setting should be VM-specific or
job-wide. The most flexible option would be both, but I'd rather not
overcomplicate things. Maybe my idea of defaulting to "use the same
storage for fleecing" is not actually a good one, and having a dedicated
storage for fleecing is better. Then it would need to be a conscious
decision.
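
For context, the job-wide variant is what the property string in this
patch is about - roughly along these lines (illustrative sketch, option
names are not final):

    # fleecing property string in the vzdump schema (sketch)
    my $fleecing_fmt = {
        enabled => {
            type => 'boolean',
            description => "Enable backup fleecing.",
            default => 0,
            optional => 1,
        },
        storage => {
            type => 'string',
            format => 'pve-storage-id',
            description => "Use this storage to allocate fleecing images.",
            optional => 1,
        },
    };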

>>> IIRC, older versions of NFS lack the ability to discard. While not
>>> quite as bad as the above, it's still far from ideal. Might also be
>>> worth trying to detect? Will add something to the docs in any case.
> 
> I have never seen discard work with NFS. I think (never tested) it's
> possible with 4.2, but 4.2 is really new on NAS appliances
> (netapp,...). So I think 90% of users don't have working discard with
> NFS.
> 

With NFS 4.2, discard works nicely even with raw format. But you might
be right about most users not having new enough versions. We discussed
this off-list too, and an improvement would be to use qcow2, so the
discards could at least happen internally: the qcow2 image couldn't free
the allocated blocks on the underlying storage, but it could re-use
already allocated clusters.
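
As for detecting it, a rough sketch of what could be done (purely
illustrative - parsing /proc/mounts like this is likely too naive for a
real implementation):

    # check whether an NFS mount negotiated version >= 4.2, which is the
    # first version with DEALLOCATE (i.e. discard) support
    sub nfs_mount_supports_discard {
        my ($mountpoint) = @_;

        open(my $fh, '<', '/proc/mounts')
            or die "unable to read /proc/mounts - $!\n";
        while (my $line = <$fh>) {
            my (undef, $where, $fstype, $opts) = split(/\s+/, $line);
            next if $where ne $mountpoint || $fstype !~ /^nfs/;
            return $opts =~ /vers=4\.(?:[2-9]|\d\d)/ ? 1 : 0;
        }
        return 0; # not mounted or not NFS
    }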

> Is it a problem if the VM's main storage supports discard, but the
> fleecing storage doesn't? (I haven't looked yet at how exactly fleecing
> works)
> 

It doesn't matter whether the main storage supports discard or not. It
only depends on the fleecing storage.

> If it's a problem, I think we should forbid using a fleecing storage
> that doesn't support discard if the VM has discard enabled on at least
> one disk.
> 

The problem is just that the space usage can be very high. It's not a
fundamental one: you can still use fleecing on such storages if you have
enough space.

There are already ideas to have a limit setting, monitoring the space
usage and aborting the backup when the limit is hit, but nothing
concrete yet.
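
E.g. something along these lines during the backup (very rough - the
option name is made up, and this assumes volume_size_info() reports
usage for the fleecing storage's type):

    # hypothetical limit check, e.g. run periodically during backup
    my $limit = $opts->{'fleecing-size-limit'} * 1024 * 1024 * 1024;

    my (undef, undef, $used) =
        PVE::Storage::volume_size_info($cfg, $fleecing_volid, 5);
    die "fleecing image exceeds configured size limit - aborting backup\n"
        if defined($used) && $used > $limit;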