From: Fiona Ebner <f.ebner@proxmox.com>
To: Proxmox VE development discussion <pve-devel@lists.proxmox.com>,
"DERUMIER, Alexandre" <alexandre.derumier@groupe-cyllene.com>
Subject: Re: [pve-devel] [RFC guest-common 09/13] vzdump: schema: add fleecing property string
Date: Thu, 1 Feb 2024 14:11:20 +0100 [thread overview]
Message-ID: <1526d2b1-d71d-4bb7-b085-a7e3f059b9c3@proxmox.com> (raw)
In-Reply-To: <ee733a4ffeddddc5e2071c003bd5b58ea080c682.camel@groupe-cyllene.com>
Am 01.02.24 um 13:39 schrieb DERUMIER, Alexandre:
>>> LVM and non-sparse ZFS need enough space for a copy of the full disk
>>> up-front, so they are not suitable as fleecing storages in many cases.
>
> can't we force sparse for these fleecing volumes, even if the storage
> doesn't have sparse enabled? (I can understand that it could make sense
> for a user to have non-sparse in production for performance or
> allocation reservations, but for a fleecing image, it should be
> exceptional to rewrite a full image)
>
For ZFS, we could always allocate fleecing images sparsely, but that
would require a change to the storage API, as you can't tell
vdisk_alloc() to do that right now. There could also be a new helper
altogether, allocate_fleecing_image(); then the storage plugin itself
could decide what the best settings are.
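To illustrate the idea, here is a minimal sketch of such a per-plugin dispatch. This is hypothetical: allocate_fleecing_image() does not exist in the current storage API, Proxmox plugins are written in Perl, and the per-type settings chosen here are assumptions, not the actual criteria.

```python
# Hypothetical sketch: a dedicated allocation helper that lets each
# storage plugin pick fleecing-friendly settings (e.g. force sparse
# on ZFS). The helper name and the per-type rules are illustrative
# assumptions from the discussion above, not existing API.

def allocate_fleecing_image(plugin_type, vmid, size_bytes):
    if plugin_type == "zfspool":
        # Always allocate sparsely for fleecing, even if the storage
        # itself is configured as non-sparse.
        return {"format": "raw", "sparse": True, "size": size_bytes}
    if plugin_type in ("nfs", "dir"):
        # Use qcow2 so discards can at least be handled internally.
        return {"format": "qcow2", "sparse": True, "size": size_bytes}
    # Conservative default: plain raw allocation.
    return {"format": "raw", "sparse": False, "size": size_bytes}
```

The point is that the decision moves into the plugin layer, instead of the backup code having to know each storage type's quirks.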
>>> Should the setting rather be VM-specific than backup job-specific?
>>> These issues
>>> mostly defeat the purpose of the default here.
>
> can't we forbid it via storage plugin features? { fleecing => 1 }?
>
There is no feature list for storage plugins right now, just
volume_has_feature(), and that doesn't help if you don't already have a
volume. There is storage_can_replicate(), and we could either switch to
a common helper for storage features and deprecate the old one, or
simply add a storage_supports_fleecing() helper.
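A sketch of what such a helper might look like, in the spirit of the existing storage_can_replicate(). Everything here is an assumption for illustration: the suitability rules (space-efficient allocation, NFS discard only from 4.2) come from this thread, not from an actual implementation.

```python
# Hypothetical storage_supports_fleecing() helper. The table and the
# NFS version check are illustrative assumptions based on the thread:
# LVM and non-sparse ZFS need the full size up-front, and discard over
# NFS only works with protocol version 4.2.

SPACE_EFFICIENT = {"zfspool": True, "lvmthin": True, "lvm": False}

def storage_supports_fleecing(storage_type, nfs_version=None):
    if storage_type == "nfs":
        return nfs_version is not None and nfs_version >= (4, 2)
    return SPACE_EFFICIENT.get(storage_type, False)
```

Such a helper could then back either a hard restriction or just a warning in the backup job configuration.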
But the question remains whether the setting should be VM-specific or
job-wide. Most flexible would be both, but I'd rather not overcomplicate
things. Maybe my idea of defaulting to "use the same storage for
fleecing" is not actually a good one, and having a dedicated storage for
fleecing is better. Then it needs to be a conscious decision.
>>> IIRC older versions of NFS lack the ability to discard. While not
>>> quite as bad as the above, it's still far from ideal. Might also be
>>> worth trying to detect? Will add something to the docs in any case.
>
> I have never seen discard working with NFS. I think (never tested)
> it's possible with 4.2, but 4.2 is really new on NAS appliances
> (NetApp, ...). So I think 90% of users don't have working discard with
> NFS.
>
With NFS 4.2, discard works nicely even with the raw format. But you
might be right about most users not having new enough versions. We
discussed this off-list too, and an improvement would be to use qcow2,
so that discards could at least happen internally. The qcow2 image could
not free already allocated blocks, but it could re-use them.
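The internal re-use behavior can be illustrated with a toy cluster allocator. This is a deliberate simplification, not qcow2's actual refcount logic: discarded clusters go on a free list and are handed out again before the underlying file grows.

```python
# Toy model of qcow2-style internal cluster re-use: discarded clusters
# are not returned to the underlying NFS file, but are re-used for new
# writes, so the file stops growing once the working set is covered.

class ToyQcow2:
    def __init__(self):
        self.file_clusters = 0   # space actually consumed on the share
        self.free_list = []      # clusters discarded by the guest

    def alloc(self):
        if self.free_list:
            return self.free_list.pop()   # re-use, no growth
        self.file_clusters += 1           # grow the underlying file
        return self.file_clusters - 1

    def discard(self, cluster):
        self.free_list.append(cluster)

img = ToyQcow2()
a = img.alloc()
b = img.alloc()      # file grows to 2 clusters
img.discard(a)
c = img.alloc()      # re-uses cluster a; the file stays at 2 clusters
```

With raw on NFS without discard support, the file would instead have grown to 3 clusters.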
> Is it a problem if the VM's main storage supports discard, but the
> fleecing storage doesn't? (I haven't yet looked at how exactly
> fleecing works)
>
It doesn't matter whether the main storage does or not. It only depends
on the fleecing storage.
> If it's a problem, I think we should forbid using a fleecing storage
> that doesn't support discard if the VM has discard enabled on at least
> one disk.
>
The problem is that the space usage can be very high. It's not a
fundamental problem; you can still use fleecing on such storages if you
have enough space.
There are already ideas to add a limit setting, monitor the space usage,
and abort when the limit is hit, but nothing concrete yet.
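That "limit + monitor + abort" idea could be sketched roughly as below. Nothing here is concrete or implemented; the function names and the polling approach are purely hypothetical.

```python
# Hypothetical sketch of the limit idea: periodically check the
# fleecing image's allocated size during backup and abort once it
# exceeds a configured limit. All names are illustrative only.

def check_fleecing_limit(get_allocated_bytes, limit_bytes, abort_backup):
    used = get_allocated_bytes()
    if used > limit_bytes:
        abort_backup(
            f"fleecing image uses {used} bytes, limit is {limit_bytes}"
        )
        return False   # backup should stop
    return True        # within limit, keep going
```

The open questions would be how often to poll and whether aborting mid-backup is preferable to simply running the fleecing storage full.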
Thread overview: 31+ messages
2024-01-25 14:41 [pve-devel] [RFC qemu/guest-common/manager/qemu-server/docs 00/13] fix #4136: implement backup fleecing Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [PATCH qemu 01/13] backup: factor out gathering device info into helper Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [PATCH qemu 02/13] backup: get device info: code cleanup Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [PATCH qemu 03/13] block/io: clear BDRV_BLOCK_RECURSE flag after recursing in bdrv_co_block_status Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [RFC qemu 04/13] block/copy-before-write: create block_copy bitmap in filter node Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [RFC qemu 05/13] qapi: blockdev-backup: add discard-source parameter Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [HACK qemu 06/13] block/{copy-before-write, snapshot-access}: implement bdrv_co_get_info driver callback Fiona Ebner
2024-01-29 14:35 ` Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [HACK qemu 07/13] block/block-copy: always consider source cluster size too Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [RFC qemu 08/13] PVE backup: add fleecing option Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [RFC guest-common 09/13] vzdump: schema: add fleecing property string Fiona Ebner
2024-01-29 15:41 ` Fiona Ebner
2024-01-30 14:03 ` DERUMIER, Alexandre
2024-02-01 8:28 ` Fiona Ebner
2024-02-01 12:39 ` DERUMIER, Alexandre
2024-02-01 13:11 ` Fiona Ebner [this message]
2024-02-01 13:20 ` DERUMIER, Alexandre
2024-02-01 13:27 ` Fiona Ebner
2024-02-01 21:33 ` DERUMIER, Alexandre
2024-02-02 8:30 ` Fiona Ebner
2024-02-01 13:30 ` Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [RFC manager 10/13] vzdump: handle new 'fleecing' " Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [RFC qemu-server 11/13] backup: disk info: also keep track of size Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [RFC qemu-server 12/13] backup: implement fleecing option Fiona Ebner
2024-01-29 15:28 ` Fiona Ebner
2024-01-25 14:41 ` [pve-devel] [RFC docs 13/13] vzdump: add section about backup fleecing Fiona Ebner
2024-01-25 16:13 ` Dietmar Maurer
2024-01-25 16:41 ` DERUMIER, Alexandre
2024-01-25 18:18 ` Dietmar Maurer
2024-01-26 8:39 ` Fiona Ebner
2024-01-25 16:02 ` [pve-devel] [RFC qemu/guest-common/manager/qemu-server/docs 00/13] fix #4136: implement " DERUMIER, Alexandre