public inbox for pve-devel@lists.proxmox.com
From: Daniel Kral <d.kral@proxmox.com>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>,
	Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [RFC PATCH 2/2] common: btrfs: lower minimum amount of disks for raid10 to 2
Date: Wed, 15 Jan 2025 10:00:03 +0100	[thread overview]
Message-ID: <0472416b-6e05-4793-876f-cc679fccf70f@proxmox.com> (raw)
In-Reply-To: <0999b2e1-8b8b-4baf-84d6-32251a675338@m2r.biz>

On 1/13/25 13:24, Fabio Fantoni wrote:
> btrfs profiles work differently from other hardware or software RAIDs,
> and many users may not inform themselves well beforehand. But even in
> the case of informed users, and even though btrfs now technically
> allows lower limits when creating RAID0 (and RAID10), I think it would
> be better to keep the higher minimums at creation time, and leave it to
> the user to consciously perform any subsequent conversions.

Hm, I'm still unsure about this, because AFAIK we already allow creating 
ZFS RAID0 with a single disk, which technically also isn't a "real" 
RAID0 setup either. But fair point for RAID10: it could be irritating 
for users to have a discrepancy between the minimum disk counts of ZFS 
and BTRFS RAID10, and it'd be a bit harder to communicate that in an 
understandable manner.
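For reference, a minimal sketch of both cases (device names are 
hypothetical placeholders; the relaxed minimum device counts for these 
profiles were, to my knowledge, introduced with btrfs-progs/kernel 
5.15):

```shell
# Create a BTRFS RAID10 filesystem with only two devices.
# Older btrfs-progs would have rejected fewer than four devices here.
mkfs.btrfs -f -d raid10 -m raid10 /dev/sdX /dev/sdY

# The "conscious subsequent conversion" mentioned above: convert the
# data and metadata profiles of an already-mounted filesystem.
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/point
```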

> 
> Regarding btrfs profiles at creation, one thing that could be useful 
> is to always duplicate metadata (dup with a single disk, or raid1 in 
> the case of raid0). If you don't want that by default, maybe offer it 
> as an additional option, and if you don't want that either, at least 
> add it to the documentation (as a suggestion for users who want 
> greater resilience of the filesystem without consuming excessive 
> space).

Currently, the installer creates the BTRFS filesystem with the data and 
metadata both using the same profile. I also think it could be valuable 
to have an "advanced" option which allows setting a separate profile 
for the metadata.
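For illustration (device names are hypothetical), mkfs.btrfs already 
accepts separate data and metadata profiles via -d/--data and 
-m/--metadata, which such an advanced option could map to:

```shell
# Single disk: unmirrored data, but duplicated metadata (dup), so
# metadata survives localized corruption on the same device.
mkfs.btrfs -f -d single -m dup /dev/sdX

# Two disks: data striped (raid0) for capacity, metadata mirrored
# (raid1) for resilience, as suggested in the quoted mail.
mkfs.btrfs -f -d raid0 -m raid1 /dev/sdX /dev/sdY
```

Note that, if I remember correctly, recent btrfs-progs already default 
to dup metadata on a single device, so the advanced option would mainly 
matter for the multi-disk RAID0 case.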

Feel free to either send an RFC for it (even if I can't tell you 
whether it will be accepted, as it adds some complexity to the 
filesystem setup) or create a Bugzilla entry so other users and 
developers can discuss it as well.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Thread overview: 6+ messages
2025-01-10 17:00 [pve-devel] [RFC PATCH 1/2] install: btrfs: fix raid level falling back to single mode Daniel Kral
2025-01-10 17:00 ` [pve-devel] [RFC PATCH 2/2] common: btrfs: lower minimum amount of disks for raid10 to 2 Daniel Kral
2025-01-13 12:24   ` Fabio Fantoni via pve-devel
     [not found]   ` <0999b2e1-8b8b-4baf-84d6-32251a675338@m2r.biz>
2025-01-15  9:00     ` Daniel Kral [this message]
2025-01-15 16:14       ` Fabio Fantoni via pve-devel
2025-01-13 12:15 ` [pve-devel] [RFC PATCH 1/2] install: btrfs: fix raid level falling back to single mode Fabio Fantoni via pve-devel
