From: "Fabian Grünbichler" <f.gruenbichler@proxmox.com>
To: Konstantin <frank030366@hotmail.com>,
Proxmox VE development discussion <pve-devel@lists.proxmox.com>
Subject: Re: [pve-devel] [PATCH pve-container 1/1] Adding new mount point type named 'zfs' to let configure a ZFS dataset as mount point for LXC container
Date: Wed, 17 May 2023 09:50:37 +0200
Message-ID: <1684308983.b8k6x6xxqg.astroid@yuna.none>
In-Reply-To: <PAWPR02MB9056D6A844C3A636D8B5597CBF799@PAWPR02MB9056.eurprd02.prod.outlook.com>
On May 16, 2023 3:07 pm, Konstantin wrote:
> Hello,
>
> > most tools have ways to exclude certain paths ;)
>
> Yeah - and every time this list of "to be excluded" datasets changes,
> we need to update the exclude options for these tools as well. It
> seems that simply making these datasets invisible to the host is
> simpler, isn't it?
well, the idea would be to exclude the whole dataset used by PVE, not
every single volume on it. but I understand that this can be cumbersome.
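
to illustrate (a sketch only - restic is just an example tool, and
"rpool/data" stands in for whatever dataset your PVE ZFS storage
actually uses):

    # exclude the whole PVE-owned dataset subtree once, instead of
    # listing every individual subvol-XXX-disk-Y dataset
    # (repository/password assumed to be provided via
    # RESTIC_REPOSITORY/RESTIC_PASSWORD)
    restic backup / --exclude /rpool/data
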
> > you could "protect" the guest:
>
> I know about this option - but sometimes it isn't applicable. For
> example, I often use the following scenario when I need to upgrade the
> OS in a container: save the configs from the container, destroy the
> container (the dataset with my data isn't destroyed because it isn't
> PVE-managed), deploy a new one from the updated template (the dataset
> with my data is simply reattached), restore the configs, and it's
> ready to use. Maybe the following option would be useful if you insist
> on using Proxmox-managed storage - introduce the ability to protect a
> volume? If so, that would probably be an acceptable way for me.
you can just create a new container, then re-assign your "data" volume
that is managed by PVE to that new container (that feature is even in
the GUI nowadays ;)), then delete the old one. before that, people used
"special" VMIDs to own such volumes, which also works, but is a bit more
brittle (e.g., migration will allocate a new volume owned by the guest,
which would then be cleaned up, so extra care would need to be taken).
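
roughly, the CLI version of that workflow would look like the following
sketch (VMIDs, template name, storage and mountpoint names are all made
up here):

    # create the replacement container from the updated template
    pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
        --rootfs local-zfs:8

    # hand the PVE-managed data volume over to the new container; this
    # is the same operation as reassigning the volume in the GUI. the
    # exact CLI syntax depends on the PVE version - recent versions
    # document it as:
    #   pct move-volume <vmid> <volume> [<storage>] [<target-vmid>] [<target-volume>]
    # please check `pct help move-volume` before relying on this
    pct move-volume 100 mp0 local-zfs 101 mp0

    # only destroy the old container once the volume is reassigned
    pct destroy 100
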
> > but like I said, it can be implemented more properly as well
>
> Coupled with a volume protection capability, it could be an option -
> add the possibility for a PVE-managed ZFS dataset to have a legacy
> mountpoint instead of mandatorily being mounted on the host. But as I
> said - it's the only (working) method I've found for myself, and I'm
> just proposing it as a starting point for possible improvement in use
> cases like mine. If you can propose a better solution - ok, let's
> discuss in detail how it can be done.
adding a protected flag that prevents certain operations is doable - the
question then is, what else besides explicitly detaching the volume
should be forbidden? force-restoring over that container? moving the
volume? reassigning it? migrating the container? changing some option of
the mountpoint? destroying the container itself? the semantics are not
100% clear to me, and they should not be tailored to one specific use
case, but apply as broadly as is sensible. but if you think this is
worthwhile, we can also discuss this enhancement further (to me, though,
it's entirely orthogonal to the mountpoint issue at hand, other than you
happening to want both of them for your use case ;)).
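
just to make the discussion concrete: such a flag would presumably end
up as one more mountpoint option in the container config, something
like the following (note that 'protected=1' is purely hypothetical and
does not exist today):

    # /etc/pve/lxc/101.conf - 'protected' is a made-up option, shown
    # only to illustrate what per-volume protection might look like
    mp0: local-zfs:subvol-101-disk-1,mp=/data,protected=1
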
my gut feeling is still that the root issue is that you have data that
is too valuable to lose accidentally, but at the same time not backed
up? because usually when you have backups, you still try to minimize the
potential for accidents, but you accept the fact that you cannot ever
100% prevent them. this is a time bomb waiting to explode; no amount of
features or workarounds will really help unless the root problem is
addressed. if I misunderstood something, I'd be glad to get more
information to help me understand the issue!
like I said, changing our ZFS mountpoint handling to either default to,
or optionally support, working without the volume dataset already being
mounted at a specific path by the storage layer sounds okay to me. there
is no need for this on the PVE side, so somebody who wants this feature
would need to write the patches and drive the change; otherwise it will
remain a low-priority enhancement request.
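
for reference, from the user's point of view the result would roughly
look like the following sketch (dataset name and container ID are made
up, and this glosses over how exactly pve-container would perform the
mount - it is only meant to show the "legacy mountpoint" idea):

    # host side: stop ZFS from auto-mounting the volume dataset
    zfs set mountpoint=legacy rpool/data/subvol-101-disk-1

    # the container then gets the dataset mounted directly at start-up,
    # e.g. via an fstab-style entry in /etc/pve/lxc/101.conf
    # (privileged container assumed):
    lxc.mount.entry: rpool/data/subvol-101-disk-1 mnt/data zfs rw,relatime 0 0
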