Date: Wed, 17 May 2023 09:50:37 +0200
From: Fabian Grünbichler
To: Konstantin, Proxmox VE development discussion
References: <20230510000830.1851-1-frank030366@hotmail.com> <1785581048.4184.1683793637397@webmail.proxmox.com> <1950349103.4373.1683807934502@webmail.proxmox.com>
Message-Id: <1684308983.b8k6x6xxqg.astroid@yuna.none>
Subject: Re: [pve-devel] [PATCH pve-container 1/1] Adding new mount point type named 'zfs' to let configure a ZFS dataset as mount point for LXC container

On May 16, 2023 3:07 pm, Konstantin wrote:
> Hello,
>
> > most tools have ways to exclude certain paths ;)
>
> Yeah - and every time this list of "to be excluded" datasets (or their
> names) changes, we need to update the exclude options for these tools
> as well. It seems that just making these datasets not visible to the
> host is simpler, isn't it?

well the idea would be to exclude the whole dataset used by PVE, not
every single volume on it. but I understand this can be cumbersome.

> > you could "protect" the guest:
>
> I know about this option - but sometimes it isn't applicable. For
> example, I often use the following scenario when I need to upgrade the
> OS of a container: save the configs from the container, destroy the
> container (the dataset with my data isn't destroyed because it's not
> managed by PVE), deploy a new one from an updated template (the
> dataset with my data is just reattached), restore the configs, and
> it's ready to use. Maybe the following option would be useful if you
> insist on using Proxmox-managed storage - introduce the ability to
> protect a volume? If so, it would probably be an acceptable way for
> me.

you can just create a new container, then re-assign your "data" volume
that is managed by PVE to that new container (that feature is even on
the GUI nowadays ;)), then delete the old one. before that, people used
"special" VMIDs to own such volumes, which also works, but is a bit
more brittle (e.g., migration will allocate a new volume owned by the
guest, and that would then be cleaned up, so extra care would need to
be applied).
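for the archives, that workflow would look roughly like this on the CLI
(VMIDs, template and mount point names made up for illustration - see
'man pct' for the exact syntax):

    # create the replacement container from the updated template
    pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst
    # re-assign the PVE-managed data volume mp0 from CT 100 to CT 101
    pct move-volume 100 mp0 101 mp0
    # the old container can now be destroyed without touching the data
    pct destroy 100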
> > but like I said, it can be implemented more properly as well
>
> Coupled with a volume protection capability it could be an option -
> add the possibility for a PVE-managed ZFS dataset to have a legacy
> mountpoint instead of a mandatory mount on the host. But as I said -
> it's the only (and working) method which I've found for myself, and
> I'm just proposing it as a starting point for a possible improvement
> in use cases like mine. If you can propose a better solution for that
> - ok, let's discuss in detail how it can be done.

adding a protected flag that prevents certain operations is doable -
the question is then, what else except explicit detaching of the volume
should be forbidden? force-restoring over that container? moving the
volume? reassigning it? migrating the container? changing some option
of the mountpoint? destruction of the container itself? the semantics
are not 100% clear to me, and should not be tailored to one specific
use case, but match as broadly as sensible. but if you think this is
sensible, we can also discuss this enhancement further (to me it's
entirely orthogonal to the mountpoint issue at hand, other than you
happening to want both of them for your use case ;)) - see the sketch
at the end of this mail for what such a flag could look like.

my gut feeling is still that the root issue is that you have data that
is both too valuable to accidentally lose, and at the same time not
backed up? usually when you have backups, you still try to minimize the
potential for accidents, but you accept the fact that you cannot ever
100% prevent them. this is a time bomb waiting to explode, and no
amount of features or workarounds will really help unless the root
problem is addressed. if I misunderstood something, I'd be glad to get
more information to help me understand the issue!

like I said, changing our ZFS mountpoint handling to either default to,
or optionally support, working without the need to have the volume
dataset already mounted in a specific path by the storage layer sounds
okay to me. there is no need for this on the PVE side though, so
somebody who wants this feature would need to write the patches and
drive the change, otherwise it will remain a low-priority enhancement
request.
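to make that last point more concrete, the manual equivalent of what
such a change would do looks roughly like this (dataset name and mount
target made up):

    # the storage layer would stop relying on the dataset being
    # auto-mounted on the host:
    zfs set mountpoint=legacy rpool/data/subvol-100-disk-1
    # and container start would mount it explicitly instead, e.g.:
    mount -t zfs rpool/data/subvol-100-disk-1 /var/lib/lxc/100/rootfs/mnt/data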
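and for illustration only, the per-volume protection flag discussed
above could end up looking something like this (the 'protected' mount
point option does not exist today, this is purely a sketch of the
idea):

    # hypothetical: mark mp0 of CT 100 as protected against detaching
    # and other destructive operations
    pct set 100 -mp0 'local-zfs:subvol-100-disk-1,mp=/mnt/data,protected=1'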