From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shannon Sterz <s.sterz@proxmox.com>
To: pve-devel@lists.proxmox.com
Date: Mon, 28 Jul 2025 16:43:59 +0200
Message-ID: <20250728144359.279907-1-s.sterz@proxmox.com>
X-Mailer: git-send-email 2.47.2
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Subject: [pve-devel] [PATCH pve-storage] fix #6561: zfspool: track refquota for subvolumes via user properties
List-Id: Proxmox VE development discussion
Reply-To: Proxmox VE development discussion

zfs itself does not track the refquota per snapshot, so we need to handle this
ourselves. this implementation tries to do so by leveraging a user property per
snapshot.

Signed-off-by: Shannon Sterz <s.sterz@proxmox.com>
---
this approach is not backward compatible, meaning that changes to volume sizes
between snapshots taken before this patch will still be affected by this issue.
however, it is fairly self-contained, does not require us to rely on the
container config and works well with replication.

we could fall back to resetting the refquota higher up in the call chain in
case the storage doesn't manage to do it by itself. however, that comes with
the potential downside of users messing with their configs and us resizing the
disk when we shouldn't.
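for illustration, the property round trip this patch performs through
zfs_request boils down to the following zfs calls, sketched here as a
standalone perl script. the dataset and snapshot names are made up for the
example; only the pve-storage:refquota user property name matches the patch:

#!/usr/bin/perl
# sketch only: dataset and snapshot names are made-up examples, the user
# property name is the one introduced by the patch below
use strict;
use warnings;

my $dataset = 'rpool/data/subvol-100-disk-0';    # hypothetical subvol dataset
my $snap    = 'presnap';                         # hypothetical snapshot name

# read the current refquota of the subvol as a raw byte count
# (-H: no headers, -p: parsable/exact values)
my $refquota = qx(zfs get -Hp -o value refquota $dataset);
chomp $refquota;

# stash it in a user property on the snapshot, since zfs does not snapshot
# the refquota property itself
system('zfs', 'set', "pve-storage:refquota=$refquota", "$dataset\@$snap") == 0
    or die "storing user property failed\n";

# after a zfs rollback, read the stored value back from the snapshot ...
my $stored = qx(zfs get -Hp -o value pve-storage:refquota $dataset\@$snap);
chomp $stored;

# ... and re-apply it, but only if it is a plain byte count (an unset user
# property is reported as "-")
if ($stored =~ m/^\d+$/) {
    system('zfs', 'set', "refquota=$stored", $dataset) == 0
        or die "restoring refquota failed\n";
}

note that zfs get reports an unset user property as "-", which is why the
rollback path below only re-applies the value when it matches ^\d+$, e.g. for
snapshots that were taken before this change.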
 src/PVE/Storage/ZFSPoolPlugin.pm | 43 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 42 insertions(+), 1 deletion(-)

diff --git a/src/PVE/Storage/ZFSPoolPlugin.pm b/src/PVE/Storage/ZFSPoolPlugin.pm
index cdf5868..2474b7f 100644
--- a/src/PVE/Storage/ZFSPoolPlugin.pm
+++ b/src/PVE/Storage/ZFSPoolPlugin.pm
@@ -482,9 +482,28 @@ sub volume_size_info {
 sub volume_snapshot {
     my ($class, $scfg, $storeid, $volname, $snap) = @_;
 
-    my $vname = ($class->parse_volname($volname))[1];
+    my (undef, $vname, undef, undef, undef, undef, $format) = $class->parse_volname($volname);
 
     $class->zfs_request($scfg, undef, 'snapshot', "$scfg->{pool}/$vname\@$snap");
+
+    # if this is a subvol, track refquota information with the snapshot, as zfs
+    # does not track this property via snapshots and consequently does not roll
+    # it back
+    if ($format eq 'subvol') {
+        my $refquota = $class->zfs_request(
+            $scfg, undef, 'get', 'refquota', '-o', 'value', '-Hp', "$scfg->{pool}/$vname",
+        );
+
+        chomp($refquota);
+
+        $class->zfs_request(
+            $scfg,
+            undef,
+            'set',
+            "pve-storage:refquota=${refquota}",
+            "$scfg->{pool}/$vname\@$snap",
+        );
+    }
 }
 
 sub volume_snapshot_delete {
@@ -503,6 +522,28 @@ sub volume_snapshot_rollback {
 
     my $msg = $class->zfs_request($scfg, undef, 'rollback', "$scfg->{pool}/$vname\@$snap");
 
+    # if this is a subvol, check if we tracked the refquota manually via user properties and if so,
+    # set it appropriately again
+    if ($format eq 'subvol') {
+        my $refquota = $class->zfs_request(
+            $scfg,
+            undef,
+            'get',
+            'pve-storage:refquota',
+            '-o',
+            'value',
+            '-Hp',
+            "$scfg->{pool}/$vname\@$snap",
+        );
+
+        chomp($refquota);
+
+        if ($refquota =~ m/^\d+$/) {
+            $class->zfs_request($scfg, undef, 'set', "refquota=${refquota}",
+                "$scfg->{pool}/$vname");
+        }
+    }
+
     # we have to unmount rollbacked subvols, to invalidate wrong kernel
     # caches, they get mounted in activate volume again
     # see zfs bug #10931 https://github.com/openzfs/zfs/issues/10931
-- 
2.47.2


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel