From mboxrd@z Thu Jan  1 00:00:00 1970
From: Stoiko Ivanov <s.ivanov@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [PATCH storage 2/4] fix #2350: zfspool: send without preserving encryption
Date: Wed, 18 Mar 2026 13:40:15 +0100
Message-ID: <20260318124659.374754-3-s.ivanov@proxmox.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260318124659.374754-1-s.ivanov@proxmox.com>
References: <20260318124659.374754-1-s.ivanov@proxmox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: Proxmox VE development discussion <pve-devel.lists.proxmox.com>

OpenZFS recently merged support for `zfs send`ing datasets without their
encryption properties[0]. Setting the new option in `volume_export` makes
it possible to use storage migration for replication and migration when
the guest disk is encrypted via ZFS native encryption, without resorting
to raw sends. Raw sends explicitly set the encryption properties (keys,
passphrases) on the destination side, which breaks inheriting them from
the parent datasets there (thus the keys would need to be loaded for each
guest disk/volume individually, instead of being inherited from the
encryption root).

To avoid always receiving and creating the datasets as unencrypted, the
encryption properties need to be excluded on receive (via
`-x encryption`).

This approach is quite flexible: both encrypted and unencrypted datasets
can be sent, and the target defines the resulting encryption state (e.g.
remote-migrating from a location where encryption has not been set up to
one where it is needed).

The receiving node needs to have the patches applied before the initial
receive, to prevent accidentally creating the dataset without encryption
(despite the parent dataset/pool being encrypted). This needs a versioned
dependency bump on zfsutils-linux.
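For illustration, the send/receive pairing described above can be sketched in shell. The helper functions `build_send_cmd` and `build_recv_cmd` are hypothetical and not part of the patch; they merely assemble the same command lines the patched Perl code runs (the dataset and snapshot names are made up, and `zfs` itself is not invoked):

```shell
# Hypothetical helpers mirroring the patched code paths in
# ZFSPoolPlugin.pm; they only print the command lines.
build_send_cmd() {
    # -R: replication stream, -p: include properties, -v: verbose,
    # -U: omit the encryption properties from the stream
    #     (the new OpenZFS option this patch relies on)
    echo "zfs send -RpvU -- $1@$2"
}

build_recv_cmd() {
    # -F: force rollback to the most recent snapshot if needed,
    # -x encryption: do not take encryption from the stream; let the
    #                target dataset inherit it from its parent instead
    echo "zfs recv -F -x encryption -- $1"
}

# Example: what a migration of a (made-up) guest disk would run,
# piped from source to target node:
build_send_cmd rpool/data/vm-100-disk-0 __migration__
build_recv_cmd tank/data/vm-100-disk-0
```

The key point is the pairing: `-U` keeps keys and passphrases out of the stream, and `-x encryption` on the receiving side makes the new dataset pick up the encryption state of its parent, so an encrypted pool stays encrypted and an unencrypted one stays plain.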
Tested minimally with containers and a VM with migration/replication on
2 machines with:
* pools without encryption on both sides
* encrypted pools on both sides
* one encrypted and one unencrypted pool each

[0] https://github.com/openzfs/zfs/pull/18240

Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=2350
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
---
 src/PVE/Storage/ZFSPoolPlugin.pm | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/PVE/Storage/ZFSPoolPlugin.pm b/src/PVE/Storage/ZFSPoolPlugin.pm
index 3b3456b..eaeeb9d 100644
--- a/src/PVE/Storage/ZFSPoolPlugin.pm
+++ b/src/PVE/Storage/ZFSPoolPlugin.pm
@@ -817,7 +817,7 @@ sub volume_export {
     # For zfs we always create a replication stream (-R) which means the remote
     # side will always delete non-existing source snapshots. This should work
     # for all our use cases.
-    my $cmd = ['zfs', 'send', '-Rpv'];
+    my $cmd = ['zfs', 'send', '-RpvU'];
     if (defined($base_snapshot)) {
         my $arg = $with_snapshots ? '-I' : '-i';
         push @$cmd, $arg, $base_snapshot;
@@ -879,7 +879,10 @@ sub volume_import {
         $zfspath = "$scfg->{pool}/$dataset";
     }

-    eval { run_command(['zfs', 'recv', '-F', '--', $zfspath], input => "<&$fd") };
+    eval {
+        run_command(['zfs', 'recv', '-F', '-x', 'encryption', '--', $zfspath],
+            input => "<&$fd");
+    };
     if (my $err = $@) {
         if (defined($base_snapshot)) {
             eval { run_command(['zfs', 'rollback', '-r', '--', "$zfspath\@$base_snapshot"]) };
-- 
2.47.3