From: "m.loderer@proxmox.com"
To: pve-devel@lists.proxmox.com
Subject: Re: [PATCH storage 2/4] fix #2350: zfspool: send without preserving encryption
Date: Thu, 23 Apr 2026 12:26:34 +0200
Message-ID: <3069647.e9J7NaK4W3@darkbox>
Organization: Proxmox Server Solutions GmbH
In-Reply-To: <20260318124659.374754-3-s.ivanov@proxmox.com>
References: <20260318124659.374754-1-s.ivanov@proxmox.com> <20260318124659.374754-3-s.ivanov@proxmox.com>
List-Id: Proxmox VE development discussion

I applied the patches and tested: I have this running on a test cluster
with one VM and one CT. I tested the following over about a week with
active replication:

- Start/Stop
- Live migration
- Offline migration
- Snapshots + migrations
- Rebooting cluster nodes
- Waiting after a node reboot until the VMs' start timeout, to rule out
  zfs mounting problems with subvols
- Bidirectional replication of the CT and the VM at the same time

It's working perfectly so far!

Tested-by: Mario Loderer

---

On Wednesday, 18 March 2026 at 13:40:15 Central European Summer Time, Stoiko Ivanov wrote:
> OpenZFS recently merged support for `zfs send`ing datasets without
> their encryption properties[0]. Setting the new option in
> `volume_export` makes it possible to use storage migration for
> replication and migration when the guest disk is encrypted with ZFS
> native encryption, without using raw sends.
> Raw sends explicitly set the encryption properties (keys, passphrases)
> on the destination side, which breaks inheriting them from the parent
> datasets there (thus you'd need to load the keys for each guest
> disk/volume instead of inheriting them from the encryption root).
>
> In order not to always receive and create the datasets as unencrypted,
> the encryption properties need to be excluded on receive (via
> `-x encryption`).
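For anyone following along, the manual equivalent looks roughly like the sketch below. The dataset and snapshot names are made-up examples, not taken from the patch; the flags are the ones the patch wires into the plugin. Building the argument lists as arrays lets you inspect them without touching a real pool:

```shell
#!/usr/bin/env bash
# Sketch of the send/recv invocations the plugin builds (illustrative names).
# Per this patch series, '-U' on send omits the encryption properties from
# the stream, and '-x encryption' on receive lets the target dataset inherit
# encryption from its parent instead of recreating the source's settings.
send_cmd=(zfs send -RpvU "rpool/data/vm-100-disk-0@__replicate_100-0__")
recv_cmd=(zfs recv -F -x encryption -- "tank/data/vm-100-disk-0")

# Print the commands instead of running them:
printf '%s ' "${send_cmd[@]}"; echo
printf '%s ' "${recv_cmd[@]}"; echo
```

In a real replication run these two ends are connected by a pipe (or an SSH tunnel between nodes), with the receive side deciding the encryption state.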
>
> The approach is quite flexible in allowing both encrypted and
> unencrypted datasets to be sent, with the target defining the
> encryption state (e.g. remote-migrating to a location where encryption
> is needed from one where it has not been set up).
>
> The receiving node needs to have the patches applied before the initial
> receive, to prevent accidentally creating the dataset without
> encryption (despite the parent dataset/pool being encrypted).
>
> This needs a versioned dependency bump on zfsutils-linux.
>
> Tested minimally with containers and a VM with migration/replication
> on 2 machines with:
> * pools without encryption on both sides
> * encrypted pools on both sides
> * one encrypted and one unencrypted pool each
>
> [0] https://github.com/openzfs/zfs/pull/18240
>
> Fixes: https://bugzilla.proxmox.com/show_bug.cgi?id=2350
> Signed-off-by: Stoiko Ivanov
> ---
>  src/PVE/Storage/ZFSPoolPlugin.pm | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/src/PVE/Storage/ZFSPoolPlugin.pm b/src/PVE/Storage/ZFSPoolPlugin.pm
> index 3b3456b..eaeeb9d 100644
> --- a/src/PVE/Storage/ZFSPoolPlugin.pm
> +++ b/src/PVE/Storage/ZFSPoolPlugin.pm
> @@ -817,7 +817,7 @@ sub volume_export {
>      # For zfs we always create a replication stream (-R) which means the remote
>      # side will always delete non-existing source snapshots. This should work
>      # for all our use cases.
> -    my $cmd = ['zfs', 'send', '-Rpv'];
> +    my $cmd = ['zfs', 'send', '-RpvU'];
>      if (defined($base_snapshot)) {
>          my $arg = $with_snapshots ? '-I' : '-i';
>          push @$cmd, $arg, $base_snapshot;
> @@ -879,7 +879,10 @@ sub volume_import {
>          $zfspath = "$scfg->{pool}/$dataset";
>      }
>
> -    eval { run_command(['zfs', 'recv', '-F', '--', $zfspath], input => "<&$fd") };
> +    eval {
> +        run_command(['zfs', 'recv', '-F', '-x', 'encryption', '--', $zfspath],
> +            input => "<&$fd");
> +    };
>      if (my $err = $@) {
>          if (defined($base_snapshot)) {
>              eval { run_command(['zfs', 'rollback', '-r', '--', "$zfspath\@$base_snapshot"]) };
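As a side note for readers less familiar with the `volume_export` hunk quoted above: the `-I`/`-i` ternary picks the incremental-send mode. A rough shell paraphrase (variable and snapshot names are mine, purely illustrative):

```shell
#!/usr/bin/env bash
# Mirrors the Perl ternary `my $arg = $with_snapshots ? '-I' : '-i';`.
# -I sends all intermediate snapshots between the base and the target,
# while -i sends only the single increment from base to target.
with_snapshots=1
base_snapshot="base-snap"

if [ "$with_snapshots" -eq 1 ]; then
    inc_flag="-I"   # include intermediate snapshots
else
    inc_flag="-i"   # only the delta from base to the target snapshot
fi

# Print the resulting command (names are made up for illustration):
echo "zfs send -RpvU $inc_flag $base_snapshot rpool/data/vm-100-disk-0@current"
```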