Message-ID: <ce917c6b-0c59-4a62-b1d9-62de484eb5da@proxmox.com>
Date: Thu, 5 Jun 2025 13:17:56 +0200
To: Christoph Heiss <c.heiss@proxmox.com>
References: <20250527133431.74325-1-f.ebner@proxmox.com>
 <DAEJRK60I18R.3H50H3BHTNIA1@proxmox.com>
From: Fiona Ebner <f.ebner@proxmox.com>
In-Reply-To: <DAEJRK60I18R.3H50H3BHTNIA1@proxmox.com>
Subject: Re: [pve-devel] [PATCH v2 storage 1/2] fix #5071: zfs over iscsi:
 add 'zfs-base-path' configuration option
Cc: Proxmox VE development discussion <pve-devel@lists.proxmox.com>

On 05.06.25 at 13:02, Christoph Heiss wrote:
> Tested the series by setting up an iSCSI target using targetcli(d) (on a
> separate PVE 8.4 system as a base, due to ZFS goodies) and then adding a
> ZFS-over-iSCSI storage using the LIO provider to a test cluster.
> 
> Confirmed that
> 
> - `zfs-base-path` is correctly detected when adding the storage
> 
> - the iSCSI storage is seen correctly after setting it up and that VM disks
>   can be (live-)migrated to the ZFS-over-iSCSI storage w/o problems.
> 
> One small comment inline, just a typo in the API description.
> 
> Please consider the series in any case:
> 
> Tested-by: Christoph Heiss <c.heiss@proxmox.com>

Thank you for testing! Superseded by v3 with the typo fixed:
https://lore.proxmox.com/pve-devel/20250605111109.52712-1-f.ebner@proxmox.com/
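
For reference, the resulting entry in /etc/pve/storage.cfg could then look
something like this (illustrative values only, not taken from the patch; in
particular the base path depends on how the target exposes its zvols):

  zfs: zfs-over-iscsi
      iscsiprovider LIO
      portal 192.0.2.10
      target iqn.2003-01.org.linux-iscsi.example:sn.0123456789ab
      pool tank
      lio_tpg tpg1
      zfs-base-path /dev/zvol
      content images
      sparse 1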

> One unrelated thing I noticed during testing, but wanted to note for
> reference:
> 
> When one hits the error due to a bad `zfs-base-path` (e.g. as currently
> happens):
> 
>   `TASK ERROR: storage migration failed: Could not open /dev/<poolname>/vm-100-disk-0`
> 
> the target zvol isn't cleaned up, e.g. the above would result in
> `<poolname>/vm-100-disk-0` still being present on the remote zpool.
> 
> Fortunately this doesn't really break anything, as the next available
> disk number (in this case, `vm-100-disk-1`) is chosen automatically
> anyway when creating a new disk.

There actually already is error handling for freeing up allocated disks
in this context. But the storage plugin itself fails during the allocation
step, so the new volume ID is never returned as a result and qemu-server
never learns about the volume it would have to clean up. I'll send a patch
to improve the cleanup handling inside the plugin itself.
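
To sketch the idea (a rough sketch only; subroutine names other than
alloc_image below are placeholders, not the actual pve-storage internals):
the allocation path can wrap the steps that run after the zvol exists in an
eval and delete the zvol again before re-raising the error.

  # Rough sketch with placeholder helper names -- not the real plugin code.
  sub alloc_image {
      my ($class, $storeid, $scfg, $vmid, $fmt, $name, $size) = @_;

      # after this point the zvol exists on the target
      $class->zfs_create_zvol($scfg, $name, $size);

      eval {
          # steps that can still fail, e.g. creating the LUN mapping or
          # resolving a bad zfs-base-path (placeholder helper)
          $class->create_lun($scfg, $name);
      };
      if (my $err = $@) {
          # best-effort cleanup, so no orphaned zvol is left behind
          eval { $class->zfs_delete_zvol($scfg, $name); };
          warn "cleanup after failed allocation also failed: $@" if $@;
          die $err;
      }
      return $name;
  }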

