Date: Wed, 18 Dec 2024 15:08:55 +0100
From: Aaron Lauterer
To: pve-devel@lists.proxmox.com
In-Reply-To: <20241217154814.82121-1-f.ebner@proxmox.com>
Subject: Re: [pve-devel] [PATCH v2 storage 00/10] import/export for shared storages

Did a high-level test between a PVE+Ceph cluster and a single PVE node: remote-migration of a Windows Server 2022 VM with EFI & TPM disks.

Ceph RBD -> remote LVM thin
LVM thin -> remote Ceph RBD

Both directions worked, and the VM booted up as expected after each migration.
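For reference, the test above roughly corresponds to invocations like the following sketch. These are not the exact commands from this report; the VM ID, hosts, API token, fingerprint, and storage/bridge names are all placeholders.

```shell
# Hypothetical remote-migration invocations for the test described above.
# All IDs, hosts, token values, and storage/bridge names are placeholders.

# Ceph RBD -> remote LVM thin: migrate VM 100 from the cluster to the single node
qm remote-migrate 100 100 \
    'host=192.0.2.10,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<fp>' \
    --target-bridge vmbr0 \
    --target-storage local-lvm \
    --online

# LVM thin -> remote Ceph RBD: migrate it back to the cluster
qm remote-migrate 100 100 \
    'host=192.0.2.20,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<fp>' \
    --target-bridge vmbr0 \
    --target-storage ceph-rbd \
    --online
```

With a single `--target-storage` value, all of the VM's volumes (including the EFI and TPM state disks) are mapped to that one target storage.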
One thing I ran into, only tangentially related to this series: we don't support the 'raw+size' export format for ZFS. Maybe we can get it working on ZFS at least for VM disk images (zvols)? It might also be time to consider whether we want to handle CT volumes differently on ZFS in the long term (currently a file-based dataset). On all other storage types we have a block device or raw file that we loop-mount into the CT; aligning ZFS with this would probably simplify things quite a bit.

With the above-mentioned tests, partially:

Tested-By: Aaron Lauterer

_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel