From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <90009b0e-670f-d294-78b9-536eacb90e14@proxmox.com>
Date: Tue, 9 May 2023 14:55:20 +0200
MIME-Version: 1.0
To: Fiona Ebner, pve-devel@lists.proxmox.com, Fabian Grünbichler
References: <20230502131732.1875692-1-a.lauterer@proxmox.com>
 <20230502131732.1875692-2-a.lauterer@proxmox.com>
 <91ef008a-9b97-90b5-4f11-365d43ebd108@proxmox.com>
Content-Language: en-US
From: Aaron Lauterer
In-Reply-To: <91ef008a-9b97-90b5-4f11-365d43ebd108@proxmox.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Subject: Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid migrating disk images multiple times
List-Id: Proxmox VE development discussion

On 5/9/23 09:34, Fiona Ebner wrote:
> Am 02.05.23 um 15:17 schrieb Aaron Lauterer:
>> Scan the VM config and store the volid and full path for each storage.
>> Do the same when we scan each storage. Then we can have these
>> scenarios:
>> * multiple storage configurations might point to the same storage
>>   The result is that, when scanning the storages, we find the disk
>>   image multiple times.
>>   -> we ignore them
>
> Having the same storage with overlapping content types is a
> configuration error (except if you use different 'content-dirs', I
> guess). We can't make guarantees for locking, which e.g. leads to races
> for allocation, it can lead to left-over references, etc. Rather than
> trying to handle this here, I'd prefer a warning about the
> misconfiguration somewhere (maybe during storage.cfg parsing?) and/or
> erroring out when attempting to create such a configuration. Adding
> something to the docs also makes sense if there isn't anything yet.

After having a discussion with @Fabian offline (I hope I don't forget
to mention anything):

Yes, having two storage configurations pointing to the same location
should not happen as far as we know. For most situations where one
might want to do that, there are other, better options to separate
them on the storage level.
For example:
* ZFS with different volblocksize values
  -> use a different base dataset for each storage
* RBD with and without KRBD
  -> use RBD namespaces to separate them

But it is hard to detect this reliably on the storage layer. For
example, with an RBD storage I might add different monitors; do they
point to the same cluster? There is no way to tell unless we open a
connection and gather the Ceph FSID of that cluster. For other storage
types it would also be possible to run into similar problems where we
cannot really tell, from the storage definition alone, whether they
point to the same location or not.

Another approach that could make a migration handle such situations
better, but should only target PVE 8:

* Don't scan all storages; only look at disk images that are referenced
  in the config. With this, we should have removed most situations
  where aliases would happen, and a migration is less likely to fail
  because a storage is not online.
* If we detect an aliased and referenced image, fail the migration with
  the hint that this setup should get fixed. But since we would fail
  the migration instead of potentially creating duplicate images on the
  target node, this is a rather breaking change -> PVE 8

I hope I summed it up correctly.
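To illustrate the two points above, here is a rough sketch of the idea
(Python for readability; the actual qemu-server code is Perl, and the
helper name `resolve_path`, the volume IDs and the paths are all made
up for illustration): only disks referenced in the VM config are
considered, each volume is resolved to a canonical path, and the
migration fails as soon as two references turn out to be aliases of
the same image.

```python
import os

def plan_disk_migration(config_volids, resolve_path):
    """Collect the disks referenced in the VM config for migration,
    failing if two of them are aliases of the same underlying image.

    config_volids: volume IDs found in the VM config (illustrative).
    resolve_path:  maps a volume ID to the path of its disk image.
    """
    seen = {}        # canonical path -> volid that first claimed it
    to_migrate = []
    for volid in config_volids:
        # normpath collapses '.', '..' and duplicate slashes; real code
        # would likely use os.path.realpath to also resolve symlinks.
        path = os.path.normpath(resolve_path(volid))
        if path in seen:
            raise RuntimeError(
                f"aliased disk images: '{volid}' and '{seen[path]}' "
                f"point to the same image ({path}); fix the storage setup"
            )
        seen[path] = volid
        to_migrate.append(volid)
    return to_migrate
```

With this, two directory storages pointing at the same location would
abort the migration with a hint, while a config referencing only
distinct images migrates as before.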