Date: Wed, 03 Mar 2021 13:17:09 +0100
From: Fabian Grünbichler
To: Proxmox VE development discussion
Subject: Re: [pve-devel] [PATCH RFC storage] rbd: fix #3286 add namespace support
Message-Id: <1614772534.wb4idwnury.astroid@nora.none>
In-Reply-To: <6b5b07965b8c1ac945a8502cc839287a00da4131.camel@odiso.com>

On March 3, 2021 11:10 am, aderumier@odiso.com wrote:
> Is there any plan on the roadmap to generalize namespaces, but at VM
> level?
>
> I'm still looking for easy cross-cluster VM migration with shared
> storage.

I recently picked up the remote migration feature, FWIW ;)

> I was thinking about something simple like
> /etc/pve/<namespace>/qemu-server/<vmid>
> with new disk volumes including the namespace in their path, like:
> "scsi0: <storage>:<namespace>/vm-100-disk-0"

I am not sure how that would solve the issue? the problem with sharing
a storage between clusters is that VMID 100 on cluster A and VMID 100
on cluster B are not the same entity, so a volume owned by VMID 100 is
not attributable to either cluster.

if both clusters are allowed to set up a namespace FOO, then you need
to manually take care not to duplicate VMIDs inside this namespace
across all clusters, just like you have to take care not to duplicate
VMIDs across all clusters right now.

if only one cluster is allowed to use a certain namespace, then shared
migration needs to do a rename (or rather, move the VM and its volumes
from one namespace to another). that would mean no live-migration,
since a live-rename of a volume is not possible, unless the namespace
is not actually encoded in the volume name on the storage. but if the
namespace is not encoded in the volume name, it does not protect
against cross-namespace confusion (when listing a storage's contents,
I can't tell which namespace volume BAR belongs to), and we'd be back
to square one.

IMHO there are things that might help with the issue:
- a client used to manage all clusters that ensures a VMID is not
  assigned to more than one cluster
- better support for custom volids (reduces the chance of clashes, but
  does not solve the issue of orphaned/unreferenced volumes)
- allow marking a storage as "don't scan for unreferenced volumes", so
  that stray volumes likely belonging to other clusters are not picked
  up when migrating/deleting/.. guests (setting this would also need
  to disallow deleting any volumes via the storage API instead of the
  guest API, as we don't have any safeguards on the storage level
  then..) - see the sketch below
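
just to illustrate the third point, a rough sketch of what such a
storage.cfg entry could look like - the "ignore-unreferenced" option
does not exist today, it's made up here purely to show the idea:

    # hypothetical option: never treat unreferenced volumes on this
    # storage as owned by this cluster, and refuse volume deletion via
    # the storage API
    rbd: shared-rbd
        pool rbd
        content images
        monhost 10.1.1.1 10.1.1.2 10.1.1.3
        ignore-unreferenced 1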
the first point is hard to do atomically, since we don't have a
cross-cluster pmxcfs, but some sort of "assign ranges to clusters,
remember exceptions for VMs which have been migrated away" could work,
if ALL management then happens via this client and not the regular
per-cluster API. this could also be supported in PVE right now
(configure a range in datacenter.cfg, and have some API call to
register "this VMID is burnt/does not belong to this cluster anymore,
ignore it for all intents and purposes") - although obviously this
would not yet guarantee no re-use across clusters, but just give
integration/management tools some support on the PVE side for
enforcing those ranges.

just some quick thoughts, might not be 100% thought-through in all
directions :-P
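
PS: to make the range idea a bit more concrete, a rough sketch - both
the datacenter.cfg option and the API path below are made up, just to
illustrate:

    # hypothetical datacenter.cfg entry: this cluster may only assign
    # VMIDs from the given range
    vmid-range: 100000-199999

    # hypothetical API call to mark a VMID as burnt after the guest
    # has been migrated to another cluster:
    pvesh create /cluster/burnt-vmids --vmid 100042

the existing /cluster/nextid call could then be taught to only hand
out free VMIDs from the configured range that are not on the burnt
list.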