Date: Tue, 5 Mar 2024 13:13:08 +0100 (CET)
From: Fabian Grünbichler
To: Proxmox VE development discussion, Fiona Ebner
Message-ID: <2079301154.8553.1709640788566@webmail.proxmox.com>
References: <20240223092436.202277-1-roland.kammerer@linbit.com> <20240223092436.202277-2-roland.kammerer@linbit.com> <6f0faf71-10c6-4ffd-b5ac-dbc925dd2804@proxmox.com>
Subject: Re: [pve-devel] [PATCH storage 1/1] storage/plugins: pass scfg to parse_volname
List-Id: Proxmox VE development discussion

Roland Kammerer wrote:
> On Thu, Feb 29, 2024 at 02:29:51PM +0100, Fiona Ebner wrote:
> > On 23.02.24 at 10:24, Roland Kammerer wrote:
> > > This passes the well-known $scfg to parse_volname and bumps the API
> > > versions accordingly. This allows plugins to access their
> > > configuration if necessary.
> >
> > Hi,
> > can you please describe your use case in more detail? Why does parsing
> > the volume name depend on something in the storage configuration?
>
> Hi,
>
> I can try, hopefully not only repeating what I wrote in the cover
> letter:
>
> - We want to implement reassigning disks to different VMs.
> - This calls the plugin's rename_volume.
>
> Usually this renames a disk from vm-100-disk-1 to something like
> vm-101-disk-1. Great, no changes needed to parse_volname: the (new)
> VM ID and the actual disk can be parsed from the name with a regex.
>
> For us this would mean that we would need to rename the resource in
> LINSTOR, the DRBD devices on all nodes, the backing devices for the
> DRBD resources on all nodes, ... For various reasons that is terribly
> hard, especially considering cluster-wide rollbacks. If you want me
> to, I can fill some pages about that :).
>
> So I came up with this:
>
> - Let's not store the VM ID in the name of the resource/device, but
>   generate resource/"disk" names like pm-$UUID. The LINSTOR resource
>   is named pm-$UUID, the DRBD device is named pm-$UUID, the backing
>   devices on LVM/ZFS are named /dev/vg/pm-$UUID, ...
> - Great, we abstracted away the VM IDs.
> - So where do we then store the VM ID that, for example,
>   parse_volname needs to return?
> - We can store that in metadata associated with the LINSTOR/DRBD
>   resource. We have a central component, the LINSTOR controller, the
>   "brain of the SDS", which is needed anyway, and its IP is already
>   in the storage configuration. Just think of it as a "database".
> - So in that "database" we store that pm-123... belongs to VM ID 100.
> - Reassigning a disk to a new VM (ID) is then trivial: we just update
>   the "database" entry for pm-$UUID, now pointing to VM ID 101 (or
>   whatever). That is exactly what I do in rename_volume. I don't
>   actually rename the LINSTOR/DRBD/LVM objects/disks, I just update
>   the mapping. When getting called for "reassign disks to a new VM",
>   the plugin is free to generate a name. I return the old/existing
>   name, but the new VM ID. Yes, that is a bit of cheating, but it
>   works perfectly fine for that scenario of reassigning disks to VMs.
> - Whenever we need the VM ID, we don't parse it from the pm-$UUID
>   name (we can't), but we ask our "database".
> - In order to ask the "database", we need to know its IP. And for
>   that we need a pointer to the storage configuration that (already)
>   contains that IP.
> - All plugin functions already get a $scfg parameter, with one
>   exception: parse_volname.
>
> All in all, yes, this is specific to our use case, otherwise
> parse_volname would already have that additional parameter like all
> the other plugin functions, but I don't see where this would hurt
> existing code, and it certainly helps us to enable reassigning disks
> to VMs. Passing in a param all other functions already get access to
> also does not sound too terrible to me.
>
> If there are further questions please feel free to ask.

thanks for spelling out your rationale! I wonder whether the following
wouldn't also work?
- keep the volume name on the PVE side like the other storage plugins
  (which means: encode vital information about the volume there, so
  that it's possible to query it *without* talking to the storage[0])
- use a separate identifier, unknown to PVE, on the DRBD side
- store the association (either from volname to internal, or both
  ways) in LINSTOR, and potentially cache it on the PVE side if needed

that way, the interface can remain as it is. renaming is just a
metadata update (change the volname on the PVE side, update the
mapping in LINSTOR), with no need to change the internal DRBD
identifier or any related entities. there is no chance of accidentally
triggering a storm of requests, or returning outdated information,
just by calls to parse_volname. you only need to query the mapping
between PVE volname and DRBD identifier when you actually need to do
stuff with the volume on the DRBD end (e.g., things that already have
$scfg at the moment).

the reason parse_volname doesn't get the storage config entry at the
moment is exactly that we want to force plugins to encode the basic
information in the volname, so that we can rely on it being cheap to
call everywhere.

basically, instead of your proposed mapping of volname to "information
usually encoded in volname" via LINSTOR, the approach laid out above
would be a mapping of volname "with all the basic information" to the
DRBD-internal, real volume ID via LINSTOR. there wouldn't be a
fundamental difference to other storages that also have to adapt the
"logical" volname into full information via some means (e.g., extend
it with the actual base dir, add (lib)rbd-related information, or
translate it to a block device path, ...), except that maybe it's a
tad more expensive than usual (although, with storage access, all bets
are off anyway ;)).

if there is no way to make the above work for DRBD, extending
parse_volname is of course still an option, like Thomas said in his
last reply, but such a change should be a last resort IMHO.
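for illustration only, the scheme above could be sketched like this (a
hypothetical Python mock, not the actual Perl plugin API; the
`linstor_kv` dict, `VOLNAME_RE`, and all function names here are stand-ins
for the real LINSTOR property store and the PVE::Storage::Plugin methods):

```python
import re

# mock of the LINSTOR-side key/value store: PVE volname -> internal
# DRBD resource name (pm-$UUID); in reality this would be a query to
# the LINSTOR controller, possibly cached on the PVE side
linstor_kv = {"vm-100-disk-1": "pm-2f9c"}

# the PVE-side volname still encodes the basic information
VOLNAME_RE = re.compile(r"^vm-(\d+)-disk-(\d+)$")

def parse_volname(volname):
    """Cheap and offline: everything needed is in the volname itself,
    so no storage access (and hence no $scfg) is required."""
    m = VOLNAME_RE.match(volname)
    if not m:
        raise ValueError(f"unparsable volname: {volname}")
    return int(m.group(1)), int(m.group(2))

def rename_volume(old_volname, new_vmid):
    """Reassigning a disk is a pure metadata update: new PVE volname,
    same DRBD-internal identifier, only the mapping changes."""
    _, disk_idx = parse_volname(old_volname)
    new_volname = f"vm-{new_vmid}-disk-{disk_idx}"
    linstor_kv[new_volname] = linstor_kv.pop(old_volname)
    return new_volname

def internal_name(volname):
    """Only consulted when actually talking to the storage, i.e. in
    calls that already receive $scfg today."""
    return linstor_kv[volname]
```

the point of the sketch is just that parse_volname stays a regex over
the volname, while the pm-$UUID identifier is only resolved on the few
paths that touch the storage anyway.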