From: Daniel Kral <d.kral@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [RFC PATCH-SERIES qemu-server 0/1] fix #7053: allow setting additional HA migration parameters
Date: Wed, 25 Feb 2026 15:35:04 +0100
Message-ID: <20260225143514.368884-1-d.kral@proxmox.com>
List-Id: Proxmox VE development discussion
Bugzilla #7053 reports that even though 'with-conntrack-state' is checked, the VM will always migrate without conntrack state in the end. In fact, all parameters of the migrate_vm API endpoint other than $vmid and $node are not passed on to the HA stack at all. This was likely only caught now because the conntrack state is the only optional parameter that is visible in the web interface and set by default.

Currently, the resource motion crm command is matched from ^ to $:

    if ($cmd =~ m/^(migrate|relocate)\s+(\S+)\s+(\S+)$/) {

We could extend that crm command to something like:

    if ($cmd =~ m/^(migrate|relocate)\s+(\S+)\s+(\S+)(?:\s+(\S.*))?$/) {

but this would require the newer `ha-manager {migrate,relocate} ...` API/CLI endpoints to append both the standard and the extended version for some period, as older HA Manager versions would only be able to parse the standard version, not the extended one. Newer HA Manager versions would be fine, though, as the standard version would be parsed first and the extended version would afterwards overwrite the request from the standard version.

The downside of this, however, is that the migration parameters are not the same for VMs and CTs (and possible future resource types) and would therefore expose quite a lot of resource-specific data structures to the more generic HA Manager code.
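To illustrate the backward-compatibility argument above, here is a minimal, self-contained sketch of parsing both the standard and the extended command form with the proposed regex. The helper name and the way trailing parameters are encoded are assumptions for illustration only, not the actual pve-ha-manager code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical parser for the resource motion crm command. The optional
# fourth capture group picks up any extended migration parameters; a
# standard "migrate <sid> <node>" command still matches with $4 undefined,
# which is what keeps the extended form parseable next to the old one.
sub parse_crm_command {
    my ($cmd) = @_;

    if ($cmd =~ m/^(migrate|relocate)\s+(\S+)\s+(\S+)(?:\s+(\S.*))?$/) {
        return {
            cmd => $1,
            sid => $2,
            node => $3,
            params => defined($4) ? $4 : '',
        };
    }

    return undef; # not a resource motion command
}

# standard form - params stays empty
my $std = parse_crm_command('migrate vm:100 node2');
print "$std->{cmd} $std->{sid} -> $std->{node}\n" if $std;

# extended form - trailing parameters are captured as an opaque string
my $ext = parse_crm_command('migrate vm:100 node2 with-conntrack-state=1');
print "extended params: $ext->{params}\n" if $ext;
```

Note that the opaque trailing string still has to be decoded somewhere, which is exactly where the resource-specific data structures mentioned above would leak into the generic HA Manager code.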
Additionally, both the node running the active HA Manager and the node whose LRM hosts the to-be-moved HA resource would need the newer pve-ha-manager version to correctly relay the migration parameters.

As the migrate_vm API request is proxied to the node the HA resource is assigned to, this RFC patch series instead puts the responsibility for handling the additional migration parameters on the caller's side: the parameters are saved there while the request is relayed through the HA stack, until the LRM on that node calls migrate_vm again.

The implementation is not fully fleshed out yet (e.g. cleaning up the migration params file for a crashed/stopped VM or rejected migration requests, etc.), but I wanted to get feedback on whether this solution has any merit and, if not, decide on another possible solution. If it does have merit, this could be generalized for both qemu-server and pve-container, if it is useful for containers as well.

qemu-server:

Daniel Kral (1):
  fix #7053: api: migrate: save and restore migration params for HA
    managed VMs

 src/PVE/API2/Qemu.pm | 54 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

Summary over all repositories:
  1 files changed, 54 insertions(+), 0 deletions(-)

-- 
Generated by murpp 0.9.0