Date: Wed, 18 Feb 2026 08:30:47 +0100
From: Fabian Grünbichler <f.gruenbichler@proxmox.com>
To: Andrei Perepiolkin, Proxmox VE development discussion
Subject: Re: [pve-devel] storage plugin volume sync during live migration

On February 17, 2026 1:06 pm, Andrei Perepiolkin wrote:
> Hi,
>
> I have a question regarding Proxmox storage plugin
> activate_volume/deactivate_volume concurrency for shared volumes.
>
> During live migration, a volume can be attached to multiple nodes at
> the same time. If a VM migrates from node1 to node2, activate_volume
> on node2 will be called before deactivation is done on node1.
>
> This raises a question regarding concurrency, especially for
> sync-like operations.
>
> Does Proxmox have any means to guarantee that no I/O will be
> performed on node2 until deactivation on node1 is completed?

It is ensured that the guest doesn't do any I/O, because the sequence
is like this:

1. VM is running on source node
2. VM is started in suspended state on target node (this means no
   guest execution)
3. state (RAM, ...) is transferred until it converges
4. VM on source node is automatically paused once state is synced up
5. config file is moved to target node to hand over ownership
6. VM is resumed on target node
7. VM is stopped/killed on source node

The VM is "logically" down (no guest execution on either node) from 4
to 6. Up to 4, only the VM on the source node is actually executing
the guest. After 6, only the VM on the target node is actually
executing the guest.

It becomes a bit more involved if you mix in local disks (in addition
to or instead of shared disks), but the principle remains the same.
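For a plugin, this means activate_volume on the target node must
tolerate the volume still being attached (and still seeing guest I/O)
on the source node, and deactivate_volume must only tear down the
local attachment. A minimal sketch of that, assuming the standard
activate_volume/deactivate_volume signatures from PVE::Storage::Plugin
(the "myshared-cli" tool and the /dev/myshared/* device paths are
hypothetical stand-ins for your backend):

    package PVE::Storage::Custom::MySharedPlugin;

    use strict;
    use warnings;

    use PVE::Tools qw(run_command);

    use base qw(PVE::Storage::Plugin);

    # Called on the migration target while the volume is still active
    # on the source node, so it must be idempotent and must not demand
    # exclusive access to the volume.
    sub activate_volume {
        my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;

        # Hypothetical: the backend exposes an attached volume as a
        # local block device.
        my $dev = "/dev/myshared/$volname";

        # Idempotent: nothing to do if already attached on this node.
        return if -b $dev;

        # Attach on *this* node only; the source node still has the
        # volume attached at this point.
        run_command(['myshared-cli', 'attach', $volname]);
    }

    sub deactivate_volume {
        my ($class, $storeid, $scfg, $volname, $snapname, $cache) = @_;

        my $dev = "/dev/myshared/$volname";
        return if !-b $dev; # already detached locally

        # Tear down only the local attachment; never invalidate the
        # volume globally, it may already be active on the target.
        run_command(['myshared-cli', 'detach', $volname]);
    }

    1;

The key property is that both hooks only manage this node's
attachment, so the overlapping activate-on-target /
deactivate-on-source window during migration is harmless.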
> Can I/O operations take place on node2 before activation on node2 is
> completed?

Only if your storage does that on its own. If it does so in a
problematic fashion, it's not really a useful shared storage ;)

Fabian