Subject: Re: [PATCH guest-common/qemu-server/docs/manager v3 0/11] Virtiofs improvements
From: Filip Schauer <f.schauer@proxmox.com>
To: Markus Frank <m.frank@proxmox.com>, pve-devel@lists.proxmox.com
Date: Mon, 27 Apr 2026 17:18:41 +0200
In-Reply-To: <20260427121746.270544-1-m.frank@proxmox.com>
References: <20260427121746.270544-1-m.frank@proxmox.com>

On 27/04/2026 14:16, Markus Frank wrote:
> It seems the migration-mode find-paths is enabled by default:
> https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/src/main.rs?ref_type=heads#L359
>
> I would still disable live migration by default on our side so that
> users have to actively choose the migration mode and learn about the
> limitations and risks.
>
> The VM and virtiofsd need to be stopped and started again for the
> migration mode change to take effect.

I tested live migration while copying a 1 GiB file within the virtiofs
share, using NFS, CephFS, and CIFS as the shared backing file system.
The guest was running Debian 13.

These are the steps I performed:

1. Start copying inside the VM:
   `/mnt/virtiofs# dd if=testA of=testB status=progress`
2. Live-migrate the VM to another host.
3. Check that the copy continues normally.
4. Once the copy has completed, check whether the two files are
   identical and whether reading from and writing to the share still
   work (a sketch of this check follows the results overview).

Overview of the results:

backing fs  migration mode  data integrity  write works?  read works?
NFS         find-paths      ok              yes           yes
NFS         file-handles    data mismatch   yes           yes
CephFS      find-paths      ok              yes           yes
CephFS      file-handles    ok              yes           yes
CIFS        find-paths      copy failed     I/O error     I/O error
CIFS        file-handles    copy failed     I/O error     yes

Live migration with CephFS backing the virtiofs share worked
flawlessly. NFS worked well with find-paths, but with file-handles it
corrupted (zeroed out) 1536 bytes of data.
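For reference, the check in step 4 boils down to roughly the following
inside the guest (a sketch; file names and mount point are the ones
from the steps above, the probe file name is made up):

    # Count differing bytes between original and copy; `cmp -l`
    # prints one line per differing byte, so the line count is the
    # number of corrupted bytes (0 means the files are identical).
    cmp -l /mnt/virtiofs/testA /mnt/virtiofs/testB | wc -l

    # Probe that the share still accepts writes and serves reads.
    echo probe > /mnt/virtiofs/write-probe && cat /mnt/virtiofs/write-probe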
Live migration while writing to a virtiofs share backed by CIFS led to
an I/O error, leaving the share in a broken state.

Logs on the source node during live migration with find-paths on CIFS:

Apr 27 16:58:00 pve2 pvedaemon[1185]: starting task UPID:pve2:00001653:0001BB68:69EF7978:qmigrate:100:root@pam:
Apr 27 16:58:00 pve2 systemd[1]: Starting pve-dbus-vmstate@100.service - PVE DBus VMState Helper (VM 100)...
Apr 27 16:58:00 pve2 dbus-vmstate[5717]: pve-vmstate-100 listening on :1.21
Apr 27 16:58:00 pve2 systemd[1]: Started pve-dbus-vmstate@100.service - PVE DBus VMState Helper (VM 100).
Apr 27 16:58:01 pve2 pmxcfs[834]: [status] notice: received log
Apr 27 16:58:03 pve2 pmxcfs[834]: [status] notice: received log
Apr 27 16:58:27 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Inode 2: Operation not supported (os error 95)
Apr 27 16:58:27 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Inode 3: Operation not supported (os error 95)
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Inode 2: Operation not supported (os error 95)
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Inode 3: Operation not supported (os error 95)
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Failed to serialize inode 1 (st_dev=51, mnt_id=330, st_ino=131663): Failed to reconstruct inode location; marking as invalid
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Failed to serialize inode 2 (st_dev=51, mnt_id=330, st_ino=131665): Failed to reconstruct inode location; marking as invalid
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Failed to serialize inode 3 (st_dev=51, mnt_id=330, st_ino=130930): Failed to reconstruct inode location; marking as invalid
Apr 27 16:58:32 pve2 dbus-vmstate[5717]: received 0 conntrack entries
Apr 27 16:58:32 pve2 dbus-vmstate[5717]: transferring 0 bytes of conntrack state
Apr 27 16:58:36 pve2 dbus-vmstate[5717]: shutting down gracefully ..
Apr 27 16:58:36 pve2 systemd[1]: pve-dbus-vmstate@100.service: Deactivated successfully.

Logs on the target node during live migration with find-paths on CIFS:

Apr 27 16:58:01 pve3 qm[5049]: start VM 100: UPID:pve3:000013B9:00017FBA:69EF7979:qmstart:100:root@pam:
Apr 27 16:58:02 pve3 systemd[1]: Created slice qemu.slice - Slice /qemu.
Apr 27 16:58:02 pve3 systemd[1]: Started 100.scope.
Apr 27 16:58:02 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Waiting for vhost-user socket connection...
Apr 27 16:58:02 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Client connected, servicing requests
Apr 27 16:58:03 pve3 qm[5049]: VM 100 started with PID 5084.
Apr 27 16:58:03 pve3 systemd[1]: Starting pve-dbus-vmstate@100.service - PVE DBus VMState Helper (VM 100)...
Apr 27 16:58:03 pve3 dbus-vmstate[5148]: pve-vmstate-100 listening on :1.22
Apr 27 16:58:03 pve3 systemd[1]: Started pve-dbus-vmstate@100.service - PVE DBus VMState Helper (VM 100).
Apr 27 16:58:03 pve3 qm[5048]: end task UPID:pve3:000013B9:00017FBA:69EF7979:qmstart:100:root@pam: OK
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid inode 1 indexed: Migration source has lost inode 1
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid inode 3 indexed: Migration source has lost inode 3
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid inode 2 indexed: Migration source has lost inode 2
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid handle 1 is open in guest: Opening inode 2 as handle 1: Inode is invalid because of an error during the preceding migration, which was: Migration source has lost inode 2
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid handle 2 is open in guest: Opening inode 3 as handle 2: Inode is invalid because of an error during the preceding migration, which was: Migration source has lost inode 3

Logs on the source node during live migration with file-handles on CIFS:

Apr 27 17:11:03 pve3 pvedaemon[1198]: starting task UPID:pve3:00001E6D:0002B0D4:69EF7C87:qmigrate:100:root@pam:
Apr 27 17:11:03 pve3 systemd[1]: Starting pve-dbus-vmstate@100.service - PVE DBus VMState Helper (VM 100)...
Apr 27 17:11:03 pve3 dbus-vmstate[7791]: pve-vmstate-100 listening on :1.56
Apr 27 17:11:03 pve3 systemd[1]: Started pve-dbus-vmstate@100.service - PVE DBus VMState Helper (VM 100).
Apr 27 17:11:04 pve3 pmxcfs[830]: [status] notice: received log
Apr 27 17:11:06 pve3 pmxcfs[830]: [status] notice: received log
Apr 27 17:11:29 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Inode 2 (/fileA): Failed to generate file handle: Operation not supported (os error 95)
Apr 27 17:11:29 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Inode 3 (/fileB): Failed to generate file handle: Operation not supported (os error 95)
Apr 27 17:11:34 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Failed to serialize inode 2 (st_dev=51, mnt_id=338, st_ino=131665): Failed to reconstruct inode location; marking as invalid
Apr 27 17:11:34 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Failed to serialize inode 3 (st_dev=51, mnt_id=338, st_ino=130930): Failed to reconstruct inode location; marking as invalid
Apr 27 17:11:34 pve3 dbus-vmstate[7791]: received 0 conntrack entries
Apr 27 17:11:34 pve3 dbus-vmstate[7791]: transferring 0 bytes of conntrack state
Apr 27 17:11:37 pve3 dbus-vmstate[7791]: shutting down gracefully ..
Apr 27 17:11:37 pve3 systemd[1]: pve-dbus-vmstate@100.service: Deactivated successfully.
Apr 27 17:11:40 pve3 qmeventd[711]: read: Connection reset by peer
Apr 27 17:11:40 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Client disconnected, shutting down

Logs on the target node during live migration with file-handles on CIFS:

Apr 27 17:11:04 pve2 qm[8116]: start VM 100: UPID:pve2:00001FB4:0002EDB2:69EF7C88:qmstart:100:root@pam:
Apr 27 17:11:05 pve2 systemd[1]: Started 100.scope.
Apr 27 17:11:05 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Waiting for vhost-user socket connection...
Apr 27 17:11:05 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Client connected, servicing requests
Apr 27 17:11:05 pve2 qm[8116]: VM 100 started with PID 8151.
Apr 27 17:11:05 pve2 systemd[1]: Starting pve-dbus-vmstate@100.service - PVE DBus VMState Helper (VM 100)...
Apr 27 17:11:06 pve2 dbus-vmstate[8208]: pve-vmstate-100 listening on :1.30
Apr 27 17:11:06 pve2 systemd[1]: Started pve-dbus-vmstate@100.service - PVE DBus VMState Helper (VM 100).
Apr 27 17:11:06 pve2 qm[8115]: end task UPID:pve2:00001FB4:0002EDB2:69EF7C88:qmstart:100:root@pam: OK
Apr 27 17:11:34 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Invalid inode 3 indexed: Migration source has lost inode 3
Apr 27 17:11:34 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Invalid inode 2 indexed: Migration source has lost inode 2
Apr 27 17:11:34 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Invalid handle 1 is open in guest: Opening inode 2 as handle 1: Inode is invalid because of an error during the preceding migration, which was: Migration source has lost inode 2
Apr 27 17:11:34 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Invalid handle 2 is open in guest: Opening inode 3 as handle 2: Inode is invalid because of an error during the preceding migration, which was: Migration source has lost inode 3