From: Filip Schauer <f.schauer@proxmox.com>
To: Markus Frank <m.frank@proxmox.com>, pve-devel@lists.proxmox.com
Subject: Re: [PATCH guest-common/qemu-server/docs/manager v3 0/11] Virtiofs improvements
Date: Mon, 27 Apr 2026 17:18:41 +0200	[thread overview]
Message-ID: <d8e8960a-0abe-4635-ae5d-ec7492995465@proxmox.com> (raw)
In-Reply-To: <20260427121746.270544-1-m.frank@proxmox.com>

On 27/04/2026 14:16, Markus Frank wrote:
> It seems the migration-mode find-paths is enabled by default:
> https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/src/main.rs?ref_type=heads#L359
>
> I would still disable live migration by default on our side so that
> users have to actively choose the migration mode and learn about the
> limitations and risks.
>
> The VM and virtiofsd need to be stopped and started again for the
> migration mode change to take effect.

I tested live migration while copying a 1 GiB file within the virtiofs
share, using NFS, CephFS, and CIFS as the shared backing file system.
The guest was running Debian 13.

These were the steps I performed:
1. Start copying inside the VM
    `/mnt/virtiofs# dd if=testA of=testB status=progress`
2. Live migrate the VM to another host
3. Check if copying continues normally
4. Once the copy completed, check that the two files are identical and
    that reading and writing to the share still work
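
For reference, the copy-and-verify part of the steps above can be
sketched as a small script. This is only an illustration: it runs
against a temporary directory with a 1 MiB file, whereas the real test
copied a 1 GiB file under /mnt/virtiofs inside the guest, with the live
migration happening mid-copy.

```shell
# Sketch of the copy-and-verify procedure; uses a temp dir instead of
# the real virtiofs mount, and 1 MiB instead of 1 GiB.
set -eu
dir=$(mktemp -d)
# create a test file to stand in for testA
dd if=/dev/urandom of="$dir/testA" bs=1M count=1 status=none
# step 1: copy within the share (live migration would happen mid-copy)
dd if="$dir/testA" of="$dir/testB" status=none
# step 4: check that the two files are identical
if cmp -s "$dir/testA" "$dir/testB"; then result=ok; else result=mismatch; fi
echo "data integrity: $result"
```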


Overview of the results:

backing fs  live mig. method  data integrity  write works?  read works?
NFS         find-paths        ok              yes           yes
NFS         file-handles      data mismatch   yes           yes
CephFS      find-paths        ok              yes           yes
CephFS      file-handles      ok              yes           yes
CIFS        find-paths        copy failed     I/O error     I/O error
CIFS        file-handles      copy failed     I/O error     yes


Live migration with a CephFS-backed virtiofs share worked flawlessly.

NFS worked well with find-paths, but with file-handles it corrupted
(zeroed out) 1536 bytes of data.

Live migration while writing to a virtiofs share backed by CIFS led to
an I/O error, leaving the share in a broken state.
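
The "os error 95" (EOPNOTSUPP) in the logs below is consistent with
CIFS not supporting file handles. As a rough pre-check (my own sketch,
not part of this patch series), one could inspect the backing
filesystem type of the share before picking a migration mode; the
path here is illustrative, and the warnings only reflect the test
results above.

```shell
# Hypothetical pre-check: warn about backing filesystems that misbehaved
# during virtiofs live migration in the tests above.
# /tmp stands in for the real share directory, e.g. /mnt/cifs-share.
share=/tmp
fstype=$(stat -f -c %T "$share")
case "$fstype" in
    cifs|smb2)
        echo "warning: $fstype backing fs; live migration failed with both methods in testing" ;;
    nfs)
        echo "note: file-handles corrupted data on NFS in testing; prefer find-paths" ;;
    *)
        echo "backing fs: $fstype" ;;
esac
```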

Logs on source node during live migration with find-paths on CIFS:
Apr 27 16:58:00 pve2 pvedaemon[1185]: <root@pam> starting task 
UPID:pve2:00001653:0001BB68:69EF7978:qmigrate:100:root@pam:
Apr 27 16:58:00 pve2 systemd[1]: Starting pve-dbus-vmstate@100.service - 
PVE DBus VMState Helper (VM 100)...
Apr 27 16:58:00 pve2 dbus-vmstate[5717]: pve-vmstate-100 listening on :1.21
Apr 27 16:58:00 pve2 systemd[1]: Started pve-dbus-vmstate@100.service - 
PVE DBus VMState Helper (VM 100).
Apr 27 16:58:01 pve2 pmxcfs[834]: [status] notice: received log
Apr 27 16:58:03 pve2 pmxcfs[834]: [status] notice: received log
Apr 27 16:58:27 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Inode 2: 
Operation not supported (os error 95)
Apr 27 16:58:27 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Inode 3: 
Operation not supported (os error 95)
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Inode 2: 
Operation not supported (os error 95)
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Inode 3: 
Operation not supported (os error 95)
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Failed to 
serialize inode 1 (st_dev=51, mnt_id=330, st_ino=131663): Failed to 
reconstruct inode location; marking as invalid
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Failed to 
serialize inode 2 (st_dev=51, mnt_id=330, st_ino=131665): Failed to 
reconstruct inode location; marking as invalid
Apr 27 16:58:32 pve2 virtiofsd[5199]: pve2 virtiofsd[5196]: Failed to 
serialize inode 3 (st_dev=51, mnt_id=330, st_ino=130930): Failed to 
reconstruct inode location; marking as invalid
Apr 27 16:58:32 pve2 dbus-vmstate[5717]: received 0 conntrack entries
Apr 27 16:58:32 pve2 dbus-vmstate[5717]: transferring 0 bytes of 
conntrack state
Apr 27 16:58:36 pve2 dbus-vmstate[5717]: shutting down gracefully ..
Apr 27 16:58:36 pve2 systemd[1]: pve-dbus-vmstate@100.service: 
Deactivated successfully.

Logs on target node during live migration with find-paths on CIFS:
Apr 27 16:58:01 pve3 qm[5049]: start VM 100: 
UPID:pve3:000013B9:00017FBA:69EF7979:qmstart:100:root@pam:
Apr 27 16:58:02 pve3 systemd[1]: Created slice qemu.slice - Slice /qemu.
Apr 27 16:58:02 pve3 systemd[1]: Started 100.scope.
Apr 27 16:58:02 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Waiting for 
vhost-user socket connection...
Apr 27 16:58:02 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Client 
connected, servicing requests
Apr 27 16:58:03 pve3 qm[5049]: VM 100 started with PID 5084.
Apr 27 16:58:03 pve3 systemd[1]: Starting pve-dbus-vmstate@100.service - 
PVE DBus VMState Helper (VM 100)...
Apr 27 16:58:03 pve3 dbus-vmstate[5148]: pve-vmstate-100 listening on :1.22
Apr 27 16:58:03 pve3 systemd[1]: Started pve-dbus-vmstate@100.service - 
PVE DBus VMState Helper (VM 100).
Apr 27 16:58:03 pve3 qm[5048]: <root@pam> end task 
UPID:pve3:000013B9:00017FBA:69EF7979:qmstart:100:root@pam: OK
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid 
inode 1 indexed: Migration source has lost inode 1
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid 
inode 3 indexed: Migration source has lost inode 3
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid 
inode 2 indexed: Migration source has lost inode 2
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid 
handle 1 is open in guest: Opening inode 2 as handle 1: Inode is invalid 
because of an error during the preceding migration, which was: Migration 
source has lost inode 2
Apr 27 16:58:32 pve3 virtiofsd[5079]: pve3 virtiofsd[5076]: Invalid 
handle 2 is open in guest: Opening inode 3 as handle 2: Inode is invalid 
because of an error during the preceding migration, which was: Migration 
source has lost inode 3


Logs on source node during live migration with file-handles on CIFS:
Apr 27 17:11:03 pve3 pvedaemon[1198]: <root@pam> starting task 
UPID:pve3:00001E6D:0002B0D4:69EF7C87:qmigrate:100:root@pam:
Apr 27 17:11:03 pve3 systemd[1]: Starting pve-dbus-vmstate@100.service - 
PVE DBus VMState Helper (VM 100)...
Apr 27 17:11:03 pve3 dbus-vmstate[7791]: pve-vmstate-100 listening on :1.56
Apr 27 17:11:03 pve3 systemd[1]: Started pve-dbus-vmstate@100.service - 
PVE DBus VMState Helper (VM 100).
Apr 27 17:11:04 pve3 pmxcfs[830]: [status] notice: received log
Apr 27 17:11:06 pve3 pmxcfs[830]: [status] notice: received log
Apr 27 17:11:29 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Inode 2 
(/fileA): Failed to generate file handle: Operation not supported (os 
error 95)
Apr 27 17:11:29 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Inode 3 
(/fileB): Failed to generate file handle: Operation not supported (os 
error 95)
Apr 27 17:11:34 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Failed to 
serialize inode 2 (st_dev=51, mnt_id=338, st_ino=131665): Failed to 
reconstruct inode location; marking as invalid
Apr 27 17:11:34 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Failed to 
serialize inode 3 (st_dev=51, mnt_id=338, st_ino=130930): Failed to 
reconstruct inode location; marking as invalid
Apr 27 17:11:34 pve3 dbus-vmstate[7791]: received 0 conntrack entries
Apr 27 17:11:34 pve3 dbus-vmstate[7791]: transferring 0 bytes of 
conntrack state
Apr 27 17:11:37 pve3 dbus-vmstate[7791]: shutting down gracefully ..
Apr 27 17:11:37 pve3 systemd[1]: pve-dbus-vmstate@100.service: 
Deactivated successfully.
Apr 27 17:11:40 pve3 qmeventd[711]: read: Connection reset by peer
Apr 27 17:11:40 pve3 virtiofsd[7469]: pve3 virtiofsd[7467]: Client 
disconnected, shutting down

Logs on target node during live migration with file-handles on CIFS:
Apr 27 17:11:04 pve2 qm[8116]: start VM 100: 
UPID:pve2:00001FB4:0002EDB2:69EF7C88:qmstart:100:root@pam:
Apr 27 17:11:05 pve2 systemd[1]: Started 100.scope.
Apr 27 17:11:05 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Waiting for 
vhost-user socket connection...
Apr 27 17:11:05 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Client 
connected, servicing requests
Apr 27 17:11:05 pve2 qm[8116]: VM 100 started with PID 8151.
Apr 27 17:11:05 pve2 systemd[1]: Starting pve-dbus-vmstate@100.service - 
PVE DBus VMState Helper (VM 100)...
Apr 27 17:11:06 pve2 dbus-vmstate[8208]: pve-vmstate-100 listening on :1.30
Apr 27 17:11:06 pve2 systemd[1]: Started pve-dbus-vmstate@100.service - 
PVE DBus VMState Helper (VM 100).
Apr 27 17:11:06 pve2 qm[8115]: <root@pam> end task 
UPID:pve2:00001FB4:0002EDB2:69EF7C88:qmstart:100:root@pam: OK
Apr 27 17:11:34 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Invalid 
inode 3 indexed: Migration source has lost inode 3
Apr 27 17:11:34 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Invalid 
inode 2 indexed: Migration source has lost inode 2
Apr 27 17:11:34 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Invalid 
handle 1 is open in guest: Opening inode 2 as handle 1: Inode is invalid 
because of an error during the preceding migration, which was: Migration 
source has lost inode 2
Apr 27 17:11:34 pve2 virtiofsd[8146]: pve2 virtiofsd[8143]: Invalid 
handle 2 is open in guest: Opening inode 3 as handle 2: Inode is invalid 
because of an error during the preceding migration, which was: Migration 
source has lost inode 3




