From: Fiona Ebner <f.ebner@proxmox.com>
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [PATCH v2 qemu-server] virtiofs: prevent issue with Windows OS and too many files
Date: Fri, 2 May 2025 16:21:33 +0200
Message-ID: <20250502142133.59401-1-f.ebner@proxmox.com>

As reported in the community forum [0] and the virtio-win project [1],
virtiofsd will run into its open file limit when used with a Windows
guest that reads too many files. It's also reported that the issue
does not occur with Linux guests and that a workaround is to use
'--inode-file-handles=mandatory' on the virtiofsd command line.
The option is described as follows in the virtiofsd help:
> When to use file handles to reference inodes instead of O_PATH file
> descriptors (never, prefer, mandatory)
and the default is 'never'.
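
For illustration, a manual invocation with the reported workaround
would look roughly like this (binary location, socket path and shared
directory are placeholders and will differ per setup):

    /usr/libexec/virtiofsd --socket-path=/run/virtiofsd/vm100-fs0 \
        --shared-dir /path/to/share --announce-submounts --syslog \
        --inode-file-handles=mandatory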
Fix the above issue by using 'prefer' rather than 'mandatory', because
that should not break other edge cases:
> prefer: Attempt to generate file handles, but fall back to O_PATH
> file descriptors where the underlying filesystem does not support
> file handles. Useful when there are various different filesystems
> under the shared directory and some of them do not support file
> handles.
[0]: https://forum.proxmox.com/threads/165565/
[1]: https://github.com/virtio-win/kvm-guest-drivers-windows/issues/1136
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Tested-by: Markus Frank <m.frank@proxmox.com>
---
Changes in v2:
* fix typo in commit message
* add Markus's T-b trailer
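
Note for reviewers: with this patch applied, the command line spawned
for a VM with a Windows ostype gains the new option, i.e. roughly
(earlier options elided, values illustrative):

    ... --announce-submounts --inode-file-handles=prefer --syslog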
PVE/QemuServer/Virtiofs.pm | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
index cfde92c9..5a91b23a 100644
--- a/PVE/QemuServer/Virtiofs.pm
+++ b/PVE/QemuServer/Virtiofs.pm
@@ -130,14 +130,17 @@ sub start_all_virtiofsd {
next if !$conf->{$opt};
my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
- my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
+ # See https://github.com/virtio-win/kvm-guest-drivers-windows/issues/1136
+ my $prefer_inode_fh = PVE::QemuServer::Helpers::windows_version($conf->{ostype}) ? 1 : 0;
+
+ my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs, $prefer_inode_fh);
push @$virtiofs_sockets, $virtiofs_socket;
}
return $virtiofs_sockets;
}
sub start_virtiofsd {
- my ($vmid, $fsid, $virtiofs) = @_;
+ my ($vmid, $fsid, $virtiofs, $prefer_inode_fh) = @_;
mkdir $socket_path_root;
my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
@@ -175,6 +178,7 @@ sub start_virtiofsd {
push @$cmd, '--announce-submounts';
push @$cmd, '--allow-direct-io' if $virtiofs->{'direct-io'};
push @$cmd, '--cache='.$virtiofs->{cache} if $virtiofs->{cache};
+ push @$cmd, '--inode-file-handles=prefer' if $prefer_inode_fh;
push @$cmd, '--syslog';
exec(@$cmd);
} elsif (!defined($pid2)) {
--
2.39.5