public inbox for pve-devel@lists.proxmox.com
 help / color / mirror / Atom feed
* [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs
@ 2023-11-08  8:52 Markus Frank
  2023-11-08  8:52 ` [pve-devel] [PATCH cluster v8 1/7] add mapping/dir.cfg for resource mapping Markus Frank
                   ` (8 more replies)
  0 siblings, 9 replies; 26+ messages in thread
From: Markus Frank @ 2023-11-08  8:52 UTC (permalink / raw)
  To: pve-devel

build-order:
1. cluster
2. guest-common
3. docs
4. qemu-server
5. manager

I did not get virtiofsd to run with run_command without creating zombie
processes after stutdown.
So I replaced run_command with exec for now. 
Maybe someone can find out why this happens.


cluster:

Markus Frank (1):
  add mapping/dir.cfg for resource mapping

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)


guest-common:

v7:
* renamed DIR to Dir
* made xattr & acl settings per directory-id and not per node

Markus Frank (1):
  add Dir mapping config

 src/Makefile           |   1 +
 src/PVE/Mapping/Dir.pm | 177 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 178 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm


docs:

v8:
 * added "Known Limitations"
 * removed old mount tag

Markus Frank (1):
  added shared filesystem doc for virtio-fs

 qm.adoc | 84 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 82 insertions(+), 2 deletions(-)


qemu-server:

v8:
 * changed permission checks to cover cloning and restoring and
 made the helper functions similar to the PCI, USB permission check functions.
 * warn if acl is activated on Windows VM, since the virtiofs device cannot be
 mounted on Windows if acl is on and moved with dir config validation to
 its own function. This function is called in config_to_command so that
 no virtiofsd command is running although qmstart died because a second
 virtiofs device was incorrectly configured.

v7:
 * enabled use of hugepages
 * renamed variables
 * added acl & xattr parameters that overwrite the default directory
 mapping settings

v6:
 * added virtiofsd dependency
 * 2 new patches:
    * Permission check for virtiofs directory access
    * check_local_resources: virtiofs

v5:
 * allow numa settings with virtio-fs
 * added direct-io & cache settings
 * changed to rust implementation of virtiofsd
 * made double fork and closed all file descriptor so that the lockfile
 gets released.

v3:
 * created own socket and get file descriptor for virtiofsd
 so there is no race between starting virtiofsd & qemu
 * added TODO to replace virtiofsd with rust implementation in bookworm
 (I packaged the rust implementation for bookworm & the C implementation
 in qemu will be removed in qemu 8.0)

v2:
 * replaced sharedfiles_fmt path in qemu-server with dirid:
 * user can use the dirid to specify the directory without requiring root access

Markus Frank (3):
  feature #1027: virtio-fs support
  Permission check for virtiofs directory access
  check_local_resources: virtiofs

 PVE/API2/Qemu.pm             |  39 ++++++-
 PVE/QemuServer.pm            | 200 ++++++++++++++++++++++++++++++++++-
 PVE/QemuServer/Memory.pm     |  25 +++--
 debian/control               |   1 +
 test/MigrationTest/Shared.pm |   7 ++
 5 files changed, 263 insertions(+), 9 deletions(-)


manager:

v8: removed ui patches for now

Markus Frank (1):
  api: add resource map api endpoints for directories

 PVE/API2/Cluster/Mapping.pm       |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 309 ++++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile |   3 +-
 3 files changed, 318 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm

-- 
2.39.2

From d327cf6a5f7108518f95bf133008ea449b8385c9 Mon Sep 17 00:00:00 2001
From: Markus Frank <m.frank@proxmox.com>
Date: Wed, 8 Nov 2023 09:14:13 +0100
Subject: [PATCH qemu-server v8 0/6] *** SUBJECT HERE ***

*** BLURB HERE ***


-- 
2.39.2





^ permalink raw reply	[flat|nested] 26+ messages in thread

* [pve-devel] [PATCH cluster v8 1/7] add mapping/dir.cfg for resource mapping
  2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
@ 2023-11-08  8:52 ` Markus Frank
  2024-01-31 12:01   ` Fiona Ebner
  2023-11-08  8:52 ` [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config Markus Frank
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Markus Frank @ 2023-11-08  8:52 UTC (permalink / raw)
  To: pve-devel

Add it to both, the perl side (PVE/Cluster.pm) and pmxcfs side
(status.c).

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index cfa2583..39bdfa1 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -82,6 +82,7 @@ my $observed = {
     'virtual-guest/cpu-models.conf' => 1,
     'mapping/pci.cfg' => 1,
     'mapping/usb.cfg' => 1,
+    'mapping/dir.cfg' => 1,
 };
 
 sub prepare_observed_file_basedirs {
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index c8094ac..48ebc62 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -112,6 +112,7 @@ static memdb_change_t memdb_change_array[] = {
 	{ .path = "firewall/cluster.fw" },
 	{ .path = "mapping/pci.cfg" },
 	{ .path = "mapping/usb.cfg" },
+	{ .path = "mapping/dir.cfg" },
 };
 
 static GMutex mutex;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 26+ messages in thread

* [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config
  2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
  2023-11-08  8:52 ` [pve-devel] [PATCH cluster v8 1/7] add mapping/dir.cfg for resource mapping Markus Frank
@ 2023-11-08  8:52 ` Markus Frank
  2024-01-31 12:01   ` Fiona Ebner
  2024-01-31 13:02   ` Fiona Ebner
  2023-11-08  8:52 ` [pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs Markus Frank
                   ` (6 subsequent siblings)
  8 siblings, 2 replies; 26+ messages in thread
From: Markus Frank @ 2023-11-08  8:52 UTC (permalink / raw)
  To: pve-devel

Adds a config file for directories by using a 'map'
array propertystring for each node mapping.

Next to node & path, there is the optional
submounts parameter in the map array.
Additionally there are the default settings for xattr & acl.

example config:
```
some-dir-id
	map node=node1,path=/mnt/share/,submounts=1
	map node=node2,path=/mnt/share/,
	xattr 1
	acl 1
```

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 src/Makefile           |   1 +
 src/PVE/Mapping/Dir.pm | 177 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 178 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm

diff --git a/src/Makefile b/src/Makefile
index cbc40c1..2ebe08b 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -17,6 +17,7 @@ install: PVE
 	install -d ${PERL5DIR}/PVE/Mapping
 	install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
 	install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
+	install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
 	install -d ${PERL5DIR}/PVE/VZDump
 	install -m 0644 PVE/VZDump/Plugin.pm ${PERL5DIR}/PVE/VZDump/
 	install -m 0644 PVE/VZDump/Common.pm ${PERL5DIR}/PVE/VZDump/
diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
new file mode 100644
index 0000000..6c87073
--- /dev/null
+++ b/src/PVE/Mapping/Dir.pm
@@ -0,0 +1,177 @@
+package PVE::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+use PVE::INotify;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+                  sub { __PACKAGE__->parse_config(@_); },
+                  sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+    my ($class, $line) = @_;
+
+    if ($line =~ m/^(\S+)\s*$/) {
+	my $id = $1;
+	my $errmsg = undef; # set if you want to skip whole section
+	eval { PVE::JSONSchema::pve_verify_configid($id) };
+	$errmsg = $@ if $@;
+	my $config = {}; # to return additional attributes
+	return ('dir', $id, $errmsg, $config);
+    }
+    return undef;
+}
+
+sub format_section_header {
+    my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+    return "$sectionId\n";
+}
+
+sub type {
+    return 'dir';
+}
+
+my $map_fmt = {
+    node => get_standard_option('pve-node'),
+    path => {
+	description => "Directory-path that should be shared with the guest.",
+	type => 'string',
+	format => 'pve-storage-path',
+    },
+    submounts => {
+	type => 'boolean',
+	description => "Option to tell the guest which directories are mount points.",
+	optional => 1,
+    },
+    description => {
+	description => "Description of the node specific directory.",
+	type => 'string',
+	optional => 1,
+	maxLength => 4096,
+    },
+};
+
+my $defaultData = {
+    propertyList => {
+        id => {
+            type => 'string',
+            description => "The ID of the directory",
+            format => 'pve-configid',
+        },
+        description => {
+            description => "Description of the directory",
+            type => 'string',
+            optional => 1,
+            maxLength => 4096,
+        },
+        map => {
+            type => 'array',
+            description => 'A list of maps for the cluster nodes.',
+	    optional => 1,
+            items => {
+                type => 'string',
+                format => $map_fmt,
+            },
+        },
+	xattr => {
+	    type => 'boolean',
+	    description => "Enable support for extended attributes.",
+	    optional => 1,
+	},
+	acl => {
+	    type => 'boolean',
+	    description => "Enable support for posix ACLs (implies --xattr).",
+	    optional => 1,
+	},
+    },
+};
+
+sub private {
+    return $defaultData;
+}
+
+sub options {
+    return {
+        description => { optional => 1 },
+        map => {},
+        xattr => { optional => 1 },
+        acl => { optional => 1 },
+    };
+}
+
+sub assert_valid {
+    my ($dir_cfg) = @_;
+
+    my $path = $dir_cfg->{path};
+
+    if (! -e $path) {
+        die "Path $path does not exist\n";
+    }
+    if ((-e $path) && (! -d $path)) {
+        die "Path $path exists but is not a directory\n"
+    }
+
+    return 1;
+};
+
+sub config {
+    return cfs_read_file($FILENAME);
+}
+
+sub lock_dir_config {
+    my ($code, $errmsg) = @_;
+
+    cfs_lock_file($FILENAME, undef, $code);
+    my $err = $@;
+    if ($err) {
+        $errmsg ? die "$errmsg: $err" : die $err;
+    }
+}
+
+sub write_dir_config {
+    my ($cfg) = @_;
+
+    cfs_write_file($FILENAME, $cfg);
+}
+
+sub find_on_current_node {
+    my ($id) = @_;
+
+    my $cfg = config();
+    my $node = PVE::INotify::nodename();
+
+    return get_node_mapping($cfg, $id, $node);
+}
+
+sub get_node_mapping {
+    my ($cfg, $id, $nodename) = @_;
+
+    return undef if !defined($cfg->{ids}->{$id});
+
+    my $res = [];
+    my $mapping_list = $cfg->{ids}->{$id}->{map};
+    foreach my $map (@{$mapping_list}) {
+	my $entry = eval { parse_property_string($map_fmt, $map) };
+	warn $@ if $@;
+	if ($entry && $entry->{node} eq $nodename) {
+	    push $res->@*, $entry;
+	}
+    }
+    return $res;
+}
+
+PVE::Mapping::Dir->register();
+PVE::Mapping::Dir->init();
+
+1;
-- 
2.39.2





^ permalink raw reply	[flat|nested] 26+ messages in thread

* [pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs
  2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
  2023-11-08  8:52 ` [pve-devel] [PATCH cluster v8 1/7] add mapping/dir.cfg for resource mapping Markus Frank
  2023-11-08  8:52 ` [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config Markus Frank
@ 2023-11-08  8:52 ` Markus Frank
  2024-01-31 13:26   ` Fiona Ebner
  2024-01-31 13:34   ` Fiona Ebner
  2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 4/7] feature #1027: virtio-fs support Markus Frank
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 26+ messages in thread
From: Markus Frank @ 2023-11-08  8:52 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 qm.adoc | 84 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 82 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index c4f1024..571c42e 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -996,6 +996,85 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared file system, that enables sharing a directory between
+host and guest VM while taking advantage of the locality of virtual machines
+and the hypervisor to get a higher throughput than 9p.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background.
+In {pve} this process starts immediately before the start of QEMU.
+
+Linux VMs with kernel >=5.4 support this feature by default.
+
+There is a guide available on how to utilize virtio-fs in Windows VMs.
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+Known limitations
+^^^^^^^^^^^^^^^^^
+
+* Virtiofsd crashing means no recovery until VM is fully stopped
+and restarted.
+* Virtiofsd not responding may result in NFS-like hanging access in the VM.
+* Memory hotplug does not work in combination with virtio-fs.
+* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
+otherwise the virtio-fs device will not be visible within the VMs.
+
+Add mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping, either use the API directly with pvesh as described in the
+xref:resource_mapping[Resource Mapping] section,
+or add the mapping to the configuration file `/etc/pve/mapping/dir.cfg`:
+
+----
+some-dir-id
+    map node=node1,path=/mnt/share/,submounts=1
+    map node=node2,path=/mnt/share/,
+    xattr 1
+    acl 1
+----
+
+Set `submounts` to `1` when multiple file systems are mounted in a
+shared directory.
+
+Add virtio-fs to VM
+^^^^^^^^^^^^^^^^^^^
+
+To share a directory using virtio-fs, you need to specify the directory ID
+(dirid) that has been configured in the Resource Mapping.
+Additionally, you can set the `cache` option to either `always`, `never`,
+or `auto`, depending on your requirements.
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the
+`direct-io` parameter to `1`.
+Additionally it possible to overwrite the default mapping settings
+for xattr & acl by setting then to either `1` or `0`.
+
+The `acl` parameter automatically implies `xattr`, that is, it makes no
+difference whether you set xattr to `0` if acl is set to `1`.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,cache=never,xattr=1
+qm set <vmid> -virtiofs2 <dirid>,acl=1
+----
+
+To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
+following command:
+
+The dirid associated with the path on the current node is also used as the
+mount tag (name used to mount the device on the guest).
+
+----
+mount -t virtiofs <mount tag> <mount point>
+----
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
 [[qm_bootorder]]
 Device Boot Order
 ~~~~~~~~~~~~~~~~~
@@ -1603,8 +1682,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
 
 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
 
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
 
 Note that the options must include a map property with all identifying
 properties of that hardware, so that it's possible to verify the hardware did
-- 
2.39.2





^ permalink raw reply	[flat|nested] 26+ messages in thread

* [pve-devel] [PATCH qemu-server v8 4/7] feature #1027: virtio-fs support
  2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
                   ` (2 preceding siblings ...)
  2023-11-08  8:52 ` [pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs Markus Frank
@ 2023-11-08  8:52 ` Markus Frank
  2024-01-31 15:02   ` Fiona Ebner
  2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 5/7] Permission check for virtiofs directory access Markus Frank
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Markus Frank @ 2023-11-08  8:52 UTC (permalink / raw)
  To: pve-devel

add support for sharing directories with a guest vm

virtio-fs needs virtiofsd to be started.
In order to start virtiofsd as a process (despite being a daemon it is does not run
in the background), a double-fork is used.

virtiofsd should close itself together with qemu.

There are the parameters dirid
and the optional parameters direct-io & cache.
Additionally the xattr & acl parameter overwrite the
directory mapping settings for xattr & acl.

The dirid gets mapped to the path on the current node
and is also used as a mount-tag (name used to mount the
device on the guest).

example config:
```
virtiofs0: foo,direct-io=1,cache=always,acl=1
virtiofs1: dirid=bar,cache=never,xattr=1
```

For information on the optional parameters see there:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/QemuServer.pm        | 185 +++++++++++++++++++++++++++++++++++++++
 PVE/QemuServer/Memory.pm |  25 ++++--
 debian/control           |   1 +
 3 files changed, 205 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2895675..92580df 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -43,6 +43,7 @@ use PVE::PBSClient;
 use PVE::RESTEnvironment qw(log_warn);
 use PVE::RPCEnvironment;
 use PVE::Storage;
+use PVE::Mapping::Dir;
 use PVE::SysFSTools;
 use PVE::Systemd;
 use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_foreach get_host_arch $IPV6RE);
@@ -277,6 +278,42 @@ my $rng_fmt = {
     },
 };
 
+my $virtiofs_fmt = {
+    'dirid' => {
+	type => 'string',
+	default_key => 1,
+	description => "Mapping identifier of the directory mapping to be"
+	    ." shared with the guest. Also used as a mount tag inside the VM.",
+	format_description => 'mapping-id',
+	format => 'pve-configid',
+    },
+    'cache' => {
+	type => 'string',
+	description => "The caching policy the file system should use"
+	    ." (auto, always, never).",
+	format_description => "virtiofs-cache",
+	enum => [qw(auto always never)],
+	optional => 1,
+    },
+    'direct-io' => {
+	type => 'boolean',
+	description => "Honor the O_DIRECT flag passed down by guest applications",
+	format_description => "virtiofs-directio",
+	optional => 1,
+    },
+    xattr => {
+	type => 'boolean',
+	description => "Enable support for extended attributes.",
+	optional => 1,
+    },
+    acl => {
+	type => 'boolean',
+	description => "Enable support for posix ACLs (implies --xattr).",
+	optional => 1,
+    },
+};
+PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
+
 my $meta_info_fmt = {
     'ctime' => {
 	type => 'integer',
@@ -839,6 +876,7 @@ while (my ($k, $v) = each %$confdesc) {
 }
 
 my $MAX_NETS = 32;
+my $MAX_VIRTIOFS = 10;
 my $MAX_SERIAL_PORTS = 4;
 my $MAX_PARALLEL_PORTS = 3;
 
@@ -948,6 +986,21 @@ my $netdesc = {
 
 PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
 
+my $virtiofsdesc = {
+    optional => 1,
+    type => 'string', format => $virtiofs_fmt,
+    description => "share files between host and guest",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
+
+sub max_virtiofs {
+    return $MAX_VIRTIOFS;
+}
+
+for (my $i = 0; $i < $MAX_VIRTIOFS; $i++)  {
+    $confdesc->{"virtiofs$i"} = $virtiofsdesc;
+}
+
 my $ipconfig_fmt = {
     ip => {
 	type => 'string',
@@ -4055,6 +4108,23 @@ sub config_to_command {
 	push @$devices, '-device', $netdevicefull;
     }
 
+    my $virtiofs_enabled = 0;
+    for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
+	my $opt = "virtiofs$i";
+
+	next if !$conf->{$opt};
+	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	next if !$virtiofs;
+
+	check_virtiofs_config ($conf, $virtiofs);
+
+	push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
+	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+	    .",chardev=virtfs$i,tag=$virtiofs->{dirid}";
+
+	$virtiofs_enabled = 1;
+    }
+
     if ($conf->{ivshmem}) {
 	my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
 
@@ -4114,6 +4184,14 @@ sub config_to_command {
     }
     push @$machineFlags, "type=${machine_type_min}";
 
+    if ($virtiofs_enabled && !$conf->{numa}) {
+	# kvm: '-machine memory-backend' and '-numa memdev' properties are
+	# mutually exclusive
+	push @$devices, '-object', 'memory-backend-memfd,id=virtiofs-mem'
+	    .",size=$conf->{memory}M,share=on";
+	push @$machineFlags, 'memory-backend=virtiofs-mem';
+    }
+
     push @$cmd, @$devices;
     push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
     push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
@@ -4140,6 +4218,96 @@ sub config_to_command {
     return wantarray ? ($cmd, $vollist, $spice_port, $pci_devices) : $cmd;
 }
 
+sub check_virtiofs_config {
+    my ($conf, $virtiofs) = @_;
+    my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
+    my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+    my $acl = $virtiofs->{'acl'} // $dir_cfg->{'acl'};
+    if ($acl && windows_version($conf->{ostype})) {
+	log_warn(
+	    "Please disable ACLs for virtiofs on Windows VMs, otherwise"
+	    ." the virtiofs shared directory cannot be mounted.\n"
+	);
+    }
+
+    if (!$node_list || scalar($node_list->@*) != 1) {
+	die "virtiofs needs exactly one mapping for this node\n";
+    }
+
+    eval {
+	PVE::Mapping::Dir::assert_valid($node_list->[0]);
+    };
+    if (my $err = $@) {
+	die "Directory Mapping invalid: $err\n";
+    }
+}
+
+sub start_virtiofs {
+    my ($vmid, $fsid, $virtiofs) = @_;
+
+    my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
+    my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+    # Default to dir config xattr & acl settings
+    my $xattr = $virtiofs->{xattr} // $dir_cfg->{xattr};
+    my $acl = $virtiofs->{'acl'} // $dir_cfg->{'acl'};
+
+    my $node_cfg = $node_list->[0];
+    my $path = $node_cfg->{path};
+    my $socket_path_root = "/var/run/virtiofsd";
+    mkdir $socket_path_root;
+    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
+    unlink($socket_path);
+    my $socket = IO::Socket::UNIX->new(
+	Type => SOCK_STREAM,
+	Local => $socket_path,
+	Listen => 1,
+    ) or die "cannot create socket - $!\n";
+
+    my $flags = fcntl($socket, F_GETFD, 0)
+	or die "failed to get file descriptor flags: $!\n";
+    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
+	or die "failed to remove FD_CLOEXEC from file descriptor\n";
+
+    my $fd = $socket->fileno();
+
+    my $virtiofsd_bin = '/usr/libexec/virtiofsd';
+
+    my $pid = fork();
+    if ($pid == 0) {
+	setsid();
+	$0 = "task pve-vm$vmid-virtiofs$fsid";
+	for my $fd_loop (3 .. POSIX::sysconf( &POSIX::_SC_OPEN_MAX )) {
+	    POSIX::close($fd_loop) if ($fd_loop != $fd);
+	}
+
+	my $pid2 = fork();
+	if ($pid2 == 0) {
+	    my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
+	    push @$cmd, '--xattr' if $xattr;
+	    push @$cmd, '--posix-acl' if $acl;
+	    push @$cmd, '--announce-submounts' if ($node_cfg->{submounts});
+	    push @$cmd, '--allow-direct-io' if ($virtiofs->{'direct-io'});
+	    push @$cmd, "--cache=$virtiofs->{'cache'}" if ($virtiofs->{'cache'});
+	    push @$cmd, '--syslog';
+	    exec(@$cmd);
+	} elsif (!defined($pid2)) {
+	    die "could not fork to start virtiofsd\n";
+	} else {
+	    POSIX::_exit(0);
+	}
+    } elsif (!defined($pid)) {
+	die "could not fork to start virtiofsd\n";
+    } else {
+	waitpid($pid, 0);
+    }
+
+    # return socket to keep it alive,
+    # so that qemu will wait for virtiofsd to start
+    return $socket;
+}
+
 sub check_rng_source {
     my ($source) = @_;
 
@@ -5835,6 +6003,18 @@ sub vm_start_nolock {
 	PVE::Tools::run_fork sub {
 	    PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
 
+	    my @virtiofs_sockets;
+	    for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
+		my $opt = "virtiofs$i";
+
+		next if !$conf->{$opt};
+		my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+		next if !$virtiofs;
+
+		my $virtiofs_socket = start_virtiofs($vmid, $i, $virtiofs);
+		push @virtiofs_sockets, $virtiofs_socket;
+	    }
+
 	    my $tpmpid;
 	    if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
 		# start the TPM emulator so QEMU can connect on start
@@ -5849,6 +6029,11 @@ sub vm_start_nolock {
 		}
 		die "QEMU exited with code $exitcode\n";
 	    }
+
+	    foreach my $virtiofs_socket (@virtiofs_sockets) {
+		shutdown($virtiofs_socket, 2);
+		close($virtiofs_socket);
+	    }
 	};
     };
 
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index f365f2d..647595a 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -367,6 +367,16 @@ sub config {
 
     die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
 
+    my $virtiofs_enabled = 0;
+    for (my $i = 0; $i < PVE::QemuServer::max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+	next if !$conf->{$opt};
+	my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	if ($virtiofs) {
+	    $virtiofs_enabled = 1;
+	}
+    }
+
     if ($conf->{numa}) {
 
 	my $numa_totalmemory = undef;
@@ -379,7 +389,8 @@ sub config {
 	    my $numa_memory = $numa->{memory};
 	    $numa_totalmemory += $numa_memory;
 
-	    my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
+	    my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+	    my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
 
 	    # cpus
 	    my $cpulists = $numa->{cpus};
@@ -404,7 +415,7 @@ sub config {
 	    }
 
 	    push @$cmd, '-object', $mem_object;
-	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	}
 
 	die "total memory for NUMA nodes must be equal to vm static memory\n"
@@ -418,13 +429,13 @@ sub config {
 		die "host NUMA node$i doesn't exist\n"
 		    if !host_numanode_exists($i) && $conf->{hugepages};
 
-		my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
-		push @$cmd, '-object', $mem_object;
-
 		my $cpus = ($cores * $i);
 		$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
 
-		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+		my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+		my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
+		push @$cmd, '-object', $mem_object;
+		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	    }
 	}
     }
@@ -453,6 +464,8 @@ sub print_mem_object {
 	my $path = hugepages_mount_path($hugepages_size);
 
 	return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
+    } elsif ($id =~ m/^virtiofs-mem/) {
+	return "memory-backend-memfd,id=$id,size=${size}M,share=on";
     } else {
 	return "memory-backend-ram,id=$id,size=${size}M";
     }
diff --git a/debian/control b/debian/control
index 49f67b2..f008a9b 100644
--- a/debian/control
+++ b/debian/control
@@ -53,6 +53,7 @@ Depends: dbus,
          socat,
          swtpm,
          swtpm-tools,
+         virtiofsd,
          ${misc:Depends},
          ${perl:Depends},
          ${shlibs:Depends},
-- 
2.39.2





^ permalink raw reply	[flat|nested] 26+ messages in thread

* [pve-devel] [PATCH qemu-server v8 5/7] Permission check for virtiofs directory access
  2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
                   ` (3 preceding siblings ...)
  2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 4/7] feature #1027: virtio-fs support Markus Frank
@ 2023-11-08  8:52 ` Markus Frank
  2024-01-31 15:23   ` Fiona Ebner
  2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 6/7] check_local_resources: virtiofs Markus Frank
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Markus Frank @ 2023-11-08  8:52 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/API2/Qemu.pm  | 39 ++++++++++++++++++++++++++++++++++++++-
 PVE/QemuServer.pm |  5 ++++-
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c8a87f3..1c5eb4c 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -650,6 +650,32 @@ my sub check_vm_create_hostpci_perm {
     return 1;
 };
 
+my sub check_dir_perm {
+    my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
+
+    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
+    $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+
+    return 1;
+};
+
+my sub check_vm_create_dir_perm {
+    my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    foreach my $opt (keys %{$param}) {
+	next if $opt !~ m/^virtiofs\d+$/;
+	check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
+    }
+
+    return 1;
+};
+
 my $check_vm_modify_config_perm = sub {
     my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
 
@@ -660,7 +686,7 @@ my $check_vm_modify_config_perm = sub {
 	# else, as there the permission can be value dependend
 	next if PVE::QemuServer::is_valid_drivename($opt);
 	next if $opt eq 'cdrom';
-	next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
+	next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
 	next if $opt eq 'tags';
 
 
@@ -929,6 +955,7 @@ __PACKAGE__->register_method({
 	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
+	    check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
 
 	    PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
 	    &$check_cpu_model_access($rpcenv, $authuser, $param);
@@ -1790,6 +1817,10 @@ my $update_vm_api  = sub {
 		    check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
 		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
 		    PVE::QemuConfig->write_config($vmid, $conf);
+		} elsif ($opt =~ m/^virtiofs\d$/) {
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
+		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
+		    PVE::QemuConfig->write_config($vmid, $conf);
 		} elsif ($opt eq 'tags') {
 		    assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
 		    delete $conf->{$opt};
@@ -1869,6 +1900,12 @@ my $update_vm_api  = sub {
 		    }
 		    check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
 		    $conf->{pending}->{$opt} = $param->{$opt};
+		} elsif ($opt =~ m/^virtiofs\d$/) {
+		    if (my $oldvalue = $conf->{$opt}) {
+			check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
+		    }
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
+		    $conf->{pending}->{$opt} = $param->{$opt};
 		} elsif ($opt eq 'tags') {
 		    assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
 		    $conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 92580df..f66f26e 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6643,7 +6643,10 @@ sub check_mapping_access {
 	    } else {
 		die "either 'host' or 'mapping' must be set.\n";
 	    }
-       }
+	} elsif ($opt =~ m/^virtiofs\d$/) {
+	    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	    $rpcenv->check_full($user, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+	}
    }
 };
 
-- 
2.39.2





^ permalink raw reply	[flat|nested] 26+ messages in thread

* [pve-devel] [PATCH qemu-server v8 6/7] check_local_resources: virtiofs
  2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
                   ` (4 preceding siblings ...)
  2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 5/7] Permission check for virtiofs directory access Markus Frank
@ 2023-11-08  8:52 ` Markus Frank
  2024-01-31 15:35   ` Fiona Ebner
  2023-11-08  8:52 ` [pve-devel] [PATCH manager v8 7/7] api: add resource map api endpoints for directories Markus Frank
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Markus Frank @ 2023-11-08  8:52 UTC (permalink / raw)
  To: pve-devel

add dir mapping checks to check_local_resources

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/QemuServer.pm            | 10 +++++++++-
 test/MigrationTest/Shared.pm |  7 +++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index f66f26e..b5c2c14 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2664,6 +2664,7 @@ sub check_local_resources {
     my $nodelist = PVE::Cluster::get_nodelist();
     my $pci_map = PVE::Mapping::PCI::config();
     my $usb_map = PVE::Mapping::USB::config();
+    my $dir_map = PVE::Mapping::Dir::config();
 
     my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
 
@@ -2675,6 +2676,8 @@ sub check_local_resources {
 		$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
 	    } elsif ($type eq 'usb') {
 		$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+	    } elsif ($type eq 'dir') {
+		$entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
 	    }
 	    if (!scalar($entry->@*)) {
 		push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2703,9 +2706,14 @@ sub check_local_resources {
 		push @$mapped_res, $k;
 	    }
 	}
+	if ($k =~ m/^virtiofs/) {
+	    my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+	    $add_missing_mapping->('dir', $k, $entry->{dirid});
+	    push @$mapped_res, $k;
+	}
 	# sockets are safe: they will recreated be on the target side post-migrate
 	next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
-	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
     }
 
     die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d..c5d0722 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
     },
 );
 
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+    config => sub {
+	return {};
+    },
+);
+
 our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
 $ha_config_module->mock(
     vm_is_ha_managed => sub {
-- 
2.39.2





^ permalink raw reply	[flat|nested] 26+ messages in thread

* [pve-devel] [PATCH manager v8 7/7] api: add resource map api endpoints for directories
  2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
                   ` (5 preceding siblings ...)
  2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 6/7] check_local_resources: virtiofs Markus Frank
@ 2023-11-08  8:52 ` Markus Frank
  2024-01-31 15:56   ` Fiona Ebner
  2024-01-30 12:31 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
  2024-01-31 12:01 ` Fiona Ebner
  8 siblings, 1 reply; 26+ messages in thread
From: Markus Frank @ 2023-11-08  8:52 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/API2/Cluster/Mapping.pm       |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 309 ++++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile |   3 +-
 3 files changed, 318 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm

diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..c5993208 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::API2::Cluster::Mapping::PCI;
 use PVE::API2::Cluster::Mapping::USB;
+use PVE::API2::Cluster::Mapping::Dir;
 
 use base qw(PVE::RESTHandler);
 
@@ -18,6 +19,11 @@ __PACKAGE__->register_method ({
     path => 'usb',
 });
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Cluster::Mapping::Dir",
+    path => 'dir',
+});
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
@@ -43,6 +49,7 @@ __PACKAGE__->register_method ({
 	my $result = [
 	    { name => 'pci' },
 	    { name => 'usb' },
+	    { name => 'dir' },
 	];
 
 	return $result;
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index 00000000..f6e8f26f
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,309 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::Mapping::Dir ();
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+    name => 'index',
+    path => '',
+    method => 'GET',
+    # only proxy if we give the 'check-node' parameter
+    proxyto_callback => sub {
+	my ($rpcenv, $proxyto, $param) = @_;
+	return $param->{'check-node'} // 'localhost';
+    },
+    description => "List Dir Hardware Mapping",
+    permissions => {
+	description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
+	    " 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
+	user => 'all',
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    'check-node' => get_standard_option('pve-node', {
+		description => "If given, checks the configurations on the given node for ".
+		    "correctness, and adds relevant diagnostics for the devices to the response.",
+		optional => 1,
+	    }),
+	},
+    },
+    returns => {
+	type => 'array',
+	items => {
+	    type => "object",
+	    properties => {
+		id => {
+		    type => 'string',
+		    description => "The logical ID of the mapping."
+		},
+		map => {
+		    type => 'array',
+		    description => "The entries of the mapping.",
+		    items => {
+			type => 'string',
+			description => "A mapping for a node.",
+		    },
+		},
+		description => {
+		    type => 'string',
+		    description => "A description of the logical mapping.",
+		},
+		xattr => {
+		    type => 'boolean',
+		    description => "Enable support for extended attributes.",
+		    optional => 1,
+		},
+		acl => {
+		    type => 'boolean',
+		    description => "Enable support for posix ACLs (implies --xattr).",
+		    optional => 1,
+		},
+		checks => {
+		    type => "array",
+		    optional => 1,
+		    description => "A list of checks, only present if 'check_node' is set.",
+		    items => {
+			type => 'object',
+			properties => {
+			    severity => {
+				type => "string",
+				enum => ['warning', 'error'],
+				description => "The severity of the error",
+			    },
+			    message => {
+				type => "string",
+				description => "The message of the error",
+			    },
+			},
+		    }
+		},
+	    },
+	},
+	links => [ { rel => 'child', href => "{id}" } ],
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $check_node = $param->{'check-node'};
+	my $local_node = PVE::INotify::nodename();
+
+	die "wrong node to check - $check_node != $local_node\n"
+	    if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
+
+	my $cfg = PVE::Mapping::Dir::config();
+
+	my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
+
+	my $res = [];
+	for my $id (keys $cfg->{ids}->%*) {
+	    next if !$rpcenv->check_any($authuser, "/mapping/dir/$id", $can_see_mapping_privs, 1);
+	    next if !$cfg->{ids}->{$id};
+
+	    my $entry = dclone($cfg->{ids}->{$id});
+	    $entry->{id} = $id;
+	    $entry->{digest} = $cfg->{digest};
+
+	    if (defined($check_node)) {
+		$entry->{checks} = [];
+		if (my $mappings = PVE::Mapping::Dir::get_node_mapping($cfg, $id, $check_node)) {
+		    if (!scalar($mappings->@*)) {
+			push $entry->{checks}->@*, {
+			    severity => 'warning',
+			    message => "No mapping for node $check_node.",
+			};
+		    }
+		    for my $mapping ($mappings->@*) {
+			eval { PVE::Mapping::Dir::assert_valid($id, $mapping) };
+			if (my $err = $@) {
+			    push $entry->{checks}->@*, {
+				severity => 'error',
+				message => "Invalid configuration: $err",
+			    };
+			}
+		    }
+		}
+	    }
+
+	    push @$res, $entry;
+	}
+
+	return $res;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'get',
+    protected => 1,
+    path => '{id}',
+    method => 'GET',
+    description => "Get Dir Mapping.",
+    permissions => {
+	check =>['or',
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Use']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Audit']],
+	],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'object' },
+    code => sub {
+	my ($param) = @_;
+
+	my $cfg = PVE::Mapping::Dir::config();
+	my $id = $param->{id};
+
+	my $entry = $cfg->{ids}->{$id};
+	die "mapping '$param->{id}' not found\n" if !defined($entry);
+
+	my $data = dclone($entry);
+
+	$data->{digest} = $cfg->{digest};
+
+	return $data;
+    }});
+
+__PACKAGE__->register_method ({
+    name => 'create',
+    protected => 1,
+    path => '',
+    method => 'POST',
+    description => "Create a new hardware mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::Dir->createSchema(1),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = extract_param($param, 'id');
+
+	my $plugin = PVE::Mapping::Dir->lookup('dir');
+	my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    die "dir ID '$id' already defined\n" if defined($cfg->{ids}->{$id});
+
+	    $cfg->{ids}->{$id} = $opts;
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "create hardware mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'update',
+    protected => 1,
+    path => '{id}',
+    method => 'PUT',
+    description => "Update a hardware mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::Dir->updateSchema(),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $digest = extract_param($param, 'digest');
+	my $delete = extract_param($param, 'delete');
+	my $id = extract_param($param, 'id');
+
+	if ($delete) {
+	    $delete = [ PVE::Tools::split_list($delete) ];
+	}
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    PVE::Tools::assert_if_modified($cfg->{digest}, $digest) if defined($digest);
+
+	    die "dir ID '$id' does not exist\n" if !defined($cfg->{ids}->{$id});
+
+	    my $plugin = PVE::Mapping::Dir->lookup('dir');
+	    my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	    my $data = $cfg->{ids}->{$id};
+
+	    my $options = $plugin->private()->{options}->{dir};
+	    PVE::SectionConfig::delete_from_config($data, $options, $opts, $delete);
+
+	    $data->{$_} = $opts->{$_} for keys $opts->%*;
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "update hardware mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'delete',
+    protected => 1,
+    path => '{id}',
+    method => 'DELETE',
+    description => "Remove Hardware Mapping.",
+    permissions => {
+	check => [ 'perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'null' },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = $param->{id};
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    if ($cfg->{ids}->{$id}) {
+		delete $cfg->{ids}->{$id};
+	    }
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "delete dir mapping failed");
+
+	return;
+    }
+});
+
+1;
diff --git a/PVE/API2/Cluster/Mapping/Makefile b/PVE/API2/Cluster/Mapping/Makefile
index e7345ab4..a80c8ab3 100644
--- a/PVE/API2/Cluster/Mapping/Makefile
+++ b/PVE/API2/Cluster/Mapping/Makefile
@@ -4,7 +4,8 @@ include ../../../../defines.mk
 # ensure we do not conflict with files shipped by pve-cluster!!
 PERLSOURCE= 	\
 	PCI.pm	\
-	USB.pm
+	USB.pm	\
+	Dir.pm
 
 all:
 
-- 
2.39.2





^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs
  2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
                   ` (6 preceding siblings ...)
  2023-11-08  8:52 ` [pve-devel] [PATCH manager v8 7/7] api: add resource map api endpoints for directories Markus Frank
@ 2024-01-30 12:31 ` Markus Frank
  2024-01-31 12:01 ` Fiona Ebner
  8 siblings, 0 replies; 26+ messages in thread
From: Markus Frank @ 2024-01-30 12:31 UTC (permalink / raw)
  To: Markus Frank, Proxmox VE development discussion

ping, patches still apply. Only the cluster patch needs a 3-way merge to apply.

On  2023-11-08 09:52, Markus Frank wrote:
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
> 
> I did not get virtiofsd to run with run_command without creating zombie
> processes after stutdown.
> So I replaced run_command with exec for now.
> Maybe someone can find out why this happens.
> 
> 
> cluster:
> 
> Markus Frank (1):
>    add mapping/dir.cfg for resource mapping
> 
>   src/PVE/Cluster.pm  | 1 +
>   src/pmxcfs/status.c | 1 +
>   2 files changed, 2 insertions(+)
> 
> 
> guest-common:
> 
> v7:
> * renamed DIR to Dir
> * made xattr & acl settings per directory-id and not per node
> 
> Markus Frank (1):
>    add Dir mapping config
> 
>   src/Makefile           |   1 +
>   src/PVE/Mapping/Dir.pm | 177 +++++++++++++++++++++++++++++++++++++++++
>   2 files changed, 178 insertions(+)
>   create mode 100644 src/PVE/Mapping/Dir.pm
> 
> 
> docs:
> 
> v8:
>   * added "Known Limitations"
>   * removed old mount tag
> 
> Markus Frank (1):
>    added shared filesystem doc for virtio-fs
> 
>   qm.adoc | 84 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>   1 file changed, 82 insertions(+), 2 deletions(-)
> 
> 
> qemu-server:
> 
> v8:
>   * changed permission checks to cover cloning and restoring and
>   made the helper functions similar to the PCI, USB permission check functions.
>   * warn if acl is activated on Windows VM, since the virtiofs device cannot be
>   mounted on Windows if acl is on and moved with dir config validation to
>   its own function. This function is called in config_to_command so that
>   no virtiofsd command is running although qmstart died because a second
>   virtiofs device was incorrectly configured.
> 
> v7:
>   * enabled use of hugepages
>   * renamed variables
>   * added acl & xattr parameters that overwrite the default directory
>   mapping settings
> 
> v6:
>   * added virtiofsd dependency
>   * 2 new patches:
>      * Permission check for virtiofs directory access
>      * check_local_resources: virtiofs
> 
> v5:
>   * allow numa settings with virtio-fs
>   * added direct-io & cache settings
>   * changed to rust implementation of virtiofsd
>   * made double fork and closed all file descriptor so that the lockfile
>   gets released.
> 
> v3:
>   * created own socket and get file descriptor for virtiofsd
>   so there is no race between starting virtiofsd & qemu
>   * added TODO to replace virtiofsd with rust implementation in bookworm
>   (I packaged the rust implementation for bookworm & the C implementation
>   in qemu will be removed in qemu 8.0)
> 
> v2:
>   * replaced sharedfiles_fmt path in qemu-server with dirid:
>   * user can use the dirid to specify the directory without requiring root access
> 
> Markus Frank (3):
>    feature #1027: virtio-fs support
>    Permission check for virtiofs directory access
>    check_local_resources: virtiofs
> 
>   PVE/API2/Qemu.pm             |  39 ++++++-
>   PVE/QemuServer.pm            | 200 ++++++++++++++++++++++++++++++++++-
>   PVE/QemuServer/Memory.pm     |  25 +++--
>   debian/control               |   1 +
>   test/MigrationTest/Shared.pm |   7 ++
>   5 files changed, 263 insertions(+), 9 deletions(-)
> 
> 
> manager:
> 
> v8: removed ui patches for now
> 
> Markus Frank (1):
>    api: add resource map api endpoints for directories
> 
>   PVE/API2/Cluster/Mapping.pm       |   7 +
>   PVE/API2/Cluster/Mapping/Dir.pm   | 309 ++++++++++++++++++++++++++++++
>   PVE/API2/Cluster/Mapping/Makefile |   3 +-
>   3 files changed, 318 insertions(+), 1 deletion(-)
>   create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
> 




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config
  2023-11-08  8:52 ` [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config Markus Frank
@ 2024-01-31 12:01   ` Fiona Ebner
  2024-01-31 13:42     ` Markus Frank
  2024-01-31 13:02   ` Fiona Ebner
  1 sibling, 1 reply; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 12:01 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 08.11.23 um 09:52 schrieb Markus Frank:
> Adds a config file for directories by using a 'map'
> array propertystring for each node mapping.
> 
> Next to node & path, there is the optional
> submounts parameter in the map array.
> Additionally there are the default settings for xattr & acl.
> 
> example config:
> ```
> some-dir-id
> 	map node=node1,path=/mnt/share/,submounts=1
> 	map node=node2,path=/mnt/share/,
> 	xattr 1
> 	acl 1
> ```
> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  src/Makefile           |   1 +
>  src/PVE/Mapping/Dir.pm | 177 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 178 insertions(+)
>  create mode 100644 src/PVE/Mapping/Dir.pm
> 

> diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
> new file mode 100644
> index 0000000..6c87073
> --- /dev/null
> +++ b/src/PVE/Mapping/Dir.pm
> @@ -0,0 +1,177 @@
> +package PVE::Mapping::Dir;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
> +use PVE::SectionConfig;
> +use PVE::INotify;

Nit: not sorted alphabetically

---snip---

> +my $map_fmt = {
> +    node => get_standard_option('pve-node'),
> +    path => {
> +	description => "Directory-path that should be shared with the guest.",

Nit: space instead of - is nicer IMHO and since it's an absolute path
"Absolute directory path"

> +	type => 'string',
> +	format => 'pve-storage-path',

This is registered in PVE/Storage/Plugin.pm
To be clean, we should either add an include here (guest-common already
depends on libpve-storage) or move registering the format to PVE::JSONSchema

> +    },
> +    submounts => {
> +	type => 'boolean',
> +	description => "Option to tell the guest which directories are mount points.",

I haven't looked at the code where this is used yet, so I'm as confused
as a user now ;) Does this affect mount points within the directory
(which the "sub" prefix suggests)? But it's a boolean, so how can I tell
"which directories"? The description should answer: When does it need to
be set/what effect does it have?

> +	optional => 1,
> +    },
> +    description => {
> +	description => "Description of the node specific directory.",
> +	type => 'string',
> +	optional => 1,
> +	maxLength => 4096,
> +    },
> +};
> +
> +my $defaultData = {
> +    propertyList => {
> +        id => {
> +            type => 'string',
> +            description => "The ID of the directory",
> +            format => 'pve-configid',
> +        },
> +        description => {
> +            description => "Description of the directory",
> +            type => 'string',
> +            optional => 1,
> +            maxLength => 4096,
> +        },
> +        map => {
> +            type => 'array',
> +            description => 'A list of maps for the cluster nodes.',
> +	    optional => 1,
> +            items => {
> +                type => 'string',
> +                format => $map_fmt,
> +            },
> +        },

Style nit: Indentation is mostly wrong since the beginning of
$defaultData until here.

> +	xattr => {
> +	    type => 'boolean',
> +	    description => "Enable support for extended attributes.",
> +	    optional => 1,
> +	},
> +	acl => {
> +	    type => 'boolean',
> +	    description => "Enable support for posix ACLs (implies --xattr).",

s/posix/POSIX/

> +	    optional => 1,

What could also be mentioned for xattr and acl: do the underlying file
systems need to support these? What happens if they don't?

> +	},
> +    },
> +};
> +
> +sub private {
> +    return $defaultData;
> +}
> +
> +sub options {
> +    return {
> +        description => { optional => 1 },
> +        map => {},
> +        xattr => { optional => 1 },
> +        acl => { optional => 1 },

Style nit: wrong indentation

> +    };
> +}
> +
> +sub assert_valid {
> +    my ($dir_cfg) = @_;
> +
> +    my $path = $dir_cfg->{path};
> +
> +    if (! -e $path) {
> +        die "Path $path does not exist\n";

Style nit: wrong indentation

> +    }
> +    if ((-e $path) && (! -d $path)) {

Style nit: could be made into an elsif to avoid an extra -e check

> +        die "Path $path exists but is not a directory\n"

Style nit: wrong indentation

> +    }
> +
> +    return 1;
> +};
> +
> +sub config {
> +    return cfs_read_file($FILENAME);
> +}
> +
> +sub lock_dir_config {
> +    my ($code, $errmsg) = @_;
> +
> +    cfs_lock_file($FILENAME, undef, $code);
> +    my $err = $@;
> +    if ($err) {

Style nit: could be if (my $err = $@) {

> +        $errmsg ? die "$errmsg: $err" : die $err;

Style nit: wrong indentation

> +    }
> +}
> +
> +sub write_dir_config {
> +    my ($cfg) = @_;
> +
> +    cfs_write_file($FILENAME, $cfg);
> +}
> +
> +sub find_on_current_node {
> +    my ($id) = @_;
> +
> +    my $cfg = config();
> +    my $node = PVE::INotify::nodename();
> +
> +    return get_node_mapping($cfg, $id, $node);
> +}
> +
> +sub get_node_mapping {
> +    my ($cfg, $id, $nodename) = @_;
> +
> +    return undef if !defined($cfg->{ids}->{$id});
> +
> +    my $res = [];
> +    my $mapping_list = $cfg->{ids}->{$id}->{map};
> +    foreach my $map (@{$mapping_list}) {

Style nit: new code should use "for"

> +	my $entry = eval { parse_property_string($map_fmt, $map) };
> +	warn $@ if $@;
> +	if ($entry && $entry->{node} eq $nodename) {
> +	    push $res->@*, $entry;
> +	}
> +    }
> +    return $res;
> +}
> +
> +PVE::Mapping::Dir->register();
> +PVE::Mapping::Dir->init();
> +
> +1;




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH cluster v8 1/7] add mapping/dir.cfg for resource mapping
  2023-11-08  8:52 ` [pve-devel] [PATCH cluster v8 1/7] add mapping/dir.cfg for resource mapping Markus Frank
@ 2024-01-31 12:01   ` Fiona Ebner
  0 siblings, 0 replies; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 12:01 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 08.11.23 um 09:52 schrieb Markus Frank:
> Add it to both, the perl side (PVE/Cluster.pm) and pmxcfs side
> (status.c).
> 

A short explanation what it will be used for would be nice for context.

> Signed-off-by: Markus Frank <m.frank@proxmox.com>

Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs
  2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
                   ` (7 preceding siblings ...)
  2024-01-30 12:31 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
@ 2024-01-31 12:01 ` Fiona Ebner
  8 siblings, 0 replies; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 12:01 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 08.11.23 um 09:52 schrieb Markus Frank:
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
> 
> I did not get virtiofsd to run with run_command without creating zombie
> processes after stutdown.
> So I replaced run_command with exec for now. 
> Maybe someone can find out why this happens.
> 

Yes, it would be nicer if we could re-use our existing helpers from
PVE::Tools.

A short description what this series is for and basic outline/overview
on how it does it would help in the cover letter. I know there is a docs
patch which covers non-technical aspects of it, but would still be good
to have.




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config
  2023-11-08  8:52 ` [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config Markus Frank
  2024-01-31 12:01   ` Fiona Ebner
@ 2024-01-31 13:02   ` Fiona Ebner
  1 sibling, 0 replies; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 13:02 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 08.11.23 um 09:52 schrieb Markus Frank:
> +my $map_fmt = {
> +    node => get_standard_option('pve-node'),
> +    path => {
> +	description => "Directory-path that should be shared with the guest.",
> +	type => 'string',
> +	format => 'pve-storage-path',
> +    },
> +    submounts => {
> +	type => 'boolean',
> +	description => "Option to tell the guest which directories are mount points.",
> +	optional => 1,

Please specify what the default is here

--snip---

> +	xattr => {
> +	    type => 'boolean',
> +	    description => "Enable support for extended attributes.",
> +	    optional => 1,

and here

> +	},
> +	acl => {
> +	    type => 'boolean',
> +	    description => "Enable support for posix ACLs (implies --xattr).",
> +	    optional => 1,

and here

> +	},
> +    },
> +};
> +




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs
  2023-11-08  8:52 ` [pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs Markus Frank
@ 2024-01-31 13:26   ` Fiona Ebner
  2024-01-31 13:34   ` Fiona Ebner
  1 sibling, 0 replies; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 13:26 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

IMHO the wording/word order with "shared filesystem doc" could be
improved. It's the doc/section for the shared filesystem virtio-fs.

Am 08.11.23 um 09:52 schrieb Markus Frank:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  qm.adoc | 84 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 82 insertions(+), 2 deletions(-)
> 
> diff --git a/qm.adoc b/qm.adoc
> index c4f1024..571c42e 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -996,6 +996,85 @@ recommended to always use a limiter to avoid guests using too many host
>  resources. If desired, a value of '0' for `max_bytes` can be used to disable
>  all limits.
>  
> +[[qm_virtiofs]]
> +Virtio-fs
> +~~~~~~~~~
> +
> +Virtio-fs is a shared file system, that enables sharing a directory between

I think the comma should not be here

> +host and guest VM while taking advantage of the locality of virtual machines

Maybe split the sentence, e.g. "guest VM. It takes advantage"

> +and the hypervisor to get a higher throughput than 9p.

I'd say "the 9p remote file system protocol" so that people hearing it
for the first time will have an idea too.

> +
> +To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
> +needs to run in the background.
> +In {pve} this process starts immediately before the start of QEMU.

I think there should be a comma after {pve}. My thesis advisor taught me
the following trick that works in many cases: "If you can read it as a
stand-alone sentence, there should be a comma." For example, the "there
should be a comma" can be read as a stand-alone sentence ;)

Nit: since it's the same paragraph, there should be no newline before
this line.

> +
> +Linux VMs with kernel >=5.4 support this feature by default.
> +
> +There is a guide available on how to utilize virtio-fs in Windows VMs.
> +https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
> +
> +Known limitations

L should be capitalized for title-case

> +^^^^^^^^^^^^^^^^^
> +
> +* Virtiofsd crashing means no recovery until VM is fully stopped
> +and restarted.
> +* Virtiofsd not responding may result in NFS-like hanging access in the VM.
> +* Memory hotplug does not work in combination with virtio-fs.

Out of interest: is this being worked on upstream, a fundamental design
limitation or a limitation on our side?

> +* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
> +otherwise the virtio-fs device will not be visible within the VMs.

If there is no corresponding warning for this in qemu-server yet, please
add one

> +
> +Add mapping for Shared Directories

M should be capitalized for title-case

> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +To add a mapping, either use the API directly with pvesh as described in the

Since it's a CLI command s/pvesh/`pvesh`/

I'd be inclined to repeat, i.e. "a mapping for a shared directory,"

> +xref:resource_mapping[Resource Mapping] section,

Style nit: no newline here

> +or add the mapping to the configuration file `/etc/pve/mapping/dir.cfg`:

I'd rather not suggest this. Manual editing will cause much more
problems, because there are no checks.

> +
> +----
> +some-dir-id
> +    map node=node1,path=/mnt/share/,submounts=1
> +    map node=node2,path=/mnt/share/,
> +    xattr 1
> +    acl 1
> +----
> +
> +Set `submounts` to `1` when multiple file systems are mounted in a
> +shared directory.

I suppose "and you want to make all available in the guest."

> +
> +Add virtio-fs to VM

to a VM

> +^^^^^^^^^^^^^^^^^^^
> +
> +To share a directory using virtio-fs, you need to specify the directory ID
> +(dirid) that has been configured in the Resource Mapping.

Specify it where? Maybe start out with mentioning the configuration
option, i.e. virtiofs<N> and then what parameters it takes?

"resource mapping" should not be capitalized

> +Additionally, you can set the `cache` option to either `always`, `never`,

Style nit: no newline before this line, it's the same paragraph

> +or `auto`, depending on your requirements.
> +If you want virtio-fs to honor the `O_DIRECT` flag, you can set the
> +`direct-io` parameter to `1`.

Please mention briefly what the implications for the different cache
settings and the direct-io flag are and what the defaults are.

> +Additionally it possible to overwrite the default mapping settings

"Additionally, it is"

> +for xattr & acl by setting then to either `1` or `0`.

`xattr` and `acl`
s/then/them/

> +
> +The `acl` parameter automatically implies `xattr`, that is, it makes no
> +difference whether you set xattr to `0` if acl is set to `1`.

This should be mentioned in the description of the parameter in the
schema too. Also, please use `xattr` and `acl` again.

> +
> +----
> +qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
> +qm set <vmid> -virtiofs1 <dirid>,cache=never,xattr=1
> +qm set <vmid> -virtiofs2 <dirid>,acl=1
> +----
> +
> +To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
> +following command:

"command inside the guest:" to reduce the number of users trying it on
the host

> +
> +The dirid associated with the path on the current node is also used as the
> +mount tag (name used to mount the device on the guest).

Style nit: Early newline, "mount" fits the 80 character limit




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs
  2023-11-08  8:52 ` [pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs Markus Frank
  2024-01-31 13:26   ` Fiona Ebner
@ 2024-01-31 13:34   ` Fiona Ebner
  1 sibling, 0 replies; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 13:34 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 08.11.23 um 09:52 schrieb Markus Frank:
> +
> +Known limitations
> +^^^^^^^^^^^^^^^^^
> +
> +* Virtiofsd crashing means no recovery until VM is fully stopped
> +and restarted.
> +* Virtiofsd not responding may result in NFS-like hanging access in the VM.
> +* Memory hotplug does not work in combination with virtio-fs.
> +* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
> +otherwise the virtio-fs device will not be visible within the VMs.
> +

Oh, and what about live migration (also with respect to different cache
settings)?




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config
  2024-01-31 12:01   ` Fiona Ebner
@ 2024-01-31 13:42     ` Markus Frank
  2024-01-31 13:53       ` Fiona Ebner
  0 siblings, 1 reply; 26+ messages in thread
From: Markus Frank @ 2024-01-31 13:42 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

Thanks for the review,

2 answers inline. The rest is clear.

On  2024-01-31 13:01, Fiona Ebner wrote:
> Am 08.11.23 um 09:52 schrieb Markus Frank:
>> Adds a config file for directories by using a 'map'
>> array propertystring for each node mapping.
>>
>> Next to node & path, there is the optional
>> submounts parameter in the map array.
>> Additionally there are the default settings for xattr & acl.
>>
>> example config:
>> ```
>> some-dir-id
>> 	map node=node1,path=/mnt/share/,submounts=1
>> 	map node=node2,path=/mnt/share/,
>> 	xattr 1
>> 	acl 1
>> ```
>>
>> Signed-off-by: Markus Frank <m.frank@proxmox.com>
>> ---
>>   src/Makefile           |   1 +
>>   src/PVE/Mapping/Dir.pm | 177 +++++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 178 insertions(+)
>>   create mode 100644 src/PVE/Mapping/Dir.pm
>>
> 
>> diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
>> new file mode 100644
>> index 0000000..6c87073
>> --- /dev/null
>> +++ b/src/PVE/Mapping/Dir.pm
>> @@ -0,0 +1,177 @@
>> +package PVE::Mapping::Dir;
>> +
>> +use strict;
>> +use warnings;
>> +
>> +use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
>> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
>> +use PVE::SectionConfig;
>> +use PVE::INotify;
> 
> Nit: not sorted alphabetically
> 
> ---snip---
> 
>> +my $map_fmt = {
>> +    node => get_standard_option('pve-node'),
>> +    path => {
>> +	description => "Directory-path that should be shared with the guest.",
> 
> Nit: space instead of - is nicer IMHO and since it's an absolute path
> "Absolute directory path"
> 
>> +	type => 'string',
>> +	format => 'pve-storage-path',
> 
> This is registered in PVE/Storage/Plugin.pm
> To be clean, we should either add an include here (guest-common already
> depends on libpve-storage) or move registering the format to PVE::JSONSchema
> 
>> +    },
>> +    submounts => {
>> +	type => 'boolean',
>> +	description => "Option to tell the guest which directories are mount points.",
> 
> I haven't looked at the code where this is used yet, so I'm as confused
> as a user now ;) Does this affect mount points within the directory
> (which the "sub" prefix suggests)? But it's a boolean, so how can I tell
> "which directories"? The description should answer: When does it need to
> be set/what effect does it have?
I do not know why I wrote it like that.

This is more correct:
	description => "Announce that the directory contains other mounted file systems.",

> 
>> +	optional => 1,
>> +    },
>> +    description => {
>> +	description => "Description of the node specific directory.",
>> +	type => 'string',
>> +	optional => 1,
>> +	maxLength => 4096,
>> +    },
>> +};
>> +
>> +my $defaultData = {
>> +    propertyList => {
>> +        id => {
>> +            type => 'string',
>> +            description => "The ID of the directory",
>> +            format => 'pve-configid',
>> +        },
>> +        description => {
>> +            description => "Description of the directory",
>> +            type => 'string',
>> +            optional => 1,
>> +            maxLength => 4096,
>> +        },
>> +        map => {
>> +            type => 'array',
>> +            description => 'A list of maps for the cluster nodes.',
>> +	    optional => 1,
>> +            items => {
>> +                type => 'string',
>> +                format => $map_fmt,
>> +            },
>> +        },
> 
> Style nit: Indentation is mostly wrong since the beginning of
> $defaultData until here.
> 
>> +	xattr => {
>> +	    type => 'boolean',
>> +	    description => "Enable support for extended attributes.",
>> +	    optional => 1,
>> +	},
>> +	acl => {
>> +	    type => 'boolean',
>> +	    description => "Enable support for posix ACLs (implies --xattr).",
> 
> s/posix/POSIX/
> 
>> +	    optional => 1,
> 
> What could also be mentioned for xattr and acl: do the underlying file
> systems need to support these? What happens if they don't?
ACLs and xattrs just get ignored if not supported.
> 
>> +	},
>> +    },
>> +};
>> +
>> +sub private {
>> +    return $defaultData;
>> +}
>> +
>> +sub options {
>> +    return {
>> +        description => { optional => 1 },
>> +        map => {},
>> +        xattr => { optional => 1 },
>> +        acl => { optional => 1 },
> 
> Style nit: wrong indentation
> 
>> +    };
>> +}
>> +
>> +sub assert_valid {
>> +    my ($dir_cfg) = @_;
>> +
>> +    my $path = $dir_cfg->{path};
>> +
>> +    if (! -e $path) {
>> +        die "Path $path does not exist\n";
> 
> Style nit: wrong indentation
> 
>> +    }
>> +    if ((-e $path) && (! -d $path)) {
> 
> Style nit: could be made into an elsif to avoid an extra -e check
> 
>> +        die "Path $path exists but is not a directory\n"
> 
> Style nit: wrong indentation
> 
>> +    }
>> +
>> +    return 1;
>> +};
>> +
>> +sub config {
>> +    return cfs_read_file($FILENAME);
>> +}
>> +
>> +sub lock_dir_config {
>> +    my ($code, $errmsg) = @_;
>> +
>> +    cfs_lock_file($FILENAME, undef, $code);
>> +    my $err = $@;
>> +    if ($err) {
> 
> Style nit: could be if (my $err = $@) {
> 
>> +        $errmsg ? die "$errmsg: $err" : die $err;
> 
> Style nit: wrong indentation
> 
>> +    }
>> +}
>> +
>> +sub write_dir_config {
>> +    my ($cfg) = @_;
>> +
>> +    cfs_write_file($FILENAME, $cfg);
>> +}
>> +
>> +sub find_on_current_node {
>> +    my ($id) = @_;
>> +
>> +    my $cfg = config();
>> +    my $node = PVE::INotify::nodename();
>> +
>> +    return get_node_mapping($cfg, $id, $node);
>> +}
>> +
>> +sub get_node_mapping {
>> +    my ($cfg, $id, $nodename) = @_;
>> +
>> +    return undef if !defined($cfg->{ids}->{$id});
>> +
>> +    my $res = [];
>> +    my $mapping_list = $cfg->{ids}->{$id}->{map};
>> +    foreach my $map (@{$mapping_list}) {
> 
> Style nit: new code should use "for"
> 
>> +	my $entry = eval { parse_property_string($map_fmt, $map) };
>> +	warn $@ if $@;
>> +	if ($entry && $entry->{node} eq $nodename) {
>> +	    push $res->@*, $entry;
>> +	}
>> +    }
>> +    return $res;
>> +}
>> +
>> +PVE::Mapping::Dir->register();
>> +PVE::Mapping::Dir->init();
>> +
>> +1;




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config
  2024-01-31 13:42     ` Markus Frank
@ 2024-01-31 13:53       ` Fiona Ebner
  2024-01-31 14:00         ` Fiona Ebner
  0 siblings, 1 reply; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 13:53 UTC (permalink / raw)
  To: Markus Frank, Proxmox VE development discussion

Am 31.01.24 um 14:42 schrieb Markus Frank:
>>
>> I haven't looked at the code where this is used yet, so I'm as confused
>> as a user now ;) Does this affect mount points within the directory
>> (which the "sub" prefix suggests)? But it's a boolean, so how can I tell
>> "which directories"? The description should answer: When does it need to
>> be set/what effect does it have?
> I do not know why I wrote it like that.
> 
> This is more correct:
>     description => "Announce that the directory contains other mounted
> file systems.",
> 

What happens if there are nested mounts, but the flag is not set? Will
the directory be mounted without the nested mounts or will it fail? This
should be made clear to users.


>>
>> What could also be mentioned for xattr and acl: do the underlying file
>> systems need to support these? What happens if they don't?
> ACLs and xattrs just get ignored if not supported.


Please include that in the description, thanks!




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config
  2024-01-31 13:53       ` Fiona Ebner
@ 2024-01-31 14:00         ` Fiona Ebner
  2024-01-31 14:15           ` Markus Frank
  0 siblings, 1 reply; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 14:00 UTC (permalink / raw)
  To: Markus Frank, Proxmox VE development discussion

Am 31.01.24 um 14:53 schrieb Fiona Ebner:
> Am 31.01.24 um 14:42 schrieb Markus Frank:
> 
> 
>>>
>>> What could also be mentioned for xattr and acl: do the underlying file
>>> systems need to support these? What happens if they don't?
>> ACLs and xattrs just get ignored if not supported.
> 
> 
> Please include that in the description, thanks!
> 

But wait, in the docs you say that you need to disable the setting for
Windows to make it work. So it's not simply ignored, at least not in all
cases. And I guess this is also the reason why there are overrides for
xattr and acl in the VM configuration (or why else would we need those
rather than just using the setting from the mapping)?




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config
  2024-01-31 14:00         ` Fiona Ebner
@ 2024-01-31 14:15           ` Markus Frank
  0 siblings, 0 replies; 26+ messages in thread
From: Markus Frank @ 2024-01-31 14:15 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fiona Ebner


On  2024-01-31 15:00, Fiona Ebner wrote:
> Am 31.01.24 um 14:53 schrieb Fiona Ebner:
>> Am 31.01.24 um 14:42 schrieb Markus Frank:
>>
>>
>>>>
>>>> What could also be mentioned for xattr and acl: do the underlying file
>>>> systems need to support these? What happens if they don't?
>>> ACLs and xattrs just get ignored if not supported.
>>
>>
>> Please include that in the description, thanks!
>>
> 
> But wait, in the docs you say that you need to disable the setting for
> Windows to make it work. So it's not simply ignored, at least not in all
> cases. And I guess this is also the reason why there are overrides for
> xattr and acl in the VM configuration (or why else would we need those
> rather than just using the setting from the mapping)?
Yes, you are right.
I just tried mounting a directory with a fat32 fs inside a linux VM with acl and xattr on and it just ignored the ACLs and xattrs.
I forgot about Windows, there enabling acl with a ext4 shared directory results in Windows not finding the virtiofsd device to mount.
I will try to summarize this in the description. Thanks for reminding me.




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH qemu-server v8 4/7] feature #1027: virtio-fs support
  2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 4/7] feature #1027: virtio-fs support Markus Frank
@ 2024-01-31 15:02   ` Fiona Ebner
  2024-02-13 11:52     ` Markus Frank
  0 siblings, 1 reply; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 15:02 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 08.11.23 um 09:52 schrieb Markus Frank:
> add support for sharing directories with a guest vm
> 
> virtio-fs needs virtiofsd to be started.
> In order to start virtiofsd as a process (despite being a daemon it is does not run
> in the background), a double-fork is used.
> 
> virtiofsd should close itself together with qemu.
> 
> There are the parameters dirid
> and the optional parameters direct-io & cache.
> Additionally the xattr & acl parameter overwrite the
> directory mapping settings for xattr & acl.
> 
> The dirid gets mapped to the path on the current node
> and is also used as a mount-tag (name used to mount the
> device on the guest).
> 
> example config:
> ```
> virtiofs0: foo,direct-io=1,cache=always,acl=1
> virtiofs1: dirid=bar,cache=never,xattr=1
> ```
> 
> For information on the optional parameters see there:
> https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md
> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  PVE/QemuServer.pm        | 185 +++++++++++++++++++++++++++++++++++++++
>  PVE/QemuServer/Memory.pm |  25 ++++--

>  debian/control           |   1 +

I'd like to have the change to debian/control as a separate preparatory
patch.

>  3 files changed, 205 insertions(+), 6 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 2895675..92580df 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -43,6 +43,7 @@ use PVE::PBSClient;
>  use PVE::RESTEnvironment qw(log_warn);
>  use PVE::RPCEnvironment;
>  use PVE::Storage;
> +use PVE::Mapping::Dir;

So the missing include of PVE::Storage::Plugin in PVE::Mapping::Dir I
mentioned in the guest-common patch is the reason it's not sorted
alphabetically here ;)

>  use PVE::SysFSTools;
>  use PVE::Systemd;
>  use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_foreach get_host_arch $IPV6RE);
> @@ -277,6 +278,42 @@ my $rng_fmt = {
>      },
>  };
>  

I'd like to have the format/helpers/etc. (e.g. config_to_command could
also call a helper) live in a dedicated module
PVE/QemuServer/Virtiofs.pm. We should aim to make PVE/QemuServer.pm
smaller, not bigger.

> +my $virtiofs_fmt = {
> +    'dirid' => {
> +	type => 'string',
> +	default_key => 1,
> +	description => "Mapping identifier of the directory mapping to be"
> +	    ." shared with the guest. Also used as a mount tag inside the VM.",
> +	format_description => 'mapping-id',
> +	format => 'pve-configid',
> +    },
> +    'cache' => {
> +	type => 'string',
> +	description => "The caching policy the file system should use"
> +	    ." (auto, always, never).",

Style nit: can fit on one line with the 100 character limit

> +	format_description => "virtiofs-cache",

format_description is (usually) used to tell a user how the string
should look like if it has a special format, e.g. base64, cidr. It's not
needed here when there already is an enum, and "virtiofs-xyz" is not
really helping to clarify this.

> +	enum => [qw(auto always never)],
> +	optional => 1,

Missing default

> +    },
> +    'direct-io' => {
> +	type => 'boolean',
> +	description => "Honor the O_DIRECT flag passed down by guest applications",
> +	format_description => "virtiofs-directio",

Similar here

> +	optional => 1,

Missing default

> +    },
> +    xattr => {
> +	type => 'boolean',
> +	description => "Enable support for extended attributes.",

Could mention it's an override for the value coming from the mapping.

> +	optional => 1,

Missing default, i.e. "use value from mapping"

> +    },
> +    acl => {
> +	type => 'boolean',
> +	description => "Enable support for posix ACLs (implies --xattr).",

Could mention it's an override for the value coming from the mapping.

> +	optional => 1,

Missing default, i.e. "use value from mapping"

> +    },
> +};
> +PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
> +
>  my $meta_info_fmt = {
>      'ctime' => {
>  	type => 'integer',
> @@ -839,6 +876,7 @@ while (my ($k, $v) = each %$confdesc) {
>  }
>  
>  my $MAX_NETS = 32;
> +my $MAX_VIRTIOFS = 10;

Is there a specific reason for ten or just to have an initial limit
that's not too small? Does it still work nicely with all ten?

>  my $MAX_SERIAL_PORTS = 4;
>  my $MAX_PARALLEL_PORTS = 3;
>  
> @@ -948,6 +986,21 @@ my $netdesc = {
>  
>  PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
>  
> +my $virtiofsdesc = {
> +    optional => 1,
> +    type => 'string', format => $virtiofs_fmt,
> +    description => "share files between host and guest",

Nit: s/files/a directory/ is slightly more precise?

> +};
> +PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
> +
> +sub max_virtiofs {
> +    return $MAX_VIRTIOFS;
> +}
> +
> +for (my $i = 0; $i < $MAX_VIRTIOFS; $i++)  {
> +    $confdesc->{"virtiofs$i"} = $virtiofsdesc;
> +}
> +
>  my $ipconfig_fmt = {
>      ip => {
>  	type => 'string',
> @@ -4055,6 +4108,23 @@ sub config_to_command {
>  	push @$devices, '-device', $netdevicefull;
>      }
>  
> +    my $virtiofs_enabled = 0;
> +    for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
> +	my $opt = "virtiofs$i";
> +
> +	next if !$conf->{$opt};
> +	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +	next if !$virtiofs;
> +
> +	check_virtiofs_config ($conf, $virtiofs);

Style nit: space between function name and parenthesis

> +
> +	push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
> +	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'

Any specific reason for queue-size=1024? Better performance than the
default 128?

> +	    .",chardev=virtfs$i,tag=$virtiofs->{dirid}";
> +
> +	$virtiofs_enabled = 1;
> +    }
> +
>      if ($conf->{ivshmem}) {
>  	my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
>  
> @@ -4114,6 +4184,14 @@ sub config_to_command {
>      }
>      push @$machineFlags, "type=${machine_type_min}";
>  
> +    if ($virtiofs_enabled && !$conf->{numa}) {
> +	# kvm: '-machine memory-backend' and '-numa memdev' properties are
> +	# mutually exclusive
> +	push @$devices, '-object', 'memory-backend-memfd,id=virtiofs-mem'
> +	    .",size=$conf->{memory}M,share=on";
> +	push @$machineFlags, 'memory-backend=virtiofs-mem';
> +    }

I'd like to have this handled in the PVE::QemuServer::Memory::config()
call (need to additionally pass along $machineFlags of course).

> +
>      push @$cmd, @$devices;
>      push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
>      push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
> @@ -4140,6 +4218,96 @@ sub config_to_command {
>      return wantarray ? ($cmd, $vollist, $spice_port, $pci_devices) : $cmd;
>  }
>  
> +sub check_virtiofs_config {

Since this dies, maybe assert_ instead of check_?

> +    my ($conf, $virtiofs) = @_;
> +    my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
> +    my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
> +
> +    my $acl = $virtiofs->{'acl'} // $dir_cfg->{'acl'};
> +    if ($acl && windows_version($conf->{ostype})) {
> +	log_warn(
> +	    "Please disable ACLs for virtiofs on Windows VMs, otherwise"
> +	    ." the virtiofs shared directory cannot be mounted.\n"

A great, the warning is already here :)
Nit: no need for "\n" with log_warn()

> +	);
> +    }
> +
> +    if (!$node_list || scalar($node_list->@*) != 1) {
> +	die "virtiofs needs exactly one mapping for this node\n";
> +    }
> +
> +    eval {
> +	PVE::Mapping::Dir::assert_valid($node_list->[0]);
> +    };

Style nit: eval block could be all on one line

> +    if (my $err = $@) {
> +	die "Directory Mapping invalid: $err\n";
> +    }
> +}
> +
> +sub start_virtiofs {
> +    my ($vmid, $fsid, $virtiofs) = @_;
> +
> +    my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
> +    my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
> +
> +    # Default to dir config xattr & acl settings
> +    my $xattr = $virtiofs->{xattr} // $dir_cfg->{xattr};
> +    my $acl = $virtiofs->{'acl'} // $dir_cfg->{'acl'};
> +
> +    my $node_cfg = $node_list->[0];
> +    my $path = $node_cfg->{path};
> +    my $socket_path_root = "/var/run/virtiofsd";

I think you can also just use /run instead of /var/run. Could also live
in /run/qemu-server/virtiofsd instead of being stand-alone, but both are
fine by me.

> +    mkdir $socket_path_root;
> +    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
> +    unlink($socket_path);
> +    my $socket = IO::Socket::UNIX->new(
> +	Type => SOCK_STREAM,
> +	Local => $socket_path,
> +	Listen => 1,
> +    ) or die "cannot create socket - $!\n";
> +
> +    my $flags = fcntl($socket, F_GETFD, 0)
> +	or die "failed to get file descriptor flags: $!\n";
> +    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
> +	or die "failed to remove FD_CLOEXEC from file descriptor\n";
> +
> +    my $fd = $socket->fileno();
> +
> +    my $virtiofsd_bin = '/usr/libexec/virtiofsd';
> +
> +    my $pid = fork();
> +    if ($pid == 0) {
> +	setsid();
> +	$0 = "task pve-vm$vmid-virtiofs$fsid";
> +	for my $fd_loop (3 .. POSIX::sysconf( &POSIX::_SC_OPEN_MAX )) {

Is there no better way to avoid this large number of close() calls (most
of which are not needed)?

> +	    POSIX::close($fd_loop) if ($fd_loop != $fd);

Style nit: no need for the parentheses with post-if

> +	}
> +
> +	my $pid2 = fork();
> +	if ($pid2 == 0) {
> +	    my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
> +	    push @$cmd, '--xattr' if $xattr;
> +	    push @$cmd, '--posix-acl' if $acl;
> +	    push @$cmd, '--announce-submounts' if ($node_cfg->{submounts});
> +	    push @$cmd, '--allow-direct-io' if ($virtiofs->{'direct-io'});
> +	    push @$cmd, "--cache=$virtiofs->{'cache'}" if ($virtiofs->{'cache'});
> +	    push @$cmd, '--syslog';
> +	    exec(@$cmd);
> +	} elsif (!defined($pid2)) {
> +	    die "could not fork to start virtiofsd\n";
> +	} else {
> +	    POSIX::_exit(0);
> +	}
> +    } elsif (!defined($pid)) {
> +	die "could not fork to start virtiofsd\n";
> +    } else {
> +	waitpid($pid, 0);
> +    }
> +
> +    # return socket to keep it alive,
> +    # so that qemu will wait for virtiofsd to start

Nit: s/qemu/QEMU

> +    return $socket;
> +}
> +
>  sub check_rng_source {
>      my ($source) = @_;
>  
> @@ -5835,6 +6003,18 @@ sub vm_start_nolock {
>  	PVE::Tools::run_fork sub {
>  	    PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
>  
> +	    my @virtiofs_sockets;
> +	    for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
> +		my $opt = "virtiofs$i";
> +
> +		next if !$conf->{$opt};
> +		my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +		next if !$virtiofs;
> +
> +		my $virtiofs_socket = start_virtiofs($vmid, $i, $virtiofs);
> +		push @virtiofs_sockets, $virtiofs_socket;
> +	    }
> +
>  	    my $tpmpid;
>  	    if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
>  		# start the TPM emulator so QEMU can connect on start
> @@ -5849,6 +6029,11 @@ sub vm_start_nolock {
>  		}
>  		die "QEMU exited with code $exitcode\n";
>  	    }
> +
> +	    foreach my $virtiofs_socket (@virtiofs_sockets) {

Style nit: for

> +		shutdown($virtiofs_socket, 2);
> +		close($virtiofs_socket);
> +	    }

Shouldn't this also be done when QEMU start fails?

>  	};
>      };
>  
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index f365f2d..647595a 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -367,6 +367,16 @@ sub config {
>  
>      die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
>  
> +    my $virtiofs_enabled = 0;
> +    for (my $i = 0; $i < PVE::QemuServer::max_virtiofs(); $i++) {

This is one reason it shoulb live in its own module.
PVE/QemuServer/Memory.pm should not include or call into
PVE/QemuServer.pm, that would be cyclic and can lead to strange issues
down the line.

> +	my $opt = "virtiofs$i";
> +	next if !$conf->{$opt};
> +	my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +	if ($virtiofs) {
> +	    $virtiofs_enabled = 1;
> +	}
> +    }
> +
>      if ($conf->{numa}) {
>  
>  	my $numa_totalmemory = undef;




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH qemu-server v8 5/7] Permission check for virtiofs directory access
  2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 5/7] Permission check for virtiofs directory access Markus Frank
@ 2024-01-31 15:23   ` Fiona Ebner
  0 siblings, 0 replies; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 15:23 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

I feel like this patch should be squashed into the last one, because the
checks are an essential part of the feature and they always should be
applied together anyways.

Am 08.11.23 um 09:52 schrieb Markus Frank:
> +my sub check_vm_create_dir_perm {
> +    my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
> +
> +    return 1 if $authuser eq 'root@pam';
> +
> +    foreach my $opt (keys %{$param}) {

Style nit: please use "for"

Looks good to me otherwise :)




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH qemu-server v8 6/7] check_local_resources: virtiofs
  2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 6/7] check_local_resources: virtiofs Markus Frank
@ 2024-01-31 15:35   ` Fiona Ebner
  2024-02-22 10:44     ` Markus Frank
  0 siblings, 1 reply; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 15:35 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

A 'migration: ' prefix would be nice for the commit title.

Am 08.11.23 um 09:52 schrieb Markus Frank:
> add dir mapping checks to check_local_resources
> 

So, as long as there is a mapping for the target node, the migration
check goes through. Should it? I mean, nothing ensures that the contents
of the directory are the same on the target, or? What happens if they
are not?

Is migration with a directory on a shared storage reliable? With all
cache settings?

If you already tested these things, please share them in the commit
message (and maybe also docs). Otherwise, I know nothing without testing
myself ;)




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH manager v8 7/7] api: add resource map api endpoints for directories
  2023-11-08  8:52 ` [pve-devel] [PATCH manager v8 7/7] api: add resource map api endpoints for directories Markus Frank
@ 2024-01-31 15:56   ` Fiona Ebner
  0 siblings, 0 replies; 26+ messages in thread
From: Fiona Ebner @ 2024-01-31 15:56 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Since you require the invariant that there is no duplicate entry for a
single node in qemu-server, you could check when adding or updating the
mapping that no such duplicate is created. For example,

pvesh create /cluster/mapping/dir --id foo --map
node=pve8a1,path=/root/bar --map node=pve8a1,path=/root/baz

currently does not fail, but it would be nicer if it did.

Am 08.11.23 um 09:52 schrieb Markus Frank:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  PVE/API2/Cluster/Mapping.pm       |   7 +
>  PVE/API2/Cluster/Mapping/Dir.pm   | 309 ++++++++++++++++++++++++++++++
>  PVE/API2/Cluster/Mapping/Makefile |   3 +-
>  3 files changed, 318 insertions(+), 1 deletion(-)
>  create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
> 
> diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
> index 40386579..c5993208 100644
> --- a/PVE/API2/Cluster/Mapping.pm
> +++ b/PVE/API2/Cluster/Mapping.pm
> @@ -5,6 +5,7 @@ use warnings;
>  
>  use PVE::API2::Cluster::Mapping::PCI;
>  use PVE::API2::Cluster::Mapping::USB;
> +use PVE::API2::Cluster::Mapping::Dir;
>  
>  use base qw(PVE::RESTHandler);
>  
> @@ -18,6 +19,11 @@ __PACKAGE__->register_method ({
>      path => 'usb',
>  });
>  
> +__PACKAGE__->register_method ({
> +    subclass => "PVE::API2::Cluster::Mapping::Dir",
> +    path => 'dir',
> +});

Nit: not sorted alphabetically

> +
>  __PACKAGE__->register_method ({
>      name => 'index',
>      path => '',
> @@ -43,6 +49,7 @@ __PACKAGE__->register_method ({
>  	my $result = [
>  	    { name => 'pci' },
>  	    { name => 'usb' },
> +	    { name => 'dir' },

Nit: not sorted alphabetically

>  	];
>  
>  	return $result;
> diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
> new file mode 100644
> index 00000000..f6e8f26f
> --- /dev/null
> +++ b/PVE/API2/Cluster/Mapping/Dir.pm
> @@ -0,0 +1,309 @@
> +package PVE::API2::Cluster::Mapping::Dir;
> +
> +use strict;
> +use warnings;
> +
> +use Storable qw(dclone);
> +
> +use PVE::Mapping::Dir ();
> +use PVE::JSONSchema qw(get_standard_option);
> +use PVE::Tools qw(extract_param);
> +
> +use base qw(PVE::RESTHandler);
> +
> +__PACKAGE__->register_method ({
> +    name => 'index',
> +    path => '',
> +    method => 'GET',
> +    # only proxy if we give the 'check-node' parameter
> +    proxyto_callback => sub {
> +	my ($rpcenv, $proxyto, $param) = @_;
> +	return $param->{'check-node'} // 'localhost';
> +    },
> +    description => "List Dir Hardware Mapping",

s/Dir/directory
not sure it should include "hardware" and no need for title case

> +    permissions => {
> +	description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
> +	    " 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
> +	user => 'all',
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    'check-node' => get_standard_option('pve-node', {
> +		description => "If given, checks the configurations on the given node for ".
> +		    "correctness, and adds relevant diagnostics for the devices to the response.",

s/devices/directories

> +		optional => 1,
> +	    }),
> +	},
> +    },
> +    returns => {
> +	type => 'array',
> +	items => {
> +	    type => "object",
> +	    properties => {
> +		id => {
> +		    type => 'string',
> +		    description => "The logical ID of the mapping."
> +		},
> +		map => {
> +		    type => 'array',
> +		    description => "The entries of the mapping.",
> +		    items => {
> +			type => 'string',
> +			description => "A mapping for a node.",
> +		    },
> +		},
> +		description => {
> +		    type => 'string',
> +		    description => "A description of the logical mapping.",
> +		},
> +		xattr => {
> +		    type => 'boolean',
> +		    description => "Enable support for extended attributes.",
> +		    optional => 1,
> +		},
> +		acl => {
> +		    type => 'boolean',
> +		    description => "Enable support for posix ACLs (implies --xattr).",
> +		    optional => 1,
> +		},
> +		checks => {
> +		    type => "array",
> +		    optional => 1,
> +		    description => "A list of checks, only present if 'check_node' is set.",

It's "check-node"

> +		    items => {
> +			type => 'object',
> +			properties => {
> +			    severity => {
> +				type => "string",
> +				enum => ['warning', 'error'],
> +				description => "The severity of the error",
> +			    },
> +			    message => {
> +				type => "string",
> +				description => "The message of the error",
> +			    },
> +			},
> +		    }
> +		},
> +	    },
> +	},
> +	links => [ { rel => 'child', href => "{id}" } ],
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();

missing use statement for PVE::RPCEnvironment at the start of the file

> +	my $authuser = $rpcenv->get_user();
> +
> +	my $check_node = $param->{'check-node'};
> +	my $local_node = PVE::INotify::nodename();

missing use statement for PVE::INotify at the start of the file

> +
> +	die "wrong node to check - $check_node != $local_node\n"
> +	    if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
> +
> +	my $cfg = PVE::Mapping::Dir::config();
> +
> +	my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
> +
> +	my $res = [];
> +	for my $id (keys $cfg->{ids}->%*) {
> +	    next if !$rpcenv->check_any($authuser, "/mapping/dir/$id", $can_see_mapping_privs, 1);
> +	    next if !$cfg->{ids}->{$id};
> +
> +	    my $entry = dclone($cfg->{ids}->{$id});
> +	    $entry->{id} = $id;
> +	    $entry->{digest} = $cfg->{digest};
> +
> +	    if (defined($check_node)) {
> +		$entry->{checks} = [];
> +		if (my $mappings = PVE::Mapping::Dir::get_node_mapping($cfg, $id, $check_node)) {
> +		    if (!scalar($mappings->@*)) {
> +			push $entry->{checks}->@*, {
> +			    severity => 'warning',
> +			    message => "No mapping for node $check_node.",
> +			};
> +		    }
> +		    for my $mapping ($mappings->@*) {
> +			eval { PVE::Mapping::Dir::assert_valid($id, $mapping) };

assert_valid() only takes one parameter, i.e. the config

> +			if (my $err = $@) {
> +			    push $entry->{checks}->@*, {
> +				severity => 'error',
> +				message => "Invalid configuration: $err",
> +			    };
> +			}
> +		    }
> +		}
> +	    }
> +
> +	    push @$res, $entry;
> +	}
> +
> +	return $res;
> +    },
> +});
> +
> +__PACKAGE__->register_method ({
> +    name => 'get',
> +    protected => 1,
> +    path => '{id}',
> +    method => 'GET',
> +    description => "Get Dir Mapping.",

s/Dir/directory
no need for title case

---snip---

> +__PACKAGE__->register_method ({
> +    name => 'create',
> +    protected => 1,
> +    path => '',
> +    method => 'POST',
> +    description => "Create a new hardware mapping.",

s/hardware/directory/

> +    permissions => {
> +	check => ['perm', '/mapping/dir', ['Mapping.Modify']],
> +    },
> +    parameters => PVE::Mapping::Dir->createSchema(1),
> +    returns => {
> +	type => 'null',
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $id = extract_param($param, 'id');
> +
> +	my $plugin = PVE::Mapping::Dir->lookup('dir');
> +	my $opts = $plugin->check_config($id, $param, 1, 1);
> +
> +	PVE::Mapping::Dir::lock_dir_config(sub {
> +	    my $cfg = PVE::Mapping::Dir::config();
> +
> +	    die "dir ID '$id' already defined\n" if defined($cfg->{ids}->{$id});
> +
> +	    $cfg->{ids}->{$id} = $opts;
> +
> +	    PVE::Mapping::Dir::write_dir_config($cfg);
> +
> +	}, "create hardware mapping failed");

s/hardware/directory/

> +
> +	return;
> +    },
> +});
> +
> +__PACKAGE__->register_method ({
> +    name => 'update',
> +    protected => 1,
> +    path => '{id}',
> +    method => 'PUT',
> +    description => "Update a hardware mapping.",

s/hardware/directory/

> +    permissions => {
> +	check => ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
> +    },
> +    parameters => PVE::Mapping::Dir->updateSchema(),
> +    returns => {
> +	type => 'null',
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $digest = extract_param($param, 'digest');
> +	my $delete = extract_param($param, 'delete');
> +	my $id = extract_param($param, 'id');
> +
> +	if ($delete) {
> +	    $delete = [ PVE::Tools::split_list($delete) ];
> +	}
> +
> +	PVE::Mapping::Dir::lock_dir_config(sub {
> +	    my $cfg = PVE::Mapping::Dir::config();
> +
> +	    PVE::Tools::assert_if_modified($cfg->{digest}, $digest) if defined($digest);
> +
> +	    die "dir ID '$id' does not exist\n" if !defined($cfg->{ids}->{$id});
> +
> +	    my $plugin = PVE::Mapping::Dir->lookup('dir');
> +	    my $opts = $plugin->check_config($id, $param, 1, 1);
> +
> +	    my $data = $cfg->{ids}->{$id};
> +
> +	    my $options = $plugin->private()->{options}->{dir};
> +	    PVE::SectionConfig::delete_from_config($data, $options, $opts, $delete);
> +
> +	    $data->{$_} = $opts->{$_} for keys $opts->%*;
> +
> +	    PVE::Mapping::Dir::write_dir_config($cfg);
> +
> +	}, "update hardware mapping failed");

s/hardware/directory/

> +
> +	return;
> +    },
> +});
> +
> +__PACKAGE__->register_method ({
> +    name => 'delete',
> +    protected => 1,
> +    path => '{id}',
> +    method => 'DELETE',
> +    description => "Remove Hardware Mapping.",

s/hardware/directory/
no need for title case

> +    permissions => {
> +	check => [ 'perm', '/mapping/dir', ['Mapping.Modify']],
> +    },
> +    parameters => {
> +	additionalProperties => 0,
> +	properties => {
> +	    id => {
> +		type => 'string',
> +		format => 'pve-configid',
> +	    },
> +	}
> +    },
> +    returns => { type => 'null' },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $id = $param->{id};
> +
> +	PVE::Mapping::Dir::lock_dir_config(sub {
> +	    my $cfg = PVE::Mapping::Dir::config();
> +
> +	    if ($cfg->{ids}->{$id}) {
> +		delete $cfg->{ids}->{$id};
> +	    }
> +
> +	    PVE::Mapping::Dir::write_dir_config($cfg);
> +
> +	}, "delete dir mapping failed");

s/dir/directory/

> +
> +	return;
> +    }
> +});
> +
> +1;
> diff --git a/PVE/API2/Cluster/Mapping/Makefile b/PVE/API2/Cluster/Mapping/Makefile
> index e7345ab4..a80c8ab3 100644
> --- a/PVE/API2/Cluster/Mapping/Makefile
> +++ b/PVE/API2/Cluster/Mapping/Makefile
> @@ -4,7 +4,8 @@ include ../../../../defines.mk
>  # ensure we do not conflict with files shipped by pve-cluster!!
>  PERLSOURCE= 	\
>  	PCI.pm	\
> -	USB.pm
> +	USB.pm	\
> +	Dir.pm

Nit: not sorted alphabetically




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH qemu-server v8 4/7] feature #1027: virtio-fs support
  2024-01-31 15:02   ` Fiona Ebner
@ 2024-02-13 11:52     ` Markus Frank
  2024-02-13 12:04       ` Fiona Ebner
  0 siblings, 1 reply; 26+ messages in thread
From: Markus Frank @ 2024-02-13 11:52 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion

Thanks,

I already moved most of the code into a new PVE/QemuServer/Virtiofs.pm module.

Just an clarification & question concerning the queue-size:

On  2024-01-31 16:02, Fiona Ebner wrote:
>> +	push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
>> +	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
> 
> Any specific reason for queue-size=1024? Better performance than the
> default 128?

There was problem with Windows Guests:
https://bugzilla.redhat.com/show_bug.cgi?id=1873088
https://github.com/virtio-win/kvm-guest-drivers-windows/issues/764

queue-size=1024 is still in every documentation and every qemu command for virtiofs I have seen used 1024.
Would it better to add a parameter to configure this queue-size?




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH qemu-server v8 4/7] feature #1027: virtio-fs support
  2024-02-13 11:52     ` Markus Frank
@ 2024-02-13 12:04       ` Fiona Ebner
  0 siblings, 0 replies; 26+ messages in thread
From: Fiona Ebner @ 2024-02-13 12:04 UTC (permalink / raw)
  To: Markus Frank, Proxmox VE development discussion

Am 13.02.24 um 12:52 schrieb Markus Frank:
> Thanks,
> 
> I already moved most of the code into a new PVE/QemuServer/Virtiofs.pm
> module.
> 

Great! :)

> Just an clarification & question concerning the queue-size:
> 
> On  2024-01-31 16:02, Fiona Ebner wrote:
>>> +    push @$devices, '-chardev',
>>> "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
>>> +    push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
>>
>> Any specific reason for queue-size=1024? Better performance than the
>> default 128?
> 
> There was problem with Windows Guests:
> https://bugzilla.redhat.com/show_bug.cgi?id=1873088
> https://github.com/virtio-win/kvm-guest-drivers-windows/issues/764
> 
> queue-size=1024 is still in every documentation and every qemu command
> for virtiofs I have seen used 1024.

Good to know! Please add a comment referencing this, so people looking
at the code will be aware of it and why the value was chosen.

> Would it better to add a parameter to configure this queue-size?

We can add a parameter if enough users with valid use cases request it.
But to start out, I don't see much need for it.




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [pve-devel] [PATCH qemu-server v8 6/7] check_local_resources: virtiofs
  2024-01-31 15:35   ` Fiona Ebner
@ 2024-02-22 10:44     ` Markus Frank
  0 siblings, 0 replies; 26+ messages in thread
From: Markus Frank @ 2024-02-22 10:44 UTC (permalink / raw)
  To: Fiona Ebner, Proxmox VE development discussion


On  2024-01-31 16:35, Fiona Ebner wrote:
> A 'migration: ' prefix would be nice for the commit title.
> 
> Am 08.11.23 um 09:52 schrieb Markus Frank:
>> add dir mapping checks to check_local_resources
>>
> 
> So, as long as there is a mapping for the target node, the migration
> check goes through. Should it? I mean, nothing ensures that the contents
> of the directory are the same on the target, or? What happens if they
> are not?
Yes. This is similar to the PCI & USB Mapping. Nothing ensures that the
mapped device/directory is the same on the other host.

If the contents are not the same they will not be the same.
But this should not be a problem as long as nothing in a VM depends on
something in the virtiofs directory.
This is something the user should take care of.
Or am I missing something?
> 
> Is migration with a directory on a shared storage reliable? With all
> cache settings?
I didn't encounter any problems.

Since the VM has to be powered off to migrate, I assume the cache gets
cleared & written to disk when virtiofs stops.

> 
> If you already tested these things, please share them in the commit
> message (and maybe also docs). Otherwise, I know nothing without testing
> myself ;)
Okay :)




^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2024-02-22 10:44 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-08  8:52 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
2023-11-08  8:52 ` [pve-devel] [PATCH cluster v8 1/7] add mapping/dir.cfg for resource mapping Markus Frank
2024-01-31 12:01   ` Fiona Ebner
2023-11-08  8:52 ` [pve-devel] [PATCH guest-common v8 2/7] add Dir mapping config Markus Frank
2024-01-31 12:01   ` Fiona Ebner
2024-01-31 13:42     ` Markus Frank
2024-01-31 13:53       ` Fiona Ebner
2024-01-31 14:00         ` Fiona Ebner
2024-01-31 14:15           ` Markus Frank
2024-01-31 13:02   ` Fiona Ebner
2023-11-08  8:52 ` [pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs Markus Frank
2024-01-31 13:26   ` Fiona Ebner
2024-01-31 13:34   ` Fiona Ebner
2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 4/7] feature #1027: virtio-fs support Markus Frank
2024-01-31 15:02   ` Fiona Ebner
2024-02-13 11:52     ` Markus Frank
2024-02-13 12:04       ` Fiona Ebner
2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 5/7] Permission check for virtiofs directory access Markus Frank
2024-01-31 15:23   ` Fiona Ebner
2023-11-08  8:52 ` [pve-devel] [PATCH qemu-server v8 6/7] check_local_resources: virtiofs Markus Frank
2024-01-31 15:35   ` Fiona Ebner
2024-02-22 10:44     ` Markus Frank
2023-11-08  8:52 ` [pve-devel] [PATCH manager v8 7/7] api: add resource map api endpoints for directories Markus Frank
2024-01-31 15:56   ` Fiona Ebner
2024-01-30 12:31 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v8 0/7] virtiofs Markus Frank
2024-01-31 12:01 ` Fiona Ebner

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal