public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs
@ 2023-07-06 10:54 Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH cluster v6 1/1] add mapping/dir.cfg for resource mapping Markus Frank
                   ` (14 more replies)
  0 siblings, 15 replies; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

cluster:

Markus Frank (1):
  add mapping/dir.cfg for resource mapping

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)


guest-common:

Markus Frank (1):
  add DIR mapping config

 src/Makefile           |   1 +
 src/PVE/Mapping/DIR.pm | 175 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 176 insertions(+)
 create mode 100644 src/PVE/Mapping/DIR.pm


qemu-server:

v6:
 * added virtiofsd dependency
 * 2 new patches:
    * Permission check for virtiofs directory access
    * check_local_resources: virtiofs

v5:
 * allow numa settings with virtio-fs
 * added direct-io & cache settings
 * changed to the Rust implementation of virtiofsd
 * double fork and close all file descriptors so that the lockfile
   gets released

v3:
 * create our own socket and pass its file descriptor to virtiofsd,
   so there is no race between starting virtiofsd & QEMU
 * added a TODO to replace virtiofsd with the Rust implementation in Bookworm
   (I packaged the Rust implementation for Bookworm & the C implementation
   in QEMU will be removed in QEMU 8.0)

v2:
 * replaced the sharedfiles_fmt path option in qemu-server with dirid:
   users can now reference a directory via its mapping ID without requiring
   root access (see the example below)
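
For illustration, a VM config entry then references the mapping ID instead
of a host path (ID and tag below are placeholders):

```
virtiofs0: dirid=some-dir-id,tag=share0
```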

Markus Frank (3):
  feature #1027: virtio-fs support
  Permission check for virtiofs directory access
  check_local_resources: virtiofs

 PVE/API2/Qemu.pm         |  18 +++++
 PVE/QemuServer.pm        | 167 ++++++++++++++++++++++++++++++++++++++-
 PVE/QemuServer/Memory.pm |  25 ++++--
 debian/control           |   1 +
 4 files changed, 204 insertions(+), 7 deletions(-)


manager:

v6: completely new, except for "ui: added options to add virtio-fs to qemu config"

Markus Frank (5):
  api: add resource map api endpoints for directories
  ui: add edit window for dir mappings
  ui: ResourceMapTree for DIR
  ui: form: add DIRMapSelector
  ui: added options to add virtio-fs to qemu config

 PVE/API2/Cluster/Mapping.pm         |   7 +
 PVE/API2/Cluster/Mapping/DIR.pm     | 299 ++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile   |   3 +-
 www/manager6/Makefile               |   4 +
 www/manager6/Utils.js               |   1 +
 www/manager6/dc/Config.js           |  10 +
 www/manager6/dc/DIRMapView.js       |  50 +++++
 www/manager6/form/DIRMapSelector.js |  63 ++++++
 www/manager6/qemu/HardwareView.js   |  19 ++
 www/manager6/qemu/VirtiofsEdit.js   | 120 +++++++++++
 www/manager6/window/DIRMapEdit.js   | 186 +++++++++++++++++
 11 files changed, 761 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Cluster/Mapping/DIR.pm
 create mode 100644 www/manager6/dc/DIRMapView.js
 create mode 100644 www/manager6/form/DIRMapSelector.js
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js
 create mode 100644 www/manager6/window/DIRMapEdit.js

-- 
2.39.2

* [pve-devel] [PATCH cluster v6 1/1] add mapping/dir.cfg for resource mapping
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH guest-common v6 1/1] add DIR mapping config Markus Frank
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

Add it to both the Perl side (PVE/Cluster.pm) and the pmxcfs side
(status.c).

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index ff777ba..39f2d99 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -80,6 +80,7 @@ my $observed = {
     'virtual-guest/cpu-models.conf' => 1,
     'mapping/pci.cfg' => 1,
     'mapping/usb.cfg' => 1,
+    'mapping/dir.cfg' => 1,
 };
 
 sub prepare_observed_file_basedirs {
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index 1f29b07..e6f0bac 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -110,6 +110,7 @@ static memdb_change_t memdb_change_array[] = {
 	{ .path = "firewall/cluster.fw" },
 	{ .path = "mapping/pci.cfg" },
 	{ .path = "mapping/usb.cfg" },
+	{ .path = "mapping/dir.cfg" },
 };
 
 static GMutex mutex;
-- 
2.39.2

* [pve-devel] [PATCH guest-common v6 1/1] add DIR mapping config
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH cluster v6 1/1] add mapping/dir.cfg for resource mapping Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-19 12:09   ` Fabian Grünbichler
  2023-07-06 10:54 ` [pve-devel] [PATCH docs v6 1/1] added shared filesystem doc for virtio-fs Markus Frank
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

Adds a config file for directories, using a 'map' array of property
strings, one entry per node mapping.

Besides node & path, each map entry takes the xattr, acl & submounts
parameters that are passed on to virtiofsd.

example config:
```
some-dir-id
	map node=node1,path=/mnt/share/,xattr=1,acl=1,submounts=1
	map node=node2,path=/mnt/share/,xattr=1
	map node=node3,path=/mnt/share/,submounts=1
	map node=node4,path=/mnt/share/
```
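
For context, a consumer such as qemu-server would resolve the mapping for
the local node roughly like this, using the helpers added below (the ID is
illustrative; the hash shown is the parsed form of a 'map' line from the
example above):

```
use PVE::Mapping::DIR;

# returns the parsed 'map' entries whose node matches the local node
my $entries = PVE::Mapping::DIR::find_on_current_node('some-dir-id');
die "no mapping for this node\n" if !$entries || !scalar($entries->@*);

# e.g. { node => 'node1', path => '/mnt/share/', xattr => 1, acl => 1, submounts => 1 }
my $path = $entries->[0]->{path};
```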

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 src/Makefile           |   1 +
 src/PVE/Mapping/DIR.pm | 175 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 176 insertions(+)
 create mode 100644 src/PVE/Mapping/DIR.pm

diff --git a/src/Makefile b/src/Makefile
index cbc40c1..876829a 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -17,6 +17,7 @@ install: PVE
 	install -d ${PERL5DIR}/PVE/Mapping
 	install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
 	install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
+	install -m 0644 PVE/Mapping/DIR.pm ${PERL5DIR}/PVE/Mapping/
 	install -d ${PERL5DIR}/PVE/VZDump
 	install -m 0644 PVE/VZDump/Plugin.pm ${PERL5DIR}/PVE/VZDump/
 	install -m 0644 PVE/VZDump/Common.pm ${PERL5DIR}/PVE/VZDump/
diff --git a/src/PVE/Mapping/DIR.pm b/src/PVE/Mapping/DIR.pm
new file mode 100644
index 0000000..a5da042
--- /dev/null
+++ b/src/PVE/Mapping/DIR.pm
@@ -0,0 +1,175 @@
+package PVE::Mapping::DIR;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+use PVE::INotify;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+                  sub { __PACKAGE__->parse_config(@_); },
+                  sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+    my ($class, $line) = @_;
+
+    if ($line =~ m/^(\S+)\s*$/) {
+	my $id = $1;
+	my $errmsg = undef; # set if you want to skip whole section
+	eval { PVE::JSONSchema::pve_verify_configid($id) };
+	$errmsg = $@ if $@;
+	my $config = {}; # to return additional attributes
+	return ('dir', $id, $errmsg, $config);
+    }
+    return undef;
+}
+
+sub format_section_header {
+    my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+    return "$sectionId\n";
+}
+
+sub type {
+    return 'dir';
+}
+
+my $map_fmt = {
+    node => get_standard_option('pve-node'),
+    path => {
+	description => "Directory-path that should be shared with the guest.",
+	type => 'string',
+	format => 'pve-storage-path',
+    },
+    xattr => {
+	type => 'boolean',
+	description => "Enable support for extended attributes (xattrs).",
+	optional => 1,
+    },
+    acl => {
+	type => 'boolean',
+	description => "Enable support for posix ACLs (implies --xattr).",
+	optional => 1,
+    },
+    submounts => {
+	type => 'boolean',
+	description => "Option to tell the guest which directories are mount points.",
+	optional => 1,
+    },
+    description => {
+	description => "Description of the node specific directory.",
+	type => 'string',
+	optional => 1,
+	maxLength => 4096,
+    },
+};
+
+my $defaultData = {
+    propertyList => {
+        id => {
+            type => 'string',
+            description => "The ID of the directory",
+            format => 'pve-configid',
+        },
+        description => {
+            description => "Description of the directory",
+            type => 'string',
+            optional => 1,
+            maxLength => 4096,
+        },
+        map => {
+            type => 'array',
+            description => 'A list of maps for the cluster nodes.',
+	    optional => 1,
+            items => {
+                type => 'string',
+                format => $map_fmt,
+            },
+        },
+    },
+};
+
+sub private {
+    return $defaultData;
+}
+
+sub options {
+    return {
+        description => { optional => 1 },
+        map => {},
+    };
+}
+
+sub assert_valid {
+    my ($dir_cfg) = @_;
+
+    my $path = $dir_cfg->{path};
+
+    if (! -e $path) {
+        die "Path $path does not exist\n";
+    }
+    if ((-e $path) && (! -d $path)) {
+        die "Path $path exists but is not a directory\n"
+    }
+
+    return 1;
+};
+
+sub config {
+    return cfs_read_file($FILENAME);
+}
+
+sub lock_dir_config {
+    my ($code, $errmsg) = @_;
+
+    cfs_lock_file($FILENAME, undef, $code);
+    my $err = $@;
+    if ($err) {
+        $errmsg ? die "$errmsg: $err" : die $err;
+    }
+}
+
+sub write_dir_config {
+    my ($cfg) = @_;
+
+    cfs_write_file($FILENAME, $cfg);
+}
+
+sub find_on_current_node {
+    my ($id) = @_;
+
+    my $cfg = config();
+    my $node = PVE::INotify::nodename();
+
+    return get_node_mapping($cfg, $id, $node);
+}
+
+sub get_node_mapping {
+    my ($cfg, $id, $nodename) = @_;
+
+    return undef if !defined($cfg->{ids}->{$id});
+
+    my $res = [];
+    my $mapping_list = $cfg->{ids}->{$id}->{map};
+    foreach my $map (@{$mapping_list}) {
+	my $entry = eval { parse_property_string($map_fmt, $map) };
+	warn $@ if $@;
+	if ($entry && $entry->{node} eq $nodename) {
+	    push $res->@*, $entry;
+	}
+    }
+    return $res;
+}
+
+PVE::Mapping::DIR->register();
+PVE::Mapping::DIR->init();
+
+1;
-- 
2.39.2

* [pve-devel] [PATCH docs v6 1/1] added shared filesystem doc for virtio-fs
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH cluster v6 1/1] add mapping/dir.cfg for resource mapping Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH guest-common v6 1/1] add DIR mapping config Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-17  8:08   ` Christoph Heiss
  2023-07-06 10:54 ` [pve-devel] [PATCH qemu-server v6 1/3] feature #1027: virtio-fs support Markus Frank
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 qm.adoc | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 58 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index e35dbf0..00a0668 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -997,6 +997,61 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared file system that enables sharing a directory between
+the host and a guest VM, taking advantage of the locality of virtual machines
+and the hypervisor to achieve higher throughput than 9p.
+The parameter `hugepages` must be disabled to use virtio-fs.
+
+Add mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping, go to the Resource Mapping tab in Datacenter in the WebUI,
+use the API directly with pvesh as described in the
+xref:resource_mapping[Resource Mapping] section,
+or add the mapping to the configuration file /etc/pve/mapping/dir.cfg:
+
+----
+some-dir-id
+	map node=node1,path=/share/,xattr=1,acl=1,submounts=1
+	map node=node2,path=/share/,xattr=1
+	map node=node3,path=/different-share-path/,submounts=1
+	map node=node4,path=/foobar/
+	map node=node5,path=/somewhere/,acl=1
+----
+
+The parameter `acl` automatically implies `xattr`, so there is no need to
+set `xattr` for node1 in this example.
+Set `submounts` to `1` when using multiple file systems in the shared directory.
+
+Add virtiofs to VM
+^^^^^^^^^^^^^^^^^^
+
+To share a directory with virtio-fs, you need to specify the directory ID
+that has been configured in the Resource Mapping. Additionally, you can set
+the `cache` option to either `always`, `never`, or `auto`, depending on your
+requirements. If you want virtio-fs to honor the `O_DIRECT` flag, you can set the
+`direct-io` parameter to `1`.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,tag=<mount tag>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,tag=<mount tag>,cache=never
+qm set <vmid> -virtiofs2 <dirid>,tag=<mount tag>
+----
+
+To mount virtio-fs in a guest VM with the Linux kernel virtiofs driver, run the
+following command:
+
+----
+mount -t virtiofs <mount tag> <mount point>
+----
+
+For more information on the virtiofsd parameters, see:
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd]
+
 [[qm_bootorder]]
 Device Boot Order
 ~~~~~~~~~~~~~~~~~
@@ -1600,8 +1655,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
 # pvesh create /cluster/mapping/<type> <options>
 ----
 
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
 
 Note that the options must include a map property with all identifying
 properties of that hardware, so that it's possible to verify the hardware did
-- 
2.39.2

* [pve-devel] [PATCH qemu-server v6 1/3] feature #1027: virtio-fs support
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (2 preceding siblings ...)
  2023-07-06 10:54 ` [pve-devel] [PATCH docs v6 1/1] added shared filesystem doc for virtio-fs Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-19 12:08   ` Fabian Grünbichler
  2023-07-06 10:54 ` [pve-devel] [PATCH qemu-server v6 2/3] Permission check for virtiofs directory access Markus Frank
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

Adds support for sharing directories with a guest VM.

virtio-fs needs virtiofsd to be started.

In order to start virtiofsd as a process (despite being a daemon, it does
not run in the background), a double fork is used.

virtiofsd should exit together with QEMU.

There are the required parameters dirid & tag
and the optional parameters direct-io & cache.

The dirid gets mapped to the path configured for the current node.
The tag parameter chooses the mount tag that is used with the mount
command inside the guest.

example config:
```
virtiofs0: foo,tag=tag1,direct-io=1,cache=always
virtiofs1: dirid=bar,tag=tag2,cache=never
```
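
For illustration, host-side CLI usage and the matching guest-side mount
would look roughly like this (vmid, IDs and paths are placeholders; the
mount command is the one documented in the docs patch of this series):

```
qm set 100 -virtiofs0 dirid=foo,tag=tag1,direct-io=1,cache=always

# inside the guest:
mount -t virtiofs tag1 /mnt/shared
```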

For information on the optional parameters, see:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/QemuServer.pm        | 157 +++++++++++++++++++++++++++++++++++++++
 PVE/QemuServer/Memory.pm |  25 +++++--
 debian/control           |   1 +
 3 files changed, 177 insertions(+), 6 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 940cdac..3a8b4c5 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -43,6 +43,7 @@ use PVE::PBSClient;
 use PVE::RESTEnvironment qw(log_warn);
 use PVE::RPCEnvironment;
 use PVE::Storage;
+use PVE::Mapping::DIR;
 use PVE::SysFSTools;
 use PVE::Systemd;
 use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_foreach get_host_arch $IPV6RE);
@@ -276,6 +277,35 @@ my $rng_fmt = {
     },
 };
 
+my $virtiofs_fmt = {
+    'dirid' => {
+	type => 'string',
+	default_key => 1,
+	description => "dirid of directory you want to share with the guest VM",
+	format_description => "virtiofs-dirid",
+    },
+    'tag' => {
+	type => 'string',
+	description => "tag name for mounting in the guest VM",
+	format_description => "virtiofs-tag",
+    },
+    'cache' => {
+	type => 'string',
+	description => "The caching policy the file system should use"
+	    ." (auto, always, never).",
+	format_description => "virtiofs-cache",
+	enum => [qw(auto always never)],
+	optional => 1,
+    },
+    'direct-io' => {
+	type => 'boolean',
+	description => "Honor the O_DIRECT flag passed down by guest applications",
+	format_description => "virtiofs-directio",
+	optional => 1,
+    },
+};
+PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
+
 my $meta_info_fmt = {
     'ctime' => {
 	type => 'integer',
@@ -838,6 +868,7 @@ while (my ($k, $v) = each %$confdesc) {
 }
 
 my $MAX_NETS = 32;
+my $MAX_VIRTIOFS = 10;
 my $MAX_SERIAL_PORTS = 4;
 my $MAX_PARALLEL_PORTS = 3;
 my $MAX_NUMA = 8;
@@ -982,6 +1013,21 @@ my $netdesc = {
 
 PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
 
+my $virtiofsdesc = {
+    optional => 1,
+    type => 'string', format => $virtiofs_fmt,
+    description => "share files between host and guest",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
+
+sub max_virtiofs {
+    return $MAX_VIRTIOFS;
+}
+
+for (my $i = 0; $i < $MAX_VIRTIOFS; $i++)  {
+    $confdesc->{"virtiofs$i"} = $virtiofsdesc;
+}
+
 my $ipconfig_fmt = {
     ip => {
 	type => 'string',
@@ -4100,6 +4146,25 @@ sub config_to_command {
 	push @$devices, '-device', $netdevicefull;
     }
 
+    my $onevirtiofs = 0;
+    for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
+	my $virtiofsstr = "virtiofs$i";
+
+	next if !$conf->{$virtiofsstr};
+	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$virtiofsstr});
+	next if !$virtiofs;
+
+	push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
+	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+	    .",chardev=virtfs$i,tag=$virtiofs->{tag}";
+
+	$onevirtiofs = 1;
+    }
+
+    if ($onevirtiofs && $conf->{hugepages}){
+	die "hugepages not supported in combination with virtiofs\n";
+    }
+
     if ($conf->{ivshmem}) {
 	my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
 
@@ -4159,6 +4224,14 @@ sub config_to_command {
     }
     push @$machineFlags, "type=${machine_type_min}";
 
+    if ($onevirtiofs && !$conf->{numa}) {
+	# kvm: '-machine memory-backend' and '-numa memdev' properties are
+	# mutually exclusive
+	push @$devices, '-object', 'memory-backend-file,id=virtiofs-mem'
+	    .",size=$conf->{memory}M,mem-path=/dev/shm,share=on";
+	push @$machineFlags, 'memory-backend=virtiofs-mem';
+    }
+
     push @$cmd, @$devices;
     push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
     push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
@@ -4185,6 +4258,72 @@ sub config_to_command {
     return wantarray ? ($cmd, $vollist, $spice_port, $pci_devices) : $cmd;
 }
 
+sub start_virtiofs {
+    my ($vmid, $fsid, $virtiofs) = @_;
+
+    my $dir_list = PVE::Mapping::DIR::find_on_current_node($virtiofs->{dirid});
+
+    if (!$dir_list || scalar($dir_list->@*) != 1) {
+	die "virtiofs needs exactly one mapping for this node\n";
+    }
+
+    eval {
+	PVE::Mapping::DIR::assert_valid($dir_list->[0]);
+    };
+    if (my $err = $@) {
+	die "Directory Mapping invalid: $err\n";
+    }
+
+    my $dir_cfg = $dir_list->[0];
+    my $path = $dir_cfg->{path};
+    my $socket_path_root = "/var/run/virtiofsd";
+    mkdir $socket_path_root;
+    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
+    unlink($socket_path);
+    my $socket = IO::Socket::UNIX->new(
+	Type => SOCK_STREAM,
+	Local => $socket_path,
+	Listen => 1,
+    ) or die "cannot create socket - $!\n";
+
+    my $flags = fcntl($socket, F_GETFD, 0)
+	or die "failed to get file descriptor flags: $!\n";
+    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
+	or die "failed to remove FD_CLOEXEC from file descriptor\n";
+
+    my $fd = $socket->fileno();
+
+    my $virtiofsd_bin = '/usr/libexec/virtiofsd';
+
+    my $pid = fork();
+    if ($pid == 0) {
+	for my $fd_loop (3 .. POSIX::sysconf( &POSIX::_SC_OPEN_MAX )) {
+	    POSIX::close($fd_loop) if ($fd_loop != $fd);
+	}
+	my $pid2 = fork();
+	if ($pid2 == 0) {
+	    my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
+	    push @$cmd, '--xattr' if ($dir_cfg->{xattr});
+	    push @$cmd, '--posix-acl' if ($dir_cfg->{acl});
+	    push @$cmd, '--announce-submounts' if ($dir_cfg->{submounts});
+	    push @$cmd, '--allow-direct-io' if ($virtiofs->{'direct-io'});
+	    push @$cmd, "--cache=$virtiofs->{'cache'}" if ($virtiofs->{'cache'});
+	    run_command($cmd);
+	    POSIX::_exit(0);
+	} elsif (!defined($pid2)) {
+	    die "could not fork to start virtiofsd\n";
+	} else {
+	    POSIX::_exit(0);
+	}
+    } elsif (!defined($pid)) {
+	die "could not fork to start virtiofsd\n";
+    }
+
+    # return socket to keep it alive,
+    # so that qemu will wait for virtiofsd to start
+    return $socket;
+}
+
 sub check_rng_source {
     my ($source) = @_;
 
@@ -5740,6 +5879,19 @@ sub vm_start_nolock {
     my ($cmd, $vollist, $spice_port, $pci_devices) = config_to_command($storecfg, $vmid,
 	$conf, $defaults, $forcemachine, $forcecpu, $params->{'pbs-backing'});
 
+    my @sockets;
+    for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
+	my $virtiofsstr = "virtiofs$i";
+
+	next if !$conf->{$virtiofsstr};
+	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$virtiofsstr});
+	next if !$virtiofs;
+
+
+	my $socket = start_virtiofs($vmid, $i, $virtiofs);
+	push @sockets, $socket;
+    }
+
     my $migration_ip;
     my $get_migration_ip = sub {
 	my ($nodename) = @_;
@@ -6093,6 +6245,11 @@ sub vm_start_nolock {
 
     PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-start');
 
+    foreach my $socket (@sockets) {
+	shutdown($socket, 2);
+	close($socket);
+    }
+
     return $res;
 }
 
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index 0601dd6..3b58b36 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -278,6 +278,16 @@ sub config {
 
     die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
 
+    my $onevirtiofs = 0;
+    for (my $i = 0; $i < PVE::QemuServer::max_virtiofs(); $i++) {
+	my $virtiofsstr = "virtiofs$i";
+	next if !$conf->{$virtiofsstr};
+	my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$virtiofsstr});
+	if ($virtiofs) {
+	    $onevirtiofs = 1;
+	}
+    }
+
     if ($conf->{numa}) {
 
 	my $numa_totalmemory = undef;
@@ -290,7 +300,8 @@ sub config {
 	    my $numa_memory = $numa->{memory};
 	    $numa_totalmemory += $numa_memory;
 
-	    my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
+	    my $memdev = $onevirtiofs ? "virtiofs-mem$i" : "ram-node$i";
+	    my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
 
 	    # cpus
 	    my $cpulists = $numa->{cpus};
@@ -315,7 +326,7 @@ sub config {
 	    }
 
 	    push @$cmd, '-object', $mem_object;
-	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	}
 
 	die "total memory for NUMA nodes must be equal to vm static memory\n"
@@ -329,13 +340,13 @@ sub config {
 		die "host NUMA node$i doesn't exist\n"
 		    if !host_numanode_exists($i) && $conf->{hugepages};
 
-		my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
-		push @$cmd, '-object', $mem_object;
-
 		my $cpus = ($cores * $i);
 		$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
 
-		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+		my $memdev = $onevirtiofs ? "virtiofs-mem$i" : "ram-node$i";
+		my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
+		push @$cmd, '-object', $mem_object;
+		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	    }
 	}
     }
@@ -364,6 +375,8 @@ sub print_mem_object {
 	my $path = hugepages_mount_path($hugepages_size);
 
 	return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
+    } elsif ($id =~ m/^virtiofs-mem/) {
+	return "memory-backend-file,id=$id,size=${size}M,mem-path=/dev/shm,share=on";
     } else {
 	return "memory-backend-ram,id=$id,size=${size}M";
     }
diff --git a/debian/control b/debian/control
index 49f67b2..f008a9b 100644
--- a/debian/control
+++ b/debian/control
@@ -53,6 +53,7 @@ Depends: dbus,
          socat,
          swtpm,
          swtpm-tools,
+         virtiofsd,
          ${misc:Depends},
          ${perl:Depends},
          ${shlibs:Depends},
-- 
2.39.2

* [pve-devel] [PATCH qemu-server v6 2/3] Permission check for virtiofs directory access
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (3 preceding siblings ...)
  2023-07-06 10:54 ` [pve-devel] [PATCH qemu-server v6 1/3] feature #1027: virtio-fs support Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH qemu-server v6 3/3] check_local_resources: virtiofs Markus Frank
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel
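
Creating or updating a virtiofs<N> entry now requires VM.Config.Disk on the
VM plus Mapping.Use on /mapping/dir/<dirid>; root@pam skips the mapping
check. As a rough sketch of how such access could be granted (user, mapping
ID and the built-in role name below are illustrative assumptions):

```
pveum acl modify /mapping/dir/some-dir-id \
    --users alice@pve --roles PVEMappingUser
```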

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/API2/Qemu.pm | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index d0c199b..2048e4a 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -585,6 +585,19 @@ my $check_vm_create_serial_perm = sub {
     return 1;
 };
 
+my sub check_vm_dir_perm {
+    my ($rpcenv, $authuser, $param) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    foreach my $opt (keys %{$param}) {
+	next if $opt !~ m/^virtiofs\d+$/;
+	my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $param->{$opt});
+	$rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+    }
+    return 1;
+};
+
 my sub check_usb_perm {
     my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
 
@@ -686,6 +699,8 @@ my $check_vm_modify_config_perm = sub {
 	    # the user needs Disk and PowerMgmt privileges to change the vmstate
 	    # also needs privileges on the storage, that will be checked later
 	    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk', 'VM.PowerMgmt' ]);
+	} elsif ($opt =~ m/^virtiofs\d$/) {
+	    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
 	} else {
 	    # catches args, lock, etc.
 	    # new options will be checked here
@@ -924,6 +939,7 @@ __PACKAGE__->register_method({
 
 	    &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, $pool, [ keys %$param]);
 
+	    check_vm_dir_perm($rpcenv, $authuser, $param);
 	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
@@ -1646,6 +1662,8 @@ my $update_vm_api  = sub {
 
     &$check_vm_modify_config_perm($rpcenv, $authuser, $vmid, undef, [keys %$param]);
 
+    check_vm_dir_perm($rpcenv, $authuser, $param);
+
     &$check_storage_access($rpcenv, $authuser, $storecfg, $vmid, $param);
 
     PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
-- 
2.39.2

* [pve-devel] [PATCH qemu-server v6 3/3] check_local_resources: virtiofs
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (4 preceding siblings ...)
  2023-07-06 10:54 ` [pve-devel] [PATCH qemu-server v6 2/3] Permission check for virtiofs directory access Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 1/5] api: add resource map api endpoints for directories Markus Frank
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/QemuServer.pm | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 3a8b4c5..8914154 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2703,6 +2703,7 @@ sub check_local_resources {
     my $nodelist = PVE::Cluster::get_nodelist();
     my $pci_map = PVE::Mapping::PCI::config();
     my $usb_map = PVE::Mapping::USB::config();
+    my $dir_map = PVE::Mapping::DIR::config();
 
     my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
 
@@ -2714,6 +2715,8 @@ sub check_local_resources {
 		$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
 	    } elsif ($type eq 'usb') {
 		$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+	    } elsif ($type eq 'dir') {
+		$entry = PVE::Mapping::DIR::get_node_mapping($dir_map, $id, $node);
 	    }
 	    if (!scalar($entry->@*)) {
 		push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2742,9 +2745,14 @@ sub check_local_resources {
 		push @$mapped_res, $k;
 	    }
 	}
+	if ($k =~ m/^virtiofs/) {
+	    my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+	    $add_missing_mapping->('dir', $k, $entry->{dirid});
+	    push @$mapped_res, $k;
+	}
 	# sockets are safe: they will recreated be on the target side post-migrate
 	next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
-	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
     }
 
     die "VM uses local resources\n" if scalar @loc_res && !$noerr;
-- 
2.39.2

* [pve-devel] [PATCH manager v6 1/5] api: add resource map api endpoints for directories
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (5 preceding siblings ...)
  2023-07-06 10:54 ` [pve-devel] [PATCH qemu-server v6 3/3] check_local_resources: virtiofs Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 2/5] ui: add edit window for dir mappings Markus Frank
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel
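
The new endpoints mirror the existing PCI/USB mapping API under
/cluster/mapping/dir (index, get, create, update and delete). As an
illustration (ID, node and path are placeholders; exact option handling
follows the schema generated from PVE::Mapping::DIR), a mapping could be
created with:

```
pvesh create /cluster/mapping/dir --id some-dir-id \
    --map node=node1,path=/mnt/share,submounts=1
```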

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/API2/Cluster/Mapping.pm       |   7 +
 PVE/API2/Cluster/Mapping/DIR.pm   | 299 ++++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile |   3 +-
 3 files changed, 308 insertions(+), 1 deletion(-)
 create mode 100644 PVE/API2/Cluster/Mapping/DIR.pm

diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..c3674604 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -5,6 +5,7 @@ use warnings;
 
 use PVE::API2::Cluster::Mapping::PCI;
 use PVE::API2::Cluster::Mapping::USB;
+use PVE::API2::Cluster::Mapping::DIR;
 
 use base qw(PVE::RESTHandler);
 
@@ -18,6 +19,11 @@ __PACKAGE__->register_method ({
     path => 'usb',
 });
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Cluster::Mapping::DIR",
+    path => 'dir',
+});
+
 __PACKAGE__->register_method ({
     name => 'index',
     path => '',
@@ -43,6 +49,7 @@ __PACKAGE__->register_method ({
 	my $result = [
 	    { name => 'pci' },
 	    { name => 'usb' },
+	    { name => 'dir' },
 	];
 
 	return $result;
diff --git a/PVE/API2/Cluster/Mapping/DIR.pm b/PVE/API2/Cluster/Mapping/DIR.pm
new file mode 100644
index 00000000..bb28b612
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/DIR.pm
@@ -0,0 +1,299 @@
+package PVE::API2::Cluster::Mapping::DIR;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::Mapping::DIR ();
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+    name => 'index',
+    path => '',
+    method => 'GET',
+    # only proxy if we give the 'check-node' parameter
+    proxyto_callback => sub {
+	my ($rpcenv, $proxyto, $param) = @_;
+	return $param->{'check-node'} // 'localhost';
+    },
+    description => "List DIR Hardware Mapping",
+    permissions => {
+	description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
+	    " 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
+	user => 'all',
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    'check-node' => get_standard_option('pve-node', {
+		description => "If given, checks the configurations on the given node for ".
+		    "correctness, and adds relevant diagnostics for the devices to the response.",
+		optional => 1,
+	    }),
+	},
+    },
+    returns => {
+	type => 'array',
+	items => {
+	    type => "object",
+	    properties => {
+		id => {
+		    type => 'string',
+		    description => "The logical ID of the mapping."
+		},
+		map => {
+		    type => 'array',
+		    description => "The entries of the mapping.",
+		    items => {
+			type => 'string',
+			description => "A mapping for a node.",
+		    },
+		},
+		description => {
+		    type => 'string',
+		    description => "A description of the logical mapping.",
+		},
+		checks => {
+		    type => "array",
+		    optional => 1,
+		    description => "A list of checks, only present if 'check_node' is set.",
+		    items => {
+			type => 'object',
+			properties => {
+			    severity => {
+				type => "string",
+				enum => ['warning', 'error'],
+				description => "The severity of the error",
+			    },
+			    message => {
+				type => "string",
+				description => "The message of the error",
+			    },
+			},
+		    }
+		},
+	    },
+	},
+	links => [ { rel => 'child', href => "{id}" } ],
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $check_node = $param->{'check-node'};
+	my $local_node = PVE::INotify::nodename();
+
+	die "wrong node to check - $check_node != $local_node\n"
+	    if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
+
+	my $cfg = PVE::Mapping::DIR::config();
+
+	my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
+
+	my $res = [];
+	for my $id (keys $cfg->{ids}->%*) {
+	    next if !$rpcenv->check_any($authuser, "/mapping/dir/$id", $can_see_mapping_privs, 1);
+	    next if !$cfg->{ids}->{$id};
+
+	    my $entry = dclone($cfg->{ids}->{$id});
+	    $entry->{id} = $id;
+	    $entry->{digest} = $cfg->{digest};
+
+	    if (defined($check_node)) {
+		$entry->{checks} = [];
+		if (my $mappings = PVE::Mapping::DIR::get_node_mapping($cfg, $id, $check_node)) {
+		    if (!scalar($mappings->@*)) {
+			push $entry->{checks}->@*, {
+			    severity => 'warning',
+			    message => "No mapping for node $check_node.",
+			};
+		    }
+		    for my $mapping ($mappings->@*) {
+			eval { PVE::Mapping::DIR::assert_valid($id, $mapping) };
+			if (my $err = $@) {
+			    push $entry->{checks}->@*, {
+				severity => 'error',
+				message => "Invalid configuration: $err",
+			    };
+			}
+		    }
+		}
+	    }
+
+	    push @$res, $entry;
+	}
+
+	return $res;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'get',
+    protected => 1,
+    path => '{id}',
+    method => 'GET',
+    description => "Get DIR Mapping.",
+    permissions => {
+	check =>['or',
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Use']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Audit']],
+	],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'object' },
+    code => sub {
+	my ($param) = @_;
+
+	my $cfg = PVE::Mapping::DIR::config();
+	my $id = $param->{id};
+
+	my $entry = $cfg->{ids}->{$id};
+	die "mapping '$param->{id}' not found\n" if !defined($entry);
+
+	my $data = dclone($entry);
+
+	$data->{digest} = $cfg->{digest};
+
+	return $data;
+    }});
+
+__PACKAGE__->register_method ({
+    name => 'create',
+    protected => 1,
+    path => '',
+    method => 'POST',
+    description => "Create a new hardware mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::DIR->createSchema(1),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = extract_param($param, 'id');
+
+	my $plugin = PVE::Mapping::DIR->lookup('dir');
+	my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	PVE::Mapping::DIR::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::DIR::config();
+
+	    die "dir ID '$id' already defined\n" if defined($cfg->{ids}->{$id});
+
+	    $cfg->{ids}->{$id} = $opts;
+
+	    PVE::Mapping::DIR::write_dir_config($cfg);
+
+	}, "create hardware mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'update',
+    protected => 1,
+    path => '{id}',
+    method => 'PUT',
+    description => "Update a hardware mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::DIR->updateSchema(),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $digest = extract_param($param, 'digest');
+	my $delete = extract_param($param, 'delete');
+	my $id = extract_param($param, 'id');
+
+	if ($delete) {
+	    $delete = [ PVE::Tools::split_list($delete) ];
+	}
+
+	PVE::Mapping::DIR::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::DIR::config();
+
+	    PVE::Tools::assert_if_modified($cfg->{digest}, $digest) if defined($digest);
+
+	    die "dir ID '$id' does not exist\n" if !defined($cfg->{ids}->{$id});
+
+	    my $plugin = PVE::Mapping::DIR->lookup('dir');
+	    my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	    my $data = $cfg->{ids}->{$id};
+
+	    my $options = $plugin->private()->{options}->{dir};
+	    PVE::SectionConfig::delete_from_config($data, $options, $opts, $delete);
+
+	    $data->{$_} = $opts->{$_} for keys $opts->%*;
+
+	    PVE::Mapping::DIR::write_dir_config($cfg);
+
+	}, "update hardware mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'delete',
+    protected => 1,
+    path => '{id}',
+    method => 'DELETE',
+    description => "Remove Hardware Mapping.",
+    permissions => {
+	check => [ 'perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'null' },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = $param->{id};
+
+	PVE::Mapping::DIR::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::DIR::config();
+
+	    if ($cfg->{ids}->{$id}) {
+		delete $cfg->{ids}->{$id};
+	    }
+
+	    PVE::Mapping::DIR::write_dir_config($cfg);
+
+	}, "delete dir mapping failed");
+
+	return;
+    }
+});
+
+1;
diff --git a/PVE/API2/Cluster/Mapping/Makefile b/PVE/API2/Cluster/Mapping/Makefile
index e7345ab4..b543a09d 100644
--- a/PVE/API2/Cluster/Mapping/Makefile
+++ b/PVE/API2/Cluster/Mapping/Makefile
@@ -4,7 +4,8 @@ include ../../../../defines.mk
 # ensure we do not conflict with files shipped by pve-cluster!!
 PERLSOURCE= 	\
 	PCI.pm	\
-	USB.pm
+	USB.pm	\
+	DIR.pm
 
 all:
 
-- 
2.39.2

* [pve-devel] [PATCH manager v6 2/5] ui: add edit window for dir mappings
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (6 preceding siblings ...)
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 1/5] api: add resource map api endpoints for directories Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 3/5] ui: ResourceMapTree for DIR Markus Frank
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile             |   1 +
 www/manager6/window/DIRMapEdit.js | 186 ++++++++++++++++++++++++++++++
 2 files changed, 187 insertions(+)
 create mode 100644 www/manager6/window/DIRMapEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 5b455c80..6c7b2211 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -128,6 +128,7 @@ JSSRC= 							\
 	window/TreeSettingsEdit.js			\
 	window/PCIMapEdit.js				\
 	window/USBMapEdit.js				\
+	window/DIRMapEdit.js                            \
 	ha/Fencing.js					\
 	ha/GroupEdit.js					\
 	ha/GroupSelector.js				\
diff --git a/www/manager6/window/DIRMapEdit.js b/www/manager6/window/DIRMapEdit.js
new file mode 100644
index 00000000..7d810dcf
--- /dev/null
+++ b/www/manager6/window/DIRMapEdit.js
@@ -0,0 +1,186 @@
+Ext.define('PVE.window.DIRMapEditWindow', {
+    extend: 'Proxmox.window.Edit',
+
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    cbindData: function(initialConfig) {
+	let me = this;
+	me.isCreate = !me.name;
+	me.method = me.isCreate ? 'POST' : 'PUT';
+	me.hideMapping = !!me.entryOnly;
+	me.hideComment = me.name && !me.entryOnly;
+	me.hideNodeSelector = me.nodename || me.entryOnly;
+	me.hideNode = !me.nodename || !me.hideNodeSelector;
+	return {
+	    name: me.name,
+	    nodename: me.nodename,
+	};
+    },
+
+    submitUrl: function(_url, data) {
+	let me = this;
+	let name = me.isCreate ? '' : me.name;
+	return `/cluster/mapping/dir/${name}`;
+    },
+
+    title: gettext('Add DIR mapping'),
+
+    onlineHelp: 'resource_mapping',
+
+    method: 'POST',
+
+    controller: {
+	xclass: 'Ext.app.ViewController',
+
+	onGetValues: function(values) {
+	    let me = this;
+	    let view = me.getView();
+	    values.node ??= view.nodename;
+
+	    let name = values.name;
+	    let description = values.description;
+	    delete values.description;
+	    delete values.name;
+
+	    let map = [];
+	    if (me.originalMap) {
+		map = PVE.Parser.filterPropertyStringList(me.originalMap, (e) => e.node !== values.node);
+	    }
+	    if (values.path) {
+		map.push(PVE.Parser.printPropertyString(values));
+	    }
+
+	    values = { map };
+	    if (description) {
+		values.description = description;
+	    }
+
+	    if (view.isCreate) {
+		values.id = name;
+	    }
+	    return values;
+	},
+
+	onSetValues: function(values) {
+	    let me = this;
+	    let view = me.getView();
+	    me.originalMap = [...values.map];
+	    let configuredNodes = [];
+	    PVE.Parser.filterPropertyStringList(values.map, (e) => {
+		configuredNodes.push(e.node);
+		if (e.node === view.nodename) {
+		    values = e;
+		}
+		return false;
+	    });
+
+	    me.lookup('nodeselector').disallowedNodes = configuredNodes;
+	    if (values.path) {
+		values.usb = 'path';
+	    }
+
+	    return values;
+	},
+
+	init: function(view) {
+	    let me = this;
+
+	    if (!view.nodename) {
+		//throw "no nodename given";
+	    }
+	},
+    },
+
+    items: [
+	{
+	    xtype: 'inputpanel',
+	    onGetValues: function(values) {
+		return this.up('window').getController().onGetValues(values);
+	    },
+
+	    onSetValues: function(values) {
+		return this.up('window').getController().onSetValues(values);
+	    },
+
+	    column1: [
+		{
+		    xtype: 'pmxDisplayEditField',
+		    fieldLabel: gettext('Name'),
+		    cbind: {
+			editable: '{!name}',
+			value: '{name}',
+			submitValue: '{isCreate}',
+		    },
+		    name: 'name',
+		    allowBlank: false,
+		},
+		{
+		    xtype: 'pveNodeSelector',
+		    reference: 'nodeselector',
+		    fieldLabel: gettext('Node'),
+		    name: 'node',
+		    cbind: {
+			disabled: '{hideNodeSelector}',
+			hidden: '{hideNodeSelector}',
+		    },
+		    allowBlank: false,
+		},
+	    ],
+
+	    column2: [
+		{
+		    xtype: 'fieldcontainer',
+		    defaultType: 'radiofield',
+		    layout: 'fit',
+		    cbind: {
+			disabled: '{hideMapping}',
+			hidden: '{hideMapping}',
+		    },
+		    items: [
+			{
+			    xtype: 'textfield',
+			    name: 'path',
+			    reference: 'path',
+			    value: '',
+			    emptyText: gettext('/some/path'),
+			    cbind: {
+				nodename: '{nodename}',
+				disabled: '{hideMapping}',
+			    },
+			    allowBlank: false,
+			    fieldLabel: gettext('Path'),
+			},
+			{
+			    xtype: 'proxmoxcheckbox',
+			    name: 'xattr',
+			    fieldLabel: gettext('xattr'),
+			},
+			{
+			    xtype: 'proxmoxcheckbox',
+			    name: 'acl',
+			    fieldLabel: gettext('acl'),
+			},
+			{
+			    xtype: 'proxmoxcheckbox',
+			    name: 'submounts',
+			    fieldLabel: gettext('submounts'),
+			},
+		    ],
+		}
+	    ],
+
+	    columnB: [
+		{
+		    xtype: 'proxmoxtextfield',
+		    fieldLabel: gettext('Comment'),
+		    submitValue: true,
+		    name: 'description',
+		    cbind: {
+			disabled: '{hideComment}',
+			hidden: '{hideComment}',
+		    },
+		},
+	    ],
+	},
+    ],
+});
-- 
2.39.2

* [pve-devel] [PATCH manager v6 3/5] ui: ResourceMapTree for DIR
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (7 preceding siblings ...)
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 2/5] ui: add edit window for dir mappings Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 4/5] ui: form: add DIRMapSelector Markus Frank
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile         |  1 +
 www/manager6/dc/Config.js     | 10 +++++++
 www/manager6/dc/DIRMapView.js | 50 +++++++++++++++++++++++++++++++++++
 3 files changed, 61 insertions(+)
 create mode 100644 www/manager6/dc/DIRMapView.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 6c7b2211..9078c130 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -179,6 +179,7 @@ JSSRC= 							\
 	dc/RealmSyncJob.js				\
 	dc/PCIMapView.js				\
 	dc/USBMapView.js				\
+	dc/DIRMapView.js				\
 	lxc/CmdMenu.js					\
 	lxc/Config.js					\
 	lxc/CreateWizard.js				\
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index 04ed04f0..f7ba1799 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -312,6 +312,16 @@ Ext.define('PVE.dc.Config', {
 			    title: gettext('USB Devices'),
 			    flex: 1,
 			},
+			{
+			    xtype: 'splitter',
+			    collapsible: false,
+			    performCollapse: false,
+			},
+			{
+			    xtype: 'pveDcDIRMapView',
+			    title: gettext('Directories'),
+			    flex: 1,
+			},
 		    ],
 		},
 	    );
diff --git a/www/manager6/dc/DIRMapView.js b/www/manager6/dc/DIRMapView.js
new file mode 100644
index 00000000..a4304c1b
--- /dev/null
+++ b/www/manager6/dc/DIRMapView.js
@@ -0,0 +1,50 @@
+Ext.define('pve-resource-dir-tree', {
+    extend: 'Ext.data.Model',
+    idProperty: 'internalId',
+    fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
+});
+
+Ext.define('PVE.dc.DIRMapView', {
+    extend: 'PVE.tree.ResourceMapTree',
+    alias: 'widget.pveDcDIRMapView',
+
+    editWindowClass: 'PVE.window.DIRMapEditWindow',
+    baseUrl: '/cluster/mapping/dir',
+    mapIconCls: 'fa fa-folder',
+    entryIdProperty: 'path',
+
+    store: {
+	sorters: 'text',
+	model: 'pve-resource-dir-tree',
+	data: {},
+    },
+
+    columns: [
+	{
+	    xtype: 'treecolumn',
+	    text: gettext('ID/Node'),
+	    dataIndex: 'text',
+	    width: 200,
+	},
+	{
+	    text: gettext('xattr'),
+	    dataIndex: 'xattr',
+	},
+	{
+	    text: gettext('acl'),
+	    dataIndex: 'acl',
+	},
+	{
+	    text: gettext('submounts'),
+	    dataIndex: 'submounts',
+	},
+	{
+	    header: gettext('Comment'),
+	    dataIndex: 'description',
+	    renderer: function(value, _meta, record) {
+		return value ?? record.data.comment;
+	    },
+	    flex: 1,
+	},
+    ],
+});
-- 
2.39.2

* [pve-devel] [PATCH manager v6 4/5] ui: form: add DIRMapSelector
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (8 preceding siblings ...)
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 3/5] ui: ResourceMapTree for DIR Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 5/5] ui: added options to add virtio-fs to qemu config Markus Frank
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile               |  1 +
 www/manager6/form/DIRMapSelector.js | 63 +++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)
 create mode 100644 www/manager6/form/DIRMapSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 9078c130..5c8b4b8d 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -34,6 +34,7 @@ JSSRC= 							\
 	form/ContentTypeSelector.js			\
 	form/ControllerSelector.js			\
 	form/DayOfWeekSelector.js			\
+	form/DIRMapSelector.js                          \
 	form/DiskFormatSelector.js			\
 	form/DiskStorageSelector.js			\
 	form/EmailNotificationSelector.js		\
diff --git a/www/manager6/form/DIRMapSelector.js b/www/manager6/form/DIRMapSelector.js
new file mode 100644
index 00000000..6cfb89cf
--- /dev/null
+++ b/www/manager6/form/DIRMapSelector.js
@@ -0,0 +1,63 @@
+Ext.define('PVE.form.DIRMapSelector', {
+    extend: 'Proxmox.form.ComboGrid',
+    alias: 'widget.pveDIRMapSelector',
+
+    store: {
+	fields: ['name', 'path'],
+	filterOnLoad: true,
+	sorters: [
+	    {
+		property: 'name',
+		direction: 'ASC',
+	    },
+	],
+    },
+
+    allowBlank: false,
+    autoSelect: false,
+    displayField: 'id',
+    valueField: 'id',
+
+    listConfig: {
+	columns: [
+	    {
+		header: gettext('Directory ID'),
+		dataIndex: 'id',
+		flex: 1,
+	    },
+	    {
+		header: gettext('Comment'),
+		dataIndex: 'description',
+		flex: 1,
+	    },
+	],
+    },
+
+    setNodename: function(nodename) {
+	var me = this;
+
+	if (!nodename || me.nodename === nodename) {
+	    return;
+	}
+
+	me.nodename = nodename;
+
+	me.store.setProxy({
+	    type: 'proxmox',
+	    url: `/api2/json/cluster/mapping/dir?check-node=${nodename}`,
+	});
+
+	me.store.load();
+    },
+
+    initComponent: function() {
+	var me = this;
+
+	var nodename = me.nodename;
+	me.nodename = undefined;
+
+        me.callParent();
+
+	me.setNodename(nodename);
+    },
+});
-- 
2.39.2

* [pve-devel] [PATCH manager v6 5/5] ui: added options to add virtio-fs to qemu config
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (9 preceding siblings ...)
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 4/5] ui: form: add DIRMapSelector Markus Frank
@ 2023-07-06 10:54 ` Markus Frank
  2023-07-17  7:51 ` [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Christoph Heiss
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 19+ messages in thread
From: Markus Frank @ 2023-07-06 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile             |   1 +
 www/manager6/Utils.js             |   1 +
 www/manager6/qemu/HardwareView.js |  19 +++++
 www/manager6/qemu/VirtiofsEdit.js | 120 ++++++++++++++++++++++++++++++
 4 files changed, 141 insertions(+)
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 5c8b4b8d..c9b7484f 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -259,6 +259,7 @@ JSSRC= 							\
 	qemu/Smbios1Edit.js				\
 	qemu/SystemEdit.js				\
 	qemu/USBEdit.js					\
+	qemu/VirtiofsEdit.js				\
 	sdn/Browser.js					\
 	sdn/ControllerView.js				\
 	sdn/Status.js					\
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index a150e848..161d46bc 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1606,6 +1606,7 @@ Ext.define('PVE.Utils', {
 	serial: 4,
 	rng: 1,
 	tpmstate: 1,
+	virtiofs: 10,
     },
 
     // we can have usb6 and up only for specific machine/ostypes
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index 5b33b1e2..daa81d92 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -309,6 +309,16 @@ Ext.define('PVE.qemu.HardwareView', {
 	    never_delete: !caps.nodes['Sys.Console'],
 	    header: gettext("VirtIO RNG"),
 	};
+	for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+	    let confid = "virtiofs" + i.toString();
+	    rows[confid] = {
+		group: 50,
+		order: i,
+		iconCls: 'folder',
+		editor: 'PVE.qemu.VirtiofsEdit',
+		header: gettext('Virtiofs') + ' (' + confid +')',
+	    };
+	}
 
 	var sorterFn = function(rec1, rec2) {
 	    var v1 = rec1.data.key;
@@ -583,6 +593,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
 	    const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
 	    const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
+	    const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
 
 	    me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
 	    me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
@@ -592,6 +603,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
 	    efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
 	    me.down('#addTpmState').setDisabled(noSysConsolePerm || isAtLimit('tpmstate'));
+	    me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
 	    me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
 
 	    if (!rec) {
@@ -736,6 +748,13 @@ Ext.define('PVE.qemu.HardwareView', {
 				disabled: !caps.nodes['Sys.Console'],
 				handler: editorFactory('RNGEdit'),
 			    },
+			    {
+				text: gettext("Shared Filesystem"),
+				itemId: 'addVirtiofs',
+				iconCls: 'fa fa-folder',
+				disabled: !caps.nodes['Sys.Console'],
+				handler: editorFactory('VirtiofsEdit'),
+			    },
 			],
 		    }),
 		},
diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
new file mode 100644
index 00000000..f3af1e08
--- /dev/null
+++ b/www/manager6/qemu/VirtiofsEdit.js
@@ -0,0 +1,120 @@
+Ext.define('PVE.qemu.VirtiofsInputPanel', {
+    extend: 'Proxmox.panel.InputPanel',
+    xtype: 'pveVirtiofsInputPanel',
+    onlineHelp: 'qm_virtiofs',
+
+    insideWizard: false,
+
+    onGetValues: function(values) {
+	var me = this;
+	var confid = me.confid;
+	var params = {};
+	params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
+	return params;
+    },
+
+    setSharedfiles: function(confid, data) {
+	var me = this;
+	me.confid = confid;
+	me.virtiofs = data;
+	me.setValues(me.virtiofs);
+    },
+    initComponent: function() {
+	let me = this;
+
+	me.nodename = me.pveSelNode.data.node;
+	if (!me.nodename) {
+	    throw "no node name specified";
+	}
+	me.items = [
+	    {
+		xtype: 'pveDIRMapSelector',
+		emptyText: 'dirid',
+		nodename: me.nodename,
+		fieldLabel: gettext('Directory ID'),
+		name: 'dirid',
+		allowBlank: false,
+	    },
+	    {
+		xtype: 'proxmoxtextfield',
+		emptyText: 'tag name',
+		fieldLabel: gettext('Mount-Tag'),
+		name: 'tag',
+		allowBlank: false,
+	    },
+	    {
+		xtype: 'proxmoxKVComboBox',
+		value: '__default__',
+		fieldLabel: gettext('Cache'),
+		name: 'cache',
+		deleteEmpty: false,
+		comboItems: [
+		    ['__default__', 'Default (auto)'],
+		    ['auto', 'auto'],
+		    ['always', 'always'],
+		    ['never', 'never'],
+		],
+		allowBlank: false,
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('Direct-io'),
+		name: 'direct-io',
+	    },
+	];
+
+	me.virtiofs = {};
+	me.confid = 'virtiofs0';
+	me.callParent();
+    },
+});
+
+Ext.define('PVE.qemu.VirtiofsEdit', {
+    extend: 'Proxmox.window.Edit',
+
+    subject: gettext('Filesystem Passthrough'),
+
+    initComponent: function() {
+	var me = this;
+
+	me.isCreate = !me.confid;
+
+	var ipanel = Ext.create('PVE.qemu.VirtiofsInputPanel', {
+	    confid: me.confid,
+	    pveSelNode: me.pveSelNode,
+	    isCreate: me.isCreate,
+	});
+
+	Ext.applyIf(me, {
+	    items: ipanel,
+	});
+
+	me.callParent();
+
+	me.load({
+	    success: function(response) {
+	        me.conf = response.result.data;
+	        var i, confid;
+	        if (!me.isCreate) {
+		    var value = me.conf[me.confid];
+		    var virtiofs = PVE.Parser.parsePropertyString(value, "dirid");
+		    if (!virtiofs) {
+			Ext.Msg.alert(gettext('Error'), 'Unable to parse virtiofs options');
+			me.close();
+			return;
+		    }
+		    ipanel.setSharedfiles(me.confid, virtiofs);
+		} else {
+		    for (i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+		        confid = 'virtiofs' + i.toString();
+		        if (!Ext.isDefined(me.conf[confid])) {
+			    me.confid = confid;
+			    break;
+			}
+		    }
+		    ipanel.setSharedfiles(me.confid, {});
+		}
+	    },
+	});
+    },
+});
-- 
2.39.2





^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (10 preceding siblings ...)
  2023-07-06 10:54 ` [pve-devel] [PATCH manager v6 5/5] ui: added options to add virtio-fs to qemu config Markus Frank
@ 2023-07-17  7:51 ` Christoph Heiss
  2023-07-18 12:56 ` Friedrich Weber
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 19+ messages in thread
From: Christoph Heiss @ 2023-07-17  7:51 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank


Tested the whole series, using Debian 12 and Windows 11 as guest
machines. Works fine with both.

Two small things overall that I noticed, though both only really pertain
to the documentation:

  * It should be explained in the docs and via a tooltip in the GUI
    what the "mount tag" is / does, since this might not be immediately
    obvious to all users (see the small example below).

  * It should also be mentioned in the docs that for it to work under
    Windows, the WinFsp [0] drivers are needed, in addition to the
    virtio ones.
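
For the mount tag specifically, even a minimal example in the docs would
probably help -- something along these lines (names made up), just to show
that the tag is simply the identifier the guest uses to refer to the share:

```
# host side: pick an arbitrary tag name for the virtiofs entry
qm set 100 -virtiofs0 mydirid,tag=myshare

# inside a Linux guest: mount the share by that tag
mount -t virtiofs myshare /mnt/myshare
```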

Consider the series:

Tested-by: Christoph Heiss <c.heiss@proxmox.com>

[0] https://winfsp.dev/

On Thu, Jul 06, 2023 at 12:54:10PM +0200, Markus Frank wrote:
>
> cluster:
>
> Markus Frank (1):
>   add mapping/dir.cfg for resource mapping
>
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
>
>
> guest-common:
>
> Markus Frank (1):
>   add DIR mapping config
>
>  src/Makefile           |   1 +
>  src/PVE/Mapping/DIR.pm | 175 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 176 insertions(+)
>  create mode 100644 src/PVE/Mapping/DIR.pm
>
>
> qemu-server:
>
> v6:
>  * added virtiofsd dependency
>  * 2 new patches:
>     * Permission check for virtiofs directory access
>     * check_local_resources: virtiofs
>
> v5:
>  * allow numa settings with virtio-fs
>  * added direct-io & cache settings
>  * changed to rust implementation of virtiofsd
>  * made double fork and closed all file descriptor so that the lockfile
>  gets released.
>
> v3:
>  * created own socket and get file descriptor for virtiofsd
>  so there is no race between starting virtiofsd & qemu
>  * added TODO to replace virtiofsd with rust implementation in bookworm
>  (I packaged the rust implementation for bookworm & the C implementation
>  in qemu will be removed in qemu 8.0)
>
> v2:
>  * replaced sharedfiles_fmt path in qemu-server with dirid:
>  * user can use the dirid to specify the directory without requiring root access
>
> Markus Frank (3):
>   feature #1027: virtio-fs support
>   Permission check for virtiofs directory access
>   check_local_resources: virtiofs
>
>  PVE/API2/Qemu.pm         |  18 +++++
>  PVE/QemuServer.pm        | 167 ++++++++++++++++++++++++++++++++++++++-
>  PVE/QemuServer/Memory.pm |  25 ++++--
>  debian/control           |   1 +
>  4 files changed, 204 insertions(+), 7 deletions(-)
>
>
> manager:
>
> v6: completly new except "ui: added options to add virtio-fs to qemu config"
>
> Markus Frank (5):
>   api: add resource map api endpoints for directories
>   ui: add edit window for dir mappings
>   ui: ResourceMapTree for DIR
>   ui: form: add DIRMapSelector
>   ui: added options to add virtio-fs to qemu config
>
>  PVE/API2/Cluster/Mapping.pm         |   7 +
>  PVE/API2/Cluster/Mapping/DIR.pm     | 299 ++++++++++++++++++++++++++++
>  PVE/API2/Cluster/Mapping/Makefile   |   3 +-
>  www/manager6/Makefile               |   4 +
>  www/manager6/Utils.js               |   1 +
>  www/manager6/dc/Config.js           |  10 +
>  www/manager6/dc/DIRMapView.js       |  50 +++++
>  www/manager6/form/DIRMapSelector.js |  63 ++++++
>  www/manager6/qemu/HardwareView.js   |  19 ++
>  www/manager6/qemu/VirtiofsEdit.js   | 120 +++++++++++
>  www/manager6/window/DIRMapEdit.js   | 186 +++++++++++++++++
>  11 files changed, 761 insertions(+), 1 deletion(-)
>  create mode 100644 PVE/API2/Cluster/Mapping/DIR.pm
>  create mode 100644 www/manager6/dc/DIRMapView.js
>  create mode 100644 www/manager6/form/DIRMapSelector.js
>  create mode 100644 www/manager6/qemu/VirtiofsEdit.js
>  create mode 100644 www/manager6/window/DIRMapEdit.js
>
> --
> 2.39.2
>
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>





^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH docs v6 1/1] added shared filesystem doc for virtio-fs
  2023-07-06 10:54 ` [pve-devel] [PATCH docs v6 1/1] added shared filesystem doc for virtio-fs Markus Frank
@ 2023-07-17  8:08   ` Christoph Heiss
  0 siblings, 0 replies; 19+ messages in thread
From: Christoph Heiss @ 2023-07-17  8:08 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank


w.r.t. the subject line: s/added/add/ -- it should not be in past tense

As mentioned in the cover letter reply, a sentence explaining what the
mount tag is and about the virtio/WinFsp drivers situation would also be
useful.

On Thu, Jul 06, 2023 at 12:54:13PM +0200, Markus Frank wrote:
>
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  qm.adoc | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 58 insertions(+), 2 deletions(-)
>
> diff --git a/qm.adoc b/qm.adoc
> index e35dbf0..00a0668 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -997,6 +997,61 @@ recommended to always use a limiter to avoid guests using too many host
>  resources. If desired, a value of '0' for `max_bytes` can be used to disable
>  all limits.
>
> +[[qm_virtiofs]]
> +Virtio-fs
> +~~~~~~~~~
> +
> +Virtio-fs is a shared file system, that enables sharing a directory between
> +host and guest VM while taking advantage of the locality of virtual machines
> +and the hypervisor to get a higher throughput than 9p.
Maybe add a sentence about availability/compatibility? E.g. it is
supported since Linux 5.4 from what I could quickly gather. Minimum
virtio drivers version for Windows would also be useful, I guess. There
are always some people who try to run some ancient software.

> +The parameter `hugepages` must be disabled to use virtio-fs.
This can probably be reworded a bit, to make it clear that this means
the VM configuration parameter.

Something like e.g.: "This feature is incompatible with the hugepages
feature. The `hugepages` VM configuration option must thus be
disabled if virtio-fs is to be used."

> +
> +Add mapping for Shared Directories
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +To add a mapping, go to the Resource Mapping tab in Datacenter in the WebUI,
> +use the API directly with pvesh as described in the
> +xref:resource_mapping[Resource Mapping] section,
> +or add the mapping to the configuration file /etc/pve/mapping/dir.cfg:
nit: The path should be surrounded with backticks for monospacing

> +
> +----
> +some-dir-id
> +	map node=node1,path=/share/,xattr=1,acl=1,submounts=1
> +	map node=node2,path=/share/,xattr=1
> +	map node=node3,path=/different-share-path/,submounts=1
> +	map node=node4,path=/foobar/
> +	map node=node5,path=/somewhere/,acl=1
> +----
> +
> +The Parameter `acl` automatically implies `xattr`, so there would be no need to
nit:   ^^^^^^^^^ should be lowercase

> +set `xattr` for node1 in this example.
> +Set `submounts` to `1` when using multiple file systems in the shared directory.
> +
> +Add virtiofs to VM
> +^^^^^^^^^^^^^^^^^^
> +
> +To share a directory with virtio-fs, you need to specify the directory ID
nit: s/with/using/      ^^^^

> +that has been configured in the Resource Mapping. Additionally, you can set
> +the `cache` option to either `always`, `never`, or `auto`, depending on your
> +requirements. If you want virtio-fs to honor the `O_DIRECT` flag, you can set the
> +`direct-io` parameter to `1`.
> +
> +----
> +qm set <vmid> -virtiofs0 dirid=<dirid>,tag=<mount tag>,cache=always,direct-io=1
> +qm set <vmid> -virtiofs1 <dirid>,tag=<mount tag>,cache=never
> +qm set <vmid> -virtiofs2 <dirid>,tag=<mount tag>
> +----
> +
> +To mount virtio-fs in a guest VM with the Linux kernel virtiofs driver, run the
> +following command:
> +
> +----
> +mount -t virtiofs <mount tag> <mount point>
> +----
> +
> +For more information on the virtiofsd parameters, see:
Maybe better written like:

"For more information on available virtiofsd parameters, see the
https://gitlab.com/virtio-fs/virtiofsd[GitLab project page]."

> +https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd]
> +
>  [[qm_bootorder]]
>  Device Boot Order
>  ~~~~~~~~~~~~~~~~~
> @@ -1600,8 +1655,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
>  # pvesh create /cluster/mapping/<type> <options>
>  ----
>
> -Where `<type>` is the hardware type (currently either `pci` or `usb`) and
> -`<options>` are the device mappings and other configuration parameters.
> +Where `<type>` is the hardware type (currently either `pci`, `usb` or
> +xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
> +configuration parameters.
>
>  Note that the options must include a map property with all identifying
>  properties of that hardware, so that it's possible to verify the hardware did
> --
> 2.39.2
>
>
>
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
>




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (11 preceding siblings ...)
  2023-07-17  7:51 ` [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Christoph Heiss
@ 2023-07-18 12:56 ` Friedrich Weber
  2023-07-19 12:08 ` Fabian Grünbichler
  2023-07-20  7:12 ` Fabian Grünbichler
  14 siblings, 0 replies; 19+ messages in thread
From: Friedrich Weber @ 2023-07-18 12:56 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Tested the following:

* created a mapping on a 3-node cluster, added mapping to PVE8 VM,
offline-migrated VM between cluster nodes, checked that `mount` inside
the VM mounts the correct host directory

* checked that `xattr=1` makes xattrs available in the guest, and
`acl=1` makes acls available in the guest

* added a non-privileged user with different combinations of
Mapping.Audit/Use/Modify and played around with modifying/using
directory mappings

Overall, it's working fine and I did not encounter major issues. Here
are a few things I noticed (somewhat sorted by priority in descending
order):

* after having started and stopped a VM with a shared filesystem a few
times, I noticed quite a few zombie virtiofsd processes; I guess these
would need to be cleaned up:

```
root       11121  0.0  3.5 251260 140924 ?       S    14:23   0:00 task
UPID:cl2:00002B6C:00056BEE:64B68425:qmstart:100:fred@pve:
root       11125  0.0  0.0      0     0 ?        Z    14:23   0:00  \_
[virtiofsd] <defunct>
root       12064  0.0  3.5 251180 140980 ?       S    14:28   0:00 task
UPID:cl2:00002F1D:0005E581:64B6855D:qmstart:100:fred@pve:
root       12067  0.0  0.0      0     0 ?        Z    14:28   0:00  \_
[virtiofsd] <defunct>
...
```
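
(just a guess, untested: if these are the intermediate children from the
double fork in start_virtiofs, reaping them in the forking parent might
already be enough, roughly like this)

```
# untested sketch of the double fork with the intermediate child reaped,
# so no zombie is left hanging off the task process
use POSIX ();

my $pid = fork() // die "fork failed: $!\n";
if ($pid == 0) {
    my $pid2 = fork() // die "fork failed: $!\n";
    if ($pid2 == 0) {
	# grandchild: run virtiofsd here; it gets reparented to init
	# once the intermediate child exits
	POSIX::_exit(0);
    }
    POSIX::_exit(0); # intermediate child exits right away
}
waitpid($pid, 0); # reap the intermediate child so it does not become a zombie
```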

* is it intended that the virtiofsd process is started as a child of the
qmstart task process, causing the task process to stay around as long as
the VM is up? This seemed a bit unexpected to me when I first read the
`ps` output, though I don't know whether there is a good alternative either.

* in the GUI, the Add->Shared Filesystem button is greyed out if I do
not have the Sys.Console privilege, but via the API I can create the
shared filesystem without Sys.Console and with just (I think)
VM.Config.Disk and Mapping.Use. I'm not sure, but it seems like the GUI
permission check is too strict and Sys.Console should not be required?

* in the GUI, I can add multiple shared directories with the same tag
but different dirids to a VM. In a quick test, it looked like the first
one took precedence. Not sure if there should be some kind of validation
logic here checking that no two virtiofs entries use the same tag?
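
(if so, maybe something roughly like this at config update time -- untested
sketch, reusing the existing pve-qm-virtiofs format)

```
# untested sketch: reject duplicate mount tags across all virtiofsN entries
my $seen_tags = {};
for (my $i = 0; $i < PVE::QemuServer::max_virtiofs(); $i++) {
    next if !$conf->{"virtiofs$i"};
    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{"virtiofs$i"});
    die "mount tag '$virtiofs->{tag}' is used by more than one virtiofs entry\n"
	if $seen_tags->{$virtiofs->{tag}}++;
}
```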

* in the GUI, if I add a shared filesystem, the dialog title is "Add:
Filesystem Passthrough"; this should probably be "Add: Shared
Filesystem" for consistency with the button text.

On 06/07/2023 12:54, Markus Frank wrote:
> cluster:
> 
> Markus Frank (1):
>   add mapping/dir.cfg for resource mapping
> 
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> 
> guest-common:
> 
> Markus Frank (1):
>   add DIR mapping config
> 
>  src/Makefile           |   1 +
>  src/PVE/Mapping/DIR.pm | 175 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 176 insertions(+)
>  create mode 100644 src/PVE/Mapping/DIR.pm
> 
> 
> qemu-server:
> 
> v6:
>  * added virtiofsd dependency
>  * 2 new patches:
>     * Permission check for virtiofs directory access
>     * check_local_resources: virtiofs
> 
> v5:
>  * allow numa settings with virtio-fs
>  * added direct-io & cache settings
>  * changed to rust implementation of virtiofsd
>  * made double fork and closed all file descriptor so that the lockfile
>  gets released.
> 
> v3:
>  * created own socket and get file descriptor for virtiofsd
>  so there is no race between starting virtiofsd & qemu
>  * added TODO to replace virtiofsd with rust implementation in bookworm
>  (I packaged the rust implementation for bookworm & the C implementation
>  in qemu will be removed in qemu 8.0)
> 
> v2:
>  * replaced sharedfiles_fmt path in qemu-server with dirid:
>  * user can use the dirid to specify the directory without requiring root access
> 
> Markus Frank (3):
>   feature #1027: virtio-fs support
>   Permission check for virtiofs directory access
>   check_local_resources: virtiofs
> 
>  PVE/API2/Qemu.pm         |  18 +++++
>  PVE/QemuServer.pm        | 167 ++++++++++++++++++++++++++++++++++++++-
>  PVE/QemuServer/Memory.pm |  25 ++++--
>  debian/control           |   1 +
>  4 files changed, 204 insertions(+), 7 deletions(-)
> 
> 
> manager:
> 
> v6: completly new except "ui: added options to add virtio-fs to qemu config"
> 
> Markus Frank (5):
>   api: add resource map api endpoints for directories
>   ui: add edit window for dir mappings
>   ui: ResourceMapTree for DIR
>   ui: form: add DIRMapSelector
>   ui: added options to add virtio-fs to qemu config
> 
>  PVE/API2/Cluster/Mapping.pm         |   7 +
>  PVE/API2/Cluster/Mapping/DIR.pm     | 299 ++++++++++++++++++++++++++++
>  PVE/API2/Cluster/Mapping/Makefile   |   3 +-
>  www/manager6/Makefile               |   4 +
>  www/manager6/Utils.js               |   1 +
>  www/manager6/dc/Config.js           |  10 +
>  www/manager6/dc/DIRMapView.js       |  50 +++++
>  www/manager6/form/DIRMapSelector.js |  63 ++++++
>  www/manager6/qemu/HardwareView.js   |  19 ++
>  www/manager6/qemu/VirtiofsEdit.js   | 120 +++++++++++
>  www/manager6/window/DIRMapEdit.js   | 186 +++++++++++++++++
>  11 files changed, 761 insertions(+), 1 deletion(-)
>  create mode 100644 PVE/API2/Cluster/Mapping/DIR.pm
>  create mode 100644 www/manager6/dc/DIRMapView.js
>  create mode 100644 www/manager6/form/DIRMapSelector.js
>  create mode 100644 www/manager6/qemu/VirtiofsEdit.js
>  create mode 100644 www/manager6/window/DIRMapEdit.js
> 




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH qemu-server v6 1/3] feature #1027: virtio-fs support
  2023-07-06 10:54 ` [pve-devel] [PATCH qemu-server v6 1/3] feature #1027: virtio-fs support Markus Frank
@ 2023-07-19 12:08   ` Fabian Grünbichler
  0 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2023-07-19 12:08 UTC (permalink / raw)
  To: Proxmox VE development discussion

On July 6, 2023 12:54 pm, Markus Frank wrote:
> adds support for sharing directories with a guest vm
> 
> virtio-fs needs virtiofsd to be started.
> 
> In order to start virtiofsd as a process (despite being a daemon it does not run
> in the background), a double-fork is used.
> 
> virtiofsd should close itself together with qemu.
> 
> There are the parameters dirid & tag
> and the optional parameters direct-io & cache.
> 
> The dirid gets mapped to the path on the current node.
> The tag parameter is for choosing the tag-name that is used with the
> mount command.
> 
> example config:
> ```
> virtiofs0: foo,tag=tag1,direct-io=1,cache=always
> virtiofs1: dirid=bar,tag=tag2,cache=never
> ```
> 
> For information on the optional parameters see there:
> https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

some information on why virtiofsd is incompatible with hugepages, and why
it needs a different memdev identifier, would be nice to have, either in
the commit message or as a comment.

nits/questions below apply mainly if you respin with the guest-common related
feedback addressed; if not, these can obviously be done as a follow-up as
well - but please also see the cover letter reply, the problems listed
there likely mean this patch in particular still needs some rework
anyway!

> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  PVE/QemuServer.pm        | 157 +++++++++++++++++++++++++++++++++++++++
>  PVE/QemuServer/Memory.pm |  25 +++++--
>  debian/control           |   1 +
>  3 files changed, 177 insertions(+), 6 deletions(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 940cdac..3a8b4c5 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -43,6 +43,7 @@ use PVE::PBSClient;
>  use PVE::RESTEnvironment qw(log_warn);
>  use PVE::RPCEnvironment;
>  use PVE::Storage;
> +use PVE::Mapping::DIR;
>  use PVE::SysFSTools;
>  use PVE::Systemd;
>  use PVE::Tools qw(run_command file_read_firstline file_get_contents dir_glob_foreach get_host_arch $IPV6RE);
> @@ -276,6 +277,35 @@ my $rng_fmt = {
>      },
>  };
>  
> +my $virtiofs_fmt = {
> +    'dirid' => {
> +	type => 'string',
> +	default_key => 1,
> +	description => "dirid of directory you want to share with the guest VM",
> +	format_description => "virtiofs-dirid",
> +    },

the other two mapping types call this 'mapping' and set a format and a
different format_description. Maybe we should be consistent?

nit: the description could just be

"Mapping identifier of the directory mapping to be shared with the
guest."

or something similar; "dirid of directory" is rather redundant, and
"you" seems wrong for a property description.

> +    'tag' => {
> +	type => 'string',
> +	description => "tag name for mounting in the guest VM",
> +	format_description => "virtiofs-tag",
> +    },
> +    'cache' => {
> +	type => 'string',
> +	description => "The caching policy the file system should use"
> +	    ." (auto, always, never).",
> +	format_description => "virtiofs-cache",
> +	enum => [qw(auto always never)],
> +	optional => 1,
> +    },
> +    'direct-io' => {
> +	type => 'boolean',
> +	description => "Honor the O_DIRECT flag passed down by guest applications",
> +	format_description => "virtiofs-directio",
> +	optional => 1,
> +    },

what about 'ro'?

Should we also add 'xattr', 'acl' and the submounts option here, to control
whether those are actually enabled for this particular guest?
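
i.e. something roughly like this in the virtiofs format (untested sketch,
property names are just suggestions):

```
# untested sketch -- possible per-guest override properties for $virtiofs_fmt
my $virtiofs_guest_overrides = {
    'ro' => {
	type => 'boolean',
	description => "Expose the shared directory read-only to this guest.",
	optional => 1,
    },
    'xattr' => {
	type => 'boolean',
	description => "Enable xattr support for this particular guest.",
	optional => 1,
    },
    'acl' => {
	type => 'boolean',
	description => "Enable posix ACL support for this particular guest (implies xattr).",
	optional => 1,
    },
};
```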

> +};
> +PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
> +
>  my $meta_info_fmt = {
>      'ctime' => {
>  	type => 'integer',
> @@ -838,6 +868,7 @@ while (my ($k, $v) = each %$confdesc) {
>  }
>  
>  my $MAX_NETS = 32;
> +my $MAX_VIRTIOFS = 10;
>  my $MAX_SERIAL_PORTS = 4;
>  my $MAX_PARALLEL_PORTS = 3;
>  my $MAX_NUMA = 8;
> @@ -982,6 +1013,21 @@ my $netdesc = {
>  
>  PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
>  
> +my $virtiofsdesc = {
> +    optional => 1,
> +    type => 'string', format => $virtiofs_fmt,
> +    description => "share files between host and guest",
> +};
> +PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
> +
> +sub max_virtiofs {
> +    return $MAX_VIRTIOFS;
> +}
> +
> +for (my $i = 0; $i < $MAX_VIRTIOFS; $i++)  {
> +    $confdesc->{"virtiofs$i"} = $virtiofsdesc;
> +}
> +
>  my $ipconfig_fmt = {
>      ip => {
>  	type => 'string',
> @@ -4100,6 +4146,25 @@ sub config_to_command {
>  	push @$devices, '-device', $netdevicefull;
>      }
>  
> +    my $onevirtiofs = 0;

nit: my $virtiofs_enabled, or something similar ('onevirtiofs' sounds like
there is exactly one virtiofs configured).

> +    for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
> +	my $virtiofsstr = "virtiofs$i";

nit: variable naming could be better here - we usually call this "$opt"
or "$key" ;)

> +
> +	next if !$conf->{$virtiofsstr};
> +	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$virtiofsstr});
> +	next if !$virtiofs;
> +
> +	push @$devices, '-chardev', "socket,id=virtfs$i,path=/var/run/virtiofsd/vm$vmid-fs$i";
> +	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
> +	    .",chardev=virtfs$i,tag=$virtiofs->{tag}";
> +
> +	$onevirtiofs = 1;
> +    }
> +
> +    if ($onevirtiofs && $conf->{hugepages}){
> +	die "hugepages not supported in combination with virtiofs\n";
> +    }
> +
>      if ($conf->{ivshmem}) {
>  	my $ivshmem = parse_property_string($ivshmem_fmt, $conf->{ivshmem});
>  
> @@ -4159,6 +4224,14 @@ sub config_to_command {
>      }
>      push @$machineFlags, "type=${machine_type_min}";
>  
> +    if ($onevirtiofs && !$conf->{numa}) {
> +	# kvm: '-machine memory-backend' and '-numa memdev' properties are
> +	# mutually exclusive
> +	push @$devices, '-object', 'memory-backend-file,id=virtiofs-mem'
> +	    .",size=$conf->{memory}M,mem-path=/dev/shm,share=on";
> +	push @$machineFlags, 'memory-backend=virtiofs-mem';
> +    }
> +
>      push @$cmd, @$devices;
>      push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
>      push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
> @@ -4185,6 +4258,72 @@ sub config_to_command {
>      return wantarray ? ($cmd, $vollist, $spice_port, $pci_devices) : $cmd;
>  }
>  
> +sub start_virtiofs {
> +    my ($vmid, $fsid, $virtiofs) = @_;
> +
> +    my $dir_list = PVE::Mapping::DIR::find_on_current_node($virtiofs->{dirid});
> +
> +    if (!$dir_list || scalar($dir_list->@*) != 1) {
> +	die "virtiofs needs exactly one mapping for this node\n";
> +    }
> +
> +    eval {
> +	PVE::Mapping::DIR::assert_valid($dir_list->[0]);
> +    };
> +    if (my $err = $@) {
> +	die "Directory Mapping invalid: $err\n";
> +    }
> +
> +    my $dir_cfg = $dir_list->[0];
> +    my $path = $dir_cfg->{path};
> +    my $socket_path_root = "/var/run/virtiofsd";
> +    mkdir $socket_path_root;
> +    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
> +    unlink($socket_path);
> +    my $socket = IO::Socket::UNIX->new(
> +	Type => SOCK_STREAM,
> +	Local => $socket_path,
> +	Listen => 1,
> +    ) or die "cannot create socket - $!\n";
> +
> +    my $flags = fcntl($socket, F_GETFD, 0)
> +	or die "failed to get file descriptor flags: $!\n";
> +    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
> +	or die "failed to remove FD_CLOEXEC from file descriptor\n";
> +
> +    my $fd = $socket->fileno();
> +
> +    my $virtiofsd_bin = '/usr/libexec/virtiofsd';
> +
> +    my $pid = fork();
> +    if ($pid == 0) {
> +	for my $fd_loop (3 .. POSIX::sysconf( &POSIX::_SC_OPEN_MAX )) {
> +	    POSIX::close($fd_loop) if ($fd_loop != $fd);
> +	}
> +	my $pid2 = fork();
> +	if ($pid2 == 0) {
> +	    my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
> +	    push @$cmd, '--xattr' if ($dir_cfg->{xattr});
> +	    push @$cmd, '--posix-acl' if ($dir_cfg->{acl});
> +	    push @$cmd, '--announce-submounts' if ($dir_cfg->{submounts});
> +	    push @$cmd, '--allow-direct-io' if ($virtiofs->{'direct-io'});
> +	    push @$cmd, "--cache=$virtiofs->{'cache'}" if ($virtiofs->{'cache'});
> +	    run_command($cmd);
> +	    POSIX::_exit(0);
> +	} elsif (!defined($pid2)) {
> +	    die "could not fork to start virtiofsd\n";
> +	} else {
> +	    POSIX::_exit(0);
> +	}
> +    } elsif (!defined($pid)) {
> +	die "could not fork to start virtiofsd\n";
> +    }
> +
> +    # return socket to keep it alive,
> +    # so that qemu will wait for virtiofsd to start
> +    return $socket;
> +}
> +
>  sub check_rng_source {
>      my ($source) = @_;
>  
> @@ -5740,6 +5879,19 @@ sub vm_start_nolock {
>      my ($cmd, $vollist, $spice_port, $pci_devices) = config_to_command($storecfg, $vmid,
>  	$conf, $defaults, $forcemachine, $forcecpu, $params->{'pbs-backing'});
>  
> +    my @sockets;
> +    for (my $i = 0; $i < $MAX_VIRTIOFS; $i++) {
> +	my $virtiofsstr = "virtiofs$i";
> +
> +	next if !$conf->{$virtiofsstr};
> +	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$virtiofsstr});
> +	next if !$virtiofs;
> +
> +
> +	my $socket = start_virtiofs($vmid, $i, $virtiofs);
> +	push @sockets, $socket;
> +    }
> +
>      my $migration_ip;
>      my $get_migration_ip = sub {
>  	my ($nodename) = @_;
> @@ -6093,6 +6245,11 @@ sub vm_start_nolock {
>  
>      PVE::GuestHelpers::exec_hookscript($conf, $vmid, 'post-start');
>  
> +    foreach my $socket (@sockets) {
> +	shutdown($socket, 2);
> +	close($socket);
> +    }
> +
>      return $res;
>  }
>  
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index 0601dd6..3b58b36 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -278,6 +278,16 @@ sub config {
>  
>      die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
>  
> +    my $onevirtiofs = 0;
> +    for (my $i = 0; $i < PVE::QemuServer::max_virtiofs(); $i++) {
> +	my $virtiofsstr = "virtiofs$i";
> +	next if !$conf->{$virtiofsstr};
> +	my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$virtiofsstr});
> +	if ($virtiofs) {
> +	    $onevirtiofs = 1;
> +	}
> +    }
> +
>      if ($conf->{numa}) {
>  
>  	my $numa_totalmemory = undef;
> @@ -290,7 +300,8 @@ sub config {
>  	    my $numa_memory = $numa->{memory};
>  	    $numa_totalmemory += $numa_memory;
>  
> -	    my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
> +	    my $memdev = $onevirtiofs ? "virtiofs-mem$i" : "ram-node$i";
> +	    my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
>  
>  	    # cpus
>  	    my $cpulists = $numa->{cpus};
> @@ -315,7 +326,7 @@ sub config {
>  	    }
>  
>  	    push @$cmd, '-object', $mem_object;
> -	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> +	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
>  	}
>  
>  	die "total memory for NUMA nodes must be equal to vm static memory\n"
> @@ -329,13 +340,13 @@ sub config {
>  		die "host NUMA node$i doesn't exist\n"
>  		    if !host_numanode_exists($i) && $conf->{hugepages};
>  
> -		my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
> -		push @$cmd, '-object', $mem_object;
> -
>  		my $cpus = ($cores * $i);
>  		$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
>  
> -		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> +		my $memdev = $onevirtiofs ? "virtiofs-mem$i" : "ram-node$i";
> +		my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
> +		push @$cmd, '-object', $mem_object;
> +		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
>  	    }
>  	}
>      }
> @@ -364,6 +375,8 @@ sub print_mem_object {
>  	my $path = hugepages_mount_path($hugepages_size);
>  
>  	return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
> +    } elsif ($id =~ m/^virtiofs-mem/) {
> +	return "memory-backend-file,id=$id,size=${size}M,mem-path=/dev/shm,share=on";
>      } else {
>  	return "memory-backend-ram,id=$id,size=${size}M";
>      }
> diff --git a/debian/control b/debian/control
> index 49f67b2..f008a9b 100644
> --- a/debian/control
> +++ b/debian/control
> @@ -53,6 +53,7 @@ Depends: dbus,
>           socat,
>           swtpm,
>           swtpm-tools,
> +         virtiofsd,
>           ${misc:Depends},
>           ${perl:Depends},
>           ${shlibs:Depends},
> -- 
> 2.39.2
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (12 preceding siblings ...)
  2023-07-18 12:56 ` Friedrich Weber
@ 2023-07-19 12:08 ` Fabian Grünbichler
  2023-07-20  7:12 ` Fabian Grünbichler
  14 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2023-07-19 12:08 UTC (permalink / raw)
  To: Proxmox VE development discussion

high level:

- some indication of which patches require which other patches and/or package
  relation bumps would be nice (e.g., qemu-server -> pve-guest-common
  and pve-manager -> pve-doc-generator)
- the qemu-server unit tests fail for me with the patches applied, but work
  without them. They do work when run as root, so I assume some mocking for
  pmxcfs-related things is missing (rough sketch below)
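
something along these lines in the affected tests might already be enough
(untested sketch):

```
# untested sketch: avoid pmxcfs access by mocking the directory mapping config
use Test::MockModule;

my $mapping_dir_module = Test::MockModule->new("PVE::Mapping::DIR");
$mapping_dir_module->mock(
    config => sub { return { ids => {} }; },
    find_on_current_node => sub { return []; },
);
```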

if I start a VM with `qm start`:

Jul 19 13:57:48 yuna qm[3872900]: <root@pam> starting task UPID:yuna:003B1897:0127A0F1:64B7CFBC:qmstart:901:root@pam:
Jul 19 13:57:48 yuna qm[3872919]: start VM 901: UPID:yuna:003B1897:0127A0F1:64B7CFBC:qmstart:901:root@pam:
Jul 19 13:57:48 yuna systemd[1]: Started 901.scope.
Jul 19 13:57:48 yuna kernel: device tap901i0 entered promiscuous mode
Jul 19 13:57:48 yuna kernel: vmbr10: port 2(tap901i0) entered blocking state
Jul 19 13:57:48 yuna kernel: vmbr10: port 2(tap901i0) entered disabled state
Jul 19 13:57:48 yuna kernel: vmbr10: port 2(tap901i0) entered blocking state
Jul 19 13:57:48 yuna kernel: vmbr10: port 2(tap901i0) entered listening state
Jul 19 13:57:48 yuna QEMU[3872940]: kvm: Unexpected end-of-file before all data were read
Jul 19 13:57:48 yuna qm[3872923]: command '/usr/libexec/virtiofsd '--fd=9' '--shared-dir=/var/lib/apt'' failed: received interrupt
Jul 19 13:57:48 yuna qm[3872900]: <root@pam> end task UPID:yuna:003B1897:0127A0F1:64B7CFBC:qmstart:901:root@pam: OK
Jul 19 13:57:48 yuna sudo[3872855]: pam_unix(sudo:session): session closed for user root
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: Failed to set msg fds.
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: vhost_set_vring_call failed 22
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: Failed to set msg fds.
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: vhost_set_vring_call failed 22
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: Failed to set msg fds.
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: vhost_set_features failed: Invalid argument (22)
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: Error starting vhost: 22
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: Failed to set msg fds.
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: vhost_set_vring_call failed 22
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: Failed to set msg fds.
Jul 19 13:57:59 yuna QEMU[3872940]: kvm: vhost_set_vring_call failed 22

$ qm start 901
[2023-07-19T11:57:48Z INFO  virtiofsd] Waiting for vhost-user socket connection...
[2023-07-19T11:57:48Z INFO  virtiofsd] Client connected, servicing requests

with only the qemu process in the systemd scope for that VM..

if I start via the API, the above issue goes away, but as noted by
Friedrich, the virtiofsd process is not part of the VM scope, but a
child of the start task, and lingers after the VM has been stopped.

On July 6, 2023 12:54 pm, Markus Frank wrote:
> cluster:
> 
> Markus Frank (1):
>   add mapping/dir.cfg for resource mapping
> 
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> 
> guest-common:
> 
> Markus Frank (1):
>   add DIR mapping config
> 
>  src/Makefile           |   1 +
>  src/PVE/Mapping/DIR.pm | 175 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 176 insertions(+)
>  create mode 100644 src/PVE/Mapping/DIR.pm
> 
> 
> qemu-server:
> 
> v6:
>  * added virtiofsd dependency
>  * 2 new patches:
>     * Permission check for virtiofs directory access
>     * check_local_resources: virtiofs
> 
> v5:
>  * allow numa settings with virtio-fs
>  * added direct-io & cache settings
>  * changed to rust implementation of virtiofsd
>  * made double fork and closed all file descriptor so that the lockfile
>  gets released.
> 
> v3:
>  * created own socket and get file descriptor for virtiofsd
>  so there is no race between starting virtiofsd & qemu
>  * added TODO to replace virtiofsd with rust implementation in bookworm
>  (I packaged the rust implementation for bookworm & the C implementation
>  in qemu will be removed in qemu 8.0)
> 
> v2:
>  * replaced sharedfiles_fmt path in qemu-server with dirid:
>  * user can use the dirid to specify the directory without requiring root access
> 
> Markus Frank (3):
>   feature #1027: virtio-fs support
>   Permission check for virtiofs directory access
>   check_local_resources: virtiofs
> 
>  PVE/API2/Qemu.pm         |  18 +++++
>  PVE/QemuServer.pm        | 167 ++++++++++++++++++++++++++++++++++++++-
>  PVE/QemuServer/Memory.pm |  25 ++++--
>  debian/control           |   1 +
>  4 files changed, 204 insertions(+), 7 deletions(-)
> 
> 
> manager:
> 
> v6: completly new except "ui: added options to add virtio-fs to qemu config"
> 
> Markus Frank (5):
>   api: add resource map api endpoints for directories
>   ui: add edit window for dir mappings
>   ui: ResourceMapTree for DIR
>   ui: form: add DIRMapSelector
>   ui: added options to add virtio-fs to qemu config
> 
>  PVE/API2/Cluster/Mapping.pm         |   7 +
>  PVE/API2/Cluster/Mapping/DIR.pm     | 299 ++++++++++++++++++++++++++++
>  PVE/API2/Cluster/Mapping/Makefile   |   3 +-
>  www/manager6/Makefile               |   4 +
>  www/manager6/Utils.js               |   1 +
>  www/manager6/dc/Config.js           |  10 +
>  www/manager6/dc/DIRMapView.js       |  50 +++++
>  www/manager6/form/DIRMapSelector.js |  63 ++++++
>  www/manager6/qemu/HardwareView.js   |  19 ++
>  www/manager6/qemu/VirtiofsEdit.js   | 120 +++++++++++
>  www/manager6/window/DIRMapEdit.js   | 186 +++++++++++++++++
>  11 files changed, 761 insertions(+), 1 deletion(-)
>  create mode 100644 PVE/API2/Cluster/Mapping/DIR.pm
>  create mode 100644 www/manager6/dc/DIRMapView.js
>  create mode 100644 www/manager6/form/DIRMapSelector.js
>  create mode 100644 www/manager6/qemu/VirtiofsEdit.js
>  create mode 100644 www/manager6/window/DIRMapEdit.js
> 
> -- 
> 2.39.2
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH guest-common v6 1/1] add DIR mapping config
  2023-07-06 10:54 ` [pve-devel] [PATCH guest-common v6 1/1] add DIR mapping config Markus Frank
@ 2023-07-19 12:09   ` Fabian Grünbichler
  0 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2023-07-19 12:09 UTC (permalink / raw)
  To: Proxmox VE development discussion

On July 6, 2023 12:54 pm, Markus Frank wrote:
> adds a config file for directories by using a 'map'
> array propertystring for each node mapping.
> 
> next to node & path, there are xattr, acl & submounts parameters for
> virtiofsd in the map array.
> 
> example config:
> ```
> some-dir-id
> 	map node=node1,path=/mnt/share/,xattr=1,acl=1,submounts=1
> 	map node=node2,path=/mnt/share/,xattr=1
> 	map node=node3,path=/mnt/share/,submounts=1
> 	map node=node4,path=/mnt/share/
> ```

does it really make sense to configure a dir with acl (support) on one
node, but without it on another? Same for xattr. For submounts I could kind
of see a use case, although it would be a bit contrived.

E.g., for pci mappings we have the node-specific parts in the "map"
array items, but the "global" mdev boolean property is defined once per
ID, and not for each node.
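
i.e., move xattr/acl up to the ID level and keep only the node-specific parts
in "map", roughly like this (sketch):

```
some-dir-id
	xattr 1
	acl 1
	map node=node1,path=/mnt/share/,submounts=1
	map node=node2,path=/mnt/share/
```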

> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  src/Makefile           |   1 +
>  src/PVE/Mapping/DIR.pm | 175 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 176 insertions(+)
>  create mode 100644 src/PVE/Mapping/DIR.pm
> 
> diff --git a/src/Makefile b/src/Makefile
> index cbc40c1..876829a 100644
> --- a/src/Makefile
> +++ b/src/Makefile
> @@ -17,6 +17,7 @@ install: PVE
>  	install -d ${PERL5DIR}/PVE/Mapping
>  	install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
>  	install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
> +	install -m 0644 PVE/Mapping/DIR.pm ${PERL5DIR}/PVE/Mapping/
>  	install -d ${PERL5DIR}/PVE/VZDump
>  	install -m 0644 PVE/VZDump/Plugin.pm ${PERL5DIR}/PVE/VZDump/
>  	install -m 0644 PVE/VZDump/Common.pm ${PERL5DIR}/PVE/VZDump/
> diff --git a/src/PVE/Mapping/DIR.pm b/src/PVE/Mapping/DIR.pm

nit: why DIR and not Dir? USB and PCI are acronyms, while Dir is just short for
Directory.

> new file mode 100644
> index 0000000..a5da042
> --- /dev/null
> +++ b/src/PVE/Mapping/DIR.pm
> @@ -0,0 +1,175 @@
> +package PVE::Mapping::DIR;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
> +use PVE::SectionConfig;
> +use PVE::INotify;
> +
> +use base qw(PVE::SectionConfig);
> +
> +my $FILENAME = 'mapping/dir.cfg';
> +
> +cfs_register_file($FILENAME,
> +                  sub { __PACKAGE__->parse_config(@_); },
> +                  sub { __PACKAGE__->write_config(@_); });
> +
> +
> +# so we don't have to repeat the type every time
> +sub parse_section_header {
> +    my ($class, $line) = @_;
> +
> +    if ($line =~ m/^(\S+)\s*$/) {
> +	my $id = $1;
> +	my $errmsg = undef; # set if you want to skip whole section
> +	eval { PVE::JSONSchema::pve_verify_configid($id) };
> +	$errmsg = $@ if $@;
> +	my $config = {}; # to return additional attributes
> +	return ('dir', $id, $errmsg, $config);
> +    }
> +    return undef;
> +}
> +
> +sub format_section_header {
> +    my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
> +
> +    return "$sectionId\n";
> +}
> +
> +sub type {
> +    return 'dir';
> +}
> +
> +my $map_fmt = {
> +    node => get_standard_option('pve-node'),
> +    path => {
> +	description => "Directory-path that should be shared with the guest.",
> +	type => 'string',
> +	format => 'pve-storage-path',
> +    },
> +    xattr => {
> +	type => 'boolean',
> +	description => "Enable support for extended attributes (xattrs).",
> +	optional => 1,
> +    },
> +    acl => {
> +	type => 'boolean',
> +	description => "Enable support for posix ACLs (implies --xattr).",
> +	optional => 1,
> +    },
> +    submounts => {
> +	type => 'boolean',
> +	description => "Option to tell the guest which directories are mount points.",
> +	optional => 1,
> +    },
> +    description => {
> +	description => "Description of the node specific directory.",
> +	type => 'string',
> +	optional => 1,
> +	maxLength => 4096,
> +    },
> +};
> +
> +my $defaultData = {
> +    propertyList => {
> +        id => {
> +            type => 'string',
> +            description => "The ID of the directory",
> +            format => 'pve-configid',
> +        },
> +        description => {
> +            description => "Description of the directory",
> +            type => 'string',
> +            optional => 1,
> +            maxLength => 4096,
> +        },
> +        map => {
> +            type => 'array',
> +            description => 'A list of maps for the cluster nodes.',
> +	    optional => 1,
> +            items => {
> +                type => 'string',
> +                format => $map_fmt,
> +            },
> +        },
> +    },
> +};
> +
> +sub private {
> +    return $defaultData;
> +}
> +
> +sub options {
> +    return {
> +        description => { optional => 1 },
> +        map => {},
> +    };
> +}
> +
> +sub assert_valid {
> +    my ($dir_cfg) = @_;
> +
> +    my $path = $dir_cfg->{path};
> +
> +    if (! -e $path) {
> +        die "Path $path does not exist\n";
> +    }
> +    if ((-e $path) && (! -d $path)) {
> +        die "Path $path exists but is not a directory\n"
> +    }
> +
> +    return 1;
> +};
> +
> +sub config {
> +    return cfs_read_file($FILENAME);
> +}
> +
> +sub lock_dir_config {
> +    my ($code, $errmsg) = @_;
> +
> +    cfs_lock_file($FILENAME, undef, $code);
> +    my $err = $@;
> +    if ($err) {
> +        $errmsg ? die "$errmsg: $err" : die $err;
> +    }
> +}
> +
> +sub write_dir_config {
> +    my ($cfg) = @_;
> +
> +    cfs_write_file($FILENAME, $cfg);
> +}
> +
> +sub find_on_current_node {
> +    my ($id) = @_;
> +
> +    my $cfg = config();
> +    my $node = PVE::INotify::nodename();
> +
> +    return get_node_mapping($cfg, $id, $node);
> +}
> +
> +sub get_node_mapping {
> +    my ($cfg, $id, $nodename) = @_;
> +
> +    return undef if !defined($cfg->{ids}->{$id});
> +
> +    my $res = [];
> +    my $mapping_list = $cfg->{ids}->{$id}->{map};
> +    foreach my $map (@{$mapping_list}) {
> +	my $entry = eval { parse_property_string($map_fmt, $map) };
> +	warn $@ if $@;
> +	if ($entry && $entry->{node} eq $nodename) {
> +	    push $res->@*, $entry;
> +	}
> +    }
> +    return $res;
> +}
> +
> +PVE::Mapping::DIR->register();
> +PVE::Mapping::DIR->init();
> +
> +1;
> -- 
> 2.39.2
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 




^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs
  2023-07-06 10:54 [pve-devel] [PATCH cluster/guest-common/qemu-server/manager v6 0/11] virtiofs Markus Frank
                   ` (13 preceding siblings ...)
  2023-07-19 12:08 ` Fabian Grünbichler
@ 2023-07-20  7:12 ` Fabian Grünbichler
  14 siblings, 0 replies; 19+ messages in thread
From: Fabian Grünbichler @ 2023-07-20  7:12 UTC (permalink / raw)
  To: Proxmox VE development discussion

also something that might be mentioned in the docs (via Debian's
qemu/virtiofsd maintainer who forwarded it to me):

https://gitlab.com/virtio-fs/virtiofsd/-/issues/109

the changes w.r.t. memory setup that virtiofsd requires mean no
KSM for virtiofsd-enabled guests!

On July 6, 2023 12:54 pm, Markus Frank wrote:
> cluster:
> 
> Markus Frank (1):
>   add mapping/dir.cfg for resource mapping
> 
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> 
> guest-common:
> 
> Markus Frank (1):
>   add DIR mapping config
> 
>  src/Makefile           |   1 +
>  src/PVE/Mapping/DIR.pm | 175 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 176 insertions(+)
>  create mode 100644 src/PVE/Mapping/DIR.pm
> 
> 
> qemu-server:
> 
> v6:
>  * added virtiofsd dependency
>  * 2 new patches:
>     * Permission check for virtiofs directory access
>     * check_local_resources: virtiofs
> 
> v5:
>  * allow numa settings with virtio-fs
>  * added direct-io & cache settings
>  * changed to rust implementation of virtiofsd
>  * made double fork and closed all file descriptor so that the lockfile
>  gets released.
> 
> v3:
>  * created own socket and get file descriptor for virtiofsd
>  so there is no race between starting virtiofsd & qemu
>  * added TODO to replace virtiofsd with rust implementation in bookworm
>  (I packaged the rust implementation for bookworm & the C implementation
>  in qemu will be removed in qemu 8.0)
> 
> v2:
>  * replaced sharedfiles_fmt path in qemu-server with dirid:
>  * user can use the dirid to specify the directory without requiring root access
> 
> Markus Frank (3):
>   feature #1027: virtio-fs support
>   Permission check for virtiofs directory access
>   check_local_resources: virtiofs
> 
>  PVE/API2/Qemu.pm         |  18 +++++
>  PVE/QemuServer.pm        | 167 ++++++++++++++++++++++++++++++++++++++-
>  PVE/QemuServer/Memory.pm |  25 ++++--
>  debian/control           |   1 +
>  4 files changed, 204 insertions(+), 7 deletions(-)
> 
> 
> manager:
> 
> v6: completly new except "ui: added options to add virtio-fs to qemu config"
> 
> Markus Frank (5):
>   api: add resource map api endpoints for directories
>   ui: add edit window for dir mappings
>   ui: ResourceMapTree for DIR
>   ui: form: add DIRMapSelector
>   ui: added options to add virtio-fs to qemu config
> 
>  PVE/API2/Cluster/Mapping.pm         |   7 +
>  PVE/API2/Cluster/Mapping/DIR.pm     | 299 ++++++++++++++++++++++++++++
>  PVE/API2/Cluster/Mapping/Makefile   |   3 +-
>  www/manager6/Makefile               |   4 +
>  www/manager6/Utils.js               |   1 +
>  www/manager6/dc/Config.js           |  10 +
>  www/manager6/dc/DIRMapView.js       |  50 +++++
>  www/manager6/form/DIRMapSelector.js |  63 ++++++
>  www/manager6/qemu/HardwareView.js   |  19 ++
>  www/manager6/qemu/VirtiofsEdit.js   | 120 +++++++++++
>  www/manager6/window/DIRMapEdit.js   | 186 +++++++++++++++++
>  11 files changed, 761 insertions(+), 1 deletion(-)
>  create mode 100644 PVE/API2/Cluster/Mapping/DIR.pm
>  create mode 100644 www/manager6/dc/DIRMapView.js
>  create mode 100644 www/manager6/form/DIRMapSelector.js
>  create mode 100644 www/manager6/qemu/VirtiofsEdit.js
>  create mode 100644 www/manager6/window/DIRMapEdit.js
> 
> -- 
> 2.39.2
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 




^ permalink raw reply	[flat|nested] 19+ messages in thread
