public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs
@ 2024-05-14 10:54 Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH cluster v10 1/11] add mapping/dir.cfg for resource mapping Markus Frank
                   ` (10 more replies)
  0 siblings, 11 replies; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

Virtio-fs is a shared file system that enables sharing a directory
between host and guest VMs. It takes advantage of the locality of
virtual machines and the hypervisor to get a higher throughput than
the 9p remote file system protocol.
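
For a quick impression of the resulting workflow, a minimal usage flow could
look like the following (mapping ID, node, path and VMID are made up; the
exact syntax is described in the docs patch):

```
# create a cluster-wide directory mapping with a per-node path
pvesh create /cluster/mapping/dir --id share0 --map node=node1,path=/mnt/share
# attach the mapping to a VM as virtiofs0
qm set 100 -virtiofs0 share0
# inside the guest, the mapping ID is used as the mount tag
mount -t virtiofs share0 /mnt/share0
```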

build-order:
1. cluster
2. guest-common
3. docs
4. qemu-server
5. manager

I did not get virtiofsd to run via run_command without it creating
zombie processes after shutdown, so I replaced run_command with exec
for now. Maybe someone can find out why this happens.
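
For reference, the qemu-server patch currently starts virtiofsd with a
double-fork roughly like this (simplified sketch; socket setup, config parsing
and error handling omitted, $fd and $path are placeholders):

```perl
use POSIX qw(setsid);

my $pid = fork();
die "could not fork to start virtiofsd\n" if !defined($pid);
if ($pid == 0) {
    setsid();
    my $pid2 = fork();
    if (defined($pid2) && $pid2 == 0) {
        # grandchild: becomes virtiofsd, inherits the listening socket $fd
        exec('/usr/libexec/virtiofsd', "--fd=$fd", "--shared-dir=$path");
    }
    POSIX::_exit(0); # intermediate child exits immediately
}
waitpid($pid, 0); # reap the intermediate child; virtiofsd is re-parented to init
```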


changes v10:
* rebase to master
* added gui patches again

cluster:

Markus Frank (1):
  add mapping/dir.cfg for resource mapping

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)


guest-common:

Markus Frank (1):
  add dir mapping section config

 src/Makefile           |   1 +
 src/PVE/Mapping/Dir.pm | 205 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 206 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm


docs:

Markus Frank (1):
  add doc section for the shared filesystem virtio-fs

 qm.adoc | 94 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 92 insertions(+), 2 deletions(-)


qemu-server:

Markus Frank (3):
  add virtiofsd as runtime dependency for qemu-server
  fix #1027: virtio-fs support
  migration: check_local_resources for virtiofs

 PVE/API2/Qemu.pm             |  39 ++++++-
 PVE/QemuServer.pm            |  29 ++++-
 PVE/QemuServer/Makefile      |   3 +-
 PVE/QemuServer/Memory.pm     |  34 ++++--
 PVE/QemuServer/Virtiofs.pm   | 212 +++++++++++++++++++++++++++++++++++
 debian/control               |   1 +
 test/MigrationTest/Shared.pm |   7 ++
 7 files changed, 313 insertions(+), 12 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm


manager:

Markus Frank (5):
  api: add resource map api endpoints for directories
  ui: add edit window for dir mappings
  ui: ResourceMapTree for DIR
  ui: form: add DIRMapSelector
  ui: add options to add virtio-fs to qemu config

 PVE/API2/Cluster/Mapping.pm         |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm     | 317 ++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile   |   1 +
 www/manager6/Makefile               |   4 +
 www/manager6/Utils.js               |   1 +
 www/manager6/dc/Config.js           |  10 +
 www/manager6/dc/DirMapView.js       |  50 +++++
 www/manager6/form/DirMapSelector.js |  63 ++++++
 www/manager6/qemu/HardwareView.js   |  19 ++
 www/manager6/qemu/VirtiofsEdit.js   | 137 ++++++++++++
 www/manager6/window/DirMapEdit.js   | 222 +++++++++++++++++++
 11 files changed, 831 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
 create mode 100644 www/manager6/dc/DirMapView.js
 create mode 100644 www/manager6/form/DirMapSelector.js
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js
 create mode 100644 www/manager6/window/DirMapEdit.js

-- 
2.39.2




* [pve-devel] [PATCH cluster v10 1/11] add mapping/dir.cfg for resource mapping
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH guest-common v10 2/11] add dir mapping section config Markus Frank
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

Add it to both the perl side (PVE/Cluster.pm) and pmxcfs side
(status.c).
This dir.cfg is used to map directory IDs to paths on selected hosts.
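
For illustration, an entry in mapping/dir.cfg (using the section config added
in the following guest-common patch) could look like this, with made-up ID,
nodes and paths:

```
some-dir-id
	map node=node1,path=/mnt/share/,submounts=1
	map node=node2,path=/mnt/share/
	xattr 1
	acl 1
```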

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index f899dbe..6b775f8 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -82,6 +82,7 @@ my $observed = {
     'sdn/.running-config' => 1,
     'virtual-guest/cpu-models.conf' => 1,
     'virtual-guest/profiles.cfg' => 1,
+    'mapping/dir.cfg' => 1,
     'mapping/pci.cfg' => 1,
     'mapping/usb.cfg' => 1,
 };
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index dc44464..17cbf61 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -112,6 +112,7 @@ static memdb_change_t memdb_change_array[] = {
 	{ .path = "virtual-guest/cpu-models.conf" },
 	{ .path = "virtual-guest/profiles.cfg" },
 	{ .path = "firewall/cluster.fw" },
+	{ .path = "mapping/dir.cfg" },
 	{ .path = "mapping/pci.cfg" },
 	{ .path = "mapping/usb.cfg" },
 };
-- 
2.39.2




* [pve-devel] [PATCH guest-common v10 2/11] add dir mapping section config
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH cluster v10 1/11] add mapping/dir.cfg for resource mapping Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-06-12 11:50   ` Fiona Ebner
  2024-05-14 10:54 ` [pve-devel] [PATCH docs v10 3/11] add doc section for the shared filesystem virtio-fs Markus Frank
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

Adds a config file for directories by using a 'map' property string for
each node mapping.

Next to node & path, the map property string has an optional submounts
parameter that is used to announce other mounted file systems in the
specified directory.

Additionally, there are default settings for xattr & acl.

example config:
```
some-dir-id
	map node=node1,path=/mnt/share/,submounts=1
	map node=node2,path=/mnt/share/,
	xattr 1
	acl 1
```

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 src/Makefile           |   1 +
 src/PVE/Mapping/Dir.pm | 205 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 206 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm

diff --git a/src/Makefile b/src/Makefile
index cbc40c1..030e7f7 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -15,6 +15,7 @@ install: PVE
 	install -m 0644 PVE/StorageTunnel.pm ${PERL5DIR}/PVE/
 	install -m 0644 PVE/Tunnel.pm ${PERL5DIR}/PVE/
 	install -d ${PERL5DIR}/PVE/Mapping
+	install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
 	install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
 	install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
 	install -d ${PERL5DIR}/PVE/VZDump
diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
new file mode 100644
index 0000000..8f131c2
--- /dev/null
+++ b/src/PVE/Mapping/Dir.pm
@@ -0,0 +1,205 @@
+package PVE::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+use PVE::Storage::Plugin;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+                  sub { __PACKAGE__->parse_config(@_); },
+                  sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+    my ($class, $line) = @_;
+
+    if ($line =~ m/^(\S+)\s*$/) {
+	my $id = $1;
+	my $errmsg = undef; # set if you want to skip whole section
+	eval { PVE::JSONSchema::pve_verify_configid($id) };
+	$errmsg = $@ if $@;
+	my $config = {}; # to return additional attributes
+	return ('dir', $id, $errmsg, $config);
+    }
+    return undef;
+}
+
+sub format_section_header {
+    my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+    return "$sectionId\n";
+}
+
+sub type {
+    return 'dir';
+}
+
+my $map_fmt = {
+    node => get_standard_option('pve-node'),
+    path => {
+	description => "Absolute directory path that should be shared with the guest.",
+	type => 'string',
+	format => 'pve-storage-path',
+    },
+    submounts => {
+	type => 'boolean',
+	description => "Announce that the directory contains other mounted"
+	    ." file systems. If this is not set and multiple file systems are"
+	    ." mounted, the guest may encounter duplicates due to file system"
+	    ." specific inode IDs.",
+	optional => 1,
+	default => 0,
+    },
+    description => {
+	description => "Description of the node specific directory.",
+	type => 'string',
+	optional => 1,
+	maxLength => 4096,
+    },
+};
+
+my $defaultData = {
+    propertyList => {
+	id => {
+	    type => 'string',
+	    description => "The ID of the directory",
+	    format => 'pve-configid',
+	},
+	description => {
+	    description => "Description of the directory",
+	    type => 'string',
+	    optional => 1,
+	    maxLength => 4096,
+	},
+	map => {
+	    type => 'array',
+	    description => 'A list of maps for the cluster nodes.',
+	    optional => 1,
+	    items => {
+		type => 'string',
+		format => $map_fmt,
+	    },
+	},
+	xattr => {
+	    type => 'boolean',
+	    description => "Enable support for extended attributes."
+		." If not supported by Guest OS or file system, this option is"
+		." simply ignored.",
+	    optional => 1,
+	    default => 0,
+	},
+	acl => {
+	    type => 'boolean',
+	    description => "Enable support for POSIX ACLs (implies --xattr)."
+		." The guest OS has to support ACLs. When used in a directory"
+		." with a file system without ACL support, the ACLs are ignored.",
+	    optional => 1,
+	    default => 0,
+	},
+    },
+};
+
+sub private {
+    return $defaultData;
+}
+
+sub map_fmt {
+    return $map_fmt;
+}
+
+sub options {
+    return {
+	description => { optional => 1 },
+	map => {},
+	xattr => { optional => 1 },
+	acl => { optional => 1 },
+    };
+}
+
+sub assert_valid {
+    my ($dir_cfg) = @_;
+
+    my $path = $dir_cfg->{path};
+
+    if (! -e $path) {
+	die "Path $path does not exist\n";
+    } elsif (! -d $path) {
+	die "Path $path exists but is not a directory\n"
+    }
+
+    return 1;
+};
+
+sub check_duplicate {
+    my ($map_list) = @_;
+
+    my %count;
+    for my $map (@$map_list){
+	my $entry = parse_property_string($map_fmt, $map);
+	$count{$entry->{node}}++;
+    }
+    for my $node (keys %count) {
+	if ($count{$node} > 1) {
+	    die "Node '$node' is specified $count{$node} times.\n";
+	}
+    }
+}
+
+sub config {
+    return cfs_read_file($FILENAME);
+}
+
+sub lock_dir_config {
+    my ($code, $errmsg) = @_;
+
+    cfs_lock_file($FILENAME, undef, $code);
+    if (my $err = $@) {
+	$errmsg ? die "$errmsg: $err" : die $err;
+    }
+}
+
+sub write_dir_config {
+    my ($cfg) = @_;
+
+    cfs_write_file($FILENAME, $cfg);
+}
+
+sub find_on_current_node {
+    my ($id) = @_;
+
+    my $cfg = config();
+    my $node = PVE::INotify::nodename();
+
+    return get_node_mapping($cfg, $id, $node);
+}
+
+sub get_node_mapping {
+    my ($cfg, $id, $nodename) = @_;
+
+    return undef if !defined($cfg->{ids}->{$id});
+
+    my $res = [];
+    my $mapping_list = $cfg->{ids}->{$id}->{map};
+    for my $map (@{$mapping_list}) {
+	my $entry = eval { parse_property_string($map_fmt, $map) };
+	warn $@ if $@;
+	if ($entry && $entry->{node} eq $nodename) {
+	    push $res->@*, $entry;
+	}
+    }
+    return $res;
+}
+
+PVE::Mapping::Dir->register();
+PVE::Mapping::Dir->init();
+
+1;
-- 
2.39.2




* [pve-devel] [PATCH docs v10 3/11] add doc section for the shared filesystem virtio-fs
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH cluster v10 1/11] add mapping/dir.cfg for resource mapping Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH guest-common v10 2/11] add dir mapping section config Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-06-12 11:50   ` Fiona Ebner
  2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 4/11] add virtiofsd as runtime dependency for qemu-server Markus Frank
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 qm.adoc | 94 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 92 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index 42c26db..755e20e 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1081,6 +1081,95 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared file system that enables sharing a directory between host
+and guest VM. It takes advantage of the locality of virtual machines and the
+hypervisor to get a higher throughput than the 9p remote file system protocol.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background. In {pve}, this process starts immediately before
+the start of QEMU.
+
+Linux VMs with kernel >=5.4 support this feature by default.
+
+There is a guide available on how to utilize virtio-fs in Windows VMs.
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+Known Limitations
+^^^^^^^^^^^^^^^^^
+
+* Virtiofsd crashing means no recovery until VM is fully stopped and restarted.
+* Virtiofsd not responding may result in NFS-like hanging access in the VM.
+* Memory hotplug does not work in combination with virtio-fs (also results in
+hanging access).
+* Live migration does not work.
+* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
+otherwise the virtio-fs device will not be visible within the VMs.
+
+Add Mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping for a shared directory, either use the API directly with
+`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
+
+----
+pvesh create /cluster/mapping/dir --id dir1 \
+    --map node=node1,path=/path/to/share1 \
+    --map node=node2,path=/path/to/share2,submounts=1 \
+    --xattr 1 \
+    --acl 1
+----
+
+The `acl` parameter automatically implies `xattr`, that is, it makes no
+difference whether you set `xattr` to `0` if `acl` is set to `1`.
+
+Set `submounts` to `1` when multiple file systems are mounted in a shared
+directory to prevent the guest from creating duplicates because of file system
+specific inode IDs that get passed through.
+
+
+Add virtio-fs to a VM
+^^^^^^^^^^^^^^^^^^^^^
+
+To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
+anything between 0 and 9) to the VM config and use a directory ID (dirid) that
+has been configured in the resource mapping. Additionally, you can set the
+`cache` option to either `always`, `never`, or `auto` (default: `auto`),
+depending on your requirements. How the different caching modes behave can be
+read at https://lwn.net/Articles/774495/ under the title "Caching Modes". To
+enable writeback cache set `writeback` to `1`.
+
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
+parameter to `1` (default: `0`). This will degrade performance, but is useful if
+applications do their own caching.
+
+Additionally, it is possible to overwrite the default mapping settings for
+`xattr` and `acl` by setting them to either `1` or `0`. The `acl` parameter
+automatically implies `xattr`, that is, it makes no difference whether you set
+`xattr` to `0` if `acl` is set to `1`.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,cache=never,xattr=1
+qm set <vmid> -virtiofs2 <dirid>,acl=1,writeback=1
+----
+
+To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
+following command inside the guest:
+
+----
+mount -t virtiofs <mount tag> <mount point>
+----
+
+The dirid associated with the path on the current node is also used as the mount
+tag (name used to mount the device on the guest).
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
 [[qm_bootorder]]
 Device Boot Order
 ~~~~~~~~~~~~~~~~~
@@ -1743,8 +1832,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
 
 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
 
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
 
 Note that the options must include a map property with all identifying
 properties of that hardware, so that it's possible to verify the hardware did
-- 
2.39.2




* [pve-devel] [PATCH qemu-server v10 4/11] add virtiofsd as runtime dependency for qemu-server
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
                   ` (2 preceding siblings ...)
  2024-05-14 10:54 ` [pve-devel] [PATCH docs v10 3/11] add doc section for the shared filesystem virtio-fs Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-06-12 11:50   ` Fiona Ebner
  2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 5/11] fix #1027: virtio-fs support Markus Frank
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 1301a36..8e4ca7f 100644
--- a/debian/control
+++ b/debian/control
@@ -55,6 +55,7 @@ Depends: dbus,
          socat,
          swtpm,
          swtpm-tools,
+         virtiofsd,
          ${misc:Depends},
          ${perl:Depends},
          ${shlibs:Depends},
-- 
2.39.2




* [pve-devel] [PATCH qemu-server v10 5/11] fix #1027: virtio-fs support
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
                   ` (3 preceding siblings ...)
  2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 4/11] add virtiofsd as runtime dependency for qemu-server Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-06-12 11:50   ` Fiona Ebner
  2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 6/11] migration: check_local_resources for virtiofs Markus Frank
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

add support for sharing directories with a guest vm

virtio-fs needs virtiofsd to be started.
In order to start virtiofsd as a process (despite being a daemon, it does
not run in the background), a double-fork is used.

virtiofsd should exit together with QEMU.

There is the dirid parameter as well as the optional parameters direct-io,
cache and writeback. Additionally, the xattr & acl parameters overwrite the
directory mapping settings for xattr & acl.

The dirid gets mapped to the path on the current node and is also used as a
mount tag (name used to mount the device on the guest).

example config:
```
virtiofs0: foo,direct-io=1,cache=always,acl=1
virtiofs1: dirid=bar,cache=never,xattr=1,writeback=1
```

For information on the optional parameters, see the corresponding doc patch
and the official GitLab README:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

Also add a permission check for virtiofs directory access.
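
For a single virtiofs0 entry, the QEMU arguments generated by this patch look
roughly like this (socket path and mount tag depend on VMID and dirid; the
memfd memory backend lines are for the non-NUMA case):

```
-chardev socket,id=virtfs0,path=/run/qemu-server/virtiofsd/vm<vmid>-fs0
-device vhost-user-fs-pci,queue-size=1024,chardev=virtfs0,tag=<dirid>
-object memory-backend-memfd,id=virtiofs-mem,size=<memory>M,share=on
-machine <machine flags>,memory-backend=virtiofs-mem
```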

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/API2/Qemu.pm           |  39 ++++++-
 PVE/QemuServer.pm          |  19 +++-
 PVE/QemuServer/Makefile    |   3 +-
 PVE/QemuServer/Memory.pm   |  34 ++++--
 PVE/QemuServer/Virtiofs.pm | 212 +++++++++++++++++++++++++++++++++++++
 5 files changed, 296 insertions(+), 11 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 2a349c8..5d97896 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -695,6 +695,32 @@ my sub check_vm_create_hostpci_perm {
     return 1;
 };
 
+my sub check_dir_perm {
+    my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
+
+    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
+    $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+
+    return 1;
+};
+
+my sub check_vm_create_dir_perm {
+    my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    for my $opt (keys %{$param}) {
+	next if $opt !~ m/^virtiofs\d+$/;
+	check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
+    }
+
+    return 1;
+};
+
 my $check_vm_modify_config_perm = sub {
     my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
 
@@ -705,7 +731,7 @@ my $check_vm_modify_config_perm = sub {
 	# else, as there the permission can be value dependend
 	next if PVE::QemuServer::is_valid_drivename($opt);
 	next if $opt eq 'cdrom';
-	next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
+	next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
 	next if $opt eq 'tags';
 
 
@@ -999,6 +1025,7 @@ __PACKAGE__->register_method({
 	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
+	    check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
 
 	    PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
 	    &$check_cpu_model_access($rpcenv, $authuser, $param);
@@ -1899,6 +1926,10 @@ my $update_vm_api  = sub {
 		    check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
 		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
 		    PVE::QemuConfig->write_config($vmid, $conf);
+		} elsif ($opt =~ m/^virtiofs\d$/) {
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
+		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
+		    PVE::QemuConfig->write_config($vmid, $conf);
 		} elsif ($opt eq 'tags') {
 		    assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
 		    delete $conf->{$opt};
@@ -1987,6 +2018,12 @@ my $update_vm_api  = sub {
 		    }
 		    check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
 		    $conf->{pending}->{$opt} = $param->{$opt};
+		} elsif ($opt =~ m/^virtiofs\d$/) {
+		    if (my $oldvalue = $conf->{$opt}) {
+			check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
+		    }
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
+		    $conf->{pending}->{$opt} = $param->{$opt};
 		} elsif ($opt eq 'tags') {
 		    assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
 		    $conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9032d29..a0600d7 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -35,6 +35,7 @@ use PVE::Exception qw(raise raise_param_exc);
 use PVE::Format qw(render_duration render_bytes);
 use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
 use PVE::HA::Config;
+use PVE::Mapping::Dir;
 use PVE::Mapping::PCI;
 use PVE::Mapping::USB;
 use PVE::INotify;
@@ -61,6 +62,7 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
 use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
 use PVE::QemuServer::USB;
+use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
 
 my $have_sdn;
 eval {
@@ -956,6 +958,10 @@ my $netdesc = {
 
 PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
 
+for (my $i = 0; $i < max_virtiofs(); $i++)  {
+    $confdesc->{"virtiofs$i"} = get_standard_option('pve-qm-virtiofs');
+}
+
 my $ipconfig_fmt = {
     ip => {
 	type => 'string',
@@ -3807,7 +3813,7 @@ sub config_to_command {
     }
 
     PVE::QemuServer::Memory::config(
-	$conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd);
+	$conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd, $machineFlags);
 
     push @$cmd, '-S' if $conf->{freeze};
 
@@ -4094,6 +4100,8 @@ sub config_to_command {
 	}
     }
 
+    PVE::QemuServer::Virtiofs::config($conf, $vmid, $devices, $machineFlags);
+
     push @$cmd, @$devices;
     push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
     push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
@@ -5880,6 +5888,8 @@ sub vm_start_nolock {
 	PVE::Tools::run_fork sub {
 	    PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
 
+	    my $virtiofs_sockets = start_all_virtiofsd($conf, $vmid);
+
 	    my $tpmpid;
 	    if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
 		# start the TPM emulator so QEMU can connect on start
@@ -5892,8 +5902,10 @@ sub vm_start_nolock {
 		    warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
 		    kill 'TERM', $tpmpid;
 		}
+		PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets);
 		die "QEMU exited with code $exitcode\n";
 	    }
+	    PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets);
 	};
     };
 
@@ -6527,7 +6539,10 @@ sub check_mapping_access {
 	    } else {
 		die "either 'host' or 'mapping' must be set.\n";
 	    }
-       }
+	} elsif ($opt =~ m/^virtiofs\d$/) {
+	    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	    $rpcenv->check_full($user, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+	}
    }
 };
 
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index ac26e56..d1bf8bb 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -11,7 +11,8 @@ SOURCES=PCI.pm		\
 	CPUConfig.pm	\
 	CGroup.pm	\
 	Drive.pm	\
-	QMPHelpers.pm
+	QMPHelpers.pm	\
+	Virtiofs.pm
 
 .PHONY: install
 install: ${SOURCES}
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index f365f2d..691f9b3 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -10,6 +10,7 @@ use PVE::Exception qw(raise raise_param_exc);
 use PVE::QemuServer::Helpers qw(parse_number_sets);
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::QMPHelpers qw(qemu_devicedel qemu_objectdel);
+use PVE::QemuServer::Virtiofs;
 
 use base qw(Exporter);
 
@@ -336,7 +337,7 @@ sub qemu_memdevices_list {
 }
 
 sub config {
-    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
+    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags) = @_;
 
     my $memory = get_current_memory($conf->{memory});
     my $static_memory = 0;
@@ -367,6 +368,16 @@ sub config {
 
     die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
 
+    my $virtiofs_enabled = 0;
+    for (my $i = 0; $i < PVE::QemuServer::Virtiofs::max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+	next if !$conf->{$opt};
+	my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	if ($virtiofs) {
+	    $virtiofs_enabled = 1;
+	}
+    }
+
     if ($conf->{numa}) {
 
 	my $numa_totalmemory = undef;
@@ -379,7 +390,8 @@ sub config {
 	    my $numa_memory = $numa->{memory};
 	    $numa_totalmemory += $numa_memory;
 
-	    my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
+	    my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+	    my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
 
 	    # cpus
 	    my $cpulists = $numa->{cpus};
@@ -404,7 +416,7 @@ sub config {
 	    }
 
 	    push @$cmd, '-object', $mem_object;
-	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	}
 
 	die "total memory for NUMA nodes must be equal to vm static memory\n"
@@ -418,15 +430,21 @@ sub config {
 		die "host NUMA node$i doesn't exist\n"
 		    if !host_numanode_exists($i) && $conf->{hugepages};
 
-		my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
-		push @$cmd, '-object', $mem_object;
-
 		my $cpus = ($cores * $i);
 		$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
 
-		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+		my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+		my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
+		push @$cmd, '-object', $mem_object;
+		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	    }
 	}
+    } elsif ($virtiofs_enabled) {
+	# kvm: '-machine memory-backend' and '-numa memdev' properties are
+	# mutually exclusive
+	push @$cmd, '-object', 'memory-backend-memfd,id=virtiofs-mem'
+	    .",size=$conf->{memory}M,share=on";
+	push @$machine_flags, 'memory-backend=virtiofs-mem';
     }
 
     if ($hotplug) {
@@ -453,6 +471,8 @@ sub print_mem_object {
 	my $path = hugepages_mount_path($hugepages_size);
 
 	return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
+    } elsif ($id =~ m/^virtiofs-mem/) {
+	return "memory-backend-memfd,id=$id,size=${size}M,share=on";
     } else {
 	return "memory-backend-ram,id=$id,size=${size}M";
     }
diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
new file mode 100644
index 0000000..4c59348
--- /dev/null
+++ b/PVE/QemuServer/Virtiofs.pm
@@ -0,0 +1,212 @@
+package PVE::QemuServer::Virtiofs;
+
+use strict;
+use warnings;
+
+use Fcntl;
+use IO::Socket::UNIX;
+use POSIX;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::Mapping::Dir;
+
+use base qw(Exporter);
+
+our @EXPORT_OK = qw(
+max_virtiofs
+assert_virtiofs_config
+start_virtiofsd
+start_all_virtiofsd
+virtiofs_qemu_param
+close_sockets
+);
+
+my $MAX_VIRTIOFS = 10;
+my $socket_path_root = "/run/qemu-server/virtiofsd";
+
+my $virtiofs_fmt = {
+    'dirid' => {
+	type => 'string',
+	default_key => 1,
+	description => "Mapping identifier of the directory mapping to be"
+	    ." shared with the guest. Also used as a mount tag inside the VM.",
+	format_description => 'mapping-id',
+	format => 'pve-configid',
+    },
+    'cache' => {
+	type => 'string',
+	description => "The caching policy the file system should use (auto, always, never).",
+	enum => [qw(auto always never)],
+	default => "auto",
+	optional => 1,
+    },
+    'direct-io' => {
+	type => 'boolean',
+	description => "Honor the O_DIRECT flag passed down by guest applications.",
+	default => 0,
+	optional => 1,
+    },
+    writeback => {
+	type => 'boolean',
+	description => "Enable writeback cache. If enabled, writes may be cached"
+	    ." in the guest until the file is closed or an fsync is performed.",
+	default => 0,
+	optional => 1,
+    },
+    xattr => {
+	type => 'boolean',
+	description => "Overwrite the xattr option from mapping and explicitly"
+	    ." enable/disable support for extended attributes for the VM.",
+	default => "use value from mapping",
+	optional => 1,
+    },
+    acl => {
+	type => 'boolean',
+	description => "Overwrite the acl option from mapping and explicitly"
+	    ." enable/disable support for posix ACLs (enabled acl implies xattr)"
+	    ." for the VM.",
+	default => "use value from mapping",
+	optional => 1,
+    },
+};
+PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
+
+my $virtiofsdesc = {
+    optional => 1,
+    type => 'string', format => $virtiofs_fmt,
+    description => "share a directory between host and guest",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
+
+sub max_virtiofs {
+    return $MAX_VIRTIOFS;
+}
+
+sub assert_virtiofs_config {
+    my ($conf, $virtiofs) = @_;
+    my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
+    my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+    my $acl = $virtiofs->{'acl'} // $dir_cfg->{'acl'};
+    if ($acl && PVE::QemuServer::windows_version($conf->{ostype})) {
+	log_warn(
+	    "Please disable ACLs for virtiofs on Windows VMs, otherwise"
+	    ." the virtiofs shared directory cannot be mounted."
+	);
+    }
+
+    if (!$node_list || scalar($node_list->@*) != 1) {
+	die "virtiofs needs exactly one mapping for this node\n";
+    }
+
+	eval { PVE::Mapping::Dir::assert_valid($node_list->[0]) };
+    if (my $err = $@) {
+	die "Directory Mapping invalid: $err\n";
+    }
+}
+
+sub config {
+    my ($conf, $vmid, $devices, $machine_flags) = @_;
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+
+	next if !$conf->{$opt};
+	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	next if !$virtiofs;
+
+	assert_virtiofs_config($conf, $virtiofs);
+
+	push @$devices, '-chardev', "socket,id=virtfs$i,path=$socket_path_root/vm$vmid-fs$i";
+
+	# queue-size is set 1024 because of bug with Windows guests:
+	# https://bugzilla.redhat.com/show_bug.cgi?id=1873088
+	# 1024 is also always used in the virtiofs documentations:
+	# https://gitlab.com/virtio-fs/virtiofsd#examples
+	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+	    .",chardev=virtfs$i,tag=$virtiofs->{dirid}";
+    }
+}
+
+sub start_all_virtiofsd {
+    my ($conf, $vmid) = @_;
+    my $virtiofs_sockets = [];
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+
+	next if !$conf->{$opt};
+	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	next if !$virtiofs;
+
+	my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
+	push @$virtiofs_sockets, $virtiofs_socket;
+    }
+    return $virtiofs_sockets;
+}
+
+sub close_sockets {
+    my @sockets = @_;
+    for my $socket (@sockets) {
+	shutdown($socket, 2);
+	close($socket);
+    }
+}
+
+sub start_virtiofsd {
+    my ($vmid, $fsid, $virtiofs) = @_;
+
+    my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
+    my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+    # Default to dir config xattr & acl settings
+    my $xattr = $virtiofs->{xattr} // $dir_cfg->{xattr};
+    my $acl = $virtiofs->{'acl'} // $dir_cfg->{'acl'};
+
+    my $node_cfg = $node_list->[0];
+    my $path = $node_cfg->{path};
+    mkdir $socket_path_root;
+    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
+    unlink($socket_path);
+    my $socket = IO::Socket::UNIX->new(
+	Type => SOCK_STREAM,
+	Local => $socket_path,
+	Listen => 1,
+    ) or die "cannot create socket - $!\n";
+
+    my $flags = fcntl($socket, F_GETFD, 0)
+	or die "failed to get file descriptor flags: $!\n";
+    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
+	or die "failed to remove FD_CLOEXEC from file descriptor\n";
+
+    my $fd = $socket->fileno();
+
+    my $virtiofsd_bin = '/usr/libexec/virtiofsd';
+
+    my $pid = fork();
+    if ($pid == 0) {
+	setsid();
+	$0 = "task pve-vm$vmid-virtiofs$fsid";
+	my $pid2 = fork();
+	if ($pid2 == 0) {
+	    my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
+	    push @$cmd, '--xattr' if $xattr;
+	    push @$cmd, '--posix-acl' if $acl;
+	    push @$cmd, '--announce-submounts' if ($node_cfg->{submounts});
+	    push @$cmd, '--allow-direct-io' if ($virtiofs->{'direct-io'});
+	    push @$cmd, "--cache=$virtiofs->{'cache'}" if ($virtiofs->{'cache'});
+	    push @$cmd, "--writeback" if ($virtiofs->{'writeback'});
+	    push @$cmd, '--syslog';
+	    exec(@$cmd);
+	} elsif (!defined($pid2)) {
+	    die "could not fork to start virtiofsd\n";
+	} else {
+	    POSIX::_exit(0);
+	}
+    } elsif (!defined($pid)) {
+	die "could not fork to start virtiofsd\n";
+    } else {
+	waitpid($pid, 0);
+    }
+
+    # return socket to keep it alive,
+    # so that QEMU will wait for virtiofsd to start
+    return $socket;
+}
-- 
2.39.2




* [pve-devel] [PATCH qemu-server v10 6/11] migration: check_local_resources for virtiofs
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
                   ` (4 preceding siblings ...)
  2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 5/11] fix #1027: virtio-fs support Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 07/11] api: add resource map api endpoints for directories Markus Frank
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

add dir mapping checks to check_local_resources

Since the VM needs to be powered off for migration, migration should
work with a directory on shared storage regardless of the caching setting.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/QemuServer.pm            | 10 +++++++++-
 test/MigrationTest/Shared.pm |  7 +++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a0600d7..ee7699f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2573,6 +2573,7 @@ sub check_local_resources {
     my $nodelist = PVE::Cluster::get_nodelist();
     my $pci_map = PVE::Mapping::PCI::config();
     my $usb_map = PVE::Mapping::USB::config();
+    my $dir_map = PVE::Mapping::Dir::config();
 
     my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
 
@@ -2584,6 +2585,8 @@ sub check_local_resources {
 		$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
 	    } elsif ($type eq 'usb') {
 		$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+	    } elsif ($type eq 'dir') {
+		$entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
 	    }
 	    if (!scalar($entry->@*)) {
 		push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2612,9 +2615,14 @@ sub check_local_resources {
 		push @$mapped_res, $k;
 	    }
 	}
+	if ($k =~ m/^virtiofs/) {
+	    my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+	    $add_missing_mapping->('dir', $k, $entry->{dirid});
+	    push @$mapped_res, $k;
+	}
 	# sockets are safe: they will recreated be on the target side post-migrate
 	next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
-	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
     }
 
     die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d..c5d0722 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
     },
 );
 
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+    config => sub {
+	return {};
+    },
+);
+
 our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
 $ha_config_module->mock(
     vm_is_ha_managed => sub {
-- 
2.39.2




* [pve-devel] [PATCH manager v10 07/11] api: add resource map api endpoints for directories
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
                   ` (5 preceding siblings ...)
  2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 6/11] migration: check_local_resources for virtiofs Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 08/11] ui: add edit window for dir mappings Markus Frank
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel
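
With these endpoints in place, directory mappings can for example be managed
via pvesh (illustrative ID and path; the create syntax matches the docs patch):

```
pvesh create /cluster/mapping/dir --id dir1 --map node=node1,path=/mnt/share
pvesh get /cluster/mapping/dir --check-node node1
pvesh set /cluster/mapping/dir/dir1 --xattr 1
pvesh delete /cluster/mapping/dir/dir1
```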

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 PVE/API2/Cluster/Mapping.pm       |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 317 ++++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile |   1 +
 3 files changed, 325 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm

diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..9f0dcd2b 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -3,11 +3,17 @@ package PVE::API2::Cluster::Mapping;
 use strict;
 use warnings;
 
+use PVE::API2::Cluster::Mapping::Dir;
 use PVE::API2::Cluster::Mapping::PCI;
 use PVE::API2::Cluster::Mapping::USB;
 
 use base qw(PVE::RESTHandler);
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Cluster::Mapping::Dir",
+    path => 'dir',
+});
+
 __PACKAGE__->register_method ({
     subclass => "PVE::API2::Cluster::Mapping::PCI",
     path => 'pci',
@@ -41,6 +47,7 @@ __PACKAGE__->register_method ({
 	my ($param) = @_;
 
 	my $result = [
+	    { name => 'dir' },
 	    { name => 'pci' },
 	    { name => 'usb' },
 	];
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index 00000000..ddb6977d
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,317 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::Mapping::Dir ();
+use PVE::RPCEnvironment;
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+    name => 'index',
+    path => '',
+    method => 'GET',
+    # only proxy if we give the 'check-node' parameter
+    proxyto_callback => sub {
+	my ($rpcenv, $proxyto, $param) = @_;
+	return $param->{'check-node'} // 'localhost';
+    },
+    description => "List directory mapping",
+    permissions => {
+	description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
+	    " 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
+	user => 'all',
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    'check-node' => get_standard_option('pve-node', {
+		description => "If given, checks the configurations on the given node for ".
+		    "correctness, and adds relevant diagnostics for the directory to the response.",
+		optional => 1,
+	    }),
+	},
+    },
+    returns => {
+	type => 'array',
+	items => {
+	    type => "object",
+	    properties => {
+		id => {
+		    type => 'string',
+		    description => "The logical ID of the mapping."
+		},
+		map => {
+		    type => 'array',
+		    description => "The entries of the mapping.",
+		    items => {
+			type => 'string',
+			description => "A mapping for a node.",
+		    },
+		},
+		description => {
+		    type => 'string',
+		    description => "A description of the logical mapping.",
+		},
+		xattr => {
+		    type => 'boolean',
+		    description => "Enable support for extended attributes.",
+		    optional => 1,
+		},
+		acl => {
+		    type => 'boolean',
+		    description => "Enable support for posix ACLs (implies --xattr).",
+		    optional => 1,
+		},
+		checks => {
+		    type => "array",
+		    optional => 1,
+		    description => "A list of checks, only present if 'check-node' is set.",
+		    items => {
+			type => 'object',
+			properties => {
+			    severity => {
+				type => "string",
+				enum => ['warning', 'error'],
+				description => "The severity of the error",
+			    },
+			    message => {
+				type => "string",
+				description => "The message of the error",
+			    },
+			},
+		    }
+		},
+	    },
+	},
+	links => [ { rel => 'child', href => "{id}" } ],
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $check_node = $param->{'check-node'};
+	my $local_node = PVE::INotify::nodename();
+
+	die "wrong node to check - $check_node != $local_node\n"
+	    if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
+
+	my $cfg = PVE::Mapping::Dir::config();
+
+	my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
+
+	my $res = [];
+	for my $id (keys $cfg->{ids}->%*) {
+	    next if !$rpcenv->check_any($authuser, "/mapping/dir/$id", $can_see_mapping_privs, 1);
+	    next if !$cfg->{ids}->{$id};
+
+	    my $entry = dclone($cfg->{ids}->{$id});
+	    $entry->{id} = $id;
+	    $entry->{digest} = $cfg->{digest};
+
+	    if (defined($check_node)) {
+		$entry->{checks} = [];
+		if (my $mappings = PVE::Mapping::Dir::get_node_mapping($cfg, $id, $check_node)) {
+		    if (!scalar($mappings->@*)) {
+			push $entry->{checks}->@*, {
+			    severity => 'warning',
+			    message => "No mapping for node $check_node.",
+			};
+		    }
+		    for my $mapping ($mappings->@*) {
+			eval { PVE::Mapping::Dir::assert_valid($mapping) };
+			if (my $err = $@) {
+			    push $entry->{checks}->@*, {
+				severity => 'error',
+				message => "Invalid configuration: $err",
+			    };
+			}
+		    }
+		}
+	    }
+
+	    push @$res, $entry;
+	}
+
+	return $res;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'get',
+    protected => 1,
+    path => '{id}',
+    method => 'GET',
+    description => "Get directory mapping.",
+    permissions => {
+	check =>['or',
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Use']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Audit']],
+	],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'object' },
+    code => sub {
+	my ($param) = @_;
+
+	my $cfg = PVE::Mapping::Dir::config();
+	my $id = $param->{id};
+
+	my $entry = $cfg->{ids}->{$id};
+	die "mapping '$param->{id}' not found\n" if !defined($entry);
+
+	my $data = dclone($entry);
+
+	$data->{digest} = $cfg->{digest};
+
+	return $data;
+    }});
+
+__PACKAGE__->register_method ({
+    name => 'create',
+    protected => 1,
+    path => '',
+    method => 'POST',
+    description => "Create a new directory mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::Dir->createSchema(1),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = extract_param($param, 'id');
+
+	my $plugin = PVE::Mapping::Dir->lookup('dir');
+	my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	my $map_list = $opts->{map};
+	PVE::Mapping::Dir::check_duplicate($map_list);
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    die "dir ID '$id' already defined\n" if defined($cfg->{ids}->{$id});
+
+	    $cfg->{ids}->{$id} = $opts;
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "create directory mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'update',
+    protected => 1,
+    path => '{id}',
+    method => 'PUT',
+    description => "Update a directory mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::Dir->updateSchema(),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $digest = extract_param($param, 'digest');
+	my $delete = extract_param($param, 'delete');
+	my $id = extract_param($param, 'id');
+
+	if ($delete) {
+	    $delete = [ PVE::Tools::split_list($delete) ];
+	}
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    PVE::Tools::assert_if_modified($cfg->{digest}, $digest) if defined($digest);
+
+	    die "dir ID '$id' does not exist\n" if !defined($cfg->{ids}->{$id});
+
+	    my $plugin = PVE::Mapping::Dir->lookup('dir');
+	    my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	    my $map_list = $opts->{map};
+	    PVE::Mapping::Dir::check_duplicate($map_list);
+
+	    my $data = $cfg->{ids}->{$id};
+
+	    my $options = $plugin->private()->{options}->{dir};
+	    PVE::SectionConfig::delete_from_config($data, $options, $opts, $delete);
+
+	    $data->{$_} = $opts->{$_} for keys $opts->%*;
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "update directory mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'delete',
+    protected => 1,
+    path => '{id}',
+    method => 'DELETE',
+    description => "Remove directory mapping.",
+    permissions => {
+	check => [ 'perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'null' },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = $param->{id};
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    if ($cfg->{ids}->{$id}) {
+		delete $cfg->{ids}->{$id};
+	    }
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "delete dir mapping failed");
+
+	return;
+    }
+});
+
+1;
diff --git a/PVE/API2/Cluster/Mapping/Makefile b/PVE/API2/Cluster/Mapping/Makefile
index e7345ab4..5dbb3f5c 100644
--- a/PVE/API2/Cluster/Mapping/Makefile
+++ b/PVE/API2/Cluster/Mapping/Makefile
@@ -3,6 +3,7 @@ include ../../../../defines.mk
 # for node independent, cluster-wide applicable, API endpoints
 # ensure we do not conflict with files shipped by pve-cluster!!
 PERLSOURCE= 	\
+	Dir.pm  \
 	PCI.pm	\
 	USB.pm
 
-- 
2.39.2




* [pve-devel] [PATCH manager v10 08/11] ui: add edit window for dir mappings
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
                   ` (6 preceding siblings ...)
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 07/11] api: add resource map api endpoints for directories Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 09/11] ui: ResourceMapTree for DIR Markus Frank
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile             |   1 +
 www/manager6/window/DirMapEdit.js | 222 ++++++++++++++++++++++++++++++
 2 files changed, 223 insertions(+)
 create mode 100644 www/manager6/window/DirMapEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 2c3a822b..f6140562 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -137,6 +137,7 @@ JSSRC= 							\
 	window/TreeSettingsEdit.js			\
 	window/PCIMapEdit.js				\
 	window/USBMapEdit.js				\
+	window/DirMapEdit.js                            \
 	window/GuestImport.js				\
 	ha/Fencing.js					\
 	ha/GroupEdit.js					\
diff --git a/www/manager6/window/DirMapEdit.js b/www/manager6/window/DirMapEdit.js
new file mode 100644
index 00000000..cda5824b
--- /dev/null
+++ b/www/manager6/window/DirMapEdit.js
@@ -0,0 +1,222 @@
+Ext.define('PVE.window.DirMapEditWindow', {
+    extend: 'Proxmox.window.Edit',
+
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    cbindData: function(initialConfig) {
+	let me = this;
+	me.isCreate = !me.name;
+	me.method = me.isCreate ? 'POST' : 'PUT';
+	me.hideMapping = !!me.entryOnly;
+	me.hideComment = me.name && !me.entryOnly;
+	me.hideNodeSelector = me.nodename || me.entryOnly;
+	me.hideNode = !me.nodename || !me.hideNodeSelector;
+	return {
+	    name: me.name,
+	    nodename: me.nodename,
+	};
+    },
+
+    submitUrl: function(_url, data) {
+	let me = this;
+	let name = me.isCreate ? '' : me.name;
+	return `/cluster/mapping/dir/${name}`;
+    },
+
+    title: gettext('Add Dir mapping'),
+
+    onlineHelp: 'resource_mapping',
+
+    method: 'POST',
+
+    controller: {
+	xclass: 'Ext.app.ViewController',
+
+	onGetValues: function(values) {
+	    let me = this;
+	    let view = me.getView();
+	    values.node ??= view.nodename;
+
+	    let name = values.name;
+	    let description = values.description;
+	    let xattr = values.xattr;
+	    let acl = values.acl;
+	    let deletes = values.delete;
+
+	    delete values.description;
+	    delete values.name;
+	    delete values.xattr;
+	    delete values.acl;
+
+	    let map = [];
+	    if (me.originalMap) {
+		map = PVE.Parser.filterPropertyStringList(me.originalMap, (e) => e.node !== values.node);
+	    }
+	    if (values.path) {
+		map.push(PVE.Parser.printPropertyString(values));
+	    }
+
+	    values = { map };
+	    if (description) {
+		values.description = description;
+	    }
+	    if (xattr) {
+		values.xattr = xattr;
+	    }
+	    if (acl) {
+		values.acl = acl;
+	    }
+	    if (deletes) {
+		values.delete = deletes;
+	    }
+
+	    if (view.isCreate) {
+		values.id = name;
+	    }
+	    return values;
+	},
+
+	onSetValues: function(values) {
+	    let me = this;
+	    let view = me.getView();
+	    me.originalMap = [...values.map];
+	    let configuredNodes = [];
+	    PVE.Parser.filterPropertyStringList(values.map, (e) => {
+		configuredNodes.push(e.node);
+		if (e.node === view.nodename) {
+		    values = e;
+		}
+		return false;
+	    });
+
+	    me.lookup('nodeselector').disallowedNodes = configuredNodes;
+
+	    return values;
+	},
+
+	init: function(view) {
+	    let me = this;
+
+	    if (!view.nodename) {
+		//throw "no nodename given";
+	    }
+	},
+    },
+
+    items: [
+	{
+	    xtype: 'inputpanel',
+	    onGetValues: function(values) {
+		return this.up('window').getController().onGetValues(values);
+	    },
+
+	    onSetValues: function(values) {
+		return this.up('window').getController().onSetValues(values);
+	    },
+
+	    columnT: [
+		{
+		    xtype: 'displayfield',
+		    reference: 'directory-hint',
+		    columnWidth: 1,
+		    value: 'Make sure the directory exists.',
+		    cbind: {
+			disabled: '{hideMapping}',
+			hidden: '{hideMapping}',
+		    },
+		    userCls: 'pmx-hint',
+		},
+	    ],
+
+	    column1: [
+		{
+		    xtype: 'pmxDisplayEditField',
+		    fieldLabel: gettext('Name'),
+		    cbind: {
+			editable: '{!name}',
+			value: '{name}',
+			submitValue: '{isCreate}',
+		    },
+		    name: 'name',
+		    allowBlank: false,
+		},
+		{
+		    xtype: 'pveNodeSelector',
+		    reference: 'nodeselector',
+		    fieldLabel: gettext('Node'),
+		    name: 'node',
+		    cbind: {
+			disabled: '{hideNodeSelector}',
+			hidden: '{hideNodeSelector}',
+		    },
+		    allowBlank: false,
+		},
+	    ],
+
+	    column2: [
+		{
+		    xtype: 'fieldcontainer',
+		    defaultType: 'radiofield',
+		    layout: 'fit',
+		    cbind: {
+			disabled: '{hideMapping}',
+			hidden: '{hideMapping}',
+		    },
+		    items: [
+			{
+			    xtype: 'textfield',
+			    name: 'path',
+			    reference: 'path',
+			    value: '',
+			    emptyText: gettext('/some/path'),
+			    cbind: {
+				nodename: '{nodename}',
+				disabled: '{hideMapping}',
+			    },
+			    allowBlank: false,
+			    fieldLabel: gettext('Path'),
+			},
+			{
+			    xtype: 'proxmoxcheckbox',
+			    name: 'submounts',
+			    fieldLabel: gettext('submounts'),
+			},
+		    ],
+		},
+	    ],
+
+	    columnB: [
+		{
+		    xtype: 'fieldcontainer',
+		    defaultType: 'radiofield',
+		    layout: 'fit',
+		    cbind: {
+			disabled: '{hideComment}',
+			hidden: '{hideComment}',
+		    },
+		    items: [
+			{
+			    xtype: 'proxmoxtextfield',
+			    fieldLabel: gettext('Comment'),
+			    submitValue: true,
+			    name: 'description',
+			    deleteEmpty: true,
+			},
+			{
+			    xtype: 'proxmoxcheckbox',
+			    fieldLabel: gettext('Use dir with xattr by default'),
+			    name: 'xattr',
+			    deleteEmpty: true,
+			},
+			{
+			    xtype: 'proxmoxcheckbox',
+			    fieldLabel: gettext('Use dir with acl by default'),
+			    name: 'acl',
+			    deleteEmpty: true,
+			},
+		    ],
+		},
+	    ],
+	},
+    ],
+});
-- 
2.39.2




* [pve-devel] [PATCH manager v10 09/11] ui: ResourceMapTree for DIR
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
                   ` (7 preceding siblings ...)
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 08/11] ui: add edit window for dir mappings Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 10/11] ui: form: add DIRMapSelector Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 11/11] ui: add options to add virtio-fs to qemu config Markus Frank
  10 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile         |  1 +
 www/manager6/dc/Config.js     | 10 +++++++
 www/manager6/dc/DirMapView.js | 50 +++++++++++++++++++++++++++++++++++
 3 files changed, 61 insertions(+)
 create mode 100644 www/manager6/dc/DirMapView.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index f6140562..5a3541e0 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -189,6 +189,7 @@ JSSRC= 							\
 	dc/RealmSyncJob.js				\
 	dc/PCIMapView.js				\
 	dc/USBMapView.js				\
+	dc/DirMapView.js				\
 	lxc/CmdMenu.js					\
 	lxc/Config.js					\
 	lxc/CreateWizard.js				\
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index ddbb58b1..3355c835 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -320,6 +320,16 @@ Ext.define('PVE.dc.Config', {
 			    title: gettext('USB Devices'),
 			    flex: 1,
 			},
+			{
+			    xtype: 'splitter',
+			    collapsible: false,
+			    performCollapse: false,
+			},
+			{
+			    xtype: 'pveDcDirMapView',
+			    title: gettext('Directories'),
+			    flex: 1,
+			},
 		    ],
 		},
 	    );
diff --git a/www/manager6/dc/DirMapView.js b/www/manager6/dc/DirMapView.js
new file mode 100644
index 00000000..4468e951
--- /dev/null
+++ b/www/manager6/dc/DirMapView.js
@@ -0,0 +1,50 @@
+Ext.define('pve-resource-dir-tree', {
+    extend: 'Ext.data.Model',
+    idProperty: 'internalId',
+    fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
+});
+
+Ext.define('PVE.dc.DirMapView', {
+    extend: 'PVE.tree.ResourceMapTree',
+    alias: 'widget.pveDcDirMapView',
+
+    editWindowClass: 'PVE.window.DirMapEditWindow',
+    baseUrl: '/cluster/mapping/dir',
+    mapIconCls: 'fa fa-folder',
+    entryIdProperty: 'path',
+
+    store: {
+	sorters: 'text',
+	model: 'pve-resource-dir-tree',
+	data: {},
+    },
+
+    columns: [
+	{
+	    xtype: 'treecolumn',
+	    text: gettext('ID/Node'),
+	    dataIndex: 'text',
+	    width: 200,
+	},
+	{
+	    text: gettext('xattr'),
+	    dataIndex: 'xattr',
+	},
+	{
+	    text: gettext('acl'),
+	    dataIndex: 'acl',
+	},
+	{
+	    text: gettext('submounts'),
+	    dataIndex: 'submounts',
+	},
+	{
+	    header: gettext('Comment'),
+	    dataIndex: 'description',
+	    renderer: function(value, _meta, record) {
+		return value ?? record.data.comment;
+	    },
+	    flex: 1,
+	},
+    ],
+});
-- 
2.39.2




* [pve-devel] [PATCH manager v10 10/11] ui: form: add DIRMapSelector
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
                   ` (8 preceding siblings ...)
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 09/11] ui: ResourceMapTree for DIR Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 11/11] ui: add options to add virtio-fs to qemu config Markus Frank
  10 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile               |  1 +
 www/manager6/form/DirMapSelector.js | 63 +++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)
 create mode 100644 www/manager6/form/DirMapSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 5a3541e0..cac8cd02 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -35,6 +35,7 @@ JSSRC= 							\
 	form/ContentTypeSelector.js			\
 	form/ControllerSelector.js			\
 	form/DayOfWeekSelector.js			\
+	form/DirMapSelector.js                          \
 	form/DiskFormatSelector.js			\
 	form/DiskStorageSelector.js			\
 	form/FileSelector.js				\
diff --git a/www/manager6/form/DirMapSelector.js b/www/manager6/form/DirMapSelector.js
new file mode 100644
index 00000000..473a2ffe
--- /dev/null
+++ b/www/manager6/form/DirMapSelector.js
@@ -0,0 +1,63 @@
+Ext.define('PVE.form.DirMapSelector', {
+    extend: 'Proxmox.form.ComboGrid',
+    alias: 'widget.pveDirMapSelector',
+
+    store: {
+	fields: ['name', 'path'],
+	filterOnLoad: true,
+	sorters: [
+	    {
+		property: 'id',
+		direction: 'ASC',
+	    },
+	],
+    },
+
+    allowBlank: false,
+    autoSelect: false,
+    displayField: 'id',
+    valueField: 'id',
+
+    listConfig: {
+	columns: [
+	    {
+		header: gettext('Directory ID'),
+		dataIndex: 'id',
+		flex: 1,
+	    },
+	    {
+		header: gettext('Comment'),
+		dataIndex: 'description',
+		flex: 1,
+	    },
+	],
+    },
+
+    setNodename: function(nodename) {
+	var me = this;
+
+	if (!nodename || me.nodename === nodename) {
+	    return;
+	}
+
+	me.nodename = nodename;
+
+	me.store.setProxy({
+	    type: 'proxmox',
+	    url: `/api2/json/cluster/mapping/dir?check-node=${nodename}`,
+	});
+
+	me.store.load();
+    },
+
+    initComponent: function() {
+	var me = this;
+
+	var nodename = me.nodename;
+	me.nodename = undefined;
+
+        me.callParent();
+
+	me.setNodename(nodename);
+    },
+});
-- 
2.39.2




* [pve-devel] [PATCH manager v10 11/11] ui: add options to add virtio-fs to qemu config
  2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
                   ` (9 preceding siblings ...)
  2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 10/11] ui: form: add DIRMapSelector Markus Frank
@ 2024-05-14 10:54 ` Markus Frank
  10 siblings, 0 replies; 16+ messages in thread
From: Markus Frank @ 2024-05-14 10:54 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
 www/manager6/Makefile             |   1 +
 www/manager6/Utils.js             |   1 +
 www/manager6/qemu/HardwareView.js |  19 +++++
 www/manager6/qemu/VirtiofsEdit.js | 137 ++++++++++++++++++++++++++++++
 4 files changed, 158 insertions(+)
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index cac8cd02..29d676df 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -270,6 +270,7 @@ JSSRC= 							\
 	qemu/Smbios1Edit.js				\
 	qemu/SystemEdit.js				\
 	qemu/USBEdit.js					\
+	qemu/VirtiofsEdit.js				\
 	sdn/Browser.js					\
 	sdn/ControllerView.js				\
 	sdn/Status.js					\
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index f5608944..52ea5589 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1636,6 +1636,7 @@ Ext.define('PVE.Utils', {
 	serial: 4,
 	rng: 1,
 	tpmstate: 1,
+	virtiofs: 10,
     },
 
     // we can have usb6 and up only for specific machine/ostypes
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index 672a7e1a..0d4a3984 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -309,6 +309,16 @@ Ext.define('PVE.qemu.HardwareView', {
 	    never_delete: !caps.nodes['Sys.Console'],
 	    header: gettext("VirtIO RNG"),
 	};
+	for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+	    let confid = "virtiofs" + i.toString();
+	    rows[confid] = {
+		group: 50,
+		order: i,
+		iconCls: 'folder',
+		editor: 'PVE.qemu.VirtiofsEdit',
+		header: gettext('Virtiofs') + ' (' + confid +')',
+	    };
+	}
 
 	var sorterFn = function(rec1, rec2) {
 	    var v1 = rec1.data.key;
@@ -583,6 +593,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
 	    const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
 	    const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
+	    const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
 
 	    me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
 	    me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
@@ -592,6 +603,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
 	    efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
 	    me.down('#addTpmState').setDisabled(noSysConsolePerm || isAtLimit('tpmstate'));
+	    me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
 	    me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
 
 	    if (!rec) {
@@ -735,6 +747,13 @@ Ext.define('PVE.qemu.HardwareView', {
 				disabled: !caps.nodes['Sys.Console'],
 				handler: editorFactory('RNGEdit'),
 			    },
+			    {
+				text: gettext("Virtiofs"),
+				itemId: 'addVirtiofs',
+				iconCls: 'fa fa-folder',
+				disabled: !caps.nodes['Sys.Console'],
+				handler: editorFactory('VirtiofsEdit'),
+			    },
 			],
 		    }),
 		},
diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
new file mode 100644
index 00000000..ec5c69fd
--- /dev/null
+++ b/www/manager6/qemu/VirtiofsEdit.js
@@ -0,0 +1,137 @@
+Ext.define('PVE.qemu.VirtiofsInputPanel', {
+    extend: 'Proxmox.panel.InputPanel',
+    xtype: 'pveVirtiofsInputPanel',
+    onlineHelp: 'qm_virtiofs',
+
+    insideWizard: false,
+
+    onGetValues: function(values) {
+	var me = this;
+	var confid = me.confid;
+	var params = {};
+	delete values.delete;
+	params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
+	return params;
+    },
+
+    setSharedfiles: function(confid, data) {
+	var me = this;
+	me.confid = confid;
+	me.virtiofs = data;
+	me.setValues(me.virtiofs);
+    },
+    initComponent: function() {
+	let me = this;
+
+	me.nodename = me.pveSelNode.data.node;
+	if (!me.nodename) {
+	    throw "no node name specified";
+	}
+	me.items = [
+	    {
+		xtype: 'pveDirMapSelector',
+		emptyText: 'dirid',
+		nodename: me.nodename,
+		fieldLabel: gettext('Directory ID'),
+		name: 'dirid',
+		allowBlank: false,
+	    },
+	    {
+		xtype: 'proxmoxKVComboBox',
+		fieldLabel: gettext('Cache'),
+		name: 'cache',
+		value: '__default__',
+		deleteDefaultValue: false,
+		comboItems: [
+		    ['__default__', Proxmox.Utils.defaultText + ' (auto)'],
+		    ['auto', 'auto'],
+		    ['always', 'always'],
+		    ['never', 'never'],
+		],
+	    },
+	    {
+		xtype: 'proxmoxKVComboBox',
+		fieldLabel: gettext('xattr'),
+		name: 'xattr',
+		value: '__default__',
+		deleteDefaultValue: false,
+		comboItems: [
+		    ['__default__', Proxmox.Utils.defaultText + ' (Mapping Settings)'],
+		    ['0', 'off'],
+		    ['1', 'on'],
+		],
+	    },
+	    {
+		xtype: 'proxmoxKVComboBox',
+		fieldLabel: gettext('acl (implies xattr)'),
+		name: 'acl',
+		value: '__default__',
+		deleteDefaultValue: false,
+		comboItems: [
+		    ['__default__', Proxmox.Utils.defaultText + ' (Mapping Settings)'],
+		    ['0', 'off'],
+		    ['1', 'on'],
+		],
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('Direct-io'),
+		name: 'direct-io',
+	    },
+	];
+
+	me.virtiofs = {};
+	me.confid = 'virtiofs0';
+	me.callParent();
+    },
+});
+
+Ext.define('PVE.qemu.VirtiofsEdit', {
+    extend: 'Proxmox.window.Edit',
+
+    subject: gettext('Filesystem Passthrough'),
+
+    initComponent: function() {
+	var me = this;
+
+	me.isCreate = !me.confid;
+
+	var ipanel = Ext.create('PVE.qemu.VirtiofsInputPanel', {
+	    confid: me.confid,
+	    pveSelNode: me.pveSelNode,
+	    isCreate: me.isCreate,
+	});
+
+	Ext.applyIf(me, {
+	    items: ipanel,
+	});
+
+	me.callParent();
+
+	me.load({
+	    success: function(response) {
+	        me.conf = response.result.data;
+	        var i, confid;
+	        if (!me.isCreate) {
+		    var value = me.conf[me.confid];
+		    var virtiofs = PVE.Parser.parsePropertyString(value, "dirid");
+		    if (!virtiofs) {
+			Ext.Msg.alert(gettext('Error'), 'Unable to parse virtiofs options');
+			me.close();
+			return;
+		    }
+		    ipanel.setSharedfiles(me.confid, virtiofs);
+		} else {
+		    for (i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+		        confid = 'virtiofs' + i.toString();
+		        if (!Ext.isDefined(me.conf[confid])) {
+			    me.confid = confid;
+			    break;
+			}
+		    }
+		    ipanel.setSharedfiles(me.confid, {});
+		}
+	    },
+	});
+    },
+});
-- 
2.39.2




* Re: [pve-devel] [PATCH guest-common v10 2/11] add dir mapping section config
  2024-05-14 10:54 ` [pve-devel] [PATCH guest-common v10 2/11] add dir mapping section config Markus Frank
@ 2024-06-12 11:50   ` Fiona Ebner
  0 siblings, 0 replies; 16+ messages in thread
From: Fiona Ebner @ 2024-06-12 11:50 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 14.05.24 um 12:54 schrieb Markus Frank:
> diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
> new file mode 100644
> index 0000000..8f131c2
> --- /dev/null
> +++ b/src/PVE/Mapping/Dir.pm
> @@ -0,0 +1,205 @@
> +package PVE::Mapping::Dir;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
> +use PVE::INotify;
> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
> +use PVE::SectionConfig;
> +use PVE::Storage::Plugin;

Storage::Plugin isn't used anywhere?

> +
> +use base qw(PVE::SectionConfig);
> +
> +my $FILENAME = 'mapping/dir.cfg';
> +
> +cfs_register_file($FILENAME,
> +                  sub { __PACKAGE__->parse_config(@_); },
> +                  sub { __PACKAGE__->write_config(@_); });

Style nit: uses only spaces, non-standard indentation
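E.g., one possible way to format it (just to illustrate the indentation, not a strict requirement):

```
cfs_register_file(
    $FILENAME,
    sub { __PACKAGE__->parse_config(@_); },
    sub { __PACKAGE__->write_config(@_); },
);
```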

---snip---

> +my $map_fmt = {
> +    node => get_standard_option('pve-node'),
> +    path => {
> +	description => "Absolute directory path that should be shared with the guest.",
> +	type => 'string',
> +	format => 'pve-storage-path',
> +    },
> +    submounts => {
> +	type => 'boolean',
> +	description => "Announce that the directory contains other mounted"
> +	    ." file systems. If this is not set and multiple file systems are"
> +	    ." mounted, the guest may encounter duplicates due to file system"
> +	    ." specific inode IDs.",
> +	optional => 1,
> +	default => 0,

Hmm, so even if this option is not set, the files from submounts are
accessible? Should this rather be turned on by default or always? Or
what are the downsides when setting this?
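For reference, the docs patch shows how a mapping would opt in (node name and path are only examples):

```
pvesh create /cluster/mapping/dir --id dir1 \
    --map node=node2,path=/path/to/share2,submounts=1
```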

---snip---

> +sub assert_valid {
> +    my ($dir_cfg) = @_;
> +
> +    my $path = $dir_cfg->{path};
> +
> +    if (! -e $path) {
> +	die "Path $path does not exist\n";
> +    } elsif (! -d $path) {
> +	die "Path $path exists but is not a directory\n"

Nit: missing comma before 'but', missing semicolon at the end of the line.
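I.e., something like:

```
die "Path $path exists, but is not a directory\n";
```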

> +    }
> +
> +    return 1;
> +};
> +
> +sub check_duplicate {

Better called e.g. assert_no_duplicate_node, because it dies. "check" is
mostly used for functions returning a "boolean" in our code base.
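Purely as an illustration of that convention (helper names are made up):

```
# "check_*": returns a boolean-ish value, never dies
sub check_node_is_duplicate {
    my ($count, $node) = @_;
    return ($count->{$node} // 0) > 1;
}

# "assert_*": dies when the invariant is violated
sub assert_no_duplicate_node {
    my ($count) = @_;
    for my $node (sort keys %$count) {
	die "Node '$node' is specified $count->{$node} times.\n" if $count->{$node} > 1;
    }
}
```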

> +    my ($map_list) = @_;
> +
> +    my %count;
> +    for my $map (@$map_list){

Style nit: missing space after closing parenthesis

> +	my $entry = parse_property_string($map_fmt, $map);
> +	$count{$entry->{node}}++;
> +    }
> +    for my $node (keys %count) {
> +	if ($count{$node} > 1) {
> +	    die "Node '$node' is specified $count{$node} times.\n";
> +	}
> +    }
> +}
> +



* Re: [pve-devel] [PATCH docs v10 3/11] add doc section for the shared filesystem virtio-fs
  2024-05-14 10:54 ` [pve-devel] [PATCH docs v10 3/11] add doc section for the shared filesystem virtio-fs Markus Frank
@ 2024-06-12 11:50   ` Fiona Ebner
  0 siblings, 0 replies; 16+ messages in thread
From: Fiona Ebner @ 2024-06-12 11:50 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 14.05.24 um 12:54 schrieb Markus Frank:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  qm.adoc | 94 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 92 insertions(+), 2 deletions(-)
> 
> diff --git a/qm.adoc b/qm.adoc
> index 42c26db..755e20e 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -1081,6 +1081,95 @@ recommended to always use a limiter to avoid guests using too many host
>  resources. If desired, a value of '0' for `max_bytes` can be used to disable
>  all limits.
>  
> +[[qm_virtiofs]]
> +Virtio-fs
> +~~~~~~~~~
> +
> +Virtio-fs is a shared file system that enables sharing a directory between host
> +and guest VM. It takes advantage of the locality of virtual machines and the
> +hypervisor to get a higher throughput than the 9p remote file system protocol.

A bit more general would be:
"higher throughput than network-based protocols, like the 9p remote file
system protocol."

> +
> +To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
> +needs to run in the background. In {pve}, this process starts immediately before
> +the start of QEMU.
> +
> +Linux VMs with kernel >=5.4 support this feature by default.
> +
> +There is a guide available on how to utilize virtio-fs in Windows VMs.
> +https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
> +
> +Known Limitations
> +^^^^^^^^^^^^^^^^^
> +
> +* Virtiofsd crashing means no recovery until VM is fully stopped and restarted.
> +* Virtiofsd not responding may result in NFS-like hanging access in the VM.
> +* Memory hotplug does not work in combination with virtio-fs (also results in
> +hanging access).
> +* Live migration does not work.
> +* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
> +otherwise the virtio-fs device will not be visible within the VMs.
> +
> +Add Mapping for Shared Directories
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +To add a mapping for a shared directory, either use the API directly with
> +`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:

There is an "either" in the sentence above, but then there is no "or".

> +
> +----
> +pvesh create /cluster/mapping/dir --id dir1 \
> +    --map node=node1,path=/path/to/share1 \
> +    --map node=node2,path=/path/to/share2,submounts=1 \
> +    --xattr 1 \
> +    --acl 1
> +----
> +
> +The `acl` parameter automatically implies `xattr`, that is, it makes no
> +difference whether you set `xattr` to `0` if `acl` is set to `1`.
> +
> +Set `submounts` to `1` when multiple file systems are mounted in a shared
> +directory to prevent the guest from creating duplicates because of file system
> +specific inode IDs that get passed through.
> +
> +
> +Add virtio-fs to a VM
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
> +anything between 0 and 9) to the VM config and use a directory ID (dirid) that
> +has been configured in the resource mapping. Additionally, you can set the
> +`cache` option to either `always`, `never`, or `auto` (default: `auto`),
> +depending on your requirements. How the different caching modes behave can be
> +read at https://lwn.net/Articles/774495/ under the title "Caching Modes". To

s/title/section/ ?

> +enable writeback cache set `writeback` to `1`.
> +
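For reference, the qemu-server patch's commit message already shows how such options end up in the VM config:

```
virtiofs0: foo,direct-io=1,cache=always,acl=1
virtiofs1: dirid=bar,cache=never,xattr=1,writeback=1
```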



* Re: [pve-devel] [PATCH qemu-server v10 4/11] add virtiofsd as runtime dependency for qemu-server
  2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 4/11] add virtiofsd as runtime dependency for qemu-server Markus Frank
@ 2024-06-12 11:50   ` Fiona Ebner
  0 siblings, 0 replies; 16+ messages in thread
From: Fiona Ebner @ 2024-06-12 11:50 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Commit title should be prefixed with "d/control:"

Am 14.05.24 um 12:54 schrieb Markus Frank:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  debian/control | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/debian/control b/debian/control
> index 1301a36..8e4ca7f 100644
> --- a/debian/control
> +++ b/debian/control
> @@ -55,6 +55,7 @@ Depends: dbus,
>           socat,
>           swtpm,
>           swtpm-tools,
> +         virtiofsd,
>           ${misc:Depends},
>           ${perl:Depends},
>           ${shlibs:Depends},



* Re: [pve-devel] [PATCH qemu-server v10 5/11] fix #1027: virtio-fs support
  2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 5/11] fix #1027: virtio-fs support Markus Frank
@ 2024-06-12 11:50   ` Fiona Ebner
  0 siblings, 0 replies; 16+ messages in thread
From: Fiona Ebner @ 2024-06-12 11:50 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 14.05.24 um 12:54 schrieb Markus Frank:
> add support for sharing directories with a guest vm
> 
> virtio-fs needs virtiofsd to be started.
> In order to start virtiofsd as a process (despite being a daemon it does not
> run in the background), a double-fork is used.
> 
> virtiofsd should close itself together with qemu.

Nit: s/qemu/QEMU/

> 
> There are the parameters dirid and the optional parameters direct-io, cache and
> writeback. Additionally the xattr & acl parameter overwrite the directory
> mapping settings for xattr & acl.
> 
> The dirid gets mapped to the path on the current node and is also used as a
> mount tag (name used to mount the device on the guest).
> 
> example config:
> ```
> virtiofs0: foo,direct-io=1,cache=always,acl=1
> virtiofs1: dirid=bar,cache=never,xattr=1,writeback=1
> ```
> 
> For information on the optional parameters see the coherent doc patch
> and the official gitlab README:
> https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md
> 
> Also add a permission check for virtiofs directory access.
> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
>  PVE/API2/Qemu.pm           |  39 ++++++-
>  PVE/QemuServer.pm          |  19 +++-
>  PVE/QemuServer/Makefile    |   3 +-
>  PVE/QemuServer/Memory.pm   |  34 ++++--
>  PVE/QemuServer/Virtiofs.pm | 212 +++++++++++++++++++++++++++++++++++++
>  5 files changed, 296 insertions(+), 11 deletions(-)
>  create mode 100644 PVE/QemuServer/Virtiofs.pm
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 2a349c8..5d97896 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -695,6 +695,32 @@ my sub check_vm_create_hostpci_perm {
>      return 1;
>  };
>  
> +my sub check_dir_perm {
> +    my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
> +
> +    return 1 if $authuser eq 'root@pam';
> +
> +    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
> +
> +    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);

The "pve-qm-virtiofs" format is registered by PVE::Qemu::Virtiofs so
there should be a use statement for that module at the top.
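I.e., next to the other use statements in PVE/API2/Qemu.pm:

```
use PVE::QemuServer::Virtiofs;
```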

> +    $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
> +
> +    return 1;
> +};
> +

---snip---

> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index f365f2d..691f9b3 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -10,6 +10,7 @@ use PVE::Exception qw(raise raise_param_exc);
>  use PVE::QemuServer::Helpers qw(parse_number_sets);
>  use PVE::QemuServer::Monitor qw(mon_cmd);
>  use PVE::QemuServer::QMPHelpers qw(qemu_devicedel qemu_objectdel);
> +use PVE::QemuServer::Virtiofs;

Can't we avoid this use statement? You could check on the call site and
pass in $virtiofs_enabled as a parameter to sub config(). Introducing
that parameter and the necessary changes in config() could also be a
preparatory patch. I.e. initially, there would be no caller passing in
1, but a later patch would change that.
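Rough sketch of what I mean (names and the exact call site are only illustrative):

```
# PVE/QemuServer/Memory.pm
sub config {
    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags, $virtiofs_enabled) = @_;
    # ... use $virtiofs_enabled here instead of parsing the virtiofsX entries ...
}

# call site in PVE/QemuServer.pm decides, so Memory.pm needs no Virtiofs import
my $virtiofs_enabled =
    grep { $conf->{"virtiofs$_"} } (0 .. PVE::QemuServer::Virtiofs::max_virtiofs() - 1);
PVE::QemuServer::Memory::config(
    $conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags, $virtiofs_enabled);
```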

>  
>  use base qw(Exporter);
>  
> @@ -336,7 +337,7 @@ sub qemu_memdevices_list {
>  }
>  
>  sub config {
> -    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
> +    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags) = @_;
>  
>      my $memory = get_current_memory($conf->{memory});
>      my $static_memory = 0;
> @@ -367,6 +368,16 @@ sub config {
>  
>      die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
>  
> +    my $virtiofs_enabled = 0;
> +    for (my $i = 0; $i < PVE::QemuServer::Virtiofs::max_virtiofs(); $i++) {
> +	my $opt = "virtiofs$i";
> +	next if !$conf->{$opt};
> +	my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +	if ($virtiofs) {
> +	    $virtiofs_enabled = 1;
> +	}
> +    }
> +
>      if ($conf->{numa}) {
>  
>  	my $numa_totalmemory = undef;
> @@ -379,7 +390,8 @@ sub config {
>  	    my $numa_memory = $numa->{memory};
>  	    $numa_totalmemory += $numa_memory;
>  
> -	    my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
> +	    my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
> +	    my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
>  
>  	    # cpus
>  	    my $cpulists = $numa->{cpus};
> @@ -404,7 +416,7 @@ sub config {
>  	    }
>  
>  	    push @$cmd, '-object', $mem_object;
> -	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> +	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
>  	}
>  
>  	die "total memory for NUMA nodes must be equal to vm static memory\n"
> @@ -418,15 +430,21 @@ sub config {
>  		die "host NUMA node$i doesn't exist\n"
>  		    if !host_numanode_exists($i) && $conf->{hugepages};
>  
> -		my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
> -		push @$cmd, '-object', $mem_object;
> -
>  		my $cpus = ($cores * $i);
>  		$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
>  
> -		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> +		my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
> +		my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
> +		push @$cmd, '-object', $mem_object;
> +		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
>  	    }
>  	}
> +    } elsif ($virtiofs_enabled) {
> +	# kvm: '-machine memory-backend' and '-numa memdev' properties are
> +	# mutually exclusive
> +	push @$cmd, '-object', 'memory-backend-memfd,id=virtiofs-mem'
> +	    .",size=$conf->{memory}M,share=on";
> +	push @$machine_flags, 'memory-backend=virtiofs-mem';
>      }
>  
>      if ($hotplug) {
> @@ -453,6 +471,8 @@ sub print_mem_object {
>  	my $path = hugepages_mount_path($hugepages_size);
>  
>  	return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
> +    } elsif ($id =~ m/^virtiofs-mem/) {
> +	return "memory-backend-memfd,id=$id,size=${size}M,share=on";
>      } else {
>  	return "memory-backend-ram,id=$id,size=${size}M";
>      }
> diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
> new file mode 100644
> index 0000000..4c59348
> --- /dev/null
> +++ b/PVE/QemuServer/Virtiofs.pm
> @@ -0,0 +1,212 @@
> +package PVE::QemuServer::Virtiofs;
> +
> +use strict;
> +use warnings;
> +
> +use Fcntl;

Should be use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC); since you use those
constants below.

> +use IO::Socket::UNIX;
> +use POSIX;

Nit: please add a blank line here.

> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
> +use PVE::Mapping::Dir;
> +
> +use base qw(Exporter);
> +
> +our @EXPORT_OK = qw(
> +max_virtiofs
> +assert_virtiofs_config
> +start_virtiofsd
> +start_all_virtiofsd
> +virtiofs_qemu_param

This function does not exist?

> +close_sockets

This is too generic of a name to export. All the callers use the prefix
luckily, so this export can just be dropped.

Since you only import max_virtiofs and start_all_virtiofsd in
QemuServer.pm, why not just export those two?

> +);
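I.e.:

```
our @EXPORT_OK = qw(max_virtiofs start_all_virtiofsd);
```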
> +
> +my $MAX_VIRTIOFS = 10;
> +my $socket_path_root = "/run/qemu-server/virtiofsd";
> +
> +my $virtiofs_fmt = {
> +    'dirid' => {
> +	type => 'string',
> +	default_key => 1,
> +	description => "Mapping identifier of the directory mapping to be"

Style nit: you can go up to 100 characters with your description lines

> +	    ." shared with the guest. Also used as a mount tag inside the VM.",
> +	format_description => 'mapping-id',
> +	format => 'pve-configid',
> +    },
> +    'cache' => {
> +	type => 'string',
> +	description => "The caching policy the file system should use (auto, always, never).",
> +	enum => [qw(auto always never)],
> +	default => "auto",
> +	optional => 1,
> +    },
> +    'direct-io' => {
> +	type => 'boolean',
> +	description => "Honor the O_DIRECT flag passed down by guest applications.",
> +	default => 0,
> +	optional => 1,
> +    },
> +    writeback => {
> +	type => 'boolean',
> +	description => "Enable writeback cache. If enabled, writes may be cached"
> +	    ." in the guest until the file is closed or an fsync is performed.",
> +	default => 0,
> +	optional => 1,
> +    },
> +    xattr => {
> +	type => 'boolean',
> +	description => "Overwrite the xattr option from mapping and explicitly"
> +	    ." enable/disable support for extended attributes for the VM.",
> +	default => "use value from mapping",
> +	optional => 1,
> +    },
> +    acl => {
> +	type => 'boolean',
> +	description => "Overwrite the acl option from mapping and explicitly"
> +	    ." enable/disable support for posix ACLs (enabled acl implies xattr)"
> +	    ." for the VM.",
> +	default => "use value from mapping",
> +	optional => 1,
> +    },
> +};
> +PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
> +
> +my $virtiofsdesc = {
> +    optional => 1,
> +    type => 'string', format => $virtiofs_fmt,
> +    description => "share a directory between host and guest",

Maybe add "Configuration for sharing" at the beginning and "using
Virtio-fs" at the end?

> +};
> +PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
> +
> +sub max_virtiofs {
> +    return $MAX_VIRTIOFS;
> +}
> +
> +sub assert_virtiofs_config {
> +    my ($conf, $virtiofs) = @_;

Style nit: missing blank line, also for other functions in the module

> +    my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
> +    my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
> +
> +    my $acl = $virtiofs->{'acl'} // $dir_cfg->{'acl'};

Style nit: using single quotes for 'acl' key

> +    if ($acl && PVE::QemuServer::windows_version($conf->{ostype})) {

There is no use statement for PVE::QemuServer at the beginning and that
would create a cyclic dependency. However, the function actually lives
in PVE::QemuServer::Helpers, so please use it from there.
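I.e.:

```
if ($acl && PVE::QemuServer::Helpers::windows_version($conf->{ostype})) {
```

(with a corresponding use PVE::QemuServer::Helpers; at the top).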

> +	log_warn(

log_warn was never imported.

> +	    "Please disable ACLs for virtiofs on Windows VMs, otherwise"
> +	    ." the virtiofs shared directory cannot be mounted."
> +	);
> +    }
> +
> +    if (!$node_list || scalar($node_list->@*) != 1) {
> +	die "virtiofs needs exactly one mapping for this node\n";
> +    }
> +
> +    eval PVE::Mapping::Dir::assert_valid($node_list->[0]);

Missing curly braces after eval
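I.e.:

```
eval { PVE::Mapping::Dir::assert_valid($node_list->[0]) };
```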

> +    if (my $err = $@) {
> +	die "Directory Mapping invalid: $err\n";

Please do not capitalize "Mapping"; "Directory" also doesn't need to be,
but can be fine. Could also be done as a one-liner with
die "...: $@\n" if $@;

> +    }
> +}
> +
> +sub config {
> +    my ($conf, $vmid, $devices, $machine_flags) = @_;

$machine_flags is not used?

> +    for (my $i = 0; $i < max_virtiofs(); $i++) {
> +	my $opt = "virtiofs$i";
> +
> +	next if !$conf->{$opt};
> +	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +	next if !$virtiofs;
> +
> +	assert_virtiofs_config($conf, $virtiofs);
> +
> +	push @$devices, '-chardev', "socket,id=virtfs$i,path=$socket_path_root/vm$vmid-fs$i";

Is using "virtfs" rather than "virtiofs" for the id intentional? Would
need to be adapted below too
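If it is not intentional, both occurrences would become (sketch):

```
push @$devices, '-chardev', "socket,id=virtiofs$i,path=$socket_path_root/vm$vmid-fs$i";
# ... queue-size comment unchanged ...
push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
    .",chardev=virtiofs$i,tag=$virtiofs->{dirid}";
```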

> +
> +	# queue-size is set 1024 because of bug with Windows guests:
> +	# https://bugzilla.redhat.com/show_bug.cgi?id=1873088
> +	# 1024 is also always used in the virtiofs documentations:
> +	# https://gitlab.com/virtio-fs/virtiofsd#examples
> +	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
> +	    .",chardev=virtfs$i,tag=$virtiofs->{dirid}";
> +    }
> +}
> +
> +sub start_all_virtiofsd {
> +    my ($conf, $vmid) = @_;
> +    my $virtiofs_sockets = [];
> +    for (my $i = 0; $i < max_virtiofs(); $i++) {
> +	my $opt = "virtiofs$i";
> +
> +	next if !$conf->{$opt};
> +	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +	next if !$virtiofs;
> +
> +	my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
> +	push @$virtiofs_sockets, $virtiofs_socket;
> +    }
> +    return $virtiofs_sockets;
> +}
> +
> +sub close_sockets {
> +    my @sockets = @_;
> +    for my $socket (@sockets) {
> +	shutdown($socket, 2);
> +	close($socket);
> +    }
> +}
> +
> +sub start_virtiofsd {
> +    my ($vmid, $fsid, $virtiofs) = @_;
> +
> +    my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
> +    my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
> +
> +    # Default to dir config xattr & acl settings
> +    my $xattr = $virtiofs->{xattr} // $dir_cfg->{xattr};
> +    my $acl = $virtiofs->{'acl'} // $dir_cfg->{'acl'};

Style nit: using single quotes for 'acl' key

> +
> +    my $node_cfg = $node_list->[0];
> +    my $path = $node_cfg->{path};

Style nit: let's move the $path variable closer to where it is used,
i.e. after the $socket_path stuff ;)

> +    mkdir $socket_path_root;
> +    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
> +    unlink($socket_path);
> +    my $socket = IO::Socket::UNIX->new(
> +	Type => SOCK_STREAM,

Missing use Socket qw(SOCK_STREAM); at the top

> +	Local => $socket_path,
> +	Listen => 1,
> +    ) or die "cannot create socket - $!\n";
> +
> +    my $flags = fcntl($socket, F_GETFD, 0)
> +	or die "failed to get file descriptor flags: $!\n";
> +    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
> +	or die "failed to remove FD_CLOEXEC from file descriptor\n";
> +
> +    my $fd = $socket->fileno();
> +
> +    my $virtiofsd_bin = '/usr/libexec/virtiofsd';
> +
> +    my $pid = fork();
> +    if ($pid == 0) {
> +	setsid();

POSIX::setsid();

> +	$0 = "task pve-vm$vmid-virtiofs$fsid";
> +	my $pid2 = fork();
> +	if ($pid2 == 0) {
> +	    my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
> +	    push @$cmd, '--xattr' if $xattr;
> +	    push @$cmd, '--posix-acl' if $acl;
> +	    push @$cmd, '--announce-submounts' if ($node_cfg->{submounts});

Style nit: superfluous parentheses for post-if, also below

> +	    push @$cmd, '--allow-direct-io' if ($virtiofs->{'direct-io'});
> +	    push @$cmd, "--cache=$virtiofs->{'cache'}" if ($virtiofs->{'cache'});
> +	    push @$cmd, "--writeback" if ($virtiofs->{'writeback'});
> +	    push @$cmd, '--syslog';
> +	    exec(@$cmd);
> +	} elsif (!defined($pid2)) {
> +	    die "could not fork to start virtiofsd\n";
> +	} else {
> +	    POSIX::_exit(0);
> +	}
> +    } elsif (!defined($pid)) {
> +	die "could not fork to start virtiofsd\n";
> +    } else {
> +	waitpid($pid, 0);
> +    }
> +
> +    # return socket to keep it alive,
> +    # so that QEMU will wait for virtiofsd to start
> +    return $socket;
> +}

Module should end with "1;"



end of thread, other threads:[~2024-06-12 11:50 UTC | newest]

Thread overview: 16+ messages
2024-05-14 10:54 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v10 0/11] virtiofs Markus Frank
2024-05-14 10:54 ` [pve-devel] [PATCH cluster v10 1/11] add mapping/dir.cfg for resource mapping Markus Frank
2024-05-14 10:54 ` [pve-devel] [PATCH guest-common v10 2/11] add dir mapping section config Markus Frank
2024-06-12 11:50   ` Fiona Ebner
2024-05-14 10:54 ` [pve-devel] [PATCH docs v10 3/11] add doc section for the shared filesystem virtio-fs Markus Frank
2024-06-12 11:50   ` Fiona Ebner
2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 4/11] add virtiofsd as runtime dependency for qemu-server Markus Frank
2024-06-12 11:50   ` Fiona Ebner
2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 5/11] fix #1027: virtio-fs support Markus Frank
2024-06-12 11:50   ` Fiona Ebner
2024-05-14 10:54 ` [pve-devel] [PATCH qemu-server v10 6/11] migration: check_local_resources for virtiofs Markus Frank
2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 07/11] api: add resource map api endpoints for directories Markus Frank
2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 08/11] ui: add edit window for dir mappings Markus Frank
2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 09/11] ui: ResourceMapTree for DIR Markus Frank
2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 10/11] ui: form: add DIRMapSelector Markus Frank
2024-05-14 10:54 ` [pve-devel] [PATCH manager v10 11/11] ui: add options to add virtio-fs to qemu config Markus Frank
