public inbox for pve-devel@lists.proxmox.com
 help / color / mirror / Atom feed
* [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs
@ 2025-03-04 11:57 Markus Frank
  2025-03-04 11:57 ` [pve-devel] [PATCH cluster v14 1/12] add mapping/dir.cfg for resource mapping Markus Frank
                   ` (15 more replies)
  0 siblings, 16 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:57 UTC (permalink / raw)
  To: pve-devel

Virtio-fs is a shared file system that enables sharing a directory
between host and guest VMs. It takes advantage of the locality of
virtual machines and the hypervisor to get a higher throughput than
the 9p remote file system protocol.

build-order:
1. cluster
2. guest-common
3. docs
4. qemu-server
5. manager

I did not get virtiofsd to run with run_command without creating
zombie processes after stutdown. So I replaced run_command with exec
for now. Maybe someone can find out why this happens.


changes in v14:
* disallow commas and equal signs in path until the path can be quoted
 in property strings
* addressed style nits and improved formatting
* use max_virtiofs() in check_vm_create_dir_perm
* removed unnecessary checks after parse_property_string
* find_on_current_node returns only one entry
* improved docs
* added missing imports/uses

changes in v13:
* removed acl/xattr attributes in node config
* renamed acl/xattr in virtiofs qemu config to expose-acl/expose-xattr
* renamed submounts in node config to announce-submounts
* the "disable snapshot (with RAM) and hibernate with virtio-fs devices"
 patch now uses the check_non_migratable_resources function
* rewritten the part about announce-submounts in pve-docs patch

Changes in v12:
* rebase to master as most patches could not be applied anymore

Changes in v11:
* made submounts option on by default in WebUI and section config
* PVE::QemuServer::Virtiofs dependency removed in QemuServer/Memory.pm
* Minor changes to function/variable names
* Disable snapshots (with RAM) and hibernate due to incompatibility



cluster:

Markus Frank (1):
  add mapping/dir.cfg for resource mapping

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)



guest-common:

Markus Frank (1):
  add dir mapping section config

 src/Makefile           |   1 +
 src/PVE/Mapping/Dir.pm | 196 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 197 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm



docs:

Markus Frank (1):
  add doc section for the shared filesystem virtio-fs

 qm.adoc | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 97 insertions(+), 2 deletions(-)



qemu-server:

Markus Frank (4):
  control: add virtiofsd as runtime dependency for qemu-server
  fix #1027: virtio-fs support
  migration: check_local_resources for virtiofs
  disable snapshot (with RAM) and hibernate with virtio-fs devices

 PVE/API2/Qemu.pm             |  41 ++++++-
 PVE/QemuServer.pm            |  46 +++++++-
 PVE/QemuServer/Makefile      |   3 +-
 PVE/QemuServer/Memory.pm     |  22 ++--
 PVE/QemuServer/Virtiofs.pm   | 211 +++++++++++++++++++++++++++++++++++
 debian/control               |   1 +
 test/MigrationTest/Shared.pm |   7 ++
 7 files changed, 317 insertions(+), 14 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm



manager:

Markus Frank (5):
  api: add resource map api endpoints for directories
  ui: add edit window for dir mappings
  ui: add resource mapping view for directories
  ui: form: add selector for directory mappings
  ui: add options to add virtio-fs to qemu config

 PVE/API2/Cluster/Mapping.pm         |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm     | 308 ++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile   |   1 +
 www/manager6/Makefile               |   4 +
 www/manager6/Utils.js               |   1 +
 www/manager6/dc/Config.js           |  10 +
 www/manager6/dc/DirMapView.js       |  42 ++++
 www/manager6/form/DirMapSelector.js |  63 ++++++
 www/manager6/qemu/HardwareView.js   |  19 ++
 www/manager6/qemu/VirtiofsEdit.js   | 137 +++++++++++++
 www/manager6/window/DirMapEdit.js   | 214 +++++++++++++++++++
 11 files changed, 806 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
 create mode 100644 www/manager6/dc/DirMapView.js
 create mode 100644 www/manager6/form/DirMapSelector.js
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js
 create mode 100644 www/manager6/window/DirMapEdit.js

-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH cluster v14 1/12] add mapping/dir.cfg for resource mapping
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
@ 2025-03-04 11:57 ` Markus Frank
  2025-03-04 11:57 ` [pve-devel] [PATCH guest-common v14 2/12] add dir mapping section config Markus Frank
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:57 UTC (permalink / raw)
  To: pve-devel

Add it to both the perl side (PVE/Cluster.pm) and pmxcfs side
(status.c).
This dir.cfg is used to map directory IDs to paths on selected hosts.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
v14: nothing changed

 src/PVE/Cluster.pm  | 1 +
 src/pmxcfs/status.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index e0e3ee9..b9311c7 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -84,6 +84,7 @@ my $observed = {
     'sdn/.running-config' => 1,
     'virtual-guest/cpu-models.conf' => 1,
     'virtual-guest/profiles.cfg' => 1,
+    'mapping/dir.cfg' => 1,
     'mapping/pci.cfg' => 1,
     'mapping/usb.cfg' => 1,
 };
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index ff5fcc4..39b17f4 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -114,6 +114,7 @@ static memdb_change_t memdb_change_array[] = {
 	{ .path = "virtual-guest/cpu-models.conf" },
 	{ .path = "virtual-guest/profiles.cfg" },
 	{ .path = "firewall/cluster.fw" },
+	{ .path = "mapping/dir.cfg" },
 	{ .path = "mapping/pci.cfg" },
 	{ .path = "mapping/usb.cfg" },
 };
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH guest-common v14 2/12] add dir mapping section config
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
  2025-03-04 11:57 ` [pve-devel] [PATCH cluster v14 1/12] add mapping/dir.cfg for resource mapping Markus Frank
@ 2025-03-04 11:57 ` Markus Frank
  2025-04-02 13:14   ` Fabian Grünbichler
  2025-04-02 13:41   ` Daniel Kral
  2025-03-04 11:57 ` [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
                   ` (13 subsequent siblings)
  15 siblings, 2 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:57 UTC (permalink / raw)
  To: pve-devel

Adds a config file for directories by using a 'map' property string for
each node mapping.

Next to node & path, there is the optional announce-submounts parameter
which forces virtiofsd to report a different device number for each
submount it encounters. Without it, duplicates may be created because
inode IDs are only unique on a single filesystem.

example config:
```
some-dir-id
	map node=node1,path=/mnt/share/,announce-submounts=1
	map node=node2,path=/mnt/share/,
```

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v14:
* disallow commas and equal signs in path until the path can be quoted
 in property strings
* addressed style nits and improved formatting

 src/Makefile           |   1 +
 src/PVE/Mapping/Dir.pm | 196 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 197 insertions(+)
 create mode 100644 src/PVE/Mapping/Dir.pm

diff --git a/src/Makefile b/src/Makefile
index cbc40c1..030e7f7 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -15,6 +15,7 @@ install: PVE
 	install -m 0644 PVE/StorageTunnel.pm ${PERL5DIR}/PVE/
 	install -m 0644 PVE/Tunnel.pm ${PERL5DIR}/PVE/
 	install -d ${PERL5DIR}/PVE/Mapping
+	install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
 	install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
 	install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
 	install -d ${PERL5DIR}/PVE/VZDump
diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
new file mode 100644
index 0000000..581ec39
--- /dev/null
+++ b/src/PVE/Mapping/Dir.pm
@@ -0,0 +1,196 @@
+package PVE::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+    sub { __PACKAGE__->parse_config(@_); },
+    sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+    my ($class, $line) = @_;
+
+    if ($line =~ m/^(\S+)\s*$/) {
+	my $id = $1;
+	my $errmsg = undef; # set if you want to skip whole section
+	eval { PVE::JSONSchema::pve_verify_configid($id) };
+	$errmsg = $@ if $@;
+	my $config = {}; # to return additional attributes
+	return ('dir', $id, $errmsg, $config);
+    }
+    return undef;
+}
+
+sub format_section_header {
+    my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+    return "$sectionId\n";
+}
+
+sub type {
+    return 'dir';
+}
+
+# temporary path format that also disallows commas and equal signs
+# TODO: Remove this when property_string supports quotation of properties
+PVE::JSONSchema::register_format('pve-storage-path-in-property-string', \&verify_path);
+sub verify_path {
+    my ($path, $noerr) = @_;
+
+    if ($path !~ m|^/[^;,=\(\)]+|) {
+	return undef if $noerr;
+	die "Value does not look like a valid absolute path."
+	    ." These symbols are currently not allowed in path: ;,=())\n";
+    }
+    return $path;
+}
+
+my $map_fmt = {
+    node => get_standard_option('pve-node'),
+    path => {
+	description => "Absolute directory path that should be shared with the guest.",
+	type => 'string',
+	format => 'pve-storage-path-in-property-string',
+    },
+    'announce-submounts' => {
+	type => 'boolean',
+	description => "Announce that the directory contains other mounted file systems."
+	    ." If this is not set and multiple file systems are mounted, the guest may"
+	    ." encounter duplicates due to file system specific inode IDs.",
+	optional => 1,
+	default => 1,
+    },
+};
+
+my $defaultData = {
+    propertyList => {
+	id => {
+	    type => 'string',
+	    description => "The ID of the directory mapping",
+	    format => 'pve-configid',
+	},
+	description => {
+	    type => 'string',
+	    description => "Description of the directory mapping",
+	    optional => 1,
+	    maxLength => 4096,
+	},
+	map => {
+	    type => 'array',
+	    description => 'A list of maps for the cluster nodes.',
+	    optional => 1,
+	    items => {
+		type => 'string',
+		format => $map_fmt,
+	    },
+	},
+    },
+};
+
+sub private {
+    return $defaultData;
+}
+
+sub options {
+    return {
+	description => { optional => 1 },
+	map => {},
+    };
+}
+
+sub assert_valid {
+    my ($dir_cfg) = @_;
+
+    my $path = $dir_cfg->{path};
+
+    verify_path($path);
+
+    if (! -e $path) {
+	die "Path $path does not exist\n";
+    } elsif (! -d $path) {
+	die "Path $path exists, but is not a directory\n";
+    }
+
+    return 1;
+};
+
+sub assert_valid_map_list {
+    my ($map_list) = @_;
+
+    my %count;
+    for my $map (@$map_list) {
+	my $entry = parse_property_string($map_fmt, $map);
+	assert_valid($entry);
+	$count{$entry->{node}}++;
+    }
+    for my $node (keys %count) {
+	if ($count{$node} > 1) {
+	    die "Node '$node' is specified $count{$node} times.\n";
+	}
+    }
+}
+
+sub config {
+    return cfs_read_file($FILENAME);
+}
+
+sub lock_dir_config {
+    my ($code, $errmsg) = @_;
+
+    cfs_lock_file($FILENAME, undef, $code);
+    if (my $err = $@) {
+	$errmsg ? die "$errmsg: $err" : die $err;
+    }
+}
+
+sub write_dir_config {
+    my ($cfg) = @_;
+
+    cfs_write_file($FILENAME, $cfg);
+}
+
+sub find_on_current_node {
+    my ($id) = @_;
+
+    my $cfg = config();
+    my $node = PVE::INotify::nodename();
+
+    my $node_mapping = get_node_mapping($cfg, $id, $node);
+    if (@{$node_mapping} > 1) {
+	die "More than than one directory mapping for node $node.\n";
+    }
+    return $node_mapping->[0];
+}
+
+sub get_node_mapping {
+    my ($cfg, $id, $nodename) = @_;
+
+    return undef if !defined($cfg->{ids}->{$id});
+
+    my $res = [];
+    my $mapping_list = $cfg->{ids}->{$id}->{map};
+    for my $map (@{$mapping_list}) {
+	my $entry = eval { parse_property_string($map_fmt, $map) };
+	warn $@ if $@;
+	if ($entry && $entry->{node} eq $nodename) {
+	    push $res->@*, $entry;
+	}
+    }
+    return $res;
+}
+
+PVE::Mapping::Dir->register();
+PVE::Mapping::Dir->init();
+
+1;
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
  2025-03-04 11:57 ` [pve-devel] [PATCH cluster v14 1/12] add mapping/dir.cfg for resource mapping Markus Frank
  2025-03-04 11:57 ` [pve-devel] [PATCH guest-common v14 2/12] add dir mapping section config Markus Frank
@ 2025-03-04 11:57 ` Markus Frank
  2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
                     ` (2 more replies)
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 4/12] control: add virtiofsd as runtime dependency for qemu-server Markus Frank
                   ` (12 subsequent siblings)
  15 siblings, 3 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v14:
* addressed formulation nits
* added paragraph about expose-acl & expose-xattr

 qm.adoc | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 97 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index 4bb8f2c..86b3877 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1202,6 +1202,100 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared filesystem designed for virtual environments. It allows to
+share a directory tree available on the host by mounting it within VMs. It does
+not use the network stack and aims to offer similar performance and semantics as
+the source filesystem.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background. This happens automatically in {pve} when
+starting a VM using a virtio-fs mount.
+
+Linux VMs with kernel >=5.4 support virtio-fs by default.
+
+There is a guide available on how to utilize virtio-fs in Windows VMs.
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+Known Limitations
+^^^^^^^^^^^^^^^^^
+
+* If virtiofsd crashes, its mount point will hang in the VM until the VM
+is completely stopped.
+* virtiofsd not responding may result in a hanging mount in the VM, similar to
+an unreachable NFS.
+* Memory hotplug does not work in combination with virtio-fs (also results in
+hanging access).
+* Memory related features such as live migration, snapshots, and hibernate are
+not available with virtio-fs devices.
+* Windows cannot understand ACLs in the context of virtio-fs. Therefore, do not
+expose ACLs for Windows VMs, otherwise the virtio-fs device will not be
+visible within the VM.
+
+Add Mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping for a shared directory, you can use the API directly with
+`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
+
+----
+pvesh create /cluster/mapping/dir --id dir1 \
+    --map node=node1,path=/path/to/share1 \
+    --map node=node2,path=/path/to/share2,announce-submounts=1 \
+----
+
+Set `announce-submounts` to `1` if multiple filesystems are mounted in a shared
+directory. This tells the guest which directories are mount points to prevent
+data loss/corruption. With `announce-submounts`, virtiofsd reports a different
+device number for each submount it encounters. Without it, duplicates may be
+created because inode IDs are only unique on a single filesystem.
+
+Add virtio-fs to a VM
+^^^^^^^^^^^^^^^^^^^^^
+
+To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
+anything between 0 and 9) to the VM config and use a directory ID (dirid) that
+has been configured in the resource mapping. Additionally, you can set the
+`cache` option to either `always`, `never`, or `auto` (default: `auto`),
+depending on your requirements. How the different caching modes behave can be
+read at https://lwn.net/Articles/774495/ under the "Caching Modes" section. To
+enable writeback cache set `writeback` to `1`.
+
+Virtiofsd supports ACL and xattr passthrough (can be enabled with the
+`expose-acl` and `expose-xattr` options), allowing the guest to access ACLs and
+xattrs if the underlying host filesystem supports them, but they must also be
+compatible with the guest filesystem (for example most Linux filesystems support
+ACLs, while Windows filesystems do not).
+
+The `expose-acl` option automatically implies `expose-xattr`, that is, it makes
+no difference if you set `expose-xattr` to `0` if `expose-acl` is set to `1`.
+
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
+parameter to `1` (default: `0`). This will degrade performance, but is useful if
+applications do their own caching.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,cache=never,expose-xattr=1
+qm set <vmid> -virtiofs2 <dirid>,expose-acl=1,writeback=1
+----
+
+To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
+following command inside the guest:
+
+----
+mount -t virtiofs <mount tag> <mount point>
+----
+
+The dirid associated with the path on the current node is also used as the mount
+tag (name used to mount the device on the guest).
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
 [[qm_bootorder]]
 Device Boot Order
 ~~~~~~~~~~~~~~~~~
@@ -1885,8 +1979,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
 
 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
 
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
 
 Note that the options must include a map property with all identifying
 properties of that hardware, so that it's possible to verify the hardware did
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH qemu-server v14 4/12] control: add virtiofsd as runtime dependency for qemu-server
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (2 preceding siblings ...)
  2025-03-04 11:57 ` [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
@ 2025-03-04 11:57 ` Markus Frank
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 5/12] fix #1027: virtio-fs support Markus Frank
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
v14:
* nothing changed

 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 81f0fad6..eda357a5 100644
--- a/debian/control
+++ b/debian/control
@@ -55,6 +55,7 @@ Depends: dbus,
          socat,
          swtpm,
          swtpm-tools,
+         virtiofsd,
          ${misc:Depends},
          ${perl:Depends},
          ${shlibs:Depends},
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH qemu-server v14 5/12] fix #1027: virtio-fs support
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (3 preceding siblings ...)
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 4/12] control: add virtiofsd as runtime dependency for qemu-server Markus Frank
@ 2025-03-04 11:57 ` Markus Frank
  2025-04-02 13:13   ` Fabian Grünbichler
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 6/12] migration: check_local_resources for virtiofs Markus Frank
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:57 UTC (permalink / raw)
  To: pve-devel

add support for sharing directories with a guest vm.

virtio-fs needs virtiofsd to be started.
In order to start virtiofsd as a process (despite being a daemon it is
does not run in the background), a double-fork is used.

virtiofsd should close itself together with QEMU.

There are the parameters dirid and the optional parameters direct-io,
cache and writeback. Additionally the expose-xattr & expose-acl
parameter can be set to expose xattr & acl settings from the shared
filesystem to the guest system.

The dirid gets mapped to the path on the current node and is also used
as a mount tag (name used to mount the device on the guest).

example config:
```
virtiofs0: foo,direct-io=1,cache=always,expose-acl=1
virtiofs1: dirid=bar,cache=never,expose-xattr=1,writeback=1
```

For information on the optional parameters see the coherent doc patch
and the official gitlab README:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

Also add a permission check for virtiofs directory access.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v14:
* use max_virtiofs() in check_vm_create_dir_perm
* addressed style nits and improved formatting
* added missing imports/uses
* assert_virtiofs_config now only gets the ostype and the virtiofs cfg
* removed unnecessary checks after parse_property_string

 PVE/API2/Qemu.pm           |  41 ++++++-
 PVE/QemuServer.pm          |  29 ++++-
 PVE/QemuServer/Makefile    |   3 +-
 PVE/QemuServer/Memory.pm   |  22 ++--
 PVE/QemuServer/Virtiofs.pm | 211 +++++++++++++++++++++++++++++++++++++
 5 files changed, 295 insertions(+), 11 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 5ac61aa5..7cefffdf 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -39,6 +39,7 @@ use PVE::QemuServer::MetaInfo;
 use PVE::QemuServer::PCI;
 use PVE::QemuServer::QMPHelpers;
 use PVE::QemuServer::USB;
+use PVE::QemuServer::Virtiofs qw(max_virtiofs);
 use PVE::QemuMigrate;
 use PVE::RPCEnvironment;
 use PVE::AccessControl;
@@ -801,6 +802,33 @@ my sub check_vm_create_hostpci_perm {
     return 1;
 };
 
+my sub check_dir_perm {
+    my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
+
+    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
+    $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+
+    return 1;
+};
+
+my sub check_vm_create_dir_perm {
+    my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+	next if !$param->{$opt};
+	check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
+    }
+
+    return 1;
+};
+
 my $check_vm_modify_config_perm = sub {
     my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
 
@@ -811,7 +839,7 @@ my $check_vm_modify_config_perm = sub {
 	# else, as there the permission can be value dependent
 	next if PVE::QemuServer::is_valid_drivename($opt);
 	next if $opt eq 'cdrom';
-	next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
+	next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
 	next if $opt eq 'tags';
 
 
@@ -1114,6 +1142,7 @@ __PACKAGE__->register_method({
 	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
+	    check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
 
 	    PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
 	    &$check_cpu_model_access($rpcenv, $authuser, $param);
@@ -2005,6 +2034,10 @@ my $update_vm_api  = sub {
 		    check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
 		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
 		    PVE::QemuConfig->write_config($vmid, $conf);
+		} elsif ($opt =~ m/^virtiofs\d$/) {
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
+		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
+		    PVE::QemuConfig->write_config($vmid, $conf);
 		} elsif ($opt eq 'tags') {
 		    assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
 		    delete $conf->{$opt};
@@ -2095,6 +2128,12 @@ my $update_vm_api  = sub {
 		    }
 		    check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
 		    $conf->{pending}->{$opt} = $param->{$opt};
+		} elsif ($opt =~ m/^virtiofs\d$/) {
+		    if (my $oldvalue = $conf->{$opt}) {
+			check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
+		    }
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
+		    $conf->{pending}->{$opt} = $param->{$opt};
 		} elsif ($opt eq 'tags') {
 		    assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
 		    $conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b6fc1f17..748b0acf 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -33,6 +33,7 @@ use PVE::Exception qw(raise raise_param_exc);
 use PVE::Format qw(render_duration render_bytes);
 use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
 use PVE::HA::Config;
+use PVE::Mapping::Dir;
 use PVE::Mapping::PCI;
 use PVE::Mapping::USB;
 use PVE::INotify;
@@ -62,6 +63,7 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
 use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
 use PVE::QemuServer::USB;
+use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
 
 my $have_sdn;
 eval {
@@ -948,6 +950,10 @@ my $netdesc = {
 
 PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
 
+for (my $i = 0; $i < max_virtiofs(); $i++)  {
+    $confdesc->{"virtiofs$i"} = get_standard_option('pve-qm-virtiofs');
+}
+
 my $ipconfig_fmt = {
     ip => {
 	type => 'string',
@@ -3707,8 +3713,18 @@ sub config_to_command {
 	push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough);
     }
 
+    my $virtiofs_enabled = PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
+
     PVE::QemuServer::Memory::config(
-	$conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd);
+	$conf,
+	$vmid,
+	$sockets,
+	$cores,
+	$hotplug_features->{memory},
+	$cmd,
+	$machineFlags,
+	$virtiofs_enabled
+    );
 
     push @$cmd, '-S' if $conf->{freeze};
 
@@ -3998,6 +4014,8 @@ sub config_to_command {
 	push @$machineFlags, 'confidential-guest-support=sev0';
     }
 
+    PVE::QemuServer::Virtiofs::config($conf, $vmid, $devices);
+
     push @$cmd, @$devices;
     push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
     push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
@@ -5806,6 +5824,8 @@ sub vm_start_nolock {
 	PVE::Tools::run_fork sub {
 	    PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
 
+	    my $virtiofs_sockets = start_all_virtiofsd($conf, $vmid);
+
 	    my $tpmpid;
 	    if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
 		# start the TPM emulator so QEMU can connect on start
@@ -5813,6 +5833,8 @@ sub vm_start_nolock {
 	    }
 
 	    my $exitcode = run_command($cmd, %run_params);
+	    eval { PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets); };
+	    log_warn("closing virtiofs sockets failed - $@") if $@;
 	    if ($exitcode) {
 		if ($tpmpid) {
 		    warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
@@ -6482,7 +6504,10 @@ sub check_mapping_access {
 	    } else {
 		die "either 'host' or 'mapping' must be set.\n";
 	    }
-       }
+	} elsif ($opt =~ m/^virtiofs\d$/) {
+	    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	    $rpcenv->check_full($user, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+	}
    }
 };
 
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index 18fd13ea..3588e0e1 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -11,7 +11,8 @@ SOURCES=PCI.pm		\
 	CPUConfig.pm	\
 	CGroup.pm	\
 	Drive.pm	\
-	QMPHelpers.pm
+	QMPHelpers.pm	\
+	Virtiofs.pm
 
 .PHONY: install
 install: ${SOURCES}
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index e5024cd2..dcdb4f76 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -336,7 +336,7 @@ sub qemu_memdevices_list {
 }
 
 sub config {
-    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
+    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags, $virtiofs_enabled) = @_;
 
     my $memory = get_current_memory($conf->{memory});
     my $static_memory = 0;
@@ -379,7 +379,8 @@ sub config {
 	    my $numa_memory = $numa->{memory};
 	    $numa_totalmemory += $numa_memory;
 
-	    my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
+	    my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+	    my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
 
 	    # cpus
 	    my $cpulists = $numa->{cpus};
@@ -404,7 +405,7 @@ sub config {
 	    }
 
 	    push @$cmd, '-object', $mem_object;
-	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	}
 
 	die "total memory for NUMA nodes must be equal to vm static memory\n"
@@ -418,15 +419,20 @@ sub config {
 		die "host NUMA node$i doesn't exist\n"
 		    if !host_numanode_exists($i) && $conf->{hugepages};
 
-		my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
-		push @$cmd, '-object', $mem_object;
-
 		my $cpus = ($cores * $i);
 		$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
 
-		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+		my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+		my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
+		push @$cmd, '-object', $mem_object;
+		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	    }
 	}
+    } elsif ($virtiofs_enabled) {
+	# kvm: '-machine memory-backend' and '-numa memdev' properties are mutually exclusive
+	push @$cmd, '-object', 'memory-backend-memfd,id=virtiofs-mem'
+	    .",size=$conf->{memory}M,share=on";
+	push @$machine_flags, 'memory-backend=virtiofs-mem';
     }
 
     if ($hotplug) {
@@ -453,6 +459,8 @@ sub print_mem_object {
 	my $path = hugepages_mount_path($hugepages_size);
 
 	return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
+    } elsif ($id =~ m/^virtiofs-mem/) {
+	return "memory-backend-memfd,id=$id,size=${size}M,share=on";
     } else {
 	return "memory-backend-ram,id=$id,size=${size}M";
     }
diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
new file mode 100644
index 00000000..94ef7b47
--- /dev/null
+++ b/PVE/QemuServer/Virtiofs.pm
@@ -0,0 +1,211 @@
+package PVE::QemuServer::Virtiofs;
+
+use strict;
+use warnings;
+
+use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);
+use IO::Socket::UNIX;
+use POSIX;
+use Socket qw(SOCK_STREAM);
+
+use PVE::JSONSchema qw(parse_property_string);
+use PVE::Mapping::Dir;
+use PVE::QemuServer::Helpers;
+use PVE::RESTEnvironment qw(log_warn);
+
+use base qw(Exporter);
+
+our @EXPORT_OK = qw(
+max_virtiofs
+start_all_virtiofsd
+);
+
+my $MAX_VIRTIOFS = 10;
+my $socket_path_root = "/run/qemu-server/virtiofsd";
+
+my $virtiofs_fmt = {
+    'dirid' => {
+	type => 'string',
+	default_key => 1,
+	description => "Mapping identifier of the directory mapping to be shared with the guest."
+	    ." Also used as a mount tag inside the VM.",
+	format_description => 'mapping-id',
+	format => 'pve-configid',
+    },
+    'cache' => {
+	type => 'string',
+	description => "The caching policy the file system should use (auto, always, never).",
+	enum => [qw(auto always never)],
+	default => "auto",
+	optional => 1,
+    },
+    'direct-io' => {
+	type => 'boolean',
+	description => "Honor the O_DIRECT flag passed down by guest applications.",
+	default => 0,
+	optional => 1,
+    },
+    writeback => {
+	type => 'boolean',
+	description => "Enable writeback cache. If enabled, writes may be cached in the guest until"
+	    ." the file is closed or an fsync is performed.",
+	default => 0,
+	optional => 1,
+    },
+    'expose-xattr' => {
+	type => 'boolean',
+	description => "Enable support for extended attributes for this mount.",
+	default => 0,
+	optional => 1,
+    },
+    'expose-acl' => {
+	type => 'boolean',
+	description => "Enable support for POSIX ACLs (enabled ACL implies xattr) for this mount.",
+	default => 0,
+	optional => 1,
+    },
+};
+PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
+
+my $virtiofsdesc = {
+    optional => 1,
+    type => 'string', format => $virtiofs_fmt,
+    description => "Configuration for sharing a directory between host and guest using Virtio-fs.",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
+
+sub max_virtiofs {
+    return $MAX_VIRTIOFS;
+}
+
+sub assert_virtiofs_config {
+    my ($ostype, $virtiofs) = @_;
+
+    my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+    my $acl = $virtiofs->{'expose-acl'};
+    if ($acl && PVE::QemuServer::Helpers::windows_version($ostype)) {
+	log_warn(
+	    "Please disable ACLs for virtiofs on Windows VMs, otherwise"
+	    ." the virtiofs shared directory cannot be mounted."
+	);
+    }
+
+    eval { PVE::Mapping::Dir::assert_valid($dir_cfg) };
+    die "directory mapping invalid: $@\n" if $@;
+}
+
+sub config {
+    my ($conf, $vmid, $devices) = @_;
+
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+
+	next if !$conf->{$opt};
+	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+
+	assert_virtiofs_config($conf->{ostype}, $virtiofs);
+
+	push @$devices, '-chardev', "socket,id=virtiofs$i,path=$socket_path_root/vm$vmid-fs$i";
+
+	# queue-size is set 1024 because of bug with Windows guests:
+	# https://bugzilla.redhat.com/show_bug.cgi?id=1873088
+	# 1024 is also always used in the virtiofs documentations:
+	# https://gitlab.com/virtio-fs/virtiofsd#examples
+	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+	    .",chardev=virtiofs$i,tag=$virtiofs->{dirid}";
+    }
+}
+
+sub virtiofs_enabled {
+    my ($conf) = @_;
+
+    my $virtiofs_enabled = 0;
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+	next if !$conf->{$opt};
+	parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	$virtiofs_enabled = 1;
+    }
+    return $virtiofs_enabled;
+}
+
+sub start_all_virtiofsd {
+    my ($conf, $vmid) = @_;
+    my $virtiofs_sockets = [];
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+
+	next if !$conf->{$opt};
+	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+
+	my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
+	push @$virtiofs_sockets, $virtiofs_socket;
+    }
+    return $virtiofs_sockets;
+}
+
+sub start_virtiofsd {
+    my ($vmid, $fsid, $virtiofs) = @_;
+
+    mkdir $socket_path_root;
+    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
+    unlink($socket_path);
+    my $socket = IO::Socket::UNIX->new(
+	Type => SOCK_STREAM,
+	Local => $socket_path,
+	Listen => 1,
+    ) or die "cannot create socket - $!\n";
+
+    my $flags = fcntl($socket, F_GETFD, 0)
+	or die "failed to get file descriptor flags: $!\n";
+    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
+	or die "failed to remove FD_CLOEXEC from file descriptor\n";
+
+    my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+    my $virtiofsd_bin = '/usr/libexec/virtiofsd';
+    my $fd = $socket->fileno();
+    my $path = $dir_cfg->{path};
+
+    my $could_not_fork_err = "could not fork to start virtiofsd\n";
+    my $pid = fork();
+    if ($pid == 0) {
+	POSIX::setsid();
+	$0 = "task pve-vm$vmid-virtiofs$fsid";
+	my $pid2 = fork();
+	if ($pid2 == 0) {
+	    my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
+	    push @$cmd, '--xattr' if $virtiofs->{'expose-xattr'};
+	    push @$cmd, '--posix-acl' if $virtiofs->{'expose-acl'};
+	    push @$cmd, '--announce-submounts' if $dir_cfg->{'announce-submounts'};
+	    push @$cmd, '--allow-direct-io' if $virtiofs->{'direct-io'};
+	    push @$cmd, '--cache='.$virtiofs->{cache} if $virtiofs->{cache};
+	    push @$cmd, '--writeback' if $virtiofs->{'writeback'};
+	    push @$cmd, '--syslog';
+	    exec(@$cmd);
+	} elsif (!defined($pid2)) {
+	    die $could_not_fork_err;
+	} else {
+	    POSIX::_exit(0);
+	}
+    } elsif (!defined($pid)) {
+	die $could_not_fork_err;
+    } else {
+	waitpid($pid, 0);
+    }
+
+    # return socket to keep it alive,
+    # so that QEMU will wait for virtiofsd to start
+    return $socket;
+}
+
+sub close_sockets {
+    my @sockets = @_;
+    for my $socket (@sockets) {
+	shutdown($socket, 2);
+	close($socket);
+    }
+}
+
+1;
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH qemu-server v14 6/12] migration: check_local_resources for virtiofs
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (4 preceding siblings ...)
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 5/12] fix #1027: virtio-fs support Markus Frank
@ 2025-03-04 11:57 ` Markus Frank
  2025-04-02 13:13   ` Fabian Grünbichler
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:57 UTC (permalink / raw)
  To: pve-devel

add dir mapping checks to check_local_resources

Since the VM needs to be powered off for migration, migration should
work with a directory on shared storage with all caching settings.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
v14:
* nothing changed

 PVE/QemuServer.pm            | 12 +++++++++++-
 test/MigrationTest/Shared.pm |  7 +++++++
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 748b0acf..648b28cd 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2486,6 +2486,7 @@ sub check_local_resources {
     my $nodelist = PVE::Cluster::get_nodelist();
     my $pci_map = PVE::Mapping::PCI::config();
     my $usb_map = PVE::Mapping::USB::config();
+    my $dir_map = PVE::Mapping::Dir::config();
 
     my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
 
@@ -2497,6 +2498,8 @@ sub check_local_resources {
 		$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
 	    } elsif ($type eq 'usb') {
 		$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+	    } elsif ($type eq 'dir') {
+		$entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
 	    }
 	    if (!scalar($entry->@*)) {
 		push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2525,9 +2528,16 @@ sub check_local_resources {
 		push @$mapped_res, $k;
 	    }
 	}
+	if ($k =~ m/^virtiofs/) {
+	    my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+	    if ($entry->{dirid}) {
+		$add_missing_mapping->('dir', $k, $entry->{dirid});
+		push @$mapped_res, $k;
+	    }
+	}
 	# sockets are safe: they will recreated be on the target side post-migrate
 	next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
-	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
     }
 
     die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d1..c5d07222 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
     },
 );
 
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+    config => sub {
+	return {};
+    },
+);
+
 our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
 $ha_config_module->mock(
     vm_is_ha_managed => sub {
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH qemu-server v14 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (5 preceding siblings ...)
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 6/12] migration: check_local_resources for virtiofs Markus Frank
@ 2025-03-04 11:57 ` Markus Frank
  2025-03-04 11:57 ` [pve-devel] [PATCH manager v14 08/12] api: add resource map api endpoints for directories Markus Frank
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
v14:
* nothing changed

 PVE/QemuServer.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 648b28cd..5caef7ba 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2461,8 +2461,9 @@ sub check_non_migratable_resources {
     my ($conf, $state, $noerr) = @_;
 
     my @blockers = ();
-    if ($state && $conf->{"amd-sev"}) {
-	push @blockers, "amd-sev";
+    if ($state) {
+	push @blockers, "amd-sev" if $conf->{"amd-sev"};
+	push @blockers, "virtiofs" if PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
     }
 
     if (scalar(@blockers) && !$noerr) {
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH manager v14 08/12] api: add resource map api endpoints for directories
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (6 preceding siblings ...)
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
@ 2025-03-04 11:57 ` Markus Frank
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 09/12] ui: add edit window for dir mappings Markus Frank
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:57 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
v14:
* added missing imports/uses
* addressed style nits and improved formatting

 PVE/API2/Cluster/Mapping.pm       |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 308 ++++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile |   1 +
 3 files changed, 316 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm

diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..9f0dcd2b 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -3,11 +3,17 @@ package PVE::API2::Cluster::Mapping;
 use strict;
 use warnings;
 
+use PVE::API2::Cluster::Mapping::Dir;
 use PVE::API2::Cluster::Mapping::PCI;
 use PVE::API2::Cluster::Mapping::USB;
 
 use base qw(PVE::RESTHandler);
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Cluster::Mapping::Dir",
+    path => 'dir',
+});
+
 __PACKAGE__->register_method ({
     subclass => "PVE::API2::Cluster::Mapping::PCI",
     path => 'pci',
@@ -41,6 +47,7 @@ __PACKAGE__->register_method ({
 	my ($param) = @_;
 
 	my $result = [
+	    { name => 'dir' },
 	    { name => 'pci' },
 	    { name => 'usb' },
 	];
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index 00000000..f905cef3
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,308 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Mapping::Dir ();
+use PVE::RPCEnvironment;
+use PVE::SectionConfig;
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+    name => 'index',
+    path => '',
+    method => 'GET',
+    # only proxy if we give the 'check-node' parameter
+    proxyto_callback => sub {
+	my ($rpcenv, $proxyto, $param) = @_;
+	return $param->{'check-node'} // 'localhost';
+    },
+    description => "List directory mapping",
+    permissions => {
+	description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or"
+	    ." 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
+	user => 'all',
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    'check-node' => get_standard_option('pve-node', {
+		description => "If given, checks the configurations on the given node for"
+		    ." correctness, and adds relevant diagnostics for the directory to the response.",
+		optional => 1,
+	    }),
+	},
+    },
+    returns => {
+	type => 'array',
+	items => {
+	    type => "object",
+	    properties => {
+		id => {
+		    type => 'string',
+		    description => "The logical ID of the mapping."
+		},
+		map => {
+		    type => 'array',
+		    description => "The entries of the mapping.",
+		    items => {
+			type => 'string',
+			description => "A mapping for a node.",
+		    },
+		},
+		description => {
+		    type => 'string',
+		    description => "A description of the logical mapping.",
+		},
+		checks => {
+		    type => "array",
+		    optional => 1,
+		    description => "A list of checks, only present if 'check-node' is set.",
+		    items => {
+			type => 'object',
+			properties => {
+			    severity => {
+				type => "string",
+				enum => ['warning', 'error'],
+				description => "The severity of the error",
+			    },
+			    message => {
+				type => "string",
+				description => "The message of the error",
+			    },
+			},
+		    }
+		},
+	    },
+	},
+	links => [ { rel => 'child', href => "{id}" } ],
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $check_node = $param->{'check-node'};
+	my $local_node = PVE::INotify::nodename();
+
+	die "wrong node to check - $check_node != $local_node\n"
+	    if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
+
+	my $cfg = PVE::Mapping::Dir::config();
+
+	my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
+
+	my $res = [];
+	for my $id (keys $cfg->{ids}->%*) {
+	    next if !$rpcenv->check_any($authuser, "/mapping/dir/$id", $can_see_mapping_privs, 1);
+	    next if !$cfg->{ids}->{$id};
+
+	    my $entry = dclone($cfg->{ids}->{$id});
+	    $entry->{id} = $id;
+	    $entry->{digest} = $cfg->{digest};
+
+	    if (defined($check_node)) {
+		$entry->{checks} = [];
+		if (my $mappings = PVE::Mapping::Dir::get_node_mapping($cfg, $id, $check_node)) {
+		    if (!scalar($mappings->@*)) {
+			push $entry->{checks}->@*, {
+			    severity => 'warning',
+			    message => "No mapping for node $check_node.",
+			};
+		    }
+		    for my $mapping ($mappings->@*) {
+			eval { PVE::Mapping::Dir::assert_valid($mapping) };
+			if (my $err = $@) {
+			    push $entry->{checks}->@*, {
+				severity => 'error',
+				message => "Invalid configuration: $err",
+			    };
+			}
+		    }
+		}
+	    }
+
+	    push @$res, $entry;
+	}
+
+	return $res;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'get',
+    protected => 1,
+    path => '{id}',
+    method => 'GET',
+    description => "Get directory mapping.",
+    permissions => {
+	check =>['or',
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Use']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Audit']],
+	],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'object' },
+    code => sub {
+	my ($param) = @_;
+
+	my $cfg = PVE::Mapping::Dir::config();
+	my $id = $param->{id};
+
+	my $entry = $cfg->{ids}->{$id};
+	die "mapping '$param->{id}' not found\n" if !defined($entry);
+
+	my $data = dclone($entry);
+
+	$data->{digest} = $cfg->{digest};
+
+	return $data;
+    }});
+
+__PACKAGE__->register_method ({
+    name => 'create',
+    protected => 1,
+    path => '',
+    method => 'POST',
+    description => "Create a new directory mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::Dir->createSchema(1),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = extract_param($param, 'id');
+
+	my $plugin = PVE::Mapping::Dir->lookup('dir');
+	my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	my $map_list = $opts->{map};
+	PVE::Mapping::Dir::assert_valid_map_list($map_list);
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    die "dir ID '$id' already defined\n" if defined($cfg->{ids}->{$id});
+
+	    $cfg->{ids}->{$id} = $opts;
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "create directory mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'update',
+    protected => 1,
+    path => '{id}',
+    method => 'PUT',
+    description => "Update a directory mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::Dir->updateSchema(),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $digest = extract_param($param, 'digest');
+	my $delete = extract_param($param, 'delete');
+	my $id = extract_param($param, 'id');
+
+	if ($delete) {
+	    $delete = [ PVE::Tools::split_list($delete) ];
+	}
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    PVE::Tools::assert_if_modified($cfg->{digest}, $digest) if defined($digest);
+
+	    die "dir ID '$id' does not exist\n" if !defined($cfg->{ids}->{$id});
+
+	    my $plugin = PVE::Mapping::Dir->lookup('dir');
+	    my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	    my $map_list = $opts->{map};
+	    PVE::Mapping::Dir::assert_valid_map_list($map_list);
+
+	    my $data = $cfg->{ids}->{$id};
+
+	    my $options = $plugin->private()->{options}->{dir};
+	    PVE::SectionConfig::delete_from_config($data, $options, $opts, $delete);
+
+	    $data->{$_} = $opts->{$_} for keys $opts->%*;
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "update directory mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'delete',
+    protected => 1,
+    path => '{id}',
+    method => 'DELETE',
+    description => "Remove directory mapping.",
+    permissions => {
+	check => [ 'perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'null' },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = $param->{id};
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    if ($cfg->{ids}->{$id}) {
+		delete $cfg->{ids}->{$id};
+	    }
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "delete dir mapping failed");
+
+	return;
+    }
+});
+
+1;
diff --git a/PVE/API2/Cluster/Mapping/Makefile b/PVE/API2/Cluster/Mapping/Makefile
index e7345ab4..5dbb3f5c 100644
--- a/PVE/API2/Cluster/Mapping/Makefile
+++ b/PVE/API2/Cluster/Mapping/Makefile
@@ -3,6 +3,7 @@ include ../../../../defines.mk
 # for node independent, cluster-wide applicable, API endpoints
 # ensure we do not conflict with files shipped by pve-cluster!!
 PERLSOURCE= 	\
+	Dir.pm  \
 	PCI.pm	\
 	USB.pm
 
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH manager v14 09/12] ui: add edit window for dir mappings
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (7 preceding siblings ...)
  2025-03-04 11:57 ` [pve-devel] [PATCH manager v14 08/12] api: add resource map api endpoints for directories Markus Frank
@ 2025-03-04 11:58 ` Markus Frank
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 10/12] ui: add resource mapping view for directories Markus Frank
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:58 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v14:
* disallow commas and equal signs in path until the path can be quoted
in property strings

 www/manager6/Makefile             |   1 +
 www/manager6/window/DirMapEdit.js | 214 ++++++++++++++++++++++++++++++
 2 files changed, 215 insertions(+)
 create mode 100644 www/manager6/window/DirMapEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index c94a5cdf..4b8677e3 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -138,6 +138,7 @@ JSSRC= 							\
 	window/TreeSettingsEdit.js			\
 	window/PCIMapEdit.js				\
 	window/USBMapEdit.js				\
+	window/DirMapEdit.js                            \
 	window/GuestImport.js				\
 	ha/Fencing.js					\
 	ha/GroupEdit.js					\
diff --git a/www/manager6/window/DirMapEdit.js b/www/manager6/window/DirMapEdit.js
new file mode 100644
index 00000000..2856f4c4
--- /dev/null
+++ b/www/manager6/window/DirMapEdit.js
@@ -0,0 +1,214 @@
+Ext.define('PVE.window.DirMapEditWindow', {
+    extend: 'Proxmox.window.Edit',
+
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    cbindData: function(initialConfig) {
+	let me = this;
+	me.isCreate = !me.name;
+	me.method = me.isCreate ? 'POST' : 'PUT';
+	me.hideMapping = !!me.entryOnly;
+	me.hideComment = me.name && !me.entryOnly;
+	me.hideNodeSelector = me.nodename || me.entryOnly;
+	me.hideNode = !me.nodename || !me.hideNodeSelector;
+	return {
+	    name: me.name,
+	    nodename: me.nodename,
+	};
+    },
+
+    submitUrl: function(_url, data) {
+	let me = this;
+	let name = me.isCreate ? '' : me.name;
+	return `/cluster/mapping/dir/${name}`;
+    },
+
+    title: gettext('Add Dir mapping'),
+
+    onlineHelp: 'resource_mapping',
+
+    method: 'POST',
+
+    controller: {
+	xclass: 'Ext.app.ViewController',
+
+	onGetValues: function(values) {
+	    let me = this;
+	    let view = me.getView();
+	    values.node ??= view.nodename;
+
+	    let name = values.name;
+	    let description = values.description;
+	    let deletes = values.delete;
+
+	    delete values.description;
+	    delete values.name;
+	    delete values.delete;
+
+	    if (PVE.Parser.parseBoolean(values['announce-submounts'])) {
+		values['announce-submounts'] = 1;
+	    }
+
+	    let map = [];
+	    if (me.originalMap) {
+		map = PVE.Parser.filterPropertyStringList(me.originalMap, (e) => e.node !== values.node);
+	    }
+	    if (values.path) {
+		// TODO: Remove this when property string supports quotation of properties
+		if (!/^\/[^;,=()]+/.test(values.path)) {
+		    let errMsg = 'these symbols are currently not allowed in path: ;,=()';
+		    Ext.Msg.alert(gettext('Error'), errMsg);
+		    throw errMsg;
+		}
+		map.push(PVE.Parser.printPropertyString(values));
+	    }
+	    values = { map };
+
+	    if (description) {
+		values.description = description;
+	    }
+	    if (deletes && !view.isCreate) {
+		values.delete = deletes;
+	    }
+	    if (view.isCreate) {
+		values.id = name;
+	    }
+
+	    return values;
+	},
+
+	onSetValues: function(values) {
+	    let me = this;
+	    let view = me.getView();
+	    me.originalMap = [...values.map];
+	    let configuredNodes = [];
+	    PVE.Parser.filterPropertyStringList(values.map, (e) => {
+		configuredNodes.push(e.node);
+		e['announce-submounts'] = PVE.Parser.parseBoolean(e['announce-submounts']) ? 1 : 0;
+		if (e.node === view.nodename) {
+		    values = e;
+		}
+		return false;
+	    });
+
+	    me.lookup('nodeselector').disallowedNodes = configuredNodes;
+
+	    return values;
+	},
+
+	init: function(view) {
+	    let me = this;
+
+	    if (!view.nodename) {
+		//throw "no nodename given";
+	    }
+	},
+    },
+
+    items: [
+	{
+	    xtype: 'inputpanel',
+	    onGetValues: function(values) {
+		return this.up('window').getController().onGetValues(values);
+	    },
+
+	    onSetValues: function(values) {
+		return this.up('window').getController().onSetValues(values);
+	    },
+
+	    columnT: [
+		{
+		    xtype: 'displayfield',
+		    reference: 'directory-hint',
+		    columnWidth: 1,
+		    value: 'Make sure the directory exists.',
+		    cbind: {
+			disabled: '{hideMapping}',
+			hidden: '{hideMapping}',
+		    },
+		    userCls: 'pmx-hint',
+		},
+	    ],
+
+	    column1: [
+		{
+		    xtype: 'pmxDisplayEditField',
+		    fieldLabel: gettext('Name'),
+		    cbind: {
+			editable: '{!name}',
+			value: '{name}',
+			submitValue: '{isCreate}',
+		    },
+		    name: 'name',
+		    allowBlank: false,
+		},
+		{
+		    xtype: 'pveNodeSelector',
+		    reference: 'nodeselector',
+		    fieldLabel: gettext('Node'),
+		    name: 'node',
+		    cbind: {
+			disabled: '{hideNodeSelector}',
+			hidden: '{hideNodeSelector}',
+		    },
+		    allowBlank: false,
+		},
+	    ],
+
+	    column2: [
+		{
+		    xtype: 'fieldcontainer',
+		    defaultType: 'radiofield',
+		    layout: 'fit',
+		    cbind: {
+			disabled: '{hideMapping}',
+			hidden: '{hideMapping}',
+		    },
+		    items: [
+			{
+			    xtype: 'textfield',
+			    name: 'path',
+			    reference: 'path',
+			    value: '',
+			    emptyText: gettext('/some/path'),
+			    cbind: {
+				nodename: '{nodename}',
+				disabled: '{hideMapping}',
+			    },
+			    allowBlank: false,
+			    fieldLabel: gettext('Path'),
+			},
+			{
+			    xtype: 'proxmoxcheckbox',
+			    name: 'announce-submounts',
+			    fieldLabel: gettext('announce-submounts'),
+			    value: '1',
+			    deleteEmpty: false,
+			},
+		    ],
+		},
+	    ],
+
+	    columnB: [
+		{
+		    xtype: 'fieldcontainer',
+		    defaultType: 'radiofield',
+		    layout: 'fit',
+		    cbind: {
+			disabled: '{hideComment}',
+			hidden: '{hideComment}',
+		    },
+		    items: [
+			{
+			    xtype: 'proxmoxtextfield',
+			    fieldLabel: gettext('Comment'),
+			    submitValue: true,
+			    name: 'description',
+			    deleteEmpty: true,
+			},
+		    ],
+		},
+	    ],
+	},
+    ],
+});
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH manager v14 10/12] ui: add resource mapping view for directories
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (8 preceding siblings ...)
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 09/12] ui: add edit window for dir mappings Markus Frank
@ 2025-03-04 11:58 ` Markus Frank
  2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
  2025-04-02 13:42   ` Daniel Kral
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 11/12] ui: form: add selector for directory mappings Markus Frank
                   ` (5 subsequent siblings)
  15 siblings, 2 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:58 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v14:
* return HTML encoded comment

 www/manager6/Makefile         |  1 +
 www/manager6/dc/Config.js     | 10 +++++++++
 www/manager6/dc/DirMapView.js | 42 +++++++++++++++++++++++++++++++++++
 3 files changed, 53 insertions(+)
 create mode 100644 www/manager6/dc/DirMapView.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 4b8677e3..57c4d377 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -189,6 +189,7 @@ JSSRC= 							\
 	dc/RealmSyncJob.js				\
 	dc/PCIMapView.js				\
 	dc/USBMapView.js				\
+	dc/DirMapView.js				\
 	lxc/CmdMenu.js					\
 	lxc/Config.js					\
 	lxc/CreateWizard.js				\
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index 74728c83..2958fb88 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -329,6 +329,16 @@ Ext.define('PVE.dc.Config', {
 			    title: gettext('USB Devices'),
 			    flex: 1,
 			},
+			{
+			    xtype: 'splitter',
+			    collapsible: false,
+			    performCollapse: false,
+			},
+			{
+			    xtype: 'pveDcDirMapView',
+			    title: gettext('Directories'),
+			    flex: 1,
+			},
 		    ],
 		},
 	    );
diff --git a/www/manager6/dc/DirMapView.js b/www/manager6/dc/DirMapView.js
new file mode 100644
index 00000000..ff0ce633
--- /dev/null
+++ b/www/manager6/dc/DirMapView.js
@@ -0,0 +1,42 @@
+Ext.define('pve-resource-dir-tree', {
+    extend: 'Ext.data.Model',
+    idProperty: 'internalId',
+    fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
+});
+
+Ext.define('PVE.dc.DirMapView', {
+    extend: 'PVE.tree.ResourceMapTree',
+    alias: 'widget.pveDcDirMapView',
+
+    editWindowClass: 'PVE.window.DirMapEditWindow',
+    baseUrl: '/cluster/mapping/dir',
+    mapIconCls: 'fa fa-folder',
+    entryIdProperty: 'path',
+
+    store: {
+	sorters: 'text',
+	model: 'pve-resource-dir-tree',
+	data: {},
+    },
+
+    columns: [
+	{
+	    xtype: 'treecolumn',
+	    text: gettext('ID/Node'),
+	    dataIndex: 'text',
+	    width: 200,
+	},
+	{
+	    text: gettext('announce-submounts'),
+	    dataIndex: 'announce-submounts',
+	},
+	{
+	    header: gettext('Comment'),
+	    dataIndex: 'description',
+	    renderer: function(value, _meta, record) {
+		return Ext.String.htmlEncode(value ?? record.data.comment);
+	    },
+	    flex: 1,
+	},
+    ],
+});
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH manager v14 11/12] ui: form: add selector for directory mappings
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (9 preceding siblings ...)
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 10/12] ui: add resource mapping view for directories Markus Frank
@ 2025-03-04 11:58 ` Markus Frank
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:58 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v14:
* nothing changed

 www/manager6/Makefile               |  1 +
 www/manager6/form/DirMapSelector.js | 63 +++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)
 create mode 100644 www/manager6/form/DirMapSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 57c4d377..fabbdd24 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -35,6 +35,7 @@ JSSRC= 							\
 	form/ContentTypeSelector.js			\
 	form/ControllerSelector.js			\
 	form/DayOfWeekSelector.js			\
+	form/DirMapSelector.js                          \
 	form/DiskFormatSelector.js			\
 	form/DiskStorageSelector.js			\
 	form/FileSelector.js				\
diff --git a/www/manager6/form/DirMapSelector.js b/www/manager6/form/DirMapSelector.js
new file mode 100644
index 00000000..473a2ffe
--- /dev/null
+++ b/www/manager6/form/DirMapSelector.js
@@ -0,0 +1,63 @@
+Ext.define('PVE.form.DirMapSelector', {
+    extend: 'Proxmox.form.ComboGrid',
+    alias: 'widget.pveDirMapSelector',
+
+    store: {
+	fields: ['name', 'path'],
+	filterOnLoad: true,
+	sorters: [
+	    {
+		property: 'id',
+		direction: 'ASC',
+	    },
+	],
+    },
+
+    allowBlank: false,
+    autoSelect: false,
+    displayField: 'id',
+    valueField: 'id',
+
+    listConfig: {
+	columns: [
+	    {
+		header: gettext('Directory ID'),
+		dataIndex: 'id',
+		flex: 1,
+	    },
+	    {
+		header: gettext('Comment'),
+		dataIndex: 'description',
+		flex: 1,
+	    },
+	],
+    },
+
+    setNodename: function(nodename) {
+	var me = this;
+
+	if (!nodename || me.nodename === nodename) {
+	    return;
+	}
+
+	me.nodename = nodename;
+
+	me.store.setProxy({
+	    type: 'proxmox',
+	    url: `/api2/json/cluster/mapping/dir?check-node=${nodename}`,
+	});
+
+	me.store.load();
+    },
+
+    initComponent: function() {
+	var me = this;
+
+	var nodename = me.nodename;
+	me.nodename = undefined;
+
+        me.callParent();
+
+	me.setNodename(nodename);
+    },
+});
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [pve-devel] [PATCH manager v14 12/12] ui: add options to add virtio-fs to qemu config
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (10 preceding siblings ...)
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 11/12] ui: form: add selector for directory mappings Markus Frank
@ 2025-03-04 11:58 ` Markus Frank
  2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
  2025-04-02 13:42   ` Daniel Kral
  2025-03-18  9:14 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (3 subsequent siblings)
  15 siblings, 2 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-04 11:58 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v14:
* disable expose-xattr when expose-acl is set
* added missing writeback cache option

 www/manager6/Makefile             |   1 +
 www/manager6/Utils.js             |   1 +
 www/manager6/qemu/HardwareView.js |  19 +++++
 www/manager6/qemu/VirtiofsEdit.js | 137 ++++++++++++++++++++++++++++++
 4 files changed, 158 insertions(+)
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index fabbdd24..fdf0e816 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -271,6 +271,7 @@ JSSRC= 							\
 	qemu/Smbios1Edit.js				\
 	qemu/SystemEdit.js				\
 	qemu/USBEdit.js					\
+	qemu/VirtiofsEdit.js				\
 	sdn/Browser.js					\
 	sdn/ControllerView.js				\
 	sdn/Status.js					\
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 90011a8f..0f242ae1 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1645,6 +1645,7 @@ Ext.define('PVE.Utils', {
 	serial: 4,
 	rng: 1,
 	tpmstate: 1,
+	virtiofs: 10,
     },
 
     // we can have usb6 and up only for specific machine/ostypes
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index c6d193fc..34aeb51e 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -319,6 +319,16 @@ Ext.define('PVE.qemu.HardwareView', {
 	    never_delete: !caps.nodes['Sys.Console'],
 	    header: gettext("VirtIO RNG"),
 	};
+	for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+	    let confid = "virtiofs" + i.toString();
+	    rows[confid] = {
+		group: 50,
+		order: i,
+		iconCls: 'folder',
+		editor: 'PVE.qemu.VirtiofsEdit',
+		header: gettext('Virtiofs') + ' (' + confid +')',
+	    };
+	}
 
 	var sorterFn = function(rec1, rec2) {
 	    var v1 = rec1.data.key;
@@ -595,6 +605,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
 	    const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
 	    const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
+	    const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
 
 	    me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
 	    me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
@@ -604,6 +615,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
 	    efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
 	    me.down('#addTpmState').setDisabled(noVMConfigDiskPerm || isAtLimit('tpmstate'));
+	    me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
 	    me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
 
 	    if (!rec) {
@@ -748,6 +760,13 @@ Ext.define('PVE.qemu.HardwareView', {
 				disabled: !caps.nodes['Sys.Console'],
 				handler: editorFactory('RNGEdit'),
 			    },
+			    {
+				text: gettext("Virtiofs"),
+				itemId: 'addVirtiofs',
+				iconCls: 'fa fa-folder',
+				disabled: !caps.nodes['Sys.Console'],
+				handler: editorFactory('VirtiofsEdit'),
+			    },
 			],
 		    }),
 		},
diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
new file mode 100644
index 00000000..0bbb5213
--- /dev/null
+++ b/www/manager6/qemu/VirtiofsEdit.js
@@ -0,0 +1,137 @@
+Ext.define('PVE.qemu.VirtiofsInputPanel', {
+    extend: 'Proxmox.panel.InputPanel',
+    xtype: 'pveVirtiofsInputPanel',
+    onlineHelp: 'qm_virtiofs',
+
+    insideWizard: false,
+
+    onGetValues: function(values) {
+	var me = this;
+	var confid = me.confid;
+	var params = {};
+	delete values.delete;
+	params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
+	return params;
+    },
+
+    setSharedfiles: function(confid, data) {
+	var me = this;
+	me.confid = confid;
+	me.virtiofs = data;
+	me.setValues(me.virtiofs);
+    },
+    initComponent: function() {
+	let me = this;
+
+	me.nodename = me.pveSelNode.data.node;
+	if (!me.nodename) {
+	    throw "no node name specified";
+	}
+	me.items = [
+	    {
+		xtype: 'pveDirMapSelector',
+		emptyText: 'dirid',
+		nodename: me.nodename,
+		fieldLabel: gettext('Directory ID'),
+		name: 'dirid',
+		allowBlank: false,
+	    },
+	    {
+		xtype: 'proxmoxKVComboBox',
+		fieldLabel: gettext('Cache'),
+		name: 'cache',
+		value: '__default__',
+		deleteDefaultValue: false,
+		comboItems: [
+		    ['__default__', Proxmox.Utils.defaultText + ' (auto)'],
+		    ['auto', 'auto'],
+		    ['always', 'always'],
+		    ['never', 'never'],
+		],
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('Writeback cache'),
+		name: 'writeback',
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('expose-xattr'),
+		name: 'expose-xattr',
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('expose-acl (implies expose-xattr)'),
+		name: 'expose-acl',
+		listeners: {
+		    change: function(f, value) {
+			let xattr = me.down('field[name=expose-xattr]');
+			xattr.setDisabled(value);
+			if (value) {
+			    xattr.setValue(0);
+			}
+		    },
+		},
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('Direct IO'),
+		name: 'direct-io',
+	    },
+	];
+
+	me.virtiofs = {};
+	me.confid = 'virtiofs0';
+	me.callParent();
+    },
+});
+
+Ext.define('PVE.qemu.VirtiofsEdit', {
+    extend: 'Proxmox.window.Edit',
+
+    subject: gettext('Filesystem Passthrough'),
+
+    initComponent: function() {
+	var me = this;
+
+	me.isCreate = !me.confid;
+
+	var ipanel = Ext.create('PVE.qemu.VirtiofsInputPanel', {
+	    confid: me.confid,
+	    pveSelNode: me.pveSelNode,
+	    isCreate: me.isCreate,
+	});
+
+	Ext.applyIf(me, {
+	    items: ipanel,
+	});
+
+	me.callParent();
+
+	me.load({
+	    success: function(response) {
+		me.conf = response.result.data;
+		var i, confid;
+		if (!me.isCreate) {
+		    var value = me.conf[me.confid];
+		    var virtiofs = PVE.Parser.parsePropertyString(value, "dirid");
+		    if (!virtiofs) {
+			Ext.Msg.alert(gettext('Error'), 'Unable to parse virtiofs options');
+			me.close();
+			return;
+		    }
+		    ipanel.setSharedfiles(me.confid, virtiofs);
+		} else {
+		    for (i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+			confid = 'virtiofs' + i.toString();
+			if (!Ext.isDefined(me.conf[confid])) {
+			    me.confid = confid;
+			    break;
+			}
+		    }
+		    ipanel.setSharedfiles(me.confid, {});
+		}
+	    },
+	});
+    },
+});
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (11 preceding siblings ...)
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
@ 2025-03-18  9:14 ` Markus Frank
  2025-04-02 10:36 ` Laurențiu Leahu-Vlăducu
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Markus Frank @ 2025-03-18  9:14 UTC (permalink / raw)
  To: pve-devel

ping

On  2025-03-04 12:57, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
> 
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
> 
> I did not get virtiofsd to run with run_command without creating
> zombie processes after stutdown. So I replaced run_command with exec
> for now. Maybe someone can find out why this happens.
> 
> 
> changes in v14:
> * disallow commas and equal signs in path until the path can be quoted
>   in property strings
> * addressed style nits and improved formatting
> * use max_virtiofs() in check_vm_create_dir_perm
> * removed unnecessary checks after parse_property_string
> * find_on_current_node returns only one entry
> * improved docs
> * added missing imports/uses
> 
> changes in v13:
> * removed acl/xattr attributes in node config
> * renamed acl/xattr in virtiofs qemu config to expose-acl/expose-xattr
> * renamed submounts in node config to announce-submounts
> * the "disable snapshot (with RAM) and hibernate with virtio-fs devices"
>   patch now uses the check_non_migratable_resources function
> * rewritten the part about announce-submounts in pve-docs patch
> 
> Changes in v12:
> * rebase to master as most patches could not be applied anymore
> 
> Changes in v11:
> * made submounts option on by default in WebUI and section config
> * PVE::QemuServer::Virtiofs dependency removed in QemuServer/Memory.pm
> * Minor changes to function/variable names
> * Disable snapshots (with RAM) and hibernate due to incompatibility
> 
> 
> 
> cluster:
> 
> Markus Frank (1):
>    add mapping/dir.cfg for resource mapping
> 
>   src/PVE/Cluster.pm  | 1 +
>   src/pmxcfs/status.c | 1 +
>   2 files changed, 2 insertions(+)
> 
> 
> 
> guest-common:
> 
> Markus Frank (1):
>    add dir mapping section config
> 
>   src/Makefile           |   1 +
>   src/PVE/Mapping/Dir.pm | 196 +++++++++++++++++++++++++++++++++++++++++
>   2 files changed, 197 insertions(+)
>   create mode 100644 src/PVE/Mapping/Dir.pm
> 
> 
> 
> docs:
> 
> Markus Frank (1):
>    add doc section for the shared filesystem virtio-fs
> 
>   qm.adoc | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>   1 file changed, 97 insertions(+), 2 deletions(-)
> 
> 
> 
> qemu-server:
> 
> Markus Frank (4):
>    control: add virtiofsd as runtime dependency for qemu-server
>    fix #1027: virtio-fs support
>    migration: check_local_resources for virtiofs
>    disable snapshot (with RAM) and hibernate with virtio-fs devices
> 
>   PVE/API2/Qemu.pm             |  41 ++++++-
>   PVE/QemuServer.pm            |  46 +++++++-
>   PVE/QemuServer/Makefile      |   3 +-
>   PVE/QemuServer/Memory.pm     |  22 ++--
>   PVE/QemuServer/Virtiofs.pm   | 211 +++++++++++++++++++++++++++++++++++
>   debian/control               |   1 +
>   test/MigrationTest/Shared.pm |   7 ++
>   7 files changed, 317 insertions(+), 14 deletions(-)
>   create mode 100644 PVE/QemuServer/Virtiofs.pm
> 
> 
> 
> manager:
> 
> Markus Frank (5):
>    api: add resource map api endpoints for directories
>    ui: add edit window for dir mappings
>    ui: add resource mapping view for directories
>    ui: form: add selector for directory mappings
>    ui: add options to add virtio-fs to qemu config
> 
>   PVE/API2/Cluster/Mapping.pm         |   7 +
>   PVE/API2/Cluster/Mapping/Dir.pm     | 308 ++++++++++++++++++++++++++++
>   PVE/API2/Cluster/Mapping/Makefile   |   1 +
>   www/manager6/Makefile               |   4 +
>   www/manager6/Utils.js               |   1 +
>   www/manager6/dc/Config.js           |  10 +
>   www/manager6/dc/DirMapView.js       |  42 ++++
>   www/manager6/form/DirMapSelector.js |  63 ++++++
>   www/manager6/qemu/HardwareView.js   |  19 ++
>   www/manager6/qemu/VirtiofsEdit.js   | 137 +++++++++++++
>   www/manager6/window/DirMapEdit.js   | 214 +++++++++++++++++++
>   11 files changed, 806 insertions(+)
>   create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
>   create mode 100644 www/manager6/dc/DirMapView.js
>   create mode 100644 www/manager6/form/DirMapSelector.js
>   create mode 100644 www/manager6/qemu/VirtiofsEdit.js
>   create mode 100644 www/manager6/window/DirMapEdit.js
> 



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (12 preceding siblings ...)
  2025-03-18  9:14 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
@ 2025-04-02 10:36 ` Laurențiu Leahu-Vlăducu
  2025-04-02 13:17 ` Fabian Grünbichler
  2025-04-02 13:45 ` Daniel Kral
  15 siblings, 0 replies; 30+ messages in thread
From: Laurențiu Leahu-Vlăducu @ 2025-04-02 10:36 UTC (permalink / raw)
  To: pve-devel

Thanks for working on this awesome feature! I tested it with Debian 12 
and Windows Server 2025 VMs and it worked really well.

This patch series LGTM and can be merged very soon, IMO.

I wrote some further comments as answers to the other patches.


For the whole series, please consider:

Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>


On 04.03.25 12:57, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
> 
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
> 
> I did not get virtiofsd to run with run_command without creating
> zombie processes after stutdown. So I replaced run_command with exec
> for now. Maybe someone can find out why this happens.
> 
> 
> changes in v14:
> * disallow commas and equal signs in path until the path can be quoted
>   in property strings
> * addressed style nits and improved formatting
> * use max_virtiofs() in check_vm_create_dir_perm
> * removed unnecessary checks after parse_property_string
> * find_on_current_node returns only one entry
> * improved docs
> * added missing imports/uses
> 
> changes in v13:
> * removed acl/xattr attributes in node config
> * renamed acl/xattr in virtiofs qemu config to expose-acl/expose-xattr
> * renamed submounts in node config to announce-submounts
> * the "disable snapshot (with RAM) and hibernate with virtio-fs devices"
>   patch now uses the check_non_migratable_resources function
> * rewritten the part about announce-submounts in pve-docs patch
> 
> Changes in v12:
> * rebase to master as most patches could not be applied anymore
> 
> Changes in v11:
> * made submounts option on by default in WebUI and section config
> * PVE::QemuServer::Virtiofs dependency removed in QemuServer/Memory.pm
> * Minor changes to function/variable names
> * Disable snapshots (with RAM) and hibernate due to incompatibility
> 
> 
> 
> cluster:
> 
> Markus Frank (1):
>    add mapping/dir.cfg for resource mapping
> 
>   src/PVE/Cluster.pm  | 1 +
>   src/pmxcfs/status.c | 1 +
>   2 files changed, 2 insertions(+)
> 
> 
> 
> guest-common:
> 
> Markus Frank (1):
>    add dir mapping section config
> 
>   src/Makefile           |   1 +
>   src/PVE/Mapping/Dir.pm | 196 +++++++++++++++++++++++++++++++++++++++++
>   2 files changed, 197 insertions(+)
>   create mode 100644 src/PVE/Mapping/Dir.pm
> 
> 
> 
> docs:
> 
> Markus Frank (1):
>    add doc section for the shared filesystem virtio-fs
> 
>   qm.adoc | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>   1 file changed, 97 insertions(+), 2 deletions(-)
> 
> 
> 
> qemu-server:
> 
> Markus Frank (4):
>    control: add virtiofsd as runtime dependency for qemu-server
>    fix #1027: virtio-fs support
>    migration: check_local_resources for virtiofs
>    disable snapshot (with RAM) and hibernate with virtio-fs devices
> 
>   PVE/API2/Qemu.pm             |  41 ++++++-
>   PVE/QemuServer.pm            |  46 +++++++-
>   PVE/QemuServer/Makefile      |   3 +-
>   PVE/QemuServer/Memory.pm     |  22 ++--
>   PVE/QemuServer/Virtiofs.pm   | 211 +++++++++++++++++++++++++++++++++++
>   debian/control               |   1 +
>   test/MigrationTest/Shared.pm |   7 ++
>   7 files changed, 317 insertions(+), 14 deletions(-)
>   create mode 100644 PVE/QemuServer/Virtiofs.pm
> 
> 
> 
> manager:
> 
> Markus Frank (5):
>    api: add resource map api endpoints for directories
>    ui: add edit window for dir mappings
>    ui: add resource mapping view for directories
>    ui: form: add selector for directory mappings
>    ui: add options to add virtio-fs to qemu config
> 
>   PVE/API2/Cluster/Mapping.pm         |   7 +
>   PVE/API2/Cluster/Mapping/Dir.pm     | 308 ++++++++++++++++++++++++++++
>   PVE/API2/Cluster/Mapping/Makefile   |   1 +
>   www/manager6/Makefile               |   4 +
>   www/manager6/Utils.js               |   1 +
>   www/manager6/dc/Config.js           |  10 +
>   www/manager6/dc/DirMapView.js       |  42 ++++
>   www/manager6/form/DirMapSelector.js |  63 ++++++
>   www/manager6/qemu/HardwareView.js   |  19 ++
>   www/manager6/qemu/VirtiofsEdit.js   | 137 +++++++++++++
>   www/manager6/window/DirMapEdit.js   | 214 +++++++++++++++++++
>   11 files changed, 806 insertions(+)
>   create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
>   create mode 100644 www/manager6/dc/DirMapView.js
>   create mode 100644 www/manager6/form/DirMapSelector.js
>   create mode 100644 www/manager6/qemu/VirtiofsEdit.js
>   create mode 100644 www/manager6/window/DirMapEdit.js
> 



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH manager v14 10/12] ui: add resource mapping view for directories
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 10/12] ui: add resource mapping view for directories Markus Frank
@ 2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
  2025-04-02 13:42   ` Daniel Kral
  1 sibling, 0 replies; 30+ messages in thread
From: Laurențiu Leahu-Vlăducu @ 2025-04-02 10:36 UTC (permalink / raw)
  To: pve-devel

Some comments inline


Otherwise, please consider:

Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>


On 04.03.25 12:58, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v14:
> * return HTML encoded comment
> 
>   www/manager6/Makefile         |  1 +
>   www/manager6/dc/Config.js     | 10 +++++++++
>   www/manager6/dc/DirMapView.js | 42 +++++++++++++++++++++++++++++++++++
>   3 files changed, 53 insertions(+)
>   create mode 100644 www/manager6/dc/DirMapView.js
> 
> diff --git a/www/manager6/Makefile b/www/manager6/Makefile
> index 4b8677e3..57c4d377 100644
> --- a/www/manager6/Makefile
> +++ b/www/manager6/Makefile
> @@ -189,6 +189,7 @@ JSSRC= 							\
>   	dc/RealmSyncJob.js				\
>   	dc/PCIMapView.js				\
>   	dc/USBMapView.js				\
> +	dc/DirMapView.js				\
>   	lxc/CmdMenu.js					\
>   	lxc/Config.js					\
>   	lxc/CreateWizard.js				\
> diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
> index 74728c83..2958fb88 100644
> --- a/www/manager6/dc/Config.js
> +++ b/www/manager6/dc/Config.js
> @@ -329,6 +329,16 @@ Ext.define('PVE.dc.Config', {
>   			    title: gettext('USB Devices'),
>   			    flex: 1,
>   			},
> +			{
> +			    xtype: 'splitter',
> +			    collapsible: false,
> +			    performCollapse: false,
> +			},
> +			{
> +			    xtype: 'pveDcDirMapView',
> +			    title: gettext('Directories'),
> +			    flex: 1,
> +			},

This is only indirectly related to your changes, but: I noticed that the 
"Resource Mappings" tab does not have that much vertical space anymore. 
While this is fine on my (large) screen, it might be an issue with 
smaller (e.g. laptop) screens. In case we want to add more resource 
mappings in the future, it will get even worse.

It might be worth thinking whether it makes sense to split it into tabs, 
similar to how we're doing it in the PBS GUI, or the Tasks / Cluster log 
in PVE. I know that this adds a third layer of GUI (Datacenter -> 
Resource Mappings -> click correct tab), but I think it might be a bit 
more organized, and the space would be used more efficiently.

>   		    ],
>   		},
>   	    );
> diff --git a/www/manager6/dc/DirMapView.js b/www/manager6/dc/DirMapView.js
> new file mode 100644
> index 00000000..ff0ce633
> --- /dev/null
> +++ b/www/manager6/dc/DirMapView.js
> @@ -0,0 +1,42 @@
> +Ext.define('pve-resource-dir-tree', {
> +    extend: 'Ext.data.Model',
> +    idProperty: 'internalId',
> +    fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
> +});
> +
> +Ext.define('PVE.dc.DirMapView', {
> +    extend: 'PVE.tree.ResourceMapTree',
> +    alias: 'widget.pveDcDirMapView',
> +
> +    editWindowClass: 'PVE.window.DirMapEditWindow',
> +    baseUrl: '/cluster/mapping/dir',
> +    mapIconCls: 'fa fa-folder',
> +    entryIdProperty: 'path',
> +
> +    store: {
> +	sorters: 'text',
> +	model: 'pve-resource-dir-tree',
> +	data: {},
> +    },
> +
> +    columns: [
> +	{
> +	    xtype: 'treecolumn',
> +	    text: gettext('ID/Node'),
> +	    dataIndex: 'text',
> +	    width: 200,
> +	},
> +	{
> +	    text: gettext('announce-submounts'),

Minor remark: while I understand why naming it 'announce-submounts' 
makes it clearer which underlying option it sets, I would usually not 
display it like this in the GUI, but by using a nice label. Although I 
also think it's hard to find a short label that is easy to understand, 
so maybe - just maybe - it would make sense to add a short tooltip which 
explains what the option does. On the other hand, I know that the user 
can always press "Help" to get to the docs that explain everything in 
detail, so I would only add a tooltip if it's possible to easily explain 
what the feature does.

> +	    dataIndex: 'announce-submounts',
> +	},
> +	{
> +	    header: gettext('Comment'),
> +	    dataIndex: 'description',
> +	    renderer: function(value, _meta, record) {
> +		return Ext.String.htmlEncode(value ?? record.data.comment);
> +	    },
> +	    flex: 1,
> +	},
> +    ],
> +});



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs
  2025-03-04 11:57 ` [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
@ 2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
  2025-04-02 13:13   ` Fabian Grünbichler
  2025-04-02 13:44   ` Daniel Kral
  2 siblings, 0 replies; 30+ messages in thread
From: Laurențiu Leahu-Vlăducu @ 2025-04-02 10:36 UTC (permalink / raw)
  To: pve-devel

I think the docs should also - at least shortly - mention that:

1. The directory is only mounted temporarily and does not persist after 
a restart.
2. The reasoning behind this decision.
3. How to mount permanently, if still desired (despite the reasoning 
from point 2).


Other popular virtualization software also allow automatically mounting 
such things, so I think it makes sense to at least explain how to do it, 
even if we don't do it by default.


Some further comments inline


Otherwise, please consider:

Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>


On 04.03.25 12:57, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v14:
> * addressed formulation nits
> * added paragraph about expose-acl & expose-xattr
> 
>   qm.adoc | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>   1 file changed, 97 insertions(+), 2 deletions(-)
> 
> diff --git a/qm.adoc b/qm.adoc
> index 4bb8f2c..86b3877 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -1202,6 +1202,100 @@ recommended to always use a limiter to avoid guests using too many host
>   resources. If desired, a value of '0' for `max_bytes` can be used to disable
>   all limits.
>   
> +[[qm_virtiofs]]
> +Virtio-fs
> +~~~~~~~~~
> +
> +Virtio-fs is a shared filesystem designed for virtual environments. It allows to
> +share a directory tree available on the host by mounting it within VMs. It does
> +not use the network stack and aims to offer similar performance and semantics as
> +the source filesystem.
> +
> +To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
> +needs to run in the background. This happens automatically in {pve} when
> +starting a VM using a virtio-fs mount.
> +
> +Linux VMs with kernel >=5.4 support virtio-fs by default.
> +
> +There is a guide available on how to utilize virtio-fs in Windows VMs.
> +https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
> +
> +Known Limitations
> +^^^^^^^^^^^^^^^^^
> +
> +* If virtiofsd crashes, its mount point will hang in the VM until the VM
> +is completely stopped.
> +* virtiofsd not responding may result in a hanging mount in the VM, similar to
> +an unreachable NFS.
> +* Memory hotplug does not work in combination with virtio-fs (also results in
> +hanging access).
> +* Memory related features such as live migration, snapshots, and hibernate are
> +not available with virtio-fs devices.
> +* Windows cannot understand ACLs in the context of virtio-fs. Therefore, do not
> +expose ACLs for Windows VMs, otherwise the virtio-fs device will not be
> +visible within the VM.
> +
> +Add Mapping for Shared Directories
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +To add a mapping for a shared directory, you can use the API directly with
> +`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
> +
> +----
> +pvesh create /cluster/mapping/dir --id dir1 \
> +    --map node=node1,path=/path/to/share1 \
> +    --map node=node2,path=/path/to/share2,announce-submounts=1 \
> +----
> +
> +Set `announce-submounts` to `1` if multiple filesystems are mounted in a shared
> +directory. This tells the guest which directories are mount points to prevent
> +data loss/corruption. With `announce-submounts`, virtiofsd reports a different
> +device number for each submount it encounters. Without it, duplicates may be
> +created because inode IDs are only unique on a single filesystem.
> +
> +Add virtio-fs to a VM
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
> +anything between 0 and 9) to the VM config and use a directory ID (dirid) that
> +has been configured in the resource mapping. Additionally, you can set the
> +`cache` option to either `always`, `never`, or `auto` (default: `auto`),
> +depending on your requirements. How the different caching modes behave can be
> +read at https://lwn.net/Articles/774495/ under the "Caching Modes" section. To
> +enable writeback cache set `writeback` to `1`.

While I know that the linked article/patch contains a lot of 
information, unless we have a very good reason, I think we should either 
summarize that information in our own documentation, or we should create 
a separate section in our documentation that explains the information 
available there. The reason of that is that our documentation might be 
accessed from offline (air-gapped) machines that won't be able to access 
the internet.

> +
> +Virtiofsd supports ACL and xattr passthrough (can be enabled with the
> +`expose-acl` and `expose-xattr` options), allowing the guest to access ACLs and
> +xattrs if the underlying host filesystem supports them, but they must also be
> +compatible with the guest filesystem (for example most Linux filesystems support
> +ACLs, while Windows filesystems do not).
> +
> +The `expose-acl` option automatically implies `expose-xattr`, that is, it makes
> +no difference if you set `expose-xattr` to `0` if `expose-acl` is set to `1`.
> +
> +If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
> +parameter to `1` (default: `0`). This will degrade performance, but is useful if
> +applications do their own caching.
> +
> +----
> +qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
> +qm set <vmid> -virtiofs1 <dirid>,cache=never,expose-xattr=1
> +qm set <vmid> -virtiofs2 <dirid>,expose-acl=1,writeback=1
> +----
> +
> +To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
> +following command inside the guest:
> +
> +----
> +mount -t virtiofs <mount tag> <mount point>
> +----
> +
> +The dirid associated with the path on the current node is also used as the mount
> +tag (name used to mount the device on the guest).
> +
> +For more information on available virtiofsd parameters, see the
> +https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
> +
>   [[qm_bootorder]]
>   Device Boot Order
>   ~~~~~~~~~~~~~~~~~
> @@ -1885,8 +1979,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
>   
>   [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
>   
> -Where `<type>` is the hardware type (currently either `pci` or `usb`) and
> -`<options>` are the device mappings and other configuration parameters.
> +Where `<type>` is the hardware type (currently either `pci`, `usb` or
> +xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
> +configuration parameters.
>   
>   Note that the options must include a map property with all identifying
>   properties of that hardware, so that it's possible to verify the hardware did



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH manager v14 12/12] ui: add options to add virtio-fs to qemu config
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
@ 2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
  2025-04-02 14:06     ` Daniel Kral
  2025-04-02 13:42   ` Daniel Kral
  1 sibling, 1 reply; 30+ messages in thread
From: Laurențiu Leahu-Vlăducu @ 2025-04-02 10:36 UTC (permalink / raw)
  To: pve-devel

Some comments inline


Otherwise, please consider:

Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>


On 04.03.25 12:58, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v14:
> * disable expose-xattr when expose-acl is set
> * added missing writeback cache option
> 
>   www/manager6/Makefile             |   1 +
>   www/manager6/Utils.js             |   1 +
>   www/manager6/qemu/HardwareView.js |  19 +++++
>   www/manager6/qemu/VirtiofsEdit.js | 137 ++++++++++++++++++++++++++++++
>   4 files changed, 158 insertions(+)
>   create mode 100644 www/manager6/qemu/VirtiofsEdit.js
> 
> diff --git a/www/manager6/Makefile b/www/manager6/Makefile
> index fabbdd24..fdf0e816 100644
> --- a/www/manager6/Makefile
> +++ b/www/manager6/Makefile
> @@ -271,6 +271,7 @@ JSSRC= 							\
>   	qemu/Smbios1Edit.js				\
>   	qemu/SystemEdit.js				\
>   	qemu/USBEdit.js					\
> +	qemu/VirtiofsEdit.js				\
>   	sdn/Browser.js					\
>   	sdn/ControllerView.js				\
>   	sdn/Status.js					\
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index 90011a8f..0f242ae1 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -1645,6 +1645,7 @@ Ext.define('PVE.Utils', {
>   	serial: 4,
>   	rng: 1,
>   	tpmstate: 1,
> +	virtiofs: 10,
>       },
>   
>       // we can have usb6 and up only for specific machine/ostypes
> diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
> index c6d193fc..34aeb51e 100644
> --- a/www/manager6/qemu/HardwareView.js
> +++ b/www/manager6/qemu/HardwareView.js
> @@ -319,6 +319,16 @@ Ext.define('PVE.qemu.HardwareView', {
>   	    never_delete: !caps.nodes['Sys.Console'],
>   	    header: gettext("VirtIO RNG"),
>   	};
> +	for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
> +	    let confid = "virtiofs" + i.toString();
> +	    rows[confid] = {
> +		group: 50,
> +		order: i,
> +		iconCls: 'folder',
> +		editor: 'PVE.qemu.VirtiofsEdit',
> +		header: gettext('Virtiofs') + ' (' + confid +')',
> +	    };
> +	}
>   
>   	var sorterFn = function(rec1, rec2) {
>   	    var v1 = rec1.data.key;
> @@ -595,6 +605,7 @@ Ext.define('PVE.qemu.HardwareView', {
>   	    const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
>   	    const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
>   	    const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
> +	    const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
>   
>   	    me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
>   	    me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
> @@ -604,6 +615,7 @@ Ext.define('PVE.qemu.HardwareView', {
>   	    me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
>   	    efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
>   	    me.down('#addTpmState').setDisabled(noVMConfigDiskPerm || isAtLimit('tpmstate'));
> +	    me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
>   	    me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
>   
>   	    if (!rec) {
> @@ -748,6 +760,13 @@ Ext.define('PVE.qemu.HardwareView', {
>   				disabled: !caps.nodes['Sys.Console'],
>   				handler: editorFactory('RNGEdit'),
>   			    },
> +			    {
> +				text: gettext("Virtiofs"),
> +				itemId: 'addVirtiofs',
> +				iconCls: 'fa fa-folder',
> +				disabled: !caps.nodes['Sys.Console'],
> +				handler: editorFactory('VirtiofsEdit'),
> +			    },
>   			],
>   		    }),
>   		},
> diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
> new file mode 100644
> index 00000000..0bbb5213
> --- /dev/null
> +++ b/www/manager6/qemu/VirtiofsEdit.js
> @@ -0,0 +1,137 @@
> +Ext.define('PVE.qemu.VirtiofsInputPanel', {
> +    extend: 'Proxmox.panel.InputPanel',
> +    xtype: 'pveVirtiofsInputPanel',
> +    onlineHelp: 'qm_virtiofs',
> +
> +    insideWizard: false,
> +
> +    onGetValues: function(values) {
> +	var me = this;
> +	var confid = me.confid;
> +	var params = {};
> +	delete values.delete;
> +	params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
> +	return params;
> +    },
> +
> +    setSharedfiles: function(confid, data) {
> +	var me = this;
> +	me.confid = confid;
> +	me.virtiofs = data;
> +	me.setValues(me.virtiofs);
> +    },
> +    initComponent: function() {
> +	let me = this;
> +
> +	me.nodename = me.pveSelNode.data.node;
> +	if (!me.nodename) {
> +	    throw "no node name specified";
> +	}
> +	me.items = [
> +	    {
> +		xtype: 'pveDirMapSelector',
> +		emptyText: 'dirid',
> +		nodename: me.nodename,
> +		fieldLabel: gettext('Directory ID'),
> +		name: 'dirid',
> +		allowBlank: false,
> +	    },
> +	    {
> +		xtype: 'proxmoxKVComboBox',
> +		fieldLabel: gettext('Cache'),
> +		name: 'cache',
> +		value: '__default__',
> +		deleteDefaultValue: false,
> +		comboItems: [
> +		    ['__default__', Proxmox.Utils.defaultText + ' (auto)'],
> +		    ['auto', 'auto'],
> +		    ['always', 'always'],
> +		    ['never', 'never'],
> +		],

Is it a good idea to use cache=auto by default? I know that "cache=auto" 
is the upstream default, but I was wondering whether it's not safer to 
use "never" by default in PVE to minimize the amount of potential 
issues. Feel free to contradict me, though ;)

By searching the internet I found some reports regarding cache=auto, 
e.g. 
https://discuss.linuxcontainers.org/t/current-state-of-virtiofs-under-virtual-machines/19166

> +	    },
> +	    {
> +		xtype: 'proxmoxcheckbox',
> +		fieldLabel: gettext('Writeback cache'),
> +		name: 'writeback',
> +	    },
> +	    {
> +		xtype: 'proxmoxcheckbox',
> +		fieldLabel: gettext('expose-xattr'),
> +		name: 'expose-xattr',
> +	    },
> +	    {
> +		xtype: 'proxmoxcheckbox',
> +		fieldLabel: gettext('expose-acl (implies expose-xattr)'),
> +		name: 'expose-acl',
> +		listeners: {
> +		    change: function(f, value) {
> +			let xattr = me.down('field[name=expose-xattr]');
> +			xattr.setDisabled(value);
> +			if (value) {
> +			    xattr.setValue(0);
> +			}
> +		    },
> +		},
> +	    },
> +	    {
> +		xtype: 'proxmoxcheckbox',
> +		fieldLabel: gettext('Direct IO'),
> +		name: 'direct-io',
> +	    },
> +	];

I think some of these options should be marked as advanced, as they 
won't be needed in most cases (feel free to contradict me, though).

While it's not always immediately clear what some of these options do, I 
think it's hard to summarize the functionality in a short tooltip, so 
linking to the docs (as you already do) is fine to me.

What is still unclear to me is what some caching combinations do, e.g. 
the user can set Cache to Never, but can also enable "Writeback cache". 
I know this has to do with QEMU and you simply exposed the GUI for 
setting the advanced options, but I was just wondering whether it 
wouldn't be possible to expose these options to our users in a way that 
the difference between the modes is clearer, e.g. "Read cache" and 
"Writeback cache". Feel free to use some entirely different naming, 
though ;)

> +
> +	me.virtiofs = {};
> +	me.confid = 'virtiofs0';
> +	me.callParent();
> +    },
> +});
> +
> +Ext.define('PVE.qemu.VirtiofsEdit', {
> +    extend: 'Proxmox.window.Edit',
> +
> +    subject: gettext('Filesystem Passthrough'),
> +
> +    initComponent: function() {
> +	var me = this;
> +
> +	me.isCreate = !me.confid;
> +
> +	var ipanel = Ext.create('PVE.qemu.VirtiofsInputPanel', {
> +	    confid: me.confid,
> +	    pveSelNode: me.pveSelNode,
> +	    isCreate: me.isCreate,
> +	});
> +
> +	Ext.applyIf(me, {
> +	    items: ipanel,
> +	});
> +
> +	me.callParent();
> +
> +	me.load({
> +	    success: function(response) {
> +		me.conf = response.result.data;
> +		var i, confid;
> +		if (!me.isCreate) {
> +		    var value = me.conf[me.confid];
> +		    var virtiofs = PVE.Parser.parsePropertyString(value, "dirid");
> +		    if (!virtiofs) {
> +			Ext.Msg.alert(gettext('Error'), 'Unable to parse virtiofs options');
> +			me.close();
> +			return;
> +		    }
> +		    ipanel.setSharedfiles(me.confid, virtiofs);
> +		} else {
> +		    for (i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
> +			confid = 'virtiofs' + i.toString();
> +			if (!Ext.isDefined(me.conf[confid])) {
> +			    me.confid = confid;
> +			    break;
> +			}
> +		    }
> +		    ipanel.setSharedfiles(me.confid, {});
> +		}
> +	    },
> +	});
> +    },
> +});



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH qemu-server v14 6/12] migration: check_local_resources for virtiofs
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 6/12] migration: check_local_resources for virtiofs Markus Frank
@ 2025-04-02 13:13   ` Fabian Grünbichler
  0 siblings, 0 replies; 30+ messages in thread
From: Fabian Grünbichler @ 2025-04-02 13:13 UTC (permalink / raw)
  To: Proxmox VE development discussion

On March 4, 2025 12:57 pm, Markus Frank wrote:
> add dir mapping checks to check_local_resources
> 
> Since the VM needs to be powered off for migration, migration should
> work with a directory on shared storage with all caching settings.
> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> v14:
> * nothing changed
> 
>  PVE/QemuServer.pm            | 12 +++++++++++-
>  test/MigrationTest/Shared.pm |  7 +++++++
>  2 files changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 748b0acf..648b28cd 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -2486,6 +2486,7 @@ sub check_local_resources {
>      my $nodelist = PVE::Cluster::get_nodelist();
>      my $pci_map = PVE::Mapping::PCI::config();
>      my $usb_map = PVE::Mapping::USB::config();
> +    my $dir_map = PVE::Mapping::Dir::config();
>  
>      my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
>  
> @@ -2497,6 +2498,8 @@ sub check_local_resources {
>  		$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
>  	    } elsif ($type eq 'usb') {
>  		$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
> +	    } elsif ($type eq 'dir') {
> +		$entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
>  	    }
>  	    if (!scalar($entry->@*)) {
>  		push @{$missing_mappings_by_node->{$node}}, $key;
> @@ -2525,9 +2528,16 @@ sub check_local_resources {
>  		push @$mapped_res, $k;
>  	    }
>  	}
> +	if ($k =~ m/^virtiofs/) {
> +	    my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
> +	    if ($entry->{dirid}) {

dirid is not optional?

> +		$add_missing_mapping->('dir', $k, $entry->{dirid});
> +		push @$mapped_res, $k;
> +	    }
> +	}
>  	# sockets are safe: they will recreated be on the target side post-migrate
>  	next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
> -	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
> +	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
>      }
>  
>      die "VM uses local resources\n" if scalar @loc_res && !$noerr;
> diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
> index aa7203d1..c5d07222 100644
> --- a/test/MigrationTest/Shared.pm
> +++ b/test/MigrationTest/Shared.pm
> @@ -90,6 +90,13 @@ $mapping_pci_module->mock(
>      },
>  );
>  
> +our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
> +$mapping_dir_module->mock(
> +    config => sub {
> +	return {};
> +    },
> +);
> +
>  our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
>  $ha_config_module->mock(
>      vm_is_ha_managed => sub {
> -- 
> 2.39.5
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs
  2025-03-04 11:57 ` [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
  2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
@ 2025-04-02 13:13   ` Fabian Grünbichler
  2025-04-02 13:44   ` Daniel Kral
  2 siblings, 0 replies; 30+ messages in thread
From: Fabian Grünbichler @ 2025-04-02 13:13 UTC (permalink / raw)
  To: Proxmox VE development discussion

On March 4, 2025 12:57 pm, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v14:
> * addressed formulation nits
> * added paragraph about expose-acl & expose-xattr
> 
>  qm.adoc | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 97 insertions(+), 2 deletions(-)
> 
> diff --git a/qm.adoc b/qm.adoc
> index 4bb8f2c..86b3877 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -1202,6 +1202,100 @@ recommended to always use a limiter to avoid guests using too many host
>  resources. If desired, a value of '0' for `max_bytes` can be used to disable
>  all limits.
>  
> +[[qm_virtiofs]]
> +Virtio-fs
> +~~~~~~~~~
> +
> +Virtio-fs is a shared filesystem designed for virtual environments. It allows to
> +share a directory tree available on the host by mounting it within VMs. It does
> +not use the network stack and aims to offer similar performance and semantics as
> +the source filesystem.
> +
> +To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
> +needs to run in the background. This happens automatically in {pve} when
> +starting a VM using a virtio-fs mount.
> +
> +Linux VMs with kernel >=5.4 support virtio-fs by default.
> +
> +There is a guide available on how to utilize virtio-fs in Windows VMs.
> +https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
> +
> +Known Limitations
> +^^^^^^^^^^^^^^^^^
> +
> +* If virtiofsd crashes, its mount point will hang in the VM until the VM
> +is completely stopped.
> +* virtiofsd not responding may result in a hanging mount in the VM, similar to
> +an unreachable NFS.
> +* Memory hotplug does not work in combination with virtio-fs (also results in
> +hanging access).

should we make them mutually exclusive then?

> +* Memory related features such as live migration, snapshots, and hibernate are
> +not available with virtio-fs devices.
> +* Windows cannot understand ACLs in the context of virtio-fs. Therefore, do not
> +expose ACLs for Windows VMs, otherwise the virtio-fs device will not be
> +visible within the VM.
> +
> +Add Mapping for Shared Directories
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +To add a mapping for a shared directory, you can use the API directly with
> +`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
> +
> +----
> +pvesh create /cluster/mapping/dir --id dir1 \
> +    --map node=node1,path=/path/to/share1 \
> +    --map node=node2,path=/path/to/share2,announce-submounts=1 \
> +----
> +
> +Set `announce-submounts` to `1` if multiple filesystems are mounted in a shared
> +directory. This tells the guest which directories are mount points to prevent
> +data loss/corruption. With `announce-submounts`, virtiofsd reports a different
> +device number for each submount it encounters. Without it, duplicates may be
> +created because inode IDs are only unique on a single filesystem.
> +
> +Add virtio-fs to a VM
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
> +anything between 0 and 9) to the VM config and use a directory ID (dirid) that
> +has been configured in the resource mapping. Additionally, you can set the
> +`cache` option to either `always`, `never`, or `auto` (default: `auto`),
> +depending on your requirements. How the different caching modes behave can be
> +read at https://lwn.net/Articles/774495/ under the "Caching Modes" section. To
> +enable writeback cache set `writeback` to `1`.
> +
> +Virtiofsd supports ACL and xattr passthrough (can be enabled with the
> +`expose-acl` and `expose-xattr` options), allowing the guest to access ACLs and
> +xattrs if the underlying host filesystem supports them, but they must also be
> +compatible with the guest filesystem (for example most Linux filesystems support
> +ACLs, while Windows filesystems do not).
> +
> +The `expose-acl` option automatically implies `expose-xattr`, that is, it makes
> +no difference if you set `expose-xattr` to `0` if `expose-acl` is set to `1`.
> +
> +If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
> +parameter to `1` (default: `0`). This will degrade performance, but is useful if
> +applications do their own caching.
> +
> +----
> +qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
> +qm set <vmid> -virtiofs1 <dirid>,cache=never,expose-xattr=1
> +qm set <vmid> -virtiofs2 <dirid>,expose-acl=1,writeback=1
> +----
> +
> +To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
> +following command inside the guest:
> +
> +----
> +mount -t virtiofs <mount tag> <mount point>
> +----
> +
> +The dirid associated with the path on the current node is also used as the mount
> +tag (name used to mount the device on the guest).
> +
> +For more information on available virtiofsd parameters, see the
> +https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
> +
>  [[qm_bootorder]]
>  Device Boot Order
>  ~~~~~~~~~~~~~~~~~
> @@ -1885,8 +1979,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
>  
>  [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
>  
> -Where `<type>` is the hardware type (currently either `pci` or `usb`) and
> -`<options>` are the device mappings and other configuration parameters.
> +Where `<type>` is the hardware type (currently either `pci`, `usb` or
> +xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
> +configuration parameters.
>  
>  Note that the options must include a map property with all identifying
>  properties of that hardware, so that it's possible to verify the hardware did
> -- 
> 2.39.5
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH qemu-server v14 5/12] fix #1027: virtio-fs support
  2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 5/12] fix #1027: virtio-fs support Markus Frank
@ 2025-04-02 13:13   ` Fabian Grünbichler
  0 siblings, 0 replies; 30+ messages in thread
From: Fabian Grünbichler @ 2025-04-02 13:13 UTC (permalink / raw)
  To: Proxmox VE development discussion

On March 4, 2025 12:57 pm, Markus Frank wrote:
> add support for sharing directories with a guest vm.
> 
> virtio-fs needs virtiofsd to be started.
> In order to start virtiofsd as a process (despite being a daemon it is
> does not run in the background), a double-fork is used.
> 
> virtiofsd should close itself together with QEMU.
> 
> There are the parameters dirid and the optional parameters direct-io,
> cache and writeback. Additionally the expose-xattr & expose-acl
> parameter can be set to expose xattr & acl settings from the shared
> filesystem to the guest system.
> 
> The dirid gets mapped to the path on the current node and is also used
> as a mount tag (name used to mount the device on the guest).
> 
> example config:
> ```
> virtiofs0: foo,direct-io=1,cache=always,expose-acl=1
> virtiofs1: dirid=bar,cache=never,expose-xattr=1,writeback=1
> ```
> 
> For information on the optional parameters see the coherent doc patch
> and the official gitlab README:
> https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md
> 
> Also add a permission check for virtiofs directory access.
> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v14:
> * use max_virtiofs() in check_vm_create_dir_perm
> * addressed style nits and improved formatting
> * added missing imports/uses
> * assert_virtiofs_config now only gets the ostype and the virtiofs cfg
> * removed unnecessary checks after parse_property_string
> 
>  PVE/API2/Qemu.pm           |  41 ++++++-
>  PVE/QemuServer.pm          |  29 ++++-
>  PVE/QemuServer/Makefile    |   3 +-
>  PVE/QemuServer/Memory.pm   |  22 ++--
>  PVE/QemuServer/Virtiofs.pm | 211 +++++++++++++++++++++++++++++++++++++
>  5 files changed, 295 insertions(+), 11 deletions(-)
>  create mode 100644 PVE/QemuServer/Virtiofs.pm
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 5ac61aa5..7cefffdf 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -39,6 +39,7 @@ use PVE::QemuServer::MetaInfo;
>  use PVE::QemuServer::PCI;
>  use PVE::QemuServer::QMPHelpers;
>  use PVE::QemuServer::USB;
> +use PVE::QemuServer::Virtiofs qw(max_virtiofs);
>  use PVE::QemuMigrate;
>  use PVE::RPCEnvironment;
>  use PVE::AccessControl;
> @@ -801,6 +802,33 @@ my sub check_vm_create_hostpci_perm {
>      return 1;
>  };
>  
> +my sub check_dir_perm {
> +    my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
> +
> +    return 1 if $authuser eq 'root@pam';
> +
> +    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
> +
> +    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
> +    $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
> +
> +    return 1;
> +};
> +
> +my sub check_vm_create_dir_perm {
> +    my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
> +
> +    return 1 if $authuser eq 'root@pam';
> +
> +    for (my $i = 0; $i < max_virtiofs(); $i++) {
> +	my $opt = "virtiofs$i";
> +	next if !$param->{$opt};
> +	check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
> +    }
> +
> +    return 1;
> +};
> +
>  my $check_vm_modify_config_perm = sub {
>      my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
>  
> @@ -811,7 +839,7 @@ my $check_vm_modify_config_perm = sub {
>  	# else, as there the permission can be value dependent
>  	next if PVE::QemuServer::is_valid_drivename($opt);
>  	next if $opt eq 'cdrom';
> -	next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
> +	next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
>  	next if $opt eq 'tags';
>  
>  
> @@ -1114,6 +1142,7 @@ __PACKAGE__->register_method({
>  	    &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
>  	    check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
>  	    check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
> +	    check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
>  
>  	    PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
>  	    &$check_cpu_model_access($rpcenv, $authuser, $param);
> @@ -2005,6 +2034,10 @@ my $update_vm_api  = sub {
>  		    check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
>  		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
>  		    PVE::QemuConfig->write_config($vmid, $conf);
> +		} elsif ($opt =~ m/^virtiofs\d$/) {
> +		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
> +		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
> +		    PVE::QemuConfig->write_config($vmid, $conf);
>  		} elsif ($opt eq 'tags') {
>  		    assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
>  		    delete $conf->{$opt};
> @@ -2095,6 +2128,12 @@ my $update_vm_api  = sub {
>  		    }
>  		    check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
>  		    $conf->{pending}->{$opt} = $param->{$opt};
> +		} elsif ($opt =~ m/^virtiofs\d$/) {
> +		    if (my $oldvalue = $conf->{$opt}) {
> +			check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
> +		    }
> +		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
> +		    $conf->{pending}->{$opt} = $param->{$opt};
>  		} elsif ($opt eq 'tags') {
>  		    assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
>  		    $conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index b6fc1f17..748b0acf 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -33,6 +33,7 @@ use PVE::Exception qw(raise raise_param_exc);
>  use PVE::Format qw(render_duration render_bytes);
>  use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
>  use PVE::HA::Config;
> +use PVE::Mapping::Dir;
>  use PVE::Mapping::PCI;
>  use PVE::Mapping::USB;
>  use PVE::INotify;
> @@ -62,6 +63,7 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
>  use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
>  use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
>  use PVE::QemuServer::USB;
> +use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
>  
>  my $have_sdn;
>  eval {
> @@ -948,6 +950,10 @@ my $netdesc = {
>  
>  PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
>  
> +for (my $i = 0; $i < max_virtiofs(); $i++)  {
> +    $confdesc->{"virtiofs$i"} = get_standard_option('pve-qm-virtiofs');
> +}
> +
>  my $ipconfig_fmt = {
>      ip => {
>  	type => 'string',
> @@ -3707,8 +3713,18 @@ sub config_to_command {
>  	push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough);
>      }
>  
> +    my $virtiofs_enabled = PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
> +
>      PVE::QemuServer::Memory::config(
> -	$conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd);
> +	$conf,
> +	$vmid,
> +	$sockets,
> +	$cores,
> +	$hotplug_features->{memory},
> +	$cmd,
> +	$machineFlags,

machineFlags is only passed to

> +	$virtiofs_enabled
> +    );
>  
>      push @$cmd, '-S' if $conf->{freeze};
>  
> @@ -3998,6 +4014,8 @@ sub config_to_command {
>  	push @$machineFlags, 'confidential-guest-support=sev0';
>      }
>  
> +    PVE::QemuServer::Virtiofs::config($conf, $vmid, $devices);
> +
>      push @$cmd, @$devices;
>      push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
>      push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
> @@ -5806,6 +5824,8 @@ sub vm_start_nolock {
>  	PVE::Tools::run_fork sub {
>  	    PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
>  
> +	    my $virtiofs_sockets = start_all_virtiofsd($conf, $vmid);
> +
>  	    my $tpmpid;
>  	    if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
>  		# start the TPM emulator so QEMU can connect on start
> @@ -5813,6 +5833,8 @@ sub vm_start_nolock {
>  	    }
>  
>  	    my $exitcode = run_command($cmd, %run_params);
> +	    eval { PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets); };
> +	    log_warn("closing virtiofs sockets failed - $@") if $@;
>  	    if ($exitcode) {
>  		if ($tpmpid) {
>  		    warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
> @@ -6482,7 +6504,10 @@ sub check_mapping_access {
>  	    } else {
>  		die "either 'host' or 'mapping' must be set.\n";
>  	    }
> -       }
> +	} elsif ($opt =~ m/^virtiofs\d$/) {
> +	    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +	    $rpcenv->check_full($user, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
> +	}
>     }
>  };
>  
> diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
> index 18fd13ea..3588e0e1 100644
> --- a/PVE/QemuServer/Makefile
> +++ b/PVE/QemuServer/Makefile
> @@ -11,7 +11,8 @@ SOURCES=PCI.pm		\
>  	CPUConfig.pm	\
>  	CGroup.pm	\
>  	Drive.pm	\
> -	QMPHelpers.pm
> +	QMPHelpers.pm	\
> +	Virtiofs.pm
>  
>  .PHONY: install
>  install: ${SOURCES}
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index e5024cd2..dcdb4f76 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -336,7 +336,7 @@ sub qemu_memdevices_list {
>  }
>  
>  sub config {
> -    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
> +    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags, $virtiofs_enabled) = @_;

there's only a single call site, should we maybe order the parameters
that are changed ($cmd and $machine_flags) last?

>  
>      my $memory = get_current_memory($conf->{memory});
>      my $static_memory = 0;
> @@ -379,7 +379,8 @@ sub config {
>  	    my $numa_memory = $numa->{memory};
>  	    $numa_totalmemory += $numa_memory;
>  
> -	    my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
> +	    my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
> +	    my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
>  
>  	    # cpus
>  	    my $cpulists = $numa->{cpus};
> @@ -404,7 +405,7 @@ sub config {
>  	    }
>  
>  	    push @$cmd, '-object', $mem_object;
> -	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> +	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
>  	}
>  
>  	die "total memory for NUMA nodes must be equal to vm static memory\n"
> @@ -418,15 +419,20 @@ sub config {
>  		die "host NUMA node$i doesn't exist\n"
>  		    if !host_numanode_exists($i) && $conf->{hugepages};
>  
> -		my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
> -		push @$cmd, '-object', $mem_object;
> -
>  		my $cpus = ($cores * $i);
>  		$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
>  
> -		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> +		my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
> +		my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
> +		push @$cmd, '-object', $mem_object;
> +		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
>  	    }
>  	}
> +    } elsif ($virtiofs_enabled) {
> +	# kvm: '-machine memory-backend' and '-numa memdev' properties are mutually exclusive
> +	push @$cmd, '-object', 'memory-backend-memfd,id=virtiofs-mem'
> +	    .",size=$conf->{memory}M,share=on";
> +	push @$machine_flags, 'memory-backend=virtiofs-mem';
>      }
>  
>      if ($hotplug) {
> @@ -453,6 +459,8 @@ sub print_mem_object {
>  	my $path = hugepages_mount_path($hugepages_size);
>  
>  	return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
> +    } elsif ($id =~ m/^virtiofs-mem/) {
> +	return "memory-backend-memfd,id=$id,size=${size}M,share=on";
>      } else {
>  	return "memory-backend-ram,id=$id,size=${size}M";
>      }
> diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
> new file mode 100644
> index 00000000..94ef7b47
> --- /dev/null
> +++ b/PVE/QemuServer/Virtiofs.pm
> @@ -0,0 +1,211 @@
> +package PVE::QemuServer::Virtiofs;
> +
> +use strict;
> +use warnings;
> +
> +use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);
> +use IO::Socket::UNIX;
> +use POSIX;
> +use Socket qw(SOCK_STREAM);
> +
> +use PVE::JSONSchema qw(parse_property_string);
> +use PVE::Mapping::Dir;
> +use PVE::QemuServer::Helpers;
> +use PVE::RESTEnvironment qw(log_warn);
> +
> +use base qw(Exporter);
> +
> +our @EXPORT_OK = qw(
> +max_virtiofs
> +start_all_virtiofsd
> +);
> +
> +my $MAX_VIRTIOFS = 10;
> +my $socket_path_root = "/run/qemu-server/virtiofsd";
> +
> +my $virtiofs_fmt = {
> +    'dirid' => {
> +	type => 'string',
> +	default_key => 1,
> +	description => "Mapping identifier of the directory mapping to be shared with the guest."
> +	    ." Also used as a mount tag inside the VM.",
> +	format_description => 'mapping-id',
> +	format => 'pve-configid',
> +    },
> +    'cache' => {
> +	type => 'string',
> +	description => "The caching policy the file system should use (auto, always, never).",
> +	enum => [qw(auto always never)],
> +	default => "auto",
> +	optional => 1,
> +    },
> +    'direct-io' => {
> +	type => 'boolean',
> +	description => "Honor the O_DIRECT flag passed down by guest applications.",
> +	default => 0,
> +	optional => 1,
> +    },
> +    writeback => {
> +	type => 'boolean',
> +	description => "Enable writeback cache. If enabled, writes may be cached in the guest until"
> +	    ." the file is closed or an fsync is performed.",
> +	default => 0,
> +	optional => 1,
> +    },
> +    'expose-xattr' => {
> +	type => 'boolean',
> +	description => "Enable support for extended attributes for this mount.",
> +	default => 0,
> +	optional => 1,
> +    },
> +    'expose-acl' => {
> +	type => 'boolean',
> +	description => "Enable support for POSIX ACLs (enabled ACL implies xattr) for this mount.",
> +	default => 0,
> +	optional => 1,
> +    },
> +};
> +PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
> +
> +my $virtiofsdesc = {
> +    optional => 1,
> +    type => 'string', format => $virtiofs_fmt,
> +    description => "Configuration for sharing a directory between host and guest using Virtio-fs.",
> +};
> +PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
> +
> +sub max_virtiofs {
> +    return $MAX_VIRTIOFS;
> +}
> +
> +sub assert_virtiofs_config {
> +    my ($ostype, $virtiofs) = @_;
> +
> +    my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
> +
> +    my $acl = $virtiofs->{'expose-acl'};
> +    if ($acl && PVE::QemuServer::Helpers::windows_version($ostype)) {
> +	log_warn(
> +	    "Please disable ACLs for virtiofs on Windows VMs, otherwise"
> +	    ." the virtiofs shared directory cannot be mounted."
> +	);

this is called assert, but here we do a check and just warn if it fails?

> +    }
> +
> +    eval { PVE::Mapping::Dir::assert_valid($dir_cfg) };
> +    die "directory mapping invalid: $@\n" if $@;
> +}
> +
> +sub config {
> +    my ($conf, $vmid, $devices) = @_;
> +
> +    for (my $i = 0; $i < max_virtiofs(); $i++) {
> +	my $opt = "virtiofs$i";
> +
> +	next if !$conf->{$opt};
> +	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +
> +	assert_virtiofs_config($conf->{ostype}, $virtiofs);
> +
> +	push @$devices, '-chardev', "socket,id=virtiofs$i,path=$socket_path_root/vm$vmid-fs$i";
> +
> +	# queue-size is set 1024 because of bug with Windows guests:
> +	# https://bugzilla.redhat.com/show_bug.cgi?id=1873088
> +	# 1024 is also always used in the virtiofs documentations:
> +	# https://gitlab.com/virtio-fs/virtiofsd#examples
> +	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
> +	    .",chardev=virtiofs$i,tag=$virtiofs->{dirid}";
> +    }
> +}
> +
> +sub virtiofs_enabled {
> +    my ($conf) = @_;
> +
> +    my $virtiofs_enabled = 0;
> +    for (my $i = 0; $i < max_virtiofs(); $i++) {
> +	my $opt = "virtiofs$i";
> +	next if !$conf->{$opt};
> +	parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +	$virtiofs_enabled = 1;
> +    }
> +    return $virtiofs_enabled;
> +}
> +
> +sub start_all_virtiofsd {
> +    my ($conf, $vmid) = @_;
> +    my $virtiofs_sockets = [];
> +    for (my $i = 0; $i < max_virtiofs(); $i++) {
> +	my $opt = "virtiofs$i";
> +
> +	next if !$conf->{$opt};
> +	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +
> +	my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
> +	push @$virtiofs_sockets, $virtiofs_socket;
> +    }
> +    return $virtiofs_sockets;
> +}
> +
> +sub start_virtiofsd {
> +    my ($vmid, $fsid, $virtiofs) = @_;
> +
> +    mkdir $socket_path_root;
> +    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
> +    unlink($socket_path);
> +    my $socket = IO::Socket::UNIX->new(
> +	Type => SOCK_STREAM,
> +	Local => $socket_path,
> +	Listen => 1,
> +    ) or die "cannot create socket - $!\n";
> +
> +    my $flags = fcntl($socket, F_GETFD, 0)
> +	or die "failed to get file descriptor flags: $!\n";
> +    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
> +	or die "failed to remove FD_CLOEXEC from file descriptor\n";
> +
> +    my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
> +
> +    my $virtiofsd_bin = '/usr/libexec/virtiofsd';
> +    my $fd = $socket->fileno();
> +    my $path = $dir_cfg->{path};
> +
> +    my $could_not_fork_err = "could not fork to start virtiofsd\n";
> +    my $pid = fork();
> +    if ($pid == 0) {
> +	POSIX::setsid();
> +	$0 = "task pve-vm$vmid-virtiofs$fsid";
> +	my $pid2 = fork();
> +	if ($pid2 == 0) {
> +	    my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
> +	    push @$cmd, '--xattr' if $virtiofs->{'expose-xattr'};
> +	    push @$cmd, '--posix-acl' if $virtiofs->{'expose-acl'};
> +	    push @$cmd, '--announce-submounts' if $dir_cfg->{'announce-submounts'};
> +	    push @$cmd, '--allow-direct-io' if $virtiofs->{'direct-io'};
> +	    push @$cmd, '--cache='.$virtiofs->{cache} if $virtiofs->{cache};
> +	    push @$cmd, '--writeback' if $virtiofs->{'writeback'};
> +	    push @$cmd, '--syslog';
> +	    exec(@$cmd);
> +	} elsif (!defined($pid2)) {
> +	    die $could_not_fork_err;
> +	} else {
> +	    POSIX::_exit(0);
> +	}
> +    } elsif (!defined($pid)) {
> +	die $could_not_fork_err;
> +    } else {
> +	waitpid($pid, 0);
> +    }
> +
> +    # return socket to keep it alive,
> +    # so that QEMU will wait for virtiofsd to start
> +    return $socket;
> +}
> +
> +sub close_sockets {
> +    my @sockets = @_;
> +    for my $socket (@sockets) {
> +	shutdown($socket, 2);
> +	close($socket);
> +    }
> +}
> +
> +1;
> -- 
> 2.39.5
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH guest-common v14 2/12] add dir mapping section config
  2025-03-04 11:57 ` [pve-devel] [PATCH guest-common v14 2/12] add dir mapping section config Markus Frank
@ 2025-04-02 13:14   ` Fabian Grünbichler
  2025-04-02 15:20     ` Markus Frank
  2025-04-02 13:41   ` Daniel Kral
  1 sibling, 1 reply; 30+ messages in thread
From: Fabian Grünbichler @ 2025-04-02 13:14 UTC (permalink / raw)
  To: Proxmox VE development discussion

On March 4, 2025 12:57 pm, Markus Frank wrote:
> Adds a config file for directories by using a 'map' property string for
> each node mapping.
> 
> Next to node & path, there is the optional announce-submounts parameter
> which forces virtiofsd to report a different device number for each
> submount it encounters. Without it, duplicates may be created because
> inode IDs are only unique on a single filesystem.

some things about the announce-submounts option that might be worthwhile
to consider:
- does it hurt to set announce-submounts if there are none?
- it sounds like not setting it if there are any is dangerous

why don't we always set it? "announce-submounts" is also a really weird
name if we extend this to managed bind-mounts for containers (which I'd
very much like to do), if we keep it it should be a flag denoting that
this directory contains further mountpoints (which means it
requires/should default to announce-submounts for virtiofsd, and
recursive bind mounting for LXC)?

> 
> example config:
> ```
> some-dir-id
> 	map node=node1,path=/mnt/share/,announce-submounts=1
> 	map node=node2,path=/mnt/share/,
> ```
> 
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v14:
> * disallow commas and equal signs in path until the path can be quoted
>  in property strings
> * addressed style nits and improved formatting
> 
>  src/Makefile           |   1 +
>  src/PVE/Mapping/Dir.pm | 196 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 197 insertions(+)
>  create mode 100644 src/PVE/Mapping/Dir.pm
> 
> diff --git a/src/Makefile b/src/Makefile
> index cbc40c1..030e7f7 100644
> --- a/src/Makefile
> +++ b/src/Makefile
> @@ -15,6 +15,7 @@ install: PVE
>  	install -m 0644 PVE/StorageTunnel.pm ${PERL5DIR}/PVE/
>  	install -m 0644 PVE/Tunnel.pm ${PERL5DIR}/PVE/
>  	install -d ${PERL5DIR}/PVE/Mapping
> +	install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
>  	install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
>  	install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
>  	install -d ${PERL5DIR}/PVE/VZDump
> diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
> new file mode 100644
> index 0000000..581ec39
> --- /dev/null
> +++ b/src/PVE/Mapping/Dir.pm
> @@ -0,0 +1,196 @@
> +package PVE::Mapping::Dir;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
> +use PVE::INotify;
> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
> +use PVE::SectionConfig;
> +
> +use base qw(PVE::SectionConfig);
> +
> +my $FILENAME = 'mapping/dir.cfg';
> +
> +cfs_register_file($FILENAME,
> +    sub { __PACKAGE__->parse_config(@_); },
> +    sub { __PACKAGE__->write_config(@_); });
> +
> +
> +# so we don't have to repeat the type every time
> +sub parse_section_header {
> +    my ($class, $line) = @_;
> +
> +    if ($line =~ m/^(\S+)\s*$/) {
> +	my $id = $1;
> +	my $errmsg = undef; # set if you want to skip whole section
> +	eval { PVE::JSONSchema::pve_verify_configid($id) };
> +	$errmsg = $@ if $@;
> +	my $config = {}; # to return additional attributes
> +	return ('dir', $id, $errmsg, $config);
> +    }
> +    return undef;
> +}
> +
> +sub format_section_header {
> +    my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
> +
> +    return "$sectionId\n";
> +}
> +
> +sub type {
> +    return 'dir';
> +}
> +
> +# temporary path format that also disallows commas and equal signs
> +# TODO: Remove this when property_string supports quotation of properties
> +PVE::JSONSchema::register_format('pve-storage-path-in-property-string', \&verify_path);
> +sub verify_path {
> +    my ($path, $noerr) = @_;
> +
> +    if ($path !~ m|^/[^;,=\(\)]+|) {
> +	return undef if $noerr;
> +	die "Value does not look like a valid absolute path."
> +	    ." These symbols are currently not allowed in path: ;,=())\n";

stray ')' in the error message

> +    }
> +    return $path;
> +}
> +
> +my $map_fmt = {
> +    node => get_standard_option('pve-node'),
> +    path => {
> +	description => "Absolute directory path that should be shared with the guest.",
> +	type => 'string',
> +	format => 'pve-storage-path-in-property-string',
> +    },
> +    'announce-submounts' => {
> +	type => 'boolean',
> +	description => "Announce that the directory contains other mounted file systems."
> +	    ." If this is not set and multiple file systems are mounted, the guest may"
> +	    ." encounter duplicates due to file system specific inode IDs.",
> +	optional => 1,
> +	default => 1,
> +    },
> +};
> +
> +my $defaultData = {
> +    propertyList => {
> +	id => {
> +	    type => 'string',
> +	    description => "The ID of the directory mapping",
> +	    format => 'pve-configid',
> +	},
> +	description => {
> +	    type => 'string',
> +	    description => "Description of the directory mapping",
> +	    optional => 1,
> +	    maxLength => 4096,
> +	},
> +	map => {
> +	    type => 'array',
> +	    description => 'A list of maps for the cluster nodes.',
> +	    optional => 1,
> +	    items => {
> +		type => 'string',
> +		format => $map_fmt,
> +	    },
> +	},
> +    },
> +};
> +
> +sub private {
> +    return $defaultData;
> +}
> +
> +sub options {
> +    return {
> +	description => { optional => 1 },
> +	map => {},
> +    };
> +}
> +
> +sub assert_valid {

this is called below in assert_valid_map_list without filtering entries
so that just those for the current node are considered..

> +    my ($dir_cfg) = @_;
> +
> +    my $path = $dir_cfg->{path};
> +
> +    verify_path($path);
> +
> +    if (! -e $path) {
> +	die "Path $path does not exist\n";
> +    } elsif (! -d $path) {
> +	die "Path $path exists, but is not a directory\n";
> +    }

so these checks don't make sense.. if the directory has to exist on all
nodes, then the per-node mappings could just be dropped entirely ;)

> +
> +    return 1;
> +};
> +
> +sub assert_valid_map_list {
> +    my ($map_list) = @_;
> +
> +    my %count;
> +    for my $map (@$map_list) {
> +	my $entry = parse_property_string($map_fmt, $map);
> +	assert_valid($entry);
> +	$count{$entry->{node}}++;
> +    }
> +    for my $node (keys %count) {
> +	if ($count{$node} > 1) {
> +	    die "Node '$node' is specified $count{$node} times.\n";
> +	}
> +    }
> +}
> +
> +sub config {
> +    return cfs_read_file($FILENAME);
> +}
> +
> +sub lock_dir_config {
> +    my ($code, $errmsg) = @_;
> +
> +    cfs_lock_file($FILENAME, undef, $code);
> +    if (my $err = $@) {
> +	$errmsg ? die "$errmsg: $err" : die $err;
> +    }
> +}
> +
> +sub write_dir_config {
> +    my ($cfg) = @_;
> +
> +    cfs_write_file($FILENAME, $cfg);
> +}
> +
> +sub find_on_current_node {
> +    my ($id) = @_;
> +
> +    my $cfg = config();
> +    my $node = PVE::INotify::nodename();
> +
> +    my $node_mapping = get_node_mapping($cfg, $id, $node);
> +    if (@{$node_mapping} > 1) {
> +	die "More than than one directory mapping for node $node.\n";
> +    }
> +    return $node_mapping->[0];
> +}
> +
> +sub get_node_mapping {
> +    my ($cfg, $id, $nodename) = @_;
> +
> +    return undef if !defined($cfg->{ids}->{$id});
> +
> +    my $res = [];
> +    my $mapping_list = $cfg->{ids}->{$id}->{map};
> +    for my $map (@{$mapping_list}) {
> +	my $entry = eval { parse_property_string($map_fmt, $map) };
> +	warn $@ if $@;
> +	if ($entry && $entry->{node} eq $nodename) {
> +	    push $res->@*, $entry;
> +	}
> +    }
> +    return $res;
> +}
> +
> +PVE::Mapping::Dir->register();
> +PVE::Mapping::Dir->init();
> +
> +1;
> -- 
> 2.39.5
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (13 preceding siblings ...)
  2025-04-02 10:36 ` Laurențiu Leahu-Vlăducu
@ 2025-04-02 13:17 ` Fabian Grünbichler
  2025-04-02 13:45 ` Daniel Kral
  15 siblings, 0 replies; 30+ messages in thread
From: Fabian Grünbichler @ 2025-04-02 13:17 UTC (permalink / raw)
  To: Proxmox VE development discussion

other than the comments on individual patches nothing here jumped out at
me for the backend part, I haven't looked at the front-end.

some things that would be nice to consider/follow-up with:
- add a hotplug feature (set up all the machinery, even if no virtiofs
  is configured yet, then add/remove shares at runtime?)
- this would maybe allow removing shares before
  live-migrating/savevm-ing
- keep an eye on upstream live-migration support for shared backing dirs
- implement pve-container support via bind mounts

On March 4, 2025 12:57 pm, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
> 
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
> 
> I did not get virtiofsd to run with run_command without creating
> zombie processes after stutdown. So I replaced run_command with exec
> for now. Maybe someone can find out why this happens.
> 
> 
> changes in v14:
> * disallow commas and equal signs in path until the path can be quoted
>  in property strings
> * addressed style nits and improved formatting
> * use max_virtiofs() in check_vm_create_dir_perm
> * removed unnecessary checks after parse_property_string
> * find_on_current_node returns only one entry
> * improved docs
> * added missing imports/uses
> 
> changes in v13:
> * removed acl/xattr attributes in node config
> * renamed acl/xattr in virtiofs qemu config to expose-acl/expose-xattr
> * renamed submounts in node config to announce-submounts
> * the "disable snapshot (with RAM) and hibernate with virtio-fs devices"
>  patch now uses the check_non_migratable_resources function
> * rewritten the part about announce-submounts in pve-docs patch
> 
> Changes in v12:
> * rebase to master as most patches could not be applied anymore
> 
> Changes in v11:
> * made submounts option on by default in WebUI and section config
> * PVE::QemuServer::Virtiofs dependency removed in QemuServer/Memory.pm
> * Minor changes to function/variable names
> * Disable snapshots (with RAM) and hibernate due to incompatibility
> 
> 
> 
> cluster:
> 
> Markus Frank (1):
>   add mapping/dir.cfg for resource mapping
> 
>  src/PVE/Cluster.pm  | 1 +
>  src/pmxcfs/status.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> 
> 
> guest-common:
> 
> Markus Frank (1):
>   add dir mapping section config
> 
>  src/Makefile           |   1 +
>  src/PVE/Mapping/Dir.pm | 196 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 197 insertions(+)
>  create mode 100644 src/PVE/Mapping/Dir.pm
> 
> 
> 
> docs:
> 
> Markus Frank (1):
>   add doc section for the shared filesystem virtio-fs
> 
>  qm.adoc | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 97 insertions(+), 2 deletions(-)
> 
> 
> 
> qemu-server:
> 
> Markus Frank (4):
>   control: add virtiofsd as runtime dependency for qemu-server
>   fix #1027: virtio-fs support
>   migration: check_local_resources for virtiofs
>   disable snapshot (with RAM) and hibernate with virtio-fs devices
> 
>  PVE/API2/Qemu.pm             |  41 ++++++-
>  PVE/QemuServer.pm            |  46 +++++++-
>  PVE/QemuServer/Makefile      |   3 +-
>  PVE/QemuServer/Memory.pm     |  22 ++--
>  PVE/QemuServer/Virtiofs.pm   | 211 +++++++++++++++++++++++++++++++++++
>  debian/control               |   1 +
>  test/MigrationTest/Shared.pm |   7 ++
>  7 files changed, 317 insertions(+), 14 deletions(-)
>  create mode 100644 PVE/QemuServer/Virtiofs.pm
> 
> 
> 
> manager:
> 
> Markus Frank (5):
>   api: add resource map api endpoints for directories
>   ui: add edit window for dir mappings
>   ui: add resource mapping view for directories
>   ui: form: add selector for directory mappings
>   ui: add options to add virtio-fs to qemu config
> 
>  PVE/API2/Cluster/Mapping.pm         |   7 +
>  PVE/API2/Cluster/Mapping/Dir.pm     | 308 ++++++++++++++++++++++++++++
>  PVE/API2/Cluster/Mapping/Makefile   |   1 +
>  www/manager6/Makefile               |   4 +
>  www/manager6/Utils.js               |   1 +
>  www/manager6/dc/Config.js           |  10 +
>  www/manager6/dc/DirMapView.js       |  42 ++++
>  www/manager6/form/DirMapSelector.js |  63 ++++++
>  www/manager6/qemu/HardwareView.js   |  19 ++
>  www/manager6/qemu/VirtiofsEdit.js   | 137 +++++++++++++
>  www/manager6/window/DirMapEdit.js   | 214 +++++++++++++++++++
>  11 files changed, 806 insertions(+)
>  create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
>  create mode 100644 www/manager6/dc/DirMapView.js
>  create mode 100644 www/manager6/form/DirMapSelector.js
>  create mode 100644 www/manager6/qemu/VirtiofsEdit.js
>  create mode 100644 www/manager6/window/DirMapEdit.js
> 
> -- 
> 2.39.5
> 
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 
> 


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH guest-common v14 2/12] add dir mapping section config
  2025-03-04 11:57 ` [pve-devel] [PATCH guest-common v14 2/12] add dir mapping section config Markus Frank
  2025-04-02 13:14   ` Fabian Grünbichler
@ 2025-04-02 13:41   ` Daniel Kral
  1 sibling, 0 replies; 30+ messages in thread
From: Daniel Kral @ 2025-04-02 13:41 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

One small comment inline.

Tested the obvious cases:

- trying to add multiple map entries with the same node for the same 
dirid results in an error in the WebGUI
- adding multiple map entries manually in the file for a dirid will make 
the VM fail to start with an error indicating that there are multiple 
paths for the same dirid and node, but only if it actually uses the 
mountpoint

Else the patch looks good to me, consider this as:

Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>

On 3/4/25 12:57, Markus Frank wrote:
> Adds a config file for directories by using a 'map' property string for
> each node mapping.
> 
> Next to node & path, there is the optional announce-submounts parameter
> which forces virtiofsd to report a different device number for each
> submount it encounters. Without it, duplicates may be created because
> inode IDs are only unique on a single filesystem.
> 
> example config:
> ```
> some-dir-id
> 	map node=node1,path=/mnt/share/,announce-submounts=1
> 	map node=node2,path=/mnt/share/,

nit: the trailing comma in the last map line will fail to display all 
directory mappings in the WebGUI because parsePropertyString() will 
return undefined in the ResourceMapTree component.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH manager v14 12/12] ui: add options to add virtio-fs to qemu config
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
  2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
@ 2025-04-02 13:42   ` Daniel Kral
  1 sibling, 0 replies; 30+ messages in thread
From: Daniel Kral @ 2025-04-02 13:42 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Some smaller higher level comments inline.

Else the patch looks good to me and with those addressed, consider this as:

Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>

On 3/4/25 12:58, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v14:
> * disable expose-xattr when expose-acl is set
> * added missing writeback cache option
> 
>   www/manager6/Makefile             |   1 +
>   www/manager6/Utils.js             |   1 +
>   www/manager6/qemu/HardwareView.js |  19 +++++
>   www/manager6/qemu/VirtiofsEdit.js | 137 ++++++++++++++++++++++++++++++
>   4 files changed, 158 insertions(+)
>   create mode 100644 www/manager6/qemu/VirtiofsEdit.js
> 
> diff --git a/www/manager6/Makefile b/www/manager6/Makefile
> index fabbdd24..fdf0e816 100644
> --- a/www/manager6/Makefile
> +++ b/www/manager6/Makefile
> @@ -271,6 +271,7 @@ JSSRC= 							\
>   	qemu/Smbios1Edit.js				\
>   	qemu/SystemEdit.js				\
>   	qemu/USBEdit.js					\
> +	qemu/VirtiofsEdit.js				\
>   	sdn/Browser.js					\
>   	sdn/ControllerView.js				\
>   	sdn/Status.js					\
> diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
> index 90011a8f..0f242ae1 100644
> --- a/www/manager6/Utils.js
> +++ b/www/manager6/Utils.js
> @@ -1645,6 +1645,7 @@ Ext.define('PVE.Utils', {
>   	serial: 4,
>   	rng: 1,
>   	tpmstate: 1,
> +	virtiofs: 10,
>       },
>   
>       // we can have usb6 and up only for specific machine/ostypes
> diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
> index c6d193fc..34aeb51e 100644
> --- a/www/manager6/qemu/HardwareView.js
> +++ b/www/manager6/qemu/HardwareView.js
> @@ -319,6 +319,16 @@ Ext.define('PVE.qemu.HardwareView', {
>   	    never_delete: !caps.nodes['Sys.Console'],
>   	    header: gettext("VirtIO RNG"),
>   	};
> +	for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
> +	    let confid = "virtiofs" + i.toString();
> +	    rows[confid] = {
> +		group: 50,
> +		order: i,
> +		iconCls: 'folder',
> +		editor: 'PVE.qemu.VirtiofsEdit',
> +		header: gettext('Virtiofs') + ' (' + confid +')',
> +	    };
> +	}
>   
>   	var sorterFn = function(rec1, rec2) {
>   	    var v1 = rec1.data.key;
> @@ -595,6 +605,7 @@ Ext.define('PVE.qemu.HardwareView', {
>   	    const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
>   	    const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
>   	    const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
> +	    const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
>   
>   	    me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
>   	    me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
> @@ -604,6 +615,7 @@ Ext.define('PVE.qemu.HardwareView', {
>   	    me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
>   	    efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
>   	    me.down('#addTpmState').setDisabled(noVMConfigDiskPerm || isAtLimit('tpmstate'));
> +	    me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
>   	    me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
>   
>   	    if (!rec) {
> @@ -748,6 +760,13 @@ Ext.define('PVE.qemu.HardwareView', {
>   				disabled: !caps.nodes['Sys.Console'],
>   				handler: editorFactory('RNGEdit'),
>   			    },
> +			    {
> +				text: gettext("Virtiofs"),
> +				itemId: 'addVirtiofs',
> +				iconCls: 'fa fa-folder',
> +				disabled: !caps.nodes['Sys.Console'],
> +				handler: editorFactory('VirtiofsEdit'),
> +			    },
>   			],
>   		    }),
>   		},
> diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
> new file mode 100644
> index 00000000..0bbb5213
> --- /dev/null
> +++ b/www/manager6/qemu/VirtiofsEdit.js
> @@ -0,0 +1,137 @@
> +Ext.define('PVE.qemu.VirtiofsInputPanel', {
> +    extend: 'Proxmox.panel.InputPanel',
> +    xtype: 'pveVirtiofsInputPanel',
> +    onlineHelp: 'qm_virtiofs',
> +
> +    insideWizard: false,
> +
> +    onGetValues: function(values) {
> +	var me = this;
> +	var confid = me.confid;
> +	var params = {};
> +	delete values.delete;
> +	params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
> +	return params;
> +    },
> +
> +    setSharedfiles: function(confid, data) {
> +	var me = this;
> +	me.confid = confid;
> +	me.virtiofs = data;
> +	me.setValues(me.virtiofs);
> +    },
> +    initComponent: function() {
> +	let me = this;
> +
> +	me.nodename = me.pveSelNode.data.node;
> +	if (!me.nodename) {
> +	    throw "no node name specified";
> +	}
> +	me.items = [
> +	    {
> +		xtype: 'pveDirMapSelector',
> +		emptyText: 'dirid',
> +		nodename: me.nodename,
> +		fieldLabel: gettext('Directory ID'),
> +		name: 'dirid',
> +		allowBlank: false,
> +	    },
> +	    {
> +		xtype: 'proxmoxKVComboBox',
> +		fieldLabel: gettext('Cache'),
> +		name: 'cache',
> +		value: '__default__',
> +		deleteDefaultValue: false,
> +		comboItems: [
> +		    ['__default__', Proxmox.Utils.defaultText + ' (auto)'],
> +		    ['auto', 'auto'],
> +		    ['always', 'always'],
> +		    ['never', 'never'],
> +		],
> +	    },
> +	    {
> +		xtype: 'proxmoxcheckbox',
> +		fieldLabel: gettext('Writeback cache'),
> +		name: 'writeback',
> +	    },
> +	    {
> +		xtype: 'proxmoxcheckbox',
> +		fieldLabel: gettext('expose-xattr'),

nit: "Enable xattr support" or "Expose extended attributes" / "Expose 
xattrs" could be a little user friendlier

> +		name: 'expose-xattr',
> +	    },
> +	    {
> +		xtype: 'proxmoxcheckbox',
> +		fieldLabel: gettext('expose-acl (implies expose-xattr)'),

nit: "Enable POSIX ACLs support" or "Expose POSIX ACLs" as above

I'm also thinking about if it wouldn't be a little clearer for users if 
either:

- 'expose-acl' is grayed out until 'expose-xattr' is enabled (maybe 
indent the 'expose-attr' option if possible to indicate that the one 
needs the other), or
- 'expose-xattr' is grayed out and set to true below if 'expose-acl' is 
enabled

AFAICS if virtiofsd is started with --xattr it only makes any xattr() 
filesystem calls fail with ENOSYS, and is automatically enabled if 
--posix-acl (or --security-label for that matter) is provided, so 
providing both instead of only one of them on the cmdline should make no 
difference...

> +		name: 'expose-acl',
> +		listeners: {
> +		    change: function(f, value) {
> +			let xattr = me.down('field[name=expose-xattr]');
> +			xattr.setDisabled(value);
> +			if (value) {
> +			    xattr.setValue(0);

...so `xattr.setValue(1);` should be possible here.

> +			}
> +		    },
> +		},
> +	    },

nit: and on another note, I think at least both of the xattr options 
(maybe also Direct IO?) could be put in "Advanced" section to make this 
a little shorter for the most important options.

> +	    {
> +		xtype: 'proxmoxcheckbox',
> +		fieldLabel: gettext('Direct IO'),

nit: users could benefit if this is called "Allow Direct IO" as only 
"Direct IO" could imply that all IO done by the guest is suddenly done 
with {O,IOCB}_DIRECT, which is obviously not the case.

> +		name: 'direct-io',
> +	    },
> +	];
> +
> +	me.virtiofs = {};
> +	me.confid = 'virtiofs0';
> +	me.callParent();
> +    },
> +});
> +
> +Ext.define('PVE.qemu.VirtiofsEdit', {
> +    extend: 'Proxmox.window.Edit',
> +
> +    subject: gettext('Filesystem Passthrough'),
> +
> +    initComponent: function() {
> +	var me = this;
> +
> +	me.isCreate = !me.confid;
> +
> +	var ipanel = Ext.create('PVE.qemu.VirtiofsInputPanel', {
> +	    confid: me.confid,
> +	    pveSelNode: me.pveSelNode,
> +	    isCreate: me.isCreate,
> +	});
> +
> +	Ext.applyIf(me, {
> +	    items: ipanel,
> +	});
> +
> +	me.callParent();
> +
> +	me.load({
> +	    success: function(response) {
> +		me.conf = response.result.data;
> +		var i, confid;
> +		if (!me.isCreate) {
> +		    var value = me.conf[me.confid];
> +		    var virtiofs = PVE.Parser.parsePropertyString(value, "dirid");
> +		    if (!virtiofs) {
> +			Ext.Msg.alert(gettext('Error'), 'Unable to parse virtiofs options');
> +			me.close();
> +			return;
> +		    }

I'm not sure about this here, but shouldn't this ever be reachable since 
the virtiofs entry will automatically disappear as soon as the virtiofs 
options are invalid?

But doesn't mean this shouldn't be here.

> +		    ipanel.setSharedfiles(me.confid, virtiofs);
> +		} else {
> +		    for (i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
> +			confid = 'virtiofs' + i.toString();
> +			if (!Ext.isDefined(me.conf[confid])) {
> +			    me.confid = confid;
> +			    break;
> +			}
> +		    }
> +		    ipanel.setSharedfiles(me.confid, {});
> +		}
> +	    },
> +	});
> +    },
> +});



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH manager v14 10/12] ui: add resource mapping view for directories
  2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 10/12] ui: add resource mapping view for directories Markus Frank
  2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
@ 2025-04-02 13:42   ` Daniel Kral
  1 sibling, 0 replies; 30+ messages in thread
From: Daniel Kral @ 2025-04-02 13:42 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

One comment inline, else LGTM and with this addressed consider this as:

Reviewed-by: Daniel Kral <d.kral@proxmox.com>

On 3/4/25 12:58, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v14:
> * return HTML encoded comment
> 
>   www/manager6/Makefile         |  1 +
>   www/manager6/dc/Config.js     | 10 +++++++++
>   www/manager6/dc/DirMapView.js | 42 +++++++++++++++++++++++++++++++++++
>   3 files changed, 53 insertions(+)
>   create mode 100644 www/manager6/dc/DirMapView.js
> 
> diff --git a/www/manager6/Makefile b/www/manager6/Makefile
> index 4b8677e3..57c4d377 100644
> --- a/www/manager6/Makefile
> +++ b/www/manager6/Makefile
> @@ -189,6 +189,7 @@ JSSRC= 							\
>   	dc/RealmSyncJob.js				\
>   	dc/PCIMapView.js				\
>   	dc/USBMapView.js				\
> +	dc/DirMapView.js				\
>   	lxc/CmdMenu.js					\
>   	lxc/Config.js					\
>   	lxc/CreateWizard.js				\
> diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
> index 74728c83..2958fb88 100644
> --- a/www/manager6/dc/Config.js
> +++ b/www/manager6/dc/Config.js
> @@ -329,6 +329,16 @@ Ext.define('PVE.dc.Config', {
>   			    title: gettext('USB Devices'),
>   			    flex: 1,
>   			},
> +			{
> +			    xtype: 'splitter',
> +			    collapsible: false,
> +			    performCollapse: false,
> +			},
> +			{
> +			    xtype: 'pveDcDirMapView',
> +			    title: gettext('Directories'),
> +			    flex: 1,
> +			},
>   		    ],
>   		},
>   	    );
> diff --git a/www/manager6/dc/DirMapView.js b/www/manager6/dc/DirMapView.js
> new file mode 100644
> index 00000000..ff0ce633
> --- /dev/null
> +++ b/www/manager6/dc/DirMapView.js
> @@ -0,0 +1,42 @@
> +Ext.define('pve-resource-dir-tree', {
> +    extend: 'Ext.data.Model',
> +    idProperty: 'internalId',
> +    fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
> +});
> +
> +Ext.define('PVE.dc.DirMapView', {
> +    extend: 'PVE.tree.ResourceMapTree',
> +    alias: 'widget.pveDcDirMapView',
> +
> +    editWindowClass: 'PVE.window.DirMapEditWindow',
> +    baseUrl: '/cluster/mapping/dir',
> +    mapIconCls: 'fa fa-folder',
> +    entryIdProperty: 'path',
> +
> +    store: {
> +	sorters: 'text',
> +	model: 'pve-resource-dir-tree',
> +	data: {},
> +    },
> +
> +    columns: [
> +	{
> +	    xtype: 'treecolumn',
> +	    text: gettext('ID/Node'),
> +	    dataIndex: 'text',
> +	    width: 200,
> +	},
> +	{
> +	    text: gettext('announce-submounts'),
> +	    dataIndex: 'announce-submounts',
> +	},

This could also be in an advanced section as most users probably don't 
want to disable this. I'm not sure about how much overhead setting 
submounts in FUSE adds, but it should probably be negligible enough.

> +	{
> +	    header: gettext('Comment'),
> +	    dataIndex: 'description',
> +	    renderer: function(value, _meta, record) {
> +		return Ext.String.htmlEncode(value ?? record.data.comment);
> +	    },
> +	    flex: 1,
> +	},
> +    ],
> +});



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs
  2025-03-04 11:57 ` [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
  2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
  2025-04-02 13:13   ` Fabian Grünbichler
@ 2025-04-02 13:44   ` Daniel Kral
  2 siblings, 0 replies; 30+ messages in thread
From: Daniel Kral @ 2025-04-02 13:44 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Some comments inline, with those addressed consider this as:

Reviewed-by: Daniel Kral <d.kral@proxmox.com>

On 3/4/25 12:57, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v14:
> * addressed formulation nits
> * added paragraph about expose-acl & expose-xattr
> 
>   qm.adoc | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>   1 file changed, 97 insertions(+), 2 deletions(-)
> 
> diff --git a/qm.adoc b/qm.adoc
> index 4bb8f2c..86b3877 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -1202,6 +1202,100 @@ recommended to always use a limiter to avoid guests using too many host
>   resources. If desired, a value of '0' for `max_bytes` can be used to disable
>   all limits.
>   
> +[[qm_virtiofs]]
> +Virtio-fs
> +~~~~~~~~~
> +
> +Virtio-fs is a shared filesystem designed for virtual environments. It allows to
> +share a directory tree available on the host by mounting it within VMs. It does
> +not use the network stack and aims to offer similar performance and semantics as
> +the source filesystem.
> +
> +To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
> +needs to run in the background. This happens automatically in {pve} when
> +starting a VM using a virtio-fs mount.
> +
> +Linux VMs with kernel >=5.4 support virtio-fs by default.

It should be mentioned that this does not mean that all features for 
virtiofs are available on all kernels and of course distributions can 
also unselect the driver in their configs.

E.g. exposing POSIX ACLs on Fedora 32 with an older kernel did not work 
until exposing them was disabled on the host.

> +
> +There is a guide available on how to utilize virtio-fs in Windows VMs.
> +https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system

style nit: the full url could rather be a hyperlink on e.g. "guide"

> +
> +Known Limitations
> +^^^^^^^^^^^^^^^^^
> +
> +* If virtiofsd crashes, its mount point will hang in the VM until the VM
> +is completely stopped.
> +* virtiofsd not responding may result in a hanging mount in the VM, similar to
> +an unreachable NFS.
> +* Memory hotplug does not work in combination with virtio-fs (also results in
> +hanging access).
> +* Memory related features such as live migration, snapshots, and hibernate are
> +not available with virtio-fs devices.
> +* Windows cannot understand ACLs in the context of virtio-fs. Therefore, do not
> +expose ACLs for Windows VMs, otherwise the virtio-fs device will not be
> +visible within the VM.
> +
> +Add Mapping for Shared Directories
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +To add a mapping for a shared directory, you can use the API directly with
> +`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
> +
> +----
> +pvesh create /cluster/mapping/dir --id dir1 \
> +    --map node=node1,path=/path/to/share1 \
> +    --map node=node2,path=/path/to/share2,announce-submounts=1 \
> +----
> +
> +Set `announce-submounts` to `1` if multiple filesystems are mounted in a shared
> +directory. This tells the guest which directories are mount points to prevent
> +data loss/corruption. With `announce-submounts`, virtiofsd reports a different
> +device number for each submount it encounters. Without it, duplicates may be
> +created because inode IDs are only unique on a single filesystem.
> +
> +Add virtio-fs to a VM
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
> +anything between 0 and 9) to the VM config and use a directory ID (dirid) that
> +has been configured in the resource mapping. Additionally, you can set the
> +`cache` option to either `always`, `never`, or `auto` (default: `auto`),
> +depending on your requirements. How the different caching modes behave can be
> +read at https://lwn.net/Articles/774495/ under the "Caching Modes" section. To

such a shame that the RFC has a great description but none of that is 
put in the actual kernel documentation or at least on the virtiofs site 
itself ;).

nit: this could also be a hyperlink, e.g. "can be read [here under the 
"Caching Modes" section]"

on another thought, it would maybe even be nice if the caching mode 
description is incorporated here as it shouldn't change in the future. 
also doesn't confuse users about suddenly reading a RFC and the 
'shared/no_shared' option being listed which we do not expose.

> +enable writeback cache set `writeback` to `1`.
> +
> +Virtiofsd supports ACL and xattr passthrough (can be enabled with the
> +`expose-acl` and `expose-xattr` options), allowing the guest to access ACLs and
> +xattrs if the underlying host filesystem supports them, but they must also be
> +compatible with the guest filesystem (for example most Linux filesystems support
> +ACLs, while Windows filesystems do not).
> +
> +The `expose-acl` option automatically implies `expose-xattr`, that is, it makes
> +no difference if you set `expose-xattr` to `0` if `expose-acl` is set to `1`.
> +
> +If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
> +parameter to `1` (default: `0`). This will degrade performance, but is useful if
> +applications do their own caching.
> +
> +----
> +qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
> +qm set <vmid> -virtiofs1 <dirid>,cache=never,expose-xattr=1
> +qm set <vmid> -virtiofs2 <dirid>,expose-acl=1,writeback=1
> +----
> +
> +To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
> +following command inside the guest:
> +
> +----
> +mount -t virtiofs <mount tag> <mount point>
> +----
> +
> +The dirid associated with the path on the current node is also used as the mount
> +tag (name used to mount the device on the guest).

nit: <mount tag> in the example above could then just be just <dirid> 
for brevity imo.

> +
> +For more information on available virtiofsd parameters, see the
> +https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
> +
>   [[qm_bootorder]]
>   Device Boot Order
>   ~~~~~~~~~~~~~~~~~
> @@ -1885,8 +1979,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
>   
>   [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
>   
> -Where `<type>` is the hardware type (currently either `pci` or `usb`) and
> -`<options>` are the device mappings and other configuration parameters.
> +Where `<type>` is the hardware type (currently either `pci`, `usb` or
> +xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
> +configuration parameters.
>   
>   Note that the options must include a map property with all identifying
>   properties of that hardware, so that it's possible to verify the hardware did



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs
  2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
                   ` (14 preceding siblings ...)
  2025-04-02 13:17 ` Fabian Grünbichler
@ 2025-04-02 13:45 ` Daniel Kral
  15 siblings, 0 replies; 30+ messages in thread
From: Daniel Kral @ 2025-04-02 13:45 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

On 3/4/25 12:57, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
> 
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
> 
> I did not get virtiofsd to run with run_command without creating
> zombie processes after stutdown. So I replaced run_command with exec
> for now. Maybe someone can find out why this happens.
> 
> 
> changes in v14:
> * disallow commas and equal signs in path until the path can be quoted
>   in property strings
> * addressed style nits and improved formatting
> * use max_virtiofs() in check_vm_create_dir_perm
> * removed unnecessary checks after parse_property_string
> * find_on_current_node returns only one entry
> * improved docs
> * added missing imports/uses
> 
> changes in v13:
> * removed acl/xattr attributes in node config
> * renamed acl/xattr in virtiofs qemu config to expose-acl/expose-xattr
> * renamed submounts in node config to announce-submounts
> * the "disable snapshot (with RAM) and hibernate with virtio-fs devices"
>   patch now uses the check_non_migratable_resources function
> * rewritten the part about announce-submounts in pve-docs patch
> 
> Changes in v12:
> * rebase to master as most patches could not be applied anymore
> 
> Changes in v11:
> * made submounts option on by default in WebUI and section config
> * PVE::QemuServer::Virtiofs dependency removed in QemuServer/Memory.pm
> * Minor changes to function/variable names
> * Disable snapshots (with RAM) and hibernate due to incompatibility

Tested this series with three different guests:

- Debian 12.7 + Debian 12.10 (Kernel 6.1 series)
- Windows 11 Build 22631
- Fedora 32 (Kernel 5.6 series)

Picked the last because it's quite an old kernel and not debian-based.

Specific error cases on linux guests (for both Debian and Fedora):

- trying to snapshot VM with RAM with virtiofs -> results in error 
before snapshot of VM with RAM can be done
- trying to hibernate VM with virtiofs -> results in error before VM 
would be hibernated

xattr + POSIX ACLs:

- trying to `getfacl` on a file with POSIX ACLs set without 
`expose-xattr` nor `expose-acl` results in just the file mode 
permissions being printed
- trying to `getfacl` on a file with POSIX ACLs set with `expose-xattr` 
and `expose-acl` results in the POSIX ACLs also printed
- Fedora's kernel was probably too old to allow exposing the POSIX ACLs 
and resulted in a "Connection refused" if expose-acl was enabled (on the 
host syslog: Cannot enable posix ACLs, client does not support it)

Windows-specific error cases:

- starting a windows VM with POSIX ACLs enabled prints the warning 
correctly; also cannot be mounted inside the windows vm just as the docs say
- mounting a virtiofs mountpoint inside a windows vm with posix acls 
disabled works fine; for that matter, if solely xattr is enabled, it 
works as expected too

Everything works as expected here, thanks for the work on this!

Also tested this feature with multiple guests accessing the same shared 
file system "at the same time", i.e. just me working as a user on 
multiple files + writing to different files at the same time with `dd`.

I had some minor nits throughout the series, but else consider this 
series as:

Tested-by: Daniel Kral <d.kral@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH manager v14 12/12] ui: add options to add virtio-fs to qemu config
  2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
@ 2025-04-02 14:06     ` Daniel Kral
  0 siblings, 0 replies; 30+ messages in thread
From: Daniel Kral @ 2025-04-02 14:06 UTC (permalink / raw)
  To: Proxmox VE development discussion, Laurențiu Leahu-Vlăducu

On 4/2/25 12:36, Laurențiu Leahu-Vlăducu wrote:
> Is it a good idea to use cache=auto by default? I know that "cache=auto" 
> is the upstream default, but I was wondering whether it's not safer to 
> use "never" by default in PVE to minimize the amount of potential 
> issues. Feel free to contradict me, though ;)
> 
> By searching the internet I found some reports regarding cache=auto, 
> e.g. https://discuss.linuxcontainers.org/t/current-state-of-virtiofs- 
> under-virtual-machines/19166

Good point!

Additionally, there's a 'metadata' option upstream which seems like a 
compromise and might be another good option to add as soon as it's 
available from the Debian repositories or build them ourselves.

On 4/2/25 12:36, Laurențiu Leahu-Vlăducu wrote:
> I think some of these options should be marked as advanced, as they 
> won't be needed in most cases (feel free to contradict me, though).
> 
> While it's not always immediately clear what some of these options do, I 
> think it's hard to summarize the functionality in a short tooltip, so 
> linking to the docs (as you already do) is fine to me.
> 
> What is still unclear to me is what some caching combinations do, e.g. 
> the user can set Cache to Never, but can also enable "Writeback cache". 
> I know this has to do with QEMU and you simply exposed the GUI for 
> setting the advanced options, but I was just wondering whether it 
> wouldn't be possible to expose these options to our users in a way that 
> the difference between the modes is clearer, e.g. "Read cache" and 
> "Writeback cache". Feel free to use some entirely different naming, 
> though ;)

I also agree putting it at least in the "Advanced" section, but I'd not 
change the names too much from what virtiofs is exposing, but rather 
make it clear in the documentation or in some other form what the 
difference between both are.

The cache policy set by `cache={auto,always,never}` sets O_DIRECT for 
files with `cache=never` and FUSE's `KEEP_CACHE` for files with 
`cache=always` [0].

Some details for the writeback option are documented here [1].

[0] 
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/src/passthrough/mod.rs#L953
[1] 
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/src/passthrough/mod.rs#L231


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [pve-devel] [PATCH guest-common v14 2/12] add dir mapping section config
  2025-04-02 13:14   ` Fabian Grünbichler
@ 2025-04-02 15:20     ` Markus Frank
  0 siblings, 0 replies; 30+ messages in thread
From: Markus Frank @ 2025-04-02 15:20 UTC (permalink / raw)
  To: Proxmox VE development discussion, Fabian Grünbichler

Thanks for the feedback.

On  2025-04-02 15:14, Fabian Grünbichler wrote:
> On March 4, 2025 12:57 pm, Markus Frank wrote:
>> Adds a config file for directories by using a 'map' property string for
>> each node mapping.
>>
>> Next to node & path, there is the optional announce-submounts parameter
>> which forces virtiofsd to report a different device number for each
>> submount it encounters. Without it, duplicates may be created because
>> inode IDs are only unique on a single filesystem.
> 
> some things about the announce-submounts option that might be worthwhile
> to consider:
> - does it hurt to set announce-submounts if there are none?
> - it sounds like not setting it if there are any is dangerous
> 
> why don't we always set it? "announce-submounts" is also a really weird
> name if we extend this to managed bind-mounts for containers (which I'd
> very much like to do), if we keep it it should be a flag denoting that
> this directory contains further mountpoints (which means it
> requires/should default to announce-submounts for virtiofsd, and
> recursive bind mounting for LXC)?
I agree, I think it is best to remove this option altogether and always set it.
> 
>>
>> example config:
>> ```
>> some-dir-id
>> 	map node=node1,path=/mnt/share/,announce-submounts=1
>> 	map node=node2,path=/mnt/share/,
>> ```
>>
>> Signed-off-by: Markus Frank <m.frank@proxmox.com>
>> ---
>> v14:
>> * disallow commas and equal signs in path until the path can be quoted
>>   in property strings
>> * addressed style nits and improved formatting
>>
>>   src/Makefile           |   1 +
>>   src/PVE/Mapping/Dir.pm | 196 +++++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 197 insertions(+)
>>   create mode 100644 src/PVE/Mapping/Dir.pm
>>
>> diff --git a/src/Makefile b/src/Makefile
>> index cbc40c1..030e7f7 100644
>> --- a/src/Makefile
>> +++ b/src/Makefile
>> @@ -15,6 +15,7 @@ install: PVE
>>   	install -m 0644 PVE/StorageTunnel.pm ${PERL5DIR}/PVE/
>>   	install -m 0644 PVE/Tunnel.pm ${PERL5DIR}/PVE/
>>   	install -d ${PERL5DIR}/PVE/Mapping
>> +	install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
>>   	install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
>>   	install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
>>   	install -d ${PERL5DIR}/PVE/VZDump
>> diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
>> new file mode 100644
>> index 0000000..581ec39
>> --- /dev/null
>> +++ b/src/PVE/Mapping/Dir.pm
>> @@ -0,0 +1,196 @@
>> +package PVE::Mapping::Dir;
>> +
>> +use strict;
>> +use warnings;
>> +
>> +use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
>> +use PVE::INotify;
>> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
>> +use PVE::SectionConfig;
>> +
>> +use base qw(PVE::SectionConfig);
>> +
>> +my $FILENAME = 'mapping/dir.cfg';
>> +
>> +cfs_register_file($FILENAME,
>> +    sub { __PACKAGE__->parse_config(@_); },
>> +    sub { __PACKAGE__->write_config(@_); });
>> +
>> +
>> +# so we don't have to repeat the type every time
>> +sub parse_section_header {
>> +    my ($class, $line) = @_;
>> +
>> +    if ($line =~ m/^(\S+)\s*$/) {
>> +	my $id = $1;
>> +	my $errmsg = undef; # set if you want to skip whole section
>> +	eval { PVE::JSONSchema::pve_verify_configid($id) };
>> +	$errmsg = $@ if $@;
>> +	my $config = {}; # to return additional attributes
>> +	return ('dir', $id, $errmsg, $config);
>> +    }
>> +    return undef;
>> +}
>> +
>> +sub format_section_header {
>> +    my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
>> +
>> +    return "$sectionId\n";
>> +}
>> +
>> +sub type {
>> +    return 'dir';
>> +}
>> +
>> +# temporary path format that also disallows commas and equal signs
>> +# TODO: Remove this when property_string supports quotation of properties
>> +PVE::JSONSchema::register_format('pve-storage-path-in-property-string', \&verify_path);
>> +sub verify_path {
>> +    my ($path, $noerr) = @_;
>> +
>> +    if ($path !~ m|^/[^;,=\(\)]+|) {
>> +	return undef if $noerr;
>> +	die "Value does not look like a valid absolute path."
>> +	    ." These symbols are currently not allowed in path: ;,=())\n";
> 
> stray ')' in the error message
> 
>> +    }
>> +    return $path;
>> +}
>> +
>> +my $map_fmt = {
>> +    node => get_standard_option('pve-node'),
>> +    path => {
>> +	description => "Absolute directory path that should be shared with the guest.",
>> +	type => 'string',
>> +	format => 'pve-storage-path-in-property-string',
>> +    },
>> +    'announce-submounts' => {
>> +	type => 'boolean',
>> +	description => "Announce that the directory contains other mounted file systems."
>> +	    ." If this is not set and multiple file systems are mounted, the guest may"
>> +	    ." encounter duplicates due to file system specific inode IDs.",
>> +	optional => 1,
>> +	default => 1,
>> +    },
>> +};
>> +
>> +my $defaultData = {
>> +    propertyList => {
>> +	id => {
>> +	    type => 'string',
>> +	    description => "The ID of the directory mapping",
>> +	    format => 'pve-configid',
>> +	},
>> +	description => {
>> +	    type => 'string',
>> +	    description => "Description of the directory mapping",
>> +	    optional => 1,
>> +	    maxLength => 4096,
>> +	},
>> +	map => {
>> +	    type => 'array',
>> +	    description => 'A list of maps for the cluster nodes.',
>> +	    optional => 1,
>> +	    items => {
>> +		type => 'string',
>> +		format => $map_fmt,
>> +	    },
>> +	},
>> +    },
>> +};
>> +
>> +sub private {
>> +    return $defaultData;
>> +}
>> +
>> +sub options {
>> +    return {
>> +	description => { optional => 1 },
>> +	map => {},
>> +    };
>> +}
>> +
>> +sub assert_valid {
> 
> this is called below in assert_valid_map_list without filtering entries
> so that just those for the current node are considered..
> 
>> +    my ($dir_cfg) = @_;
>> +
>> +    my $path = $dir_cfg->{path};
>> +
>> +    verify_path($path);
>> +
>> +    if (! -e $path) {
>> +	die "Path $path does not exist\n";
>> +    } elsif (! -d $path) {
>> +	die "Path $path exists, but is not a directory\n";
>> +    }
> 
> so these checks don't make sense.. if the directory has to exist on all
> nodes, then the per-node mappings could just be dropped entirely ;)
> 
>> +
>> +    return 1;
>> +};
>> +
>> +sub assert_valid_map_list {
>> +    my ($map_list) = @_;
>> +
>> +    my %count;
>> +    for my $map (@$map_list) {
>> +	my $entry = parse_property_string($map_fmt, $map);
>> +	assert_valid($entry);
>> +	$count{$entry->{node}}++;
>> +    }
>> +    for my $node (keys %count) {
>> +	if ($count{$node} > 1) {
>> +	    die "Node '$node' is specified $count{$node} times.\n";
>> +	}
>> +    }
>> +}
>> +
>> +sub config {
>> +    return cfs_read_file($FILENAME);
>> +}
>> +
>> +sub lock_dir_config {
>> +    my ($code, $errmsg) = @_;
>> +
>> +    cfs_lock_file($FILENAME, undef, $code);
>> +    if (my $err = $@) {
>> +	$errmsg ? die "$errmsg: $err" : die $err;
>> +    }
>> +}
>> +
>> +sub write_dir_config {
>> +    my ($cfg) = @_;
>> +
>> +    cfs_write_file($FILENAME, $cfg);
>> +}
>> +
>> +sub find_on_current_node {
>> +    my ($id) = @_;
>> +
>> +    my $cfg = config();
>> +    my $node = PVE::INotify::nodename();
>> +
>> +    my $node_mapping = get_node_mapping($cfg, $id, $node);
>> +    if (@{$node_mapping} > 1) {
>> +	die "More than than one directory mapping for node $node.\n";
>> +    }
>> +    return $node_mapping->[0];
>> +}
>> +
>> +sub get_node_mapping {
>> +    my ($cfg, $id, $nodename) = @_;
>> +
>> +    return undef if !defined($cfg->{ids}->{$id});
>> +
>> +    my $res = [];
>> +    my $mapping_list = $cfg->{ids}->{$id}->{map};
>> +    for my $map (@{$mapping_list}) {
>> +	my $entry = eval { parse_property_string($map_fmt, $map) };
>> +	warn $@ if $@;
>> +	if ($entry && $entry->{node} eq $nodename) {
>> +	    push $res->@*, $entry;
>> +	}
>> +    }
>> +    return $res;
>> +}
>> +
>> +PVE::Mapping::Dir->register();
>> +PVE::Mapping::Dir->init();
>> +
>> +1;
>> -- 
>> 2.39.5
>>
>>
>>
>> _______________________________________________
>> pve-devel mailing list
>> pve-devel@lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>>
>>
>>
> 
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2025-04-02 15:20 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-03-04 11:57 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
2025-03-04 11:57 ` [pve-devel] [PATCH cluster v14 1/12] add mapping/dir.cfg for resource mapping Markus Frank
2025-03-04 11:57 ` [pve-devel] [PATCH guest-common v14 2/12] add dir mapping section config Markus Frank
2025-04-02 13:14   ` Fabian Grünbichler
2025-04-02 15:20     ` Markus Frank
2025-04-02 13:41   ` Daniel Kral
2025-03-04 11:57 ` [pve-devel] [PATCH docs v14 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
2025-04-02 13:13   ` Fabian Grünbichler
2025-04-02 13:44   ` Daniel Kral
2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 4/12] control: add virtiofsd as runtime dependency for qemu-server Markus Frank
2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 5/12] fix #1027: virtio-fs support Markus Frank
2025-04-02 13:13   ` Fabian Grünbichler
2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 6/12] migration: check_local_resources for virtiofs Markus Frank
2025-04-02 13:13   ` Fabian Grünbichler
2025-03-04 11:57 ` [pve-devel] [PATCH qemu-server v14 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
2025-03-04 11:57 ` [pve-devel] [PATCH manager v14 08/12] api: add resource map api endpoints for directories Markus Frank
2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 09/12] ui: add edit window for dir mappings Markus Frank
2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 10/12] ui: add resource mapping view for directories Markus Frank
2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
2025-04-02 13:42   ` Daniel Kral
2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 11/12] ui: form: add selector for directory mappings Markus Frank
2025-03-04 11:58 ` [pve-devel] [PATCH manager v14 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
2025-04-02 10:36   ` Laurențiu Leahu-Vlăducu
2025-04-02 14:06     ` Daniel Kral
2025-04-02 13:42   ` Daniel Kral
2025-03-18  9:14 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v14 0/12] virtiofs Markus Frank
2025-04-02 10:36 ` Laurențiu Leahu-Vlăducu
2025-04-02 13:17 ` Fabian Grünbichler
2025-04-02 13:45 ` Daniel Kral

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox
Service provided by Proxmox Server Solutions GmbH | Privacy | Legal