public inbox for pve-devel@lists.proxmox.com
* [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs
@ 2025-04-07 13:49 Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH docs v17 1/10] add doc section for the shared filesystem virtio-fs Markus Frank
                   ` (11 more replies)
  0 siblings, 12 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

Virtio-fs is a shared file system that enables sharing a directory
between the host and guest VMs. It takes advantage of the locality of
virtual machines and the hypervisor to achieve higher throughput than
the 9p remote file system protocol.

build-order:
1. cluster
2. guest-common
3. docs
4. qemu-server
5. manager

I did not get virtiofsd to run via run_command without it leaving
zombie processes after shutdown, so I replaced run_command with exec
for now. Maybe someone can figure out why this happens.
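
For reference, a condensed sketch of the double-fork/exec approach used in
start_virtiofsd() in patch 3/10; $fd and $path stand in for the virtiofsd
control-socket file descriptor and the mapped directory path:

```
use POSIX ();

# Detach virtiofsd from the qemu-server worker: the intermediate child
# calls setsid() and exits immediately, so the exec'd virtiofsd is
# reparented to init and cannot linger as a zombie of the worker.
my $pid = fork() // die "fork failed: $!\n";
if ($pid == 0) {
    POSIX::setsid();
    my $pid2 = fork() // POSIX::_exit(1);
    if ($pid2 == 0) {
        exec('/usr/libexec/virtiofsd', "--fd=$fd", "--shared-dir=$path");
        POSIX::_exit(1); # only reached if exec fails
    }
    POSIX::_exit(0); # intermediate child exits right away
}
waitpid($pid, 0); # parent reaps the intermediate child immediately
```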


v17:
* qemu-server:
  * d/control: 'virtiofsd' in Recommends instead of Depends
  * added check if virtiofsd is installed
* pve-manager:
  * moved directory mappings into their own tab to save vertical space in
    the "Resource Mappings" tab - discussed off-list with Dominik


docs:

Markus Frank (1):
  add doc section for the shared filesystem virtio-fs

 qm.adoc | 102 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 100 insertions(+), 2 deletions(-)



qemu-server:

Markus Frank (4):
  d/control: add virtiofsd to recommends for qemu-server
  fix #1027: virtio-fs support
  migration: check_local_resources for virtiofs
  disable snapshot (with RAM) and hibernate with virtio-fs devices

 PVE/API2/Qemu.pm             |  41 ++++++-
 PVE/QemuServer.pm            |  42 ++++++-
 PVE/QemuServer/Makefile      |   3 +-
 PVE/QemuServer/Memory.pm     |  25 +++--
 PVE/QemuServer/Virtiofs.pm   | 212 +++++++++++++++++++++++++++++++++++
 debian/control               |   5 +-
 test/MigrationTest/Shared.pm |   7 ++
 7 files changed, 320 insertions(+), 15 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm



manager:

Markus Frank (5):
  api: add resource map api endpoints for directories
  ui: add edit window for dir mappings
  ui: add resource mapping view for directories
  ui: form: add selector for directory mappings
  ui: add options to add virtio-fs to qemu config

 PVE/API2/Cluster/Mapping.pm         |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm     | 308 ++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile   |   1 +
 www/manager6/Makefile               |   4 +
 www/manager6/Utils.js               |   1 +
 www/manager6/dc/Config.js           |   6 +
 www/manager6/dc/DirMapView.js       |  38 ++++
 www/manager6/form/DirMapSelector.js |  63 ++++++
 www/manager6/qemu/HardwareView.js   |  19 ++
 www/manager6/qemu/VirtiofsEdit.js   | 143 +++++++++++++
 www/manager6/window/DirMapEdit.js   | 204 ++++++++++++++++++
 11 files changed, 794 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
 create mode 100644 www/manager6/dc/DirMapView.js
 create mode 100644 www/manager6/form/DirMapSelector.js
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js
 create mode 100644 www/manager6/window/DirMapEdit.js

-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH docs v17 1/10] add doc section for the shared filesystem virtio-fs
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 2/10] d/control: add virtiofsd to Recommends for qemu-server Markus Frank
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
---
nothing changed in v17

 qm.adoc | 102 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 100 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index 3eadac6..e0e1178 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1225,6 +1225,103 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared filesystem designed for virtual environments. It allows
+sharing a directory tree on the host by mounting it within VMs. It does not use
+the network stack and aims to offer performance and semantics similar to those
+of the source filesystem.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background. This happens automatically in {pve} when
+starting a VM using a virtio-fs mount.
+
+Linux VMs with kernel >=5.4 support virtio-fs by default
+(https://www.kernelconfig.io/CONFIG_VIRTIO_FS[virtiofs kernel module]), but some
+features require a newer kernel.
+
+There is a
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system[guide]
+available on how to utilize virtio-fs in Windows VMs.
+
+Known Limitations
+^^^^^^^^^^^^^^^^^
+
+* If virtiofsd crashes, its mount point will hang in the VM until the VM
+is completely stopped.
+* If virtiofsd stops responding, the mount in the VM may hang, similar to an
+unreachable NFS server.
+* Memory hotplug does not work in combination with virtio-fs (also results in
+hanging access).
+* Memory-related features such as live migration, snapshots, and hibernate are
+not available with virtio-fs devices.
+* Windows cannot understand ACLs in the context of virtio-fs. Therefore, do not
+expose ACLs for Windows VMs; otherwise, the virtio-fs device will not be
+visible within the VM.
+
+Add Mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping for a shared directory, you can use the API directly with
+`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
+
+----
+pvesh create /cluster/mapping/dir --id dir1 \
+    --map node=node1,path=/path/to/share1 \
+    --map node=node2,path=/path/to/share2
+----
+
+Add virtio-fs to a VM
+^^^^^^^^^^^^^^^^^^^^^
+
+To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
+anything between 0 and 9) to the VM config and use a directory ID (dirid) that
+has been configured in the resource mapping. Additionally, you can set the
+`cache` option to `always`, `never`, `metadata`, or `auto` (default: `auto`),
+depending on your requirements. The behavior of the different caching modes is
+described https://lwn.net/Articles/774495/[in the "Caching Modes" section of
+this LWN article]. To enable the writeback cache, set `writeback` to `1`.
+
+Virtiofsd supports ACL and xattr passthrough (enabled with the `expose-acl` and
+`expose-xattr` options), allowing the guest to access ACLs and xattrs if the
+underlying host filesystem supports them. They must also be compatible with the
+guest filesystem (for example, most Linux filesystems support ACLs, while
+Windows filesystems do not).
+
+The `expose-acl` option automatically implies `expose-xattr`; that is, setting
+`expose-xattr` to `0` has no effect when `expose-acl` is set to `1`.
+
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
+parameter to `1` (default: `0`). This will degrade performance, but is useful if
+applications do their own caching.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,cache=never,expose-xattr=1
+qm set <vmid> -virtiofs2 <dirid>,expose-acl=1,writeback=1
+----
+
+To temporarily mount virtio-fs in a guest VM with the Linux kernel virtio-fs
+driver, run the following command inside the guest:
+
+----
+mount -t virtiofs <dirid> <mount point>
+----
+
+To have a persistent virtiofs mount, you can create an fstab entry:
+
+----
+<dirid> <mount point> virtiofs rw,relatime 0 0
+----
+
+The dirid associated with the path on the current node is also used as the mount
+tag (the name used to mount the device inside the guest).
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
 [[qm_bootorder]]
 Device Boot Order
 ~~~~~~~~~~~~~~~~~
@@ -1913,8 +2010,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
 
 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
 
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
 
 Note that the options must include a map property with all identifying
 properties of that hardware, so that it's possible to verify the hardware did
-- 
2.39.5





^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH qemu-server v17 2/10] d/control: add virtiofsd to Recommends for qemu-server
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH docs v17 1/10] add doc section for the shared filesystem virtio-fs Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 3/10] fix #1027: virtio-fs support Markus Frank
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v17:
* virtiofsd in Recommends instead of Depends

 debian/control | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/debian/control b/debian/control
index 0acc3126..2a37ef7e 100644
--- a/debian/control
+++ b/debian/control
@@ -58,8 +58,9 @@ Depends: dbus,
          ${misc:Depends},
          ${perl:Depends},
          ${shlibs:Depends},
-Recommends: proxmox-backup-file-restore (>= 2.1.9-2),
-            libpve-network-perl (>= 0.8.3),
+Recommends: libpve-network-perl (>= 0.8.3),
+            proxmox-backup-file-restore (>= 2.1.9-2),
+            virtiofsd,
 Suggests: pve-edk2-firmware-aarch64, pve-edk2-firmware-riscv,
 Breaks: pve-ha-manager (<< 4.0.1), pve-manager (<= 6.0-13),
 Description: Qemu Server Tools
-- 
2.39.5





^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH qemu-server v17 3/10] fix #1027: virtio-fs support
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH docs v17 1/10] add doc section for the shared filesystem virtio-fs Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 2/10] d/control: add virtiofsd to Recommends for qemu-server Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 4/10] migration: check_local_resources for virtiofs Markus Frank
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

Add support for sharing directories with a guest VM.

virtio-fs needs virtiofsd to be started.
In order to start virtiofsd as a separate process (despite being a
daemon, it does not put itself into the background), a double-fork is
used.

virtiofsd is expected to exit together with QEMU.

The required parameter is dirid; the optional parameters are direct-io,
cache and writeback. Additionally, the expose-xattr and expose-acl
parameters can be set to expose xattr and ACL settings from the shared
filesystem to the guest system.

The dirid gets mapped to the path on the current node and is also used
as the mount tag (the name used to mount the device inside the guest).

example config:
```
virtiofs0: foo,direct-io=1,cache=always,expose-acl=1
virtiofs1: dirid=bar,cache=never,expose-xattr=1,writeback=1
```

For information on the optional parameters, see the corresponding docs
patch and the official GitLab README:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md

Also add a permission check for virtiofs directory access.
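
For non-root users, the added check_dir_perm helper (see the hunk below)
essentially boils down to requiring VM.Config.Disk on the VM plus
Mapping.Use on the referenced directory mapping:

```
# sketch: per virtiofs<N> option, require Mapping.Use on the dir mapping
$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
$rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
```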

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v17:
* added check if virtiofsd is installed

 PVE/API2/Qemu.pm           |  41 ++++++-
 PVE/QemuServer.pm          |  27 ++++-
 PVE/QemuServer/Makefile    |   3 +-
 PVE/QemuServer/Memory.pm   |  25 +++--
 PVE/QemuServer/Virtiofs.pm | 212 +++++++++++++++++++++++++++++++++++++
 5 files changed, 298 insertions(+), 10 deletions(-)
 create mode 100644 PVE/QemuServer/Virtiofs.pm

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 8b51c043..8b7d3cee 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -40,6 +40,7 @@ use PVE::QemuServer::PCI;
 use PVE::QemuServer::QMPHelpers;
 use PVE::QemuServer::RNG;
 use PVE::QemuServer::USB;
+use PVE::QemuServer::Virtiofs qw(max_virtiofs);
 use PVE::QemuMigrate;
 use PVE::RPCEnvironment;
 use PVE::AccessControl;
@@ -838,6 +839,33 @@ my sub check_rng_perm {
     return 1;
 }
 
+my sub check_dir_perm {
+    my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
+
+    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
+    $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+
+    return 1;
+};
+
+my sub check_vm_create_dir_perm {
+    my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
+
+    return 1 if $authuser eq 'root@pam';
+
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+	next if !$param->{$opt};
+	check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
+    }
+
+    return 1;
+};
+
 my $check_vm_modify_config_perm = sub {
     my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
 
@@ -848,7 +876,7 @@ my $check_vm_modify_config_perm = sub {
 	# else, as there the permission can be value dependent
 	next if PVE::QemuServer::is_valid_drivename($opt);
 	next if $opt eq 'cdrom';
-	next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
+	next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
 	next if $opt eq 'tags';
 
 
@@ -1169,6 +1197,7 @@ __PACKAGE__->register_method({
 	    check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
 	    check_rng_perm($rpcenv, $authuser, $vmid, $pool, 'rng0', $param->{rng0})
 		if $param->{rng0};
+	    check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
 
 	    PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
 	    &$check_cpu_model_access($rpcenv, $authuser, $param);
@@ -2072,6 +2101,10 @@ my $update_vm_api  = sub {
 		    check_rng_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
 		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
 		    PVE::QemuConfig->write_config($vmid, $conf);
+		} elsif ($opt =~ m/^virtiofs\d$/) {
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
+		    PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
+		    PVE::QemuConfig->write_config($vmid, $conf);
 		} elsif ($opt eq 'tags') {
 		    assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
 		    delete $conf->{$opt};
@@ -2168,6 +2201,12 @@ my $update_vm_api  = sub {
 		    }
 		    check_rng_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
 		    $conf->{pending}->{$opt} = $param->{$opt};
+		} elsif ($opt =~ m/^virtiofs\d$/) {
+		    if (my $oldvalue = $conf->{$opt}) {
+			check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
+		    }
+		    check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
+		    $conf->{pending}->{$opt} = $param->{$opt};
 		} elsif ($opt eq 'tags') {
 		    assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
 		    $conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ea453667..77d24579 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -33,6 +33,7 @@ use PVE::Exception qw(raise raise_param_exc);
 use PVE::Format qw(render_duration render_bytes);
 use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
 use PVE::HA::Config;
+use PVE::Mapping::Dir;
 use PVE::Mapping::PCI;
 use PVE::Mapping::USB;
 use PVE::INotify;
@@ -63,6 +64,7 @@ use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port
 use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
 use PVE::QemuServer::RNG qw(parse_rng print_rng_device_commandline print_rng_object_commandline);
 use PVE::QemuServer::USB;
+use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
 
 my $have_sdn;
 eval {
@@ -923,6 +925,10 @@ my $netdesc = {
 
 PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
 
+for (my $i = 0; $i < max_virtiofs(); $i++)  {
+    $confdesc->{"virtiofs$i"} = get_standard_option('pve-qm-virtiofs');
+}
+
 my $ipconfig_fmt = {
     ip => {
 	type => 'string',
@@ -3711,8 +3717,18 @@ sub config_to_command {
 	push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough);
     }
 
+    my $virtiofs_enabled = PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
+
     PVE::QemuServer::Memory::config(
-	$conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd);
+	$conf,
+	$vmid,
+	$sockets,
+	$cores,
+	$hotplug_features->{memory},
+	$virtiofs_enabled,
+	$cmd,
+	$machineFlags,
+    );
 
     push @$cmd, '-S' if $conf->{freeze};
 
@@ -3997,6 +4013,8 @@ sub config_to_command {
 	push @$machineFlags, 'confidential-guest-support=sev0';
     }
 
+    PVE::QemuServer::Virtiofs::config($conf, $vmid, $devices);
+
     push @$cmd, @$devices;
     push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
     push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
@@ -5784,6 +5802,8 @@ sub vm_start_nolock {
 	PVE::Tools::run_fork sub {
 	    PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
 
+	    my $virtiofs_sockets = start_all_virtiofsd($conf, $vmid);
+
 	    my $tpmpid;
 	    if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
 		# start the TPM emulator so QEMU can connect on start
@@ -5791,6 +5811,8 @@ sub vm_start_nolock {
 	    }
 
 	    my $exitcode = run_command($cmd, %run_params);
+	    eval { PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets); };
+	    log_warn("closing virtiofs sockets failed - $@") if $@;
 	    if ($exitcode) {
 		if ($tpmpid) {
 		    warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
@@ -6471,6 +6493,9 @@ sub check_mapping_access {
 	    if ($device->{source} && $device->{source} eq '/dev/hwrng') {
 		$rpcenv->check_full($user, "/mapping/hwrng", ['Mapping.Use']);
 	    }
+	} elsif ($opt =~ m/^virtiofs\d$/) {
+	    my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	    $rpcenv->check_full($user, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
 	}
     }
 };
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index 83c6af79..8bcd484e 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -12,7 +12,8 @@ SOURCES=PCI.pm		\
 	CPUConfig.pm	\
 	CGroup.pm	\
 	Drive.pm	\
-	QMPHelpers.pm
+	QMPHelpers.pm	\
+	Virtiofs.pm
 
 .PHONY: install
 install: ${SOURCES}
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index e5024cd2..1111410a 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -336,7 +336,7 @@ sub qemu_memdevices_list {
 }
 
 sub config {
-    my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
+    my ($conf, $vmid, $sockets, $cores, $hotplug, $virtiofs_enabled, $cmd, $machine_flags) = @_;
 
     my $memory = get_current_memory($conf->{memory});
     my $static_memory = 0;
@@ -367,6 +367,9 @@ sub config {
 
     die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
 
+    die "Memory hotplug does not work in combination with virtio-fs.\n"
+	if $hotplug && $virtiofs_enabled;
+
     if ($conf->{numa}) {
 
 	my $numa_totalmemory = undef;
@@ -379,7 +382,8 @@ sub config {
 	    my $numa_memory = $numa->{memory};
 	    $numa_totalmemory += $numa_memory;
 
-	    my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
+	    my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+	    my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
 
 	    # cpus
 	    my $cpulists = $numa->{cpus};
@@ -404,7 +408,7 @@ sub config {
 	    }
 
 	    push @$cmd, '-object', $mem_object;
-	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+	    push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	}
 
 	die "total memory for NUMA nodes must be equal to vm static memory\n"
@@ -418,15 +422,20 @@ sub config {
 		die "host NUMA node$i doesn't exist\n"
 		    if !host_numanode_exists($i) && $conf->{hugepages};
 
-		my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
-		push @$cmd, '-object', $mem_object;
-
 		my $cpus = ($cores * $i);
 		$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
 
-		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+		my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+		my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
+		push @$cmd, '-object', $mem_object;
+		push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
 	    }
 	}
+    } elsif ($virtiofs_enabled) {
+	# kvm: '-machine memory-backend' and '-numa memdev' properties are mutually exclusive
+	push @$cmd, '-object', 'memory-backend-memfd,id=virtiofs-mem'
+	    .",size=$conf->{memory}M,share=on";
+	push @$machine_flags, 'memory-backend=virtiofs-mem';
     }
 
     if ($hotplug) {
@@ -453,6 +462,8 @@ sub print_mem_object {
 	my $path = hugepages_mount_path($hugepages_size);
 
 	return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
+    } elsif ($id =~ m/^virtiofs-mem/) {
+	return "memory-backend-memfd,id=$id,size=${size}M,share=on";
     } else {
 	return "memory-backend-ram,id=$id,size=${size}M";
     }
diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
new file mode 100644
index 00000000..13035c9b
--- /dev/null
+++ b/PVE/QemuServer/Virtiofs.pm
@@ -0,0 +1,212 @@
+package PVE::QemuServer::Virtiofs;
+
+use strict;
+use warnings;
+
+use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);
+use IO::Socket::UNIX;
+use POSIX;
+use Socket qw(SOCK_STREAM);
+
+use PVE::JSONSchema qw(parse_property_string);
+use PVE::Mapping::Dir;
+use PVE::QemuServer::Helpers;
+use PVE::RESTEnvironment qw(log_warn);
+
+use base qw(Exporter);
+
+our @EXPORT_OK = qw(
+max_virtiofs
+start_all_virtiofsd
+);
+
+my $MAX_VIRTIOFS = 10;
+my $socket_path_root = "/run/qemu-server/virtiofsd";
+
+my $virtiofs_fmt = {
+    'dirid' => {
+	type => 'string',
+	default_key => 1,
+	description => "Mapping identifier of the directory mapping to be shared with the guest."
+	    ." Also used as a mount tag inside the VM.",
+	format_description => 'mapping-id',
+	format => 'pve-configid',
+    },
+    'cache' => {
+	type => 'string',
+	description => "The caching policy the file system should use (auto, always, metadata, never).",
+	enum => [qw(auto always metadata never)],
+	default => "auto",
+	optional => 1,
+    },
+    'direct-io' => {
+	type => 'boolean',
+	description => "Honor the O_DIRECT flag passed down by guest applications.",
+	default => 0,
+	optional => 1,
+    },
+    writeback => {
+	type => 'boolean',
+	description => "Enable writeback cache. If enabled, writes may be cached in the guest until"
+	    ." the file is closed or an fsync is performed.",
+	default => 0,
+	optional => 1,
+    },
+    'expose-xattr' => {
+	type => 'boolean',
+	description => "Enable support for extended attributes for this mount.",
+	default => 0,
+	optional => 1,
+    },
+    'expose-acl' => {
+	type => 'boolean',
+	description => "Enable support for POSIX ACLs (enabled ACL implies xattr) for this mount.",
+	default => 0,
+	optional => 1,
+    },
+};
+PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
+
+my $virtiofsdesc = {
+    optional => 1,
+    type => 'string', format => $virtiofs_fmt,
+    description => "Configuration for sharing a directory between host and guest using Virtio-fs.",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
+
+sub max_virtiofs {
+    return $MAX_VIRTIOFS;
+}
+
+sub assert_virtiofs_config {
+    my ($ostype, $virtiofs) = @_;
+
+    my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+    my $acl = $virtiofs->{'expose-acl'};
+    if ($acl && PVE::QemuServer::Helpers::windows_version($ostype)) {
+	die "Please disable ACLs for virtiofs on Windows VMs, otherwise"
+	    ." the virtiofs shared directory cannot be mounted.\n";
+    }
+
+    eval { PVE::Mapping::Dir::assert_valid($dir_cfg) };
+    die "directory mapping invalid: $@\n" if $@;
+}
+
+sub config {
+    my ($conf, $vmid, $devices) = @_;
+
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+
+	next if !$conf->{$opt};
+	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+
+	assert_virtiofs_config($conf->{ostype}, $virtiofs);
+
+	push @$devices, '-chardev', "socket,id=virtiofs$i,path=$socket_path_root/vm$vmid-fs$i";
+
+	# queue-size is set 1024 because of bug with Windows guests:
+	# https://bugzilla.redhat.com/show_bug.cgi?id=1873088
+	# 1024 is also always used in the virtiofs documentations:
+	# https://gitlab.com/virtio-fs/virtiofsd#examples
+	push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+	    .",chardev=virtiofs$i,tag=$virtiofs->{dirid}";
+    }
+}
+
+sub virtiofs_enabled {
+    my ($conf) = @_;
+
+    my $virtiofs_enabled = 0;
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+	next if !$conf->{$opt};
+	parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+	$virtiofs_enabled = 1;
+    }
+    return $virtiofs_enabled;
+}
+
+sub start_all_virtiofsd {
+    my ($conf, $vmid) = @_;
+    my $virtiofs_sockets = [];
+    for (my $i = 0; $i < max_virtiofs(); $i++) {
+	my $opt = "virtiofs$i";
+
+	next if !$conf->{$opt};
+	my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+
+	my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
+	push @$virtiofs_sockets, $virtiofs_socket;
+    }
+    return $virtiofs_sockets;
+}
+
+sub start_virtiofsd {
+    my ($vmid, $fsid, $virtiofs) = @_;
+
+    mkdir $socket_path_root;
+    my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
+    unlink($socket_path);
+    my $socket = IO::Socket::UNIX->new(
+	Type => SOCK_STREAM,
+	Local => $socket_path,
+	Listen => 1,
+    ) or die "cannot create socket - $!\n";
+
+    my $flags = fcntl($socket, F_GETFD, 0)
+	or die "failed to get file descriptor flags: $!\n";
+    fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
+	or die "failed to remove FD_CLOEXEC from file descriptor\n";
+
+    my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+    my $virtiofsd_bin = '/usr/libexec/virtiofsd';
+    if (! -f $virtiofsd_bin) {
+	die "virtiofsd is not installed. To use virtio-fs, install virtiofsd via apt.\n";
+    }
+    my $fd = $socket->fileno();
+    my $path = $dir_cfg->{path};
+
+    my $could_not_fork_err = "could not fork to start virtiofsd\n";
+    my $pid = fork();
+    if ($pid == 0) {
+	POSIX::setsid();
+	$0 = "task pve-vm$vmid-virtiofs$fsid";
+	my $pid2 = fork();
+	if ($pid2 == 0) {
+	    my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
+	    push @$cmd, '--xattr' if $virtiofs->{'expose-xattr'};
+	    push @$cmd, '--posix-acl' if $virtiofs->{'expose-acl'};
+	    push @$cmd, '--announce-submounts';
+	    push @$cmd, '--allow-direct-io' if $virtiofs->{'direct-io'};
+	    push @$cmd, '--cache='.$virtiofs->{cache} if $virtiofs->{cache};
+	    push @$cmd, '--writeback' if $virtiofs->{'writeback'};
+	    push @$cmd, '--syslog';
+	    exec(@$cmd);
+	} elsif (!defined($pid2)) {
+	    die $could_not_fork_err;
+	} else {
+	    POSIX::_exit(0);
+	}
+    } elsif (!defined($pid)) {
+	die $could_not_fork_err;
+    } else {
+	waitpid($pid, 0);
+    }
+
+    # return socket to keep it alive,
+    # so that QEMU will wait for virtiofsd to start
+    return $socket;
+}
+
+sub close_sockets {
+    my @sockets = @_;
+    for my $socket (@sockets) {
+	shutdown($socket, 2);
+	close($socket);
+    }
+}
+
+1;
-- 
2.39.5





^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH qemu-server v17 4/10] migration: check_local_resources for virtiofs
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
                   ` (2 preceding siblings ...)
  2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 3/10] fix #1027: virtio-fs support Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 5/10] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

add dir mapping checks to check_local_resources

Since the VM needs to be powered off for migration, migration should
work with a directory on shared storage regardless of the caching
setting.
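
In short, the hunk below treats every virtiofs<N> entry as a mapped
resource and records the target nodes that lack a matching dir mapping
(a condensed sketch using the names from check_local_resources):

```
# sketch: a virtiofs entry can only migrate to nodes that have a dir
# mapping for its dirid; other nodes are reported as missing mappings
if ($k =~ m/^virtiofs/) {
    my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
    $add_missing_mapping->('dir', $k, $entry->{dirid});
    $mapped_res->{$k} = { name => $entry->{dirid} };
}
```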

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
nothing changed in v17

 PVE/QemuServer.pm            | 10 +++++++++-
 test/MigrationTest/Shared.pm |  7 +++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 77d24579..4bb35381 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2455,6 +2455,7 @@ sub check_local_resources {
     my $nodelist = PVE::Cluster::get_nodelist();
     my $pci_map = PVE::Mapping::PCI::config();
     my $usb_map = PVE::Mapping::USB::config();
+    my $dir_map = PVE::Mapping::Dir::config();
 
     my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
 
@@ -2466,6 +2467,8 @@ sub check_local_resources {
 		$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
 	    } elsif ($type eq 'usb') {
 		$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+	    } elsif ($type eq 'dir') {
+		$entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
 	    }
 	    if (!scalar($entry->@*)) {
 		push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2505,9 +2508,14 @@ sub check_local_resources {
 		next if !$state;
 	    }
 	}
+	if ($k =~ m/^virtiofs/) {
+	    my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+	    $add_missing_mapping->('dir', $k, $entry->{dirid});
+	    $mapped_res->{$k} = { name => $entry->{dirid} };
+	}
 	# sockets are safe: they will recreated be on the target side post-migrate
 	next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
-	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+	push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
     }
 
     die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index e69bf84f..f5fb70ff 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
     },
 );
 
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+    config => sub {
+	return {};
+    },
+);
+
 our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
 $ha_config_module->mock(
     vm_is_ha_managed => sub {
-- 
2.39.5





^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH qemu-server v17 5/10] disable snapshot (with RAM) and hibernate with virtio-fs devices
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
                   ` (3 preceding siblings ...)
  2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 4/10] migration: check_local_resources for virtiofs Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 06/10] api: add resource map api endpoints for directories Markus Frank
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
---
nothing changed in v17

 PVE/QemuServer.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 4bb35381..86fa53e1 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2430,8 +2430,9 @@ sub check_non_migratable_resources {
     my ($conf, $state, $noerr) = @_;
 
     my @blockers = ();
-    if ($state && $conf->{"amd-sev"}) {
-	push @blockers, "amd-sev";
+    if ($state) {
+	push @blockers, "amd-sev" if $conf->{"amd-sev"};
+	push @blockers, "virtiofs" if PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
     }
 
     if (scalar(@blockers) && !$noerr) {
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH manager v17 06/10] api: add resource map api endpoints for directories
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
                   ` (4 preceding siblings ...)
  2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 5/10] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 07/10] ui: add edit window for dir mappings Markus Frank
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
---
nothing changed in v17

 PVE/API2/Cluster/Mapping.pm       |   7 +
 PVE/API2/Cluster/Mapping/Dir.pm   | 308 ++++++++++++++++++++++++++++++
 PVE/API2/Cluster/Mapping/Makefile |   1 +
 3 files changed, 316 insertions(+)
 create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm

diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..9f0dcd2b 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -3,11 +3,17 @@ package PVE::API2::Cluster::Mapping;
 use strict;
 use warnings;
 
+use PVE::API2::Cluster::Mapping::Dir;
 use PVE::API2::Cluster::Mapping::PCI;
 use PVE::API2::Cluster::Mapping::USB;
 
 use base qw(PVE::RESTHandler);
 
+__PACKAGE__->register_method ({
+    subclass => "PVE::API2::Cluster::Mapping::Dir",
+    path => 'dir',
+});
+
 __PACKAGE__->register_method ({
     subclass => "PVE::API2::Cluster::Mapping::PCI",
     path => 'pci',
@@ -41,6 +47,7 @@ __PACKAGE__->register_method ({
 	my ($param) = @_;
 
 	my $result = [
+	    { name => 'dir' },
 	    { name => 'pci' },
 	    { name => 'usb' },
 	];
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index 00000000..f905cef3
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,308 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Mapping::Dir ();
+use PVE::RPCEnvironment;
+use PVE::SectionConfig;
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+    name => 'index',
+    path => '',
+    method => 'GET',
+    # only proxy if we give the 'check-node' parameter
+    proxyto_callback => sub {
+	my ($rpcenv, $proxyto, $param) = @_;
+	return $param->{'check-node'} // 'localhost';
+    },
+    description => "List directory mapping",
+    permissions => {
+	description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or"
+	    ." 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
+	user => 'all',
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    'check-node' => get_standard_option('pve-node', {
+		description => "If given, checks the configurations on the given node for"
+		    ." correctness, and adds relevant diagnostics for the directory to the response.",
+		optional => 1,
+	    }),
+	},
+    },
+    returns => {
+	type => 'array',
+	items => {
+	    type => "object",
+	    properties => {
+		id => {
+		    type => 'string',
+		    description => "The logical ID of the mapping."
+		},
+		map => {
+		    type => 'array',
+		    description => "The entries of the mapping.",
+		    items => {
+			type => 'string',
+			description => "A mapping for a node.",
+		    },
+		},
+		description => {
+		    type => 'string',
+		    description => "A description of the logical mapping.",
+		},
+		checks => {
+		    type => "array",
+		    optional => 1,
+		    description => "A list of checks, only present if 'check-node' is set.",
+		    items => {
+			type => 'object',
+			properties => {
+			    severity => {
+				type => "string",
+				enum => ['warning', 'error'],
+				description => "The severity of the error",
+			    },
+			    message => {
+				type => "string",
+				description => "The message of the error",
+			    },
+			},
+		    }
+		},
+	    },
+	},
+	links => [ { rel => 'child', href => "{id}" } ],
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $check_node = $param->{'check-node'};
+	my $local_node = PVE::INotify::nodename();
+
+	die "wrong node to check - $check_node != $local_node\n"
+	    if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
+
+	my $cfg = PVE::Mapping::Dir::config();
+
+	my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
+
+	my $res = [];
+	for my $id (keys $cfg->{ids}->%*) {
+	    next if !$rpcenv->check_any($authuser, "/mapping/dir/$id", $can_see_mapping_privs, 1);
+	    next if !$cfg->{ids}->{$id};
+
+	    my $entry = dclone($cfg->{ids}->{$id});
+	    $entry->{id} = $id;
+	    $entry->{digest} = $cfg->{digest};
+
+	    if (defined($check_node)) {
+		$entry->{checks} = [];
+		if (my $mappings = PVE::Mapping::Dir::get_node_mapping($cfg, $id, $check_node)) {
+		    if (!scalar($mappings->@*)) {
+			push $entry->{checks}->@*, {
+			    severity => 'warning',
+			    message => "No mapping for node $check_node.",
+			};
+		    }
+		    for my $mapping ($mappings->@*) {
+			eval { PVE::Mapping::Dir::assert_valid($mapping) };
+			if (my $err = $@) {
+			    push $entry->{checks}->@*, {
+				severity => 'error',
+				message => "Invalid configuration: $err",
+			    };
+			}
+		    }
+		}
+	    }
+
+	    push @$res, $entry;
+	}
+
+	return $res;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'get',
+    protected => 1,
+    path => '{id}',
+    method => 'GET',
+    description => "Get directory mapping.",
+    permissions => {
+	check =>['or',
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Use']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+	    ['perm', '/mapping/dir/{id}', ['Mapping.Audit']],
+	],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'object' },
+    code => sub {
+	my ($param) = @_;
+
+	my $cfg = PVE::Mapping::Dir::config();
+	my $id = $param->{id};
+
+	my $entry = $cfg->{ids}->{$id};
+	die "mapping '$param->{id}' not found\n" if !defined($entry);
+
+	my $data = dclone($entry);
+
+	$data->{digest} = $cfg->{digest};
+
+	return $data;
+    }});
+
+__PACKAGE__->register_method ({
+    name => 'create',
+    protected => 1,
+    path => '',
+    method => 'POST',
+    description => "Create a new directory mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::Dir->createSchema(1),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = extract_param($param, 'id');
+
+	my $plugin = PVE::Mapping::Dir->lookup('dir');
+	my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	my $map_list = $opts->{map};
+	PVE::Mapping::Dir::assert_valid_map_list($map_list);
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    die "dir ID '$id' already defined\n" if defined($cfg->{ids}->{$id});
+
+	    $cfg->{ids}->{$id} = $opts;
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "create directory mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'update',
+    protected => 1,
+    path => '{id}',
+    method => 'PUT',
+    description => "Update a directory mapping.",
+    permissions => {
+	check => ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+    },
+    parameters => PVE::Mapping::Dir->updateSchema(),
+    returns => {
+	type => 'null',
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $digest = extract_param($param, 'digest');
+	my $delete = extract_param($param, 'delete');
+	my $id = extract_param($param, 'id');
+
+	if ($delete) {
+	    $delete = [ PVE::Tools::split_list($delete) ];
+	}
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    PVE::Tools::assert_if_modified($cfg->{digest}, $digest) if defined($digest);
+
+	    die "dir ID '$id' does not exist\n" if !defined($cfg->{ids}->{$id});
+
+	    my $plugin = PVE::Mapping::Dir->lookup('dir');
+	    my $opts = $plugin->check_config($id, $param, 1, 1);
+
+	    my $map_list = $opts->{map};
+	    PVE::Mapping::Dir::assert_valid_map_list($map_list);
+
+	    my $data = $cfg->{ids}->{$id};
+
+	    my $options = $plugin->private()->{options}->{dir};
+	    PVE::SectionConfig::delete_from_config($data, $options, $opts, $delete);
+
+	    $data->{$_} = $opts->{$_} for keys $opts->%*;
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "update directory mapping failed");
+
+	return;
+    },
+});
+
+__PACKAGE__->register_method ({
+    name => 'delete',
+    protected => 1,
+    path => '{id}',
+    method => 'DELETE',
+    description => "Remove directory mapping.",
+    permissions => {
+	check => [ 'perm', '/mapping/dir', ['Mapping.Modify']],
+    },
+    parameters => {
+	additionalProperties => 0,
+	properties => {
+	    id => {
+		type => 'string',
+		format => 'pve-configid',
+	    },
+	}
+    },
+    returns => { type => 'null' },
+    code => sub {
+	my ($param) = @_;
+
+	my $id = $param->{id};
+
+	PVE::Mapping::Dir::lock_dir_config(sub {
+	    my $cfg = PVE::Mapping::Dir::config();
+
+	    if ($cfg->{ids}->{$id}) {
+		delete $cfg->{ids}->{$id};
+	    }
+
+	    PVE::Mapping::Dir::write_dir_config($cfg);
+
+	}, "delete dir mapping failed");
+
+	return;
+    }
+});
+
+1;
diff --git a/PVE/API2/Cluster/Mapping/Makefile b/PVE/API2/Cluster/Mapping/Makefile
index e7345ab4..5dbb3f5c 100644
--- a/PVE/API2/Cluster/Mapping/Makefile
+++ b/PVE/API2/Cluster/Mapping/Makefile
@@ -3,6 +3,7 @@ include ../../../../defines.mk
 # for node independent, cluster-wide applicable, API endpoints
 # ensure we do not conflict with files shipped by pve-cluster!!
 PERLSOURCE= 	\
+	Dir.pm  \
 	PCI.pm	\
 	USB.pm
 
-- 
2.39.5




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH manager v17 07/10] ui: add edit window for dir mappings
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
                   ` (5 preceding siblings ...)
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 06/10] api: add resource map api endpoints for directories Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 08/10] ui: add resource mapping view for directories Markus Frank
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
nothing changed in v17

 www/manager6/Makefile             |   1 +
 www/manager6/window/DirMapEdit.js | 204 ++++++++++++++++++++++++++++++
 2 files changed, 205 insertions(+)
 create mode 100644 www/manager6/window/DirMapEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index c94a5cdf..4b8677e3 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -138,6 +138,7 @@ JSSRC= 							\
 	window/TreeSettingsEdit.js			\
 	window/PCIMapEdit.js				\
 	window/USBMapEdit.js				\
+	window/DirMapEdit.js                            \
 	window/GuestImport.js				\
 	ha/Fencing.js					\
 	ha/GroupEdit.js					\
diff --git a/www/manager6/window/DirMapEdit.js b/www/manager6/window/DirMapEdit.js
new file mode 100644
index 00000000..520758c2
--- /dev/null
+++ b/www/manager6/window/DirMapEdit.js
@@ -0,0 +1,204 @@
+Ext.define('PVE.window.DirMapEditWindow', {
+    extend: 'Proxmox.window.Edit',
+
+    mixins: ['Proxmox.Mixin.CBind'],
+
+    cbindData: function(initialConfig) {
+	let me = this;
+	me.isCreate = !me.name;
+	me.method = me.isCreate ? 'POST' : 'PUT';
+	me.hideMapping = !!me.entryOnly;
+	me.hideComment = me.name && !me.entryOnly;
+	me.hideNodeSelector = me.nodename || me.entryOnly;
+	me.hideNode = !me.nodename || !me.hideNodeSelector;
+	return {
+	    name: me.name,
+	    nodename: me.nodename,
+	};
+    },
+
+    submitUrl: function(_url, data) {
+	let me = this;
+	let name = me.isCreate ? '' : me.name;
+	return `/cluster/mapping/dir/${name}`;
+    },
+
+    title: gettext('Add Dir mapping'),
+
+    onlineHelp: 'resource_mapping',
+
+    method: 'POST',
+
+    controller: {
+	xclass: 'Ext.app.ViewController',
+
+	onGetValues: function(values) {
+	    let me = this;
+	    let view = me.getView();
+	    values.node ??= view.nodename;
+
+	    let name = values.name;
+	    let description = values.description;
+	    let deletes = values.delete;
+
+	    delete values.description;
+	    delete values.name;
+	    delete values.delete;
+
+	    let map = [];
+	    if (me.originalMap) {
+		map = PVE.Parser.filterPropertyStringList(me.originalMap, (e) => e.node !== values.node);
+	    }
+	    if (values.path) {
+		// TODO: Remove this when property string supports quotation of properties
+		if (!/^\/[^;,=()]+/.test(values.path)) {
+		    let errMsg = "Value does not look like a valid absolute path."
+			+" These symbols are currently not allowed in path: ;,=()\n";
+		    Ext.Msg.alert(gettext('Error'), errMsg);
+		    // prevent sending a broken property string to the API
+		    throw errMsg;
+		}
+		map.push(PVE.Parser.printPropertyString(values));
+	    }
+	    values = { map };
+
+	    if (description) {
+		values.description = description;
+	    }
+	    if (deletes && !view.isCreate) {
+		values.delete = deletes;
+	    }
+	    if (view.isCreate) {
+		values.id = name;
+	    }
+
+	    return values;
+	},
+
+	onSetValues: function(values) {
+	    let me = this;
+	    let view = me.getView();
+	    me.originalMap = [...values.map];
+	    let configuredNodes = [];
+	    PVE.Parser.filterPropertyStringList(values.map, (e) => {
+		configuredNodes.push(e.node);
+		if (e.node === view.nodename) {
+		    values = e;
+		}
+		return false;
+	    });
+
+	    me.lookup('nodeselector').disallowedNodes = configuredNodes;
+
+	    return values;
+	},
+
+	init: function(view) {
+	    let me = this;
+
+	    if (!view.nodename) {
+		//throw "no nodename given";
+	    }
+	},
+    },
+
+    items: [
+	{
+	    xtype: 'inputpanel',
+	    onGetValues: function(values) {
+		return this.up('window').getController().onGetValues(values);
+	    },
+
+	    onSetValues: function(values) {
+		return this.up('window').getController().onSetValues(values);
+	    },
+
+	    columnT: [
+		{
+		    xtype: 'displayfield',
+		    reference: 'directory-hint',
+		    columnWidth: 1,
+		    value: 'Make sure the directory exists.',
+		    cbind: {
+			disabled: '{hideMapping}',
+			hidden: '{hideMapping}',
+		    },
+		    userCls: 'pmx-hint',
+		},
+	    ],
+
+	    column1: [
+		{
+		    xtype: 'pmxDisplayEditField',
+		    fieldLabel: gettext('Name'),
+		    cbind: {
+			editable: '{!name}',
+			value: '{name}',
+			submitValue: '{isCreate}',
+		    },
+		    name: 'name',
+		    allowBlank: false,
+		},
+		{
+		    xtype: 'pveNodeSelector',
+		    reference: 'nodeselector',
+		    fieldLabel: gettext('Node'),
+		    name: 'node',
+		    cbind: {
+			disabled: '{hideNodeSelector}',
+			hidden: '{hideNodeSelector}',
+		    },
+		    allowBlank: false,
+		},
+	    ],
+
+	    column2: [
+		{
+		    xtype: 'fieldcontainer',
+		    defaultType: 'radiofield',
+		    layout: 'fit',
+		    cbind: {
+			disabled: '{hideMapping}',
+			hidden: '{hideMapping}',
+		    },
+		    items: [
+			{
+			    xtype: 'textfield',
+			    name: 'path',
+			    reference: 'path',
+			    value: '',
+			    emptyText: gettext('/some/path'),
+			    cbind: {
+				nodename: '{nodename}',
+				disabled: '{hideMapping}',
+			    },
+			    allowBlank: false,
+			    fieldLabel: gettext('Path'),
+			},
+		    ],
+		},
+	    ],
+
+	    columnB: [
+		{
+		    xtype: 'fieldcontainer',
+		    defaultType: 'radiofield',
+		    layout: 'fit',
+		    cbind: {
+			disabled: '{hideComment}',
+			hidden: '{hideComment}',
+		    },
+		    items: [
+			{
+			    xtype: 'proxmoxtextfield',
+			    fieldLabel: gettext('Comment'),
+			    submitValue: true,
+			    name: 'description',
+			    deleteEmpty: true,
+			},
+		    ],
+		},
+	    ],
+	},
+    ],
+});
-- 
2.39.5





^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH manager v17 08/10] ui: add resource mapping view for directories
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
                   ` (6 preceding siblings ...)
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 07/10] ui: add edit window for dir mappings Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 09/10] ui: form: add selector for directory mappings Markus Frank
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

To save vertical space in the Resource Mappings tab, directory mappings
get a temporary tab of their own until we find content that fits the
general 'resource mapping' panel and can present all types as submenu
items.

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v17:
* moved directory mappings into their own tab to save vertical space in
  the "Resource Mappings" tab - discussed off-list with Dominik

 www/manager6/Makefile         |  1 +
 www/manager6/dc/Config.js     |  6 ++++++
 www/manager6/dc/DirMapView.js | 38 +++++++++++++++++++++++++++++++++++
 3 files changed, 45 insertions(+)
 create mode 100644 www/manager6/dc/DirMapView.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 4b8677e3..57c4d377 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -189,6 +189,7 @@ JSSRC= 							\
 	dc/RealmSyncJob.js				\
 	dc/PCIMapView.js				\
 	dc/USBMapView.js				\
+	dc/DirMapView.js				\
 	lxc/CmdMenu.js					\
 	lxc/Config.js					\
 	lxc/CreateWizard.js				\
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index 74728c83..b79ba8dc 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -331,6 +331,12 @@ Ext.define('PVE.dc.Config', {
 			},
 		    ],
 		},
+		{
+		    xtype: 'pveDcDirMapView',
+		    itemId: 'directories',
+		    title: gettext('Directory Mappings'),
+		    iconCls: 'fa fa-folder',
+		},
 	    );
 	}
 
diff --git a/www/manager6/dc/DirMapView.js b/www/manager6/dc/DirMapView.js
new file mode 100644
index 00000000..a50a7ae7
--- /dev/null
+++ b/www/manager6/dc/DirMapView.js
@@ -0,0 +1,38 @@
+Ext.define('pve-resource-dir-tree', {
+    extend: 'Ext.data.Model',
+    idProperty: 'internalId',
+    fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
+});
+
+Ext.define('PVE.dc.DirMapView', {
+    extend: 'PVE.tree.ResourceMapTree',
+    alias: 'widget.pveDcDirMapView',
+
+    editWindowClass: 'PVE.window.DirMapEditWindow',
+    baseUrl: '/cluster/mapping/dir',
+    mapIconCls: 'fa fa-folder',
+    entryIdProperty: 'path',
+
+    store: {
+	sorters: 'text',
+	model: 'pve-resource-dir-tree',
+	data: {},
+    },
+
+    columns: [
+	{
+	    xtype: 'treecolumn',
+	    text: gettext('ID/Node'),
+	    dataIndex: 'text',
+	    width: 200,
+	},
+	{
+	    header: gettext('Comment'),
+	    dataIndex: 'description',
+	    renderer: function(value, _meta, record) {
+		return Ext.String.htmlEncode(value ?? record.data.comment);
+	    },
+	    flex: 1,
+	},
+    ],
+});
-- 
2.39.5





^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH manager v17 09/10] ui: form: add selector for directory mappings
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
                   ` (7 preceding siblings ...)
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 08/10] ui: add resource mapping view for directories Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 10/10] ui: add options to add virtio-fs to qemu config Markus Frank
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
---
nothing changed in v17

 www/manager6/Makefile               |  1 +
 www/manager6/form/DirMapSelector.js | 63 +++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)
 create mode 100644 www/manager6/form/DirMapSelector.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 57c4d377..fabbdd24 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -35,6 +35,7 @@ JSSRC= 							\
 	form/ContentTypeSelector.js			\
 	form/ControllerSelector.js			\
 	form/DayOfWeekSelector.js			\
+	form/DirMapSelector.js                          \
 	form/DiskFormatSelector.js			\
 	form/DiskStorageSelector.js			\
 	form/FileSelector.js				\
diff --git a/www/manager6/form/DirMapSelector.js b/www/manager6/form/DirMapSelector.js
new file mode 100644
index 00000000..473a2ffe
--- /dev/null
+++ b/www/manager6/form/DirMapSelector.js
@@ -0,0 +1,63 @@
+Ext.define('PVE.form.DirMapSelector', {
+    extend: 'Proxmox.form.ComboGrid',
+    alias: 'widget.pveDirMapSelector',
+
+    store: {
+	fields: ['name', 'path'],
+	filterOnLoad: true,
+	sorters: [
+	    {
+		property: 'id',
+		direction: 'ASC',
+	    },
+	],
+    },
+
+    allowBlank: false,
+    autoSelect: false,
+    displayField: 'id',
+    valueField: 'id',
+
+    listConfig: {
+	columns: [
+	    {
+		header: gettext('Directory ID'),
+		dataIndex: 'id',
+		flex: 1,
+	    },
+	    {
+		header: gettext('Comment'),
+		dataIndex: 'description',
+		flex: 1,
+	    },
+	],
+    },
+
+    setNodename: function(nodename) {
+	var me = this;
+
+	if (!nodename || me.nodename === nodename) {
+	    return;
+	}
+
+	me.nodename = nodename;
+
+	me.store.setProxy({
+	    type: 'proxmox',
+	    url: `/api2/json/cluster/mapping/dir?check-node=${nodename}`,
+	});
+
+	me.store.load();
+    },
+
+    initComponent: function() {
+	var me = this;
+
+	var nodename = me.nodename;
+	me.nodename = undefined;
+
+        me.callParent();
+
+	me.setNodename(nodename);
+    },
+});
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [pve-devel] [PATCH manager v17 10/10] ui: add options to add virtio-fs to qemu config
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
                   ` (8 preceding siblings ...)
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 09/10] ui: form: add selector for directory mappings Markus Frank
@ 2025-04-07 13:49 ` Markus Frank
  2025-04-07 22:42   ` Thomas Lamprecht
  2025-04-07 14:31 ` [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Filip Schauer
  2025-04-07 22:52 ` Thomas Lamprecht
  11 siblings, 1 reply; 15+ messages in thread
From: Markus Frank @ 2025-04-07 13:49 UTC (permalink / raw)
  To: pve-devel

Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
nothing changed in v17

 www/manager6/Makefile             |   1 +
 www/manager6/Utils.js             |   1 +
 www/manager6/qemu/HardwareView.js |  19 ++++
 www/manager6/qemu/VirtiofsEdit.js | 143 ++++++++++++++++++++++++++++++
 4 files changed, 164 insertions(+)
 create mode 100644 www/manager6/qemu/VirtiofsEdit.js

diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index fabbdd24..fdf0e816 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -271,6 +271,7 @@ JSSRC= 							\
 	qemu/Smbios1Edit.js				\
 	qemu/SystemEdit.js				\
 	qemu/USBEdit.js					\
+	qemu/VirtiofsEdit.js				\
 	sdn/Browser.js					\
 	sdn/ControllerView.js				\
 	sdn/Status.js					\
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 89416458..a1bcbe7e 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1647,6 +1647,7 @@ Ext.define('PVE.Utils', {
 	serial: 4,
 	rng: 1,
 	tpmstate: 1,
+	virtiofs: 10,
     },
 
     // we can have usb6 and up only for specific machine/ostypes
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index 2e4dd880..4ce9908c 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -320,6 +320,16 @@ Ext.define('PVE.qemu.HardwareView', {
 	    never_delete: !caps.vms['VM.Config.HWType'] && !caps.mapping.hwrng['Mapping.Use'],
 	    header: gettext("VirtIO RNG"),
 	};
+	for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+	    let confid = "virtiofs" + i.toString();
+	    rows[confid] = {
+		group: 50,
+		order: i,
+		iconCls: 'folder',
+		editor: 'PVE.qemu.VirtiofsEdit',
+		header: gettext('Virtiofs') + ' (' + confid +')',
+	    };
+	}
 
 	var sorterFn = function(rec1, rec2) {
 	    var v1 = rec1.data.key;
@@ -595,6 +605,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
 	    const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
 	    const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
+	    const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
 
 	    me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
 	    me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
@@ -604,6 +615,7 @@ Ext.define('PVE.qemu.HardwareView', {
 	    me.down('#addRng').setDisabled(noVMConfigHWTypePerm || isAtLimit('rng'));
 	    efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
 	    me.down('#addTpmState').setDisabled(noVMConfigDiskPerm || isAtLimit('tpmstate'));
+	    me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
 	    me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
 
 	    if (!rec) {
@@ -748,6 +760,13 @@ Ext.define('PVE.qemu.HardwareView', {
 				disabled: !caps.vms['VM.Config.HWType'] && !caps.mapping.hwrng['Mapping.Use'],
 				handler: editorFactory('RNGEdit'),
 			    },
+			    {
+				text: gettext("Virtiofs"),
+				itemId: 'addVirtiofs',
+				iconCls: 'fa fa-folder',
+				disabled: !caps.nodes['Sys.Console'],
+				handler: editorFactory('VirtiofsEdit'),
+			    },
 			],
 		    }),
 		},
diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
new file mode 100644
index 00000000..9ce5db21
--- /dev/null
+++ b/www/manager6/qemu/VirtiofsEdit.js
@@ -0,0 +1,143 @@
+Ext.define('PVE.qemu.VirtiofsInputPanel', {
+    extend: 'Proxmox.panel.InputPanel',
+    xtype: 'pveVirtiofsInputPanel',
+    onlineHelp: 'qm_virtiofs',
+
+    insideWizard: false,
+
+    onGetValues: function(values) {
+	var me = this;
+	var confid = me.confid;
+	var params = {};
+	delete values.delete;
+	params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
+	return params;
+    },
+
+    setSharedfiles: function(confid, data) {
+	var me = this;
+	me.confid = confid;
+	me.virtiofs = data;
+	me.setValues(me.virtiofs);
+    },
+    initComponent: function() {
+	let me = this;
+
+	me.nodename = me.pveSelNode.data.node;
+	if (!me.nodename) {
+	    throw "no node name specified";
+	}
+	me.items = [
+	    {
+		xtype: 'pveDirMapSelector',
+		emptyText: 'dirid',
+		nodename: me.nodename,
+		fieldLabel: gettext('Directory ID'),
+		name: 'dirid',
+		allowBlank: false,
+	    },
+	    {
+		xtype: 'displayfield',
+		userCls: 'pmx-hint',
+		value: gettext('Directory Mappings can be managed under Datacenter -> Directory Mappings'),
+	    },
+	];
+	me.advancedItems = [
+	    {
+		xtype: 'proxmoxKVComboBox',
+		fieldLabel: gettext('Cache'),
+		name: 'cache',
+		value: '__default__',
+		deleteDefaultValue: false,
+		comboItems: [
+		    ['__default__', Proxmox.Utils.defaultText + ' (auto)'],
+		    ['auto', 'auto'],
+		    ['always', 'always'],
+		    ['metadata', 'metadata'],
+		    ['never', 'never'],
+		],
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('Writeback cache'),
+		name: 'writeback',
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('Enable xattr support'),
+		name: 'expose-xattr',
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('Enable POSIX ACLs support (implies xattr support)'),
+		name: 'expose-acl',
+		listeners: {
+		    change: function(f, value) {
+			let xattr = me.down('field[name=expose-xattr]');
+			xattr.setDisabled(value);
+			xattr.setValue(value);
+		    },
+		},
+	    },
+	    {
+		xtype: 'proxmoxcheckbox',
+		fieldLabel: gettext('Allow Direct IO'),
+		name: 'direct-io',
+	    },
+	];
+
+	me.virtiofs = {};
+	me.confid = 'virtiofs0';
+	me.callParent();
+    },
+});
+
+Ext.define('PVE.qemu.VirtiofsEdit', {
+    extend: 'Proxmox.window.Edit',
+
+    subject: gettext('Filesystem Passthrough'),
+
+    initComponent: function() {
+	var me = this;
+
+	me.isCreate = !me.confid;
+
+	var ipanel = Ext.create('PVE.qemu.VirtiofsInputPanel', {
+	    confid: me.confid,
+	    pveSelNode: me.pveSelNode,
+	    isCreate: me.isCreate,
+	});
+
+	Ext.applyIf(me, {
+	    items: ipanel,
+	});
+
+	me.callParent();
+
+	me.load({
+	    success: function(response) {
+		me.conf = response.result.data;
+		var i, confid;
+		if (!me.isCreate) {
+		    var value = me.conf[me.confid];
+		    var virtiofs = PVE.Parser.parsePropertyString(value, "dirid");
+		    if (!virtiofs) {
+			Ext.Msg.alert(gettext('Error'), 'Unable to parse virtiofs options');
+			me.close();
+			return;
+		    }
+		    ipanel.setSharedfiles(me.confid, virtiofs);
+		} else {
+		    for (i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+			confid = 'virtiofs' + i.toString();
+			if (!Ext.isDefined(me.conf[confid])) {
+			    me.confid = confid;
+			    break;
+			}
+		    }
+		    ipanel.setSharedfiles(me.confid, {});
+		}
+	    },
+	});
+    },
+});
-- 
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
                   ` (9 preceding siblings ...)
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 10/10] ui: add options to add virtio-fs to qemu config Markus Frank
@ 2025-04-07 14:31 ` Filip Schauer
  2025-04-07 22:52 ` Thomas Lamprecht
  11 siblings, 0 replies; 15+ messages in thread
From: Filip Schauer @ 2025-04-07 14:31 UTC (permalink / raw)
  To: pve-devel

Tested on a fresh proxmox-ve_8.4-ALPHA-1.iso install.

Configured a directory mapping and passed it to a Debian VM with the
default settings. Inside the VM I mounted the directory using
`mount -t virtiofs dirid /mnt/path`.

Writing some data to the directory from inside the VM with
`dd if=/dev/urandom of=/mnt/path/bench bs=1M` was decently fast:
190 MB/s.

On 07/04/2025 15:49, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
>
> [...]


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH manager v17 10/10] ui: add options to add virtio-fs to qemu config
  2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 10/10] ui: add options to add virtio-fs to qemu config Markus Frank
@ 2025-04-07 22:42   ` Thomas Lamprecht
  2025-04-08  6:54     ` Dominik Csapak
  0 siblings, 1 reply; 15+ messages in thread
From: Thomas Lamprecht @ 2025-04-07 22:42 UTC (permalink / raw)
  To: Proxmox VE development discussion, Markus Frank

Am 07.04.25 um 15:49 schrieb Markus Frank:
> +	    {
> +		xtype: 'displayfield',
> +		userCls: 'pmx-hint',
> +		value: gettext('Directory Mappings can be managed under Datacenter -> Directory Mappings'),
> +	    },

Hints can be OK, but in the end they are always some admission of
suboptimal UX. As said, they are often fine, since they at least
slightly improve the status quo; not everything can be perfected.
But if they can be replaced by some actionable feature, then doing
so is always better.

Here we could, for example, replace the hint with a simple
"Goto <directory mapping management>" link below the field that
jumps to the respective datacenter option tab.
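
Roughly something like the following in place of the hint (untested
sketch, only to illustrate the direction; the label text and component
choice are illustrative, and the actual navigation call is the open
question below):

---
// rough, untested sketch: replace the pmx-hint displayfield in the
// VirtiofsInputPanel items with a clickable link
{
    xtype: 'box',
    html: `<a href="#">${gettext('Manage Directory Mappings')}</a>`,
    listeners: {
	click: {
	    element: 'el', // delegate the DOM click from the component element
	    fn: function(event) {
		event.preventDefault(); // keep '#' out of the browser history
		// TODO: jump to Datacenter -> Directory Mappings here
	    },
	},
    },
},
---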

@Dominik: didn't we have a function or use-site of something like
this already? I had the StateProvider in mind, but nothing very
obvious (albeit it would have the info required to assemble the link
I guess).


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs
  2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
                   ` (10 preceding siblings ...)
  2025-04-07 14:31 ` [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Filip Schauer
@ 2025-04-07 22:52 ` Thomas Lamprecht
  11 siblings, 0 replies; 15+ messages in thread
From: Thomas Lamprecht @ 2025-04-07 22:52 UTC (permalink / raw)
  To: pve-devel, Markus Frank

On Mon, 07 Apr 2025 15:49:40 +0200, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
> 
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
> 
> [...]

Applied, thanks!

[01/10] add doc section for the shared filesystem virtio-fs
        commit acc3795eed052714d8710c2cd8d8f6a3f2b5a8b3
[02/10] d/control: add virtiofsd to Recommends for qemu-server
        SQUASHED into 03/10
[03/10] fix #1027: virtio-fs support
        commit 87b22e3839a3e4c301d1320c92d7690df325d0a7
[04/10] migration: check_local_resources for virtiofs
        commit 7bfffaee5f93f77564201ba45e54dba769496c7d
[05/10] disable snapshot (with RAM) and hibernate with virtio-fs devices
        commit 64dad62fd863f4d20ada5fd90ae24af043a2915d
[06/10] api: add resource map api endpoints for directories
        commit: 3eaa1cd6a9fa7ddbb0ac0b54fbdefaaa34409828
[07/10] ui: add edit window for dir mappings
        commit: 04078c7b9bf42dfa159b4e9b2da24a1baed1bcb4
[08/10] ui: add resource mapping view for directories
        commit: 3ac01b4edee5884ce6af5003a082cb4f39d24870
[09/10] ui: form: add selector for directory mappings
        commit: 40f3056e19706a62b5d6f7e30b8969983a770f33
[10/10] ui: add options to add virtio-fs to qemu config
        commit: d67f05bc6fded42445a819a9c995cf79817d738d


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [pve-devel] [PATCH manager v17 10/10] ui: add options to add virtio-fs to qemu config
  2025-04-07 22:42   ` Thomas Lamprecht
@ 2025-04-08  6:54     ` Dominik Csapak
  0 siblings, 0 replies; 15+ messages in thread
From: Dominik Csapak @ 2025-04-08  6:54 UTC (permalink / raw)
  To: Thomas Lamprecht, Proxmox VE development discussion, Markus Frank

On 4/8/25 00:42, Thomas Lamprecht wrote:
> Am 07.04.25 um 15:49 schrieb Markus Frank:
>> +	    {
>> +		xtype: 'displayfield',
>> +		userCls: 'pmx-hint',
>> +		value: gettext('Directory Mappings can be managed under Datacenter -> Directory Mappings'),
>> +	    },
> 
> Hints can be OK, but in the end they are always some admission of
> suboptimal UX. As said, they are often fine, since they at least
> slightly improve the status quo; not everything can be perfected.
> But if they can be replaced by some actionable feature, then doing
> so is always better.
> 
> Here we could, for example, replace the hint with a simple
> "Goto <directory mapping management>" link below the field that
> jumps to the respective datacenter option tab.
> 
> @Dominik: didn't we have a function or use-site of something like
> this already? I had the StateProvider in mind, but nothing very
> obvious (albeit it would have the info required to assemble the link
> I guess).


Yes, I think you mean this:

---
click: function() {
     Ext.state.Manager.getProvider().set('dctab', { value: 'ceph' }, true);
},
---

That snippet is from www/manager6/dc/Health.js, where we link to the ceph panel from the datacenter summary.
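
For the virtiofs hint it would presumably just be the same call with
the value swapped to the new tab's itemId (untested; assuming the
'dctab' state value follows the 'directories' itemId that dc/Config.js
uses for the Directory Mappings tab in this series):

---
click: function() {
    // assumption: 'directories' is the itemId dc/Config.js gives the new tab
    Ext.state.Manager.getProvider().set('dctab', { value: 'directories' }, true);
},
---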



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2025-04-08  6:54 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-04-07 13:49 [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH docs v17 1/10] add doc section for the shared filesystem virtio-fs Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 2/10] d/control: add virtiofsd to Recommends for qemu-server Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 3/10] fix #1027: virtio-fs support Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 4/10] migration: check_local_resources for virtiofs Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH qemu-server v17 5/10] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 06/10] api: add resource map api endpoints for directories Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 07/10] ui: add edit window for dir mappings Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 08/10] ui: add resource mapping view for directories Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 09/10] ui: form: add selector for directory mappings Markus Frank
2025-04-07 13:49 ` [pve-devel] [PATCH manager v17 10/10] ui: add options to add virtio-fs to qemu config Markus Frank
2025-04-07 22:42   ` Thomas Lamprecht
2025-04-08  6:54     ` Dominik Csapak
2025-04-07 14:31 ` [pve-devel] [PATCH docs/qemu-server/manager v17 0/10] Virtio-fs Filip Schauer
2025-04-07 22:52 ` Thomas Lamprecht
