* [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs
@ 2025-04-03 10:34 Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH cluster v15 1/12] add mapping/dir.cfg for resource mapping Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Virtio-fs is a shared file system that enables sharing a directory
between host and guest VMs. It takes advantage of the locality of
virtual machines and the hypervisor to get a higher throughput than
the 9p remote file system protocol.
build-order:
1. cluster
2. guest-common
3. docs
4. qemu-server
5. manager
I did not get virtiofsd to run with run_command without it creating
zombie processes after shutdown, so I replaced run_command with exec
for now. Maybe someone can find out why this happens.
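For reference, a minimal sketch of the double-fork/exec pattern used in
start_virtiofsd below (error handling trimmed; $fd and $path stand for the
passed socket file descriptor and the shared directory path):
```
my $pid = fork();
if ($pid == 0) {
    POSIX::setsid();    # detach intermediate child from our session
    my $pid2 = fork();
    if ($pid2 == 0) {
        exec('/usr/libexec/virtiofsd', "--fd=$fd", "--shared-dir=$path");
    }
    POSIX::_exit(0);    # intermediate child exits right away
} else {
    waitpid($pid, 0);   # parent reaps it immediately, so no zombie is left behind
}
```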
changes in v15:
* removed announce-submounts option altogether and always set it for
virtiofsd.
* added die if both memory hotplug and virtiofs are enabled
* added fstab entry example in the docs
* assert_valid_map_list: only run assert_valid for the mappings on the
current node
* see individual patches
changes in v14:
* disallow commas and equal signs in path until the path can be quoted
in property strings
* addressed style nits and improved formatting
* use max_virtiofs() in check_vm_create_dir_perm
* removed unnecessary checks after parse_property_string
* find_on_current_node returns only one entry
* improved docs
* added missing imports/uses
changes in v13:
* removed acl/xattr attributes in node config
* renamed acl/xattr in virtiofs qemu config to expose-acl/expose-xattr
* renamed submounts in node config to announce-submounts
* the "disable snapshot (with RAM) and hibernate with virtio-fs devices"
patch now uses the check_non_migratable_resources function
* rewritten the part about announce-submounts in pve-docs patch
Changes in v12:
* rebase to master as most patches could not be applied anymore
Changes in v11:
* made submounts option on by default in WebUI and section config
* PVE::QemuServer::Virtiofs dependency removed in QemuServer/Memory.pm
* Minor changes to function/variable names
* Disable snapshots (with RAM) and hibernate due to incompatibility
cluster:
Markus Frank (1):
add mapping/dir.cfg for resource mapping
src/PVE/Cluster.pm | 1 +
src/pmxcfs/status.c | 1 +
2 files changed, 2 insertions(+)
guest-common:
Markus Frank (1):
add dir mapping section config
src/Makefile | 1 +
src/PVE/Mapping/Dir.pm | 192 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 193 insertions(+)
create mode 100644 src/PVE/Mapping/Dir.pm
docs:
Markus Frank (1):
add doc section for the shared filesystem virtio-fs
qm.adoc | 102 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 100 insertions(+), 2 deletions(-)
qemu-server:
Markus Frank (4):
control: add virtiofsd as runtime dependency for qemu-server
fix #1027: virtio-fs support
migration: check_local_resources for virtiofs
disable snapshot (with RAM) and hibernate with virtio-fs devices
PVE/API2/Qemu.pm | 41 ++++++-
PVE/QemuServer.pm | 44 +++++++-
PVE/QemuServer/Makefile | 3 +-
PVE/QemuServer/Memory.pm | 25 +++--
PVE/QemuServer/Virtiofs.pm | 209 +++++++++++++++++++++++++++++++++++
debian/control | 1 +
test/MigrationTest/Shared.pm | 7 ++
7 files changed, 316 insertions(+), 14 deletions(-)
create mode 100644 PVE/QemuServer/Virtiofs.pm
manager:
Markus Frank (5):
api: add resource map api endpoints for directories
ui: add edit window for dir mappings
ui: add resource mapping view for directories
ui: form: add selector for directory mappings
ui: add options to add virtio-fs to qemu config
PVE/API2/Cluster/Mapping.pm | 7 +
PVE/API2/Cluster/Mapping/Dir.pm | 308 ++++++++++++++++++++++++++++
PVE/API2/Cluster/Mapping/Makefile | 1 +
www/manager6/Makefile | 4 +
www/manager6/Utils.js | 1 +
www/manager6/dc/Config.js | 10 +
www/manager6/dc/DirMapView.js | 38 ++++
www/manager6/form/DirMapSelector.js | 63 ++++++
www/manager6/qemu/HardwareView.js | 19 ++
www/manager6/qemu/VirtiofsEdit.js | 138 +++++++++++++
www/manager6/window/DirMapEdit.js | 202 ++++++++++++++++++
11 files changed, 791 insertions(+)
create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
create mode 100644 www/manager6/dc/DirMapView.js
create mode 100644 www/manager6/form/DirMapSelector.js
create mode 100644 www/manager6/qemu/VirtiofsEdit.js
create mode 100644 www/manager6/window/DirMapEdit.js
--
2.39.5
* [pve-devel] [PATCH cluster v15 1/12] add mapping/dir.cfg for resource mapping
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-04 13:46 ` Fiona Ebner
2025-04-03 10:34 ` [pve-devel] [PATCH guest-common v15 2/12] add dir mapping section config Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Add it to both the perl side (PVE/Cluster.pm) and pmxcfs side
(status.c).
This dir.cfg is used to map directory IDs to paths on selected hosts.
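For illustration, an entry in /etc/pve/mapping/dir.cfg then looks like the
example from the guest-common patch (id and paths are placeholders):
```
some-dir-id
    map node=node1,path=/path/to/share/
    map node=node2,path=/different/location/
```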
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
no changes in v15
src/PVE/Cluster.pm | 1 +
src/pmxcfs/status.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index e0e3ee9..b9311c7 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -84,6 +84,7 @@ my $observed = {
'sdn/.running-config' => 1,
'virtual-guest/cpu-models.conf' => 1,
'virtual-guest/profiles.cfg' => 1,
+ 'mapping/dir.cfg' => 1,
'mapping/pci.cfg' => 1,
'mapping/usb.cfg' => 1,
};
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index ff5fcc4..39b17f4 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -114,6 +114,7 @@ static memdb_change_t memdb_change_array[] = {
{ .path = "virtual-guest/cpu-models.conf" },
{ .path = "virtual-guest/profiles.cfg" },
{ .path = "firewall/cluster.fw" },
+ { .path = "mapping/dir.cfg" },
{ .path = "mapping/pci.cfg" },
{ .path = "mapping/usb.cfg" },
};
--
2.39.5
* [pve-devel] [PATCH guest-common v15 2/12] add dir mapping section config
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH cluster v15 1/12] add mapping/dir.cfg for resource mapping Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-04 12:17 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH docs v15 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Adds a config file for directories by using a 'map' property string for
each node mapping.
example config:
```
some-dir-id
map node=node1,path=/path/to/share/
map node=node2,path=/different/location/
```
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v15:
* removed announce-submounts option altogether and always set it for
virtiofsd
* assert_valid_map_list: only run assert_valid for the mappings on the
current node
src/Makefile | 1 +
src/PVE/Mapping/Dir.pm | 192 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 193 insertions(+)
create mode 100644 src/PVE/Mapping/Dir.pm
diff --git a/src/Makefile b/src/Makefile
index cbc40c1..030e7f7 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -15,6 +15,7 @@ install: PVE
install -m 0644 PVE/StorageTunnel.pm ${PERL5DIR}/PVE/
install -m 0644 PVE/Tunnel.pm ${PERL5DIR}/PVE/
install -d ${PERL5DIR}/PVE/Mapping
+ install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
install -d ${PERL5DIR}/PVE/VZDump
diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
new file mode 100644
index 0000000..4673f83
--- /dev/null
+++ b/src/PVE/Mapping/Dir.pm
@@ -0,0 +1,192 @@
+package PVE::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+ sub { __PACKAGE__->parse_config(@_); },
+ sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+ my ($class, $line) = @_;
+
+ if ($line =~ m/^(\S+)\s*$/) {
+ my $id = $1;
+ my $errmsg = undef; # set if you want to skip whole section
+ eval { PVE::JSONSchema::pve_verify_configid($id) };
+ $errmsg = $@ if $@;
+ my $config = {}; # to return additional attributes
+ return ('dir', $id, $errmsg, $config);
+ }
+ return undef;
+}
+
+sub format_section_header {
+ my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+ return "$sectionId\n";
+}
+
+sub type {
+ return 'dir';
+}
+
+# temporary path format that also disallows commas and equal signs
+# TODO: Remove this when property_string supports quotation of properties
+PVE::JSONSchema::register_format('pve-storage-path-in-property-string', \&verify_path);
+sub verify_path {
+ my ($path, $noerr) = @_;
+
+ if ($path !~ m|^/[^;,=\(\)]+|) {
+ return undef if $noerr;
+ die "Value does not look like a valid absolute path."
+ ." These symbols are currently not allowed in path: ;,=()\n";
+ }
+ return $path;
+}
+
+my $map_fmt = {
+ node => get_standard_option('pve-node'),
+ path => {
+ description => "Absolute directory path that should be shared with the guest.",
+ type => 'string',
+ format => 'pve-storage-path-in-property-string',
+ },
+};
+
+my $defaultData = {
+ propertyList => {
+ id => {
+ type => 'string',
+ description => "The ID of the directory mapping",
+ format => 'pve-configid',
+ },
+ description => {
+ type => 'string',
+ description => "Description of the directory mapping",
+ optional => 1,
+ maxLength => 4096,
+ },
+ map => {
+ type => 'array',
+ description => 'A list of maps for the cluster nodes.',
+ optional => 1,
+ items => {
+ type => 'string',
+ format => $map_fmt,
+ },
+ },
+ },
+};
+
+sub private {
+ return $defaultData;
+}
+
+sub options {
+ return {
+ description => { optional => 1 },
+ map => {},
+ };
+}
+
+sub assert_valid {
+ my ($dir_cfg) = @_;
+
+ my $path = $dir_cfg->{path};
+
+ verify_path($path);
+
+ if (! -e $path) {
+ die "Path $path does not exist\n";
+ } elsif (! -d $path) {
+ die "Path $path exists, but is not a directory\n";
+ }
+
+ return 1;
+};
+
+sub assert_valid_map_list {
+ my ($map_list) = @_;
+
+ my $nodename = PVE::INotify::nodename();
+
+ my %count;
+ for my $map (@$map_list) {
+ my $entry = parse_property_string($map_fmt, $map);
+ if ($entry->{node} eq $nodename) {
+ assert_valid($entry);
+ }
+ $count{$entry->{node}}++;
+ }
+ for my $node (keys %count) {
+ if ($count{$node} > 1) {
+ die "Node '$node' is specified $count{$node} times.\n";
+ }
+ }
+}
+
+sub config {
+ return cfs_read_file($FILENAME);
+}
+
+sub lock_dir_config {
+ my ($code, $errmsg) = @_;
+
+ cfs_lock_file($FILENAME, undef, $code);
+ if (my $err = $@) {
+ $errmsg ? die "$errmsg: $err" : die $err;
+ }
+}
+
+sub write_dir_config {
+ my ($cfg) = @_;
+
+ cfs_write_file($FILENAME, $cfg);
+}
+
+sub find_on_current_node {
+ my ($id) = @_;
+
+ my $cfg = config();
+ my $node = PVE::INotify::nodename();
+
+ my $node_mapping = get_node_mapping($cfg, $id, $node);
+ if (@{$node_mapping} > 1) {
+ die "More than than one directory mapping for node $node.\n";
+ }
+ return $node_mapping->[0];
+}
+
+sub get_node_mapping {
+ my ($cfg, $id, $nodename) = @_;
+
+ return undef if !defined($cfg->{ids}->{$id});
+
+ my $res = [];
+ my $mapping_list = $cfg->{ids}->{$id}->{map};
+ for my $map (@{$mapping_list}) {
+ my $entry = eval { parse_property_string($map_fmt, $map) };
+ warn $@ if $@;
+ if ($entry && $entry->{node} eq $nodename) {
+ push $res->@*, $entry;
+ }
+ }
+ return $res;
+}
+
+PVE::Mapping::Dir->register();
+PVE::Mapping::Dir->init();
+
+1;
--
2.39.5
* [pve-devel] [PATCH docs v15 3/12] add doc section for the shared filesystem virtio-fs
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH cluster v15 1/12] add mapping/dir.cfg for resource mapping Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH guest-common v15 2/12] add dir mapping section config Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-04 12:20 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 4/12] control: add virtiofsd as runtime dependency for qemu-server Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v15:
* added fstab entry example in the docs
* added hyperlinks for websites
* removed announce-submounts part
qm.adoc | 102 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 100 insertions(+), 2 deletions(-)
diff --git a/qm.adoc b/qm.adoc
index 40caaa3..f1bb20a 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1202,6 +1202,103 @@ recommended to always use a limiter to avoid guests using too many host
resources. If desired, a value of '0' for `max_bytes` can be used to disable
all limits.
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared filesystem designed for virtual environments. It allows
+sharing a directory tree available on the host by mounting it within VMs. It does
+not use the network stack and aims to offer similar performance and semantics as
+the source filesystem.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background. This happens automatically in {pve} when
+starting a VM using a virtio-fs mount.
+
+Linux VMs with kernel >=5.4 support virtio-fs by default
+(https://www.kernelconfig.io/CONFIG_VIRTIO_FS[virtiofs kernel module]), but some
+features require a newer kernel.
+
+There is a
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system[guide]
+available on how to utilize virtio-fs in Windows VMs.
+
+Known Limitations
+^^^^^^^^^^^^^^^^^
+
+* If virtiofsd crashes, its mount point will hang in the VM until the VM
+is completely stopped.
+* virtiofsd not responding may result in a hanging mount in the VM, similar to
+an unreachable NFS.
+* Memory hotplug does not work in combination with virtio-fs (also results in
+hanging access).
+* Memory related features such as live migration, snapshots, and hibernate are
+not available with virtio-fs devices.
+* Windows cannot understand ACLs in the context of virtio-fs. Therefore, do not
+expose ACLs for Windows VMs, otherwise the virtio-fs device will not be
+visible within the VM.
+
+Add Mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping for a shared directory, you can use the API directly with
+`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
+
+----
+pvesh create /cluster/mapping/dir --id dir1 \
+ --map node=node1,path=/path/to/share \
+ --map node=node2,path=/path/to/different/dir
+----
+
+Add virtio-fs to a VM
+^^^^^^^^^^^^^^^^^^^^^
+
+To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
+anything between 0 and 9) to the VM config and use a directory ID (dirid) that
+has been configured in the resource mapping. Additionally, you can set the
+`cache` option to either `always`, `never`, `metadata`, or `auto` (default:
+`auto`), depending on your requirements. How the different caching modes behave
+is described https://lwn.net/Articles/774495/[here under the "Caching Modes"
+section]. To enable the writeback cache, set `writeback` to `1`.
+
+Virtiofsd supports ACL and xattr passthrough (can be enabled with the
+`expose-acl` and `expose-xattr` options), allowing the guest to access ACLs and
+xattrs if the underlying host filesystem supports them, but they must also be
+compatible with the guest filesystem (for example most Linux filesystems support
+ACLs, while Windows filesystems do not).
+
+The `expose-acl` option automatically implies `expose-xattr`, that is, it makes
+no difference if you set `expose-xattr` to `0` if `expose-acl` is set to `1`.
+
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
+parameter to `1` (default: `0`). This will degrade performance, but is useful if
+applications do their own caching.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,cache=never,expose-xattr=1
+qm set <vmid> -virtiofs2 <dirid>,expose-acl=1,writeback=1
+----
+
+To temporarily mount virtio-fs in a guest VM with the Linux kernel virtio-fs
+driver, run the following command inside the guest:
+
+----
+mount -t virtiofs <dirid> <mount point>
+----
+
+To have a persistent virtiofs mount, you can create an fstab entry:
+
+----
+<dirid> <mount point> virtiofs rw,relatime 0 0
+----
+
+The dirid associated with the path on the current node is also used as the mount
+tag (name used to mount the device on the guest).
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
[[qm_bootorder]]
Device Boot Order
~~~~~~~~~~~~~~~~~
@@ -1890,8 +1987,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
[thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
Note that the options must include a map property with all identifying
properties of that hardware, so that it's possible to verify the hardware did
--
2.39.5
* [pve-devel] [PATCH qemu-server v15 4/12] control: add virtiofsd as runtime dependency for qemu-server
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH docs v15 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 5/12] fix #1027: virtio-fs support Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
no changes in v15
debian/control | 1 +
1 file changed, 1 insertion(+)
diff --git a/debian/control b/debian/control
index 81f0fad6..eda357a5 100644
--- a/debian/control
+++ b/debian/control
@@ -55,6 +55,7 @@ Depends: dbus,
socat,
swtpm,
swtpm-tools,
+ virtiofsd,
${misc:Depends},
${perl:Depends},
${shlibs:Depends},
--
2.39.5
* [pve-devel] [PATCH qemu-server v15 5/12] fix #1027: virtio-fs support
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 4/12] control: add virtiofsd as runtime dependency for qemu-server Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-04 12:19 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 6/12] migration: check_local_resources for virtiofs Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
add support for sharing directories with a guest vm.
virtio-fs needs virtiofsd to be started.
In order to start virtiofsd as a process (despite being a daemon, it
does not run in the background), a double-fork is used.
virtiofsd should close itself together with QEMU.
There is the required parameter dirid as well as the optional parameters
direct-io, cache and writeback. Additionally, the expose-xattr & expose-acl
parameters can be set to expose xattr & ACL settings from the shared
filesystem to the guest system.
The dirid gets mapped to the path on the current node and is also used
as a mount tag (name used to mount the device on the guest).
example config:
```
virtiofs0: foo,direct-io=1,cache=always,expose-acl=1
virtiofs1: dirid=bar,cache=never,expose-xattr=1,writeback=1
```
For information on the optional parameters see the coherent doc patch
and the official gitlab README:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md
Also add a permission check for virtiofs directory access.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v15:
* removed announce-submounts option altogether and always set it for
virtiofsd
* added die if both memory hotplug and virtiofs are enabled
* changed parameter order of PVE::QemuServer::Memory::config so that
$cmd and $machine_flags are last.
* die instead of warn in assert_virtiofs_config when acl is enabled for
a windows VM
PVE/API2/Qemu.pm | 41 +++++++-
PVE/QemuServer.pm | 29 ++++-
PVE/QemuServer/Makefile | 3 +-
PVE/QemuServer/Memory.pm | 25 +++--
PVE/QemuServer/Virtiofs.pm | 209 +++++++++++++++++++++++++++++++++++++
5 files changed, 296 insertions(+), 11 deletions(-)
create mode 100644 PVE/QemuServer/Virtiofs.pm
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 156b1c7b..5a695306 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -39,6 +39,7 @@ use PVE::QemuServer::MetaInfo;
use PVE::QemuServer::PCI;
use PVE::QemuServer::QMPHelpers;
use PVE::QemuServer::USB;
+use PVE::QemuServer::Virtiofs qw(max_virtiofs);
use PVE::QemuMigrate;
use PVE::RPCEnvironment;
use PVE::AccessControl;
@@ -803,6 +804,33 @@ my sub check_vm_create_hostpci_perm {
return 1;
};
+my sub check_dir_perm {
+ my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
+
+ return 1 if $authuser eq 'root@pam';
+
+ $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
+
+ my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
+ $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+
+ return 1;
+};
+
+my sub check_vm_create_dir_perm {
+ my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
+
+ return 1 if $authuser eq 'root@pam';
+
+ for (my $i = 0; $i < max_virtiofs(); $i++) {
+ my $opt = "virtiofs$i";
+ next if !$param->{$opt};
+ check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
+ }
+
+ return 1;
+};
+
my $check_vm_modify_config_perm = sub {
my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
@@ -813,7 +841,7 @@ my $check_vm_modify_config_perm = sub {
# else, as there the permission can be value dependent
next if PVE::QemuServer::is_valid_drivename($opt);
next if $opt eq 'cdrom';
- next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
+ next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
next if $opt eq 'tags';
@@ -1116,6 +1144,7 @@ __PACKAGE__->register_method({
&$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
+ check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
&$check_cpu_model_access($rpcenv, $authuser, $param);
@@ -2007,6 +2036,10 @@ my $update_vm_api = sub {
check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
PVE::QemuConfig->write_config($vmid, $conf);
+ } elsif ($opt =~ m/^virtiofs\d$/) {
+ check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
+ PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
+ PVE::QemuConfig->write_config($vmid, $conf);
} elsif ($opt eq 'tags') {
assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
delete $conf->{$opt};
@@ -2097,6 +2130,12 @@ my $update_vm_api = sub {
}
check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
$conf->{pending}->{$opt} = $param->{$opt};
+ } elsif ($opt =~ m/^virtiofs\d$/) {
+ if (my $oldvalue = $conf->{$opt}) {
+ check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
+ }
+ check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
+ $conf->{pending}->{$opt} = $param->{$opt};
} elsif ($opt eq 'tags') {
assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
$conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index ffd5d56b..558b924d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -33,6 +33,7 @@ use PVE::Exception qw(raise raise_param_exc);
use PVE::Format qw(render_duration render_bytes);
use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
use PVE::HA::Config;
+use PVE::Mapping::Dir;
use PVE::Mapping::PCI;
use PVE::Mapping::USB;
use PVE::INotify;
@@ -62,6 +63,7 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
use PVE::QemuServer::USB;
+use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
my $have_sdn;
eval {
@@ -948,6 +950,10 @@ my $netdesc = {
PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
+for (my $i = 0; $i < max_virtiofs(); $i++) {
+ $confdesc->{"virtiofs$i"} = get_standard_option('pve-qm-virtiofs');
+}
+
my $ipconfig_fmt = {
ip => {
type => 'string',
@@ -3707,8 +3713,18 @@ sub config_to_command {
push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough);
}
+ my $virtiofs_enabled = PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
+
PVE::QemuServer::Memory::config(
- $conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd);
+ $conf,
+ $vmid,
+ $sockets,
+ $cores,
+ $hotplug_features->{memory},
+ $virtiofs_enabled,
+ $cmd,
+ $machineFlags,
+ );
push @$cmd, '-S' if $conf->{freeze};
@@ -3998,6 +4014,8 @@ sub config_to_command {
push @$machineFlags, 'confidential-guest-support=sev0';
}
+ PVE::QemuServer::Virtiofs::config($conf, $vmid, $devices);
+
push @$cmd, @$devices;
push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
@@ -5806,6 +5824,8 @@ sub vm_start_nolock {
PVE::Tools::run_fork sub {
PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
+ my $virtiofs_sockets = start_all_virtiofsd($conf, $vmid);
+
my $tpmpid;
if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
# start the TPM emulator so QEMU can connect on start
@@ -5813,6 +5833,8 @@ sub vm_start_nolock {
}
my $exitcode = run_command($cmd, %run_params);
+ eval { PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets); };
+ log_warn("closing virtiofs sockets failed - $@") if $@;
if ($exitcode) {
if ($tpmpid) {
warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
@@ -6485,7 +6507,10 @@ sub check_mapping_access {
} else {
die "either 'host' or 'mapping' must be set.\n";
}
- }
+ } elsif ($opt =~ m/^virtiofs\d$/) {
+ my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+ $rpcenv->check_full($user, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+ }
}
};
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index 18fd13ea..3588e0e1 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -11,7 +11,8 @@ SOURCES=PCI.pm \
CPUConfig.pm \
CGroup.pm \
Drive.pm \
- QMPHelpers.pm
+ QMPHelpers.pm \
+ Virtiofs.pm
.PHONY: install
install: ${SOURCES}
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index e5024cd2..1111410a 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -336,7 +336,7 @@ sub qemu_memdevices_list {
}
sub config {
- my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
+ my ($conf, $vmid, $sockets, $cores, $hotplug, $virtiofs_enabled, $cmd, $machine_flags) = @_;
my $memory = get_current_memory($conf->{memory});
my $static_memory = 0;
@@ -367,6 +367,9 @@ sub config {
die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
+ die "Memory hotplug does not work in combination with virtio-fs.\n"
+ if $hotplug && $virtiofs_enabled;
+
if ($conf->{numa}) {
my $numa_totalmemory = undef;
@@ -379,7 +382,8 @@ sub config {
my $numa_memory = $numa->{memory};
$numa_totalmemory += $numa_memory;
- my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
+ my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+ my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
# cpus
my $cpulists = $numa->{cpus};
@@ -404,7 +408,7 @@ sub config {
}
push @$cmd, '-object', $mem_object;
- push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+ push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
}
die "total memory for NUMA nodes must be equal to vm static memory\n"
@@ -418,15 +422,20 @@ sub config {
die "host NUMA node$i doesn't exist\n"
if !host_numanode_exists($i) && $conf->{hugepages};
- my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
- push @$cmd, '-object', $mem_object;
-
my $cpus = ($cores * $i);
$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
- push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+ my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+ my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
+ push @$cmd, '-object', $mem_object;
+ push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
}
}
+ } elsif ($virtiofs_enabled) {
+ # kvm: '-machine memory-backend' and '-numa memdev' properties are mutually exclusive
+ push @$cmd, '-object', 'memory-backend-memfd,id=virtiofs-mem'
+ .",size=$conf->{memory}M,share=on";
+ push @$machine_flags, 'memory-backend=virtiofs-mem';
}
if ($hotplug) {
@@ -453,6 +462,8 @@ sub print_mem_object {
my $path = hugepages_mount_path($hugepages_size);
return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
+ } elsif ($id =~ m/^virtiofs-mem/) {
+ return "memory-backend-memfd,id=$id,size=${size}M,share=on";
} else {
return "memory-backend-ram,id=$id,size=${size}M";
}
diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
new file mode 100644
index 00000000..5b948d57
--- /dev/null
+++ b/PVE/QemuServer/Virtiofs.pm
@@ -0,0 +1,209 @@
+package PVE::QemuServer::Virtiofs;
+
+use strict;
+use warnings;
+
+use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);
+use IO::Socket::UNIX;
+use POSIX;
+use Socket qw(SOCK_STREAM);
+
+use PVE::JSONSchema qw(parse_property_string);
+use PVE::Mapping::Dir;
+use PVE::QemuServer::Helpers;
+use PVE::RESTEnvironment qw(log_warn);
+
+use base qw(Exporter);
+
+our @EXPORT_OK = qw(
+max_virtiofs
+start_all_virtiofsd
+);
+
+my $MAX_VIRTIOFS = 10;
+my $socket_path_root = "/run/qemu-server/virtiofsd";
+
+my $virtiofs_fmt = {
+ 'dirid' => {
+ type => 'string',
+ default_key => 1,
+ description => "Mapping identifier of the directory mapping to be shared with the guest."
+ ." Also used as a mount tag inside the VM.",
+ format_description => 'mapping-id',
+ format => 'pve-configid',
+ },
+ 'cache' => {
+ type => 'string',
+ description => "The caching policy the file system should use (auto, always, metadata, never).",
+ enum => [qw(auto always metadata never)],
+ default => "auto",
+ optional => 1,
+ },
+ 'direct-io' => {
+ type => 'boolean',
+ description => "Honor the O_DIRECT flag passed down by guest applications.",
+ default => 0,
+ optional => 1,
+ },
+ writeback => {
+ type => 'boolean',
+ description => "Enable writeback cache. If enabled, writes may be cached in the guest until"
+ ." the file is closed or an fsync is performed.",
+ default => 0,
+ optional => 1,
+ },
+ 'expose-xattr' => {
+ type => 'boolean',
+ description => "Enable support for extended attributes for this mount.",
+ default => 0,
+ optional => 1,
+ },
+ 'expose-acl' => {
+ type => 'boolean',
+ description => "Enable support for POSIX ACLs (enabled ACL implies xattr) for this mount.",
+ default => 0,
+ optional => 1,
+ },
+};
+PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
+
+my $virtiofsdesc = {
+ optional => 1,
+ type => 'string', format => $virtiofs_fmt,
+ description => "Configuration for sharing a directory between host and guest using Virtio-fs.",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
+
+sub max_virtiofs {
+ return $MAX_VIRTIOFS;
+}
+
+sub assert_virtiofs_config {
+ my ($ostype, $virtiofs) = @_;
+
+ my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+ my $acl = $virtiofs->{'expose-acl'};
+ if ($acl && PVE::QemuServer::Helpers::windows_version($ostype)) {
+ die "Please disable ACLs for virtiofs on Windows VMs, otherwise"
+ ." the virtiofs shared directory cannot be mounted.\n";
+ }
+
+ eval { PVE::Mapping::Dir::assert_valid($dir_cfg) };
+ die "directory mapping invalid: $@\n" if $@;
+}
+
+sub config {
+ my ($conf, $vmid, $devices) = @_;
+
+ for (my $i = 0; $i < max_virtiofs(); $i++) {
+ my $opt = "virtiofs$i";
+
+ next if !$conf->{$opt};
+ my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+
+ assert_virtiofs_config($conf->{ostype}, $virtiofs);
+
+ push @$devices, '-chardev', "socket,id=virtiofs$i,path=$socket_path_root/vm$vmid-fs$i";
+
+ # queue-size is set to 1024 because of a bug with Windows guests:
+ # https://bugzilla.redhat.com/show_bug.cgi?id=1873088
+ # 1024 is also always used in the virtiofsd documentation:
+ # https://gitlab.com/virtio-fs/virtiofsd#examples
+ push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+ .",chardev=virtiofs$i,tag=$virtiofs->{dirid}";
+ }
+}
+
+sub virtiofs_enabled {
+ my ($conf) = @_;
+
+ my $virtiofs_enabled = 0;
+ for (my $i = 0; $i < max_virtiofs(); $i++) {
+ my $opt = "virtiofs$i";
+ next if !$conf->{$opt};
+ parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+ $virtiofs_enabled = 1;
+ }
+ return $virtiofs_enabled;
+}
+
+sub start_all_virtiofsd {
+ my ($conf, $vmid) = @_;
+ my $virtiofs_sockets = [];
+ for (my $i = 0; $i < max_virtiofs(); $i++) {
+ my $opt = "virtiofs$i";
+
+ next if !$conf->{$opt};
+ my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+
+ my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
+ push @$virtiofs_sockets, $virtiofs_socket;
+ }
+ return $virtiofs_sockets;
+}
+
+sub start_virtiofsd {
+ my ($vmid, $fsid, $virtiofs) = @_;
+
+ mkdir $socket_path_root;
+ my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
+ unlink($socket_path);
+ my $socket = IO::Socket::UNIX->new(
+ Type => SOCK_STREAM,
+ Local => $socket_path,
+ Listen => 1,
+ ) or die "cannot create socket - $!\n";
+
+ my $flags = fcntl($socket, F_GETFD, 0)
+ or die "failed to get file descriptor flags: $!\n";
+ fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
+ or die "failed to remove FD_CLOEXEC from file descriptor\n";
+
+ my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+ my $virtiofsd_bin = '/usr/libexec/virtiofsd';
+ my $fd = $socket->fileno();
+ my $path = $dir_cfg->{path};
+
+ my $could_not_fork_err = "could not fork to start virtiofsd\n";
+ my $pid = fork();
+ if ($pid == 0) {
+ POSIX::setsid();
+ $0 = "task pve-vm$vmid-virtiofs$fsid";
+ my $pid2 = fork();
+ if ($pid2 == 0) {
+ my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
+ push @$cmd, '--xattr' if $virtiofs->{'expose-xattr'};
+ push @$cmd, '--posix-acl' if $virtiofs->{'expose-acl'};
+ push @$cmd, '--announce-submounts';
+ push @$cmd, '--allow-direct-io' if $virtiofs->{'direct-io'};
+ push @$cmd, '--cache='.$virtiofs->{cache} if $virtiofs->{cache};
+ push @$cmd, '--writeback' if $virtiofs->{'writeback'};
+ push @$cmd, '--syslog';
+ exec(@$cmd);
+ } elsif (!defined($pid2)) {
+ die $could_not_fork_err;
+ } else {
+ POSIX::_exit(0);
+ }
+ } elsif (!defined($pid)) {
+ die $could_not_fork_err;
+ } else {
+ waitpid($pid, 0);
+ }
+
+ # return socket to keep it alive,
+ # so that QEMU will wait for virtiofsd to start
+ return $socket;
+}
+
+sub close_sockets {
+ my @sockets = @_;
+ for my $socket (@sockets) {
+ shutdown($socket, 2);
+ close($socket);
+ }
+}
+
+1;
--
2.39.5
* [pve-devel] [PATCH qemu-server v15 6/12] migration: check_local_resources for virtiofs
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 5/12] fix #1027: virtio-fs support Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-04 12:18 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
add dir mapping checks to check_local_resources
Since the VM needs to be powered off for migration, migration should
work with a directory on shared storage with all caching settings.
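For illustration, a missing target-node entry can be added up front so the
check passes; a hypothetical update of an existing mapping 'dir1' via the API
could look like this (node names and paths are placeholders; the full map list
is passed since the update replaces it):
```
pvesh set /cluster/mapping/dir/dir1 \
    --map node=sourcenode,path=/mnt/shared \
    --map node=targetnode,path=/mnt/shared
```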
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v15:
* removed unnecessary "if ($entry->{dirid})" check
PVE/QemuServer.pm | 10 +++++++++-
test/MigrationTest/Shared.pm | 7 +++++++
2 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 558b924d..5e9c3981 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2486,6 +2486,7 @@ sub check_local_resources {
my $nodelist = PVE::Cluster::get_nodelist();
my $pci_map = PVE::Mapping::PCI::config();
my $usb_map = PVE::Mapping::USB::config();
+ my $dir_map = PVE::Mapping::Dir::config();
my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
@@ -2497,6 +2498,8 @@ sub check_local_resources {
$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
} elsif ($type eq 'usb') {
$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+ } elsif ($type eq 'dir') {
+ $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
}
if (!scalar($entry->@*)) {
push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2525,9 +2528,14 @@ sub check_local_resources {
push @$mapped_res, $k;
}
}
+ if ($k =~ m/^virtiofs/) {
+ my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+ $add_missing_mapping->('dir', $k, $entry->{dirid});
+ push @$mapped_res, $k;
+ }
# sockets are safe: they will recreated be on the target side post-migrate
next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
- push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+ push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
}
die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index e69bf84f..f5fb70ff 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
},
);
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+ config => sub {
+ return {};
+ },
+);
+
our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
$ha_config_module->mock(
vm_is_ha_managed => sub {
--
2.39.5
* [pve-devel] [PATCH qemu-server v15 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 6/12] migration: check_local_resources for virtiofs Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 08/12] api: add resource map api endpoints for directories Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
no changes in v15
PVE/QemuServer.pm | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5e9c3981..3c14bf7f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2461,8 +2461,9 @@ sub check_non_migratable_resources {
my ($conf, $state, $noerr) = @_;
my @blockers = ();
- if ($state && $conf->{"amd-sev"}) {
- push @blockers, "amd-sev";
+ if ($state) {
+ push @blockers, "amd-sev" if $conf->{"amd-sev"};
+ push @blockers, "virtiofs" if PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
}
if (scalar(@blockers) && !$noerr) {
--
2.39.5
* [pve-devel] [PATCH manager v15 08/12] api: add resource map api endpoints for directories
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 09/12] ui: add edit window for dir mappings Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
no changes in v15
PVE/API2/Cluster/Mapping.pm | 7 +
PVE/API2/Cluster/Mapping/Dir.pm | 308 ++++++++++++++++++++++++++++++
PVE/API2/Cluster/Mapping/Makefile | 1 +
3 files changed, 316 insertions(+)
create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..9f0dcd2b 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -3,11 +3,17 @@ package PVE::API2::Cluster::Mapping;
use strict;
use warnings;
+use PVE::API2::Cluster::Mapping::Dir;
use PVE::API2::Cluster::Mapping::PCI;
use PVE::API2::Cluster::Mapping::USB;
use base qw(PVE::RESTHandler);
+__PACKAGE__->register_method ({
+ subclass => "PVE::API2::Cluster::Mapping::Dir",
+ path => 'dir',
+});
+
__PACKAGE__->register_method ({
subclass => "PVE::API2::Cluster::Mapping::PCI",
path => 'pci',
@@ -41,6 +47,7 @@ __PACKAGE__->register_method ({
my ($param) = @_;
my $result = [
+ { name => 'dir' },
{ name => 'pci' },
{ name => 'usb' },
];
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index 00000000..f905cef3
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,308 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::Mapping::Dir ();
+use PVE::RPCEnvironment;
+use PVE::SectionConfig;
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+ name => 'index',
+ path => '',
+ method => 'GET',
+ # only proxy if we give the 'check-node' parameter
+ proxyto_callback => sub {
+ my ($rpcenv, $proxyto, $param) = @_;
+ return $param->{'check-node'} // 'localhost';
+ },
+ description => "List directory mapping",
+ permissions => {
+ description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or"
+ ." 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
+ user => 'all',
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ 'check-node' => get_standard_option('pve-node', {
+ description => "If given, checks the configurations on the given node for"
+ ." correctness, and adds relevant diagnostics for the directory to the response.",
+ optional => 1,
+ }),
+ },
+ },
+ returns => {
+ type => 'array',
+ items => {
+ type => "object",
+ properties => {
+ id => {
+ type => 'string',
+ description => "The logical ID of the mapping."
+ },
+ map => {
+ type => 'array',
+ description => "The entries of the mapping.",
+ items => {
+ type => 'string',
+ description => "A mapping for a node.",
+ },
+ },
+ description => {
+ type => 'string',
+ description => "A description of the logical mapping.",
+ },
+ checks => {
+ type => "array",
+ optional => 1,
+ description => "A list of checks, only present if 'check-node' is set.",
+ items => {
+ type => 'object',
+ properties => {
+ severity => {
+ type => "string",
+ enum => ['warning', 'error'],
+ description => "The severity of the error",
+ },
+ message => {
+ type => "string",
+ description => "The message of the error",
+ },
+ },
+ }
+ },
+ },
+ },
+ links => [ { rel => 'child', href => "{id}" } ],
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+
+ my $check_node = $param->{'check-node'};
+ my $local_node = PVE::INotify::nodename();
+
+ die "wrong node to check - $check_node != $local_node\n"
+ if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
+
+ my $cfg = PVE::Mapping::Dir::config();
+
+ my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
+
+ my $res = [];
+ for my $id (keys $cfg->{ids}->%*) {
+ next if !$rpcenv->check_any($authuser, "/mapping/dir/$id", $can_see_mapping_privs, 1);
+ next if !$cfg->{ids}->{$id};
+
+ my $entry = dclone($cfg->{ids}->{$id});
+ $entry->{id} = $id;
+ $entry->{digest} = $cfg->{digest};
+
+ if (defined($check_node)) {
+ $entry->{checks} = [];
+ if (my $mappings = PVE::Mapping::Dir::get_node_mapping($cfg, $id, $check_node)) {
+ if (!scalar($mappings->@*)) {
+ push $entry->{checks}->@*, {
+ severity => 'warning',
+ message => "No mapping for node $check_node.",
+ };
+ }
+ for my $mapping ($mappings->@*) {
+ eval { PVE::Mapping::Dir::assert_valid($mapping) };
+ if (my $err = $@) {
+ push $entry->{checks}->@*, {
+ severity => 'error',
+ message => "Invalid configuration: $err",
+ };
+ }
+ }
+ }
+ }
+
+ push @$res, $entry;
+ }
+
+ return $res;
+ },
+});
+
+__PACKAGE__->register_method ({
+ name => 'get',
+ protected => 1,
+ path => '{id}',
+ method => 'GET',
+ description => "Get directory mapping.",
+ permissions => {
+ check =>['or',
+ ['perm', '/mapping/dir/{id}', ['Mapping.Use']],
+ ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+ ['perm', '/mapping/dir/{id}', ['Mapping.Audit']],
+ ],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ id => {
+ type => 'string',
+ format => 'pve-configid',
+ },
+ }
+ },
+ returns => { type => 'object' },
+ code => sub {
+ my ($param) = @_;
+
+ my $cfg = PVE::Mapping::Dir::config();
+ my $id = $param->{id};
+
+ my $entry = $cfg->{ids}->{$id};
+ die "mapping '$param->{id}' not found\n" if !defined($entry);
+
+ my $data = dclone($entry);
+
+ $data->{digest} = $cfg->{digest};
+
+ return $data;
+ }});
+
+__PACKAGE__->register_method ({
+ name => 'create',
+ protected => 1,
+ path => '',
+ method => 'POST',
+ description => "Create a new directory mapping.",
+ permissions => {
+ check => ['perm', '/mapping/dir', ['Mapping.Modify']],
+ },
+ parameters => PVE::Mapping::Dir->createSchema(1),
+ returns => {
+ type => 'null',
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $id = extract_param($param, 'id');
+
+ my $plugin = PVE::Mapping::Dir->lookup('dir');
+ my $opts = $plugin->check_config($id, $param, 1, 1);
+
+ my $map_list = $opts->{map};
+ PVE::Mapping::Dir::assert_valid_map_list($map_list);
+
+ PVE::Mapping::Dir::lock_dir_config(sub {
+ my $cfg = PVE::Mapping::Dir::config();
+
+ die "dir ID '$id' already defined\n" if defined($cfg->{ids}->{$id});
+
+ $cfg->{ids}->{$id} = $opts;
+
+ PVE::Mapping::Dir::write_dir_config($cfg);
+
+ }, "create directory mapping failed");
+
+ return;
+ },
+});
+
+__PACKAGE__->register_method ({
+ name => 'update',
+ protected => 1,
+ path => '{id}',
+ method => 'PUT',
+ description => "Update a directory mapping.",
+ permissions => {
+ check => ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+ },
+ parameters => PVE::Mapping::Dir->updateSchema(),
+ returns => {
+ type => 'null',
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $digest = extract_param($param, 'digest');
+ my $delete = extract_param($param, 'delete');
+ my $id = extract_param($param, 'id');
+
+ if ($delete) {
+ $delete = [ PVE::Tools::split_list($delete) ];
+ }
+
+ PVE::Mapping::Dir::lock_dir_config(sub {
+ my $cfg = PVE::Mapping::Dir::config();
+
+ PVE::Tools::assert_if_modified($cfg->{digest}, $digest) if defined($digest);
+
+ die "dir ID '$id' does not exist\n" if !defined($cfg->{ids}->{$id});
+
+ my $plugin = PVE::Mapping::Dir->lookup('dir');
+ my $opts = $plugin->check_config($id, $param, 1, 1);
+
+ my $map_list = $opts->{map};
+ PVE::Mapping::Dir::assert_valid_map_list($map_list);
+
+ my $data = $cfg->{ids}->{$id};
+
+ my $options = $plugin->private()->{options}->{dir};
+ PVE::SectionConfig::delete_from_config($data, $options, $opts, $delete);
+
+ $data->{$_} = $opts->{$_} for keys $opts->%*;
+
+ PVE::Mapping::Dir::write_dir_config($cfg);
+
+ }, "update directory mapping failed");
+
+ return;
+ },
+});
+
+__PACKAGE__->register_method ({
+ name => 'delete',
+ protected => 1,
+ path => '{id}',
+ method => 'DELETE',
+ description => "Remove directory mapping.",
+ permissions => {
+ check => [ 'perm', '/mapping/dir', ['Mapping.Modify']],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ id => {
+ type => 'string',
+ format => 'pve-configid',
+ },
+ }
+ },
+ returns => { type => 'null' },
+ code => sub {
+ my ($param) = @_;
+
+ my $id = $param->{id};
+
+ PVE::Mapping::Dir::lock_dir_config(sub {
+ my $cfg = PVE::Mapping::Dir::config();
+
+ if ($cfg->{ids}->{$id}) {
+ delete $cfg->{ids}->{$id};
+ }
+
+ PVE::Mapping::Dir::write_dir_config($cfg);
+
+ }, "delete dir mapping failed");
+
+ return;
+ }
+});
+
+1;
diff --git a/PVE/API2/Cluster/Mapping/Makefile b/PVE/API2/Cluster/Mapping/Makefile
index e7345ab4..5dbb3f5c 100644
--- a/PVE/API2/Cluster/Mapping/Makefile
+++ b/PVE/API2/Cluster/Mapping/Makefile
@@ -3,6 +3,7 @@ include ../../../../defines.mk
# for node independent, cluster-wide applicable, API endpoints
# ensure we do not conflict with files shipped by pve-cluster!!
PERLSOURCE= \
+ Dir.pm \
PCI.pm \
USB.pm
--
2.39.5
* [pve-devel] [PATCH manager v15 09/12] ui: add edit window for dir mappings
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 08/12] api: add resource map api endpoints for directories Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-04 12:21 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 10/12] ui: add resource mapping view for directories Markus Frank
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v15:
* removed announce-submounts
www/manager6/Makefile | 1 +
www/manager6/window/DirMapEdit.js | 202 ++++++++++++++++++++++++++++++
2 files changed, 203 insertions(+)
create mode 100644 www/manager6/window/DirMapEdit.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index c94a5cdf..4b8677e3 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -138,6 +138,7 @@ JSSRC= \
window/TreeSettingsEdit.js \
window/PCIMapEdit.js \
window/USBMapEdit.js \
+ window/DirMapEdit.js \
window/GuestImport.js \
ha/Fencing.js \
ha/GroupEdit.js \
diff --git a/www/manager6/window/DirMapEdit.js b/www/manager6/window/DirMapEdit.js
new file mode 100644
index 00000000..b635f34d
--- /dev/null
+++ b/www/manager6/window/DirMapEdit.js
@@ -0,0 +1,202 @@
+Ext.define('PVE.window.DirMapEditWindow', {
+ extend: 'Proxmox.window.Edit',
+
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ cbindData: function(initialConfig) {
+ let me = this;
+ me.isCreate = !me.name;
+ me.method = me.isCreate ? 'POST' : 'PUT';
+ me.hideMapping = !!me.entryOnly;
+ me.hideComment = me.name && !me.entryOnly;
+ me.hideNodeSelector = me.nodename || me.entryOnly;
+ me.hideNode = !me.nodename || !me.hideNodeSelector;
+ return {
+ name: me.name,
+ nodename: me.nodename,
+ };
+ },
+
+ submitUrl: function(_url, data) {
+ let me = this;
+ let name = me.isCreate ? '' : me.name;
+ return `/cluster/mapping/dir/${name}`;
+ },
+
+ title: gettext('Add Dir mapping'),
+
+ onlineHelp: 'resource_mapping',
+
+ method: 'POST',
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ onGetValues: function(values) {
+ let me = this;
+ let view = me.getView();
+ values.node ??= view.nodename;
+
+ let name = values.name;
+ let description = values.description;
+ let deletes = values.delete;
+
+ delete values.description;
+ delete values.name;
+ delete values.delete;
+
+ let map = [];
+ if (me.originalMap) {
+ map = PVE.Parser.filterPropertyStringList(me.originalMap, (e) => e.node !== values.node);
+ }
+ if (values.path) {
+ // TODO: Remove this when property string supports quotation of properties
+ if (!/^\/[^;,=()]+/.test(values.path)) {
+ let errMsg = 'these symbols are currently not allowed in path: ;,=()';
+ Ext.Msg.alert(gettext('Error'), errMsg);
+ throw errMsg;
+ }
+ map.push(PVE.Parser.printPropertyString(values));
+ }
+ values = { map };
+
+ if (description) {
+ values.description = description;
+ }
+ if (deletes && !view.isCreate) {
+ values.delete = deletes;
+ }
+ if (view.isCreate) {
+ values.id = name;
+ }
+
+ return values;
+ },
+
+ onSetValues: function(values) {
+ let me = this;
+ let view = me.getView();
+ me.originalMap = [...values.map];
+ let configuredNodes = [];
+ PVE.Parser.filterPropertyStringList(values.map, (e) => {
+ configuredNodes.push(e.node);
+ if (e.node === view.nodename) {
+ values = e;
+ }
+ return false;
+ });
+
+ me.lookup('nodeselector').disallowedNodes = configuredNodes;
+
+ return values;
+ },
+
+ init: function(view) {
+ let me = this;
+
+ if (!view.nodename) {
+ //throw "no nodename given";
+ }
+ },
+ },
+
+ items: [
+ {
+ xtype: 'inputpanel',
+ onGetValues: function(values) {
+ return this.up('window').getController().onGetValues(values);
+ },
+
+ onSetValues: function(values) {
+ return this.up('window').getController().onSetValues(values);
+ },
+
+ columnT: [
+ {
+ xtype: 'displayfield',
+ reference: 'directory-hint',
+ columnWidth: 1,
+ value: 'Make sure the directory exists.',
+ cbind: {
+ disabled: '{hideMapping}',
+ hidden: '{hideMapping}',
+ },
+ userCls: 'pmx-hint',
+ },
+ ],
+
+ column1: [
+ {
+ xtype: 'pmxDisplayEditField',
+ fieldLabel: gettext('Name'),
+ cbind: {
+ editable: '{!name}',
+ value: '{name}',
+ submitValue: '{isCreate}',
+ },
+ name: 'name',
+ allowBlank: false,
+ },
+ {
+ xtype: 'pveNodeSelector',
+ reference: 'nodeselector',
+ fieldLabel: gettext('Node'),
+ name: 'node',
+ cbind: {
+ disabled: '{hideNodeSelector}',
+ hidden: '{hideNodeSelector}',
+ },
+ allowBlank: false,
+ },
+ ],
+
+ column2: [
+ {
+ xtype: 'fieldcontainer',
+ defaultType: 'radiofield',
+ layout: 'fit',
+ cbind: {
+ disabled: '{hideMapping}',
+ hidden: '{hideMapping}',
+ },
+ items: [
+ {
+ xtype: 'textfield',
+ name: 'path',
+ reference: 'path',
+ value: '',
+ emptyText: gettext('/some/path'),
+ cbind: {
+ nodename: '{nodename}',
+ disabled: '{hideMapping}',
+ },
+ allowBlank: false,
+ fieldLabel: gettext('Path'),
+ },
+ ],
+ },
+ ],
+
+ columnB: [
+ {
+ xtype: 'fieldcontainer',
+ defaultType: 'radiofield',
+ layout: 'fit',
+ cbind: {
+ disabled: '{hideComment}',
+ hidden: '{hideComment}',
+ },
+ items: [
+ {
+ xtype: 'proxmoxtextfield',
+ fieldLabel: gettext('Comment'),
+ submitValue: true,
+ name: 'description',
+ deleteEmpty: true,
+ },
+ ],
+ },
+ ],
+ },
+ ],
+});
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH manager v15 10/12] ui: add resource mapping view for directories
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
` (8 preceding siblings ...)
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 09/12] ui: add edit window for dir mappings Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-04 12:21 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 11/12] ui: form: add selector for directory mappings Markus Frank
` (5 subsequent siblings)
15 siblings, 1 reply; 26+ messages in thread
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v15:
* removed announce-submounts
www/manager6/Makefile | 1 +
www/manager6/dc/Config.js | 10 +++++++++
www/manager6/dc/DirMapView.js | 38 +++++++++++++++++++++++++++++++++++
3 files changed, 49 insertions(+)
create mode 100644 www/manager6/dc/DirMapView.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 4b8677e3..57c4d377 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -189,6 +189,7 @@ JSSRC= \
dc/RealmSyncJob.js \
dc/PCIMapView.js \
dc/USBMapView.js \
+ dc/DirMapView.js \
lxc/CmdMenu.js \
lxc/Config.js \
lxc/CreateWizard.js \
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index 74728c83..2958fb88 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -329,6 +329,16 @@ Ext.define('PVE.dc.Config', {
title: gettext('USB Devices'),
flex: 1,
},
+ {
+ xtype: 'splitter',
+ collapsible: false,
+ performCollapse: false,
+ },
+ {
+ xtype: 'pveDcDirMapView',
+ title: gettext('Directories'),
+ flex: 1,
+ },
],
},
);
diff --git a/www/manager6/dc/DirMapView.js b/www/manager6/dc/DirMapView.js
new file mode 100644
index 00000000..a50a7ae7
--- /dev/null
+++ b/www/manager6/dc/DirMapView.js
@@ -0,0 +1,38 @@
+Ext.define('pve-resource-dir-tree', {
+ extend: 'Ext.data.Model',
+ idProperty: 'internalId',
+ fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
+});
+
+Ext.define('PVE.dc.DirMapView', {
+ extend: 'PVE.tree.ResourceMapTree',
+ alias: 'widget.pveDcDirMapView',
+
+ editWindowClass: 'PVE.window.DirMapEditWindow',
+ baseUrl: '/cluster/mapping/dir',
+ mapIconCls: 'fa fa-folder',
+ entryIdProperty: 'path',
+
+ store: {
+ sorters: 'text',
+ model: 'pve-resource-dir-tree',
+ data: {},
+ },
+
+ columns: [
+ {
+ xtype: 'treecolumn',
+ text: gettext('ID/Node'),
+ dataIndex: 'text',
+ width: 200,
+ },
+ {
+ header: gettext('Comment'),
+ dataIndex: 'description',
+ renderer: function(value, _meta, record) {
+ return Ext.String.htmlEncode(value ?? record.data.comment);
+ },
+ flex: 1,
+ },
+ ],
+});
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH manager v15 11/12] ui: form: add selector for directory mappings
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
` (9 preceding siblings ...)
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 10/12] ui: add resource mapping view for directories Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
` (4 subsequent siblings)
15 siblings, 0 replies; 26+ messages in thread
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
no changes in v15
www/manager6/Makefile | 1 +
www/manager6/form/DirMapSelector.js | 63 +++++++++++++++++++++++++++++
2 files changed, 64 insertions(+)
create mode 100644 www/manager6/form/DirMapSelector.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 57c4d377..fabbdd24 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -35,6 +35,7 @@ JSSRC= \
form/ContentTypeSelector.js \
form/ControllerSelector.js \
form/DayOfWeekSelector.js \
+ form/DirMapSelector.js \
form/DiskFormatSelector.js \
form/DiskStorageSelector.js \
form/FileSelector.js \
diff --git a/www/manager6/form/DirMapSelector.js b/www/manager6/form/DirMapSelector.js
new file mode 100644
index 00000000..473a2ffe
--- /dev/null
+++ b/www/manager6/form/DirMapSelector.js
@@ -0,0 +1,63 @@
+Ext.define('PVE.form.DirMapSelector', {
+ extend: 'Proxmox.form.ComboGrid',
+ alias: 'widget.pveDirMapSelector',
+
+ store: {
+ fields: ['name', 'path'],
+ filterOnLoad: true,
+ sorters: [
+ {
+ property: 'id',
+ direction: 'ASC',
+ },
+ ],
+ },
+
+ allowBlank: false,
+ autoSelect: false,
+ displayField: 'id',
+ valueField: 'id',
+
+ listConfig: {
+ columns: [
+ {
+ header: gettext('Directory ID'),
+ dataIndex: 'id',
+ flex: 1,
+ },
+ {
+ header: gettext('Comment'),
+ dataIndex: 'description',
+ flex: 1,
+ },
+ ],
+ },
+
+ setNodename: function(nodename) {
+ var me = this;
+
+ if (!nodename || me.nodename === nodename) {
+ return;
+ }
+
+ me.nodename = nodename;
+
+ me.store.setProxy({
+ type: 'proxmox',
+ url: `/api2/json/cluster/mapping/dir?check-node=${nodename}`,
+ });
+
+ me.store.load();
+ },
+
+ initComponent: function() {
+ var me = this;
+
+ var nodename = me.nodename;
+ me.nodename = undefined;
+
+ me.callParent();
+
+ me.setNodename(nodename);
+ },
+});
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH manager v15 12/12] ui: add options to add virtio-fs to qemu config
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
` (10 preceding siblings ...)
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 11/12] ui: form: add selector for directory mappings Markus Frank
@ 2025-04-03 10:34 ` Markus Frank
2025-04-04 12:21 ` Daniel Kral
2025-04-03 11:18 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
` (3 subsequent siblings)
15 siblings, 1 reply; 26+ messages in thread
From: Markus Frank @ 2025-04-03 10:34 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
v15:
* moved all options except dirid to an advanced tab
* improved field labels
www/manager6/Makefile | 1 +
www/manager6/Utils.js | 1 +
www/manager6/qemu/HardwareView.js | 19 ++++
www/manager6/qemu/VirtiofsEdit.js | 138 ++++++++++++++++++++++++++++++
4 files changed, 159 insertions(+)
create mode 100644 www/manager6/qemu/VirtiofsEdit.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index fabbdd24..fdf0e816 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -271,6 +271,7 @@ JSSRC= \
qemu/Smbios1Edit.js \
qemu/SystemEdit.js \
qemu/USBEdit.js \
+ qemu/VirtiofsEdit.js \
sdn/Browser.js \
sdn/ControllerView.js \
sdn/Status.js \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index aa415759..5a642076 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1645,6 +1645,7 @@ Ext.define('PVE.Utils', {
serial: 4,
rng: 1,
tpmstate: 1,
+ virtiofs: 10,
},
// we can have usb6 and up only for specific machine/ostypes
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index c6d193fc..34aeb51e 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -319,6 +319,16 @@ Ext.define('PVE.qemu.HardwareView', {
never_delete: !caps.nodes['Sys.Console'],
header: gettext("VirtIO RNG"),
};
+ for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+ let confid = "virtiofs" + i.toString();
+ rows[confid] = {
+ group: 50,
+ order: i,
+ iconCls: 'folder',
+ editor: 'PVE.qemu.VirtiofsEdit',
+ header: gettext('Virtiofs') + ' (' + confid +')',
+ };
+ }
var sorterFn = function(rec1, rec2) {
var v1 = rec1.data.key;
@@ -595,6 +605,7 @@ Ext.define('PVE.qemu.HardwareView', {
const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
+ const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
@@ -604,6 +615,7 @@ Ext.define('PVE.qemu.HardwareView', {
me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
me.down('#addTpmState').setDisabled(noVMConfigDiskPerm || isAtLimit('tpmstate'));
+ me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
if (!rec) {
@@ -748,6 +760,13 @@ Ext.define('PVE.qemu.HardwareView', {
disabled: !caps.nodes['Sys.Console'],
handler: editorFactory('RNGEdit'),
},
+ {
+ text: gettext("Virtiofs"),
+ itemId: 'addVirtiofs',
+ iconCls: 'fa fa-folder',
+ disabled: !caps.nodes['Sys.Console'],
+ handler: editorFactory('VirtiofsEdit'),
+ },
],
}),
},
diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
new file mode 100644
index 00000000..267034fe
--- /dev/null
+++ b/www/manager6/qemu/VirtiofsEdit.js
@@ -0,0 +1,138 @@
+Ext.define('PVE.qemu.VirtiofsInputPanel', {
+ extend: 'Proxmox.panel.InputPanel',
+ xtype: 'pveVirtiofsInputPanel',
+ onlineHelp: 'qm_virtiofs',
+
+ insideWizard: false,
+
+ onGetValues: function(values) {
+ var me = this;
+ var confid = me.confid;
+ var params = {};
+ delete values.delete;
+ params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
+ return params;
+ },
+
+ setSharedfiles: function(confid, data) {
+ var me = this;
+ me.confid = confid;
+ me.virtiofs = data;
+ me.setValues(me.virtiofs);
+ },
+ initComponent: function() {
+ let me = this;
+
+ me.nodename = me.pveSelNode.data.node;
+ if (!me.nodename) {
+ throw "no node name specified";
+ }
+ me.items = [
+ {
+ xtype: 'pveDirMapSelector',
+ emptyText: 'dirid',
+ nodename: me.nodename,
+ fieldLabel: gettext('Directory ID'),
+ name: 'dirid',
+ allowBlank: false,
+ },
+ ];
+ me.advancedItems = [
+ {
+ xtype: 'proxmoxKVComboBox',
+ fieldLabel: gettext('Cache'),
+ name: 'cache',
+ value: '__default__',
+ deleteDefaultValue: false,
+ comboItems: [
+ ['__default__', Proxmox.Utils.defaultText + ' (auto)'],
+ ['auto', 'auto'],
+ ['always', 'always'],
+ ['metadata', 'metadata'],
+ ['never', 'never'],
+ ],
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Writeback cache'),
+ name: 'writeback',
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Enable xattr support'),
+ name: 'expose-xattr',
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Enable POSIX ACLs support (implies xattr support)'),
+ name: 'expose-acl',
+ listeners: {
+ change: function(f, value) {
+ let xattr = me.down('field[name=expose-xattr]');
+ xattr.setDisabled(value);
+ xattr.setValue(value);
+ },
+ },
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Allow Direct IO'),
+ name: 'direct-io',
+ },
+ ];
+
+ me.virtiofs = {};
+ me.confid = 'virtiofs0';
+ me.callParent();
+ },
+});
+
+Ext.define('PVE.qemu.VirtiofsEdit', {
+ extend: 'Proxmox.window.Edit',
+
+ subject: gettext('Filesystem Passthrough'),
+
+ initComponent: function() {
+ var me = this;
+
+ me.isCreate = !me.confid;
+
+ var ipanel = Ext.create('PVE.qemu.VirtiofsInputPanel', {
+ confid: me.confid,
+ pveSelNode: me.pveSelNode,
+ isCreate: me.isCreate,
+ });
+
+ Ext.applyIf(me, {
+ items: ipanel,
+ });
+
+ me.callParent();
+
+ me.load({
+ success: function(response) {
+ me.conf = response.result.data;
+ var i, confid;
+ if (!me.isCreate) {
+ var value = me.conf[me.confid];
+ var virtiofs = PVE.Parser.parsePropertyString(value, "dirid");
+ if (!virtiofs) {
+ Ext.Msg.alert(gettext('Error'), 'Unable to parse virtiofs options');
+ me.close();
+ return;
+ }
+ ipanel.setSharedfiles(me.confid, virtiofs);
+ } else {
+ for (i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+ confid = 'virtiofs' + i.toString();
+ if (!Ext.isDefined(me.conf[confid])) {
+ me.confid = confid;
+ break;
+ }
+ }
+ ipanel.setSharedfiles(me.confid, {});
+ }
+ },
+ });
+ },
+});
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
` (11 preceding siblings ...)
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
@ 2025-04-03 11:18 ` Markus Frank
2025-04-03 14:11 ` Lukas Wagner
` (2 subsequent siblings)
15 siblings, 0 replies; 26+ messages in thread
From: Markus Frank @ 2025-04-03 11:18 UTC (permalink / raw)
To: pve-devel
I forgot one v15 change.
On 2025-04-03 12:34, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
>
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
>
> I did not get virtiofsd to run with run_command without creating
> zombie processes after stutdown. So I replaced run_command with exec
> for now. Maybe someone can find out why this happens.
>
> changes in v15:
> * removed announce-submounts option altogether and always set it for
> virtiofsd.
> * added die if both memory hotplug and virtiofs are enabled
> * added fstab entry example in the docs
> * assert_valid_map_list: only run assert_valid for the mappings on the
> current node
> * see individual patches
* added metadata cache option
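For reference, with that option a config entry could then look like this (hypothetical mapping id):
```
virtiofs0: shared-dir,cache=metadata
```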
>
> changes in v14:
> * disallow commas and equal signs in path until the path can be quoted
> in property strings
> * addressed style nits and improved formatting
> * use max_virtiofs() in check_vm_create_dir_perm
> * removed unnecessary checks after parse_property_string
> * find_on_current_node returns only one entry
> * improved docs
> * added missing imports/uses
>
> changes in v13:
> * removed acl/xattr attributes in node config
> * renamed acl/xattr in virtiofs qemu config to expose-acl/expose-xattr
> * renamed submounts in node config to announce-submounts
> * the "disable snapshot (with RAM) and hibernate with virtio-fs devices"
> patch now uses the check_non_migratable_resources function
> * rewritten the part about announce-submounts in pve-docs patch
>
> Changes in v12:
> * rebase to master as most patches could not be applied anymore
>
> Changes in v11:
> * made submounts option on by default in WebUI and section config
> * PVE::QemuServer::Virtiofs dependency removed in QemuServer/Memory.pm
> * Minor changes to function/variable names
> * Disable snapshots (with RAM) and hibernate due to incompatibility
>
>
>
> cluster:
>
> Markus Frank (1):
> add mapping/dir.cfg for resource mapping
>
> src/PVE/Cluster.pm | 1 +
> src/pmxcfs/status.c | 1 +
> 2 files changed, 2 insertions(+)
>
>
> guest-common:
>
> Markus Frank (1):
> add dir mapping section config
>
> src/Makefile | 1 +
> src/PVE/Mapping/Dir.pm | 192 +++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 193 insertions(+)
> create mode 100644 src/PVE/Mapping/Dir.pm
>
>
> docs:
>
> Markus Frank (1):
> add doc section for the shared filesystem virtio-fs
>
> qm.adoc | 102 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 100 insertions(+), 2 deletions(-)
>
>
>
> qemu-server:
>
> Markus Frank (4):
> control: add virtiofsd as runtime dependency for qemu-server
> fix #1027: virtio-fs support
> migration: check_local_resources for virtiofs
> disable snapshot (with RAM) and hibernate with virtio-fs devices
>
> PVE/API2/Qemu.pm | 41 ++++++-
> PVE/QemuServer.pm | 44 +++++++-
> PVE/QemuServer/Makefile | 3 +-
> PVE/QemuServer/Memory.pm | 25 +++--
> PVE/QemuServer/Virtiofs.pm | 209 +++++++++++++++++++++++++++++++++++
> debian/control | 1 +
> test/MigrationTest/Shared.pm | 7 ++
> 7 files changed, 316 insertions(+), 14 deletions(-)
> create mode 100644 PVE/QemuServer/Virtiofs.pm
>
>
>
> manager:
>
> Markus Frank (5):
> api: add resource map api endpoints for directories
> ui: add edit window for dir mappings
> ui: add resource mapping view for directories
> ui: form: add selector for directory mappings
> ui: add options to add virtio-fs to qemu config
>
> PVE/API2/Cluster/Mapping.pm | 7 +
> PVE/API2/Cluster/Mapping/Dir.pm | 308 ++++++++++++++++++++++++++++
> PVE/API2/Cluster/Mapping/Makefile | 1 +
> www/manager6/Makefile | 4 +
> www/manager6/Utils.js | 1 +
> www/manager6/dc/Config.js | 10 +
> www/manager6/dc/DirMapView.js | 38 ++++
> www/manager6/form/DirMapSelector.js | 63 ++++++
> www/manager6/qemu/HardwareView.js | 19 ++
> www/manager6/qemu/VirtiofsEdit.js | 138 +++++++++++++
> www/manager6/window/DirMapEdit.js | 202 ++++++++++++++++++
> 11 files changed, 791 insertions(+)
> create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
> create mode 100644 www/manager6/dc/DirMapView.js
> create mode 100644 www/manager6/form/DirMapSelector.js
> create mode 100644 www/manager6/qemu/VirtiofsEdit.js
> create mode 100644 www/manager6/window/DirMapEdit.js
>
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
` (12 preceding siblings ...)
2025-04-03 11:18 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
@ 2025-04-03 14:11 ` Lukas Wagner
2025-04-04 8:27 ` Lukas Wagner
2025-04-04 12:22 ` Daniel Kral
15 siblings, 0 replies; 26+ messages in thread
From: Lukas Wagner @ 2025-04-03 14:11 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 2025-04-03 12:34, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
>
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
>
> I did not get virtiofsd to run with run_command without creating
> zombie processes after stutdown. So I replaced run_command with exec
> for now. Maybe someone can find out why this happens.
>
> changes in v15:
> * removed announce-submounts option altogether and always set it for
> virtiofsd.
> * added die if both memory hotplug and virtiofs are enabled
> * added fstab entry example in the docs
> * assert_valid_map_list: only run assert_valid for the mappings on the
> current node
> * see individual patches
>
Cool stuff! Tried it out, seems to work as promised (only tested a Linux guest though)
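In case someone else wants to give it a try: inside a Linux guest the share can be
mounted via the mount tag, which equals the dirid (hypothetical id 'shared-dir' here):
```
mount -t virtiofs shared-dir /mnt/shared-dir
```
or persistently via an fstab entry:
```
shared-dir /mnt/shared-dir virtiofs defaults 0 0
```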
Tested-by: Lukas Wagner <l.wagner@proxmox.com>
--
- Lukas
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
` (13 preceding siblings ...)
2025-04-03 14:11 ` Lukas Wagner
@ 2025-04-04 8:27 ` Lukas Wagner
2025-04-04 15:08 ` Markus Frank
2025-04-04 12:22 ` Daniel Kral
15 siblings, 1 reply; 26+ messages in thread
From: Lukas Wagner @ 2025-04-04 8:27 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 2025-04-03 12:34, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
>
Some thoughts - no blockers though, can easily be done in follow-ups:
- Thinking through my potential use-cases for a feature like this, I think it would be pretty nice
to expose the 'readonly' flag [1] under 'Advanced', the same way you did it with e.g. "Allow Direct IO"
- I'd remove the "Make sure the directory exists." banner in the "Add Dir Mapping" dialog.
People get an error message anyway when they try to create a mapping and the directory
does not exist.
- I tried this feature out without reading any documentation first, basically to check for any obvious
UX issues that are not as clearly noticeable if you know how everything works.
Trying to add a filesystem passthrough, my first instinct was to go to the VM's Hardware
settings. I was presented with a new "Virtiofs" option under "Add".
Maybe this could be called "Directory Passthrough" or "Filesystem Passthrough" to better
convey what it does? Users might not be aware of what "Virtiofs" is if they haven't read the docs.
Inside the new "Add" dialog I was then stuck at first. It allowed me to select a Directory ID,
but I hadn't created one yet. As a user it was not completely clear what the next step would be
at this point.
Maybe this could show a small hint about where to create
new mappings? e.g. "Directory Mappings can be managed under Datacenter → Resource Mapping" or similar.
- In the documentation it is not really mentioned that one can create a
mapping from the GUI. The text reads as if you have to use the `pvesh` command to create it.
- Maybe some table for the available options (e.g. direct io, cache, etc.) with the effects
and possible values would be easier to comprehend than embedding them in free-flowing text, along the lines of the sketch below.
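Just a rough sketch based on the option descriptions in the qemu-server patch (qm.adoc uses AsciiDoc tables):
```
[options="header"]
|===
| Option | Values | Effect
| cache | auto, always, metadata, never | caching policy used by virtiofsd (default: auto)
| direct-io | 0, 1 | honor the O_DIRECT flag passed down by guest applications
| expose-xattr | 0, 1 | enable support for extended attributes
| expose-acl | 0, 1 | enable support for POSIX ACLs (implies xattr)
| writeback | 0, 1 | enable the writeback cache
|===
```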
As I said, no blockers from my side though :) Great work!
[1] https://gitlab.com/virtio-fs/virtiofsd#faq
--
- Lukas
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH guest-common v15 2/12] add dir mapping section config
2025-04-03 10:34 ` [pve-devel] [PATCH guest-common v15 2/12] add dir mapping section config Markus Frank
@ 2025-04-04 12:17 ` Daniel Kral
0 siblings, 0 replies; 26+ messages in thread
From: Daniel Kral @ 2025-04-04 12:17 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
Two comments inline.
On 4/3/25 12:34, Markus Frank wrote:
> Adds a config file for directories by using a 'map' property string for
> each node mapping.
>
> example config:
> ```
> some-dir-id
> map node=node1,path=/path/to/share/
> map node=node2,path=/different/location/
> ```
>
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v15:
> * removed announce-submounts option altogether and always set it for
> virtiofsd
> * assert_valid_map_list: only run assert_valid for the mappings on the
> current node
>
> src/Makefile | 1 +
> src/PVE/Mapping/Dir.pm | 192 +++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 193 insertions(+)
> create mode 100644 src/PVE/Mapping/Dir.pm
>
> diff --git a/src/Makefile b/src/Makefile
> index cbc40c1..030e7f7 100644
> --- a/src/Makefile
> +++ b/src/Makefile
> @@ -15,6 +15,7 @@ install: PVE
> install -m 0644 PVE/StorageTunnel.pm ${PERL5DIR}/PVE/
> install -m 0644 PVE/Tunnel.pm ${PERL5DIR}/PVE/
> install -d ${PERL5DIR}/PVE/Mapping
> + install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
> install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
> install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
> install -d ${PERL5DIR}/PVE/VZDump
> diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
> new file mode 100644
> index 0000000..4673f83
> --- /dev/null
> +++ b/src/PVE/Mapping/Dir.pm
> @@ -0,0 +1,192 @@
> +package PVE::Mapping::Dir;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
> +use PVE::INotify;
> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
> +use PVE::SectionConfig;
> +
> +use base qw(PVE::SectionConfig);
> +
> +my $FILENAME = 'mapping/dir.cfg';
> +
> +cfs_register_file($FILENAME,
> + sub { __PACKAGE__->parse_config(@_); },
> + sub { __PACKAGE__->write_config(@_); });
> +
> +
> +# so we don't have to repeat the type every time
> +sub parse_section_header {
> + my ($class, $line) = @_;
> +
> + if ($line =~ m/^(\S+)\s*$/) {
> + my $id = $1;
> + my $errmsg = undef; # set if you want to skip whole section
> + eval { PVE::JSONSchema::pve_verify_configid($id) };
> + $errmsg = $@ if $@;
> + my $config = {}; # to return additional attributes
> + return ('dir', $id, $errmsg, $config);
> + }
> + return undef;
> +}
> +
> +sub format_section_header {
> + my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
> +
> + return "$sectionId\n";
> +}
> +
> +sub type {
> + return 'dir';
> +}
> +
> +# temporary path format that also disallows commas and equal signs
> +# TODO: Remove this when property_string supports quotation of properties
> +PVE::JSONSchema::register_format('pve-storage-path-in-property-string', \&verify_path);
> +sub verify_path {
> + my ($path, $noerr) = @_;
> +
> + if ($path !~ m|^/[^;,=\(\)]+|) {
> + return undef if $noerr;
> + die "Value does not look like a valid absolute path."
> + ." These symbols are currently not allowed in path: ;,=()\n";
> + }
> + return $path;
> +}
nit: usually I've seen these defined as `sub pve_verify_*`, but I'm not
sure if we have anything set in stone here.
> +
> +my $map_fmt = {
> + node => get_standard_option('pve-node'),
> + path => {
> + description => "Absolute directory path that should be shared with the guest.",
> + type => 'string',
> + format => 'pve-storage-path-in-property-string',
> + },
> +};
> +
> +my $defaultData = {
> + propertyList => {
> + id => {
> + type => 'string',
> + description => "The ID of the directory mapping",
> + format => 'pve-configid',
> + },
> + description => {
> + type => 'string',
> + description => "Description of the directory mapping",
> + optional => 1,
> + maxLength => 4096,
> + },
> + map => {
> + type => 'array',
> + description => 'A list of maps for the cluster nodes.',
> + optional => 1,
> + items => {
> + type => 'string',
> + format => $map_fmt,
> + },
> + },
> + },
> +};
> +
> +sub private {
> + return $defaultData;
> +}
> +
> +sub options {
> + return {
> + description => { optional => 1 },
> + map => {},
> + };
> +}
> +
> +sub assert_valid {
> + my ($dir_cfg) = @_;
> +
> + my $path = $dir_cfg->{path};
> +
> + verify_path($path);
> +
> + if (! -e $path) {
> + die "Path $path does not exist\n";
> + } elsif (! -d $path) {
> + die "Path $path exists, but is not a directory\n";
> + }
> +
> + return 1;
> +};
> +
> +sub assert_valid_map_list {
> + my ($map_list) = @_;
> +
> + my $nodename = PVE::INotify::nodename();
> +
> + my %count;
> + for my $map (@$map_list) {
> + my $entry = parse_property_string($map_fmt, $map);
> + if ($entry->{node} eq $nodename) {
> + assert_valid($entry);
> + }
I see why this wouldn't work otherwise, but it's a shame that when
creating dir mappings for multiple nodes from a single node, whether
the path actually exists in the filesystem is only checked for the
current node when adding the dir mapping.
It would be nice if we could error out for the other nodes here too,
but it's not a deal breaker, as this correctly asserts the important
part, namely when starting the VM.
> + $count{$entry->{node}}++;
> + }
> + for my $node (keys %count) {
> + if ($count{$node} > 1) {
> + die "Node '$node' is specified $count{$node} times.\n";
> + }
> + }
> +}
> +
> +sub config {
> + return cfs_read_file($FILENAME);
> +}
> +
> +sub lock_dir_config {
> + my ($code, $errmsg) = @_;
> +
> + cfs_lock_file($FILENAME, undef, $code);
> + if (my $err = $@) {
> + $errmsg ? die "$errmsg: $err" : die $err;
> + }
> +}
> +
> +sub write_dir_config {
> + my ($cfg) = @_;
> +
> + cfs_write_file($FILENAME, $cfg);
> +}
> +
> +sub find_on_current_node {
> + my ($id) = @_;
> +
> + my $cfg = config();
> + my $node = PVE::INotify::nodename();
> +
> + my $node_mapping = get_node_mapping($cfg, $id, $node);
> + if (@{$node_mapping} > 1) {
> + die "More than than one directory mapping for node $node.\n";
> + }
> + return $node_mapping->[0];
> +}
> +
> +sub get_node_mapping {
> + my ($cfg, $id, $nodename) = @_;
> +
> + return undef if !defined($cfg->{ids}->{$id});
> +
> + my $res = [];
> + my $mapping_list = $cfg->{ids}->{$id}->{map};
> + for my $map (@{$mapping_list}) {
> + my $entry = eval { parse_property_string($map_fmt, $map) };
> + warn $@ if $@;
> + if ($entry && $entry->{node} eq $nodename) {
> + push $res->@*, $entry;
> + }
> + }
> + return $res;
> +}
> +
> +PVE::Mapping::Dir->register();
> +PVE::Mapping::Dir->init();
> +
> +1;
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH qemu-server v15 6/12] migration: check_local_resources for virtiofs
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 6/12] migration: check_local_resources for virtiofs Markus Frank
@ 2025-04-04 12:18 ` Daniel Kral
0 siblings, 0 replies; 26+ messages in thread
From: Daniel Kral @ 2025-04-04 12:18 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
This patch does not apply cleanly anymore, since some other patches
have been applied upstream since the last version.
A comment about that inline.
On 4/3/25 12:34, Markus Frank wrote:
> add dir mapping checks to check_local_resources
>
> Since the VM needs to be powered off for migration, migration should
> work with a directory on shared storage with all caching settings.
>
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v15:
> * removed unnecessary "if ($entry->{dirid})" check
>
> PVE/QemuServer.pm | 10 +++++++++-
> test/MigrationTest/Shared.pm | 7 +++++++
> 2 files changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 558b924d..5e9c3981 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -2486,6 +2486,7 @@ sub check_local_resources {
> my $nodelist = PVE::Cluster::get_nodelist();
> my $pci_map = PVE::Mapping::PCI::config();
> my $usb_map = PVE::Mapping::USB::config();
> + my $dir_map = PVE::Mapping::Dir::config();
>
> my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
>
> @@ -2497,6 +2498,8 @@ sub check_local_resources {
> $entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
> } elsif ($type eq 'usb') {
> $entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
> + } elsif ($type eq 'dir') {
> + $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
> }
> if (!scalar($entry->@*)) {
> push @{$missing_mappings_by_node->{$node}}, $key;
> @@ -2525,9 +2528,14 @@ sub check_local_resources {
> push @$mapped_res, $k;
> }
> }
> + if ($k =~ m/^virtiofs/) {
> + my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
> + $add_missing_mapping->('dir', $k, $entry->{dirid});
> + push @$mapped_res, $k;
0f55e8be ("api: migration preconditions: return detailed info for
mapped-resources") changed the $mapped_res to a hash reference.
I've replaced this locally on my test nodes with to make it work again:
$mapped_res->{$k} = { name => $entry->{dirid} };
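In context, the hunk would then look roughly like this (just a sketch of the local adaptation):
```
if ($k =~ m/^virtiofs/) {
    my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
    $add_missing_mapping->('dir', $k, $entry->{dirid});
    # $mapped_res is a hash reference since 0f55e8be
    $mapped_res->{$k} = { name => $entry->{dirid} };
}
```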
> + }
> # sockets are safe: they will recreated be on the target side post-migrate
> next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
> - push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
> + push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
> }
>
> die "VM uses local resources\n" if scalar @loc_res && !$noerr;
> diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
> index e69bf84f..f5fb70ff 100644
> --- a/test/MigrationTest/Shared.pm
> +++ b/test/MigrationTest/Shared.pm
> @@ -90,6 +90,13 @@ $mapping_pci_module->mock(
> },
> );
>
> +our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
> +$mapping_dir_module->mock(
> + config => sub {
> + return {};
> + },
> +);
> +
> our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
> $ha_config_module->mock(
> vm_is_ha_managed => sub {
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH qemu-server v15 5/7] fix #1027: virtio-fs support
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 5/7] fix #1027: virtio-fs support Markus Frank
@ 2025-04-04 12:19 ` Daniel Kral
0 siblings, 0 replies; 26+ messages in thread
From: Daniel Kral @ 2025-04-04 12:19 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
Some comments inline about undefined values.
Also two small notes about testing the assertions, both of which work as
expected (die for Windows + POSIX ACLs and for memory hotplug + virtiofs).
On 4/3/25 12:34, Markus Frank wrote:
> add support for sharing directories with a guest vm.
>
> virtio-fs needs virtiofsd to be started.
> In order to start virtiofsd as a process (despite being a daemon it is
> does not run in the background), a double-fork is used.
>
> virtiofsd should close itself together with QEMU.
>
> There are the parameters dirid and the optional parameters direct-io,
> cache and writeback. Additionally the expose-xattr & expose-acl
> parameter can be set to expose xattr & acl settings from the shared
> filesystem to the guest system.
>
> The dirid gets mapped to the path on the current node and is also used
> as a mount tag (name used to mount the device on the guest).
>
> example config:
> ```
> virtiofs0: foo,direct-io=1,cache=always,expose-acl=1
> virtiofs1: dirid=bar,cache=never,expose-xattr=1,writeback=1
> ```
>
> For information on the optional parameters see the coherent doc patch
> and the official gitlab README:
> https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md
>
> Also add a permission check for virtiofs directory access.
>
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v15:
> * removed announce-submounts option altogether and always set it for
> virtiofsd
> * added die if both memory hotplug and virtiofs are enabled
> * changed parameter order of PVE::QemuServer::Memory::config so that
> $cmd and $machine_flags are last.
> * die instead of warn in assert_virtiofs_config when acl is enabled for
> a windows VM
>
> PVE/API2/Qemu.pm | 41 +++++++-
> PVE/QemuServer.pm | 29 ++++-
> PVE/QemuServer/Makefile | 3 +-
> PVE/QemuServer/Memory.pm | 25 +++--
> PVE/QemuServer/Virtiofs.pm | 209 +++++++++++++++++++++++++++++++++++++
> 5 files changed, 296 insertions(+), 11 deletions(-)
> create mode 100644 PVE/QemuServer/Virtiofs.pm
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 156b1c7b..5a695306 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -39,6 +39,7 @@ use PVE::QemuServer::MetaInfo;
> use PVE::QemuServer::PCI;
> use PVE::QemuServer::QMPHelpers;
> use PVE::QemuServer::USB;
> +use PVE::QemuServer::Virtiofs qw(max_virtiofs);
> use PVE::QemuMigrate;
> use PVE::RPCEnvironment;
> use PVE::AccessControl;
> @@ -803,6 +804,33 @@ my sub check_vm_create_hostpci_perm {
> return 1;
> };
>
> +my sub check_dir_perm {
> + my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
> +
> + return 1 if $authuser eq 'root@pam';
> +
> + $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
> +
> + my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
> + $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
> +
> + return 1;
> +};
> +
> +my sub check_vm_create_dir_perm {
> + my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
> +
> + return 1 if $authuser eq 'root@pam';
> +
> + for (my $i = 0; $i < max_virtiofs(); $i++) {
> + my $opt = "virtiofs$i";
> + next if !$param->{$opt};
> + check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
> + }
> +
> + return 1;
> +};
> +
> my $check_vm_modify_config_perm = sub {
> my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
>
> @@ -813,7 +841,7 @@ my $check_vm_modify_config_perm = sub {
> # else, as there the permission can be value dependent
> next if PVE::QemuServer::is_valid_drivename($opt);
> next if $opt eq 'cdrom';
> - next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
> + next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
> next if $opt eq 'tags';
>
>
> @@ -1116,6 +1144,7 @@ __PACKAGE__->register_method({
> &$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
> check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
> check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
> + check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
>
> PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
> &$check_cpu_model_access($rpcenv, $authuser, $param);
> @@ -2007,6 +2036,10 @@ my $update_vm_api = sub {
> check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
> PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
> PVE::QemuConfig->write_config($vmid, $conf);
> + } elsif ($opt =~ m/^virtiofs\d$/) {
> + check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
> + PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
> + PVE::QemuConfig->write_config($vmid, $conf);
> } elsif ($opt eq 'tags') {
> assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
> delete $conf->{$opt};
> @@ -2097,6 +2130,12 @@ my $update_vm_api = sub {
> }
> check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
> $conf->{pending}->{$opt} = $param->{$opt};
> + } elsif ($opt =~ m/^virtiofs\d$/) {
> + if (my $oldvalue = $conf->{$opt}) {
> + check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
> + }
> + check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
> + $conf->{pending}->{$opt} = $param->{$opt};
> } elsif ($opt eq 'tags') {
> assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
> $conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index ffd5d56b..558b924d 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -33,6 +33,7 @@ use PVE::Exception qw(raise raise_param_exc);
> use PVE::Format qw(render_duration render_bytes);
> use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
> use PVE::HA::Config;
> +use PVE::Mapping::Dir;
> use PVE::Mapping::PCI;
> use PVE::Mapping::USB;
> use PVE::INotify;
> @@ -62,6 +63,7 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
> use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
> use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
> use PVE::QemuServer::USB;
> +use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
>
> my $have_sdn;
> eval {
> @@ -948,6 +950,10 @@ my $netdesc = {
>
> PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
>
> +for (my $i = 0; $i < max_virtiofs(); $i++) {
> + $confdesc->{"virtiofs$i"} = get_standard_option('pve-qm-virtiofs');
> +}
> +
> my $ipconfig_fmt = {
> ip => {
> type => 'string',
> @@ -3707,8 +3713,18 @@ sub config_to_command {
> push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough);
> }
>
> + my $virtiofs_enabled = PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
> +
> PVE::QemuServer::Memory::config(
> - $conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd);
> + $conf,
> + $vmid,
> + $sockets,
> + $cores,
> + $hotplug_features->{memory},
> + $virtiofs_enabled,
> + $cmd,
> + $machineFlags,
> + );
>
> push @$cmd, '-S' if $conf->{freeze};
>
> @@ -3998,6 +4014,8 @@ sub config_to_command {
> push @$machineFlags, 'confidential-guest-support=sev0';
> }
>
> + PVE::QemuServer::Virtiofs::config($conf, $vmid, $devices);
> +
If a VM has a virtiofs entry in its config whose dir mapping only
defines entries for nodes other than the one the VM is started on,
then the start will fail with a rather weird error message:
Use of uninitialized value $path in pattern match (m//) at
/usr/share/perl5/PVE/Mapping/Dir.pm line 51.
TASK ERROR: directory mapping invalid: Value does not look like a valid
absolute path. These symbols are currently not allowed in path: ;,=()
This is because config() calls assert_virtiofs_config(), but
find_on_current_node() returns undef, so assert_valid() ends up
checking an undefined value against the regex.
Since it does make sense to allow VMs to have virtiofs configured while
only specific nodes have a dir mapping, it would be great to improve the
behavior when the VM is started on a node without a dir mapping: either
drop the virtiofs config there or error out entirely with a clear
message, as sketched below.
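For the error-out variant, a minimal sketch (assuming assert_virtiofs_config()
remains the place where this is checked) could be something like:
```
my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
die "no directory mapping for '$virtiofs->{dirid}' configured on this node\n"
    if !defined($dir_cfg);
```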
> push @$cmd, @$devices;
> push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
> push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
> @@ -5806,6 +5824,8 @@ sub vm_start_nolock {
> PVE::Tools::run_fork sub {
> PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
>
> + my $virtiofs_sockets = start_all_virtiofsd($conf, $vmid);
> +
> my $tpmpid;
> if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
> # start the TPM emulator so QEMU can connect on start
> @@ -5813,6 +5833,8 @@ sub vm_start_nolock {
> }
>
> my $exitcode = run_command($cmd, %run_params);
> + eval { PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets); };
> + log_warn("closing virtiofs sockets failed - $@") if $@;
> if ($exitcode) {
> if ($tpmpid) {
> warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
> @@ -6485,7 +6507,10 @@ sub check_mapping_access {
> } else {
> die "either 'host' or 'mapping' must be set.\n";
> }
> - }
> + } elsif ($opt =~ m/^virtiofs\d$/) {
> + my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> + $rpcenv->check_full($user, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
> + }
> }
> };
>
> diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
> index 18fd13ea..3588e0e1 100644
> --- a/PVE/QemuServer/Makefile
> +++ b/PVE/QemuServer/Makefile
> @@ -11,7 +11,8 @@ SOURCES=PCI.pm \
> CPUConfig.pm \
> CGroup.pm \
> Drive.pm \
> - QMPHelpers.pm
> + QMPHelpers.pm \
> + Virtiofs.pm
>
> .PHONY: install
> install: ${SOURCES}
> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index e5024cd2..1111410a 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -336,7 +336,7 @@ sub qemu_memdevices_list {
> }
>
> sub config {
> - my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
> + my ($conf, $vmid, $sockets, $cores, $hotplug, $virtiofs_enabled, $cmd, $machine_flags) = @_;
>
> my $memory = get_current_memory($conf->{memory});
> my $static_memory = 0;
> @@ -367,6 +367,9 @@ sub config {
>
> die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && !$conf->{numa};
>
> + die "Memory hotplug does not work in combination with virtio-fs.\n"
> + if $hotplug && $virtiofs_enabled;
note: I ran into this accidentally, as one of my VMs had memory hotplug
enabled before; works as expected.
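(for reference, this triggers as soon as the hotplug list contains `memory`,
e.g. `hotplug: disk,network,usb,memory`, while any `virtiofsN` entry is configured)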
> +
> if ($conf->{numa}) {
>
> my $numa_totalmemory = undef;
> @@ -379,7 +382,8 @@ sub config {
> my $numa_memory = $numa->{memory};
> $numa_totalmemory += $numa_memory;
>
> - my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
> + my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
> + my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
>
> # cpus
> my $cpulists = $numa->{cpus};
> @@ -404,7 +408,7 @@ sub config {
> }
>
> push @$cmd, '-object', $mem_object;
> - push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> + push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
> }
>
> die "total memory for NUMA nodes must be equal to vm static memory\n"
> @@ -418,15 +422,20 @@ sub config {
> die "host NUMA node$i doesn't exist\n"
> if !host_numanode_exists($i) && $conf->{hugepages};
>
> - my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
> - push @$cmd, '-object', $mem_object;
> -
> my $cpus = ($cores * $i);
> $cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
>
> - push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> + my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
> + my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
> + push @$cmd, '-object', $mem_object;
> + push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
> }
> }
> + } elsif ($virtiofs_enabled) {
> + # kvm: '-machine memory-backend' and '-numa memdev' properties are mutually exclusive
> + push @$cmd, '-object', 'memory-backend-memfd,id=virtiofs-mem'
> + .",size=$conf->{memory}M,share=on";
> + push @$machine_flags, 'memory-backend=virtiofs-mem';
> }
>
> if ($hotplug) {
> @@ -453,6 +462,8 @@ sub print_mem_object {
> my $path = hugepages_mount_path($hugepages_size);
>
> return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
> + } elsif ($id =~ m/^virtiofs-mem/) {
> + return "memory-backend-memfd,id=$id,size=${size}M,share=on";
> } else {
> return "memory-backend-ram,id=$id,size=${size}M";
> }
> diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
> new file mode 100644
> index 00000000..5b948d57
> --- /dev/null
> +++ b/PVE/QemuServer/Virtiofs.pm
> @@ -0,0 +1,209 @@
> +package PVE::QemuServer::Virtiofs;
> +
> +use strict;
> +use warnings;
> +
> +use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);
> +use IO::Socket::UNIX;
> +use POSIX;
> +use Socket qw(SOCK_STREAM);
> +
> +use PVE::JSONSchema qw(parse_property_string);
> +use PVE::Mapping::Dir;
> +use PVE::QemuServer::Helpers;
> +use PVE::RESTEnvironment qw(log_warn);
> +
> +use base qw(Exporter);
> +
> +our @EXPORT_OK = qw(
> +max_virtiofs
> +start_all_virtiofsd
> +);
> +
> +my $MAX_VIRTIOFS = 10;
> +my $socket_path_root = "/run/qemu-server/virtiofsd";
> +
> +my $virtiofs_fmt = {
> + 'dirid' => {
> + type => 'string',
> + default_key => 1,
> + description => "Mapping identifier of the directory mapping to be shared with the guest."
> + ." Also used as a mount tag inside the VM.",
> + format_description => 'mapping-id',
> + format => 'pve-configid',
> + },
> + 'cache' => {
> + type => 'string',
> + description => "The caching policy the file system should use (auto, always, metadata, never).",
> + enum => [qw(auto always metadata never)],
> + default => "auto",
> + optional => 1,
> + },
> + 'direct-io' => {
> + type => 'boolean',
> + description => "Honor the O_DIRECT flag passed down by guest applications.",
> + default => 0,
> + optional => 1,
> + },
> + writeback => {
> + type => 'boolean',
> + description => "Enable writeback cache. If enabled, writes may be cached in the guest until"
> + ." the file is closed or an fsync is performed.",
> + default => 0,
> + optional => 1,
> + },
> + 'expose-xattr' => {
> + type => 'boolean',
> + description => "Enable support for extended attributes for this mount.",
> + default => 0,
> + optional => 1,
> + },
> + 'expose-acl' => {
> + type => 'boolean',
> + description => "Enable support for POSIX ACLs (enabled ACL implies xattr) for this mount.",
> + default => 0,
> + optional => 1,
> + },
> +};
> +PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
> +
> +my $virtiofsdesc = {
> + optional => 1,
> + type => 'string', format => $virtiofs_fmt,
> + description => "Configuration for sharing a directory between host and guest using Virtio-fs.",
> +};
> +PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
> +
> +sub max_virtiofs {
> + return $MAX_VIRTIOFS;
> +}
> +
> +sub assert_virtiofs_config {
> + my ($ostype, $virtiofs) = @_;
> +
> + my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
This is affected by $dir_cfg being undef if there is no dir mapping on
the current node, as described in another reply.
> +
> + my $acl = $virtiofs->{'expose-acl'};
> + if ($acl && PVE::QemuServer::Helpers::windows_version($ostype)) {
> + die "Please disable ACLs for virtiofs on Windows VMs, otherwise"
> + ." the virtiofs shared directory cannot be mounted.\n";
note: Tested this, works as expected now.
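(triggered e.g. by something like `virtiofs0: somedir,expose-acl=1` on a VM
with a Windows ostype such as `win11`)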
> + }
> +
> + eval { PVE::Mapping::Dir::assert_valid($dir_cfg) };
> + die "directory mapping invalid: $@\n" if $@;
> +}
> +
> +sub config {
> + my ($conf, $vmid, $devices) = @_;
> +
> + for (my $i = 0; $i < max_virtiofs(); $i++) {
> + my $opt = "virtiofs$i";
> +
> + next if !$conf->{$opt};
> + my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +
> + assert_virtiofs_config($conf->{ostype}, $virtiofs);
> +
> + push @$devices, '-chardev', "socket,id=virtiofs$i,path=$socket_path_root/vm$vmid-fs$i";
> +
> + # queue-size is set 1024 because of bug with Windows guests:
> + # https://bugzilla.redhat.com/show_bug.cgi?id=1873088
> + # 1024 is also always used in the virtiofs documentations:
> + # https://gitlab.com/virtio-fs/virtiofsd#examples
> + push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
> + .",chardev=virtiofs$i,tag=$virtiofs->{dirid}";
> + }
> +}
> +
> +sub virtiofs_enabled {
> + my ($conf) = @_;
> +
> + my $virtiofs_enabled = 0;
> + for (my $i = 0; $i < max_virtiofs(); $i++) {
> + my $opt = "virtiofs$i";
> + next if !$conf->{$opt};
> + parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> + $virtiofs_enabled = 1;
> + }
> + return $virtiofs_enabled;
> +}
> +
> +sub start_all_virtiofsd {
> + my ($conf, $vmid) = @_;
> + my $virtiofs_sockets = [];
> + for (my $i = 0; $i < max_virtiofs(); $i++) {
> + my $opt = "virtiofs$i";
> +
> + next if !$conf->{$opt};
> + my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> +
> + my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
> + push @$virtiofs_sockets, $virtiofs_socket;
> + }
> + return $virtiofs_sockets;
> +}
> +
> +sub start_virtiofsd {
> + my ($vmid, $fsid, $virtiofs) = @_;
> +
> + mkdir $socket_path_root;
> + my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
> + unlink($socket_path);
> + my $socket = IO::Socket::UNIX->new(
> + Type => SOCK_STREAM,
> + Local => $socket_path,
> + Listen => 1,
> + ) or die "cannot create socket - $!\n";
> +
> + my $flags = fcntl($socket, F_GETFD, 0)
> + or die "failed to get file descriptor flags: $!\n";
> + fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
> + or die "failed to remove FD_CLOEXEC from file descriptor\n";
> +
> + my $dir_cfg = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
This would also be affected by $dir_cfg being undef if there is no dir
mapping on the current node.
> +
> + my $virtiofsd_bin = '/usr/libexec/virtiofsd';
> + my $fd = $socket->fileno();
> + my $path = $dir_cfg->{path};
> +
> + my $could_not_fork_err = "could not fork to start virtiofsd\n";
> + my $pid = fork();
> + if ($pid == 0) {
> + POSIX::setsid();
> + $0 = "task pve-vm$vmid-virtiofs$fsid";
> + my $pid2 = fork();
> + if ($pid2 == 0) {
> + my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
> + push @$cmd, '--xattr' if $virtiofs->{'expose-xattr'};
> + push @$cmd, '--posix-acl' if $virtiofs->{'expose-acl'};
> + push @$cmd, '--announce-submounts';
note: AFAICS --announce-submounts is the default for virtiofsd, but it's
safer to set it explicitly here anyway.
> + push @$cmd, '--allow-direct-io' if $virtiofs->{'direct-io'};
> + push @$cmd, '--cache='.$virtiofs->{cache} if $virtiofs->{cache};
> + push @$cmd, '--writeback' if $virtiofs->{'writeback'};
> + push @$cmd, '--syslog';
> + exec(@$cmd);
> + } elsif (!defined($pid2)) {
> + die $could_not_fork_err;
> + } else {
> + POSIX::_exit(0);
> + }
> + } elsif (!defined($pid)) {
> + die $could_not_fork_err;
> + } else {
> + waitpid($pid, 0);
> + }
> +
> + # return socket to keep it alive,
> + # so that QEMU will wait for virtiofsd to start
> + return $socket;
> +}
> +
> +sub close_sockets {
> + my @sockets = @_;
> + for my $socket (@sockets) {
> + shutdown($socket, 2);
> + close($socket);
> + }
> +}
> +
> +1;
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH docs v15 3/12] add doc section for the shared filesystem virtio-fs
2025-04-03 10:34 ` [pve-devel] [PATCH docs v15 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
@ 2025-04-04 12:20 ` Daniel Kral
0 siblings, 0 replies; 26+ messages in thread
From: Daniel Kral @ 2025-04-04 12:20 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 4/3/25 12:34, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v15:
> * added fstab entry example in the docs
> * added hyperlinks for websites
> * removed announce-submounts part
I second Lukas' comments about adding some words about the WebGUI and
about putting the options into a table, which would make the
documentation nicer, but that should definitely not be a blocker for
this patch series. The changes implemented for this version LGTM, so
consider this as:
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* Re: [pve-devel] [PATCH manager v15 09/12] ui: add edit window for dir mappings
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 09/12] ui: add edit window for dir mappings Markus Frank
@ 2025-04-04 12:21 ` Daniel Kral
0 siblings, 0 replies; 26+ messages in thread
From: Daniel Kral @ 2025-04-04 12:21 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
One comment inline.
On 4/3/25 12:34, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v15:
> * removed announce-submounts
>
> www/manager6/Makefile | 1 +
> www/manager6/window/DirMapEdit.js | 202 ++++++++++++++++++++++++++++++
> 2 files changed, 203 insertions(+)
> create mode 100644 www/manager6/window/DirMapEdit.js
>
> diff --git a/www/manager6/Makefile b/www/manager6/Makefile
> index c94a5cdf..4b8677e3 100644
> --- a/www/manager6/Makefile
> +++ b/www/manager6/Makefile
> @@ -138,6 +138,7 @@ JSSRC= \
> window/TreeSettingsEdit.js \
> window/PCIMapEdit.js \
> window/USBMapEdit.js \
> + window/DirMapEdit.js \
> window/GuestImport.js \
> ha/Fencing.js \
> ha/GroupEdit.js \
> diff --git a/www/manager6/window/DirMapEdit.js b/www/manager6/window/DirMapEdit.js
> new file mode 100644
> index 00000000..b635f34d
> --- /dev/null
> +++ b/www/manager6/window/DirMapEdit.js
> @@ -0,0 +1,202 @@
> +Ext.define('PVE.window.DirMapEditWindow', {
> + extend: 'Proxmox.window.Edit',
> +
> + mixins: ['Proxmox.Mixin.CBind'],
> +
> + cbindData: function(initialConfig) {
> + let me = this;
> + me.isCreate = !me.name;
> + me.method = me.isCreate ? 'POST' : 'PUT';
> + me.hideMapping = !!me.entryOnly;
> + me.hideComment = me.name && !me.entryOnly;
> + me.hideNodeSelector = me.nodename || me.entryOnly;
> + me.hideNode = !me.nodename || !me.hideNodeSelector;
> + return {
> + name: me.name,
> + nodename: me.nodename,
> + };
> + },
> +
> + submitUrl: function(_url, data) {
> + let me = this;
> + let name = me.isCreate ? '' : me.name;
> + return `/cluster/mapping/dir/${name}`;
> + },
> +
> + title: gettext('Add Dir mapping'),
> +
> + onlineHelp: 'resource_mapping',
> +
> + method: 'POST',
> +
> + controller: {
> + xclass: 'Ext.app.ViewController',
> +
> + onGetValues: function(values) {
> + let me = this;
> + let view = me.getView();
> + values.node ??= view.nodename;
> +
> + let name = values.name;
> + let description = values.description;
> + let deletes = values.delete;
> +
> + delete values.description;
> + delete values.name;
> + delete values.delete;
> +
> + let map = [];
> + if (me.originalMap) {
> + map = PVE.Parser.filterPropertyStringList(me.originalMap, (e) => e.node !== values.node);
> + }
> + if (values.path) {
> + // TODO: Remove this when property string supports quotation of properties
> + if (!/^\/[^;,=()]+/.test(values.path)) {
> + let errMsg = 'these symbols are currently not allowed in path: ;,=()';
This error message duplicates the one in verify_path in the backend and
should also be capitalized at the start. Couldn't we verify this in the API
handler and return the error from there, so this code is not duplicated
(especially since it could get out of sync)?
On another note, this should also tell the user that the path must be an
absolute path (the regex enforces it, but the error message doesn't say so).
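Just to illustrate what I mean, something along these lines in the backend
verify function would be enough, and the UI could then simply show the
returned error (only a sketch; the actual verify_path signature and wording
will likely differ):

    # reject anything that is not an absolute path or that contains ';,=()'
    sub verify_path {
        my ($path, $noerr) = @_;
        if ($path !~ m!^/[^;,=()]+$!) {
            return undef if $noerr;
            die "value must be an absolute path and must not contain any of ';,=()'\n";
        }
        return $path;
    }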
> + Ext.Msg.alert(gettext('Error'), errMsg);
> + throw errMsg;
> + }
> + map.push(PVE.Parser.printPropertyString(values));
> + }
> + values = { map };
> +
> + if (description) {
> + values.description = description;
> + }
> + if (deletes && !view.isCreate) {
> + values.delete = deletes;
> + }
> + if (view.isCreate) {
> + values.id = name;
> + }
> +
> + return values;
> + },
> +
> + onSetValues: function(values) {
> + let me = this;
> + let view = me.getView();
> + me.originalMap = [...values.map];
> + let configuredNodes = [];
> + PVE.Parser.filterPropertyStringList(values.map, (e) => {
> + configuredNodes.push(e.node);
> + if (e.node === view.nodename) {
> + values = e;
> + }
> + return false;
> + });
> +
> + me.lookup('nodeselector').disallowedNodes = configuredNodes;
> +
> + return values;
> + },
> +
> + init: function(view) {
> + let me = this;
> +
> + if (!view.nodename) {
> + //throw "no nodename given";
> + }
> + },
> + },
> +
> + items: [
> + {
> + xtype: 'inputpanel',
> + onGetValues: function(values) {
> + return this.up('window').getController().onGetValues(values);
> + },
> +
> + onSetValues: function(values) {
> + return this.up('window').getController().onSetValues(values);
> + },
> +
> + columnT: [
> + {
> + xtype: 'displayfield',
> + reference: 'directory-hint',
> + columnWidth: 1,
> + value: 'Make sure the directory exists.',
> + cbind: {
> + disabled: '{hideMapping}',
> + hidden: '{hideMapping}',
> + },
> + userCls: 'pmx-hint',
> + },
> + ],
> +
> + column1: [
> + {
> + xtype: 'pmxDisplayEditField',
> + fieldLabel: gettext('Name'),
> + cbind: {
> + editable: '{!name}',
> + value: '{name}',
> + submitValue: '{isCreate}',
> + },
> + name: 'name',
> + allowBlank: false,
> + },
> + {
> + xtype: 'pveNodeSelector',
> + reference: 'nodeselector',
> + fieldLabel: gettext('Node'),
> + name: 'node',
> + cbind: {
> + disabled: '{hideNodeSelector}',
> + hidden: '{hideNodeSelector}',
> + },
> + allowBlank: false,
> + },
> + ],
> +
> + column2: [
> + {
> + xtype: 'fieldcontainer',
> + defaultType: 'radiofield',
> + layout: 'fit',
> + cbind: {
> + disabled: '{hideMapping}',
> + hidden: '{hideMapping}',
> + },
> + items: [
> + {
> + xtype: 'textfield',
> + name: 'path',
> + reference: 'path',
> + value: '',
> + emptyText: gettext('/some/path'),
> + cbind: {
> + nodename: '{nodename}',
> + disabled: '{hideMapping}',
> + },
> + allowBlank: false,
> + fieldLabel: gettext('Path'),
> + },
> + ],
> + },
> + ],
> +
> + columnB: [
> + {
> + xtype: 'fieldcontainer',
> + defaultType: 'radiofield',
> + layout: 'fit',
> + cbind: {
> + disabled: '{hideComment}',
> + hidden: '{hideComment}',
> + },
> + items: [
> + {
> + xtype: 'proxmoxtextfield',
> + fieldLabel: gettext('Comment'),
> + submitValue: true,
> + name: 'description',
> + deleteEmpty: true,
> + },
> + ],
> + },
> + ],
> + },
> + ],
> +});
* Re: [pve-devel] [PATCH manager v15 10/12] ui: add resource mapping view for directories
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 10/12] ui: add resource mapping view for directories Markus Frank
@ 2025-04-04 12:21 ` Daniel Kral
0 siblings, 0 replies; 26+ messages in thread
From: Daniel Kral @ 2025-04-04 12:21 UTC (permalink / raw)
To: pve-devel
On 4/3/25 12:34, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v15:
> * removed announce-submounts
Changes LGTM here, so consider this as:
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
* Re: [pve-devel] [PATCH manager v15 12/12] ui: add options to add virtio-fs to qemu config
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
@ 2025-04-04 12:21 ` Daniel Kral
0 siblings, 0 replies; 26+ messages in thread
From: Daniel Kral @ 2025-04-04 12:21 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 4/3/25 12:34, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> v15:
> * moved all options except dirid to an advanced tab
> * improved field labels
Great, it looks better with fewer options up front when the 'Advanced'
checkbox is off. So consider this as:
Reviewed-by: Daniel Kral <d.kral@proxmox.com>
Tested-by: Daniel Kral <d.kral@proxmox.com>
* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
` (14 preceding siblings ...)
2025-04-04 8:27 ` Lukas Wagner
@ 2025-04-04 12:22 ` Daniel Kral
15 siblings, 0 replies; 26+ messages in thread
From: Daniel Kral @ 2025-04-04 12:22 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 4/3/25 12:34, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
>
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
>
> I did not get virtiofsd to run with run_command without creating
> zombie processes after shutdown. So I replaced run_command with exec
> for now. Maybe someone can find out why this happens.
>
> changes in v15:
> * removed announce-submounts option altogether and always set it for
> virtiofsd.
> * added die if both memory hotplug and virtiofs are enabled
> * added fstab entry example in the docs
> * assert_valid_map_list: only run assert_valid for the mappings on the
> current node
> * see individual patches
I gave this another look and spin with the same setup as yesterday (a single
node and the three mentioned Debian/Fedora/Win11 VMs) plus a 3-node test
cluster to also test the remaining parts, i.e. migration in a cluster
setting.
=== Migration tests ===
Tested that offline migration and live migration are blocked correctly if
the resource is not available on the target node, i.e. if there is no dir
mapping on that node.
If I try to live migrate a VM from one node to another where the target
does not have a dir mapping defined, I get the following precondition
errors in the WebGUI:
- Can't migrate VM with local resources: virtiofs
- Can't migrate running VM with mapped resources: virtiofs0, virtiofs1
- Mapped Resources (virtiofs1) not available on selected target
And on the CLI:
- can't migrate VM which uses local devices: virtiofs
If I try to offline migrate a VM from one node to another where the target
does not have a dir mapping defined, I get the following precondition error
in the WebGUI:
- Mapped Resources (virtiofs1) not available on selected target
And on the CLI:
- can't migrate to 'node2': missing mapped devices virtiofs1
=== Config tests ===
Apart from that, I ran the same xattr / POSIX ACL tests as before and they
worked just as they did then. The existence of the dir mapping path on the
current node is also asserted correctly, but in a multi-node setting this
could still be improved; see my comments inline.
There were also a few warnings about undefined variables and some warning
messages that could be improved, which I likewise noted in my comments inline.
* Re: [pve-devel] [PATCH cluster v15 1/12] add mapping/dir.cfg for resource mapping
2025-04-03 10:34 ` [pve-devel] [PATCH cluster v15 1/12] add mapping/dir.cfg for resource mapping Markus Frank
@ 2025-04-04 13:46 ` Fiona Ebner
0 siblings, 0 replies; 26+ messages in thread
From: Fiona Ebner @ 2025-04-04 13:46 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 03.04.25 at 12:34, Markus Frank wrote:
> Add it to both the perl side (PVE/Cluster.pm) and pmxcfs side
> (status.c).
> This dir.cfg is used to map directory IDs to paths on selected hosts.
>
> Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
> Reviewed-by: Daniel Kral <d.kral@proxmox.com>
> Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
> Tested-by: Daniel Kral <d.kral@proxmox.com>
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
Should there be another version: please note that your Signed-off-by
trailer should stay the first trailer. Trailers should be ordered
chronologically/causally: you first signed it off, then it was
reviewed/tested.
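For example, for this patch the trailers would then read (keeping the
relative order of the Reviewed-by/Tested-by trailers from above):

    Signed-off-by: Markus Frank <m.frank@proxmox.com>
    Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
    Reviewed-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
    Reviewed-by: Daniel Kral <d.kral@proxmox.com>
    Tested-by: Laurențiu Leahu-Vlăducu <l.leahu-vladucu@proxmox.com>
    Tested-by: Daniel Kral <d.kral@proxmox.com>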
Similar for some other patches in the series.
* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs
2025-04-04 8:27 ` Lukas Wagner
@ 2025-04-04 15:08 ` Markus Frank
0 siblings, 0 replies; 26+ messages in thread
From: Markus Frank @ 2025-04-04 15:08 UTC (permalink / raw)
To: Lukas Wagner, Proxmox VE development discussion
On 2025-04-04 10:27, Lukas Wagner wrote:
> On 2025-04-03 12:34, Markus Frank wrote:
>> Virtio-fs is a shared file system that enables sharing a directory
>> between host and guest VMs. It takes advantage of the locality of
>> virtual machines and the hypervisor to get a higher throughput than
>> the 9p remote file system protocol.
>>
>
> Some thoughts - no blockers though, can easily be done in follow-ups:
>
> - Thinking through my potential use-cases for a feature like this, I think it would be pretty nice
> to expose the 'readonly' flag [1] under 'Advanced', the same way you did it with e.g. "Allow Direct IO"
The readonly flag does not exist in virtiofsd 1.10.1; it could be a feature for PVE 9.0.
>
> - I'd remove the "Make sure the directory exists." banner in the "Add Dir Mapping" dialog.
> People get an error message anyway when they try to create a mapping and the directory
> does not exist.
I would keep it, as we currently cannot check whether the directory exists on a different node.
>
> - I tried this feature out without reading any documentation first, basically to check for any obvious
> UX issues that are not as clearly noticeable if you know how everything works.
> Trying to add a filesystem passthrough, my first instinct was to go to the VM's Hardware
> settings. I was presented with a new "Virtiofs" option under "Add".
> Maybe this could be called "Directory Passthrough" or "Filesystem Passthrough" to better
> convey what it does? Users might not be aware what "Virtiofs" is if they haven't read the docs.
I think you should read the docs before using Virtiofs because of its current limitations.
"Directory Passthrough" may sound like it would just work out of the box, like bind mounts do.
Also, there might be different "Directory Passthrough" technologies in the future.
>
> Inside the new "Add" dialog I was then stuck at first. It allowed me to select a Directory ID,
> but I haven't created one yet. As a user it was not completely clear what the next step would be
> at this point.
> Maybe this could show a small hint about where to create
> new mappings? e.g. "Directory Mappings can be managed under Datacenter → Resource Mapping" or alike.
Good idea. I will have it in v16.
>
> - In the documentation it is not really mentioned that one is able to create a
> mapping from the GUI. The text reads like you have to use the `pvesh` command to create it.
It is described in the Resource Mapping section that is linked.
>
> - Maybe some table for the available options (e.g. direct io, cache, etc.) with the effects
> and possible values would be easier to comprehend than embedding it into free-flowing text.
>
> As I said, no blockers from my side though :) Great work!
>
> [1] https://gitlab.com/virtio-fs/virtiofsd#faq
>
end of thread, other threads:[~2025-04-04 15:09 UTC | newest]
Thread overview: 26+ messages
2025-04-03 10:34 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH cluster v15 1/12] add mapping/dir.cfg for resource mapping Markus Frank
2025-04-04 13:46 ` Fiona Ebner
2025-04-03 10:34 ` [pve-devel] [PATCH guest-common v15 2/12] add dir mapping section config Markus Frank
2025-04-04 12:17 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH docs v15 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
2025-04-04 12:20 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 4/12] control: add virtiofsd as runtime dependency for qemu-server Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 5/7] fix #1027: virtio-fs support Markus Frank
2025-04-04 12:19 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 6/12] migration: check_local_resources for virtiofs Markus Frank
2025-04-04 12:18 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH qemu-server v15 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 08/12] api: add resource map api endpoints for directories Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 09/12] ui: add edit window for dir mappings Markus Frank
2025-04-04 12:21 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 10/12] ui: add resource mapping view for directories Markus Frank
2025-04-04 12:21 ` Daniel Kral
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 11/12] ui: form: add selector for directory mappings Markus Frank
2025-04-03 10:34 ` [pve-devel] [PATCH manager v15 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
2025-04-04 12:21 ` Daniel Kral
2025-04-03 11:18 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v15 0/12] virtiofs Markus Frank
2025-04-03 14:11 ` Lukas Wagner
2025-04-04 8:27 ` Lukas Wagner
2025-04-04 15:08 ` Markus Frank
2025-04-04 12:22 ` Daniel Kral