* [pve-devel] [PATCH cluster v13 1/12] add mapping/dir.cfg for resource mapping
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
@ 2025-01-22 10:08 ` Markus Frank
2025-01-22 10:08 ` [pve-devel] [PATCH guest-common v13 2/12] add dir mapping section config Markus Frank
` (11 subsequent siblings)
12 siblings, 0 replies; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
Add it to both the perl side (PVE/Cluster.pm) and pmxcfs side
(status.c).
This dir.cfg is used to map directory IDs to paths on selected hosts.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
---
src/PVE/Cluster.pm | 1 +
src/pmxcfs/status.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/src/PVE/Cluster.pm b/src/PVE/Cluster.pm
index e0e3ee9..b9311c7 100644
--- a/src/PVE/Cluster.pm
+++ b/src/PVE/Cluster.pm
@@ -84,6 +84,7 @@ my $observed = {
'sdn/.running-config' => 1,
'virtual-guest/cpu-models.conf' => 1,
'virtual-guest/profiles.cfg' => 1,
+ 'mapping/dir.cfg' => 1,
'mapping/pci.cfg' => 1,
'mapping/usb.cfg' => 1,
};
diff --git a/src/pmxcfs/status.c b/src/pmxcfs/status.c
index ff5fcc4..39b17f4 100644
--- a/src/pmxcfs/status.c
+++ b/src/pmxcfs/status.c
@@ -114,6 +114,7 @@ static memdb_change_t memdb_change_array[] = {
{ .path = "virtual-guest/cpu-models.conf" },
{ .path = "virtual-guest/profiles.cfg" },
{ .path = "firewall/cluster.fw" },
+ { .path = "mapping/dir.cfg" },
{ .path = "mapping/pci.cfg" },
{ .path = "mapping/usb.cfg" },
};
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
* [pve-devel] [PATCH guest-common v13 2/12] add dir mapping section config
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
2025-01-22 10:08 ` [pve-devel] [PATCH cluster v13 1/12] add mapping/dir.cfg for resource mapping Markus Frank
@ 2025-01-22 10:08 ` Markus Frank
2025-02-18 12:35 ` Fiona Ebner
2025-02-19 10:06 ` Fiona Ebner
2025-01-22 10:08 ` [pve-devel] [PATCH docs v13 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
` (10 subsequent siblings)
12 siblings, 2 replies; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
Adds a config file for directories by using a 'map' property string for
each node mapping.
Next to node & path, there is the optional announce-submounts parameter
which forces virtiofsd to report a different device number for each
submount it encounters. Without it, duplicates may be created because
inode IDs are only unique on a single filesystem.
example config:
```
some-dir-id
map node=node1,path=/mnt/share/,announce-submounts=1
map node=node2,path=/mnt/share/,
```
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
src/Makefile | 1 +
src/PVE/Mapping/Dir.pm | 185 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 186 insertions(+)
create mode 100644 src/PVE/Mapping/Dir.pm
diff --git a/src/Makefile b/src/Makefile
index cbc40c1..030e7f7 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -15,6 +15,7 @@ install: PVE
install -m 0644 PVE/StorageTunnel.pm ${PERL5DIR}/PVE/
install -m 0644 PVE/Tunnel.pm ${PERL5DIR}/PVE/
install -d ${PERL5DIR}/PVE/Mapping
+ install -m 0644 PVE/Mapping/Dir.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/PCI.pm ${PERL5DIR}/PVE/Mapping/
install -m 0644 PVE/Mapping/USB.pm ${PERL5DIR}/PVE/Mapping/
install -d ${PERL5DIR}/PVE/VZDump
diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
new file mode 100644
index 0000000..e6e2237
--- /dev/null
+++ b/src/PVE/Mapping/Dir.pm
@@ -0,0 +1,185 @@
+package PVE::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file cfs_write_file);
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::SectionConfig;
+
+use base qw(PVE::SectionConfig);
+
+my $FILENAME = 'mapping/dir.cfg';
+
+cfs_register_file($FILENAME,
+ sub { __PACKAGE__->parse_config(@_); },
+ sub { __PACKAGE__->write_config(@_); });
+
+
+# so we don't have to repeat the type every time
+sub parse_section_header {
+ my ($class, $line) = @_;
+
+ if ($line =~ m/^(\S+)\s*$/) {
+ my $id = $1;
+ my $errmsg = undef; # set if you want to skip whole section
+ eval { PVE::JSONSchema::pve_verify_configid($id) };
+ $errmsg = $@ if $@;
+ my $config = {}; # to return additional attributes
+ return ('dir', $id, $errmsg, $config);
+ }
+ return undef;
+}
+
+sub format_section_header {
+ my ($class, $type, $sectionId, $scfg, $done_hash) = @_;
+
+ return "$sectionId\n";
+}
+
+sub type {
+ return 'dir';
+}
+
+my $map_fmt = {
+ node => get_standard_option('pve-node'),
+ path => {
+ description => "Absolute directory path that should be shared with the guest.",
+ type => 'string',
+ format => 'pve-storage-path',
+ },
+ 'announce-submounts' => {
+ type => 'boolean',
+ description => "Announce that the directory contains other mounted file systems."
+ ." If this is not set and multiple file systems are mounted, the guest may"
+ ." encounter duplicates due to file system specific inode IDs.",
+ optional => 1,
+ default => 1,
+ },
+ description => {
+ description => "Description of the node specific directory.",
+ type => 'string',
+ optional => 1,
+ maxLength => 4096,
+ },
+};
+
+my $defaultData = {
+ propertyList => {
+ id => {
+ type => 'string',
+ description => "The ID of the directory",
+ format => 'pve-configid',
+ },
+ description => {
+ type => 'string',
+ description => "Description of the directory",
+ optional => 1,
+ maxLength => 4096,
+ },
+ map => {
+ type => 'array',
+ description => 'A list of maps for the cluster nodes.',
+ optional => 1,
+ items => {
+ type => 'string',
+ format => $map_fmt,
+ },
+ },
+ },
+};
+
+sub private {
+ return $defaultData;
+}
+
+sub map_fmt {
+ return $map_fmt;
+}
+
+sub options {
+ return {
+ description => { optional => 1 },
+ map => {},
+ };
+}
+
+sub assert_valid {
+ my ($dir_cfg) = @_;
+
+ my $path = $dir_cfg->{path};
+
+ if (! -e $path) {
+ die "Path $path does not exist\n";
+ } elsif (! -d $path) {
+ die "Path $path exists, but is not a directory\n";
+ }
+
+ return 1;
+};
+
+sub assert_no_duplicate_node {
+ my ($map_list) = @_;
+
+ my %count;
+ for my $map (@$map_list) {
+ my $entry = parse_property_string($map_fmt, $map);
+ $count{$entry->{node}}++;
+ }
+ for my $node (keys %count) {
+ if ($count{$node} > 1) {
+ die "Node '$node' is specified $count{$node} times.\n";
+ }
+ }
+}
+
+sub config {
+ return cfs_read_file($FILENAME);
+}
+
+sub lock_dir_config {
+ my ($code, $errmsg) = @_;
+
+ cfs_lock_file($FILENAME, undef, $code);
+ if (my $err = $@) {
+ $errmsg ? die "$errmsg: $err" : die $err;
+ }
+}
+
+sub write_dir_config {
+ my ($cfg) = @_;
+
+ cfs_write_file($FILENAME, $cfg);
+}
+
+sub find_on_current_node {
+ my ($id) = @_;
+
+ my $cfg = config();
+ my $node = PVE::INotify::nodename();
+
+ return get_node_mapping($cfg, $id, $node);
+}
+
+sub get_node_mapping {
+ my ($cfg, $id, $nodename) = @_;
+
+ return undef if !defined($cfg->{ids}->{$id});
+
+ my $res = [];
+ my $mapping_list = $cfg->{ids}->{$id}->{map};
+ for my $map (@{$mapping_list}) {
+ my $entry = eval { parse_property_string($map_fmt, $map) };
+ warn $@ if $@;
+ if ($entry && $entry->{node} eq $nodename) {
+ push $res->@*, $entry;
+ }
+ }
+ return $res;
+}
+
+PVE::Mapping::Dir->register();
+PVE::Mapping::Dir->init();
+
+1;
--
2.39.5
* Re: [pve-devel] [PATCH guest-common v13 2/12] add dir mapping section config
2025-01-22 10:08 ` [pve-devel] [PATCH guest-common v13 2/12] add dir mapping section config Markus Frank
@ 2025-02-18 12:35 ` Fiona Ebner
2025-02-19 10:06 ` Fiona Ebner
1 sibling, 0 replies; 25+ messages in thread
From: Fiona Ebner @ 2025-02-18 12:35 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 22.01.25 at 11:08, Markus Frank wrote:
> Adds a config file for directories by using a 'map' property string for
> each node mapping.
>
> Next to node & path, there is the optional announce-submounts parameter
> which forces virtiofsd to report a different device number for each
> submount it encounters. Without it, duplicates may be created because
> inode IDs are only unique on a single filesystem.
>
> example config:
> ```
> some-dir-id
> map node=node1,path=/mnt/share/,announce-submounts=1
> map node=node2,path=/mnt/share/,
> ```
>
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
Just a few small comments; if there is a v14, they should be
addressed. Otherwise:
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> +my $map_fmt = {
> + node => get_standard_option('pve-node'),
> + path => {
> + description => "Absolute directory path that should be shared with the guest.",
> + type => 'string',
> + format => 'pve-storage-path',
> + },
> + 'announce-submounts' => {
> + type => 'boolean',
> + description => "Announce that the directory contains other mounted file systems."
> + ." If this is not set and multiple file systems are mounted, the guest may"
> + ." encounter duplicates due to file system specific inode IDs.",
Style nit: wrong indentation for the two lines above
> + optional => 1,
> + default => 1,
> + },
> + description => {
> + description => "Description of the node specific directory.",
> + type => 'string',
> + optional => 1,
> + maxLength => 4096,
> + },
> +};
> +
> +my $defaultData = {
> + propertyList => {
> + id => {
> + type => 'string',
> + description => "The ID of the directory",
Nit: I'd clarify slightly more with "of the directory mapping"
> + format => 'pve-configid',
> + },
> + description => {
> + type => 'string',
> + description => "Description of the directory",
Nit: I'd clarify slightly more with "of the directory mapping"
> + optional => 1,
> + maxLength => 4096,
> + },
> + map => {
> + type => 'array',
> + description => 'A list of maps for the cluster nodes.',
> + optional => 1,
> + items => {
> + type => 'string',
> + format => $map_fmt,
> + },
> + },
> + },
> +};
> +
> +sub private {
> + return $defaultData;
> +}
> +
> +sub map_fmt {
> + return $map_fmt;
> +}
The map_fmt() subroutine is never called from anywhere or am I missing
something?
> +sub write_dir_config {
> + my ($cfg) = @_;
> +
> + cfs_write_file($FILENAME, $cfg);
Rather orthogonal to the series, but since this is a new configuration
file, should we start out with it always being UTF-8? I sent an RFC that
allows registering such configuration files:
https://lore.proxmox.com/pve-devel/20250218123006.61691-1-f.ebner@proxmox.com/T/#t
If some proposal for registering UTF-8 configs is accepted before this
series lands, it would be nice to opt into it. But it's not a blocker
and could still be done when applying if everything else is ready.
> +}
> +
> +sub find_on_current_node {
> + my ($id) = @_;
> +
> + my $cfg = config();
> + my $node = PVE::INotify::nodename();
> +
> + return get_node_mapping($cfg, $id, $node);
> +}
* Re: [pve-devel] [PATCH guest-common v13 2/12] add dir mapping section config
2025-01-22 10:08 ` [pve-devel] [PATCH guest-common v13 2/12] add dir mapping section config Markus Frank
2025-02-18 12:35 ` Fiona Ebner
@ 2025-02-19 10:06 ` Fiona Ebner
2025-02-19 17:00 ` Thomas Lamprecht
1 sibling, 1 reply; 25+ messages in thread
From: Fiona Ebner @ 2025-02-19 10:06 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 22.01.25 at 11:08, Markus Frank wrote:
> +my $map_fmt = {
> + node => get_standard_option('pve-node'),
> + path => {
> + description => "Absolute directory path that should be shared with the guest.",
> + type => 'string',
> + format => 'pve-storage-path',
I now noticed that paths with commas do not work for the mapping. They
cannot be passed along, since the API endpoint requires the full mapping
as a property string and we have no way of escaping commas AFAIK. To
make it work, we could require paths to be (base64?) encoded when passed
via API. We might also want to think about writing them encoded to the
config file since it's also saved as the property string there.
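To make the escaping problem concrete, here is a minimal sketch of why a naive split tears the path apart, and how base64 encoding would sidestep it (the `path-b64` key is purely hypothetical, not an existing PVE format):

```perl
use strict;
use warnings;
use MIME::Base64 qw(encode_base64 decode_base64);

my $path = '/mnt/share,with,commas';

# A property string is split on ',' before the key=value pairs are
# parsed, so an unescaped comma in the path corrupts the mapping:
my @pairs = split(/,/, "node=node1,path=$path");
# @pairs is now ("node=node1", "path=/mnt/share", "with", "commas")

# Hypothetical workaround: base64-encode the path when passing it via
# the API, so the encoded value cannot contain a comma.
my $encoded = encode_base64($path, '');    # '' suppresses the newline
my $prop = "node=node1,path-b64=$encoded";
my ($b64) = $prop =~ /(?:^|,)path-b64=([^,]+)/;
my $decoded = decode_base64($b64);
print "$decoded\n";    # original path round-trips, commas intact
```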
If we do not want to support such paths, I think we should restrict the
allowed characters, e.g. our storage plugin already has a
$SAFE_CHAR_WITH_WHITESPACE_CLASS_RE that could be used to register a
restricted format (if we don't already have one). Doesn't support
anything non-ASCII either though. And it does support '=' which causes
issues in the UI (seems to work in the backend though).
@other devs: opinions?
* Re: [pve-devel] [PATCH guest-common v13 2/12] add dir mapping section config
2025-02-19 10:06 ` Fiona Ebner
@ 2025-02-19 17:00 ` Thomas Lamprecht
0 siblings, 0 replies; 25+ messages in thread
From: Thomas Lamprecht @ 2025-02-19 17:00 UTC (permalink / raw)
To: Proxmox VE development discussion, Fiona Ebner, Markus Frank
Cc: Wolfgang Bumiller
On 19.02.25 at 11:06, Fiona Ebner wrote:
> On 22.01.25 at 11:08, Markus Frank wrote:
>> +my $map_fmt = {
>> + node => get_standard_option('pve-node'),
>> + path => {
>> + description => "Absolute directory path that should be shared with the guest.",
>> + type => 'string',
>> + format => 'pve-storage-path',
>
> I now noticed that paths with commas do not work for the mapping. They
> cannot be passed along, since the API endpoint requires the full mapping
> as a property string and we have no way of escaping commas AFAIK. To
> make it work, we could require paths to be (base64?) encoded when passed
> via API. We might also want to think about writing them encoded to the
> config file since it's also saved as the property string there.
>
> If we do not want to support such paths, I think we should restrict the
> allowed characters, e.g. our storage plugin already has a
> $SAFE_CHAR_WITH_WHITESPACE_CLASS_RE that could be used to register a
> restricted format (if we don't already have one). Doesn't support
> anything non-ASCII either though. And it does support '=' which causes
> issues in the UI (seems to work in the backend though).
>
> @other devs: opinions?
We have quoting for property strings to handle such cases in the Rust
code bases; I forgot how far we got with adding that support on the Perl
side (CC'ing Wolfgang for that).
Besides that, properties that accept paths should ideally be quite
flexible. We can still start out with a more restricted format if that
makes development easier (for now), to avoid scope creep and "ugly"
encodings that make editing configs directly much harder.
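As an illustration, such quoting in dir.cfg could look something like the following (hypothetical syntax; the Perl property-string parser does not support this yet):

```
some-dir-id
	map node=node1,path="/mnt/share,with,commas"
```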
* [pve-devel] [PATCH docs v13 3/12] add doc section for the shared filesystem virtio-fs
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
2025-01-22 10:08 ` [pve-devel] [PATCH cluster v13 1/12] add mapping/dir.cfg for resource mapping Markus Frank
2025-01-22 10:08 ` [pve-devel] [PATCH guest-common v13 2/12] add dir mapping section config Markus Frank
@ 2025-01-22 10:08 ` Markus Frank
2025-02-19 11:49 ` Fiona Ebner
2025-01-22 10:08 ` [pve-devel] [PATCH qemu-server v13 4/12] control: add virtiofsd as runtime dependency for qemu-server Markus Frank
` (9 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
qm.adoc | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 90 insertions(+), 2 deletions(-)
diff --git a/qm.adoc b/qm.adoc
index 4bb8f2c..5ad79c1 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1202,6 +1202,93 @@ recommended to always use a limiter to avoid guests using too many host
resources. If desired, a value of '0' for `max_bytes` can be used to disable
all limits.
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared file system designed for virtual environments. It allows
+to share a directory tree available on the host by mounting it within VMs. It
+does not use the network stack and aims to offer similar performance and
+semantics as the source file system.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background. This happens automatically in {pve} when
+starting a related VM.
+
+Linux VMs with kernel >=5.4 support virtio-fs by default.
+
+There is a guide available on how to utilize virtio-fs in Windows VMs.
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+Known Limitations
+^^^^^^^^^^^^^^^^^
+
+* If Virtiofsd should crash, its mount point will hang in the VM until the VM
+is completely stopped.
+* Virtiofsd not responding may result in NFS-like hanging access in the VM.
+* Memory hotplug does not work in combination with virtio-fs (also results in
+hanging access).
+* Memory related features such as live migration, snapshots, and hibernate are
+not available with virtio-fs devices.
+* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
+otherwise the virtio-fs device will not be visible within the VMs.
+
+Add Mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping for a shared directory, you can use the API directly with
+`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
+
+----
+pvesh create /cluster/mapping/dir --id dir1 \
+ --map node=node1,path=/path/to/share1 \
+ --map node=node2,path=/path/to/share2,submounts=1 \
+----
+
+Set `announce-submounts` to `1` if multiple filesystems are mounted in a shared
+directory, to tell the guest which directories are mount points to prevent data
+loss/corruption. With `announce-submounts`, virtiofsd reports a different device
+number for each submount it encounters. Without it, duplicates may be created
+because inode IDs are only unique on a single filesystem.
+
+Add virtio-fs to a VM
+^^^^^^^^^^^^^^^^^^^^^
+
+To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
+anything between 0 and 9) to the VM config and use a directory ID (dirid) that
+has been configured in the resource mapping. Additionally, you can set the
+`cache` option to either `always`, `never`, or `auto` (default: `auto`),
+depending on your requirements. How the different caching modes behave can be
+read at https://lwn.net/Articles/774495/ under the "Caching Modes" section. To
+enable writeback cache set `writeback` to `1`.
+
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
+parameter to `1` (default: `0`). This will degrade performance, but is useful if
+applications do their own caching.
+
+The `expose-acl` parameter automatically implies `expose-xattr`, that is, it
+makes no difference if you set `expose-xattr` to `0` if `expose-acl` is set to
+`1`.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,cache=never,expose-xattr=1
+qm set <vmid> -virtiofs2 <dirid>,expose-acl=1,writeback=1
+----
+
+To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
+following command inside the guest:
+
+----
+mount -t virtiofs <mount tag> <mount point>
+----
+
+The dirid associated with the path on the current node is also used as the mount
+tag (name used to mount the device on the guest).
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
[[qm_bootorder]]
Device Boot Order
~~~~~~~~~~~~~~~~~
@@ -1885,8 +1972,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
[thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently either `pci`, `usb` or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
Note that the options must include a map property with all identifying
properties of that hardware, so that it's possible to verify the hardware did
--
2.39.5
* Re: [pve-devel] [PATCH docs v13 3/12] add doc section for the shared filesystem virtio-fs
2025-01-22 10:08 ` [pve-devel] [PATCH docs v13 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
@ 2025-02-19 11:49 ` Fiona Ebner
0 siblings, 0 replies; 25+ messages in thread
From: Fiona Ebner @ 2025-02-19 11:49 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 22.01.25 at 11:08, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
> ---
> qm.adoc | 92 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 90 insertions(+), 2 deletions(-)
>
> diff --git a/qm.adoc b/qm.adoc
> index 4bb8f2c..5ad79c1 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -1202,6 +1202,93 @@ recommended to always use a limiter to avoid guests using too many host
> resources. If desired, a value of '0' for `max_bytes` can be used to disable
> all limits.
>
> +[[qm_virtiofs]]
> +Virtio-fs
> +~~~~~~~~~
> +
> +Virtio-fs is a shared file system designed for virtual environments. It allows
> +to share a directory tree available on the host by mounting it within VMs. It
> +does not use the network stack and aims to offer similar performance and
> +semantics as the source file system.
> +
> +To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
> +needs to run in the background. This happens automatically in {pve} when
> +starting a related VM.
Nit: "related" sounds a bit vague IMHO. Maybe be explicit with something
like "when starting a VM using a virtio-fs mount"
> +
> +Linux VMs with kernel >=5.4 support virtio-fs by default.
> +
> +There is a guide available on how to utilize virtio-fs in Windows VMs.
> +https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
> +
> +Known Limitations
> +^^^^^^^^^^^^^^^^^
> +
> +* If Virtiofsd should crash, its mount point will hang in the VM until the VM
Nit: "If Virtiofsd should crash" -> "If virtiofsd crashes"
> +is completely stopped.
> +* Virtiofsd not responding may result in NFS-like hanging access in the VM.
Nit: again, I'd not capitalize virtiofsd
"NFS-like hanging access" might not be clear to all users. Maybe
something like "a hanging mount in the VM, similar to an unreachable NFS"
> +* Memory hotplug does not work in combination with virtio-fs (also results in
> +hanging access).
> +* Memory related features such as live migration, snapshots, and hibernate are
> +not available with virtio-fs devices.
> +* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
"ACLs" -> "ACLs in the context of virtio-fs" or "ACLs for virtio-fs mounts"
Nit: "disable it" -> "disable ACLs", never hurts to be explicit in docs
> +otherwise the virtio-fs device will not be visible within the VMs.
> +
> +Add Mapping for Shared Directories
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +To add a mapping for a shared directory, you can use the API directly with
> +`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:
> +
> +----
> +pvesh create /cluster/mapping/dir --id dir1 \
> + --map node=node1,path=/path/to/share1 \
> + --map node=node2,path=/path/to/share2,submounts=1 \
The example still uses 'submounts' rather than 'announce-submounts'.
> +----
> +
> +Set `announce-submounts` to `1` if multiple filesystems are mounted in a shared
> +directory, to tell the guest which directories are mount points to prevent data
Nit: I'd split it in two sentences: "directory. This tells"
> +loss/corruption. With `announce-submounts`, virtiofsd reports a different device
> +number for each submount it encounters. Without it, duplicates may be created
> +because inode IDs are only unique on a single filesystem.
> +
> +Add virtio-fs to a VM
> +^^^^^^^^^^^^^^^^^^^^^
> +
> +To share a directory using virtio-fs, add the parameter `virtiofs<N>` (N can be
> +anything between 0 and 9) to the VM config and use a directory ID (dirid) that
> +has been configured in the resource mapping. Additionally, you can set the
> +`cache` option to either `always`, `never`, or `auto` (default: `auto`),
> +depending on your requirements. How the different caching modes behave can be
> +read at https://lwn.net/Articles/774495/ under the "Caching Modes" section. To
> +enable writeback cache set `writeback` to `1`.
> +
> +If you want virtio-fs to honor the `O_DIRECT` flag, you can set the `direct-io`
> +parameter to `1` (default: `0`). This will degrade performance, but is useful if
> +applications do their own caching.
> +
I'd add a sentence describing expose-acl and expose-xattr first. And
mention again that expose-acl should not be used for Windows, to make it
unlikely that users miss it.
> +The `expose-acl` parameter automatically implies `expose-xattr`, that is, it
> +makes no difference if you set `expose-xattr` to `0` if `expose-acl` is set to
> +`1`.
> +
> +----
> +qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
> +qm set <vmid> -virtiofs1 <dirid>,cache=never,expose-xattr=1
> +qm set <vmid> -virtiofs2 <dirid>,expose-acl=1,writeback=1
> +----
> +
> +To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run the
> +following command inside the guest:
> +
> +----
> +mount -t virtiofs <mount tag> <mount point>
> +----
> +
> +The dirid associated with the path on the current node is also used as the mount
> +tag (name used to mount the device on the guest).
> +
> +For more information on available virtiofsd parameters, see the
> +https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
> +
> [[qm_bootorder]]
> Device Boot Order
> ~~~~~~~~~~~~~~~~~
> @@ -1885,8 +1972,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
>
> [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
>
> -Where `<type>` is the hardware type (currently either `pci` or `usb`) and
> -`<options>` are the device mappings and other configuration parameters.
> +Where `<type>` is the hardware type (currently either `pci`, `usb` or
> +xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
> +configuration parameters.
>
> Note that the options must include a map property with all identifying
> properties of that hardware, so that it's possible to verify the hardware did
* [pve-devel] [PATCH qemu-server v13 4/12] control: add virtiofsd as runtime dependency for qemu-server
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
` (2 preceding siblings ...)
2025-01-22 10:08 ` [pve-devel] [PATCH docs v13 3/12] add doc section for the shared filesystem virtio-fs Markus Frank
@ 2025-01-22 10:08 ` Markus Frank
2025-02-19 11:51 ` Fiona Ebner
2025-01-22 10:08 ` [pve-devel] [PATCH qemu-server v13 5/12] fix #1027: virtio-fs support Markus Frank
` (8 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
debian/control | 1 +
1 file changed, 1 insertion(+)
diff --git a/debian/control b/debian/control
index 81f0fad6..eda357a5 100644
--- a/debian/control
+++ b/debian/control
@@ -55,6 +55,7 @@ Depends: dbus,
socat,
swtpm,
swtpm-tools,
+ virtiofsd,
${misc:Depends},
${perl:Depends},
${shlibs:Depends},
--
2.39.5
* Re: [pve-devel] [PATCH qemu-server v13 4/12] control: add virtiofsd as runtime dependency for qemu-server
2025-01-22 10:08 ` [pve-devel] [PATCH qemu-server v13 4/12] control: add virtiofsd as runtime dependency for qemu-server Markus Frank
@ 2025-02-19 11:51 ` Fiona Ebner
0 siblings, 0 replies; 25+ messages in thread
From: Fiona Ebner @ 2025-02-19 11:51 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 22.01.25 at 11:08, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> ---
> debian/control | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/debian/control b/debian/control
> index 81f0fad6..eda357a5 100644
> --- a/debian/control
> +++ b/debian/control
> @@ -55,6 +55,7 @@ Depends: dbus,
> socat,
> swtpm,
> swtpm-tools,
> + virtiofsd,
> ${misc:Depends},
> ${perl:Depends},
> ${shlibs:Depends},
* [pve-devel] [PATCH qemu-server v13 5/12] fix #1027: virtio-fs support
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
` (3 preceding siblings ...)
2025-01-22 10:08 ` [pve-devel] [PATCH qemu-server v13 4/12] control: add virtiofsd as runtime dependency for qemu-server Markus Frank
@ 2025-01-22 10:08 ` Markus Frank
2025-02-19 13:43 ` Fiona Ebner
2025-01-22 10:08 ` [pve-devel] [PATCH qemu-server v13 6/12] migration: check_local_resources for virtiofs Markus Frank
` (7 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
add support for sharing directories with a guest vm.
virtio-fs needs virtiofsd to be started.
In order to start virtiofsd as a process (despite being a daemon, it
does not run in the background on its own), a double-fork is used.
virtiofsd should close itself together with QEMU.
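The double-fork described above can be sketched as follows (a minimal illustration; the helper name and the `/bin/true` invocation are placeholders, the actual code and virtiofsd arguments in the patch differ):

```perl
use strict;
use warnings;
use POSIX qw(setsid _exit);

# Minimal double-fork sketch: the intermediate child detaches and exits
# immediately, so the grandchild (the daemon) is reparented away from
# the caller and no zombie is left behind.
sub start_detached {
    my (@cmd) = @_;

    my $pid = fork() // die "fork failed: $!\n";
    if ($pid == 0) {
        setsid();    # new session, no controlling terminal
        my $pid2 = fork() // _exit(1);
        if ($pid2 == 0) {
            exec(@cmd) or _exit(1);    # grandchild runs the command
        }
        _exit(0);    # intermediate child exits right away
    }
    waitpid($pid, 0);    # reap the intermediate child immediately
    return;
}

# hypothetical invocation; real virtiofsd arguments differ
start_detached('/bin/true');
```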
There is the required dirid parameter and the optional parameters
direct-io, cache and writeback. Additionally, the expose-xattr &
expose-acl parameters can be set to expose xattr & ACL settings from the
shared filesystem to the guest system.
The dirid gets mapped to the path on the current node and is also used
as a mount tag (name used to mount the device on the guest).
example config:
```
virtiofs0: foo,direct-io=1,cache=always,expose-acl=1
virtiofs1: dirid=bar,cache=never,expose-xattr=1,writeback=1
```
For information on the optional parameters, see the corresponding doc patch
and the official gitlab README:
https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md
Also add a permission check for virtiofs directory access.
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
PVE/API2/Qemu.pm | 40 ++++++-
PVE/QemuServer.pm | 22 +++-
PVE/QemuServer/Makefile | 3 +-
PVE/QemuServer/Memory.pm | 23 ++--
PVE/QemuServer/Virtiofs.pm | 223 +++++++++++++++++++++++++++++++++++++
5 files changed, 300 insertions(+), 11 deletions(-)
create mode 100644 PVE/QemuServer/Virtiofs.pm
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 295260e7..16ff31af 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -39,6 +39,7 @@ use PVE::QemuServer::MetaInfo;
use PVE::QemuServer::PCI;
use PVE::QemuServer::QMPHelpers;
use PVE::QemuServer::USB;
+use PVE::QemuServer::Virtiofs;
use PVE::QemuMigrate;
use PVE::RPCEnvironment;
use PVE::AccessControl;
@@ -801,6 +802,32 @@ my sub check_vm_create_hostpci_perm {
return 1;
};
+my sub check_dir_perm {
+ my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
+
+ return 1 if $authuser eq 'root@pam';
+
+ $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
+
+ my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
+ $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+
+ return 1;
+};
+
+my sub check_vm_create_dir_perm {
+ my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
+
+ return 1 if $authuser eq 'root@pam';
+
+ for my $opt (keys %{$param}) {
+ next if $opt !~ m/^virtiofs\d+$/;
+ check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
+ }
+
+ return 1;
+};
+
my $check_vm_modify_config_perm = sub {
my ($rpcenv, $authuser, $vmid, $pool, $key_list) = @_;
@@ -811,7 +838,7 @@ my $check_vm_modify_config_perm = sub {
# else, as there the permission can be value dependent
next if PVE::QemuServer::is_valid_drivename($opt);
next if $opt eq 'cdrom';
- next if $opt =~ m/^(?:unused|serial|usb|hostpci)\d+$/;
+ next if $opt =~ m/^(?:unused|serial|usb|hostpci|virtiofs)\d+$/;
next if $opt eq 'tags';
@@ -1114,6 +1141,7 @@ __PACKAGE__->register_method({
&$check_vm_create_serial_perm($rpcenv, $authuser, $vmid, $pool, $param);
check_vm_create_usb_perm($rpcenv, $authuser, $vmid, $pool, $param);
check_vm_create_hostpci_perm($rpcenv, $authuser, $vmid, $pool, $param);
+ check_vm_create_dir_perm($rpcenv, $authuser, $vmid, $pool, $param);
PVE::QemuServer::check_bridge_access($rpcenv, $authuser, $param);
&$check_cpu_model_access($rpcenv, $authuser, $param);
@@ -2005,6 +2033,10 @@ my $update_vm_api = sub {
check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
PVE::QemuConfig->write_config($vmid, $conf);
+ } elsif ($opt =~ m/^virtiofs\d$/) {
+ check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $val);
+ PVE::QemuConfig->add_to_pending_delete($conf, $opt, $force);
+ PVE::QemuConfig->write_config($vmid, $conf);
} elsif ($opt eq 'tags') {
assert_tag_permissions($vmid, $val, '', $rpcenv, $authuser);
delete $conf->{$opt};
@@ -2095,6 +2127,12 @@ my $update_vm_api = sub {
}
check_hostpci_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
$conf->{pending}->{$opt} = $param->{$opt};
+ } elsif ($opt =~ m/^virtiofs\d$/) {
+ if (my $oldvalue = $conf->{$opt}) {
+ check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $oldvalue);
+ }
+ check_dir_perm($rpcenv, $authuser, $vmid, undef, $opt, $param->{$opt});
+ $conf->{pending}->{$opt} = $param->{$opt};
} elsif ($opt eq 'tags') {
assert_tag_permissions($vmid, $conf->{$opt}, $param->{$opt}, $rpcenv, $authuser);
$conf->{pending}->{$opt} = PVE::GuestHelpers::get_unique_tags($param->{$opt});
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 808c0e1c..7907604a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -33,6 +33,7 @@ use PVE::Exception qw(raise raise_param_exc);
use PVE::Format qw(render_duration render_bytes);
use PVE::GuestHelpers qw(safe_string_ne safe_num_ne safe_boolean_ne);
use PVE::HA::Config;
+use PVE::Mapping::Dir;
use PVE::Mapping::PCI;
use PVE::Mapping::USB;
use PVE::INotify;
@@ -61,6 +62,7 @@ use PVE::QemuServer::Monitor qw(mon_cmd);
use PVE::QemuServer::PCI qw(print_pci_addr print_pcie_addr print_pcie_root_port parse_hostpci);
use PVE::QemuServer::QMPHelpers qw(qemu_deviceadd qemu_devicedel qemu_objectadd qemu_objectdel);
use PVE::QemuServer::USB;
+use PVE::QemuServer::Virtiofs qw(max_virtiofs start_all_virtiofsd);
my $have_sdn;
eval {
@@ -947,6 +949,10 @@ my $netdesc = {
PVE::JSONSchema::register_standard_option("pve-qm-net", $netdesc);
+for (my $i = 0; $i < max_virtiofs(); $i++) {
+ $confdesc->{"virtiofs$i"} = get_standard_option('pve-qm-virtiofs');
+}
+
my $ipconfig_fmt = {
ip => {
type => 'string',
@@ -3703,8 +3709,11 @@ sub config_to_command {
push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough);
}
+ my $virtiofs_enabled = PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
+
PVE::QemuServer::Memory::config(
- $conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd);
+ $conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd,
+ $machineFlags, $virtiofs_enabled);
push @$cmd, '-S' if $conf->{freeze};
@@ -3994,6 +4003,8 @@ sub config_to_command {
push @$machineFlags, 'confidential-guest-support=sev0';
}
+ PVE::QemuServer::Virtiofs::config($conf, $vmid, $devices);
+
push @$cmd, @$devices;
push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
@@ -5792,6 +5803,8 @@ sub vm_start_nolock {
PVE::Tools::run_fork sub {
PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
+ my $virtiofs_sockets = start_all_virtiofsd($conf, $vmid);
+
my $tpmpid;
if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
# start the TPM emulator so QEMU can connect on start
@@ -5804,8 +5817,10 @@ sub vm_start_nolock {
warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
kill 'TERM', $tpmpid;
}
+ PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets);
die "QEMU exited with code $exitcode\n";
}
+ PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets);
};
};
@@ -6465,7 +6480,10 @@ sub check_mapping_access {
} else {
die "either 'host' or 'mapping' must be set.\n";
}
- }
+ } elsif ($opt =~ m/^virtiofs\d$/) {
+ my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+ $rpcenv->check_full($user, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
+ }
}
};
diff --git a/PVE/QemuServer/Makefile b/PVE/QemuServer/Makefile
index 18fd13ea..3588e0e1 100644
--- a/PVE/QemuServer/Makefile
+++ b/PVE/QemuServer/Makefile
@@ -11,7 +11,8 @@ SOURCES=PCI.pm \
CPUConfig.pm \
CGroup.pm \
Drive.pm \
- QMPHelpers.pm
+ QMPHelpers.pm \
+ Virtiofs.pm
.PHONY: install
install: ${SOURCES}
diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
index e5024cd2..0f87bbc0 100644
--- a/PVE/QemuServer/Memory.pm
+++ b/PVE/QemuServer/Memory.pm
@@ -336,7 +336,7 @@ sub qemu_memdevices_list {
}
sub config {
- my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
+ my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags, $virtiofs_enabled) = @_;
my $memory = get_current_memory($conf->{memory});
my $static_memory = 0;
@@ -379,7 +379,8 @@ sub config {
my $numa_memory = $numa->{memory};
$numa_totalmemory += $numa_memory;
- my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
+ my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+ my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
# cpus
my $cpulists = $numa->{cpus};
@@ -404,7 +405,7 @@ sub config {
}
push @$cmd, '-object', $mem_object;
- push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+ push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
}
die "total memory for NUMA nodes must be equal to vm static memory\n"
@@ -418,15 +419,21 @@ sub config {
die "host NUMA node$i doesn't exist\n"
if !host_numanode_exists($i) && $conf->{hugepages};
- my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
- push @$cmd, '-object', $mem_object;
-
my $cpus = ($cores * $i);
$cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
- push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
+ my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
+ my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
+ push @$cmd, '-object', $mem_object;
+ push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
}
}
+ } elsif ($virtiofs_enabled) {
+ # kvm: '-machine memory-backend' and '-numa memdev' properties are
+ # mutually exclusive
+ push @$cmd, '-object', 'memory-backend-memfd,id=virtiofs-mem'
+ .",size=$conf->{memory}M,share=on";
+ push @$machine_flags, 'memory-backend=virtiofs-mem';
}
if ($hotplug) {
@@ -453,6 +460,8 @@ sub print_mem_object {
my $path = hugepages_mount_path($hugepages_size);
return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
+ } elsif ($id =~ m/^virtiofs-mem/) {
+ return "memory-backend-memfd,id=$id,size=${size}M,share=on";
} else {
return "memory-backend-ram,id=$id,size=${size}M";
}
diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
new file mode 100644
index 00000000..e1aabd5a
--- /dev/null
+++ b/PVE/QemuServer/Virtiofs.pm
@@ -0,0 +1,223 @@
+package PVE::QemuServer::Virtiofs;
+
+use strict;
+use warnings;
+
+use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);
+use IO::Socket::UNIX;
+use POSIX;
+use Socket qw(SOCK_STREAM);
+
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::Mapping::Dir;
+use PVE::RESTEnvironment qw(log_warn);
+
+use base qw(Exporter);
+
+our @EXPORT_OK = qw(
+max_virtiofs
+start_all_virtiofsd
+);
+
+my $MAX_VIRTIOFS = 10;
+my $socket_path_root = "/run/qemu-server/virtiofsd";
+
+my $virtiofs_fmt = {
+ 'dirid' => {
+ type => 'string',
+ default_key => 1,
+ description => "Mapping identifier of the directory mapping to be shared with the guest."
+ ." Also used as a mount tag inside the VM.",
+ format_description => 'mapping-id',
+ format => 'pve-configid',
+ },
+ 'cache' => {
+ type => 'string',
+ description => "The caching policy the file system should use (auto, always, never).",
+ enum => [qw(auto always never)],
+ default => "auto",
+ optional => 1,
+ },
+ 'direct-io' => {
+ type => 'boolean',
+ description => "Honor the O_DIRECT flag passed down by guest applications.",
+ default => 0,
+ optional => 1,
+ },
+ writeback => {
+ type => 'boolean',
+ description => "Enable writeback cache. If enabled, writes may be cached in the guest until"
+ ." the file is closed or an fsync is performed.",
+ default => 0,
+ optional => 1,
+ },
+ 'expose-xattr' => {
+ type => 'boolean',
+ description => "Overwrite the xattr option from mapping and explicitly enable/disable"
+ ." support for extended attributes for the VM.",
+ default => 0,
+ optional => 1,
+ },
+ 'expose-acl' => {
+ type => 'boolean',
+ description => "Overwrite the acl option from mapping and explicitly enable/disable support"
+ ." for posix ACLs (enabled acl implies xattr) for the VM.",
+ default => 0,
+ optional => 1,
+ },
+};
+PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
+
+my $virtiofsdesc = {
+ optional => 1,
+ type => 'string', format => $virtiofs_fmt,
+ description => "Configuration for sharing a directory between host and guest using Virtio-fs.",
+};
+PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
+
+sub max_virtiofs {
+ return $MAX_VIRTIOFS;
+}
+
+sub assert_virtiofs_config {
+ my ($conf, $virtiofs) = @_;
+
+ my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
+ my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+
+ my $acl = $virtiofs->{'expose-acl'};
+ if ($acl && PVE::QemuServer::Helpers::windows_version($conf->{ostype})) {
+ log_warn(
+ "Please disable ACLs for virtiofs on Windows VMs, otherwise"
+ ." the virtiofs shared directory cannot be mounted."
+ );
+ }
+
+ if (!$node_list || scalar($node_list->@*) != 1) {
+ die "virtiofs needs exactly one mapping for this node\n";
+ }
+
+ eval { PVE::Mapping::Dir::assert_valid($node_list->[0]) };
+ die "directory mapping invalid: $@\n" if $@;
+}
+
+sub config {
+ my ($conf, $vmid, $devices) = @_;
+
+ for (my $i = 0; $i < max_virtiofs(); $i++) {
+ my $opt = "virtiofs$i";
+
+ next if !$conf->{$opt};
+ my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+ next if !$virtiofs;
+
+ assert_virtiofs_config($conf, $virtiofs);
+
+ push @$devices, '-chardev', "socket,id=virtiofs$i,path=$socket_path_root/vm$vmid-fs$i";
+
+ # queue-size is set 1024 because of bug with Windows guests:
+ # https://bugzilla.redhat.com/show_bug.cgi?id=1873088
+ # 1024 is also always used in the virtiofs documentations:
+ # https://gitlab.com/virtio-fs/virtiofsd#examples
+ push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
+ .",chardev=virtiofs$i,tag=$virtiofs->{dirid}";
+ }
+}
+
+sub virtiofs_enabled {
+ my ($conf) = @_;
+
+ my $virtiofs_enabled = 0;
+ for (my $i = 0; $i < max_virtiofs(); $i++) {
+ my $opt = "virtiofs$i";
+ next if !$conf->{$opt};
+ my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+ if ($virtiofs) {
+ $virtiofs_enabled = 1;
+ last;
+ }
+ }
+ return $virtiofs_enabled;
+}
+
+sub start_all_virtiofsd {
+ my ($conf, $vmid) = @_;
+ my $virtiofs_sockets = [];
+ for (my $i = 0; $i < max_virtiofs(); $i++) {
+ my $opt = "virtiofs$i";
+
+ next if !$conf->{$opt};
+ my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
+ next if !$virtiofs;
+
+ my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
+ push @$virtiofs_sockets, $virtiofs_socket;
+ }
+ return $virtiofs_sockets;
+}
+
+sub start_virtiofsd {
+ my ($vmid, $fsid, $virtiofs) = @_;
+
+ mkdir $socket_path_root;
+ my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
+ unlink($socket_path);
+ my $socket = IO::Socket::UNIX->new(
+ Type => SOCK_STREAM,
+ Local => $socket_path,
+ Listen => 1,
+ ) or die "cannot create socket - $!\n";
+
+ my $flags = fcntl($socket, F_GETFD, 0)
+ or die "failed to get file descriptor flags: $!\n";
+ fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
+ or die "failed to remove FD_CLOEXEC from file descriptor\n";
+
+ my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
+ my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
+ my $node_cfg = $node_list->[0];
+
+ my $virtiofsd_bin = '/usr/libexec/virtiofsd';
+ my $fd = $socket->fileno();
+ my $path = $node_cfg->{path};
+
+ my $could_not_fork_err = "could not fork to start virtiofsd\n";
+ my $pid = fork();
+ if ($pid == 0) {
+ setsid();
+ $0 = "task pve-vm$vmid-virtiofs$fsid";
+ my $pid2 = fork();
+ if ($pid2 == 0) {
+ my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
+ push @$cmd, '--xattr' if $virtiofs->{'expose-xattr'};
+ push @$cmd, '--posix-acl' if $virtiofs->{'expose-acl'};
+ push @$cmd, '--announce-submounts' if $node_cfg->{'announce-submounts'};
+ push @$cmd, '--allow-direct-io' if $virtiofs->{'direct-io'};
+ push @$cmd, '--cache='.$virtiofs->{cache} if $virtiofs->{cache};
+ push @$cmd, '--writeback' if $virtiofs->{'writeback'};
+ push @$cmd, '--syslog';
+ exec(@$cmd);
+ } elsif (!defined($pid2)) {
+ die $could_not_fork_err;
+ } else {
+ POSIX::_exit(0);
+ }
+ } elsif (!defined($pid)) {
+ die $could_not_fork_err;
+ } else {
+ waitpid($pid, 0);
+ }
+
+ # return socket to keep it alive,
+ # so that QEMU will wait for virtiofsd to start
+ return $socket;
+}
+
+sub close_sockets {
+ my @sockets = @_;
+ for my $socket (@sockets) {
+ shutdown($socket, 2);
+ close($socket);
+ }
+}
+1;
--
2.39.5
* Re: [pve-devel] [PATCH qemu-server v13 5/12] fix #1027: virtio-fs support
From: Fiona Ebner @ 2025-02-19 13:43 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
Looking very good! Just some nits and an issue with the schema
description that I noticed below:
Am 22.01.25 um 11:08 schrieb Markus Frank:
> @@ -801,6 +802,32 @@ my sub check_vm_create_hostpci_perm {
> return 1;
> };
>
> +my sub check_dir_perm {
> + my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
> +
> + return 1 if $authuser eq 'root@pam';
> +
> + $rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
> +
> + my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $value);
> + $rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", ['Mapping.Use']);
> +
> + return 1;
> +};
> +
> +my sub check_vm_create_dir_perm {
> + my ($rpcenv, $authuser, $vmid, $pool, $param) = @_;
> +
> + return 1 if $authuser eq 'root@pam';
> +
> + for my $opt (keys %{$param}) {
Nit: sort the keys just to get a predictable failure order. Or better,
iterate over the known virtiofs keys instead, like you do elsewhere.
> + next if $opt !~ m/^virtiofs\d+$/;
> + check_dir_perm($rpcenv, $authuser, $vmid, $pool, $opt, $param->{$opt});
> + }
> +
> + return 1;
> +};
> +
---snip 8<---
> @@ -3703,8 +3709,11 @@ sub config_to_command {
> push @$cmd, get_cpu_options($conf, $arch, $kvm, $kvm_off, $machine_version, $winversion, $gpu_passthrough);
> }
>
> + my $virtiofs_enabled = PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
> +
> PVE::QemuServer::Memory::config(
> - $conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd);
> + $conf, $vmid, $sockets, $cores, $hotplug_features->{memory}, $cmd,
> + $machineFlags, $virtiofs_enabled);
Style nit: I'm afraid it's necessary to use one argument per line to
comply with our style guide now.
>
> push @$cmd, '-S' if $conf->{freeze};
>
> @@ -3994,6 +4003,8 @@ sub config_to_command {
> push @$machineFlags, 'confidential-guest-support=sev0';
> }
>
> + PVE::QemuServer::Virtiofs::config($conf, $vmid, $devices);
> +
> push @$cmd, @$devices;
> push @$cmd, '-rtc', join(',', @$rtcFlags) if scalar(@$rtcFlags);
> push @$cmd, '-machine', join(',', @$machineFlags) if scalar(@$machineFlags);
> @@ -5792,6 +5803,8 @@ sub vm_start_nolock {
> PVE::Tools::run_fork sub {
> PVE::Systemd::enter_systemd_scope($vmid, "Proxmox VE VM $vmid", %systemd_properties);
>
> + my $virtiofs_sockets = start_all_virtiofsd($conf, $vmid);
> +
> my $tpmpid;
> if ((my $tpm = $conf->{tpmstate0}) && !PVE::QemuConfig->is_template($conf)) {
> # start the TPM emulator so QEMU can connect on start
> @@ -5804,8 +5817,10 @@ sub vm_start_nolock {
> warn "stopping swtpm instance (pid $tpmpid) due to QEMU startup error\n";
> kill 'TERM', $tpmpid;
> }
> + PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets);
> die "QEMU exited with code $exitcode\n";
> }
> + PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets);
Instead of having it twice, it could be done slightly above and
safeguarded by an eval (it should not be critical to fail here IMHO), like:
my $exitcode = run_command($cmd, %run_params);
eval { PVE::QemuServer::Virtiofs::close_sockets(@$virtiofs_sockets); };
log_warn("closing virtiofs sockets failed - $@") if $@;
if ($exitcode) {
---snip 8<---
> @@ -418,15 +419,21 @@ sub config {
> die "host NUMA node$i doesn't exist\n"
> if !host_numanode_exists($i) && $conf->{hugepages};
>
> - my $mem_object = print_mem_object($conf, "ram-node$i", $numa_memory);
> - push @$cmd, '-object', $mem_object;
> -
> my $cpus = ($cores * $i);
> $cpus .= "-" . ($cpus + $cores - 1) if $cores > 1;
>
> - push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> + my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
> + my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
> + push @$cmd, '-object', $mem_object;
> + push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
> }
> }
> + } elsif ($virtiofs_enabled) {
> + # kvm: '-machine memory-backend' and '-numa memdev' properties are
> + # mutually exclusive
Style nit: above comment fits within 100 characters on one line
> + push @$cmd, '-object', 'memory-backend-memfd,id=virtiofs-mem'
> + .",size=$conf->{memory}M,share=on";
> + push @$machine_flags, 'memory-backend=virtiofs-mem';
> }
>
> if ($hotplug) {
> @@ -453,6 +460,8 @@ sub print_mem_object {
> my $path = hugepages_mount_path($hugepages_size);
>
> return "memory-backend-file,id=$id,size=${size}M,mem-path=$path,share=on,prealloc=yes";
> + } elsif ($id =~ m/^virtiofs-mem/) {
> + return "memory-backend-memfd,id=$id,size=${size}M,share=on";
> } else {
> return "memory-backend-ram,id=$id,size=${size}M";
> }
> diff --git a/PVE/QemuServer/Virtiofs.pm b/PVE/QemuServer/Virtiofs.pm
> new file mode 100644
> index 00000000..e1aabd5a
> --- /dev/null
> +++ b/PVE/QemuServer/Virtiofs.pm
> @@ -0,0 +1,223 @@
> +package PVE::QemuServer::Virtiofs;
> +
> +use strict;
> +use warnings;
> +
> +use Fcntl qw(F_GETFD F_SETFD FD_CLOEXEC);
> +use IO::Socket::UNIX;
> +use POSIX;
> +use Socket qw(SOCK_STREAM);
> +
> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
Nit: get_standard_option is not used
> +use PVE::Mapping::Dir;
> +use PVE::RESTEnvironment qw(log_warn);
> +
missing
use PVE::QemuServer::Helpers;
for the PVE::QemuServer::Helpers::windows_version() call
> +use base qw(Exporter);
> +
> +our @EXPORT_OK = qw(
> +max_virtiofs
> +start_all_virtiofsd
> +);
> +
> +my $MAX_VIRTIOFS = 10;
> +my $socket_path_root = "/run/qemu-server/virtiofsd";
> +
> +my $virtiofs_fmt = {
> + 'dirid' => {
> + type => 'string',
> + default_key => 1,
> + description => "Mapping identifier of the directory mapping to be shared with the guest."
> + ." Also used as a mount tag inside the VM.",
> + format_description => 'mapping-id',
> + format => 'pve-configid',
> + },
> + 'cache' => {
> + type => 'string',
> + description => "The caching policy the file system should use (auto, always, never).",
> + enum => [qw(auto always never)],
> + default => "auto",
> + optional => 1,
> + },
> + 'direct-io' => {
> + type => 'boolean',
> + description => "Honor the O_DIRECT flag passed down by guest applications.",
> + default => 0,
> + optional => 1,
> + },
> + writeback => {
> + type => 'boolean',
> + description => "Enable writeback cache. If enabled, writes may be cached in the guest until"
> + ." the file is closed or an fsync is performed.",
> + default => 0,
> + optional => 1,
> + },
> + 'expose-xattr' => {
> + type => 'boolean',
> + description => "Overwrite the xattr option from mapping and explicitly enable/disable"
> + ." support for extended attributes for the VM.",
The setting is not VM-wide, so
s/for the VM/for this mount/
The option doesn't exist for the mapping anymore, so the description
should not mention overwriting it, right?
> + default => 0,
> + optional => 1,
> + },
> + 'expose-acl' => {
> + type => 'boolean',
> + description => "Overwrite the acl option from mapping and explicitly enable/disable support"
> + ." for posix ACLs (enabled acl implies xattr) for the VM.",
s/posix/POSIX/
s/for the VM/for this mount/
The option doesn't exist for the mapping anymore, so the description
should not mention overwriting it, right?
> + default => 0,
> + optional => 1,
> + },
> +};
> +PVE::JSONSchema::register_format('pve-qm-virtiofs', $virtiofs_fmt);
> +
> +my $virtiofsdesc = {
> + optional => 1,
> + type => 'string', format => $virtiofs_fmt,
> + description => "Configuration for sharing a directory between host and guest using Virtio-fs.",
> +};
> +PVE::JSONSchema::register_standard_option("pve-qm-virtiofs", $virtiofsdesc);
> +
> +sub max_virtiofs {
> + return $MAX_VIRTIOFS;
> +}
> +
> +sub assert_virtiofs_config {
> + my ($conf, $virtiofs) = @_;
Nit: could also pass in just the ostype. That would also avoid the
potential to confuse $conf with the virtiofs config in this context.
> +
> + my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
> + my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
> +
> + my $acl = $virtiofs->{'expose-acl'};
> + if ($acl && PVE::QemuServer::Helpers::windows_version($conf->{ostype})) {
> + log_warn(
> + "Please disable ACLs for virtiofs on Windows VMs, otherwise"
> + ." the virtiofs shared directory cannot be mounted."
> + );
> + }
> +
> + if (!$node_list || scalar($node_list->@*) != 1) {
> + die "virtiofs needs exactly one mapping for this node\n";
> + }
> +
> + eval { PVE::Mapping::Dir::assert_valid($node_list->[0]) };
> + die "directory mapping invalid: $@\n" if $@;
> +}
> +
> +sub config {
> + my ($conf, $vmid, $devices) = @_;
> +
> + for (my $i = 0; $i < max_virtiofs(); $i++) {
> + my $opt = "virtiofs$i";
> +
> + next if !$conf->{$opt};
> + my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> + next if !$virtiofs;
Nit: I'd "die" instead of "next" since this is unexpected, i.e.
parse_property_string() was successful so there should be a result. And
I don't think you even need it. We don't check this elsewhere either,
because the only way parse_property_string() can return something false
but not die if there is a custom validator for the format and that one
returns something false but not die (which would be a bug with the
validator).
> +
> + assert_virtiofs_config($conf, $virtiofs);
> +
> + push @$devices, '-chardev', "socket,id=virtiofs$i,path=$socket_path_root/vm$vmid-fs$i";
> +
> + # queue-size is set 1024 because of bug with Windows guests:
> + # https://bugzilla.redhat.com/show_bug.cgi?id=1873088
> + # 1024 is also always used in the virtiofs documentations:
> + # https://gitlab.com/virtio-fs/virtiofsd#examples
> + push @$devices, '-device', 'vhost-user-fs-pci,queue-size=1024'
> + .",chardev=virtiofs$i,tag=$virtiofs->{dirid}";
> + }
> +}
> +
> +sub virtiofs_enabled {
> + my ($conf) = @_;
> +
> + my $virtiofs_enabled = 0;
> + for (my $i = 0; $i < max_virtiofs(); $i++) {
> + my $opt = "virtiofs$i";
> + next if !$conf->{$opt};
> + my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> + if ($virtiofs) {
Nit: Similar to above, if we do get a result, it will always evaluate to
true.
> + $virtiofs_enabled = 1;
> + last;
> + }
> + }
> + return $virtiofs_enabled;
> +}
> +
> +sub start_all_virtiofsd {
> + my ($conf, $vmid) = @_;
> + my $virtiofs_sockets = [];
> + for (my $i = 0; $i < max_virtiofs(); $i++) {
> + my $opt = "virtiofs$i";
> +
> + next if !$conf->{$opt};
> + my $virtiofs = parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> + next if !$virtiofs;
Nit: Similar to above. Or did you ever run into the situation where the
result from parse_property_string() would be false?
> +
> + my $virtiofs_socket = start_virtiofsd($vmid, $i, $virtiofs);
> + push @$virtiofs_sockets, $virtiofs_socket;
> + }
> + return $virtiofs_sockets;
> +}
> +
> +sub start_virtiofsd {
> + my ($vmid, $fsid, $virtiofs) = @_;
> +
> + mkdir $socket_path_root;
> + my $socket_path = "$socket_path_root/vm$vmid-fs$fsid";
> + unlink($socket_path);
> + my $socket = IO::Socket::UNIX->new(
> + Type => SOCK_STREAM,
> + Local => $socket_path,
> + Listen => 1,
> + ) or die "cannot create socket - $!\n";
> +
> + my $flags = fcntl($socket, F_GETFD, 0)
> + or die "failed to get file descriptor flags: $!\n";
> + fcntl($socket, F_SETFD, $flags & ~FD_CLOEXEC)
> + or die "failed to remove FD_CLOEXEC from file descriptor\n";
> +
> + my $dir_cfg = PVE::Mapping::Dir::config()->{ids}->{$virtiofs->{dirid}};
> + my $node_list = PVE::Mapping::Dir::find_on_current_node($virtiofs->{dirid});
> + my $node_cfg = $node_list->[0];
Nit: this should also check if there is exactly one entry in the result.
But since all callers require that, we can make the
find_on_current_node() function check this itself and return the single
entry directly.
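The suggested change could look roughly like this — a hypothetical Python sketch of the proposed find_on_current_node() behavior, with assumed entry shapes rather than the real PVE::Mapping::Dir API:

```python
def find_on_current_node(mapping_entries, node):
    """Return the single mapping entry for this node; die if there is not exactly one."""
    matches = [e for e in mapping_entries if e.get("node") == node]
    if len(matches) != 1:
        raise ValueError(
            f"need exactly one mapping for node '{node}', found {len(matches)}")
    return matches[0]
```

With this shape, callers get the single entry directly and the "exactly one mapping" invariant is enforced in one place instead of at every call site.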
> +
> + my $virtiofsd_bin = '/usr/libexec/virtiofsd';
> + my $fd = $socket->fileno();
> + my $path = $node_cfg->{path};
> +
> + my $could_not_fork_err = "could not fork to start virtiofsd\n";
> + my $pid = fork();
> + if ($pid == 0) {
> + setsid();
Is this automatically imported by POSIX? I'd still be explicit here with
POSIX::setsid();
> + $0 = "task pve-vm$vmid-virtiofs$fsid";
> + my $pid2 = fork();
> + if ($pid2 == 0) {
> + my $cmd = [$virtiofsd_bin, "--fd=$fd", "--shared-dir=$path"];
> + push @$cmd, '--xattr' if $virtiofs->{'expose-xattr'};
> + push @$cmd, '--posix-acl' if $virtiofs->{'expose-acl'};
> + push @$cmd, '--announce-submounts' if $node_cfg->{'announce-submounts'};
> + push @$cmd, '--allow-direct-io' if $virtiofs->{'direct-io'};
> + push @$cmd, '--cache='.$virtiofs->{cache} if $virtiofs->{cache};
> + push @$cmd, '--writeback' if $virtiofs->{'writeback'};
> + push @$cmd, '--syslog';
> + exec(@$cmd);
> + } elsif (!defined($pid2)) {
> + die $could_not_fork_err;
> + } else {
> + POSIX::_exit(0);
> + }
> + } elsif (!defined($pid)) {
> + die $could_not_fork_err;
> + } else {
> + waitpid($pid, 0);
> + }
> +
> + # return socket to keep it alive,
> + # so that QEMU will wait for virtiofsd to start
> + return $socket;
> +}
> +
> +sub close_sockets {
> + my @sockets = @_;
> + for my $socket (@sockets) {
> + shutdown($socket, 2);
> + close($socket);
> + }
> +}
Style nit: missing blank line here
> +1;
* [pve-devel] [PATCH qemu-server v13 6/12] migration: check_local_resources for virtiofs
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
Add dir mapping checks to check_local_resources.
Since the VM needs to be powered off for migration anyway, migration
should work with a directory on shared storage regardless of the
caching setting.
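The idea of the added check can be sketched as follows — a hypothetical Python sketch with assumed data shapes (conf as a flat dict, dir_map keyed by dirid and then node), not the real PVE::Mapping::Dir structures:

```python
def missing_dir_mappings(conf, dir_map, nodes):
    """For each node, list virtiofs config keys whose dirid has no mapping there."""
    missing = {node: [] for node in nodes}
    for key, value in conf.items():
        if not key.startswith("virtiofs"):
            continue
        # extract the dirid: either the bare first value or 'dirid=...' (simplified)
        dirid = value.split(",")[0].split("=")[-1]
        for node in nodes:
            if node not in dir_map.get(dirid, {}):
                missing[node].append(key)
    return missing
```

A node with a non-empty list then cannot be a migration target for this VM until a matching directory mapping is configured there.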
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
PVE/QemuServer.pm | 12 +++++++++++-
test/MigrationTest/Shared.pm | 7 +++++++
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7907604a..9cbbceb2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2485,6 +2485,7 @@ sub check_local_resources {
my $nodelist = PVE::Cluster::get_nodelist();
my $pci_map = PVE::Mapping::PCI::config();
my $usb_map = PVE::Mapping::USB::config();
+ my $dir_map = PVE::Mapping::Dir::config();
my $missing_mappings_by_node = { map { $_ => [] } @$nodelist };
@@ -2496,6 +2497,8 @@ sub check_local_resources {
$entry = PVE::Mapping::PCI::get_node_mapping($pci_map, $id, $node);
} elsif ($type eq 'usb') {
$entry = PVE::Mapping::USB::get_node_mapping($usb_map, $id, $node);
+ } elsif ($type eq 'dir') {
+ $entry = PVE::Mapping::Dir::get_node_mapping($dir_map, $id, $node);
}
if (!scalar($entry->@*)) {
push @{$missing_mappings_by_node->{$node}}, $key;
@@ -2524,9 +2527,16 @@ sub check_local_resources {
push @$mapped_res, $k;
}
}
+ if ($k =~ m/^virtiofs/) {
+ my $entry = parse_property_string('pve-qm-virtiofs', $conf->{$k});
+ if ($entry->{dirid}) {
+ $add_missing_mapping->('dir', $k, $entry->{dirid});
+ push @$mapped_res, $k;
+ }
+ }
# sockets are safe: they will be recreated on the target side post-migrate
next if $k =~ m/^serial/ && ($conf->{$k} eq 'socket');
- push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel)\d+$/;
+ push @loc_res, $k if $k =~ m/^(usb|hostpci|serial|parallel|virtiofs)\d+$/;
}
die "VM uses local resources\n" if scalar @loc_res && !$noerr;
diff --git a/test/MigrationTest/Shared.pm b/test/MigrationTest/Shared.pm
index aa7203d1..c5d07222 100644
--- a/test/MigrationTest/Shared.pm
+++ b/test/MigrationTest/Shared.pm
@@ -90,6 +90,13 @@ $mapping_pci_module->mock(
},
);
+our $mapping_dir_module = Test::MockModule->new("PVE::Mapping::Dir");
+$mapping_dir_module->mock(
+ config => sub {
+ return {};
+ },
+);
+
our $ha_config_module = Test::MockModule->new("PVE::HA::Config");
$ha_config_module->mock(
vm_is_ha_managed => sub {
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 25+ messages in thread
* [pve-devel] [PATCH qemu-server v13 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
` (5 preceding siblings ...)
2025-01-22 10:08 ` [pve-devel] [PATCH qemu-server v13 6/12] migration: check_local_resources for virtiofs Markus Frank
@ 2025-01-22 10:08 ` Markus Frank
2025-02-19 13:58 ` Fiona Ebner
2025-01-22 10:08 ` [pve-devel] [PATCH manager v13 08/12] api: add resource map api endpoints for directories Markus Frank
` (5 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
PVE/QemuServer.pm | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b89a7e71..00178575 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2460,8 +2460,9 @@ sub check_non_migratable_resources {
my ($conf, $state, $noerr) = @_;
my @blockers = ();
- if ($state && $conf->{"amd-sev"}) {
- push @blockers, "amd-sev";
+ if ($state) {
+ push @blockers, "amd-sev" if $conf->{"amd-sev"};
+ push @blockers, "virtiofs" if PVE::QemuServer::Virtiofs::virtiofs_enabled($conf);
}
if (scalar(@blockers) && !$noerr) {
--
2.39.5
* [pve-devel] [PATCH manager v13 08/12] api: add resource map api endpoints for directories
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
` (6 preceding siblings ...)
2025-01-22 10:08 ` [pve-devel] [PATCH qemu-server v13 7/12] disable snapshot (with RAM) and hibernate with virtio-fs devices Markus Frank
@ 2025-01-22 10:08 ` Markus Frank
2025-02-19 14:14 ` Fiona Ebner
2025-01-22 10:08 ` [pve-devel] [PATCH manager v13 09/12] ui: add edit window for dir mappings Markus Frank
` (4 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
PVE/API2/Cluster/Mapping.pm | 7 +
PVE/API2/Cluster/Mapping/Dir.pm | 307 ++++++++++++++++++++++++++++++
PVE/API2/Cluster/Mapping/Makefile | 1 +
3 files changed, 315 insertions(+)
create mode 100644 PVE/API2/Cluster/Mapping/Dir.pm
diff --git a/PVE/API2/Cluster/Mapping.pm b/PVE/API2/Cluster/Mapping.pm
index 40386579..9f0dcd2b 100644
--- a/PVE/API2/Cluster/Mapping.pm
+++ b/PVE/API2/Cluster/Mapping.pm
@@ -3,11 +3,17 @@ package PVE::API2::Cluster::Mapping;
use strict;
use warnings;
+use PVE::API2::Cluster::Mapping::Dir;
use PVE::API2::Cluster::Mapping::PCI;
use PVE::API2::Cluster::Mapping::USB;
use base qw(PVE::RESTHandler);
+__PACKAGE__->register_method ({
+ subclass => "PVE::API2::Cluster::Mapping::Dir",
+ path => 'dir',
+});
+
__PACKAGE__->register_method ({
subclass => "PVE::API2::Cluster::Mapping::PCI",
path => 'pci',
@@ -41,6 +47,7 @@ __PACKAGE__->register_method ({
my ($param) = @_;
my $result = [
+ { name => 'dir' },
{ name => 'pci' },
{ name => 'usb' },
];
diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
new file mode 100644
index 00000000..5218241f
--- /dev/null
+++ b/PVE/API2/Cluster/Mapping/Dir.pm
@@ -0,0 +1,307 @@
+package PVE::API2::Cluster::Mapping::Dir;
+
+use strict;
+use warnings;
+
+use Storable qw(dclone);
+
+use PVE::INotify;
+use PVE::JSONSchema qw(get_standard_option parse_property_string);
+use PVE::Mapping::Dir ();
+use PVE::RPCEnvironment;
+use PVE::Tools qw(extract_param);
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+ name => 'index',
+ path => '',
+ method => 'GET',
+ # only proxy if we give the 'check-node' parameter
+ proxyto_callback => sub {
+ my ($rpcenv, $proxyto, $param) = @_;
+ return $param->{'check-node'} // 'localhost';
+ },
+ description => "List directory mapping",
+ permissions => {
+ description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
+ " 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
+ user => 'all',
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ 'check-node' => get_standard_option('pve-node', {
+ description => "If given, checks the configurations on the given node for ".
+ "correctness, and adds relevant diagnostics for the directory to the response.",
+ optional => 1,
+ }),
+ },
+ },
+ returns => {
+ type => 'array',
+ items => {
+ type => "object",
+ properties => {
+ id => {
+ type => 'string',
+ description => "The logical ID of the mapping."
+ },
+ map => {
+ type => 'array',
+ description => "The entries of the mapping.",
+ items => {
+ type => 'string',
+ description => "A mapping for a node.",
+ },
+ },
+ description => {
+ type => 'string',
+ description => "A description of the logical mapping.",
+ },
+ checks => {
+ type => "array",
+ optional => 1,
+ description => "A list of checks, only present if 'check-node' is set.",
+ items => {
+ type => 'object',
+ properties => {
+ severity => {
+ type => "string",
+ enum => ['warning', 'error'],
+ description => "The severity of the error",
+ },
+ message => {
+ type => "string",
+ description => "The message of the error",
+ },
+ },
+ }
+ },
+ },
+ },
+ links => [ { rel => 'child', href => "{id}" } ],
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $rpcenv = PVE::RPCEnvironment::get();
+ my $authuser = $rpcenv->get_user();
+
+ my $check_node = $param->{'check-node'};
+ my $local_node = PVE::INotify::nodename();
+
+ die "wrong node to check - $check_node != $local_node\n"
+ if defined($check_node) && $check_node ne 'localhost' && $check_node ne $local_node;
+
+ my $cfg = PVE::Mapping::Dir::config();
+
+ my $can_see_mapping_privs = ['Mapping.Modify', 'Mapping.Use', 'Mapping.Audit'];
+
+ my $res = [];
+ for my $id (keys $cfg->{ids}->%*) {
+ next if !$rpcenv->check_any($authuser, "/mapping/dir/$id", $can_see_mapping_privs, 1);
+ next if !$cfg->{ids}->{$id};
+
+ my $entry = dclone($cfg->{ids}->{$id});
+ $entry->{id} = $id;
+ $entry->{digest} = $cfg->{digest};
+
+ if (defined($check_node)) {
+ $entry->{checks} = [];
+ if (my $mappings = PVE::Mapping::Dir::get_node_mapping($cfg, $id, $check_node)) {
+ if (!scalar($mappings->@*)) {
+ push $entry->{checks}->@*, {
+ severity => 'warning',
+ message => "No mapping for node $check_node.",
+ };
+ }
+ for my $mapping ($mappings->@*) {
+ eval { PVE::Mapping::Dir::assert_valid($mapping) };
+ if (my $err = $@) {
+ push $entry->{checks}->@*, {
+ severity => 'error',
+ message => "Invalid configuration: $err",
+ };
+ }
+ }
+ }
+ }
+
+ push @$res, $entry;
+ }
+
+ return $res;
+ },
+});
+
+__PACKAGE__->register_method ({
+ name => 'get',
+ protected => 1,
+ path => '{id}',
+ method => 'GET',
+ description => "Get directory mapping.",
+ permissions => {
+ check =>['or',
+ ['perm', '/mapping/dir/{id}', ['Mapping.Use']],
+ ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+ ['perm', '/mapping/dir/{id}', ['Mapping.Audit']],
+ ],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ id => {
+ type => 'string',
+ format => 'pve-configid',
+ },
+ }
+ },
+ returns => { type => 'object' },
+ code => sub {
+ my ($param) = @_;
+
+ my $cfg = PVE::Mapping::Dir::config();
+ my $id = $param->{id};
+
+ my $entry = $cfg->{ids}->{$id};
+ die "mapping '$param->{id}' not found\n" if !defined($entry);
+
+ my $data = dclone($entry);
+
+ $data->{digest} = $cfg->{digest};
+
+ return $data;
+ }});
+
+__PACKAGE__->register_method ({
+ name => 'create',
+ protected => 1,
+ path => '',
+ method => 'POST',
+ description => "Create a new directory mapping.",
+ permissions => {
+ check => ['perm', '/mapping/dir', ['Mapping.Modify']],
+ },
+ parameters => PVE::Mapping::Dir->createSchema(1),
+ returns => {
+ type => 'null',
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $id = extract_param($param, 'id');
+
+ my $plugin = PVE::Mapping::Dir->lookup('dir');
+ my $opts = $plugin->check_config($id, $param, 1, 1);
+
+ my $map_list = $opts->{map};
+ PVE::Mapping::Dir::assert_no_duplicate_node($map_list);
+
+ PVE::Mapping::Dir::lock_dir_config(sub {
+ my $cfg = PVE::Mapping::Dir::config();
+
+ die "dir ID '$id' already defined\n" if defined($cfg->{ids}->{$id});
+
+ $cfg->{ids}->{$id} = $opts;
+
+ PVE::Mapping::Dir::write_dir_config($cfg);
+
+ }, "create directory mapping failed");
+
+ return;
+ },
+});
+
+__PACKAGE__->register_method ({
+ name => 'update',
+ protected => 1,
+ path => '{id}',
+ method => 'PUT',
+ description => "Update a directory mapping.",
+ permissions => {
+ check => ['perm', '/mapping/dir/{id}', ['Mapping.Modify']],
+ },
+ parameters => PVE::Mapping::Dir->updateSchema(),
+ returns => {
+ type => 'null',
+ },
+ code => sub {
+ my ($param) = @_;
+
+ my $digest = extract_param($param, 'digest');
+ my $delete = extract_param($param, 'delete');
+ my $id = extract_param($param, 'id');
+
+ if ($delete) {
+ $delete = [ PVE::Tools::split_list($delete) ];
+ }
+
+ PVE::Mapping::Dir::lock_dir_config(sub {
+ my $cfg = PVE::Mapping::Dir::config();
+
+ PVE::Tools::assert_if_modified($cfg->{digest}, $digest) if defined($digest);
+
+ die "dir ID '$id' does not exist\n" if !defined($cfg->{ids}->{$id});
+
+ my $plugin = PVE::Mapping::Dir->lookup('dir');
+ my $opts = $plugin->check_config($id, $param, 1, 1);
+
+ my $map_list = $opts->{map};
+ PVE::Mapping::Dir::assert_no_duplicate_node($map_list);
+
+ my $data = $cfg->{ids}->{$id};
+
+ my $options = $plugin->private()->{options}->{dir};
+ PVE::SectionConfig::delete_from_config($data, $options, $opts, $delete);
+
+ $data->{$_} = $opts->{$_} for keys $opts->%*;
+
+ PVE::Mapping::Dir::write_dir_config($cfg);
+
+ }, "update directory mapping failed");
+
+ return;
+ },
+});
+
+__PACKAGE__->register_method ({
+ name => 'delete',
+ protected => 1,
+ path => '{id}',
+ method => 'DELETE',
+ description => "Remove directory mapping.",
+ permissions => {
+ check => [ 'perm', '/mapping/dir', ['Mapping.Modify']],
+ },
+ parameters => {
+ additionalProperties => 0,
+ properties => {
+ id => {
+ type => 'string',
+ format => 'pve-configid',
+ },
+ }
+ },
+ returns => { type => 'null' },
+ code => sub {
+ my ($param) = @_;
+
+ my $id = $param->{id};
+
+ PVE::Mapping::Dir::lock_dir_config(sub {
+ my $cfg = PVE::Mapping::Dir::config();
+
+ if ($cfg->{ids}->{$id}) {
+ delete $cfg->{ids}->{$id};
+ }
+
+ PVE::Mapping::Dir::write_dir_config($cfg);
+
+ }, "delete dir mapping failed");
+
+ return;
+ }
+});
+
+1;
diff --git a/PVE/API2/Cluster/Mapping/Makefile b/PVE/API2/Cluster/Mapping/Makefile
index e7345ab4..5dbb3f5c 100644
--- a/PVE/API2/Cluster/Mapping/Makefile
+++ b/PVE/API2/Cluster/Mapping/Makefile
@@ -3,6 +3,7 @@ include ../../../../defines.mk
# for node independent, cluster-wide applicable, API endpoints
# ensure we do not conflict with files shipped by pve-cluster!!
PERLSOURCE= \
+ Dir.pm \
PCI.pm \
USB.pm
--
2.39.5
* Re: [pve-devel] [PATCH manager v13 08/12] api: add resource map api endpoints for directories
2025-01-22 10:08 ` [pve-devel] [PATCH manager v13 08/12] api: add resource map api endpoints for directories Markus Frank
@ 2025-02-19 14:14 ` Fiona Ebner
0 siblings, 0 replies; 25+ messages in thread
From: Fiona Ebner @ 2025-02-19 14:14 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 22.01.25 at 11:08, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank@proxmox.com>
Some minor nits, but otherwise:
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
> diff --git a/PVE/API2/Cluster/Mapping/Dir.pm b/PVE/API2/Cluster/Mapping/Dir.pm
> new file mode 100644
> index 00000000..5218241f
> --- /dev/null
> +++ b/PVE/API2/Cluster/Mapping/Dir.pm
> @@ -0,0 +1,307 @@
> +package PVE::API2::Cluster::Mapping::Dir;
> +
> +use strict;
> +use warnings;
> +
> +use Storable qw(dclone);
> +
> +use PVE::INotify;
> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
Nit: parse_property_string is not used.
> +use PVE::Mapping::Dir ();
> +use PVE::RPCEnvironment;
Missing
use PVE::SectionConfig;
because of PVE::SectionConfig::delete_from_config() below
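i.e. the module's import block would then read roughly (sketch, keeping the patch's alphabetical ordering):

```perl
use PVE::Mapping::Dir ();
use PVE::RPCEnvironment;
use PVE::SectionConfig;    # needed for PVE::SectionConfig::delete_from_config()
use PVE::Tools qw(extract_param);
```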
> +use PVE::Tools qw(extract_param);
> +
> +use base qw(PVE::RESTHandler);
> +
> +__PACKAGE__->register_method ({
> + name => 'index',
> + path => '',
> + method => 'GET',
> + # only proxy if we give the 'check-node' parameter
> + proxyto_callback => sub {
> + my ($rpcenv, $proxyto, $param) = @_;
> + return $param->{'check-node'} // 'localhost';
> + },
> + description => "List directory mapping",
> + permissions => {
> + description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or".
> + " 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
I know these are copied, but while we're at it:
Style nit: dot should be on this line
https://pve.proxmox.com/wiki/Perl_Style_Guide#Wrapping_Strings
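Applied to the quoted lines, the wrapping preferred by the style guide would read roughly like this (sketch; the exact break point is illustrative, the point is the leading dot on the continuation line):

```perl
description => "Only lists entries where you have 'Mapping.Modify', 'Mapping.Use' or"
    ." 'Mapping.Audit' permissions on '/mapping/dir/<id>'.",
```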
> + user => 'all',
> + },
> + parameters => {
> + additionalProperties => 0,
> + properties => {
> + 'check-node' => get_standard_option('pve-node', {
> + description => "If given, checks the configurations on the given node for ".
> + "correctness, and adds relevant diagnostics for the directory to the response.",
Style nit: dot and space should be on this line
* [pve-devel] [PATCH manager v13 09/12] ui: add edit window for dir mappings
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
` (7 preceding siblings ...)
2025-01-22 10:08 ` [pve-devel] [PATCH manager v13 08/12] api: add resource map api endpoints for directories Markus Frank
@ 2025-01-22 10:08 ` Markus Frank
2025-01-22 10:08 ` [pve-devel] [PATCH manager v13 10/12] ui: add resource mapping view for directories Markus Frank
` (3 subsequent siblings)
12 siblings, 0 replies; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/window/DirMapEdit.js | 208 ++++++++++++++++++++++++++++++
2 files changed, 209 insertions(+)
create mode 100644 www/manager6/window/DirMapEdit.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index c94a5cdf..4b8677e3 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -138,6 +138,7 @@ JSSRC= \
window/TreeSettingsEdit.js \
window/PCIMapEdit.js \
window/USBMapEdit.js \
+ window/DirMapEdit.js \
window/GuestImport.js \
ha/Fencing.js \
ha/GroupEdit.js \
diff --git a/www/manager6/window/DirMapEdit.js b/www/manager6/window/DirMapEdit.js
new file mode 100644
index 00000000..c8ffc0ed
--- /dev/null
+++ b/www/manager6/window/DirMapEdit.js
@@ -0,0 +1,208 @@
+Ext.define('PVE.window.DirMapEditWindow', {
+ extend: 'Proxmox.window.Edit',
+
+ mixins: ['Proxmox.Mixin.CBind'],
+
+ cbindData: function(initialConfig) {
+ let me = this;
+ me.isCreate = !me.name;
+ me.method = me.isCreate ? 'POST' : 'PUT';
+ me.hideMapping = !!me.entryOnly;
+ me.hideComment = me.name && !me.entryOnly;
+ me.hideNodeSelector = me.nodename || me.entryOnly;
+ me.hideNode = !me.nodename || !me.hideNodeSelector;
+ return {
+ name: me.name,
+ nodename: me.nodename,
+ };
+ },
+
+ submitUrl: function(_url, data) {
+ let me = this;
+ let name = me.isCreate ? '' : me.name;
+ return `/cluster/mapping/dir/${name}`;
+ },
+
+ title: gettext('Add Dir mapping'),
+
+ onlineHelp: 'resource_mapping',
+
+ method: 'POST',
+
+ controller: {
+ xclass: 'Ext.app.ViewController',
+
+ onGetValues: function(values) {
+ let me = this;
+ let view = me.getView();
+ values.node ??= view.nodename;
+
+ let name = values.name;
+ let description = values.description;
+ let deletes = values.delete;
+
+ delete values.description;
+ delete values.name;
+ delete values.delete;
+
+ if (PVE.Parser.parseBoolean(values['announce-submounts'])) {
+ values['announce-submounts'] = 1;
+ }
+
+ let map = [];
+ if (me.originalMap) {
+ map = PVE.Parser.filterPropertyStringList(me.originalMap, (e) => e.node !== values.node);
+ }
+ if (values.path) {
+ map.push(PVE.Parser.printPropertyString(values));
+ }
+ values = { map };
+
+ if (description) {
+ values.description = description;
+ }
+ if (deletes && !view.isCreate) {
+ values.delete = deletes;
+ }
+ if (view.isCreate) {
+ values.id = name;
+ }
+
+ return values;
+ },
+
+ onSetValues: function(values) {
+ let me = this;
+ let view = me.getView();
+ me.originalMap = [...values.map];
+ let configuredNodes = [];
+ PVE.Parser.filterPropertyStringList(values.map, (e) => {
+ configuredNodes.push(e.node);
+ e['announce-submounts'] = PVE.Parser.parseBoolean(e['announce-submounts']) ? 1 : 0;
+ if (e.node === view.nodename) {
+ values = e;
+ }
+ return false;
+ });
+
+ me.lookup('nodeselector').disallowedNodes = configuredNodes;
+
+ return values;
+ },
+
+ init: function(view) {
+ let me = this;
+
+ if (!view.nodename) {
+ //throw "no nodename given";
+ }
+ },
+ },
+
+ items: [
+ {
+ xtype: 'inputpanel',
+ onGetValues: function(values) {
+ return this.up('window').getController().onGetValues(values);
+ },
+
+ onSetValues: function(values) {
+ return this.up('window').getController().onSetValues(values);
+ },
+
+ columnT: [
+ {
+ xtype: 'displayfield',
+ reference: 'directory-hint',
+ columnWidth: 1,
+ value: 'Make sure the directory exists.',
+ cbind: {
+ disabled: '{hideMapping}',
+ hidden: '{hideMapping}',
+ },
+ userCls: 'pmx-hint',
+ },
+ ],
+
+ column1: [
+ {
+ xtype: 'pmxDisplayEditField',
+ fieldLabel: gettext('Name'),
+ cbind: {
+ editable: '{!name}',
+ value: '{name}',
+ submitValue: '{isCreate}',
+ },
+ name: 'name',
+ allowBlank: false,
+ },
+ {
+ xtype: 'pveNodeSelector',
+ reference: 'nodeselector',
+ fieldLabel: gettext('Node'),
+ name: 'node',
+ cbind: {
+ disabled: '{hideNodeSelector}',
+ hidden: '{hideNodeSelector}',
+ },
+ allowBlank: false,
+ },
+ ],
+
+ column2: [
+ {
+ xtype: 'fieldcontainer',
+ defaultType: 'radiofield',
+ layout: 'fit',
+ cbind: {
+ disabled: '{hideMapping}',
+ hidden: '{hideMapping}',
+ },
+ items: [
+ {
+ xtype: 'textfield',
+ name: 'path',
+ reference: 'path',
+ value: '',
+ emptyText: gettext('/some/path'),
+ cbind: {
+ nodename: '{nodename}',
+ disabled: '{hideMapping}',
+ },
+ allowBlank: false,
+ fieldLabel: gettext('Path'),
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ name: 'announce-submounts',
+ fieldLabel: gettext('announce-submounts'),
+ value: '1',
+ deleteEmpty: false,
+ },
+ ],
+ },
+ ],
+
+ columnB: [
+ {
+ xtype: 'fieldcontainer',
+ defaultType: 'radiofield',
+ layout: 'fit',
+ cbind: {
+ disabled: '{hideComment}',
+ hidden: '{hideComment}',
+ },
+ items: [
+ {
+ xtype: 'proxmoxtextfield',
+ fieldLabel: gettext('Comment'),
+ submitValue: true,
+ name: 'description',
+ deleteEmpty: true,
+ },
+ ],
+ },
+ ],
+ },
+ ],
+});
--
2.39.5
* [pve-devel] [PATCH manager v13 10/12] ui: add resource mapping view for directories
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
` (8 preceding siblings ...)
2025-01-22 10:08 ` [pve-devel] [PATCH manager v13 09/12] ui: add edit window for dir mappings Markus Frank
@ 2025-01-22 10:08 ` Markus Frank
2025-02-18 14:29 ` Fiona Ebner
2025-01-22 10:09 ` [pve-devel] [PATCH manager v13 11/12] ui: form: add selector for directory mappings Markus Frank
` (2 subsequent siblings)
12 siblings, 1 reply; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:08 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/dc/Config.js | 10 +++++++++
www/manager6/dc/DirMapView.js | 42 +++++++++++++++++++++++++++++++++++
3 files changed, 53 insertions(+)
create mode 100644 www/manager6/dc/DirMapView.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 4b8677e3..57c4d377 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -189,6 +189,7 @@ JSSRC= \
dc/RealmSyncJob.js \
dc/PCIMapView.js \
dc/USBMapView.js \
+ dc/DirMapView.js \
lxc/CmdMenu.js \
lxc/Config.js \
lxc/CreateWizard.js \
diff --git a/www/manager6/dc/Config.js b/www/manager6/dc/Config.js
index 74728c83..2958fb88 100644
--- a/www/manager6/dc/Config.js
+++ b/www/manager6/dc/Config.js
@@ -329,6 +329,16 @@ Ext.define('PVE.dc.Config', {
title: gettext('USB Devices'),
flex: 1,
},
+ {
+ xtype: 'splitter',
+ collapsible: false,
+ performCollapse: false,
+ },
+ {
+ xtype: 'pveDcDirMapView',
+ title: gettext('Directories'),
+ flex: 1,
+ },
],
},
);
diff --git a/www/manager6/dc/DirMapView.js b/www/manager6/dc/DirMapView.js
new file mode 100644
index 00000000..aa0bb10b
--- /dev/null
+++ b/www/manager6/dc/DirMapView.js
@@ -0,0 +1,42 @@
+Ext.define('pve-resource-dir-tree', {
+ extend: 'Ext.data.Model',
+ idProperty: 'internalId',
+ fields: ['type', 'text', 'path', 'id', 'description', 'digest'],
+});
+
+Ext.define('PVE.dc.DirMapView', {
+ extend: 'PVE.tree.ResourceMapTree',
+ alias: 'widget.pveDcDirMapView',
+
+ editWindowClass: 'PVE.window.DirMapEditWindow',
+ baseUrl: '/cluster/mapping/dir',
+ mapIconCls: 'fa fa-folder',
+ entryIdProperty: 'path',
+
+ store: {
+ sorters: 'text',
+ model: 'pve-resource-dir-tree',
+ data: {},
+ },
+
+ columns: [
+ {
+ xtype: 'treecolumn',
+ text: gettext('ID/Node'),
+ dataIndex: 'text',
+ width: 200,
+ },
+ {
+ text: gettext('announce-submounts'),
+ dataIndex: 'announce-submounts',
+ },
+ {
+ header: gettext('Comment'),
+ dataIndex: 'description',
+ renderer: function(value, _meta, record) {
+ return value ?? record.data.comment;
+ },
+ flex: 1,
+ },
+ ],
+});
--
2.39.5
* Re: [pve-devel] [PATCH manager v13 10/12] ui: add resource mapping view for directories
2025-01-22 10:08 ` [pve-devel] [PATCH manager v13 10/12] ui: add resource mapping view for directories Markus Frank
@ 2025-02-18 14:29 ` Fiona Ebner
0 siblings, 0 replies; 25+ messages in thread
From: Fiona Ebner @ 2025-02-18 14:29 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 22.01.25 at 11:08, Markus Frank wrote:
> + },
> + {
> + header: gettext('Comment'),
> + dataIndex: 'description',
> + renderer: function(value, _meta, record) {
> + return value ?? record.data.comment;
Needs to be HTML encoded, see how it is done for USB/PCI
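For reference, a minimal sketch of what the encoding in the renderer could look like. In the actual code this would presumably use Ext.String.htmlEncode() like the USB/PCI views; the standalone helper here only illustrates the escaping it performs:

```javascript
// Escape HTML-special characters before returning the value from the grid
// renderer, so a user-supplied comment cannot inject markup into the cell.
// The ampersand must be replaced first, so the entities introduced by the
// later replacements are not double-escaped.
function htmlEncode(value) {
    return String(value)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;');
}

// renderer: function(value, _meta, record) {
//     return htmlEncode(value ?? record.data.comment);
// },
```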
> + },
> + flex: 1,
> + },
> + ],
> +});
* [pve-devel] [PATCH manager v13 11/12] ui: form: add selector for directory mappings
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
` (9 preceding siblings ...)
2025-01-22 10:08 ` [pve-devel] [PATCH manager v13 10/12] ui: add resource mapping view for directories Markus Frank
@ 2025-01-22 10:09 ` Markus Frank
2025-01-22 10:09 ` [pve-devel] [PATCH manager v13 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
2025-02-19 14:19 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Fiona Ebner
12 siblings, 0 replies; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:09 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/form/DirMapSelector.js | 63 +++++++++++++++++++++++++++++
2 files changed, 64 insertions(+)
create mode 100644 www/manager6/form/DirMapSelector.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index 57c4d377..fabbdd24 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -35,6 +35,7 @@ JSSRC= \
form/ContentTypeSelector.js \
form/ControllerSelector.js \
form/DayOfWeekSelector.js \
+ form/DirMapSelector.js \
form/DiskFormatSelector.js \
form/DiskStorageSelector.js \
form/FileSelector.js \
diff --git a/www/manager6/form/DirMapSelector.js b/www/manager6/form/DirMapSelector.js
new file mode 100644
index 00000000..473a2ffe
--- /dev/null
+++ b/www/manager6/form/DirMapSelector.js
@@ -0,0 +1,63 @@
+Ext.define('PVE.form.DirMapSelector', {
+ extend: 'Proxmox.form.ComboGrid',
+ alias: 'widget.pveDirMapSelector',
+
+ store: {
+ fields: ['name', 'path'],
+ filterOnLoad: true,
+ sorters: [
+ {
+ property: 'id',
+ direction: 'ASC',
+ },
+ ],
+ },
+
+ allowBlank: false,
+ autoSelect: false,
+ displayField: 'id',
+ valueField: 'id',
+
+ listConfig: {
+ columns: [
+ {
+ header: gettext('Directory ID'),
+ dataIndex: 'id',
+ flex: 1,
+ },
+ {
+ header: gettext('Comment'),
+ dataIndex: 'description',
+ flex: 1,
+ },
+ ],
+ },
+
+ setNodename: function(nodename) {
+ var me = this;
+
+ if (!nodename || me.nodename === nodename) {
+ return;
+ }
+
+ me.nodename = nodename;
+
+ me.store.setProxy({
+ type: 'proxmox',
+ url: `/api2/json/cluster/mapping/dir?check-node=${nodename}`,
+ });
+
+ me.store.load();
+ },
+
+ initComponent: function() {
+ var me = this;
+
+ var nodename = me.nodename;
+ me.nodename = undefined;
+
+ me.callParent();
+
+ me.setNodename(nodename);
+ },
+});
--
2.39.5
* [pve-devel] [PATCH manager v13 12/12] ui: add options to add virtio-fs to qemu config
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
` (10 preceding siblings ...)
2025-01-22 10:09 ` [pve-devel] [PATCH manager v13 11/12] ui: form: add selector for directory mappings Markus Frank
@ 2025-01-22 10:09 ` Markus Frank
2025-02-19 14:17 ` Fiona Ebner
2025-02-19 14:19 ` [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Fiona Ebner
12 siblings, 1 reply; 25+ messages in thread
From: Markus Frank @ 2025-01-22 10:09 UTC (permalink / raw)
To: pve-devel
Signed-off-by: Markus Frank <m.frank@proxmox.com>
---
www/manager6/Makefile | 1 +
www/manager6/Utils.js | 1 +
www/manager6/qemu/HardwareView.js | 19 +++++
www/manager6/qemu/VirtiofsEdit.js | 123 ++++++++++++++++++++++++++++++
4 files changed, 144 insertions(+)
create mode 100644 www/manager6/qemu/VirtiofsEdit.js
diff --git a/www/manager6/Makefile b/www/manager6/Makefile
index fabbdd24..fdf0e816 100644
--- a/www/manager6/Makefile
+++ b/www/manager6/Makefile
@@ -271,6 +271,7 @@ JSSRC= \
qemu/Smbios1Edit.js \
qemu/SystemEdit.js \
qemu/USBEdit.js \
+ qemu/VirtiofsEdit.js \
sdn/Browser.js \
sdn/ControllerView.js \
sdn/Status.js \
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 90011a8f..0f242ae1 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1645,6 +1645,7 @@ Ext.define('PVE.Utils', {
serial: 4,
rng: 1,
tpmstate: 1,
+ virtiofs: 10,
},
// we can have usb6 and up only for specific machine/ostypes
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
index c6d193fc..34aeb51e 100644
--- a/www/manager6/qemu/HardwareView.js
+++ b/www/manager6/qemu/HardwareView.js
@@ -319,6 +319,16 @@ Ext.define('PVE.qemu.HardwareView', {
never_delete: !caps.nodes['Sys.Console'],
header: gettext("VirtIO RNG"),
};
+ for (let i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+ let confid = "virtiofs" + i.toString();
+ rows[confid] = {
+ group: 50,
+ order: i,
+ iconCls: 'folder',
+ editor: 'PVE.qemu.VirtiofsEdit',
+ header: gettext('Virtiofs') + ' (' + confid +')',
+ };
+ }
var sorterFn = function(rec1, rec2) {
var v1 = rec1.data.key;
@@ -595,6 +605,7 @@ Ext.define('PVE.qemu.HardwareView', {
const noVMConfigDiskPerm = !caps.vms['VM.Config.Disk'];
const noVMConfigCDROMPerm = !caps.vms['VM.Config.CDROM'];
const noVMConfigCloudinitPerm = !caps.vms['VM.Config.Cloudinit'];
+ const noVMConfigOptionsPerm = !caps.vms['VM.Config.Options'];
me.down('#addUsb').setDisabled(noHWPerm || isAtUsbLimit());
me.down('#addPci').setDisabled(noHWPerm || isAtLimit('hostpci'));
@@ -604,6 +615,7 @@ Ext.define('PVE.qemu.HardwareView', {
me.down('#addRng').setDisabled(noSysConsolePerm || isAtLimit('rng'));
efidisk_menuitem.setDisabled(noVMConfigDiskPerm || isAtLimit('efidisk'));
me.down('#addTpmState').setDisabled(noVMConfigDiskPerm || isAtLimit('tpmstate'));
+ me.down('#addVirtiofs').setDisabled(noVMConfigOptionsPerm || isAtLimit('virtiofs'));
me.down('#addCloudinitDrive').setDisabled(noVMConfigCDROMPerm || noVMConfigCloudinitPerm || hasCloudInit);
if (!rec) {
@@ -748,6 +760,13 @@ Ext.define('PVE.qemu.HardwareView', {
disabled: !caps.nodes['Sys.Console'],
handler: editorFactory('RNGEdit'),
},
+ {
+ text: gettext("Virtiofs"),
+ itemId: 'addVirtiofs',
+ iconCls: 'fa fa-folder',
+ disabled: !caps.nodes['Sys.Console'],
+ handler: editorFactory('VirtiofsEdit'),
+ },
],
}),
},
diff --git a/www/manager6/qemu/VirtiofsEdit.js b/www/manager6/qemu/VirtiofsEdit.js
new file mode 100644
index 00000000..76891d67
--- /dev/null
+++ b/www/manager6/qemu/VirtiofsEdit.js
@@ -0,0 +1,123 @@
+Ext.define('PVE.qemu.VirtiofsInputPanel', {
+ extend: 'Proxmox.panel.InputPanel',
+ xtype: 'pveVirtiofsInputPanel',
+ onlineHelp: 'qm_virtiofs',
+
+ insideWizard: false,
+
+ onGetValues: function(values) {
+ var me = this;
+ var confid = me.confid;
+ var params = {};
+ delete values.delete;
+ params[confid] = PVE.Parser.printPropertyString(values, 'dirid');
+ return params;
+ },
+
+ setSharedfiles: function(confid, data) {
+ var me = this;
+ me.confid = confid;
+ me.virtiofs = data;
+ me.setValues(me.virtiofs);
+ },
+ initComponent: function() {
+ let me = this;
+
+ me.nodename = me.pveSelNode.data.node;
+ if (!me.nodename) {
+ throw "no node name specified";
+ }
+ me.items = [
+ {
+ xtype: 'pveDirMapSelector',
+ emptyText: 'dirid',
+ nodename: me.nodename,
+ fieldLabel: gettext('Directory ID'),
+ name: 'dirid',
+ allowBlank: false,
+ },
+ {
+ xtype: 'proxmoxKVComboBox',
+ fieldLabel: gettext('Cache'),
+ name: 'cache',
+ value: '__default__',
+ deleteDefaultValue: false,
+ comboItems: [
+ ['__default__', Proxmox.Utils.defaultText + ' (auto)'],
+ ['auto', 'auto'],
+ ['always', 'always'],
+ ['never', 'never'],
+ ],
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('expose-xattr'),
+ name: 'expose-xattr',
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('expose-acl (implies expose-xattr)'),
+ name: 'expose-acl',
+ },
+ {
+ xtype: 'proxmoxcheckbox',
+ fieldLabel: gettext('Direct-io'),
+ name: 'direct-io',
+ },
+ ];
+
+ me.virtiofs = {};
+ me.confid = 'virtiofs0';
+ me.callParent();
+ },
+});
+
+Ext.define('PVE.qemu.VirtiofsEdit', {
+ extend: 'Proxmox.window.Edit',
+
+ subject: gettext('Filesystem Passthrough'),
+
+ initComponent: function() {
+ var me = this;
+
+ me.isCreate = !me.confid;
+
+ var ipanel = Ext.create('PVE.qemu.VirtiofsInputPanel', {
+ confid: me.confid,
+ pveSelNode: me.pveSelNode,
+ isCreate: me.isCreate,
+ });
+
+ Ext.applyIf(me, {
+ items: ipanel,
+ });
+
+ me.callParent();
+
+ me.load({
+ success: function(response) {
+ me.conf = response.result.data;
+ var i, confid;
+ if (!me.isCreate) {
+ var value = me.conf[me.confid];
+ var virtiofs = PVE.Parser.parsePropertyString(value, "dirid");
+ if (!virtiofs) {
+ Ext.Msg.alert(gettext('Error'), 'Unable to parse virtiofs options');
+ me.close();
+ return;
+ }
+ ipanel.setSharedfiles(me.confid, virtiofs);
+ } else {
+ for (i = 0; i < PVE.Utils.hardware_counts.virtiofs; i++) {
+ confid = 'virtiofs' + i.toString();
+ if (!Ext.isDefined(me.conf[confid])) {
+ me.confid = confid;
+ break;
+ }
+ }
+ ipanel.setSharedfiles(me.confid, {});
+ }
+ },
+ });
+ },
+});
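
For readers unfamiliar with the config format: onGetValues/setSharedfiles above round-trip the panel values through PVE property strings ("share0,cache=auto,...") with 'dirid' as the default key. The following is a simplified standalone model of that round trip — it is a sketch, not the actual PVE.Parser implementation, and ignores quoting and validation that the real parser performs:

```javascript
// Simplified model of PVE.Parser.printPropertyString / parsePropertyString
// with 'dirid' as the default key (hypothetical sketch, not the real parser).
function printPropertyString(values, defaultKey) {
    // The default key's value is emitted bare; all others as key=value.
    return Object.entries(values)
        .map(([k, v]) => (k === defaultKey ? String(v) : `${k}=${v}`))
        .join(',');
}

function parsePropertyString(str, defaultKey) {
    const values = {};
    for (const part of str.split(',')) {
        const idx = part.indexOf('=');
        if (idx === -1) {
            // A bare value belongs to the default key.
            values[defaultKey] = part;
        } else {
            values[part.slice(0, idx)] = part.slice(idx + 1);
        }
    }
    return values;
}
```

With this model, `printPropertyString({ dirid: 'share0', cache: 'auto' }, 'dirid')` yields "share0,cache=auto", matching the shape of a virtiofsN config line.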
--
2.39.5
_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [pve-devel] [PATCH manager v13 12/12] ui: add options to add virtio-fs to qemu config
2025-01-22 10:09 ` [pve-devel] [PATCH manager v13 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
@ 2025-02-19 14:17 ` Fiona Ebner
0 siblings, 0 replies; 25+ messages in thread
From: Fiona Ebner @ 2025-02-19 14:17 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
On 22.01.25 at 11:09, Markus Frank wrote:
> + {
> + xtype: 'proxmoxcheckbox',
> + fieldLabel: gettext('expose-xattr'),
> + name: 'expose-xattr',
> + },
> + {
> + xtype: 'proxmoxcheckbox',
> + fieldLabel: gettext('expose-acl (implies expose-xattr)'),
> + name: 'expose-acl',
Didn't look over the UI patches, but just something I noticed while
using it: it would be nice if the expose-xattr checkbox were
automatically grayed out when expose-acl is selected.
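
The dependency behind that suggestion (expose-acl implies expose-xattr, so the xattr box becomes redundant while ACLs are on) could be modeled as a small pure function — a hypothetical sketch, with field names taken from the patch but no actual ExtJS wiring:

```javascript
// Sketch of the suggested field dependency: when expose-acl is checked,
// expose-xattr is implied, so the UI would show it as checked but disabled.
// Pure-JS model; virtiofsFieldState is a hypothetical helper, not PVE code.
function virtiofsFieldState(values) {
    const aclOn = !!values['expose-acl'];
    return {
        'expose-xattr': {
            // Gray out the checkbox while ACLs force it on.
            disabled: aclOn,
            // Reflect the implied value so the user sees xattrs are active.
            checked: aclOn || !!values['expose-xattr'],
        },
    };
}
```

In the input panel itself this would presumably translate to a `change` listener on the expose-acl checkbox that calls `setDisabled()`/`setValue()` on its sibling expose-xattr field.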
> + },
> + {
> + xtype: 'proxmoxcheckbox',
> + fieldLabel: gettext('Direct-io'),
Nit: "Direct IO" reads nicer IMHO
* Re: [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs
2025-01-22 10:08 [pve-devel] [PATCH cluster/guest-common/docs/qemu-server/manager v13 0/12] virtiofs Markus Frank
` (11 preceding siblings ...)
2025-01-22 10:09 ` [pve-devel] [PATCH manager v13 12/12] ui: add options to add virtio-fs to qemu config Markus Frank
@ 2025-02-19 14:19 ` Fiona Ebner
12 siblings, 0 replies; 25+ messages in thread
From: Fiona Ebner @ 2025-02-19 14:19 UTC (permalink / raw)
To: Proxmox VE development discussion, Markus Frank
Good job! From my perspective, this is very close to ready :)
On 22.01.25 at 11:08, Markus Frank wrote:
> Virtio-fs is a shared file system that enables sharing a directory
> between host and guest VMs. It takes advantage of the locality of
> virtual machines and the hypervisor to get a higher throughput than
> the 9p remote file system protocol.
>
> build-order:
> 1. cluster
> 2. guest-common
> 3. docs
> 4. qemu-server
> 5. manager
>
> I did not get virtiofsd to run with run_command without creating
> zombie processes after shutdown. So I replaced run_command with exec
> for now. Maybe someone can find out why this happens.